#### Schedule

#### Keynote Lectures

- Keynote 1 (10:10 - 11:10, Thursday, Oct 25th): Shun-ichi Amari (RIKEN Center for Brain Science)
- Keynote 2 (9:40 - 10:40, Friday, Oct 26th): Maneesh Sahani (Gatsby Computational Neuroscience Unit, UCL)

[K-1] Statistical Neurodynamics of Deep Networks: Signal Transformations and Fisher Information

Abstract: Statistical neurodynamics studies the macroscopic behavior of randomly connected neural networks. We consider feedforward deep layered networks in which input signals are processed layer by layer. The manifold of input signals is embedded in a higher-dimensional manifold as a curved submanifold, or reduced to a lower-dimensional one. We prove that the manifold is transformed conformally, keeping its original shape, and we study the enlargement ratio and curvature of the embedded manifold. The distance between two signals changes from layer to layer and eventually converges to a constant, provided the numbers of neurons and layers are infinitely large. In reality they are finite, so frustrations remain; we study the effect of this finiteness.
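The layer-by-layer convergence of the distance between two signals can be illustrated numerically. The sketch below is a minimal simulation, assuming Gaussian random weights with an illustrative scale `sigma_w = 1.2` and tanh activations (parameters chosen for demonstration, not taken from the talk); it propagates two random inputs through the same random network and tracks their per-neuron distance:

```python
import numpy as np

rng = np.random.default_rng(0)
n, depth = 2000, 30          # neurons per layer, number of layers (illustrative)
sigma_w = 1.2                # weight scale; > 1 keeps distinct signals separated for tanh

x = rng.standard_normal(n)   # two distinct input signals
y = rng.standard_normal(n)

dists = []
for _ in range(depth):
    W = rng.standard_normal((n, n)) * sigma_w / np.sqrt(n)  # fresh random layer
    x, y = np.tanh(W @ x), np.tanh(W @ y)                   # shared weights for both signals
    dists.append(np.linalg.norm(x - y) / np.sqrt(n))        # distance per neuron
```

With wide layers the per-neuron distance settles to a nearly constant value after a few layers; at finite `n` it fluctuates around that value, which is the finite-size effect the abstract refers to.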

Considering a deep network as a stochastic machine that transforms input signals into output signals, we can discuss the Fisher information matrix of the network parameters. We prove that the Fisher information matrix is nearly block-diagonal, with only small off-diagonal elements, and further that the block corresponding to each unit is nearly diagonal except for bias-weight interaction terms. This makes it easy to implement natural gradient learning without matrix inversion, and justifies the quasi-diagonal natural gradient proposed by Yann Ollivier.
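The quasi-diagonal idea can be sketched as follows. Assuming per-unit Fisher estimates are already available (how they are estimated is beyond this abstract), each weight is paired with the unit's bias and the resulting 2x2 block is solved in closed form, so no matrix inversion is needed. The function below is a minimal illustration in that spirit, not the exact algorithm from the talk:

```python
import numpy as np

def quasi_diagonal_step(g_b, g_w, F_bb, F_bw, F_ww):
    """Quasi-diagonal natural-gradient direction for one unit.

    g_b, F_bb : bias gradient and its Fisher entry (scalars)
    g_w, F_ww : weight gradients and diagonal Fisher entries (arrays)
    F_bw      : bias-weight Fisher cross terms (array)

    Each {bias, w_i} 2x2 block is solved in closed form, so the cost is
    linear in the number of weights and no full matrix is inverted.
    """
    dw = (g_w - (F_bw / F_bb) * g_b) / (F_ww - F_bw**2 / F_bb)
    db = g_b / F_bb - np.dot(F_bw / F_bb, dw)
    return db, dw
```

When the cross terms `F_bw` are zero this reduces to ordinary diagonal preconditioning, and for a single weight it coincides with the exact 2x2 natural-gradient solve.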

[K-2] Computing with Distributed Distributional Codes: Convergent Inference in Brains and Machines?

Abstract: A long-standing aspiration has been to discover a single theory that both underpins an understanding of computation in the nervous system and forms the basis of effective machine learning and adaptive computation. However, despite the very substantial strides achieved in supervised learning over the past decade---and some hints of connections between the solutions found by such systems and neural representation---there is still a sense that these machines work quite differently from brains. Part of the dissonance lies in the unrealism of backpropagation and the undifferentiated structure of most "neural-network" architectures. But beyond that, the very problems being solved seem different. In particular, the brain's capacity for flexible inference---to parse and understand the components of an environment even if they are unfamiliar, or to reliably plan an action for the first time---appears to depend on learning a form of causal representation, most probably with little or no supervision. Such causal representation is also far more than the problem of density estimation that has dominated recent work in unsupervised machine learning, which has similarly depended on backpropagation and unrealistic architectures.

After reviewing this situation, I will outline recent work synthesizing a number of older ideas in both theoretical neuroscience and machine learning, which we hope will begin to lay the groundwork for a common theory of neural and machine inference.

#### Tutorials

- Tutorial 1 (14:00 - 16:30, Wednesday, Oct 24th): Yutaka Matsuo Lab (The University of Tokyo)
- Tutorial 2 (17:00 - 19:00, Wednesday, Oct 24th): Yukiyasu Kamitani (Kyoto University)
- Tutorial 3 (17:00 - 19:00, Wednesday, Oct 24th): Tetsuya Ogata (Waseda University, National Institute of Advanced Industrial Science and Technology)

[T-1] Deep Learning and Intelligence: Neuro-perspective and Recent Trends

Abstract: Since their rise over ten years ago [1], deep neural networks (DNNs) have shown significant performance improvements in various fields, including image processing, audio processing, and natural language processing. This tutorial introduces deep learning from both a neuroscience perspective and a computational perspective.

In the first part of the tutorial, we review the historical connections between neuroscience and computer science. In particular, we focus on the mechanism of the attention system, which allows artificial agents to focus on a task while maintaining awareness of their surroundings. We demonstrate the advantages of this approach in two research areas: 1) robotic vision systems in human environments, and 2) hard-attention systems with CNNs for large-scale medical image analysis.

From the computational perspective, the power of DNNs is often explained by the term "representation learning": DNNs uncover the complicated variations in data to provide powerful representations that are useful for classification tasks. In the second part of this tutorial, we summarize the recent building blocks of DNNs, including convolutional operations, LSTMs, VAEs, GANs, and meta-learning, and explain how DNNs learn meaningful representations by leveraging deep (hierarchical) priors and end-to-end training.

[1] Geoffrey E. Hinton, Simon Osindero, and Yee-Whye Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527–1554, 2006.

[T-2] Brain-DNN Homology and its Applications

Abstract: Deep neural networks (DNNs) are computational models inspired by the functions of neurons and synapses in the brain. They have been studied as general-purpose machine learning models apart from neuroscience research, but recent studies have recast the understanding of the relationship between the brain and DNNs. In image recognition with a DNN, for instance, optimizing a hierarchical network composed of convolutional and nonlinear computations on large-scale image data causes features of different levels of complexity to be extracted automatically at each layer. Recent work has revealed that these features can be quantitatively correlated with characteristic representations found in monkey and human visual cortex. Our group focuses on the homology between the brain and DNNs from the viewpoint of decoding, and aims to elucidate the neural representations of visual perception and internal imagery. In this presentation, I give a general overview of how DNNs are "inspired" by the brain and neurons in computational terms, and then explain methods for analyzing the information representations learned at each DNN unit through training with large-scale natural image data. Based on these, I show how to convert human brain activity into a DNN signal pattern and then into an image, yielding a reconstruction of a stimulus or of internal imagery.
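The conversion from brain activity to DNN feature patterns is typically learned with regularized linear regression on paired data. The sketch below uses synthetic stand-ins for voxel responses and DNN features (all sizes and the ridge penalty are illustrative assumptions, not values from the talk) to show the basic decoding step:

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_vox, n_feat = 200, 50, 8         # hypothetical dataset sizes

# Synthetic stand-ins: voxel responses X linearly related to DNN features Y.
W_true = rng.standard_normal((n_vox, n_feat))
X = rng.standard_normal((n_train, n_vox))   # training brain activity
Y = X @ W_true                              # matching DNN feature patterns

# Ridge regression: learn a map from brain activity to DNN feature space.
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_vox), X.T @ Y)

# Decode DNN features for new brain activity.
X_test = rng.standard_normal((10, n_vox))
Y_pred = X_test @ W
```

In the actual pipeline the predicted feature pattern would then be fed to an image generator to reconstruct the stimulus; here the point is only the brain-to-feature mapping.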

[T-3] Deep Neural Models for Robot Systems based on Predictive Learning

Abstract: In recent years, image, speech, and natural language processing systems, particularly those using deep learning, have shown great performance in both recognition and generation. Most of these systems use large-scale data already stored in the cloud. Although there are some real-world applications of deep learning, such as robots and autonomous cars, this research has mainly focused on image processing tasks such as object recognition and position recognition. In this tutorial, we first introduce some deep learning algorithms and libraries. We then introduce our research on applications based on the concept of "predictive learning" for multimodal integration and robot behavior learning. Finally, we introduce the concept of "cognitive developmental robotics," which studies the development of real-world cognitive mechanisms to enhance the combination of deep learning and robotics.

#### Symposia

- Symposium 1 (13:30 - 15:00, Thursday, Oct 25th): Symbol Emergence in Robotics
- [S1-1] Tadahiro Taniguchi (Organizer): Toward cognitive architecture for symbol emergence in robotics: convergence of probabilistic generative models and deep learning
- [S1-2] Tetsuya Ogata: Neural models for linguistic and behavioral integration learning in robots
- [S1-3] Takayuki Nagai: Multimodal Categorization via Deep Neural Networks
- Symposium 2 (14:00 - 15:30, Friday, Oct 26th): Whole Brain Architecture
- [S2-1] Hiroshi Yamakawa (Organizer): Strategy to build beneficial general-purpose intelligence inspired by brain
- [S2-2] Kotone Itaya: BriCA Kernel: Cognitive computing platform for large-scale distributed memory environments
- [S2-3] Masahiko Osawa: Development of biologically inspired artificial general intelligence navigated by circuits associated with tasks
- [S2-4] Kosuke Miyoshi: Do top-down predictions of time series lead to sparse disentanglement?
- [S2-5] Seiya Satoh: Visualization of morphism tuples of equivalence structures
- Symposium 3 (10:00 - 11:30, Saturday, Oct 27th): Exploring the Computational Principles of the Brain from the Perspective of Neural Network Learning
- [S3-1] Taro Toyoizumi (Organizer): An optimization approach to understand biological searches and learning
- [S3-2] Jun-nosuke Teramae: A supervised learning rule as a stabilization mechanism of arbitrary fixed points of hidden neurons
- [S3-3] Tomoyasu Horikawa: Decoding of seen and imagined contents from the human brain via deep neural network representation
- Oral Session 1 (11:10 - 12:30, Thursday, Oct 25th)
- O1-1
- Characteristic Whisker Movements Reflect the Internal State of Mice Related to Reward Anticipation
- Kota Mizutani (Osaka University, Nagoya University), Jumpei Ozaki (Nara Institute of Science and Technology), Junichiro Yoshimoto (Nara Institute of Science and Technology), Takayuki Yamashita (Nagoya University)
- O1-2
- Humans Achieve Bayesian Optimality in Controlling Risk-Return Tradeoff of Coincident Timing Task
- Qirui Yao (University of Electro-Communications)
- O1-3
- Estimating Synaptic Connections from Parallel Spike Trains
- Ryota Kobayashi (National Institute of Informatics), Shuhei Kurita (Kyoto University), Masanori Kitano (Ritsumeikan University), Kenji Mizuseki (Osaka City University), Barry J. Richmond (NIMH/NIH/DHHS), Shigeru Shinomoto (Kyoto University)
- O1-4
- Explaining Behavioral Data of Visual Material Discrimination with a Neural Network for Natural Image Recognition
- Takuya Koumura (NTT Communication Science Laboratories), Masataka Sawayama (NTT Communication Science Laboratories), Shin’ya Nishida (NTT Communication Science Laboratories)
- Oral Session 2 (11:00 - 12:00, Friday, Oct 26th)
- O2-1
- Visuomotor Associative Learning under the Predictive Coding Framework: a Neuro-robotics Experiment
- Jungsik Hwang (Okinawa Institute of Science and Technology), Jun Tani (Okinawa Institute of Science and Technology Graduate University)
- O2-2
- Measuring the Convolution Neural Network similarities trained with different dataset using SVCCA
- Toya Teramoto (University of Electro-Communications), Hayaru Shouno (Graduate School of Informatics and Engineering, The University of Electro-Communications)
- O2-3
- Hierarchical Competitive Learning in Convolutional Neural Networks
- Takashi Shinozaki (NICT CiNet)
- Oral Session 3 (9:00 - 10:00, Saturday, Oct 27th)
- O3-1
- Observation and Analyses of the Dynamics of the Whole Head Nervous System in C. elegans
- Yuichi Iino (The University of Tokyo)
- O3-2
- Multisensory Integration in the HBP Neurorobotics Platform
- Florian Walter (Technical University of Munich), Fabrice O. Morin (Technical University of Munich), Alois Knoll (Robotics and Embedded Systems)
- O3-3
- Phase Synchrony in Symbolic Communication: Effect of Order of Messaging Bearing Intention
- Masayuki Fujiwara (JAIST), Takashi Hashimoto (JAIST), Guanhong Li (JAIST), Jiro Okuda (Kyoto Sangyo University), Takeshi Konno (Kanazawa Institute of Technology), Kazuyuki Samejima (Tamagawa University), Junya Morita (Shizuoka University)