#### Schedule (tentative)

- Technical Programs (PDF)

#### Keynote Lecture

- Keynote Lecture 1: Shun-ichi Amari (RIKEN Center for Brain Science)
- Keynote Lecture 2: Maneesh Sahani (Gatsby Computational Neuroscience Unit, UCL)

Statistical Neurodynamics of Deep Networks: Signal Transformations and Fisher Information

Abstract: Statistical neurodynamics studies the macroscopic behavior of randomly connected neural networks. We consider feedforward deep layered networks in which input signals are processed layer by layer. The manifold of input signals is embedded in a higher-dimensional manifold as a curved submanifold, or reduced to a lower-dimensional one. We prove that the manifold is transformed conformally, keeping its original shape, and we study the enlargement ratio and curvature of the embedded manifold. The distance between two signals changes from layer to layer and eventually converges to a constant, provided the numbers of neurons and layers are infinitely large. In reality they are finite, so frustrations remain; we study the effects of this finiteness.
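The layer-by-layer convergence of the distance between two signals can be observed numerically. The sketch below is not from the talk: it propagates two random inputs through a randomly connected tanh network of assumed width 1000 and gain 1.5 (a setting chosen so the limiting distance stays away from zero) and records the per-unit distance at each layer.

```python
import numpy as np

# Illustrative simulation (assumed parameters, not the talk's setup):
# two signals passed through the same random feedforward network.
rng = np.random.default_rng(0)
n, depth, gain = 1000, 40, 1.5  # finite width/depth, so fluctuations remain

x = rng.standard_normal(n)
y = rng.standard_normal(n)

dists = []
for _ in range(depth):
    # random weights with variance gain^2 / n, fresh at each layer
    W = gain * rng.standard_normal((n, n)) / np.sqrt(n)
    x, y = np.tanh(W @ x), np.tanh(W @ y)
    dists.append(np.linalg.norm(x - y) / np.sqrt(n))  # per-unit distance
```

In deep layers the recorded distances settle near a constant value, with residual fluctuations reflecting the finite width.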

Considering a deep network as a stochastic machine transforming input signals into output signals, we can discuss the Fisher information matrix of the network parameters. We prove that the Fisher information matrix is nearly block-diagonal, with only small off-diagonal elements. We further show that the block corresponding to each unit is nearly diagonal except for bias-weight interaction terms. This makes it easy to implement natural gradient learning without matrix inversion, and justifies the quasi-diagonal natural gradient proposed by Yann Ollivier.
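To make the "no matrix inversion" point concrete, here is a rough sketch of a quasi-diagonal update in the spirit of Ollivier's proposal: for each unit, only the Fisher diagonal and the bias-weight terms are retained, so every parameter can be updated with scalar arithmetic. The function below is a hypothetical helper; it assumes the per-unit Fisher estimates (`F_ww` diagonal, `F_bw` bias-weight row, `F_bb` bias entry) have already been computed elsewhere.

```python
import numpy as np

def quasi_diagonal_step(g_w, g_b, F_ww, F_bw, F_bb, lr=0.1, eps=1e-8):
    """One quasi-diagonal natural-gradient step for a single unit (sketch).

    g_w, g_b : loss gradients for the unit's incoming weights and bias.
    F_ww     : retained diagonal Fisher entries for the weights.
    F_bw     : retained bias-weight interaction terms.
    F_bb     : Fisher entry for the bias (scalar).
    Each weight update solves a closed-form 2x2 system, so no matrix
    inversion is needed.
    """
    denom = F_ww * F_bb - F_bw ** 2 + eps
    dw = (g_w * F_bb - g_b * F_bw) / denom
    db = g_b / (F_bb + eps) - np.dot(F_bw, dw) / (F_bb + eps)
    return -lr * dw, -lr * db

# Sanity check: when the bias-weight terms vanish, the update reduces to an
# ordinary per-parameter (diagonal) natural-gradient step.
g_w, F_ww = np.array([0.2, -0.4]), np.array([0.5, 2.0])
dw, db = quasi_diagonal_step(g_w, 1.0, F_ww, np.zeros(2), 4.0, lr=1.0, eps=0.0)
```

With `F_bw = 0`, `dw` equals `-g_w / F_ww` and `db` equals `-g_b / F_bb`, i.e. the familiar diagonal preconditioning.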

Computing with Distributed Distributional Codes: Convergent Inference in Brains and Machines?

Abstract: A long-standing aspiration has been to discover a single theory that both underpins an understanding of computation in the nervous system and forms the basis of effective machine learning and adaptive computation. However, despite the very substantial strides achieved in supervised learning over the past decade---and some hints of connections between the solutions found by such systems and neural representation---there is still a sense that these machines work quite differently from brains. Part of the dissonance lies in the unrealism of backpropagation and the undifferentiated structure of most "neural-network" architectures. But beyond that, the very problems being solved seem different. In particular, the brain's capacity for flexible inference---to parse and understand the components of an environment even if they are unfamiliar, or to reliably plan an action for the first time---appears to depend on learning a form of causal representation, most probably with little or no supervision. Such causal representation is also far more than the problem of density estimation that has dominated recent work in unsupervised machine learning, which has similarly depended on backpropagation and unrealistic architectures.

After reviewing this situation, I will outline recent work that synthesizes a number of older ideas from both theoretical neuroscience and machine learning, which we hope will begin to lay the groundwork for a common theory of neural and machine inference.

#### Tutorials

- Tutorial 1: Yutaka Matsuo Lab (Univ. Tokyo)
- Tutorial 2: Yukiyasu Kamitani (Kyoto Univ.)
- Tutorial 3: Tetsuya Ogata (Waseda Univ. and AIST)

Deep Learning and Intelligence: Neuro-perspective and Recent Trends

Abstract: Since their rise just over a decade ago [1], deep neural networks (DNNs) have shown significant performance improvements in various fields, including image processing, audio processing, and natural language processing. This tutorial introduces deep learning from both the neuroscientific and the computational perspective.

In the first part of the tutorial, we review the historical connections between neuroscience and computer science. In particular, we focus on the mechanism of the attention system, which allows artificial agents to focus on a task while maintaining awareness of their surroundings. We demonstrate the advantages of this approach in two fields of research: 1) robotic vision systems in human environments, and 2) hard-attention systems combined with CNNs for analyzing large medical images.

From the computational perspective, the power of DNNs is often explained by the term "representation learning": DNNs uncover the complicated variations in data to provide powerful representations that are useful for classification tasks. In the second part of this tutorial, we summarize the recent building blocks of DNNs, including convolutional operations, LSTMs, VAEs, GANs, and meta-learning, and explain how DNNs learn meaningful representations by leveraging deep (hierarchical) priors and end-to-end training.

[1] Geoffrey E. Hinton, Simon Osindero, and Yee-Whye Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527–1554, 2006.

Brain-DNN Homology and its Applications

Abstract: Deep neural networks (DNNs) are computational models inspired by the neurons and synapses of the brain. They have been studied as general-purpose machine learning models apart from neuroscience research. However, recent studies have recast our understanding of the relationship between the brain and DNNs. In image recognition with a DNN, for instance, optimizing a hierarchical network composed of convolutional and nonlinear computations on large-scale image data automatically extracts features of different levels of complexity at each DNN layer. Recent work has revealed that these features can be quantitatively correlated with characteristic representations found in monkey and human visual cortex. Our group focuses on the homology between the brain and DNNs from the viewpoint of decoding, and aims to elucidate the neural representations of vision and internal imagery. In this presentation, I give a general overview of how DNNs are “inspired” by the brain and neurons in computational terms, and then explain methods for analyzing the information representations learned at each DNN unit via training with large-scale natural image data. Based on these, I show how to convert human brain activity into a DNN signal pattern and then into an image, reconstructing either a stimulus or internal imagery.

Deep Neural Models for Robot Systems based on Predictive Learning

Abstract: In recent years, systems for image, audio/speech, and natural language processing, particularly those using deep learning, have shown great performance in both recognition and generation. Most of these systems use large-scale data that already exists in computerized form in the cloud. Although there are real-world applications of deep learning, such as robots and autonomous cars, this research has mainly focused on image processing tasks such as object recognition and position recognition. In this tutorial, we first introduce some deep learning algorithms and libraries. We then introduce our research on applications based on the concept of "predictive learning", covering multimodal integration and robot behavior learning. Finally, we introduce the concept of "cognitive developmental robotics", which develops real-world cognition mechanisms in order to enhance deep learning studies with robotics.

#### Symposia

- Symposium 1: Symbol Emergence in Robotics
- Tadahiro Taniguchi (Organizer): Toward cognitive architecture for symbol emergence in robotics: convergence of probabilistic generative models and deep learning
- Tetsuya Ogata: Neural models for linguistic and behavioral integration learning in robots
- Takayuki Nagai: Multimodal Categorization via Deep Neural Networks
- Symposium 2: Whole-Brain Architecture
- Hiroshi Yamakawa (Organizer): Strategy to build beneficial general-purpose intelligence inspired by brain
- Kotone Itaya: BriCA as a large-scale distributed memory computing kernel for machine learning systems
- Masahiko Osawa: Development of biologically inspired artificial general intelligence navigated by circuits associated with tasks
- Kosuke Miyoshi: Do top-down predictions of time series lead to sparse disentanglement?
- Seiya Sato: Visualization of morphism tuples of equivalence structures
- Symposium 3: Studying the Brain from the Viewpoint of Neural Network Learning
- Taro Toyoizumi (Organizer): An optimization approach to understand biological searches and learning
- Jun-nosuke Teramae: A supervised learning rule as a stabilization mechanism of arbitrary fixed points of hidden neurons
- Tomoyasu Horikawa: Decoding of seen and imagined contents from the human brain via deep neural network representation