The 21st Winter Workshop

"Complex cognition and decision-making"

Date: Wednesday, January 12, 2022

Registration:
Online registration is here.
The deadline is noon on Friday, January 7.

Schedule:

Wednesday, January 12
"Complex cognition and decision-making"

Morning session
9:10-10:00 Shogo Ohmae (Baylor College of Medicine)
10:00-10:30 Breakout Session
10:40-11:30 Hidenori Tanaka (Physics & Informatics Lab, NTT Research)
11:30-12:00 Breakout Session

Afternoon session
14:00-14:50 Mingbo Cai (The University of Tokyo International Research Center for Neurointelligence)
14:50-15:20 Breakout Session
15:30-16:20 Tianming Yang (Chinese Academy of Sciences)
16:20-16:50 Breakout Session




Abstracts and References:

Shogo Ohmae
Baylor College of Medicine


Network dynamics of the cerebellum in processing time: The role of the cerebellar feedback connections from the output layer to the input layer.

Network dynamics are at the core of the mechanism by which networks of neurons realize advanced brain functions such as decision making. In particular, network dynamics for processing time must be optimized not only at the endpoint of the network's behavior but also over its entire time course. I have therefore been focusing on the network dynamics of the cerebellum, which is considered to play a particularly important role in processing time intervals of less than a second. According to conventional cerebellar theory, the circuit from the input layer to the output layer (i.e., granule cells to Purkinje cells to the cerebellar nuclei) forms a feedforward circuit and lacks complex network dynamics. However, this idea had not been experimentally verified, so I conducted a series of experimental tests. First, I demonstrated that the cerebellum is indeed involved in generating time intervals of less than a second. Second, based on recent anatomical findings of abundant feedback connections from the output layer to the input layer, I found that these feedback connections play a major and essential role in the processing of time by cerebellar Purkinje cells. Finally, I created an artificial network model that imitates the cerebellar circuit and demonstrated that the model can generate network dynamics with features sufficient for processing time (e.g., attracting trajectories and point attractors). These results call for revisions of the classical computational principles of the cerebellum and pave the way for new research on the information processing produced by the network dynamics of the recurrent cerebellar circuit.
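The notion of a point attractor in a recurrent network can be illustrated with a toy simulation (a generic sketch in NumPy, not the speaker's cerebellar model): when the recurrent weights are scaled weakly enough that the update is a contraction, trajectories started from different initial states converge to the same fixed point.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 50  # number of units (arbitrary choice for this sketch)
# Gaussian recurrent weights scaled so the spectral norm stays below 1,
# which makes the tanh update a contraction with a single point attractor.
W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N)) * 0.4

def step(x):
    # One update of the recurrent network; tanh keeps activity bounded.
    return np.tanh(W @ x)

# Run two different random initial conditions through the same dynamics.
x1 = rng.normal(size=N)
x2 = rng.normal(size=N)
for _ in range(200):
    x1, x2 = step(x1), step(x2)

# The distance between the two trajectories shrinks toward zero:
# both have fallen into the same point attractor.
print(np.linalg.norm(x1 - x2))
```

Richer dynamics (e.g., the slow attracting trajectories useful for timing) require structure beyond this contracting toy case, such as the output-to-input feedback loops the talk describes.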



Hidenori Tanaka
Physics & Informatics Lab, NTT Research


From deep learning to mechanistic understanding in neuroscience: How biological vision predicts future events

Recently, deep neural networks have become popular tools in neuroscience for predicting the responses of biological neurons to sensory input. However, the success of such models, with their millions of parameters, raises profound questions about the very nature of explanation in neuroscience. Are we simply replacing one complex system (the brain) with another (an artificial network) without understanding either? Moreover, beyond achieving good benchmark scores, are the deep network's internal computational mechanisms for generating neural responses the same as those in the brain? In this talk, I will share our recent efforts to combine modern attribution methods and mathematical modeling to distill deep learning models into neuroscientific insights. We develop and apply systematic model reduction to artificial network models of the retina, revealing the internal computational mechanisms by which the retina predicts future events. Overall, our work provides a new roadmap for neuroscience to go from neural recording data to neuroscientific hypotheses via model reduction of deep learning models.
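To make "attribution method" concrete, here is a minimal gradient-times-input sketch on a stand-in model (a hand-picked linear-ReLU unit, not the authors' retina network): it estimates how much each input dimension contributed to the model's response.

```python
import numpy as np

# Stand-in "retina-like" model: a linear filter followed by a ReLU.
# The weights are arbitrary values chosen for illustration.
w = np.array([0.5, -1.0, 2.0])

def model(x):
    return max(0.0, float(w @ x))

def grad_times_input(x, eps=1e-6):
    """Gradient-times-input attribution, with the gradient estimated
    by central finite differences so the sketch needs no autodiff."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (model(x + d) - model(x - d)) / (2 * eps)
    return g * x

x = np.array([1.0, 0.5, 0.25])
# Per-input contributions to the response; their signs show which
# inputs pushed the output up or down.
print(grad_times_input(x))
```

Systematic model reduction goes further than single-input attributions like this, but the same idea — asking which internal pathways drive the response — is the starting point.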


Mingbo Cai
The University of Tokyo International Research Center for Neurointelligence


Learning internal models of the world: brain and machine

The brain only has direct access to the raw signals about the world provided by sensory receptors, yet inside our minds we hold an abstract representation of the structure of the environment and of the latent rules governing how it evolves. In other words, we live in a self-constructed model of the world. How does the brain construct such a model, and how can we achieve the same with neural networks? We study this from two angles: building deep learning models that learn representations of the environment, taking inspiration from the constraints faced by infants, and studying how adults learn the unknown rules of tasks through trial and error. I will first show that by treating objects as latent causes of images that allow efficient prediction of future visual inputs, the ability to infer objects from 2D images can emerge without supervision. This provides one way to bridge neural networks with symbolic representations. I will then show that, when facing tasks with complex latent rules, the brain strategically trades off serial hypothesis testing against reinforcement learning to discover the correct latent rules and improve reward.



Tianming Yang
Chinese Academy of Sciences


Monkey Plays Pac-Man with Compositional Strategies and Hierarchical Decision-making

Humans can often handle daunting tasks with ease by developing a set of strategies that reduce decision-making to simpler problems. The ability to use heuristic strategies demands an advanced level of intelligence and had not previously been demonstrated in animals. Here, we trained macaque monkeys to play the classic video game Pac-Man. The monkeys' decision-making could be described by a strategy-based hierarchical decision-making model with over 90% accuracy. The model reveals that the monkeys adopted the take-the-best heuristic, using one dominating strategy for their decision-making at a time, and formed compound strategies by assembling basis strategies to handle particular game situations. With this model, the computationally complex but fully quantifiable Pac-Man behavioral paradigm provides a new approach to understanding animals' advanced cognition.
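The take-the-best heuristic mentioned in the abstract can be sketched in a few lines (a textbook formulation with hypothetical cue data, not the monkeys' actual strategy set): cues are consulted in order of validity, and the first cue that discriminates between the two options decides.

```python
def take_the_best(cues_a, cues_b, validity_order):
    """Take-the-best: walk through cues from most to least valid and
    decide on the first cue whose values differ between the options.
    Cue values are binary (1 = present, 0 = absent)."""
    for cue in validity_order:
        a, b = cues_a[cue], cues_b[cue]
        if a != b:
            return 'A' if a > b else 'B'
    return 'guess'  # no cue discriminates, so fall back to guessing

# Hypothetical example: judging which of two cities is larger
# from binary cues, ordered by assumed validity.
city_a = {'capital': 1, 'airport': 1, 'university': 0}
city_b = {'capital': 0, 'airport': 1, 'university': 1}
print(take_the_best(city_a, city_b, ['capital', 'airport', 'university']))  # -> 'A'
```

The key property is that a single dominating cue (or, in the monkeys' case, a single dominating strategy) settles the decision, rather than all evidence being weighed at once.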




©2022 Mechanism of Brain and Mind