The 20th Winter Workshop (第20回 冬のワークショップ)

脳と人工知能 "Brain and artificial intelligence"

Dates: January 8 (Wed) - 10 (Fri), 2020

Venue:
Rusutsu Resort (13 Izumikawa, Rusutsu-mura, Abuta-gun, Hokkaido)
  http://www.rusutsu.co.jp/
Venue map:
North Wing Convention Hall, Hall 18

Schedule:

January 8 (Wed) - 10 (Fri)
脳と人工知能 "Brain and artificial intelligence"
January 8: Special Session
18:10 - 19:00 Richard Zemel (Vector Institute for Artificial Intelligence)
19:10 - 20:00 Minho Lee (Kyungpook National University)
20:10 - 21:00 Kenji Doya (Okinawa Institute of Science and Technology Graduate University)
21:00 - 23:00 Poster session
January 9: Topic Session
15:30 - 16:20 Masataka Watanabe (University of Tokyo)
16:30 - 17:20 Shinji Nishimoto (NICT CiNet)
17:30 - 18:20 Mackenzie W. Mathis (Harvard University)
20:00 - 23:00 Poster session
January 10: Topic Session
8:50 - 9:00 Announcement of the poster presentation awards
9:00 - 9:50 Alexander Mathis (Harvard University)
10:00 - 10:50 Ken-ichiro Tsutsui (Tohoku University)
11:00 - 11:50 Takamitsu Watanabe (RIKEN Center for Brain Science)




Abstracts and References:

Richard Zemel
Vector Institute for Artificial Intelligence
Computer Science and Industrial Research Chair in Machine Learning, University of Toronto


Controlling the Black Box: Learning Manipulable and Fair Representations

Machine learning models, and more specifically deep neural networks, are achieving state-of-the-art performance on difficult pattern-recognition tasks such as object recognition, speech recognition, drug discovery, and more. However, deep networks are notoriously difficult to understand, both in how they arrive at their responses and in how those responses can be influenced. As these systems become more prevalent in real-world applications, it is essential to allow users to exert more control over the learning system. In particular, a wide range of applications can be facilitated by imposing some structure on the learned representations, enabling users to manipulate, interpret, and in some cases obfuscate them. In this talk I will discuss recent work that takes steps towards these goals, allowing users to interact with and control representations.
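As a deliberately simplified illustration of obfuscating a sensitive attribute in a learned representation (a linear projection on synthetic data, far simpler than the adversarial and VAE-based methods in the references below):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy representation: 200 samples, 5 features; feature 0 leaks a
# binary "sensitive" attribute that we wish to obfuscate.
n, d = 200, 5
s = rng.integers(0, 2, size=n).astype(float)
Z = rng.normal(size=(n, d))
Z[:, 0] += 2.0 * s

def linear_r2(X, y):
    """R^2 of the least-squares linear prediction of y from X."""
    y = y - y.mean()
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    return 1.0 - resid @ resid / (y @ y)

# Direction along which the attribute is most linearly predictable.
w, *_ = np.linalg.lstsq(Z, s - s.mean(), rcond=None)
w /= np.linalg.norm(w)

# "Fair" representation: project that direction out of every sample.
Z_fair = Z - np.outer(Z @ w, w)

r2_before = linear_r2(Z, s)       # attribute easily decoded
r2_after = linear_r2(Z_fair, s)   # linear decodability largely removed
```

Unlike the adversarially learned representations in the references, this only removes linear decodability, but it shows the basic trade-off between retaining information and hiding an attribute.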

References:
1. Learning latent subspaces in variational autoencoders. Jack Klys, Jake Snell, Richard Zemel. Neural Information Processing Systems (2018).
http://www.cs.toronto.edu/~zemel/documents/Conditional_Subspace_VAE_all.pdf
2. Learning adversarially fair and transferable representations. David Madras, Elliot Creager, Toniann Pitassi, Richard Zemel. International Conference on Machine Learning (2018).
http://www.cs.toronto.edu/~zemel/documents/laftr-icml.pdf
3. Excessive invariance causes adversarial vulnerability. Jörn-Henrik Jacobsen, Jens Behrmann, Richard Zemel, Matthias Bethge. International Conference on Learning Representations (2019).
https://arxiv.org/pdf/1811.00401.pdf



Minho Lee
Kyungpook National University


Perception-Action Cycle for Autonomous Learning of AI Agent

Deep learning has taken a place not only in many professional academic fields but also in industry. Recent breakthroughs have adopted deep learning methods, eventually outperforming state-of-the-art algorithms in many engineering applications. However, deep learning still has many limitations for real-world applications, including data collection and labeling issues, lack of interpretability, the difficulty of designing models with good generalization ability, and the stability-plasticity dilemma. How does the brain learn to transform sensory data into accurate perceptual information while simultaneously learning to solve complex behavioral tasks? Poor-quality perceptual information can affect the learning process for solving behavioral tasks, while actions can influence perceptual judgments. The perceptual problem can be solved by observing the effects of actions that preserve a physical invariance or a retrieval-memory invariance. In this presentation, I propose that the brain exploits action to improve perception, and that this updated perceptual process is used to improve behavioral quality in a cyclic process. By using this perception-action cycle, humans can develop their intellectual power by themselves with a life-long learning mechanism. In this talk, I will present a few attempts to overcome the current limitations of deep learning through the perception-action cycle. The proposed methods can generate new labeled data based on a self-QA (question-answer) generative AI agent, and the agent can then get smarter by learning from the newly labeled data. The smarter AI can in turn produce better labeled training data through the self-QA agent. I will also introduce a real application of the perception-action cycle on the NAO humanoid robot. Experimental results show that the synergy and interaction of the perception-action cycle are essential for the autonomous development of an AI agent.
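One turn of such a self-labeling cycle (a toy sketch with a nearest-centroid classifier standing in for the QA-based generative agent described in the abstract) might look like:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two Gaussian classes; only 10 points are labeled, 200 are not.
n_lab, n_unlab = 5, 100
X_lab = np.vstack([rng.normal([-2, 0], 1, (n_lab, 2)),
                   rng.normal([+2, 0], 1, (n_lab, 2))])
y_lab = np.array([0] * n_lab + [1] * n_lab)
X_un = np.vstack([rng.normal([-2, 0], 1, (n_unlab, 2)),
                  rng.normal([+2, 0], 1, (n_unlab, 2))])
y_un_true = np.array([0] * n_unlab + [1] * n_unlab)  # held out

def centroids(X, y):
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(C, X):
    d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2)
    return d.argmin(axis=1), np.abs(d[:, 0] - d[:, 1])

# Round 1: classify unlabeled data with centroids from the few labels.
C = centroids(X_lab, y_lab)
pseudo, margin = predict(C, X_un)

# Keep only confident pseudo-labels, retrain, and classify again --
# one turn of the self-labeling cycle.
keep = margin > 1.0
C2 = centroids(np.vstack([X_lab, X_un[keep]]),
               np.concatenate([y_lab, pseudo[keep]]))
pred2, _ = predict(C2, X_un)
accuracy = (pred2 == y_un_true).mean()
```

The retrained classifier uses centroids estimated from far more (pseudo-labeled) points, which is the same "smarter model produces better labels" loop, in miniature.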

References:
1. Y. Jin and M. Lee, "Enhancing Binocular Depth Estimation Based on Proactive Perception and Action Cyclic Learning for an Autonomous Developmental Robot," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 49, no. 1, pp. 169-180, 2019.
2. S. Kim, Z. Yu, and M. Lee, "Understanding Human Intention by Connecting Perception and Action Learning in Artificial Agents," Neural Networks, vol. 92, pp. 29-38, 2017.


銅谷賢治
Kenji Doya
沖縄科学技術大学院大学
Okinawa Institute of Science and Technology Graduate University


What can we further learn from the brain for artificial intelligence?

Deep learning and reinforcement learning are prime examples of how brain-inspired computing architectures can benefit artificial intelligence. But what else can we learn from the brain to bring artificial intelligence to the next level? The brain can be seen as a multi-agent system composed of heterogeneous learners using different representations and algorithms. In navigation and control, allocentric, egocentric, and intrinsic state representations offer different advantages. In reinforcement learning, the choice or mixture of model-free and model-based algorithms critically affects data efficiency and computational cost. Animals and humans appear to be able to utilize multiple representations and algorithms in highly flexible ways. How the brain realizes flexible selection and combination of relevant modules for a given situation is a major open problem in neuroscience, and its solution should help the development of more flexible, general artificial intelligence.
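The mixture of model-free and model-based values can be sketched on a toy Markov decision process (a hypothetical illustration with an arbitrary fixed mixing weight, not a mechanism proposed in the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

# Deterministic 5-state chain; action 1 moves right, action 0 left.
# Reaching state 4 yields reward 1 and ends the episode.
n_states, gamma = 5, 0.9

def step(s, a):
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    r = 1.0 if s2 == n_states - 1 else 0.0
    return s2, r, s2 == n_states - 1

# Model-based: value iteration on the (here, known) transition model.
Q_mb = np.zeros((n_states, 2))
for _ in range(100):
    for s in range(n_states - 1):
        for a in (0, 1):
            s2, r, done = step(s, a)
            Q_mb[s, a] = r + (0.0 if done else gamma * Q_mb[s2].max())

# Model-free: tabular Q-learning from sampled experience.
Q_mf = np.zeros((n_states, 2))
for _ in range(2000):
    s = rng.integers(0, n_states - 1)
    a = rng.integers(0, 2)
    s2, r, done = step(s, a)
    target = r + (0.0 if done else gamma * Q_mf[s2].max())
    Q_mf[s, a] += 0.5 * (target - Q_mf[s, a])

# Flexible mixture: weight the two value estimates before acting.
w = 0.5
policy = (w * Q_mb + (1 - w) * Q_mf).argmax(axis=1)
```

Model-based value iteration gets exact values from the model at a higher computational cost, while Q-learning needs only samples; the open question in the abstract is how the brain sets the weight between such modules per situation.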



渡辺正峰
Masataka Watanabe
東京大学大学院工学系研究科 システム創成学専攻
Department of Systems Innovation, School of Engineering, The University of Tokyo


意識を科学する  - 自然則、人工意識、主観テスト -
Scientific Approach to Consciousness - Laws of Nature, Artificial Consciousness, and its Subjective Test -

The main goal of the talk is to introduce an experimental method for testing various hypotheses on consciousness. Inspired by Sperry's observation that split-brain patients possess two independent streams of consciousness, the idea is to implement candidate neural mechanisms of visual consciousness in an artificial cortical hemisphere and subjectively test whether conscious experience is evoked in the device's visual hemifield [1-2]. That is, we replace one hemisphere of our own brain with a mechanical hemisphere in which candidate hypotheses of consciousness are implemented, and subjectively test whether we perceive a unified consciousness, say, whether the right and left visual fields appear as a unified whole. If we do, then we can conclude that consciousness resides in the mechanical hemisphere, and that it is linked with the consciousness in the remaining biological one. The test is valid under the widely held assumption that the intact, non-split brain functions as a "master-master" system with regard to visual consciousness.
 The outline of the presentation is as follows. First, to provide a working definition of consciousness and as an introduction to experimental consciousness studies, I will briefly review my own previous work [3-5]. Second, I will show that we may overcome the so-called Hard Problem of consciousness by introducing novel laws of nature. Next, I will discuss why validating such laws of nature for consciousness requires an "analysis by synthesis" approach, that is, an attempt to construct artificial consciousness. The subjective test proposed above is critical for this approach (we could not have built flying machines by trial and error in outer space, where there is no way to test them), and it has the potential to transform consciousness research into a proper science, in which hypothesis formulation and experimental validation are repeated to shed light on natural phenomena.

References:
1. Watanabe, M. "A Turing test for visual qualia: an experimental method to test various hypotheses on consciousness." Talk presented at Towards a Science of Consciousness, 21-26 April 2014, Tucson: online abstract 124.
2. Watanabe, M. "Neural consciousness, machine consciousness" (in Japanese, original title 「脳の意識 機械の意識」), 2017, Chuou-Kouronsha (English and Chinese versions available within the year).
3. Watanabe, M., Cheng, K., Ueno, K., Asamizuya, T., Tanaka, K., Logothetis, N. Attention but not awareness modulates the BOLD signal in the human V1 during binocular suppression. Science, 2011. 334(6057): p. 829-31.
4. Watanabe, M., Bartels, A., Macke, J., Logothetis, N. Temporal jitter of the BOLD signal reveals a reliable initial dip and improved spatial resolution. Current Biology, 2013. 23(21): p. 2146-50.
5. Watanabe, M., Nagaoka, S., Kirchberger, L., Poyraz, E., Lowe, S., Uysal, B., Vaiceliunaite, A., Totah, N., Logothetis, N., Busse, L., Kastner, S. Mouse primary visual cortex is not part of the reverberant neural circuitry critical for visual perception. (in revision)



西本伸志
Shinji Nishimoto
1) 情報通信研究機構 脳情報通信融合研究センター
2) 大阪大学大学院医学系研究科
3) 大阪大学大学院生命機能研究科
1) Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology (NICT)
2) Graduate School of Medicine, Osaka University
3) Graduate School of Frontier Biosciences, Osaka University


Modeling and decoding semantic and cognitive representations in the human brain using data-driven features

Our daily life is realized by the complex orchestration of diverse brain functions, including perception, decision, and action. One of the central goals of systems neuroscience is to reveal the complete perceptual and cognitive representations underlying such diverse functions. In our previous studies, we revealed the cortical organization of visual and semantic representations using voxel-wise modeling approaches. In these studies, human brain activity was measured using functional MRI (fMRI) and was modeled and decoded using classes of pre-designed visual and language-based features. Recently, we have further extended our approach to examine more comprehensive cortical representations through data-driven features derived from massive databases. For example, we have decoded semantic experiences from the human brain by modeling movie-evoked brain activity via distributed representations of words (word2vec). We have also examined cortex-wide cognitive representations by modeling brain activity evoked by more than 100 cognitive tasks, using latent features derived from a massive fMRI data repository (Neurosynth). These studies provide one of the most comprehensive views of semantic and cognitive representations under naturalistic perceptual and cognitive conditions. I will discuss the implications, potential applications, and outlook of these approaches.
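As a rough caricature of the voxel-wise encoding approach (with random stand-ins for the word-embedding features and fMRI responses, not real word2vec vectors or brain data), one ridge regression per voxel can be fit as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 300 time points, 50-dim "word embedding"
# features of the stimulus, and 20 simulated voxels whose responses
# are noisy linear functions of those features.
T, k, v = 300, 50, 20
X = rng.normal(size=(T, k))                      # stimulus features
W_true = rng.normal(size=(k, v))                 # hidden voxel weights
Y = X @ W_true + 0.5 * rng.normal(size=(T, v))   # "fMRI" responses

# Voxel-wise encoding model: one ridge regression per voxel,
# solved for all voxels at once.
lam = 1.0
W_hat = np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ Y)

# Per-voxel prediction accuracy on the training data
# (a real analysis would cross-validate and test on held-out runs).
r = np.array([np.corrcoef(X @ W_hat[:, i], Y[:, i])[0, 1]
              for i in range(v)])
```

Decoding then inverts this direction, mapping measured responses back into the feature space where, in the actual studies, nearest word vectors give the semantic readout.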

References:
1. Nishimoto S, Vu AT, Naselaris T, Benjamini Y, Yu B, Gallant JL. (2011) Reconstructing visual experiences from brain activity evoked by natural movies. Curr Biol. 21(19):1641-6
2. Çukur T, Nishimoto S, Huth AG, Gallant JL. (2013) Attention during natural vision warps semantic representation across the human brain. Nat Neurosci. 16(6):763-70
3. Nishida S, Nishimoto S. (2018) Decoding naturalistic experiences from human brain activity via distributed representations of words, Neuroimage 180(Pt A):232-242.
4. Nakai T, Nishimoto S. (2019) Data-driven models reveal the organization of diverse cognitive functions in the brain, bioRxiv, doi: 10.1101/614081



Mackenzie W. Mathis
Rowland Fellow, Harvard University
EPFL


Somatosensory processing: insights from networks to neurons

Our motor outputs are constantly re-calibrated to adapt to systematic perturbations. Somatosensory cortex (S1) has recently been shown to be important for learning to adapt to such perturbations, but the underlying neural computations remain unclear (Mathis et al 2017). To better study the proprioceptive system, we have developed a neural network model of sensory processing. When trained on a task, these models develop cortical-neuron-like (S1) representations, providing us with a powerful system for comparing artificial neural networks to real neurons across the proprioceptive hierarchy. I will discuss ongoing efforts at combining these tools to perform a systematic comparison between brains and machines.
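One standard way to compare model representations with recorded neurons is representational similarity analysis; a minimal sketch on synthetic data (not the speaker's actual analysis) follows:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "systems" respond to the same 30 stimuli: a model layer (64
# units) and a recorded population (40 neurons). Both are driven by
# the same 3-dim latent stimulus structure, so their representational
# geometries should agree even though the units differ.
n_stim = 30
latent = rng.normal(size=(n_stim, 3))
model = latent @ rng.normal(size=(3, 64)) + 0.1 * rng.normal(size=(n_stim, 64))
neurons = latent @ rng.normal(size=(3, 40)) + 0.1 * rng.normal(size=(n_stim, 40))

def rdm(R):
    """Representational dissimilarity matrix: pairwise distances
    between stimulus responses, flattened to the upper triangle."""
    d = np.linalg.norm(R[:, None, :] - R[None, :, :], axis=2)
    return d[np.triu_indices(n_stim, k=1)]

# Similarity of the two representational geometries.
rsa_score = np.corrcoef(rdm(model), rdm(neurons))[0, 1]
```

Because the comparison happens at the level of pairwise stimulus dissimilarities, it needs no unit-to-neuron mapping, which is what makes it convenient for brain-versus-machine comparisons.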

Alexander Mathis
Harvard University


Deep Learning Tools for the Analysis of Behavior

Quantifying behavior is crucial for many applications across the life sciences and engineering. Videography provides easy methods for observing and recording animal behavior in diverse settings, yet extracting particular aspects of a behavior for further analysis can be highly time-consuming and computationally challenging. I will present an efficient method for markerless pose estimation based on transfer learning with deep neural networks that achieves excellent results with minimal training data. I will show that, for networks both pretrained and trained from random initializations, better ImageNet-performing architectures also perform better for pose estimation, with a substantial improvement on out-of-domain data when pretrained on ImageNet. I will illustrate the versatility of this framework by tracking various body parts in multiple species across a broad collection of behaviors, from egg-laying flies to hunting cheetahs.
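The transfer-learning principle behind such methods can be caricatured in a few lines: a frozen "pretrained" feature extractor plus a small readout trained on few labeled frames (synthetic data and a random-projection backbone here, not the actual deep architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "pretrained backbone": a fixed random projection plus ReLU,
# a stand-in for ImageNet features. Only the readout will be trained.
n_pix, n_feat = 256, 128
W_backbone = rng.normal(size=(n_pix, n_feat)) / np.sqrt(n_pix)

def backbone(imgs):
    return np.maximum(imgs @ W_backbone, 0.0)   # ReLU features

# Synthetic "frames" whose true 2-D keypoint is a fixed linear
# function of the image -- enough structure for a readout to recover.
n_train = 50
imgs = rng.normal(size=(n_train, n_pix))
A_true = rng.normal(size=(n_pix, 2))
keypts = imgs @ A_true / np.sqrt(n_pix)

# Train only the readout (ridge regression on backbone features);
# the backbone stays frozen, which is why few labels suffice.
F = backbone(imgs)
lam = 1e-3
readout = np.linalg.solve(F.T @ F + lam * np.eye(n_feat), F.T @ keypts)

pred = backbone(imgs) @ readout
train_err = np.abs(pred - keypts).mean()
```

The point of the caricature is the division of labor: generic features are reused, and only a small task-specific head is fit to the minimal training data.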

References:
1. Alexander Mathis, Mert Yüksekgönül, Byron Rogers, Matthias Bethge, Mackenzie W. Mathis: Pretraining boosts out-of-domain robustness for pose estimation.
2. Mackenzie W. Mathis, Alexander Mathis: Deep learning tools for the measurement of animal behavior in neuroscience.


筒井健一郎
Ken-ichiro Tsutsui
東北大学 大学院生命科学研究科 脳神経システム分野
Laboratory of Systems Neuroscience, Tohoku University Graduate School of Life Sciences


行動の柔軟性を支える前頭連合野機能 - 神経活動の計測・モデリングおよび操作による解析
Prefrontal cortex underpinning the flexibility of behavior: investigation by the recording, modeling, and manipulation of neural activity

The prefrontal cortex is known to be responsible for flexible behavioral adaptation; however, its neural mechanisms at the single-neuron and local-circuit levels are yet to be understood. We trained macaque monkeys to perform tasks that require quick adaptation of behavior, either with reference to the value of objects fluctuating on a trial-by-trial basis (Refs 1, 3), or to object categories stored in long-term memory and an implicit rule that changes after a random number of trials (Refs 2, 4). Single-unit recording during the performance of these tasks revealed that such information is explicitly represented in the firing frequency of individual neurons within the prefrontal cortex. Based on those single-unit data, we constructed a dynamical computational model of the prefrontal neural circuit (in preparation). Furthermore, we performed neural interventions using transcranial magnetic stimulation (TMS). When local neural activity within the prefrontal cortex was suppressed by low-frequency repetitive TMS, flexible behavioral control according to category and rule information was impaired, whereas trial-and-error learning remained intact (in preparation).
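The trial-and-error adaptation probed by such reversal tasks can be sketched with a simple reward-prediction-error learner (a toy illustration, not the dynamical circuit model mentioned above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy serial-reversal task: two objects, one rewarded with p=0.9,
# the other with p=0.1; the mapping flips every 50 trials. A simple
# trial-and-error learner re-adapts after each reversal.
n_trials, block = 400, 50
Q = np.zeros(2)
alpha, eps = 0.3, 0.2
correct = np.zeros(n_trials, dtype=bool)

for t in range(n_trials):
    best = (t // block) % 2                  # currently rewarded object
    a = rng.integers(0, 2) if rng.random() < eps else int(Q.argmax())
    r = float(rng.random() < (0.9 if a == best else 0.1))
    Q[a] += alpha * (r - Q[a])               # prediction-error update
    correct[t] = (a == best)

# Choice accuracy late in each block, after re-adaptation.
late = np.array([correct[i * block + block // 2:(i + 1) * block].mean()
                 for i in range(n_trials // block)])
```

Such a learner recovers after every reversal only by slow re-learning; the abstract's point is that explicit category and rule representations in prefrontal cortex allow much faster, one-shot-like switches, and that TMS dissociates the two.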

References:
1. Grabenhorst F, Tsutsui KI, Kobayashi S, Schultz W. Primate prefrontal neurons signal economic risk derived from the statistics of recent reward experience. Elife 8: e44838 (2019)
2. Hosokawa T, Honda Y, Yamada M, Romero MDC, Iijima T, Tsutsui KI. Behavioral evidence for the use of functional categories during group reversal task performance in monkeys. Sci Rep. 8: 15878 (2018)
3. Tsutsui KI, Grabenhorst F, Kobayashi S, Schultz W. A dynamic code for economic object valuation in prefrontal cortex neurons. Nat Commun. 7: 12554 (2016)
4. Tsutsui K, Hosokawa T, Yamada M, Iijima T. Representation of Functional Category in the Monkey Prefrontal Cortex and Its Rule-Dependent Use for Behavioral Selection. J Neurosci. 36: 3038-48 (2016)

渡部 喬光
Takamitsu Watanabe
理化学研究所 脳神経科学研究センター
RIKEN Center for Brain Science


大脳ネットワークと神経活動ダイナミクス,および精神疾患の関連について
Brain connectomes, neural dynamics, and psychiatric disorders in humans

Human minds are considered to be underpinned by global and local neural dynamics on large-scale brain network structures. Here, I will present associations between three layers of the human brain: cognition, neural dynamics, and brain connectomes. First, I will talk about a data-driven approach to identifying transitory global neural dynamics underlying flexible and adaptive human minds. I will also demonstrate how local neural dynamics affect complex cognitive and behavioural activities. Next, I will present computational findings on how such critical neural dynamics are determined by brain connectomes. Finally, by combining these observations with case studies of psychiatric disorders, I will illustrate a picture of the associations between brain connectomes, global/local neural dynamics, and typical/atypical human minds. These data-driven approaches and this dynamics-based concept are expected to bring novel insights into some overlooked cognitive functions in healthy humans and to help us build a more biological and trans-diagnostic computational psychiatry for a wide range of prevalent mental disorders.
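One data-driven way to characterize such state dynamics, used for example in reference 4, is energy-landscape analysis of a pairwise maximum-entropy (Ising-type) model of binarized regional activity; a minimal sketch with random, unfitted parameters (a real analysis fits h and J to data):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

# Pairwise maximum-entropy model over N binarized brain regions,
# with energy E(s) = -h.s - 0.5 s.J.s for s in {-1, +1}^N.
# h and J here are random stand-ins, not parameters fitted to fMRI.
N = 6
h = rng.normal(scale=0.1, size=N)
J = rng.normal(scale=0.5, size=(N, N))
J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)

def energy(s):
    return -h @ s - 0.5 * s @ J @ s

# A state is a local minimum if every single-region flip raises the
# energy; these minima are the candidate attractor states between
# which brain activity transitions.
states = [np.array(s) for s in product([-1, 1], repeat=N)]
minima = []
for s in states:
    flips = [energy(s * np.where(np.arange(N) == i, -1, 1))
             for i in range(N)]
    if all(f > energy(s) for f in flips):
        minima.append(s)
```

With fitted parameters, the number, depth, and basin structure of these minima summarize the global dynamics, and how they change in, e.g., autism is the kind of connectome-dynamics link discussed in the talk.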

References:
1. Watanabe, T., et al. (2019). "A Neuroanatomical Substrate Linking Perceptual Stability to Cognitive Rigidity in Autism." J Neurosci 39(33): 6540-6554.
2. Watanabe, T., et al. (2019). "Atypical intrinsic neural timescale in autism." Elife 8.
3. Watanabe, T. and G. Rees (2017). "Brain network dynamics in high-functioning individuals with autism." Nat Commun 8: 16048.
4. Watanabe, T., et al. (2014). "Energy landscape and dynamics of brain activity during human bistable perception." Nat Commun 5: 4765.




 
©2019 Mechanism of Brain and Mind