
The 10th Summer Workshop

"Language and Communication"

Schedule:

15:00-15:50

The neural basis of human language
Kuniyoshi L. Sakai (The University of Tokyo)

16:00-16:50

The evolution of vocal communication through coupled oscillations
Asif A. Ghazanfar (Princeton University)

17:00-17:50

Neural correlates of finite-state song syntax in Bengalese finches
Kazuo Okanoya (RIKEN / JST ERATO Okanoya Emotional Information Project)

Abstracts and related papers:

Kuniyoshi L. Sakai

"The neural basis of human language"

There is a tacit assumption in neuroscience, from the genetic to the systems level, that the biological foundations of humans are essentially similar to those of nonhuman primates, and that even human language can be understood by extending experiments with monkeys and apes. However, it is well known that human language differs radically from animal communication: infinite sentences can be generated from a finite set of sounds, signs, or letters. Recent advances in functional neuroimaging have contributed significantly to systems-level analyses of brain development. In my talk, I will explain the current understanding of how the "final state" of language acquisition is represented in the mature brain, and summarize recent findings on cortical plasticity during second language (L2) acquisition, focusing particularly on the function of the grammar center in the left frontal cortex. This approach, which reveals linguistic processes in terms of not only behavioral changes but also anatomical and functional brain changes, takes a first step toward a new era in the neuroscience of human language.
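As an illustrative aside (not from the abstract), the point that a finite set of symbols and rules can generate unboundedly many sentences can be sketched with a toy recursive grammar. The grammar, words, and depth cutoff below are all invented for the sketch:

```python
import random

# A toy grammar (illustrative only): a finite set of words and rules,
# but the recursive S -> S "and" S rule licenses unboundedly many sentences.
GRAMMAR = {
    "S":  [["NP", "VP"], ["S", "and", "S"]],
    "NP": [["birds"], ["people"]],
    "VP": [["sing"], ["talk"]],
}

def generate(symbol="S", rng=None, depth=0):
    """Expand `symbol` via randomly chosen rules until only words remain."""
    rng = rng or random.Random(0)
    if symbol not in GRAMMAR:      # terminal word
        return [symbol]
    # bias away from the recursive rule as depth grows so generation halts
    rules = GRAMMAR[symbol][:1] if depth > 3 else GRAMMAR[symbol]
    out = []
    for sym in rng.choice(rules):
        out.extend(generate(sym, rng, depth + 1))
    return out

sentence = " ".join(generate())
```

Because the recursive rule can always be applied once more, no finite list exhausts the sentences this grammar can produce, despite its five-word vocabulary.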

References:
Sakai, K. L.: Language acquisition and brain development. Science 310, 815-819 (2005).

Sakai, K. L., Nauchi, A., Tatsuno, Y., Hirano, K., Muraishi, Y., Kimura, M., Bostwick, M. & Yusa, N.: Distinct roles of left inferior frontal regions that explain individual differences in second language acquisition. Hum. Brain Mapp. in press, DOI: 10.1002/hbm.20681 (2008).

Nauchi, A. & Sakai, K. L.: Greater leftward lateralization of the inferior frontal gyrus in second language learners with higher syntactic abilities. Hum. Brain Mapp. in press, DOI: 10.1002/hbm.20790 (2009).

Kinno, R., Muragaki, Y., Hori, T., Maruyama, T., Kawamura, M. & Sakai, K. L.: Agrammatic comprehension caused by a glioma in the left frontal cortex. Brain Lang. in press, DOI: 10.1016/j.bandl.2009.05.001 (2009).



Asif A. Ghazanfar
"The evolution of vocal communication through coupled oscillations"

Determining the substrates required for the evolution of human speech is a difficult task, as most traits thought to give rise to speech (the vocal production apparatus and the brain) do not fossilize. However, by comparing the behavior and biology of extant primates with those of humans, one can deduce the behavioral capacities of extinct common ancestors, allowing the identification of homologies and providing clues as to the adaptive functions of such behaviors. With regard to vocal communication, one consistently overlooked aspect of speech evolution is that speech is not a purely auditory phenomenon but a fully integrated multisensory-motor behavior. Using the behavior and neurobiology of monkeys as a comparative model system, we are investigating the multiple overlapping and time-locked sensory (and potentially motor) systems that enable primates, including humans, to communicate in the vocal domain. Our data suggest that vocal communication arises through the coupling of multiple oscillations that operate on different timescales. The facial dynamics and vocal acoustics of the signaler are linked and take the form of a coupled slow oscillation. These signals, in turn, couple with ongoing oscillations in the receiver's auditory cortex. These auditory cortical oscillations then modulate faster oscillations, which in turn couple to parallel oscillations in other brain regions, such as the superior temporal sulcus. We hypothesize that the oscillatory structure present in the facial dynamics and vocal acoustics exploits the structure of neural oscillations, and that vocal communication emerges from these multiple oscillatory couplings. Since each locus of coupling is a putative substrate for the evolution of speech in humans, it is unlikely that speech evolved solely through changes in key brain structures or the development of new ones.
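The idea of one oscillation coupling to another on a different timescale can be illustrated with a minimal toy model, not the authors' actual model: two phase oscillators with Kuramoto-style coupling, where the fast one is nudged toward the slow one's phase. The frequencies and coupling strength below are illustrative assumptions:

```python
import numpy as np

def coupled_oscillators(f_slow=4.0, f_fast=40.0, k=5.0,
                        dt=1e-3, steps=2000):
    """Integrate two phase oscillators; the fast one is pulled toward
    the slow one's phase. Frequencies (Hz) are illustrative, not
    values from the talk."""
    w_slow, w_fast = 2 * np.pi * f_slow, 2 * np.pi * f_fast
    theta = np.zeros(2)                 # [slow, fast] phases (rad)
    trace = np.empty((steps, 2))
    for i in range(steps):
        theta[0] += w_slow * dt
        # fast oscillator runs at its own frequency, nudged by the
        # phase difference from the slow oscillator
        theta[1] += (w_fast + k * np.sin(theta[0] - theta[1])) * dt
        trace[i] = theta
    return trace

trace = coupled_oscillators()
```

In such a sketch, the coupling term periodically speeds up and slows down the fast oscillator, so its instantaneous rate is modulated at the slow rhythm, which is loosely the kind of cross-timescale coupling the abstract describes.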



Kazuo Okanoya

"Neural correlates of finite-state song syntax in Bengalese finches"

Male Bengalese finches sing songs with multiple branching patterns in their sequences. These songs can be described by a finite-state song syntax, a formal linguistic model for string sequencing that has a finite number of states connected by transition probabilities. When one state changes to another, a string of song elements is produced. Juvenile Bengalese finches learn their songs from conspecific males, and detailed analyses of song copying revealed that branching often arises when juveniles learn song parts from multiple tutors. Thus, when they learn songs, Bengalese finches perform splicing and editing, and when they sing, they execute state-transition dynamics.
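A finite-state song syntax of this kind can be sketched in a few lines. The states, note strings, and transition probabilities below are hypothetical, not the actual Bengalese finch grammar:

```python
import random

# Hypothetical finite-state song syntax: each state emits a chunk of
# song notes, then branches probabilistically to a next state.
SYNTAX = {
    "S1": ("ab",  [("S2", 0.7), ("S3", 0.3)]),
    "S2": ("ccd", [("S1", 0.4), ("END", 0.6)]),
    "S3": ("ef",  [("S2", 1.0)]),
}

def sing(start="S1", seed=0):
    """Walk the state graph, concatenating emitted notes until END."""
    rng = random.Random(seed)
    song, state = [], start
    while state != "END":
        notes, transitions = SYNTAX[state]
        song.append(notes)
        r, acc = rng.random(), 0.0      # sample the next transition
        for nxt, p in transitions:
            acc += p
            if r < acc:
                state = nxt
                break
    return "".join(song)

song = sing()
```

Each run of `sing` traces one path through the branching structure, so repeated runs yield different note sequences from the same finite grammar, mirroring the variability of the birds' songs.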
We are trying to discern the neural mechanisms supporting these dynamics. In situ hybridization studies in juvenile and adult Bengalese finches revealed that the expression patterns of functional molecules are congruent with the idea that the modular dynamics (each song note) develop earlier than the system dynamics (song syntax). Lesion studies at the hierarchically organized song control nuclei suggested that song control is likewise hierarchical: the afferent nuclei (NIf and HVC) control the general organization of songs, while the motor nucleus (RA) controls the articulation of each song note. Electrophysiological recordings in HVC revealed population coding of song sequences. Together, these results suggest that HVC represents the sequential dynamics while RA is responsible for producing each isolated song note. A neural network model that implements hidden Markov processes was developed to further study the production and learning mechanisms of finite-state song syntax in Bengalese finches.
In conclusion, we suggest that finite-state song syntax can be studied as a biologically purified model of a formal linguistic system. Although language as a whole is unique to humans, the mechanisms underlying some of its aspects, especially dynamic syntactic sequencing, can be studied in songbirds.


References:
Katahira K., Okanoya K., and Okada M. (2007). A neural network model for generating complex birdsong syntax. Biological Cybernetics, 97, 441-448.

Matsunaga, E. & Okanoya, K. (2008) Expression analysis of cadherins in the songbird brain: relationship to vocal system development. Journal of Comparative Neurology, 508, 329-342.

Nishikawa J., Okada M., & Okanoya K. (2008) Population coding of song element sequence in the Bengalese finch HVC. European Journal of Neuroscience, 27, 3273-3283.

Okanoya K. (2007) Language evolution and an emergent property. Current Opinion in Neurobiology, 17, 271-276.



©2005 Mechanism of Brain and Mind