Source: http://linkedlifedata.com/resource/pubmed/id/15901781
Predicate | Object |
---|---|
rdf:type | |
lifeskim:mentions | |
pubmed:issue | 20 |
pubmed:dateCreated | 2005-5-19 |
pubmed:abstractText | In the social world, multiple sensory channels are used concurrently to facilitate communication. Among human and nonhuman primates, faces and voices are the primary means of transmitting social signals (Adolphs, 2003; Ghazanfar and Santos, 2004). Primates recognize the correspondence between species-specific facial and vocal expressions (Massaro, 1998; Ghazanfar and Logothetis, 2003; Izumi and Kojima, 2004), and these visual and auditory channels can be integrated into unified percepts to enhance detection and discrimination. Where and how such communication signals are integrated at the neural level are poorly understood. In particular, it is unclear what role "unimodal" sensory areas, such as the auditory cortex, may play. We recorded local field potential activity, the signal that best correlates with human imaging and event-related potential signals, in both the core and lateral belt regions of the auditory cortex in awake behaving rhesus monkeys while they viewed vocalizing conspecifics. We demonstrate unequivocally that the primate auditory cortex integrates facial and vocal signals through enhancement and suppression of field potentials in both the core and lateral belt regions. The majority of these multisensory responses were specific to face/voice integration, and the lateral belt region shows a greater frequency of multisensory integration than the core region. These multisensory processes in the auditory cortex likely occur via reciprocal interactions with the superior temporal sulcus. |
pubmed:language | eng |
pubmed:journal | |
pubmed:citationSubset | IM |
pubmed:status | MEDLINE |
pubmed:month | May |
pubmed:issn | 1529-2401 |
pubmed:author | |
pubmed:issnType | Electronic |
pubmed:day | 18 |
pubmed:volume | 25 |
pubmed:owner | NLM |
pubmed:authorsComplete | Y |
pubmed:pagination | 5004-12 |
pubmed:dateRevised | 2006-11-15 |
pubmed:meshHeading | pubmed-meshheading:15901781-Acoustic Stimulation, pubmed-meshheading:15901781-Analysis of Variance, pubmed-meshheading:15901781-Animals, pubmed-meshheading:15901781-Auditory Cortex, pubmed-meshheading:15901781-Auditory Perception, pubmed-meshheading:15901781-Brain Mapping, pubmed-meshheading:15901781-Electroencephalography, pubmed-meshheading:15901781-Evoked Potentials, Auditory, pubmed-meshheading:15901781-Facial Expression, pubmed-meshheading:15901781-Macaca mulatta, pubmed-meshheading:15901781-Male, pubmed-meshheading:15901781-Models, Neurological, pubmed-meshheading:15901781-Pattern Recognition, Visual, pubmed-meshheading:15901781-Photic Stimulation, pubmed-meshheading:15901781-Reaction Time, pubmed-meshheading:15901781-Time Factors, pubmed-meshheading:15901781-Voice |
pubmed:year | 2005 |
pubmed:articleTitle | Multisensory integration of dynamic faces and voices in rhesus monkey auditory cortex. |
pubmed:affiliation | Max Planck Institute for Biological Cybernetics, 72076 Tuebingen, Germany. asifg@princeton.edu |
pubmed:publicationType | Journal Article, Comparative Study, Research Support, Non-U.S. Gov't |
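
The table above is a rendering of the RDF triples LinkedLifeData publishes for this PubMed record. As a minimal sketch of how such a record could be read programmatically with rdflib, assuming the resource URI still dereferences to an RDF serialization via HTTP content negotiation (which may no longer hold for this service):

```python
# Sketch: fetch this record's RDF triples and print predicate/object
# pairs, mirroring the table above. Assumes the linkedlifedata.com URI
# still serves RDF via content negotiation; this is not guaranteed.
from rdflib import Graph, URIRef

RESOURCE = URIRef("http://linkedlifedata.com/resource/pubmed/id/15901781")

g = Graph()
g.parse(str(RESOURCE))  # rdflib resolves the URL and parses the RDF it returns

# List every predicate/object pair whose subject is this record.
for predicate, obj in g.predicate_objects(subject=RESOURCE):
    print(predicate, "|", obj)
```

The empty Object cells above (rdf:type, lifeskim:mentions, pubmed:journal, pubmed:author) were links in the original page whose targets were not captured; a query like the one sketched here would return those object URIs directly.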