Statements in which this resource appears as the subject.
Predicate                Object
rdf:type
lifeskim:mentions
pubmed:issue             17
pubmed:dateCreated       2003-11-19
pubmed:abstractText      This fMRI study explores brain regions involved in the perceptual enhancement afforded by observing visual speech gestures. Subjects passively identified words presented in the following conditions: audio-only, audiovisual, audio-only with noise, audiovisual with noise, and visual-only. The brain may use concordant audio and visual information to enhance perception by integrating the information at a converging multisensory site. Consistent with the response properties of multisensory integration sites, enhanced activity in the middle and superior temporal gyrus/sulcus was greatest when concordant audiovisual stimuli were presented with acoustic noise. Activity in brain regions involved in the planning and execution of speech production, observed in response to visual speech with degraded or absent auditory stimulation, is consistent with an additional pathway through which speech perception is facilitated by internally simulating the intended speech act of the observed speaker.
pubmed:language          eng
pubmed:journal
pubmed:citationSubset    IM
pubmed:status            MEDLINE
pubmed:month             Dec
pubmed:issn              0959-4965
pubmed:author
pubmed:issnType          Print
pubmed:day               2
pubmed:volume            14
pubmed:owner             NLM
pubmed:authorsComplete   Y
pubmed:pagination        2213-8
pubmed:dateRevised       2006-11-15
pubmed:meshHeading
pubmed:year              2003
pubmed:articleTitle      Neural processes underlying perceptual enhancement by visual speech gestures.
pubmed:affiliation       Human Information Science Laboratories, ATR International, Kyoto, Japan. dcallan@atr.co.jp
pubmed:publicationType   Journal Article, Research Support, Non-U.S. Gov't
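
The table above lists RDF triples whose subject is this PubMed record; several objects (rdf:type, lifeskim:mentions, pubmed:journal, pubmed:author, pubmed:meshHeading) are links to other resources and carry no literal value here. As a minimal sketch of how such a record can be rebuilt and queried programmatically, the following Python snippet uses the rdflib library. The namespace and record URIs are placeholders, not the actual Bio2RDF identifiers, and only a few of the literal-valued statements are included.

    from rdflib import Graph, Literal, Namespace, URIRef

    # Placeholder namespaces: the real Bio2RDF vocabulary and record URIs
    # may differ, so example.org is used here for illustration only.
    PUBMED = Namespace("http://example.org/pubmed_vocabulary:")
    record = URIRef("http://example.org/pubmed:record")

    g = Graph()
    g.bind("pubmed", PUBMED)

    # A few of the literal-valued statements from the table above.
    g.add((record, PUBMED.articleTitle,
           Literal("Neural processes underlying perceptual enhancement "
                   "by visual speech gestures.")))
    g.add((record, PUBMED.issn, Literal("0959-4965")))
    g.add((record, PUBMED.volume, Literal("14")))
    g.add((record, PUBMED.issue, Literal("17")))
    g.add((record, PUBMED.pagination, Literal("2213-8")))
    g.add((record, PUBMED.year, Literal("2003")))

    # Enumerate every statement in which the record is the subject --
    # exactly what the two-column table renders.
    for predicate, obj in g.predicate_objects(subject=record):
        print(predicate, obj)

Running the loop prints one predicate-object pair per line, mirroring the listing above.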