Statements in which the resource appears as the subject.
Predicate: Object
rdf:type:
lifeskim:mentions:
pubmed:issue: 2
pubmed:dateCreated: 2004-05-18
pubmed:abstractText: Lip reading is the ability to partially understand speech by looking at the speaker's lips. It improves the intelligibility of speech in noise when audio-visual perception is compared with audio-only perception. A recent set of experiments showed that seeing the speaker's lips also enhances sensitivity to acoustic information, decreasing the auditory detection threshold of speech embedded in noise [J. Acoust. Soc. Am. 109 (2001) 2272; J. Acoust. Soc. Am. 108 (2000) 1197]. However, detection is different from comprehension, and it remains to be seen whether improved sensitivity also results in an intelligibility gain in audio-visual speech perception. In this work, we use an original paradigm to show that seeing the speaker's lips enables the listener to hear better and hence to understand better. The audio-visual stimuli used here could not be differentiated by lip reading per se, since they contained exactly the same lip gesture matched with different compatible speech sounds. Nevertheless, the noise-masked stimuli were more intelligible in the audio-visual condition than in the audio-only condition due to the contribution of visual information to the extraction of acoustic cues. Replacing the lip gesture by a non-speech visual input with exactly the same time course, providing the same temporal cues for extraction, removed the intelligibility benefit. This early contribution to audio-visual speech identification is discussed in relation to recent neurophysiological data on audio-visual perception.
pubmed:language: eng
pubmed:journal:
pubmed:citationSubset: IM
pubmed:status: MEDLINE
pubmed:month: Sep
pubmed:issn: 0010-0277
pubmed:author:
pubmed:issnType: Print
pubmed:volume: 93
pubmed:owner: NLM
pubmed:authorsComplete: Y
pubmed:pagination: B69-78
pubmed:dateRevised: 2006-11-15
pubmed:meshHeading:
pubmed:year: 2004
pubmed:articleTitle: Seeing to hear better: evidence for early audio-visual interactions in speech identification.
pubmed:affiliation: Institut de la Communication Parlée, CNRS-INPG-Université Stendhal, 46 Av. Félix Viallet, 38031 Grenoble 1, France. schwartz@icp.inpg.fr
pubmed:publicationType: Journal Article; Research Support, Non-U.S. Gov't
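The listing above is the flattened form of a set of RDF triples that share a single subject (this PubMed record). As an illustration only, the following Python sketch shows how such statements could be enumerated with the rdflib library; the file name and the subject URI are placeholders, not the dataset's actual values.

    from rdflib import Graph, URIRef

    # Load a local RDF serialization of this record; "pubmed_record.rdf" is a
    # hypothetical file name used here only for illustration.
    g = Graph()
    g.parse("pubmed_record.rdf", format="xml")

    # Placeholder subject URI; the real URI depends on the dataset publishing
    # this record.
    subject = URIRef("http://example.org/resource/pubmed-record")

    # Enumerate every statement in which the resource is the subject,
    # reproducing the predicate/object listing above.
    for predicate, obj in g.predicate_objects(subject):
        print(f"{predicate}: {obj}")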