Statements in which the resource exists as a subject.
Predicate                 Object
rdf:type
lifeskim:mentions
pubmed:issue              3
pubmed:dateCreated        2006-4-4
pubmed:abstractText       Do listeners process noisy speech by taking advantage of "glimpses" (spectrotemporal regions in which the target signal is least affected by the background)? This study used an automatic speech recognition system, adapted for use with partially specified inputs, to identify consonants in noise. Twelve masking conditions were chosen to create a range of glimpse sizes. Several different glimpsing models were employed, differing in the local signal-to-noise ratio (SNR) used for detection, the minimum glimpse size, and the use of information in the masked regions. Recognition results were compared with behavioral data. A quantitative analysis demonstrated that the proportion of the time-frequency plane glimpsed is a good predictor of intelligibility. Recognition scores in each noise condition confirmed that sufficient information exists in glimpses to support consonant identification. Close fits to listeners' performance were obtained at two local SNR thresholds: one at around 8 dB and another in the range -5 to -2 dB. A transmitted information analysis revealed that cues to voicing are degraded more in the model than in human auditory processing.
pubmed:language           eng
pubmed:journal
pubmed:citationSubset     IM
pubmed:status             MEDLINE
pubmed:month              Mar
pubmed:issn               0001-4966
pubmed:author
pubmed:issnType           Print
pubmed:volume             119
pubmed:owner              NLM
pubmed:authorsComplete    Y
pubmed:pagination         1562-73
pubmed:meshHeading
pubmed:year               2006
pubmed:articleTitle       A glimpsing model of speech perception in noise.
pubmed:affiliation        Department of Computer Science, University of Sheffield, Regent Court, 211 Portobello Street, Sheffield, S1 4DP, United Kingdom. m.cooke@dcs.shef.ac.uk
pubmed:publicationType    Journal Article, Research Support, Non-U.S. Gov't
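
The abstract above defines glimpses as time-frequency regions in which the local SNR exceeds a threshold, and reports that the proportion of the time-frequency plane glimpsed is a good predictor of intelligibility. As a rough illustration of that quantity only (not the paper's model, which also uses an auditory front end, a minimum glimpse size, and recognition over the glimpsed regions), a minimal sketch of the glimpsed-proportion computation might look like the following; the function name, spectrogram inputs, and default threshold are assumptions made here, not taken from the paper.

    import numpy as np

    def glimpse_proportion(target_power, noise_power, snr_threshold_db=-5.0):
        """Fraction of time-frequency cells whose local SNR meets the threshold."""
        # target_power, noise_power: 2-D arrays (frequency x time) of per-cell power.
        eps = np.finfo(float).tiny  # guard against log(0)
        local_snr_db = 10.0 * np.log10((target_power + eps) / (noise_power + eps))
        return float(np.mean(local_snr_db >= snr_threshold_db))

    # Toy usage with random "spectrograms" (illustration only).
    rng = np.random.default_rng(0)
    speech = rng.random((64, 200))   # 64 frequency channels x 200 time frames
    noise = rng.random((64, 200))
    print(glimpse_proportion(speech, noise, snr_threshold_db=-2.0))

In such a sketch, the local SNR thresholds the abstract reports as fitting listener data (around 8 dB, and -5 to -2 dB) would simply be passed as snr_threshold_db.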