Source: http://linkedlifedata.com/resource/pubmed/id/10746367
Predicate | Object
---|---
rdf:type |
lifeskim:mentions |
pubmed:dateCreated | 2000-4-27
pubmed:abstractText | This paper describes a way of using intonation and dialog context to improve the performance of an automatic speech recognition (ASR) system. Our experiments were run on the DCIEM Maptask corpus, a corpus of spontaneous task-oriented dialog speech. This corpus has been tagged according to a dialog analysis scheme that assigns each utterance to one of 12 "move types," such as "acknowledge," "query-yes/no," or "instruct." Most ASR systems use a bigram language model to constrain the possible sequences of words that might be recognized. Here we use a separate bigram language model for each move type. We show that when the "correct" move-specific language model is used for each utterance in the test set, the word error rate of the recognizer drops. Of course, when the recognizer is run on previously unseen data, it cannot know in advance what move type the speaker has just produced. To determine the move type we use an intonation model combined with a dialog model that puts constraints on possible sequences of move types, as well as the speech recognizer likelihoods for the different move-specific models. In the full recognition system, the combination of automatic move-type recognition with the move-specific language models reduces the overall word error rate by a small but significant amount when compared with a baseline system that does not take intonation or dialog acts into account. Interestingly, the word error improvement is restricted to "initiating" move types, where word recognition is important. In "response" move types, where the important information is conveyed by the move type itself--for example, positive versus negative response--there is no word error improvement, but recognition of the response types themselves is good. The paper discusses the intonation model, the language models, and the dialog model in detail and describes the architecture in which they are combined.
pubmed:language | eng
pubmed:journal |
pubmed:citationSubset | IM
pubmed:status | MEDLINE
pubmed:issn | 0023-8309
pubmed:author |
pubmed:issnType | Print
pubmed:volume | 41 ( Pt 3-4)
pubmed:owner | NLM
pubmed:authorsComplete | Y
pubmed:pagination | 493-512
pubmed:dateRevised | 2006-11-15
pubmed:meshHeading | pubmed-meshheading:10746367-Humans, pubmed-meshheading:10746367-Natural Language Processing, pubmed-meshheading:10746367-Phonetics, pubmed-meshheading:10746367-Psycholinguistics, pubmed-meshheading:10746367-Semantics, pubmed-meshheading:10746367-Sound Spectrography, pubmed-meshheading:10746367-Speech Acoustics, pubmed-meshheading:10746367-Speech Perception, pubmed-meshheading:10746367-Verbal Behavior
pubmed:articleTitle | Intonation and dialog context as constraints for speech recognition.
pubmed:affiliation | Center for Speech Technology Research, University of Edinburgh, U.K. pault@cstr.ed.ac.uk
pubmed:publicationType | Journal Article, Research Support, Non-U.S. Gov't