Statements in which the resource exists.
Subject | Predicate | Object | Context
pubmed-article:18238087 | rdf:type | pubmed:Citation | lld:pubmed
pubmed-article:18238087 | lifeskim:mentions | umls-concept:C0037817 | lld:lifeskim
pubmed-article:18238087 | lifeskim:mentions | umls-concept:C0546881 | lld:lifeskim
pubmed-article:18238087 | lifeskim:mentions | umls-concept:C2346753 | lld:lifeskim
pubmed-article:18238087 | lifeskim:mentions | umls-concept:C0175681 | lld:lifeskim
pubmed-article:18238087 | lifeskim:mentions | umls-concept:C1709059 | lld:lifeskim
pubmed-article:18238087 | lifeskim:mentions | umls-concept:C0080141 | lld:lifeskim
pubmed-article:18238087 | lifeskim:mentions | umls-concept:C1705938 | lld:lifeskim
pubmed-article:18238087 | lifeskim:mentions | umls-concept:C1527178 | lld:lifeskim
pubmed-article:18238087 | pubmed:issue | 5 | lld:pubmed
pubmed-article:18238087 | pubmed:dateCreated | 2008-2-1 | lld:pubmed
pubmed-article:18238087 | pubmed:abstractText | Segregating speech from one monaural recording has proven to be very challenging. Monaural segregation of voiced speech has been studied in previous systems that incorporate auditory scene analysis principles. A major problem for these systems is their inability to deal with the high-frequency part of speech. Psychoacoustic evidence suggests that different perceptual mechanisms are involved in handling resolved and unresolved harmonics. We propose a novel system for voiced speech segregation that segregates resolved and unresolved harmonics differently. For resolved harmonics, the system generates segments based on temporal continuity and cross-channel correlation, and groups them according to their periodicities. For unresolved harmonics, it generates segments based on common amplitude modulation (AM) in addition to temporal continuity and groups them according to AM rates. Underlying the segregation process is a pitch contour that is first estimated from speech segregated according to dominant pitch and then adjusted according to psychoacoustic constraints. Our system is systematically evaluated and compared with previous systems, and it yields substantially better performance, especially for the high-frequency part of speech. | lld:pubmed
pubmed-article:18238087 | pubmed:language | eng | lld:pubmed
pubmed-article:18238087 | pubmed:journal | http://linkedlifedata.com/r... | lld:pubmed
pubmed-article:18238087 | pubmed:status | PubMed-not-MEDLINE | lld:pubmed
pubmed-article:18238087 | pubmed:month | Sep | lld:pubmed
pubmed-article:18238087 | pubmed:issn | 1045-9227 | lld:pubmed
pubmed-article:18238087 | pubmed:author | pubmed-author:WangDeliangD | lld:pubmed
pubmed-article:18238087 | pubmed:author | pubmed-author:HuGuoningG | lld:pubmed
pubmed-article:18238087 | pubmed:issnType | Print | lld:pubmed
pubmed-article:18238087 | pubmed:volume | 15 | lld:pubmed
pubmed-article:18238087 | pubmed:owner | NLM | lld:pubmed
pubmed-article:18238087 | pubmed:authorsComplete | Y | lld:pubmed
pubmed-article:18238087 | pubmed:pagination | 1135-50 | lld:pubmed
pubmed-article:18238087 | pubmed:year | 2004 | lld:pubmed
pubmed-article:18238087 | pubmed:articleTitle | Monaural speech segregation based on pitch tracking and amplitude modulation. | lld:pubmed
pubmed-article:18238087 | pubmed:affiliation | Biophys. Program, Ohio State Univ., Columbus, OH, USA. | lld:pubmed
pubmed-article:18238087 | pubmed:publicationType | Journal Article | lld:pubmed
http://linkedlifedata.com/r... | pubmed:referesTo | pubmed-article:18238087 | lld:pubmed
http://linkedlifedata.com/r... | pubmed:referesTo | pubmed-article:18238087 | lld:pubmed
http://linkedlifedata.com/r... | pubmed:referesTo | pubmed-article:18238087 | lld:pubmed
http://linkedlifedata.com/r... | pubmed:referesTo | pubmed-article:18238087 | lld:pubmed
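The abstract above mentions two grouping cues: cross-channel correlation for resolved harmonics and common AM rates for unresolved harmonics. The sketch below illustrates one plausible way to compute those cues — it is not the authors' implementation; the function names, the envelope signals, and the toy 100 Hz modulation are assumptions made for illustration.

```python
# Illustrative sketch, NOT the Hu & Wang system: one way to compute
# cross-channel correlation and an AM-rate estimate for envelope signals.
import numpy as np

def cross_channel_correlation(resp_a, resp_b):
    """Normalized correlation between two auditory channel responses.
    High values suggest the channels respond to the same source."""
    a = resp_a - resp_a.mean()
    b = resp_b - resp_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def am_rate(envelope, fs):
    """Estimate the AM rate (Hz) from the first local peak of the
    envelope autocorrelation -- one way to group by common AM rate."""
    e = envelope - envelope.mean()
    ac = np.correlate(e, e, mode="full")[len(e) - 1:]
    for lag in range(1, len(ac) - 1):
        if ac[lag] > ac[lag - 1] and ac[lag] >= ac[lag + 1]:
            return fs / lag
    return 0.0

# Toy example: two channels carrying the same 100 Hz amplitude modulation
fs = 8000
t = np.arange(0, 0.1, 1 / fs)
env = 1 + 0.5 * np.cos(2 * np.pi * 100 * t)
corr = cross_channel_correlation(env, env * 1.3)  # scaled copy -> corr ~ 1
rate = am_rate(env, fs)                           # ~ 100 Hz
```

In a real segregation front end these signals would come from a gammatone filterbank, and channels whose correlation exceeds a threshold (or whose AM rates agree) would be merged into one segment.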