Pronunciation learning for automatic speech recognition
Massachusetts Institute of Technology. Dept. of Electrical Engineering and Computer Science.
In many ways, the lexicon remains the Achilles heel of modern automatic speech recognizers (ASRs). Unlike stochastic acoustic and language models, which learn the values of their parameters from training data, the baseform pronunciations of words in an ASR vocabulary are typically specified manually and do not change unless they are edited by an expert. Our work presents a novel generative framework that uses speech data to learn stochastic lexicons, thereby taking a step toward alleviating the need for manual intervention and automatically learning high-quality baseform pronunciations for words. We test our model on a variety of domains: an isolated-word telephone speech corpus, a weather query corpus, and an academic lecture corpus. We show significant improvements of 25%, 15%, and 2% over expert-pronunciation lexicons, respectively. We also show that further improvements can be made by combining our pronunciation learning framework with acoustic model training.
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011. Cataloged from PDF version of thesis. Includes bibliographical references (p. 99-101).