Show simple item record

dc.contributor.advisor: James R. Glass. [en_US]
dc.contributor.author: Lee, Chia-ying (Chia-ying Jackie) [en_US]
dc.contributor.other: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science. [en_US]
dc.date.accessioned: 2015-01-20T17:59:24Z
dc.date.available: 2015-01-20T17:59:24Z
dc.date.copyright: 2014 [en_US]
dc.date.issued: 2014 [en_US]
dc.identifier.uri: http://hdl.handle.net/1721.1/93065
dc.description: Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2014. [en_US]
dc.description: Cataloged from PDF version of thesis. [en_US]
dc.description: Includes bibliographical references (pages 169-188). [en_US]
dc.description.abstract: The ability to infer linguistic structures from noisy speech streams seems to be an innate human capability. However, reproducing the same ability in machines has remained a challenging task. In this thesis, we address this task and develop a class of probabilistic models that discover the latent linguistic structures of a language directly from acoustic signals. In particular, we explore a nonparametric Bayesian framework for automatically acquiring a phone-like inventory of a language. In addition, we integrate our phone discovery model with adaptor grammars, a nonparametric Bayesian extension of probabilistic context-free grammars, to induce hierarchical linguistic structures, including sub-word and word-like units, directly from speech signals. When tested on a variety of speech corpora with different acoustic conditions, domains, and languages, these models consistently demonstrate an ability to learn highly meaningful linguistic structures. In addition to learning sub-word and word-like units, we apply these models to one-shot learning tasks for spoken words, and our results confirm the importance of inducing intrinsic speech structures for learning spoken words from just one or a few examples. We also show that by leveraging the linguistic units our models discover, we can automatically infer the hidden coding scheme between the written and spoken forms of a language from a transcribed speech corpus. Learning such a coding scheme enables us to develop a completely data-driven approach to creating a pronunciation dictionary, the basis of phone-based speech recognition. This approach contrasts sharply with the typical practice of having human experts create such a dictionary, which can be time-consuming and expensive. Our experiments show that automatically derived lexicons allow us to build speech recognizers that consistently perform nearly as well as supervised speech recognizers, which should enable more rapid development of speech recognition capabilities for low-resource languages. [en_US]
dc.description.statementofresponsibility: by Chia-ying (Jackie) Lee. [en_US]
dc.format.extent: 188 pages [en_US]
dc.language.iso: eng [en_US]
dc.publisher: Massachusetts Institute of Technology [en_US]
dc.rights: M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission. [en_US]
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582 [en_US]
dc.subject: Electrical Engineering and Computer Science. [en_US]
dc.title: Discovering linguistic structures in speech : models and applications [en_US]
dc.type: Thesis [en_US]
dc.description.degree: Ph. D. [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.identifier.oclc: 900000825 [en_US]

