Show simple item record

dc.contributor.advisor: James R. Glass (en_US)
dc.contributor.author: Hsu, Wei-Ning, Ph. D. Massachusetts Institute of Technology (en_US)
dc.contributor.other: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science (en_US)
dc.date.accessioned: 2018-09-17T15:55:42Z
dc.date.available: 2018-09-17T15:55:42Z
dc.date.copyright: 2018 (en_US)
dc.date.issued: 2018 (en_US)
dc.identifier.uri: http://hdl.handle.net/1721.1/118059
dc.description: Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018. (en_US)
dc.description: Cataloged from PDF version of thesis. (en_US)
dc.description: Includes bibliographical references (pages 121-128). (en_US)
dc.description.abstract: Despite recent successes in machine learning, artificial intelligence is still far from matching human intelligence in many ways. Two important aspects are transferability and the amount of supervision required. Take speech recognition as an example: while humans can easily adapt to a new accent without explicit supervision (i.e., ground-truth transcripts for speech in the new accent), current machine learning techniques still struggle in such a scenario. We argue that an essential component of human learning is unsupervised or weakly supervised representation learning, which transforms input signals into low-dimensional representations that facilitate subsequent structured learning and knowledge acquisition. In this thesis, we develop unsupervised representation learning frameworks for speech data. We start by investigating an existing variational autoencoder (VAE) model for learning latent representations, and derive novel latent-space operations for speech transformation. The transformation method is applied to unsupervised domain adaptation problems, addressing the transferability issues of supervised machine learning frameworks. We then extend the VAE model and propose a novel factorized hierarchical variational autoencoder (FHVAE), which better models the generative process of sequential data and learns latent representations that are not only disentangled but also interpretable, without any supervision. By leveraging this interpretability, we demonstrate that such representations can be applied to a wide range of tasks, including but not limited to voice conversion, denoising, speaker verification, speaker-invariant phonetic feature extraction, and noise-invariant phonetic feature extraction. In the last part of this thesis, we examine the scalability of the original FHVAE training algorithm in terms of runtime, memory, and optimization stability. Based on our analysis, we propose a hierarchical sampling algorithm for training, which enables training FHVAE models on arbitrarily large datasets. (en_US)
dc.description.statementofresponsibility: by Wei-Ning Hsu. (en_US)
dc.format.extent: 128 pages (en_US)
dc.language.iso: eng (en_US)
dc.publisher: Massachusetts Institute of Technology (en_US)
dc.rights: MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source, but further reproduction or distribution in any format is prohibited without written permission. (en_US)
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582 (en_US)
dc.subject: Electrical Engineering and Computer Science (en_US)
dc.title: Unsupervised learning of disentangled representations for speech with neural variational inference models (en_US)
dc.type: Thesis (en_US)
dc.description.degree: S.M. (en_US)
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science (en_US)
dc.identifier.oclc: 1051460462 (en_US)

