Show simple item record

dc.contributor.advisor	Munther A. Dahleh.	en_US
dc.contributor.author	Huang, Qingqing, Ph. D. Massachusetts Institute of Technology	en_US
dc.contributor.other	Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.	en_US
dc.date.accessioned	2017-03-10T15:05:59Z
dc.date.available	2017-03-10T15:05:59Z
dc.date.copyright	2016	en_US
dc.date.issued	2016	en_US
dc.identifier.uri	http://hdl.handle.net/1721.1/107337	
dc.description	Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.	en_US
dc.description	Cataloged from PDF version of thesis.	en_US
dc.description	Includes bibliographical references (pages 261-274).	en_US
dc.description.abstract	We study statistical learning problems for a class of probabilistic models called mixture models. Mixture models are typically used in settings where the observed data consist of different sub-populations, yet we only have access to a limited number of samples of the pooled data. The class includes many widely used models such as Gaussian mixture models, Hidden Markov Models (HMMs), and topic models. We focus on parametric learning: given unlabeled data generated according to a mixture model, infer the parameters of the underlying model. The hierarchical structure of the probabilistic model makes the likelihood function non-convex in the model parameters, posing great challenges to finding statistically efficient and computationally efficient solutions. In the first part, we start with a simple yet general mixture model setup. We study the problem of estimating a low-rank M x M matrix that represents a discrete distribution over M² outcomes, given access to samples drawn according to the distribution. We propose a learning algorithm that accurately recovers the underlying matrix using Õ(M) samples, which immediately leads to improved learning algorithms for various mixture models, including topic models and HMMs. We show that this linear sample complexity is optimal in the minimax sense. There are "hard" mixture models for which worst-case lower bounds on sample complexity scale exponentially in the model dimensions. In the second part, we study Gaussian mixture models and HMMs. We propose new learning algorithms with polynomial runtime. We leverage techniques from probabilistic analysis to prove that worst-case instances are in fact rare, and that our algorithms can efficiently handle all non-worst-case instances. In the third part, we study the problem of super-resolution. Despite the lower bound for any deterministic algorithm, we propose a new randomized algorithm whose complexity scales only quadratically in all dimensions, and show that it can handle any instance with high probability over the randomization.	en_US
dc.description.statementofresponsibility	by Qingqing Huang.	en_US
dc.format.extent	274 pages	en_US
dc.language.iso	eng	en_US
dc.publisher	Massachusetts Institute of Technology	en_US
dc.rights	MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission.	en_US
dc.rights.uri	http://dspace.mit.edu/handle/1721.1/7582	en_US
dc.subject	Electrical Engineering and Computer Science.	en_US
dc.title	Efficient algorithms for learning mixture models	en_US
dc.type	Thesis	en_US
dc.description.degree	Ph. D.	en_US
dc.contributor.department	Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science	
dc.identifier.oclc	972905777	en_US

