
dc.contributor.advisor: Constantinos Daskalakis [en_US]
dc.contributor.author: Kamath, Gautam (Gautam Chetan) [en_US]
dc.contributor.other: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science [en_US]
dc.date.accessioned: 2015-01-20T15:30:36Z
dc.date.available: 2015-01-20T15:30:36Z
dc.date.copyright: 2014 [en_US]
dc.date.issued: 2014 [en_US]
dc.identifier.uri: http://hdl.handle.net/1721.1/92966
dc.description: Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2014. [en_US]
dc.description: This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. [en_US]
dc.description: Cataloged from student-submitted PDF version of thesis. [en_US]
dc.description: Includes bibliographical references (pages 91-95). [en_US]
dc.description.abstract: We explore a number of problems related to learning and covering structured distributions:

Hypothesis Selection: We provide an improved and generalized algorithm for selecting a good candidate distribution from among competing hypotheses. Namely, given a collection of ... hypotheses containing at least one candidate that is ...-close to an unknown distribution, our algorithm outputs a candidate which is ...-close to the distribution. The algorithm requires ... samples from the unknown distribution and ... time, which improves the quadratic dependence of the running time on ... in previous such results (such as the Scheffé estimator) to quasilinear. Given the wide use of such results for the purpose of hypothesis selection, our improved algorithm implies immediate improvements to any such use.

Properly Learning Gaussian Mixture Models: We describe an algorithm for properly learning mixtures of two single-dimensional Gaussians without any separability assumptions. Given ... samples from an unknown mixture, our algorithm outputs a mixture that is ...-close in total variation distance, in time ... Our sample complexity is optimal up to logarithmic factors, and significantly improves upon both Kalai et al., whose algorithm has a prohibitive dependence on 1/..., and Feldman et al., whose algorithm requires bounds on the mixture parameters and depends pseudo-polynomially on these parameters.

Covering Poisson Multinomial Distributions: We provide a sparse ...-cover for the set of Poisson Multinomial Distributions. Specifically, we describe a set of ... distributions such that any Poisson Multinomial Distribution of size ... and dimension ... is ...-close to a distribution in the set. This is a significant sparsification over the previous best-known ...-cover due to Daskalakis and Papadimitriou [24], which is of size ..., where ... is polynomial in ... and exponential in ... This cover also implies an algorithm for learning Poisson Multinomial Distributions with a sample complexity which is polynomial in ... and log ... [en_US]
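The hypothesis-selection paragraph above describes a tournament built from pairwise Scheffé-style comparisons. As a minimal illustrative sketch (assumed, not drawn from the thesis), here is the classical Scheffé test for two candidate distributions over a finite domain; the function name scheffe_test and its dict-based API are hypothetical:

# Illustrative sketch: the classical Scheffe test that hypothesis-selection
# tournaments build on. Given two candidate distributions p and q over a
# finite domain, and samples from the unknown distribution f, the test
# estimates f's mass on the Scheffe set S = {x : p(x) > q(x)} and declares
# the candidate whose mass on S is closer to that estimate the winner.

from collections import Counter

def scheffe_test(p, q, samples):
    """Return whichever of p, q looks closer to the sample distribution.

    p, q    : dicts mapping domain elements to probabilities (hypothetical API)
    samples : iterable of draws from the unknown distribution f
    """
    domain = set(p) | set(q)
    # Scheffe set: points where p assigns strictly more mass than q.
    S = {x for x in domain if p.get(x, 0.0) > q.get(x, 0.0)}

    p_mass = sum(p.get(x, 0.0) for x in S)
    q_mass = sum(q.get(x, 0.0) for x in S)

    counts = Counter(samples)
    n = sum(counts.values())
    # Empirical estimate of f(S).
    f_mass = sum(counts[x] for x in S) / n

    return p if abs(p_mass - f_mass) <= abs(q_mass - f_mass) else q

# Usage: the naive tournament runs this test on all pairs of the candidate
# hypotheses, which is quadratic in their number; the thesis's contribution
# is organizing these comparisons so the overall running time is quasilinear.
if __name__ == "__main__":
    import random
    p = {0: 0.5, 1: 0.5}
    q = {0: 0.9, 1: 0.1}
    f_samples = [random.choice([0, 1]) for _ in range(1000)]  # f is uniform here
    winner = scheffe_test(p, q, f_samples)
    print("winner:", winner)  # expect p, since f matches p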
dc.description.statementofresponsibility: by Gautam Kamath. [en_US]
dc.format.extent: 95 pages [en_US]
dc.language.iso: eng [en_US]
dc.publisher: Massachusetts Institute of Technology [en_US]
dc.rights: M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission. [en_US]
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582 [en_US]
dc.subject: Electrical Engineering and Computer Science. [en_US]
dc.title: On Learning and Covering Structured Distributions [en_US]
dc.type: Thesis [en_US]
dc.description.degree: S.M. [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.identifier.oclc: 900006537 [en_US]

