
dc.contributor.authorDaskalakis, C
dc.contributor.authorDiakonikolas, I
dc.contributor.authorServedio, RA
dc.date.accessioned2022-06-14T13:19:32Z
dc.date.available2022-06-14T13:19:32Z
dc.date.issued2014-12-31
dc.identifier.urihttps://hdl.handle.net/1721.1/143115
dc.description.abstract© 2014 Constantinos Daskalakis, Ilias Diakonikolas, and Rocco A. Servedio. A k-modal probability distribution over the discrete domain {1, …, n} is one whose histogram has at most k “peaks” and “valleys.” Such distributions are natural generalizations of monotone (k = 0) and unimodal (k = 1) probability distributions, which have been intensively studied in probability theory and statistics. In this paper we consider the problem of learning (i.e., performing density estimation of) an unknown k-modal distribution with respect to the L1 distance. The learning algorithm is given access to independent samples drawn from an unknown k-modal distribution p, and it must output a hypothesis distribution p̂ such that with high probability the total variation distance between p and p̂ is at most ε. Our main goal is to obtain computationally efficient algorithms for this problem that use (close to) an information-theoretically optimal number of samples. We give an efficient algorithm for this problem that runs in time poly(k, log(n), 1/ε). For k ≤ Õ(log n), the number of samples used by our algorithm is very close (within an Õ(log(1/ε)) factor) to being information-theoretically optimal. Prior to this work, computationally efficient algorithms were known only for the cases k = 0, 1 (Birgé 1987, 1997). A novel feature of our approach is that our learning algorithm crucially uses a new algorithm for property testing of probability distributions as a key subroutine. The learning algorithm uses the property tester to efficiently decompose the k-modal distribution into k (near-)monotone distributions, which are easier to learn.en_US
dc.language.isoen
dc.publisherTheory of Computing Exchangeen_US
dc.relation.isversionof10.4086/toc.2014.v010a020en_US
dc.rightsCreative Commons Attribution 4.0 International licenseen_US
dc.rights.urihttps://creativecommons.org/licenses/by/4.0/en_US
dc.sourceTheory of Computingen_US
dc.titleLearning k-modal distributions via testingen_US
dc.typeArticleen_US
dc.identifier.citationDaskalakis, C, Diakonikolas, I and Servedio, RA. 2014. "Learning k-modal distributions via testing." Theory of Computing, 10 (1).
dc.contributor.departmentMassachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
dc.relation.journalTheory of Computingen_US
dc.eprint.versionFinal published versionen_US
dc.type.urihttp://purl.org/eprint/type/JournalArticleen_US
eprint.statushttp://purl.org/eprint/status/PeerRevieweden_US
dc.date.updated2022-06-14T12:25:57Z
dspace.orderedauthorsDaskalakis, C; Diakonikolas, I; Servedio, RAen_US
dspace.date.submission2022-06-14T12:25:58Z
mit.journal.volume10en_US
mit.journal.issue1en_US
mit.licensePUBLISHER_CC
mit.metadata.statusAuthority Work and Publication Information Neededen_US
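
The abstract above describes a decomposition strategy: a property tester locates the "peaks" and "valleys" of a k-modal distribution, cutting it into near-monotone pieces that are each learned separately. The toy sketch below illustrates only that high-level idea and is not the paper's algorithm: it greedily splits the empirical pmf at significant direction changes (a stand-in for the sample-based monotonicity tester) and fits each piece with a crude flat histogram (a stand-in for Birgé's monotone learner). The function names `split_near_monotone` and `learn_segment`, the tolerance `tol`, and the bin count `num_bins` are all hypothetical choices for illustration.

```python
import numpy as np

def empirical_pmf(samples, n):
    """Empirical probability mass function over the domain {1, ..., n}."""
    counts = np.bincount(samples, minlength=n + 1)[1:]
    return counts / counts.sum()

def split_near_monotone(pmf, tol=1e-3):
    """Greedily split a pmf into segments that are monotone up to tol.

    Stand-in for the paper's property-testing subroutine, which detects
    monotonicity violations from samples rather than from the full pmf."""
    segments, start, direction = [], 0, 0
    for i in range(1, len(pmf)):
        step = pmf[i] - pmf[i - 1]
        if abs(step) <= tol:          # ignore fluctuations below the tolerance
            continue
        d = 1 if step > 0 else -1
        if direction == 0:
            direction = d
        elif d != direction:          # direction flip => a "peak" or "valley"
            segments.append((start, i))
            start, direction = i, d
    segments.append((start, len(pmf)))
    return segments

def learn_segment(pmf, lo, hi, num_bins=4):
    """Fit a piecewise-constant (histogram) hypothesis on pmf[lo:hi]."""
    est = np.zeros(hi - lo)
    edges = np.linspace(lo, hi, num_bins + 1).astype(int)
    for a, b in zip(edges[:-1], edges[1:]):
        if b > a:
            est[a - lo:b - lo] = pmf[a:b].mean()
    return est

# Usage: a 2-modal target over {1, ..., 60} (rise, fall, rise).
rng = np.random.default_rng(0)
n = 60
target = np.concatenate([np.linspace(1, 5, 20), np.linspace(5, 1, 20),
                         np.linspace(1, 4, 20)])
target /= target.sum()
samples = rng.choice(np.arange(1, n + 1), size=50_000, p=target)

pmf = empirical_pmf(samples, n)
hypothesis = np.concatenate([learn_segment(pmf, lo, hi)
                             for lo, hi in split_near_monotone(pmf)])
print("L1 distance to target:", np.abs(hypothesis - target).sum())
```

Note that this sketch "cheats" by scanning the whole empirical pmf; the point of the paper's approach is that the testing subroutine finds the decomposition with far fewer samples than full density estimation would need on each candidate interval.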

