DSpace@MIT

Learning k-modal distributions via testing

Author(s)
Daskalakis, C; Diakonikolas, I; Servedio, RA
Publisher with Creative Commons License
Creative Commons Attribution

Terms of use
Creative Commons Attribution 4.0 International license https://creativecommons.org/licenses/by/4.0/
Abstract
© 2014 Constantinos Daskalakis, Ilias Diakonikolas, and Rocco A. Servedio. A k-modal probability distribution over the discrete domain {1, …, n} is one whose histogram has at most k “peaks” and “valleys.” Such distributions are natural generalizations of monotone (k = 0) and unimodal (k = 1) probability distributions, which have been intensively studied in probability theory and statistics. In this paper we consider the problem of learning (i.e., performing density estimation of) an unknown k-modal distribution with respect to the L1 distance. The learning algorithm is given access to independent samples drawn from an unknown k-modal distribution p, and it must output a hypothesis distribution p̂ such that with high probability the total variation distance between p and p̂ is at most ε. Our main goal is to obtain computationally efficient algorithms for this problem that use (close to) an information-theoretically optimal number of samples. We give an efficient algorithm for this problem that runs in time poly(k, log(n), 1/ε). For k ≤ Õ(log n), the number of samples used by our algorithm is very close (within an Õ(log(1/ε)) factor) to being information-theoretically optimal. Prior to this work, computationally efficient algorithms were known only for the cases k = 0, 1 (Birgé 1987, 1997). A novel feature of our approach is that our learning algorithm crucially uses a new algorithm for property testing of probability distributions as a key subroutine. The learning algorithm uses the property tester to efficiently decompose the k-modal distribution into k (near-)monotone distributions, which are easier to learn.
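As a small illustration of the definition in the abstract (this is not code from the paper, just a sketch of the notion of k-modality), a distribution is k-modal when its histogram changes direction at most k times — monotone means zero direction changes, unimodal means at most one:

```python
def num_direction_changes(hist):
    """Count sign alternations in the consecutive differences of `hist`,
    ignoring flat (zero-difference) stretches. Each alternation after the
    first monotone run corresponds to a "peak" or "valley"."""
    signs = []
    for a, b in zip(hist, hist[1:]):
        d = b - a
        if d != 0:
            s = 1 if d > 0 else -1
            if not signs or signs[-1] != s:
                signs.append(s)
    return max(len(signs) - 1, 0)

def is_k_modal(hist, k):
    """True if the histogram has at most k peaks and valleys combined."""
    return num_direction_changes(hist) <= k

# A monotone (k = 0) histogram, a unimodal (k = 1) one,
# and one with two direction changes (so not unimodal):
print(is_k_modal([0.1, 0.2, 0.3, 0.4], 0))  # True
print(is_k_modal([0.1, 0.4, 0.3, 0.2], 1))  # True
print(is_k_modal([0.1, 0.4, 0.1, 0.4], 1))  # False
```

The paper's algorithm works in the other direction: rather than checking k-modality, it uses a property tester on samples to carve an unknown k-modal distribution into (near-)monotone pieces that can each be learned with Birgé-style estimators.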
Date issued
2014-12-31
URI
https://hdl.handle.net/1721.1/143115
Department
Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Journal
Theory of Computing
Publisher
Theory of Computing Exchange
Citation
Daskalakis, C, Diakonikolas, I and Servedio, RA. 2014. "Learning k-modal distributions via testing." Theory of Computing, 10 (1).
Version: Final published version

Collections
  • MIT Open Access Articles
