dc.contributor.advisor | Gregory Wornell. | en_US |
dc.contributor.author | Ajjanagadde, Ganesh | en_US |
dc.contributor.other | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science. | en_US |
dc.date.accessioned | 2017-12-20T17:23:55Z | |
dc.date.available | 2017-12-20T17:23:55Z | |
dc.date.copyright | 2016 | en_US |
dc.date.issued | 2016 | en_US |
dc.identifier.uri | http://hdl.handle.net/1721.1/112818 | |
dc.description | Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016. | en_US |
dc.description | This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. | en_US |
dc.description | Cataloged from student-submitted PDF version of thesis. | en_US |
dc.description | Includes bibliographical references (pages 51-53). | en_US |
dc.description.abstract | This thesis explores the problems of learning analysis of variance (ANOVA) decompositions over GF(2) and R, as well as a general regression setup. For the problem of learning ANOVA decompositions, we obtain fundamental limits in the case of GF(2) under both sparsity and degree structures. We show how the degree or sparsity level is a useful measure of the complexity of such models, and in particular how the statistical complexity ranges from linear to exponential in the dimension, thus forming a "learning hierarchy". Furthermore, we discuss the problem in both an "adaptive" and a "one-shot" setting, where in the adaptive setting each query may depend on the entire past history. Somewhat surprisingly, we show that the adaptive setting does not yield significant statistical gains. In the case of R, under query access, we demonstrate an approach that achieves a similar hierarchy of complexity with respect to the dimension. For the general regression setting, we outline a viewpoint that captures a variety of popular methods based on some form of locality and partitioning. We demonstrate how "data-independent" partitioning may still yield statistically consistent estimators, and illustrate this with a lattice-based partitioning approach. | en_US |
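A minimal sketch of the object the abstract refers to, assuming the standard ANOVA decomposition for functions on GF(2)^n; the symbols f_S and x_S below are conventional notation and are not quoted from the thesis itself:

\[
  f(x_1, \dots, x_n) \;=\; \sum_{S \subseteq \{1, \dots, n\}} f_S(x_S),
  \qquad x_S = (x_i)_{i \in S},
\]
% Each component f_S depends only on the coordinates indexed by S, and the
% components are mutually orthogonal under the uniform measure on GF(2)^n.
% "Degree" structure restricts |S| <= d; "sparsity" structure restricts the
% number of nonzero components f_S.

Under this (assumed) reading, the number of candidate components grows from n at degree 1 to 2^n with no degree restriction, which is consistent with the abstract's "linear to exponential" range of statistical complexity.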
dc.description.statementofresponsibility | by Ganesh Ajjanagadde. | en_US |
dc.format.extent | 53 pages | en_US |
dc.language.iso | eng | en_US |
dc.publisher | Massachusetts Institute of Technology | en_US |
dc.rights | MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission. | en_US |
dc.rights.uri | http://dspace.mit.edu/handle/1721.1/7582 | en_US |
dc.subject | Electrical Engineering and Computer Science. | en_US |
dc.title | A learning hierarchy for classification and regression | en_US |
dc.type | Thesis | en_US |
dc.description.degree | M. Eng. | en_US |
dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | |
dc.identifier.oclc | 1014171329 | en_US |