
dc.contributor.author	Barak, Boaz
dc.contributor.author	Steurer, David
dc.contributor.author	Kelner, Jonathan Adam
dc.date.accessioned	2016-10-28T16:40:59Z
dc.date.available	2016-10-28T16:40:59Z
dc.date.issued	2015-06
dc.identifier.isbn	9781450335362
dc.identifier.uri	http://hdl.handle.net/1721.1/105133
dc.description.abstract	We give a new approach to the dictionary learning (also known as “sparse coding”) problem of recovering an unknown n × m matrix A (for m ≥ n) from examples of the form y = Ax + e, where x is a random vector in R^m with at most τm nonzero coordinates, and e is a random noise vector in R^n with bounded magnitude. For the case m = O(n), our algorithm recovers every column of A within arbitrarily good constant accuracy in time m^(O(log m/log(τ^−1))), in particular achieving polynomial time if τ = m^(−δ) for any δ > 0, and time m^(O(log m)) if τ is (a sufficiently small) constant. Prior algorithms with comparable assumptions on the distribution required the vector x to be much sparser—at most √n nonzero coordinates—and there were intrinsic barriers preventing these algorithms from applying for denser x. We achieve this by designing an algorithm for noisy tensor decomposition that can recover, under quite general conditions, an approximate rank-one decomposition of a tensor T, given access to a tensor T′ that is τ-close to T in the spectral norm (when considered as a matrix). To our knowledge, this is the first algorithm for tensor decomposition that works in the constant spectral-norm noise regime, where there is no guarantee that the local optima of T and T′ have similar structures. Our algorithm is based on a novel approach to using and analyzing the Sum of Squares semidefinite programming hierarchy (Parrilo 2000, Lasserre 2001), and it can be viewed as an indication of the utility of this very general and powerful tool for unsupervised learning problems.	en_US
dc.description.sponsorship	National Science Foundation (U.S.) (grant 1111109)	en_US
dc.language.iso	en_US
dc.publisher	Association for Computing Machinery	en_US
dc.relation.isversionof	http://dx.doi.org/10.1145/2746539.2746605	en_US
dc.rights	Creative Commons Attribution-Noncommercial-Share Alike	en_US
dc.rights.uri	http://creativecommons.org/licenses/by-nc-sa/4.0/	en_US
dc.source	arXiv	en_US
dc.title	Dictionary Learning and Tensor Decomposition via the Sum-of-Squares Method	en_US
dc.type	Article	en_US
dc.identifier.citation	Barak, Boaz, Jonathan A. Kelner, and David Steurer. “Dictionary Learning and Tensor Decomposition via the Sum-of-Squares Method.” ACM Press, 2015. 143–151.	en_US
dc.contributor.department	Massachusetts Institute of Technology. Department of Mathematics	en_US
dc.contributor.mitauthor	Kelner, Jonathan Adam
dc.relation.journal	Proceedings of the Forty-Seventh Annual ACM on Symposium on Theory of Computing - STOC '15	en_US
dc.eprint.version	Original manuscript	en_US
dc.type.uri	http://purl.org/eprint/type/JournalArticle	en_US
eprint.status	http://purl.org/eprint/status/NonPeerReviewed	en_US
dspace.orderedauthors	Barak, Boaz; Kelner, Jonathan A.; Steurer, David	en_US
dspace.embargo.terms	N	en_US
dc.identifier.orcid	https://orcid.org/0000-0002-4257-4198
mit.license	OPEN_ACCESS_POLICY	en_US
mit.metadata.status	Complete
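The abstract's generative model, y = Ax + e with an n × m dictionary A and a vector x having at most τm nonzero coordinates, can be sketched as follows. This is only an illustration of the input model, not the paper's recovery algorithm; the dimensions n, m, the sparsity τ, and the noise scale chosen here are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (hypothetical; the paper allows any m >= n
# and analyzes tau up to a small constant).
n, m = 20, 40
tau = 0.1  # x has at most tau * m nonzero coordinates

# Dictionary A with unit-norm columns.
A = rng.standard_normal((n, m))
A /= np.linalg.norm(A, axis=0)

def sample_example(A, tau, noise_scale=0.01, rng=rng):
    """Draw one example y = A x + e from the dictionary-learning model."""
    n, m = A.shape
    x = np.zeros(m)
    # Random support of size at most tau * m, with random nonzero values.
    support = rng.choice(m, size=int(tau * m), replace=False)
    x[support] = rng.standard_normal(len(support))
    # Bounded-magnitude noise vector e.
    e = noise_scale * rng.standard_normal(n)
    return A @ x + e, x

y, x = sample_example(A, tau)
```

The learner observes only samples like `y`; the paper's contribution is recovering the columns of `A` from such samples even when τ is as large as m^(−δ) or a small constant.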

