Show simple item record

dc.contributor.author: Arora, Sanjeev
dc.contributor.author: Ge, Rong
dc.contributor.author: Ma, Tengyu
dc.contributor.author: Moitra, Ankur
dc.date.accessioned: 2018-05-30T15:57:02Z
dc.date.available: 2018-05-30T15:57:02Z
dc.date.issued: 2015
dc.identifier.issn: 1938-7228
dc.identifier.uri: http://hdl.handle.net/1721.1/115969
dc.description.abstract: Sparse coding is a basic task in many fields, including signal processing, neuroscience, and machine learning, where the goal is to learn a basis that enables a sparse representation of a given set of data, if one exists. Its standard formulation is as a non-convex optimization problem, which is solved in practice by heuristics based on alternating minimization. Recent work has resulted in several algorithms for sparse coding with provable guarantees, but somewhat surprisingly these are outperformed by the simple alternating minimization heuristics. Here we give a general framework for understanding alternating minimization, which we leverage to analyze existing heuristics and to design new ones also with provable guarantees. Some of these algorithms seem implementable on simple neural architectures, which was the original motivation of Olshausen and Field (1997a) in introducing sparse coding. We also give the first efficient algorithm for sparse coding that works almost up to the information-theoretic limit for sparse recovery on incoherent dictionaries. All previous algorithms that approached or surpassed this limit run in time exponential in some natural parameter. Finally, our algorithms improve upon the sample complexity of existing approaches. We believe that our analysis framework will have applications in other settings where simple iterative algorithms are used.
dc.publisher: Proceedings of Machine Learning Research
dc.relation.isversionof: http://proceedings.mlr.press/v40/
dc.rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
dc.source: Journal of Machine Learning Research
dc.title: Simple, efficient, and neural algorithms for sparse coding
dc.type: Article
dc.identifier.citation: Arora, Sanjeev et al. "Simple, efficient, and neural algorithms for sparse coding." Proceedings of Machine Learning Research 40 (2015): 113-149 © 2015 The Authors
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.contributor.department: Massachusetts Institute of Technology. Department of Mathematics
dc.contributor.mitauthor: Moitra, Ankur
dc.relation.journal: Proceedings of Machine Learning Research
dc.eprint.version: Final published version
dc.type.uri: http://purl.org/eprint/type/ConferencePaper
eprint.status: http://purl.org/eprint/status/NonPeerReviewed
dc.date.updated: 2018-05-29T15:09:44Z
dspace.orderedauthors: Arora, Sanjeev; Ge, Rong; Ma, Tengyu; Moitra, Ankur
dspace.embargo.terms: N
dc.identifier.orcid: https://orcid.org/0000-0001-7047-0495
mit.license: PUBLISHER_POLICY
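The alternating minimization heuristic that the abstract describes — alternately fixing the dictionary to update the sparse codes, then fixing the codes to update the dictionary — can be sketched generically. The NumPy sketch below is an illustrative version of the standard heuristic (a hard-thresholded sparse-coding step alternated with a least-squares dictionary update), not the paper's specific algorithm; the function name and parameters are hypothetical.

```python
import numpy as np

def alternating_minimization(Y, k, n_iter=20, sparsity=3, seed=0):
    """Generic alternating minimization sketch for sparse coding.

    Y: (d, n) data matrix, one sample per column.
    k: number of dictionary atoms to learn.
    Returns dictionary A (d, k) and sparse codes X (k, n).
    This is an illustrative heuristic, not the paper's algorithm.
    """
    rng = np.random.default_rng(seed)
    d, n = Y.shape
    # Random unit-norm initialization of the dictionary.
    A = rng.standard_normal((d, k))
    A /= np.linalg.norm(A, axis=0)
    X = np.zeros((k, n))
    for _ in range(n_iter):
        # Sparse-coding step (A fixed): correlate atoms with the data,
        # then keep only the `sparsity` largest entries per column.
        X = A.T @ Y
        drop = np.argsort(-np.abs(X), axis=0)[sparsity:]
        np.put_along_axis(X, drop, 0.0, axis=0)
        # Dictionary-update step (X fixed): least-squares fit via
        # the pseudoinverse, then re-normalize the columns.
        A = Y @ np.linalg.pinv(X)
        A /= np.linalg.norm(A, axis=0) + 1e-12
    return A, X
```

Each half-step decreases (or leaves unchanged) the reconstruction error given the other variable held fixed, which is what makes the heuristic simple to run even though the joint problem is non-convex.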

