Show simple item record

dc.contributor.author	Agarwal, Alekh
dc.contributor.author	Negahban, Sahand N.
dc.contributor.author	Wainwright, Martin J.
dc.date.accessioned	2013-01-10T18:31:05Z
dc.date.available	2013-01-10T18:31:05Z
dc.date.issued	2012-08
dc.date.submitted	2012-03
dc.identifier.issn	0090-5364
dc.identifier.uri	http://hdl.handle.net/1721.1/76239
dc.description	March 6, 2012	en_US
dc.description.abstract	We analyze a class of estimators based on convex relaxation for solving high-dimensional matrix decomposition problems. The observations are noisy realizations of a linear transformation 𝔛 of the sum of an (approximately) low rank matrix Θ⋆ with a second matrix Γ⋆ endowed with a complementary form of low-dimensional structure; this set-up includes many statistical models of interest, including factor analysis, multi-task regression and robust covariance estimation. We derive a general theorem that bounds the Frobenius norm error for an estimate of the pair (Θ⋆, Γ⋆) obtained by solving a convex optimization problem that combines the nuclear norm with a general decomposable regularizer. Our results use a “spikiness” condition that is related to, but milder than, singular vector incoherence. We specialize our general result to two cases that have been studied in past work: low rank plus an entrywise sparse matrix, and low rank plus a columnwise sparse matrix. For both models, our theory yields nonasymptotic Frobenius error bounds for both deterministic and stochastic noise matrices, and applies to matrices Θ⋆ that can be exactly or approximately low rank, and matrices Γ⋆ that can be exactly or approximately sparse. Moreover, for the case of stochastic noise matrices and the identity observation operator, we establish matching lower bounds on the minimax error. The sharpness of our nonasymptotic predictions is confirmed by numerical simulations.	en_US
dc.description.sponsorship	National Science Foundation (U.S.) (Grant CDI-0941742)	en_US
dc.language.iso	en_US
dc.publisher	Institute of Mathematical Statistics	en_US
dc.relation.isversionof	http://dx.doi.org/10.1214/12-aos1000	en_US
dc.rights	Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.	en_US
dc.source	Institute of Mathematical Statistics	en_US
dc.title	Noisy matrix decomposition via convex relaxation: Optimal rates in high dimensions	en_US
dc.type	Article	en_US
dc.identifier.citation	Agarwal, Alekh, Sahand Negahban, and Martin J. Wainwright. “Noisy Matrix Decomposition via Convex Relaxation: Optimal Rates in High Dimensions.” The Annals of Statistics 40.2 (2012): 1171–1197. © 2012 Institute of Mathematical Statistics	en_US
dc.contributor.department	Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science	en_US
dc.contributor.department	Massachusetts Institute of Technology. Laboratory for Information and Decision Systems	en_US
dc.contributor.mitauthor	Negahban, Sahand N.
dc.relation.journal	The Annals of Statistics	en_US
dc.eprint.version	Final published version	en_US
dc.type.uri	http://purl.org/eprint/type/JournalArticle	en_US
eprint.status	http://purl.org/eprint/status/PeerReviewed	en_US
dspace.orderedauthors	Agarwal, Alekh; Negahban, Sahand; Wainwright, Martin J.	en
dspace.mitauthor.error	true
mit.license	PUBLISHER_POLICY	en_US
mit.metadata.status	Complete
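The abstract describes estimators that combine the nuclear norm with a decomposable regularizer. As an illustrative sketch only, not the authors' exact method: for the low-rank-plus-entrywise-sparse case with the identity observation operator, the convex program min 0.5·‖Y − Θ − Γ‖²_F + λ_Θ‖Θ‖_* + λ_Γ‖Γ‖₁ can be solved by alternating proximal updates. The function names (`decompose`, `svt`, `soft`), iteration count, and regularization weights below are hypothetical, and the paper's additional "spikiness" (ℓ∞) constraint on Θ is omitted.

```python
import numpy as np

def svt(M, tau):
    # Singular value thresholding: proximal operator of tau * nuclear norm.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    # Entrywise soft thresholding: proximal operator of tau * l1 norm.
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def decompose(Y, lam_theta, lam_gamma, n_iter=200):
    """Alternating minimization for
       min_{Theta, Gamma} 0.5*||Y - Theta - Gamma||_F^2
                          + lam_theta*||Theta||_*
                          + lam_gamma*||Gamma||_1
    (identity observation operator; a sketch, not the paper's full estimator)."""
    Theta = np.zeros_like(Y)
    Gamma = np.zeros_like(Y)
    for _ in range(n_iter):
        # Each block update is an exact minimization given the other block.
        Theta = svt(Y - Gamma, lam_theta)
        Gamma = soft(Y - Theta, lam_gamma)
    return Theta, Gamma
```

On a toy input such as a constant (rank-one) matrix with a single large spiked entry, the nuclear-norm term absorbs the low-rank component while the ℓ1 term absorbs the spike, provided λ_Θ and λ_Γ are balanced as the theory prescribes.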

