Show simple item record

dc.contributor.author: Agarwal, Alekh
dc.contributor.author: Negahban, Sahand N.
dc.contributor.author: Wainwright, Martin J.
dc.date.accessioned: 2013-04-25T16:38:29Z
dc.date.available: 2013-04-25T16:38:29Z
dc.date.issued: 2012-10
dc.identifier.issn: 0090-5364
dc.identifier.uri: http://hdl.handle.net/1721.1/78602
dc.description.abstract: Many statistical M-estimators are based on convex optimization problems formed by the combination of a data-dependent loss function with a norm-based regularizer. We analyze the convergence rates of projected gradient and composite gradient methods for solving such problems, working within a high-dimensional framework that allows the ambient dimension d to grow with (and possibly exceed) the sample size n. Our theory identifies conditions under which projected gradient descent enjoys globally linear convergence up to the statistical precision of the model, meaning the typical distance between the true unknown parameter θ* and an optimal solution θ̂. By establishing these conditions with high probability for numerous statistical models, our analysis applies to a wide range of M-estimators, including sparse linear regression using Lasso; group Lasso for block sparsity; log-linear models with regularization; low-rank matrix recovery using nuclear norm regularization; and matrix decomposition using a combination of the nuclear and ℓ₁ norms. Overall, our analysis reveals interesting connections between statistical and computational efficiency in high-dimensional estimation.
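The composite gradient method the abstract refers to can be made concrete on the Lasso case it lists. Below is a minimal Python/NumPy sketch of iterative soft-thresholding (ISTA), a standard composite gradient scheme for ℓ₁-regularized least squares; the step size, regularization level, iteration count, and synthetic data are illustrative assumptions, not values taken from the paper.

import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1: elementwise soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def composite_gradient_lasso(X, y, lam, iters=500):
    # Minimize (1/(2n)) * ||y - X @ theta||^2 + lam * ||theta||_1 by
    # alternating a gradient step on the smooth quadratic loss with the
    # l1 proximal map (ISTA, a composite gradient method).
    n, d = X.shape
    step = n / np.linalg.norm(X, 2) ** 2   # 1/L, with L the Lipschitz constant of the gradient
    theta = np.zeros(d)
    for _ in range(iters):
        grad = X.T @ (X @ theta - y) / n   # gradient of the quadratic loss
        theta = soft_threshold(theta - step * grad, step * lam)
    return theta

# Illustrative use in the d > n regime the abstract describes
# (dimensions, sparsity, noise level, and lam are arbitrary choices):
rng = np.random.default_rng(0)
n, d, s = 200, 500, 10
X = rng.standard_normal((n, d))
theta_star = np.zeros(d)
theta_star[:s] = 1.0
y = X @ theta_star + 0.1 * rng.standard_normal(n)
theta_hat = composite_gradient_lasso(X, y, lam=0.05)
print(np.linalg.norm(theta_hat - theta_star))  # distance to the true parameter

Under conditions of the kind the abstract invokes, such iterates contract geometrically toward θ̂ until they reach the statistical precision ‖θ̂ − θ*‖, beyond which further optimization yields no statistical gain.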
dc.description.sponsorship: United States. Air Force Office of Scientific Research (Grant AFOSR-09NL184)
dc.description.sponsorship: National Science Foundation (U.S.) (NSF-CDI-0941742)
dc.language.iso: en_US
dc.publisher: Institute of Mathematical Statistics
dc.relation.isversionof: http://dx.doi.org/10.1214/12-AOS1032SUPP
dc.rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
dc.source: Institute of Mathematical Statistics
dc.title: Fast global convergence of gradient methods for high-dimensional statistical recovery
dc.type: Article
dc.identifier.citation: Agarwal, Alekh, Sahand Negahban, and Martin J. Wainwright. "Fast global convergence of gradient methods for high-dimensional statistical recovery." Annals of Statistics 40.5 (2012): 2452-2482. © Institute of Mathematical Statistics
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.contributor.department: Massachusetts Institute of Technology. Laboratory for Information and Decision Systems
dc.contributor.mitauthor: Negahban, Sahand N.
dc.relation.journal: Annals of Statistics
dc.eprint.version: Final published version
dc.type.uri: http://purl.org/eprint/type/JournalArticle
eprint.status: http://purl.org/eprint/status/PeerReviewed
mit.license: PUBLISHER_POLICY
mit.metadata.status: Complete

