
dc.contributor.author: Belloni, Alexandre
dc.contributor.author: Chernozhukov, Victor V.
dc.date.accessioned: 2012-10-05T14:21:40Z
dc.date.available: 2012-10-05T14:21:40Z
dc.date.issued: 2013-05
dc.date.submitted: 2011-08
dc.identifier.issn: 1350-7265
dc.identifier.uri: http://hdl.handle.net/1721.1/73648
dc.description: http://arxiv.org/abs/1001.0188
dc.description.abstract: We study post-model selection estimators that apply ordinary least squares (ols) to the model selected by first-step penalized estimators. It is well known that lasso can estimate the nonparametric regression function at nearly the oracle rate and is thus hard to improve upon. We show that the ols post-lasso estimator performs at least as well as lasso in terms of the rate of convergence, and has the advantage of a smaller bias. Remarkably, this performance occurs even if the lasso-based model selection "fails" in the sense of missing some components of the "true" regression model. By the "true" model we mean here the best $s$-dimensional approximation to the nonparametric regression function chosen by the oracle. Furthermore, the ols post-lasso estimator can perform strictly better than lasso, that is, achieve a strictly faster rate of convergence, if the lasso-based model selection correctly includes all components of the "true" model as a subset and also achieves sufficient sparsity. In the extreme case, when lasso perfectly selects the "true" model, the ols post-lasso estimator becomes the oracle estimator. An important ingredient in our analysis is a new sparsity bound on the dimension of the model selected by lasso, which guarantees that this dimension is at most of the same order as the dimension of the "true" model. Moreover, our analysis is not limited to the lasso estimator acting as the selector in the first step: it applies to any other estimator with good rates and good sparsity properties, for example various forms of thresholded lasso. Our analysis covers both traditional thresholding and a new practical, data-driven thresholding scheme that induces maximal sparsity subject to maintaining a certain goodness of fit. The latter scheme has theoretical guarantees similar to those of lasso or ols post-lasso, but it dominates these procedures in a wide variety of experiments.
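The two-step procedure the abstract describes (lasso as a first-step model selector, then ols refit on the selected support) can be sketched in a few lines. This is a minimal illustration only: the synthetic data, the ISTA solver, and the penalty level `lam=0.1` are assumptions of this sketch, not choices made in the paper.

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=2000):
    """Minimize (1/2n)||y - Xb||^2 + lam*||b||_1 by proximal gradient (ISTA)."""
    n, p = X.shape
    step = n / np.linalg.norm(X, 2) ** 2  # 1 / Lipschitz constant of the smooth part
    b = np.zeros(p)
    for _ in range(n_iter):
        z = b - step * (X.T @ (X @ b - y) / n)      # gradient step
        b = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-thresholding
    return b

# Synthetic sparse regression problem (illustrative, not from the paper)
rng = np.random.default_rng(0)
n, p = 200, 50
beta = np.zeros(p)
beta[:3] = [3.0, 2.0, 1.5]                  # "true" model has s = 3 components
X = rng.standard_normal((n, p))
y = X @ beta + 0.5 * rng.standard_normal(n)

# Step 1: lasso acts as the model selector
b_lasso = lasso_ista(X, y, lam=0.1)
support = np.flatnonzero(np.abs(b_lasso) > 1e-8)

# Step 2: refit ols on the selected columns, removing the lasso shrinkage bias
b_post = np.zeros(p)
b_post[support], *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
```

On the active coordinates the lasso estimate is shrunk toward zero by roughly the penalty level, while the ols refit on the selected support is unbiased given correct selection, which is the smaller-bias advantage the abstract highlights.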
dc.description.sponsorship: National Science Foundation (U.S.)
dc.language.iso: en_US
dc.publisher: Bernoulli Society for Mathematical Statistics and Probability
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike 3.0
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/3.0/
dc.source: MIT web domain
dc.title: Least Squares After Model Selection in High-dimensional Sparse Models
dc.type: Article
dc.identifier.citation: Belloni, Alexandre and Victor V. Chernozhukov. "Least Squares After Model Selection in High-dimensional Sparse Models." Bernoulli, Vol. 19, No. 2, May 2013.
dc.contributor.department: Massachusetts Institute of Technology. Department of Economics
dc.contributor.approver: Chernozhukov, Victor V.
dc.contributor.mitauthor: Chernozhukov, Victor V.
dc.relation.journal: Bernoulli
dc.eprint.version: Author's final manuscript
dc.type.uri: http://purl.org/eprint/type/JournalArticle
eprint.status: http://purl.org/eprint/status/PeerReviewed
dspace.orderedauthors: Belloni, Alexandre; Chernozhukov, Victor V.
dc.identifier.orcid: https://orcid.org/0000-0002-3250-6714
mit.license: OPEN_ACCESS_POLICY
mit.metadata.status: Complete

