DSpace@MIT
Sparse learning : statistical and optimization perspectives

Author(s)
Dedieu, Antoine
Full printable version (8.564 MB)
Other Contributors
Massachusetts Institute of Technology. Operations Research Center.
Advisor
Rahul Mazumder.
Terms of use
MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission. http://dspace.mit.edu/handle/1721.1/7582
Abstract
In this thesis, we study the computational and statistical aspects of several sparse models when the number of samples and/or features is large. We propose new statistical estimators and build new computational algorithms, borrowing tools and techniques from convex and discrete optimization. First, we explore an Lq-regularized version of the best subset selection procedure, which mitigates the poor statistical performance of the best-subsets estimator in low signal-to-noise ratio (SNR) regimes. We study the statistical and empirical properties of this estimator, in particular in comparison with best-subsets selection, the Lasso, and Ridge regression. Second, we propose new computational algorithms for a family of penalized linear Support Vector Machine (SVM) problems with a hinge loss and sparsity-inducing regularizers. Our methods bring together techniques from column (and constraint) generation and modern first-order methods for non-smooth convex optimization. These two components complement each other's strengths, leading to improvements of two orders of magnitude over commercial LP solvers. Third, we present a novel framework inspired by hierarchical Bayesian modeling to predict user session length on online streaming services. The time spent by a user on a platform depends on user-specific latent variables, which are learned via hierarchical shrinkage. Our framework incorporates flexible parametric/nonparametric models on the covariates and outperforms state-of-the-art estimators in efficiency and predictive performance on real-world datasets from the internet radio company Pandora Media Inc.
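For orientation, the first two contributions in the abstract refer to well-known sparse-optimization formulations. The LaTeX sketch below is not reproduced from the thesis; the notation (y, X, beta, lambda, q, k, w, b, n) and the specific choice q in {1, 2} are assumptions made here purely for illustration.

% Illustrative sketch only; notation assumed, not taken from the thesis.
% Requires amsmath.

% L_q-regularized best subset selection (q = 1 or 2): a least-squares fit
% with a shrinkage penalty, subject to a cardinality (sparsity) constraint.
\[
\min_{\beta \in \mathbb{R}^p} \; \frac{1}{2}\lVert y - X\beta \rVert_2^2
  + \lambda \lVert \beta \rVert_q^q
\quad \text{subject to} \quad \lVert \beta \rVert_0 \le k .
\]

% L_1-penalized linear SVM with hinge loss: a non-smooth convex problem
% that can be recast as a linear program.
\[
\min_{w \in \mathbb{R}^p,\; b \in \mathbb{R}} \;
  \frac{1}{n}\sum_{i=1}^{n} \max\bigl(0,\; 1 - y_i (x_i^\top w + b)\bigr)
  + \lambda \lVert w \rVert_1 .
\]

The cardinality constraint on beta makes the first problem combinatorial, which is where the discrete-optimization tools mentioned in the abstract come in; the second problem's linear-programming reformulation is the natural setting for the column/constraint generation and first-order methods the abstract describes.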
Description
Thesis: S.M., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2018.
 
Cataloged from PDF version of thesis.
 
Includes bibliographical references (pages 101-109).
 
Date issued
2018
URI
http://hdl.handle.net/1721.1/119354
Department
Massachusetts Institute of Technology. Operations Research Center; Sloan School of Management
Publisher
Massachusetts Institute of Technology
Keywords
Operations Research Center.

Collections
  • Graduate Theses
