Show simple item record

dc.contributor.author	Lu, Haihao
dc.contributor.author	Mazumder, Rahul
dc.date.accessioned	2021-03-17T22:25:11Z
dc.date.available	2021-03-17T22:25:11Z
dc.date.issued	2020-10
dc.date.submitted	2020-06
dc.identifier.issn	1052-6234
dc.identifier.uri	https://hdl.handle.net/1721.1/130168
dc.description.abstract	The Gradient Boosting Machine (GBM) introduced by Friedman [J. H. Friedman, Ann. Statist., 29 (2001), pp. 1189-1232] is a powerful supervised learning algorithm that is very widely used in practice: it routinely features as a leading algorithm in machine learning competitions such as Kaggle and the KDDCup. In spite of the usefulness of GBM in practice, our current theoretical understanding of this method is rather limited. In this work, we propose the Randomized Gradient Boosting Machine (RGBM), which leads to substantial computational gains compared to GBM by using a randomization scheme to reduce search in the space of weak learners. We derive novel computational guarantees for RGBM. We also provide a principled guideline towards better step-size selection in RGBM that does not require a line search. Our proposed framework is inspired by a special variant of coordinate descent that combines the benefits of randomized coordinate descent and greedy coordinate descent, and may be of independent interest as an optimization algorithm. As a special case, our results for RGBM lead to superior computational guarantees for GBM. Our computational guarantees depend upon a curious geometric quantity that we call the Minimal Cosine Angle, which relates to the density of weak learners in the prediction space. On a series of numerical experiments on real datasets, we demonstrate the effectiveness of RGBM over GBM in terms of obtaining a model with good training and/or testing data fidelity with a fraction of the computational cost.	en_US
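The core idea described in the abstract, searching only a random subset of weak learners at each boosting round and adding the winner with a fixed step size instead of a line search, can be sketched as follows. This is a minimal illustration with decision-stump weak learners and squared loss; the function names, the stump candidate set, and all parameter choices are assumptions for illustration, not the paper's implementation.

```python
import random

def rgbm_fit(X, y, n_rounds=50, subset_size=5, step=0.1, seed=0):
    """Sketch of Randomized Gradient Boosting with decision stumps.

    Plain GBM scans every candidate weak learner each round; the randomized
    variant searches only a random subset of `subset_size` candidates and
    adds the best of that subset with a fixed step size (no line search).
    With squared loss, the negative gradient is simply the residual y - F(x).
    """
    rng = random.Random(seed)
    n, d = len(X), len(X[0])
    # Candidate weak learners: all (feature, threshold) stumps from the data.
    candidates = [(j, X[i][j]) for j in range(d) for i in range(n)]
    F = [0.0] * n        # current ensemble predictions on the training set
    ensemble = []        # list of (feature, threshold, left_value, right_value)
    for _ in range(n_rounds):
        residual = [y[i] - F[i] for i in range(n)]
        best = None
        # Randomized search: sample a small subset of weak learners.
        for (j, t) in rng.sample(candidates, min(subset_size, len(candidates))):
            left = [i for i in range(n) if X[i][j] <= t]
            right = [i for i in range(n) if X[i][j] > t]
            if not left or not right:
                continue
            lv = sum(residual[i] for i in left) / len(left)
            rv = sum(residual[i] for i in right) / len(right)
            # Squared-error reduction achieved by this stump on the residual.
            gain = len(left) * lv * lv + len(right) * rv * rv
            if best is None or gain > best[0]:
                best = (gain, j, t, lv, rv)
        if best is None:
            continue
        _, j, t, lv, rv = best
        ensemble.append((j, t, step * lv, step * rv))
        for i in range(n):
            F[i] += step * lv if X[i][j] <= t else step * rv
    return ensemble

def rgbm_predict(ensemble, x):
    return sum(lv if x[j] <= t else rv for (j, t, lv, rv) in ensemble)
```

Setting `subset_size` equal to the number of candidates recovers the greedy search of ordinary GBM, so the subset size directly trades per-round cost against per-round progress, which is the trade-off the paper's guarantees quantify.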
dc.description.sponsorship	United States. Office of Naval Research (Grants ONR-N000141512342, ONR-N000141812298)	en_US
dc.description.sponsorship	National Science Foundation (U.S.) (Grant NSF-IIS-1718258)	en_US
dc.language.iso	en
dc.publisher	Society for Industrial & Applied Mathematics (SIAM)	en_US
dc.relation.isversionof	10.1137/18M1223277	en_US
dc.rights	Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.	en_US
dc.source	SIAM	en_US
dc.title	Randomized gradient boosting machine	en_US
dc.type	Article	en_US
dc.identifier.citation	Lu, Haihao and Rahul Mazumder. “Randomized gradient boosting machine.” SIAM Journal on Optimization, 30, 4 (October 2020): 2780--2808 © 2020 The Author(s)	en_US
dc.contributor.department	Sloan School of Management	en_US
dc.relation.journal	SIAM Journal on Optimization	en_US
dc.eprint.version	Final published version	en_US
dc.type.uri	http://purl.org/eprint/type/JournalArticle	en_US
eprint.status	http://purl.org/eprint/status/PeerReviewed	en_US
dc.date.updated	2021-03-12T15:50:40Z
dspace.orderedauthors	LU, H; MAZUMDER, R	en_US
dspace.date.submission	2021-03-12T15:50:45Z
mit.journal.volume	30	en_US
mit.journal.issue	4	en_US
mit.license	PUBLISHER_POLICY
mit.metadata.status	Complete

