Show simple item record

dc.contributor.author: Mukherjee, Indraneel
dc.contributor.author: Rudin, Cynthia
dc.contributor.author: Schapire, Robert E.
dc.date.accessioned: 2013-12-23T21:15:38Z
dc.date.available: 2013-12-23T21:15:38Z
dc.date.issued: 2013-08
dc.date.submitted: 2013-05
dc.identifier.issn: 1532-4435
dc.identifier.issn: 1533-7928
dc.identifier.uri: http://hdl.handle.net/1721.1/83258
dc.description.abstract: The AdaBoost algorithm was designed to combine many “weak” hypotheses that perform slightly better than random guessing into a “strong” hypothesis that has very low error. We study the rate at which AdaBoost iteratively converges to the minimum of the “exponential loss”. Unlike previous work, our proofs do not require a weak-learning assumption, nor do they require that minimizers of the exponential loss are finite. Our first result shows that the exponential loss of AdaBoost's computed parameter vector will be at most ε more than that of any parameter vector of ℓ₁-norm bounded by B in a number of rounds that is at most a polynomial in B and 1/ε. We also provide lower bounds showing that a polynomial dependence is necessary. Our second result is that within C/ε iterations, AdaBoost achieves a value of the exponential loss that is at most ε more than the best possible value, where C depends on the data set. We show that this dependence of the rate on ε is optimal up to constant factors; that is, at least Ω(1/ε) rounds are necessary to achieve within ε of the optimal exponential loss.
dc.description.sponsorship: National Science Foundation (U.S.) (Grant IIS-1016029)
dc.description.sponsorship: National Science Foundation (U.S.) (Grant IIS-1053407)
dc.language.iso: en_US
dc.publisher: Association for Computing Machinery (ACM)
dc.relation.isversionof: http://jmlr.org/papers/v14/mukherjee13b.html
dc.rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
dc.source: Journal of Machine Learning Research
dc.title: The Rate of Convergence of AdaBoost
dc.type: Article
dc.identifier.citation: Mukherjee, Indraneel, Cynthia Rudin, and Robert E. Schapire. “The Rate of Convergence of AdaBoost.” Journal of Machine Learning Research 14 (2013): 2315–2347.
dc.contributor.department: Sloan School of Management
dc.contributor.mitauthor: Rudin, Cynthia
dc.relation.journal: Journal of Machine Learning Research
dc.eprint.version: Final published version
dc.type.uri: http://purl.org/eprint/type/JournalArticle
eprint.status: http://purl.org/eprint/status/PeerReviewed
dspace.orderedauthors: Mukherjee, Indraneel; Rudin, Cynthia; Schapire, Robert E.
mit.license: PUBLISHER_POLICY
mit.metadata.status: Complete
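The abstract studies how fast AdaBoost drives down the exponential loss (1/n) Σᵢ exp(−yᵢ F(xᵢ)). As a minimal illustration of that setting, here is a sketch of AdaBoost with decision stumps on a toy 1-D dataset, recording the exponential loss each round. The dataset, threshold pool, and all names below are illustrative assumptions, not taken from the paper; the dataset is deliberately not separable by any single stump, echoing the paper's setting without a weak-learning assumption.

```python
import math

# Toy 1-D dataset and stump pool -- illustrative assumptions, not from the paper.
X = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]
y = [-1, -1, -1, 1, 1, 1, 1, -1]   # last label flipped: no single stump is perfect
THRESHOLDS = [0.05, 0.35, 0.55, 0.85]

def stump(theta, s, x):
    """Weak hypothesis: predict s if x > theta, else -s."""
    return s if x > theta else -s

def adaboost(rounds):
    n = len(X)
    w = [1.0 / n] * n        # distribution over training examples
    F = [0.0] * n            # combined score F(x_i) of the weighted vote
    losses = []
    for _ in range(rounds):
        # Greedily pick the stump with the smallest weighted error.
        err, theta, s = min(
            (sum(wi for wi, xi, yi in zip(w, X, y)
                 if stump(t, sg, xi) != yi), t, sg)
            for t in THRESHOLDS for sg in (1, -1)
        )
        err = min(max(err, 1e-12), 1 - 1e-12)     # guard the log below
        alpha = 0.5 * math.log((1 - err) / err)   # AdaBoost's step size
        for i in range(n):
            F[i] += alpha * stump(theta, s, X[i])
        # Exponential loss: (1/n) * sum_i exp(-y_i * F(x_i))
        exp_margins = [math.exp(-yi * Fi) for yi, Fi in zip(y, F)]
        losses.append(sum(exp_margins) / n)
        z = sum(exp_margins)
        w = [e / z for e in exp_margins]          # reweight examples
    return losses

losses = adaboost(20)
```

With the optimal step size α = ½ ln((1−err)/err), each round multiplies the exponential loss by 2√(err(1−err)) ≤ 1, so the recorded losses are non-increasing, approaching the data-dependent minimum whose approach rate the paper quantifies.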

