Iterative regularization for learning with convex loss functions
Author(s)
Lin, Junhong; Zhou, Ding-Xuan; Rosasco, Lorenzo
Publisher Policy
Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
Abstract
We consider the problem of supervised learning with convex loss functions and propose a new form of iterative regularization based on the subgradient method. Unlike other regularization approaches, in iterative regularization no constraint or penalization is considered, and generalization is achieved by (early) stopping an empirical iteration. We consider a nonparametric setting, in the framework of reproducing kernel Hilbert spaces, and prove consistency and finite sample bounds on the excess risk under general regularity conditions. Our study provides a new class of efficient regularized learning algorithms and gives insights on the interplay between statistics and optimization in machine learning.
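The central idea of the abstract, an unpenalized empirical iteration whose stopping time acts as the regularization parameter, can be illustrated with a short sketch. The following is a minimal example, not the paper's exact algorithm: it assumes the hinge loss, a Gaussian kernel, a decaying step size, and a hold-out stopping rule, and the function name subgradient_early_stopping is illustrative.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    # Pairwise Gaussian kernel matrix between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def subgradient_early_stopping(X, y, X_val, y_val, eta=1.0, T=200, sigma=1.0):
    """Kernel subgradient descent on the hinge loss, stopped early on a
    hold-out set.  Sketch of iterative regularization: there is no
    penalty term; the number of iterations plays the role of the
    regularization parameter.  Labels y, y_val are assumed to be +/-1."""
    n = len(y)
    K = gaussian_kernel(X, X, sigma)           # train-train kernel
    K_val = gaussian_kernel(X_val, X, sigma)   # validation-train kernel
    alpha = np.zeros(n)                        # f_t = sum_i alpha_i K(x_i, .)
    best_alpha, best_err = alpha.copy(), np.inf
    for t in range(1, T + 1):
        margins = y * (K @ alpha)
        # Subgradient of (1/n) sum_i max(0, 1 - y_i f(x_i)):
        # contributes -y_i K(x_i, .) exactly on the violated margins.
        g = np.where(margins < 1.0, -y, 0.0) / n
        alpha -= (eta / np.sqrt(t)) * g        # decaying step size eta / sqrt(t)
        val_err = np.mean(np.sign(K_val @ alpha) != y_val)
        if val_err < best_err:                 # remember the best early-stopped iterate
            best_err, best_alpha = val_err, alpha.copy()
    return best_alpha, best_err
```

Under this reading, running the loop to convergence would overfit the empirical risk; returning the iterate chosen on the hold-out set is what supplies the regularization.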
Date issued
2016-05
Department
McGovern Institute for Brain Research at MIT
Journal
Journal of Machine Learning Research
Publisher
JMLR, Inc.
Citation
Lin, Junhong, Lorenzo Rosasco, and Ding-Xuan Zhou. "Iterative Regularization for Learning with Convex Loss Functions." Journal of Machine Learning Research 17, 2016, pp. 1-38. © 2016 Junhong Lin, Lorenzo Rosasco and Ding-Xuan Zhou
Version: Final published version
ISSN
1532-4435 (print)
1533-7928 (online)