Adaptation for Regularization Operators in Learning Theory
Author(s)
Caponnetto, Andrea; Yao, Yuan
Other Contributors
Center for Biological and Computational Learning (CBCL)
Advisor
Tomaso Poggio
Abstract
We consider learning algorithms induced by regularization methods in the regression setting. We show that previously obtained error bounds for these algorithms, derived using a-priori choices of the regularization parameter, can also be attained using a suitable a-posteriori choice based on validation. In particular, these results prove adaptation of the rate of convergence of the estimators to the minimax rate induced by the "effective dimension" of the problem. We also show universal consistency for this class of methods.
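As a minimal illustration of the a-posteriori strategy the abstract describes (not the paper's actual algorithm or bounds), the sketch below fits Tikhonov (ridge) regressors over a grid of regularization parameters on a training split, then selects the parameter with the smallest error on a held-out validation set. All names, the data, and the parameter grid are invented for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data: y = X w + noise (purely illustrative)
n, d = 200, 10
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.5 * rng.standard_normal(n)

# Split into training and validation sets
n_train = 150
X_tr, y_tr = X[:n_train], y[:n_train]
X_val, y_val = X[n_train:], y[n_train:]

def ridge_fit(X, y, lam):
    """Tikhonov-regularized least squares: solve (X'X + lam*n*I) w = X'y."""
    n, d = X.shape
    return np.linalg.solve(X.T @ X + lam * n * np.eye(d), X.T @ y)

# A-posteriori choice: evaluate each candidate lambda on held-out data
lambdas = np.logspace(-6, 1, 20)
val_errors = []
for lam in lambdas:
    w = ridge_fit(X_tr, y_tr, lam)
    val_errors.append(np.mean((X_val @ w - y_val) ** 2))

best_lam = lambdas[int(np.argmin(val_errors))]
print("selected lambda:", best_lam)
```

The point of the validation-based choice is that it requires no a-priori knowledge of the problem's regularity (e.g. its effective dimension): the selected parameter adapts to the data, which is the sense of "adaptation" in the title.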
Date issued
2006-09-10
Other identifiers
MIT-CSAIL-TR-2006-063
CBCL-265
Series/Report no.
Massachusetts Institute of Technology Computer Science and Artificial Intelligence Laboratory
Keywords
optimal rates, Learning, regularization methods, adaptation, cross-validation