Optimal Rates for Regularization Operators in Learning Theory
Author(s)
Caponnetto, Andrea
Other Contributors
Center for Biological and Computational Learning (CBCL)
Advisor
Tomaso Poggio
Abstract
We develop new error bounds for learning algorithms induced by regularization methods in the regression setting. The "hardness" of the problem is characterized in terms of the parameters r and s, the first related to the "complexity" of the target function, the second connected to the effective dimension of the marginal probability measure over the input space. We show, extending previous results, that by a suitable choice of the regularization parameter as a function of the number of available examples, it is possible to attain the optimal minimax rates of convergence for the expected squared loss of the estimators, over the family of priors fulfilling the constraint r + s > 1/2. The setting considers both labelled and unlabelled examples, the latter being crucial for the optimality results on the priors in the range r < 1/2.
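To make the setting concrete, the following is a minimal sketch of a regularized least-squares (kernel ridge) estimator with the regularization parameter scaled as a power of the sample size n. The Gaussian kernel and the particular exponent used here are illustrative assumptions only; the report derives the choice of the regularization parameter, as a function of n and of the prior parameters r and s, that attains the minimax rates.

```python
# A minimal sketch of a regularized least-squares (kernel ridge) estimator.
# Assumptions (not from the report): Gaussian kernel, illustrative n^(-1/2)
# schedule for the regularization parameter.
import numpy as np

def gaussian_kernel(X, Z, sigma=1.0):
    # Pairwise squared distances, then the Gaussian (RBF) kernel matrix.
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Z**2, 1)[None, :] - 2 * X @ Z.T
    return np.exp(-d2 / (2 * sigma**2))

def rls_fit(X, y, lam):
    # Coefficients of the regularized least-squares estimator:
    # alpha = (K + n * lam * I)^{-1} y
    n = X.shape[0]
    K = gaussian_kernel(X, X)
    return np.linalg.solve(K + n * lam * np.eye(n), y)

def rls_predict(X_train, alpha, X_test):
    # Evaluate the estimator at new points.
    return gaussian_kernel(X_test, X_train) @ alpha

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 200
    X = rng.uniform(-1, 1, size=(n, 1))
    y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(n)
    lam = n ** (-0.5)   # illustrative power-of-n schedule, not the report's prescription
    alpha = rls_fit(X, y, lam)
    X_test = np.linspace(-1, 1, 5)[:, None]
    print(rls_predict(X, alpha, X_test))
```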
Date issued
2006-09-10
Other identifiers
MIT-CSAIL-TR-2006-062
CBCL-264
Series/Report no.
Massachusetts Institute of Technology Computer Science and Artificial Intelligence Laboratory
Keywords
optimal rates, regularized least-squares algorithm, regularization methods, adaptation