Adaptive Kernel Methods Using the Balancing Principle
Author(s)
Rosasco, Lorenzo; Pereverzyev, Sergei; De Vito, Ernesto
Other Contributors
Center for Biological and Computational Learning (CBCL)
Advisor
Tomaso Poggio
Abstract
The choice of the regularization parameter is a fundamental problem in supervised learning, since the performance of most algorithms depends crucially on the choice of one or more such parameters. In particular, a central theoretical issue concerns the amount of prior knowledge about the problem needed to suitably choose the regularization parameter and obtain learning rates. In this paper we present a strategy, the balancing principle, for choosing the regularization parameter without knowledge of the regularity of the target function. This choice adaptively achieves the best error rate. Our main result applies to regularization algorithms in reproducing kernel Hilbert spaces with the square loss, though we also study how a similar principle can be used in other situations. As a straightforward corollary, we immediately obtain adaptive parameter choices for various recently studied kernel methods. Numerical experiments with the proposed parameter choice rules are also presented.
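To make the idea concrete, the sketch below illustrates a Lepskii-type balancing rule for kernel ridge regression: estimators are computed over a geometric grid of candidate regularization parameters, and the rule keeps the largest candidate whose estimator stays within a multiple of a sample-error bound of all estimators obtained with smaller candidates. This is a minimal illustration, not the paper's exact procedure; the kernel, the factor 4, the bound of the form 1/(sqrt(n)*sqrt(lambda)), the empirical L2 norm, and all constants in the toy usage are assumptions made for the example.

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma=1.0):
    """Gaussian (RBF) kernel matrix between two sets of points (illustrative choice)."""
    d2 = np.sum(X1**2, axis=1)[:, None] + np.sum(X2**2, axis=1)[None, :] - 2 * X1 @ X2.T
    return np.exp(-d2 / (2 * sigma**2))

def krr_fit_predict(K, y, lam):
    """Kernel ridge regression: solve (K + n*lam*I) alpha = y, return in-sample predictions."""
    n = K.shape[0]
    alpha = np.linalg.solve(K + n * lam * np.eye(n), y)
    return K @ alpha

def balancing_choice(K, y, lambdas, bound, norm_fn):
    """
    Lepskii-type balancing rule: keep the largest lambda whose estimator stays
    within 4 * bound(lambda_j) of every estimator with a smaller lambda_j.
    (The factor 4 and the form of `bound` are assumptions of this sketch.)
    """
    lambdas = np.sort(np.asarray(lambdas))[::-1]            # largest first
    fits = [krr_fit_predict(K, y, lam) for lam in lambdas]  # estimators on the grid
    for i in range(len(lambdas)):
        if all(norm_fn(fits[i] - fits[j]) <= 4 * bound(lambdas[j])
               for j in range(i + 1, len(lambdas))):
            return lambdas[i]
    return lambdas[-1]

# Toy usage (all constants are illustrative assumptions).
rng = np.random.default_rng(0)
n = 200
X = rng.uniform(-1, 1, size=(n, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(n)

K = gaussian_kernel(X, X, sigma=0.3)
grid = np.geomspace(1e-6, 1.0, 20)                      # geometric grid of candidates
bound = lambda lam: 1.0 / (np.sqrt(n) * np.sqrt(lam))   # assumed sample-error proxy
norm_fn = lambda v: np.sqrt(np.mean(v**2))              # empirical L2 norm
lam_hat = balancing_choice(K, y, grid, bound, norm_fn)
print("balancing-principle choice of lambda:", lam_hat)
```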
Date issued
2008-10-16
Series/Report no.
MIT-CSAIL-TR-2008-062; CBCL-275
Keywords
Adaptive Model Selection, Learning Theory, Inverse Problems, Regularization