
dc.contributor.advisor: Tomaso Poggio
dc.contributor.author: Caponnetto, Andrea
dc.contributor.author: Yao, Yuan
dc.contributor.other: Center for Biological and Computational Learning (CBCL)
dc.date.accessioned: 2006-09-29T18:36:45Z
dc.date.available: 2006-09-29T18:36:45Z
dc.date.issued: 2006-09-10
dc.identifier.other: MIT-CSAIL-TR-2006-063
dc.identifier.other: CBCL-265
dc.identifier.uri: http://hdl.handle.net/1721.1/34217
dc.description.abstract: We consider learning algorithms induced by regularization methods in the regression setting. We show that previously obtained error bounds for these algorithms, using a-priori choices of the regularization parameter, can be attained using a suitable a-posteriori choice based on validation. In particular, these results prove adaptation of the rate of convergence of the estimators to the minimax rate induced by the "effective dimension" of the problem. We also show universal consistency for this class of methods.
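
As a rough illustration of the a-posteriori, validation-based parameter choice described in the abstract, here is a minimal Python sketch: Tikhonov-regularized least squares where the regularization parameter is selected by minimizing the empirical error on a hold-out set. This is not the estimator or construction analyzed in the report; the function names, the hold-out split, and the geometric grid of candidate parameters are illustrative assumptions.

import numpy as np

def tikhonov_regression(X_train, y_train, lam):
    # Regularized least squares: w = (X^T X + lam * n * I)^{-1} X^T y
    n, d = X_train.shape
    A = X_train.T @ X_train + lam * n * np.eye(d)
    return np.linalg.solve(A, X_train.T @ y_train)

def validation_choice(X_train, y_train, X_val, y_val, lambdas):
    # A-posteriori choice: pick the lambda minimizing hold-out squared error.
    best_lam, best_err = None, np.inf
    for lam in lambdas:
        w = tikhonov_regression(X_train, y_train, lam)
        err = np.mean((X_val @ w - y_val) ** 2)
        if err < best_err:
            best_lam, best_err = lam, err
    return best_lam

# Toy usage (hypothetical data): noisy linear target, geometric grid of candidates.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=200)
X_tr, y_tr, X_va, y_va = X[:150], y[:150], X[150:], y[150:]
lambdas = [2.0 ** -k for k in range(12)]
print("selected lambda:", validation_choice(X_tr, y_tr, X_va, y_va, lambdas))

The point of such a scheme, and of the adaptation results in the abstract, is that the validation-based parameter can be chosen without knowing the problem's "effective dimension" in advance, yet can still match the rates obtained from an a-priori choice.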
dc.format.extent: 19 p.
dc.format.extent: 963649 bytes
dc.format.extent: 819523 bytes
dc.format.mimetype: application/postscript
dc.format.mimetype: application/pdf
dc.language.iso: en_US
dc.relation.ispartofseries: Massachusetts Institute of Technology Computer Science and Artificial Intelligence Laboratory
dc.subject: optimal rates, Learning, regularization methods, adaptation, cross-validation
dc.title: Adaptation for Regularization Operators in Learning Theory

