Empirical Effective Dimension and Optimal Rates for Regularized Least Squares Algorithm
This paper presents an approach to model selection for regularized least-squares on reproducing kernel Hilbert spaces in the semi-supervised setting. The role of effective dimension was recently shown to be crucial in the ...
Some Properties of Empirical Risk Minimization over Donsker Classes
We study properties of algorithms which minimize (or almost minimize) empirical error over a Donsker class of functions. We show that the L2-diameter of the set of almost-minimizers is converging to zero in probability. ...
Fast Rates for Regularized Least-squares Algorithm
We develop a theoretical analysis of the generalization performance of regularized least-squares on reproducing kernel Hilbert spaces for supervised learning. We show that the concept of effective dimension of an integral ...
Risk Bounds for Regularized Least-squares Algorithm with Operator-valued kernels
We show that recent results on risk bounds for regularized least-squares on reproducing kernel Hilbert spaces can be straightforwardly extended to the vector-valued regression setting. We first briefly introduce ...
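The papers listed above all analyze the regularized least-squares (kernel ridge regression) algorithm on a reproducing kernel Hilbert space. As a minimal illustration of that estimator, the sketch below solves the regularized linear system (K + n&lambda;I)c = y over a Gaussian kernel; the kernel choice, the toy data, and all parameter values are illustrative assumptions, not taken from the papers themselves.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Gram matrix K[i, j] = exp(-||x_i - y_j||^2 / (2 * sigma^2));
    # the Gaussian kernel is just one example of a reproducing kernel.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def fit_rls(X, y, lam=1e-3, sigma=1.0):
    # Regularized least-squares: solve (K + n * lam * I) c = y
    # for the coefficient vector c of f(x) = sum_i c_i k(x, x_i).
    n = len(X)
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + n * lam * np.eye(n), y)

def predict(X_train, c, X_test, sigma=1.0):
    # Evaluate the estimator on new points via the cross Gram matrix.
    return gaussian_kernel(X_test, X_train, sigma) @ c

# Toy one-dimensional regression problem (illustrative data).
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(60, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(60)

c = fit_rls(X, y)
y_hat = predict(X, c, X)
train_mse = np.mean((y_hat - y) ** 2)
print(train_mse)
```

The regularization parameter `lam` is the quantity whose data-dependent choice (via the empirical effective dimension) drives the optimal rates studied in these papers; in this sketch it is simply fixed by hand.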