Just interpolate: Kernel “Ridgeless” regression can generalize
Author(s)
Liang, Tengyuan; Rakhlin, Alexander
Terms of use
Creative Commons Attribution-Noncommercial-Share Alike (Open Access Policy)
Abstract
© Institute of Mathematical Statistics, 2020. In the absence of explicit regularization, Kernel “Ridgeless” Regression with nonlinear kernels has the potential to fit the training data perfectly. It has been observed empirically, however, that such interpolated solutions can still generalize well on test data. We isolate a phenomenon of implicit regularization for minimum-norm interpolated solutions which is due to a combination of high dimensionality of the input data, curvature of the kernel function and favorable geometric properties of the data such as an eigenvalue decay of the empirical covariance and kernel matrices. In addition to deriving a data-dependent upper bound on the out-of-sample error, we present experimental evidence suggesting that the phenomenon occurs in the MNIST dataset.
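For readers unfamiliar with the estimator studied in the paper, the following is a minimal NumPy sketch of kernel "ridgeless" regression, i.e., the minimum-norm interpolant obtained with no explicit ridge penalty. It is not the authors' code; the Gaussian (RBF) kernel, the synthetic high-dimensional data, and all dimensions below are illustrative assumptions only.

import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # K[i, j] = exp(-gamma * ||A[i] - B[j]||^2)
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * np.maximum(sq, 0.0))

rng = np.random.default_rng(0)
n, d = 200, 50                                  # high-dimensional inputs (illustrative)
X = rng.standard_normal((n, d)) / np.sqrt(d)
w = rng.standard_normal(d)
y = np.sin(X @ w) + 0.1 * rng.standard_normal(n)

K = rbf_kernel(X, X)
alpha = np.linalg.pinv(K) @ y                   # minimum-norm interpolant: no ridge term

X_test = rng.standard_normal((50, d)) / np.sqrt(d)
y_test = np.sin(X_test @ w) + 0.1 * rng.standard_normal(50)
y_hat = rbf_kernel(X_test, X) @ alpha

print("max train residual:", float(np.max(np.abs(K @ alpha - y))))   # ~0: training data is interpolated
print("test MSE:", float(np.mean((y_hat - y_test) ** 2)))            # can still be small, the paper's point

The sketch only illustrates the fitting procedure; the paper's analysis concerns when and why such an interpolating solution generalizes.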
Date issued
2020
Department
Massachusetts Institute of Technology. Institute for Data, Systems, and Society; Statistics and Data Science Center (Massachusetts Institute of Technology); Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences; Massachusetts Institute of Technology. Laboratory for Information and Decision Systems
Journal
Annals of Statistics
Publisher
Institute of Mathematical Statistics
Citation
Liang, Tengyuan and Rakhlin, Alexander. 2020. "Just interpolate: Kernel “Ridgeless” regression can generalize." Annals of Statistics, 48 (3).
Version: Author's final manuscript