Tight bounds for the expected risk of linear classifiers and PAC-Bayes finite-sample guarantees
Author(s)
Honorio Carrillo, Jean; Jaakkola, Tommi S.
Download: Tight bounds.pdf (332.1 KB)
Terms of use
Open Access Policy
Creative Commons Attribution-Noncommercial-Share Alike
Abstract
We analyze the expected risk of linear classifiers for a fixed weight vector in the “minimax” setting. That is, we analyze the worst-case risk among all data distributions with a given mean and covariance. We provide a simpler proof of the tight polynomial-tail bound for general random variables. For sub-Gaussian random variables, we derive a novel tight exponential bound. We also provide new PAC-Bayes finite-sample guarantees when training data is available. Our “minimax” generalization bounds are dimensionality-independent and O(√(1/m)) for m samples.
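For context (this sketch and its notation are ours, not part of the record): writing z = y·x for the label-weighted feature vector with mean μ and covariance Σ, the classical tight polynomial-tail bound in this setting is the one-sided Chebyshev (Cantelli/Marshall–Olkin) inequality, which gives the worst-case risk of the linear classifier w in closed form:

  \sup_{z \sim (\mu,\,\Sigma)} \Pr\!\left[ w^\top z \le 0 \right]
    \;=\; \frac{w^\top \Sigma\, w}{w^\top \Sigma\, w + (w^\top \mu)^2},
  \qquad w^\top \mu > 0.

The supremum is attained by a two-point distribution, which is what makes a bound of this form tight; the paper's sub-Gaussian result replaces this polynomially decaying tail with an exponentially decaying one.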
Date issued
2014-04
Department
Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory; Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Journal
Proceedings of the 17th International Conference on Artificial Intelligence and Statistics (AISTATS)
Publisher
Journal of Machine Learning Research
Citation
Honorio, Jean and Tommi Jaakkola. "Tight bounds for the expected risk of linear classifiers and PAC-Bayes finite-sample guarantees." Proceedings of the 17th International Conference on Artificial Intelligence and Statistics (AISTATS), 22-25 April 2014, Reykjavik, Iceland, Journal of Machine Learning Research, 2014.
Version: Author's final manuscript
ISSN
1938-7228