DSpace@MIT
Research and Teaching Output of the MIT Community

Applications of empirical processes in learning theory : algorithmic stability and generalization bounds


dc.contributor.advisor Tomaso Poggio. en_US
dc.contributor.author Rakhlin, Alexander en_US
dc.contributor.other Massachusetts Institute of Technology. Dept. of Brain and Cognitive Sciences. en_US
dc.date.accessioned 2006-11-07T12:58:36Z
dc.date.available 2006-11-07T12:58:36Z
dc.date.copyright 2006 en_US
dc.date.issued 2006 en_US
dc.identifier.uri http://hdl.handle.net/1721.1/34564
dc.description Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 2006. en_US
dc.description Includes bibliographical references (p. 141-148). en_US
dc.description.abstract This thesis studies two key properties of learning algorithms: their generalization ability and their stability with respect to perturbations. To analyze these properties, we focus on concentration inequalities and tools from empirical process theory. We obtain theoretical results and demonstrate their applications to machine learning. First, we show how various notions of stability upper- and lower-bound the bias and variance of several estimators of the expected performance for general learning algorithms. A weak stability condition is shown to be equivalent to consistency of empirical risk minimization. The second part of the thesis derives tight performance guarantees for greedy error minimization methods - a family of computationally tractable algorithms. In particular, we derive risk bounds for a greedy mixture density estimation procedure. We prove that, contrary to what is suggested in the literature, the number of terms in the mixture does not act as a bias-variance trade-off parameter for the performance. The third part of this thesis provides a solution to an open problem regarding the stability of Empirical Risk Minimization (ERM), an algorithm of central importance in learning theory. en_US
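For orientation, here is a minimal sketch of the empirical risk minimization setup that the abstract refers to. The notation below is standard and purely illustrative; it is not taken from the thesis itself.

\[
\hat{f}_S \in \operatorname*{arg\,min}_{f \in \mathcal{F}} \; \widehat{R}_n(f),
\qquad
\widehat{R}_n(f) = \frac{1}{n} \sum_{i=1}^{n} \ell(f, z_i),
\qquad
R(f) = \mathbb{E}_{z \sim P}\, \ell(f, z),
\]

where \( S = (z_1, \dots, z_n) \) is an i.i.d. sample from \( P \), \( \mathcal{F} \) is the function class, and \( \ell \) is a loss function. Generalization concerns the gap between \( R(\hat{f}_S) \) and \( \widehat{R}_n(\hat{f}_S) \); stability asks how much \( \hat{f}_S \) (or its risk) changes when a few of the \( z_i \) are replaced.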
dc.description.abstract (cont.) By studying the suprema of the empirical process, we prove that ERM over Donsker classes of functions is stable in the L1 norm. Hence, as the number of samples grows, it becomes less and less likely that a perturbation of o(√n) samples will result in a very different empirical minimizer. Asymptotic rates of this stability are proved under metric entropy assumptions on the function class. Through the use of a ratio limit inequality, we also prove stability of expected errors of empirical minimizers. Next, we investigate applications of the stability result. In particular, we focus on procedures that optimize an objective function, such as k-means and other clustering methods. We demonstrate that stability of clustering, just like stability of ERM, is closely related to the geometry of the class and the underlying measure. Furthermore, our result on stability of ERM delineates a phase transition between stability and instability of clustering methods. In the last chapter, we prove a generalization of the bounded-difference concentration inequality for almost-everywhere smooth functions. This result can be utilized to analyze algorithms which are almost always stable. Next, we prove a phase transition in the concentration of almost-everywhere smooth functions. Finally, a tight concentration of empirical errors of empirical minimizers is shown under an assumption on the underlying space. en_US
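The last part of the abstract refers to the bounded-difference concentration inequality. For reference, its classical (McDiarmid) form is stated below; the thesis proves a generalization for almost-everywhere smooth functions, which is not reproduced here.

\[
\Pr\Bigl( f(X_1, \dots, X_n) - \mathbb{E}\, f(X_1, \dots, X_n) \ge t \Bigr)
\le \exp\!\left( \frac{-2 t^2}{\sum_{i=1}^{n} c_i^2} \right),
\]

valid for independent \( X_1, \dots, X_n \) whenever changing any single coordinate \( x_i \) changes the value of \( f \) by at most \( c_i \).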
dc.description.statementofresponsibility by Alexander Rakhlin. en_US
dc.format.extent 148 p. en_US
dc.format.extent 5578123 bytes
dc.format.extent 5584307 bytes
dc.format.mimetype application/pdf
dc.format.mimetype application/pdf
dc.language.iso eng en_US
dc.publisher Massachusetts Institute of Technology en_US
dc.rights M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission. en_US
dc.rights.uri http://dspace.mit.edu/handle/1721.1/7582
dc.subject Brain and Cognitive Sciences. en_US
dc.title Applications of empirical processes in learning theory : algorithmic stability and generalization bounds en_US
dc.type Thesis en_US
dc.description.degree Ph.D. en_US
dc.contributor.department Massachusetts Institute of Technology. Dept. of Brain and Cognitive Sciences. en_US
dc.identifier.oclc 71152955 en_US


Files in this item

Name Size Format Description
71152955.pdf 5.319 MB PDF Preview, non-printable (open to all)
71152955-MIT.pdf 5.325 MB PDF Full printable version (MIT only)

This item appears in the following Collection(s)


MIT-Mirage