Understanding generalization
Author(s)
Ong, Ming Yang
Other Contributors
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Advisor
Pablo A. Parrilo
Abstract
An important goal in machine learning is to understand how to design models that generalize. This thesis follows a venerable line of research that seeks to understand generalization through the lens of stability: the study of how variations in the inputs of a system can cause its outputs to change. We explore stability and generalization in two directions. In the first, we prove stability using a proof technique of Hardt et al. [HRS16]: we apply it to stochastic gradient descent with momentum and investigate the resulting stability bounds under some assumptions. In the second, we examine how effective stability is for obtaining generalization bounds when some model assumptions are violated. In particular, we show that stability is insufficient for generalization under domain adaptation; we introduce a sufficient condition and show that certain properties imply it.
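The notion of algorithmic stability studied in the thesis can be illustrated concretely. The sketch below (an illustration, not code from the thesis) trains SGD with momentum on two "neighbouring" datasets that differ in a single example; the gap between the resulting parameters is exactly the quantity that uniform-stability arguments, such as the one of Hardt et al. [HRS16] extended here to momentum, aim to bound. The loss, step sizes, and dataset are hypothetical choices for the demonstration.

```python
import random

def sgd_momentum(data, grad, lr=0.1, beta=0.9, epochs=20, seed=0):
    """Minimal SGD with momentum on a 1-D parameter:
    v <- beta * v + grad(w, z);  w <- w - lr * v."""
    rng = random.Random(seed)          # same seed => same sampling order
    w, v = 0.0, 0.0
    idx = list(range(len(data)))
    for _ in range(epochs):
        rng.shuffle(idx)
        for i in idx:
            g = grad(w, data[i])
            v = beta * v + g
            w = w - lr * v
    return w

# Per-example quadratic loss (w - z)^2, so the gradient is 2*(w - z).
grad = lambda w, z: 2.0 * (w - z)

# Neighbouring datasets: identical except for the last example.
S  = [1.0, 2.0, 3.0, 4.0]
S2 = [1.0, 2.0, 3.0, 4.5]

w1 = sgd_momentum(S, grad)
w2 = sgd_momentum(S2, grad)
print(abs(w1 - w2))  # parameter gap on neighbouring datasets
```

Because both runs use the same random seed, the only source of divergence is the single differing example, so the printed gap directly witnesses the (in)stability of the training procedure; a uniform-stability bound would control this gap for every pair of neighbouring datasets.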
Description
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (pages 61-62).
Date issued
2017
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology
Keywords
Electrical Engineering and Computer Science.