Author(s): Ong, Ming Yang
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Advisor: Pablo A. Parrilo
An important goal in machine learning is to understand how to design models that can generalize. This thesis follows a venerable line of research aimed at understanding generalization through the lens of stability: the study of how variations in the inputs of a system can cause its outputs to change. We explore stability and generalization in two directions. In the first direction, we prove stability using a proof technique introduced by Hardt et al. [HRS16]. We apply this technique to stochastic gradient descent with momentum and investigate the resulting stability bounds under some assumptions. In the second direction, we explore the effectiveness of stability in obtaining generalization bounds when some model assumptions are violated. In particular, we show that stability is insufficient for generalization under domain adaptation. We introduce a sufficient condition and show that certain properties imply this condition.
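The algorithm analyzed in the first direction is stochastic gradient descent with momentum, i.e. the standard heavy-ball update v ← βv + ∇f(w; z), w ← w − ηv applied to one sampled data point per step. A minimal sketch (the function names, hyperparameters, and least-squares example below are illustrative, not taken from the thesis):

```python
import numpy as np

def sgd_momentum(grad, w0, data, lr=0.02, beta=0.9, epochs=200, seed=0):
    """SGD with heavy-ball momentum.

    Per sampled example z:  v <- beta * v + grad(w, z);  w <- w - lr * v.
    `grad(w, z)` returns the gradient of the per-example loss at w.
    """
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float)
    v = np.zeros_like(w)
    for _ in range(epochs):
        for i in rng.permutation(len(data)):  # one pass in random order
            v = beta * v + grad(w, data[i])
            w = w - lr * v
    return w

# Illustrative usage: least-squares on consistent data y = 2x,
# so the iterates should approach w = 2.
points = [(1.0, 2.0), (2.0, 4.0)]
ls_grad = lambda w, z: 2.0 * z[0] * (w * z[0] - z[1])  # d/dw (w*x - y)^2
w_final = sgd_momentum(ls_grad, w0=0.0, data=points)
```

Stability arguments in the style of [HRS16] track how far two such runs drift apart when `data` differs in a single example, so the update rule above is exactly the map whose expansiveness the bounds control.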
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 61-62).
Department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science