Rethinking methods to train deep neural networks : contributions of distinct regimes during training
Author(s)
Wei, Wendy, M.Eng., Massachusetts Institute of Technology.
Download: 1127649566-MIT.pdf (1004 Kb)
Other Contributors
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Advisor
Aleksander Mądry.
Abstract
Deep neural networks are known to be highly non-convex, yet many of the methods used to train them, though informed by convex optimization, work surprisingly well. The training dynamics of optimization methods such as momentum suggest that training occurs in distinct regimes, distinguished by the learning rate. In the low learning rate regime, many convex intuitions hold, and the recommended methods are able to reach a good solution. In the high learning rate regime, training does not behave in a convex-like manner, but training longer in this regime achieves better generalization. This thesis rethinks deep network training from the perspective of these phases of training. Empirical results suggest that the training regimes, although distinct, work together to produce high performance on deep learning tasks. Moreover, we re-examine popular learning rate schedules and find that the paradigm of high and low learning rate regimes helps to explain their advantages.
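To make the two-regime picture concrete, the following minimal Python sketch (illustrative only, not code from the thesis; the rates HIGH_LR and LOW_LR and the boundary SWITCH_EPOCH are hypothetical values) shows a step learning-rate schedule of the kind the abstract alludes to, which trains for a long stretch at a high learning rate before dropping to a low one:

    # Illustrative two-regime step schedule; all constants are hypothetical.
    HIGH_LR = 0.1       # high-LR regime: non-convex-like dynamics, but time
                        # spent here tends to improve generalization
    LOW_LR = 0.001      # low-LR regime: convex intuitions hold and the
                        # optimizer settles into a good solution
    SWITCH_EPOCH = 150  # hypothetical epoch at which training drops regimes

    def learning_rate(epoch: int) -> float:
        """Return the learning rate for a given epoch under the schedule."""
        return HIGH_LR if epoch < SWITCH_EPOCH else LOW_LR

    if __name__ == "__main__":
        for epoch in (0, 149, 150, 200):
            print(f"epoch {epoch:3d}: lr = {learning_rate(epoch)}")

Under this reading, a popular schedule's late learning-rate decay marks the handoff from the high regime to the low one.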
Description
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Thesis: M.Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (pages 29-30).
Date issued
2019
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology
Keywords
Electrical Engineering and Computer Science.