
dc.contributor.advisor: Suvrit Sra (en_US)
dc.contributor.author: Zhang, Hongyi, Ph. D., Massachusetts Institute of Technology (en_US)
dc.contributor.other: Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences (en_US)
dc.date.accessioned: 2019-07-18T20:34:28Z
dc.date.available: 2019-07-18T20:34:28Z
dc.date.copyright: 2019 (en_US)
dc.date.issued: 2019 (en_US)
dc.identifier.uri: https://hdl.handle.net/1721.1/121830
dc.description: Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, 2019 (en_US)
dc.description: Cataloged from PDF version of thesis. (en_US)
dc.description: Includes bibliographical references (pages 165-186). (en_US)
dc.description.abstract: Non-convex optimization and learning play an important role in data science and machine learning, yet in many respects they still elude our understanding. In this thesis, I study two important aspects of non-convex optimization and learning: Riemannian optimization and deep neural networks. In the first part, I develop iteration complexity analyses for Riemannian optimization, i.e., optimization problems defined on Riemannian manifolds. By bounding the distortion introduced by the metric curvature, I derive the iteration complexity of Riemannian (stochastic) gradient descent methods. I also show that some fast first-order methods in Euclidean space, such as Nesterov's accelerated gradient descent (AGD) and stochastic variance reduced gradient (SVRG), have Riemannian counterparts that are also fast under certain conditions. In the second part, I challenge two common practices in deep learning, namely empirical risk minimization (ERM) and normalization. Specifically, I show that (1) training on convex combinations of samples improves model robustness and generalization, and (2) a good initialization is sufficient for training deep residual networks without normalization. The method in (1), called mixup, is motivated by a data-dependent Lipschitz regularization of the network. The method in (2), called ZeroInit, makes the scale of the network's update invariant to its depth at initialization. (en_US)
dc.description.statementofresponsibility: by Hongyi Zhang. (en_US)
dc.format.extent: 186 pages (en_US)
dc.language.iso: eng (en_US)
dc.publisher: Massachusetts Institute of Technology (en_US)
dc.rights: MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission. (en_US)
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582 (en_US)
dc.subject: Brain and Cognitive Sciences. (en_US)
dc.title: Topics in non-convex optimization and learning (en_US)
dc.type: Thesis (en_US)
dc.description.degree: Ph. D. (en_US)
dc.contributor.department: Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences (en_US)
dc.identifier.oclc: 1108619914 (en_US)
dc.description.collection: Ph.D. Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences (en_US)
dspace.imported: 2019-07-18T20:34:25Z (en_US)
mit.thesis.degree: Doctoral (en_US)
mit.thesis.department: Brain (en_US)

