dc.contributor.advisor: Suvrit Sra and Ali Jadbabaie. (en_US)
dc.contributor.author: Zhang, Jingzhao, S.M. Massachusetts Institute of Technology. (en_US)
dc.contributor.other: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science. (en_US)
dc.date.accessioned: 2019-11-12T17:41:40Z
dc.date.available: 2019-11-12T17:41:40Z
dc.date.copyright: 2019 (en_US)
dc.date.issued: 2019 (en_US)
dc.identifier.uri: https://hdl.handle.net/1721.1/122886
dc.description: Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019 (en_US)
dc.description: Cataloged from PDF version of thesis. (en_US)
dc.description: Includes bibliographical references (pages 85-87). (en_US)
dc.description.abstract: Gradient-based optimization algorithms are among the most fundamental algorithms in optimization and machine learning, yet they suffer from slow convergence. Consequently, accelerating gradient-based methods has become an important recent topic of study. In this thesis, we focus on explaining and understanding these acceleration results. In particular, we aim to provide insight into the acceleration phenomenon and to develop new algorithms based on this interpretation. To do so, we follow the line of work on continuous ordinary differential equation (ODE) representations of momentum-based acceleration methods. We start by proving that acceleration can be achieved by stable discretization of ODEs using standard Runge-Kutta integrators when the function is sufficiently smooth and convex. We then extend this idea and develop a distributed algorithm for solving convex finite-sum problems over networks. Our proposed algorithm achieves acceleration without resorting to Nesterov's momentum approach. Finally, we generalize the result to functions that are quasi-strongly convex but not necessarily convex. We show that acceleration can be achieved in a nontrivial neighborhood of the optimal solution; in particular, the neighborhood can grow larger as the condition number of the function increases. Altogether, the results provide a systematic way to prove nonasymptotic convergence rates for algorithms derived from ODE discretization. (en_US)
dc.description.statementofresponsibility: by Jingzhao Zhang. (en_US)
dc.format.extent: 87 pages (en_US)
dc.language.iso: eng (en_US)
dc.publisher: Massachusetts Institute of Technology (en_US)
dc.rights: MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission. (en_US)
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582 (en_US)
dc.subject: Electrical Engineering and Computer Science. (en_US)
dc.title: Dynamical systems view of acceleration in first order optimization (en_US)
dc.type: Thesis (en_US)
dc.description.degree: S.M. (en_US)
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science (en_US)
dc.identifier.oclc: 1126787253 (en_US)
dc.description.collection: S.M. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science (en_US)
dspace.imported: 2019-11-12T17:41:39Z (en_US)
mit.thesis.degree: Master (en_US)
mit.thesis.department: EECS (en_US)
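The abstract above describes the thesis's first result: stably discretizing a suitable ODE with a standard Runge-Kutta integrator yields a convergent optimization method. The sketch below is a minimal toy instance of that recipe, assuming plain gradient flow x'(t) = -grad f(x(t)), the classical RK4 integrator, and a smooth convex quadratic objective; the thesis analyzes different (momentum-type) ODEs and proves accelerated rates that this illustration does not reproduce.

```python
# Toy sketch: derive an optimization method by Runge-Kutta discretization
# of an ODE. Here the ODE is plain gradient flow x' = -grad f(x) and the
# integrator is classical RK4; both are illustrative assumptions, not the
# specific ODE or step-size scheme analyzed in the thesis.
import numpy as np

def rk4_step(x, h, grad):
    """One classical Runge-Kutta (RK4) step of the ODE x' = -grad(x)."""
    k1 = -grad(x)
    k2 = -grad(x + 0.5 * h * k1)
    k3 = -grad(x + 0.5 * h * k2)
    k4 = -grad(x + h * k3)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Smooth convex quadratic f(x) = 0.5 * x^T A x with A positive definite.
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 20))
A = M.T @ M + np.eye(20)            # positive definite, so f is convex

def grad_f(x):
    return A @ x                    # gradient of f

x = rng.standard_normal(20)
L = np.linalg.norm(A, 2)            # smoothness constant (largest eigenvalue)
h = 1.0 / L                         # step size small enough for stability
for _ in range(200):
    x = rk4_step(x, h, grad_f)
print("f(x) after 200 RK4 steps:", 0.5 * x @ A @ x)
```

Each RK4 step costs four gradient evaluations but integrates the flow more accurately than plain gradient descent (forward Euler), which is the sense in which higher-order integrators can buy faster convergence per iteration.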

