Achieving acceleration in distributed optimization via direct discretization of the heavy-ball ODE
Author(s)
Zhang, J; Uribe, CA; Mokhtari, A; Jadbabaie, A
Open Access Policy
Creative Commons Attribution-Noncommercial-Share Alike
Terms of use
Metadata
Abstract
© 2019 American Automatic Control Council. We develop a distributed algorithm for convex Empirical Risk Minimization, the problem of minimizing a large but finite sum of convex functions over a network. The proposed algorithm is derived by directly discretizing the second-order heavy-ball differential equation and attains an accelerated convergence rate, i.e., faster than distributed gradient descent-based methods, for strongly convex objectives that may not be smooth. Notably, we achieve acceleration without resorting to the well-known Nesterov momentum approach. We provide numerical experiments and contrast the proposed method with recently proposed optimal distributed optimization algorithms.
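To make the heavy-ball idea mentioned in the abstract concrete, the sketch below runs the classical (centralized) heavy-ball iteration x_{k+1} = x_k + beta (x_k - x_{k-1}) - alpha grad f(x_k) on a toy quadratic. This is an illustrative example only, not the paper's distributed algorithm or its discretization scheme; the step size `alpha`, momentum `beta`, and the objective are assumed values chosen for the toy problem.

```python
def grad(x):
    # Gradient of the toy objective f(x) = 0.5 * x**2, minimized at x = 0.
    return x

# Assumed hyperparameters for this toy problem (not from the paper).
alpha, beta = 0.1, 0.9

# Heavy-ball keeps two consecutive iterates; start both at the same point.
x_prev, x = 5.0, 5.0

for _ in range(500):
    # x_{k+1} = x_k + beta * (x_k - x_{k-1}) - alpha * grad f(x_k)
    x, x_prev = x + beta * (x - x_prev) - alpha * grad(x), x

print(abs(x) < 1e-6)  # iterate has converged close to the minimizer 0
```

The momentum term `beta * (x - x_prev)` is what the second-order ODE contributes over plain gradient descent; the paper's contribution concerns how discretizing that ODE directly, in a distributed multi-agent setting, yields acceleration without Nesterov-style momentum.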
Date issued
2019-07-01
Department
Massachusetts Institute of Technology. Department of Civil and Environmental Engineering; Massachusetts Institute of Technology. Institute for Data, Systems, and Society
Journal
Proceedings of the American Control Conference
Publisher
IEEE
Citation
Zhang, J, Uribe, CA, Mokhtari, A and Jadbabaie, A. 2019. "Achieving acceleration in distributed optimization via direct discretization of the heavy-ball ODE." Proceedings of the American Control Conference, 2019-July.
Version: Author's final manuscript