Convergence Rate of Distributed ADMM over Networks
Author(s)
Makhdoumi Kakhaki, Ali; Ozdaglar, Asuman E
Abstract
We propose a new distributed algorithm based on the alternating direction method of multipliers (ADMM) to minimize the sum of locally known convex functions using communication over a network. This optimization problem emerges in many applications in distributed machine learning and statistical estimation. Our algorithm allows for a general choice of the communication weight matrix, which is used to combine the iterates at different nodes. We show that when the functions are convex, both the objective function values and the feasibility violation converge with rate $O(1/T)$, where $T$ is the number of iterations. We then show that when the functions are strongly convex and have Lipschitz continuous gradients, the sequence generated by our algorithm converges linearly to the optimal solution. In particular, an $\epsilon$-optimal solution can be computed with $O(\sqrt{\kappa}\, \log(1/\epsilon))$ iterations, where $\kappa$ is the condition number of the problem. Our analysis highlights the effect of the network and the communication weights on the convergence rate through the degrees of the nodes, the smallest nonzero eigenvalue, and the operator norm of the communication matrix.
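To make the setting concrete, below is a minimal sketch of consensus-form ADMM for minimizing a sum of local functions, specialized to the simplest possible case: scalar quadratic local costs $f_i(x) = \frac{1}{2}(x - a_i)^2$ and a complete graph with uniform communication weights $W_{ij} = 1/n$, for which the neighbor-averaging step reduces to an exact global average. This is an illustrative toy, not the paper's algorithm; the local data a, the penalty parameter rho, and the quadratic costs are all assumptions made for the demo.

```
# Minimal sketch of consensus ADMM (scaled dual form) on a toy problem.
# Each node i holds f_i(x) = 0.5 * (x - a_i)^2; the minimizer of the sum
# is the average of the a_i. Uniform weights W_ij = 1/n (complete graph)
# make the mixing step an exact global average.
import numpy as np

n = 5                                      # number of nodes
rho = 1.0                                  # ADMM penalty parameter (assumed)
a = np.array([1.0, 2.0, 3.0, 4.0, 5.0])    # local data, one value per node

x = np.zeros(n)                            # local primal iterates
u = np.zeros(n)                            # scaled dual variables
z = 0.0                                    # consensus variable

for _ in range(100):
    # Local update: argmin_x f_i(x) + (rho/2) * (x - z + u_i)^2,
    # which has a closed form for the quadratic f_i.
    x = (a + rho * (z - u)) / (1.0 + rho)
    # Consensus update: with uniform weights, mixing is a global average.
    z = np.mean(x + u)
    # Dual update driving every x_i toward the consensus value z.
    u = u + x - z

print(z, np.mean(a))                       # both approach 3.0
```

On a general network, the averaging step would instead combine iterates only across neighboring nodes through the weight matrix, which is exactly where the node degrees and the spectrum of the communication matrix enter the convergence rate.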
Date issued
2017-10
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Journal
IEEE Transactions on Automatic Control
Publisher
Institute of Electrical and Electronics Engineers (IEEE)
Citation
Makhdoumi, Ali, and Asuman Ozdaglar. "Convergence Rate of Distributed ADMM over Networks." IEEE Transactions on Automatic Control 62, no. 10 (October 2017): 5082–5095.
Version: Original manuscript
ISSN
0018-9286