A fast distributed proximal-gradient method
Author(s)
Chen, Annie I.; Ozdaglar, Asuman E.
Download: Ozdaglar_A fast.pdf (339.4 KB)
Terms of use
Open Access Policy
Creative Commons Attribution-Noncommercial-Share Alike
Abstract
We present a distributed proximal-gradient method for minimizing the average of convex functions, each of which is the private local objective of an agent in a network with time-varying topology. The local objectives have distinct differentiable components but share a common nondifferentiable component whose structure admits efficient computation of the proximal operator. In our method, each agent iteratively updates its estimate of the global minimum by optimizing its local objective function and exchanging estimates with other agents over the network. By combining Nesterov-type acceleration with multiple communication steps per iteration, the method converges at the rate 1/k, where k is the number of communication rounds between the agents, which is faster than the convergence rates of existing distributed methods for this problem. The superior convergence rate is also verified by numerical experiments.
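To make the description above concrete, the following is a minimal Python/NumPy sketch of a distributed proximal-gradient loop with Nesterov-type momentum and several consensus (communication) rounds per iteration, in the general spirit of the abstract. It assumes least-squares local objectives and a shared l1 regularizer (so the proximal operator is soft-thresholding); the step-size rule, the momentum schedule beta = (k-1)/(k+2), the mixing matrix W, and all function names are illustrative assumptions, not the authors' exact algorithm or analysis.

import numpy as np

def grad_fi(A_i, b_i, x):
    # Gradient of the local smooth term f_i(x) = 0.5 * ||A_i x - b_i||^2.
    return A_i.T @ (A_i @ x - b_i)

def prox_l1(v, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def distributed_accel_prox_grad(A, b, W, lam=0.1, step=None, iters=200, comm_rounds=3):
    # Illustrative distributed proximal-gradient loop with Nesterov-type momentum
    # and several consensus rounds per iteration.
    # A, b: lists of local data (A_i, b_i); W: doubly stochastic mixing matrix.
    n_agents = len(A)
    d = A[0].shape[1]
    if step is None:
        # Conservative step size from the largest local Lipschitz constant.
        L = max(np.linalg.norm(Ai, 2) ** 2 for Ai in A)
        step = 1.0 / L
    x = np.zeros((n_agents, d))   # current estimates, one row per agent
    y = x.copy()                  # extrapolated (momentum) points
    x_prev = x.copy()
    for k in range(1, iters + 1):
        # 1) Local proximal-gradient step at the extrapolated point.
        for i in range(n_agents):
            x[i] = prox_l1(y[i] - step * grad_fi(A[i], b[i], y[i]), step * lam)
        # 2) Multiple rounds of consensus averaging over the network.
        for _ in range(comm_rounds):
            x = W @ x
        # 3) Nesterov-type extrapolation using the previous iterate.
        beta = (k - 1) / (k + 2)
        y = x + beta * (x - x_prev)
        x_prev = x.copy()
    return x.mean(axis=0)

# Example usage with synthetic data and a complete-graph mixing matrix (also an assumption).
rng = np.random.default_rng(0)
n_agents, m, d = 5, 20, 10
A = [rng.standard_normal((m, d)) for _ in range(n_agents)]
x_true = np.concatenate([rng.standard_normal(3), np.zeros(d - 3)])
b = [Ai @ x_true + 0.01 * rng.standard_normal(m) for Ai in A]
W = np.full((n_agents, n_agents), 1.0 / n_agents)  # doubly stochastic
x_hat = distributed_accel_prox_grad(A, b, W)

The repeated averaging step in the inner loop reflects the abstract's use of multiple communication rounds per iteration: it pulls the agents' estimates closer to their network average before the next local update, which is what makes the faster rate in terms of communication rounds plausible.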
Date issued
2012-10
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science; Massachusetts Institute of Technology. Laboratory for Information and Decision Systems
Journal
Proceedings of the 2012 50th Annual Allerton Conference on Communication, Control, and Computing (Allerton)
Publisher
Institute of Electrical and Electronics Engineers (IEEE)
Citation
Chen, Annie I., and Asuman Ozdaglar. “A Fast Distributed Proximal-Gradient Method.” 2012 50th Annual Allerton Conference on Communication, Control, and Computing (Allerton) (October 2012).
Version: Author's final manuscript
ISBN
978-1-4673-4539-2
978-1-4673-4537-8
978-1-4673-4538-5