Fast distributed first-order methods
Author: Chen, Annie I-An
Massachusetts Institute of Technology. Dept. of Electrical Engineering and Computer Science.
This thesis provides a systematic framework for the development and analysis of distributed optimization methods for multi-agent networks with time-varying connectivity. The goal is to optimize a global objective function that is the sum of local objective functions privately known to individual agents. In our methods, each agent iteratively updates its estimate of the global optimum by optimizing its local function and exchanging estimates with others in the network. We introduce distributed proximal-gradient methods that enable the use of a gradient-based scheme for non-differentiable functions with a favorable structure. We present a convergence rate analysis that highlights the dependence on the step size rule. We also propose a novel fast distributed method that uses Nesterov-type acceleration techniques and multiple communication steps per iteration. Our method achieves exact convergence at the rate of O(1/t) (where t is the number of communication steps taken), which is superior to the rates of existing gradient or subgradient algorithms; simulation results confirm this advantage.
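To make the abstract's setting concrete, the sketch below is a hypothetical illustration (not the thesis's own algorithm) of the basic distributed gradient template it builds on: each agent mixes its neighbors' estimates through a doubly stochastic weight matrix `W` and then takes a gradient step on its private objective. The agent objectives f_i(x) = (x - b_i)^2, the ring network, and the step size are invented for this example; with a constant step size, the agents converge only to a neighborhood of the true optimum, which is the kind of inexactness the thesis's accelerated method is designed to overcome.

```python
import numpy as np

def distributed_gradient(b, W, alpha=0.1, iters=200):
    """Consensus + local gradient step for minimizing sum_i (x - b_i)^2.

    b     : each agent i privately knows its target b_i
    W     : doubly stochastic mixing matrix (W[i, j] > 0 iff i, j are neighbors)
    alpha : constant step size (constant steps yield a biased limit)
    """
    x = np.zeros_like(b, dtype=float)      # each agent's estimate of the optimum
    for _ in range(iters):
        mixed = W @ x                      # communication: average with neighbors
        grad = 2.0 * (mixed - b)           # local gradient of f_i(x) = (x - b_i)^2
        x = mixed - alpha * grad           # local gradient step
    return x

# Three fully connected agents; the global optimum of sum_i (x - b_i)^2
# is the mean of the b_i, here mean([1, 2, 6]) = 3.
b = np.array([1.0, 2.0, 6.0])
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
x = distributed_gradient(b, W)
# With this constant step size, each estimate lands near (but not exactly at)
# the optimum 3, illustrating the residual error of non-accelerated schemes.
```

The average of the agents' limits recovers the optimum exactly here, but the individual estimates retain an O(alpha) offset; diminishing step sizes or the acceleration techniques the abstract describes are what remove this gap.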
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. Cataloged from PDF version of thesis. Includes bibliographical references (p. 91-94).
Department: Massachusetts Institute of Technology. Dept. of Electrical Engineering and Computer Science.