Global convergence rate of incremental aggregated gradient methods for nonsmooth problems
Author(s): Vanli, Nuri Denizcan; Gurbuzbalaban, Mert; Koksal, Asuman E.
We analyze the proximal incremental aggregated gradient (PIAG) method for minimizing the sum of a large number of smooth component functions f(x) = Σ_{i=1}^m f_i(x) and a convex function r(x). Such composite optimization problems arise in a number of machine learning applications, including regularized regression problems and constrained distributed optimization problems over sensor networks. Our method computes an approximate gradient for the function f(x) by aggregating the component gradients evaluated at outdated iterates over a finite window K, and uses a proximal operator with respect to the regularization function r(x) at the intermediate iterate obtained by moving along the approximate gradient. Under the assumptions that f(x) is strongly convex and each f_i(x) is smooth with Lipschitz gradients, we show the first linear convergence rate result for the PIAG method and provide explicit convergence rate estimates that highlight the dependence on the condition number of the problem and the size of the window K over which outdated component gradients are evaluated.
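The iteration described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation, using hypothetical problem data: quadratic components f_i(x) = ½(aᵢᵀx − bᵢ)² and the ℓ₁ regularizer r(x) = λ‖x‖₁, whose proximal operator is soft-thresholding. Gradients are refreshed cyclically, so each stored component gradient is at most m iterations old (i.e., the window K equals m here); the step size is a conservative guess, not the rate-optimal choice from the paper.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def piag(A, b, lam=0.1, step=None, n_iters=200):
    """Sketch of the proximal incremental aggregated gradient (PIAG) method
    for 0.5 * sum_i (a_i^T x - b_i)^2 + lam * ||x||_1.

    One component gradient is refreshed per iteration (cyclic order), so the
    aggregated gradient mixes gradients evaluated at outdated iterates.
    """
    m, n = A.shape
    if step is None:
        # Conservative step size; a crude stand-in for 1/L with delays.
        step = 1.0 / (m * np.max(np.sum(A * A, axis=1)))
    x = np.zeros(n)
    # Table of component gradients at (possibly outdated) iterates.
    grads = np.array([(A[i] @ x - b[i]) * A[i] for i in range(m)])
    agg = grads.sum(axis=0)  # aggregated approximate full gradient
    for k in range(n_iters):
        i = k % m  # cyclic sweep: gradient i is refreshed every m steps
        new_g = (A[i] @ x - b[i]) * A[i]
        agg += new_g - grads[i]  # update the aggregate in O(n)
        grads[i] = new_g
        # Proximal step w.r.t. r at the intermediate iterate x - step * agg.
        x = soft_threshold(x - step * agg, step * lam)
    return x
```

Note the design choice that makes the method cheap per iteration: the aggregate is maintained incrementally (subtract the stale gradient, add the fresh one), so each step costs one component-gradient evaluation plus one prox, regardless of m.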
Department: Massachusetts Institute of Technology. Laboratory for Information and Decision Systems
Conference: 2016 IEEE 55th Conference on Decision and Control (CDC)
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Citation: Vanli, N. Denizcan et al. "Global Convergence Rate of Incremental Aggregated Gradient Methods for Nonsmooth Problems." 2016 IEEE 55th Conference on Decision and Control (CDC), December 12-14, 2016, Las Vegas, Nevada, USA, Institute of Electrical and Electronics Engineers (IEEE), December 2016: 173-178. © 2016 Institute of Electrical and Electronics Engineers (IEEE)