A globally convergent incremental Newton method
Author(s)
Gurbuzbalaban, Mert; Ozdaglar, Asuman E.; Parrilo, Pablo A.
Terms of use
Creative Commons Attribution-Noncommercial-Share Alike
Abstract
Motivated by machine learning problems over large data sets and distributed optimization over networks, we develop and analyze a new method, the incremental Newton method, for minimizing the sum of a large number of strongly convex functions. We show that our method is globally convergent under a variable stepsize rule. We further show that, under a gradient growth condition, the convergence rate is linear for both variable and constant stepsize rules. By means of an example, we show that without the gradient growth condition the incremental Newton method cannot achieve linear convergence. Our analysis can be extended to study other incremental methods: in particular, we obtain a linear convergence rate result for the incremental Gauss–Newton algorithm under a variable stepsize rule.
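To make the iteration concrete, the following is a minimal Python sketch of an incremental Newton-type pass on a sum of strongly convex quadratics. It cycles through the component functions, accumulates component Hessians into a running curvature matrix, and takes Newton-like steps with a diminishing stepsize. The problem data, the stepsize schedule, and the names (grad_fi, hess_fi, incremental_newton) are illustrative assumptions; this is not the exact update or the conditions analyzed in the paper.

```python
import numpy as np

# Illustrative problem: minimize sum_i f_i(x), where each
# f_i(x) = 0.5 * x^T A_i x - b_i^T x is strongly convex (A_i is SPD).
rng = np.random.default_rng(0)
m, d = 20, 5
A = []
for _ in range(m):
    M = rng.standard_normal((d, d))
    A.append(np.eye(d) + 0.1 * (M @ M.T))  # symmetric positive definite
b = [rng.standard_normal(d) for _ in range(m)]

def grad_fi(i, x):
    return A[i] @ x - b[i]

def hess_fi(i, x):
    return A[i]  # constant Hessian for a quadratic component

def incremental_newton(x0, cycles=50):
    """Hypothetical incremental Newton-type sketch: within each cycle,
    process the components one at a time, scaling each component gradient
    by the inverse of the Hessians accumulated so far in the cycle."""
    x = x0.copy()
    for k in range(1, cycles + 1):
        H = np.zeros((d, d))
        alpha = 1.0 / k  # assumed diminishing (variable) stepsize rule
        for i in range(m):
            H += hess_fi(i, x)                      # accumulate curvature
            step = np.linalg.solve(H, grad_fi(i, x))
            x -= alpha * step
    return x

x_star = np.linalg.solve(sum(A), sum(b))  # exact minimizer of the full sum
x_hat = incremental_newton(np.zeros(d))
print("distance to minimizer:", np.linalg.norm(x_hat - x_star))
```

Scaling each component gradient by the accumulated curvature, rather than a single component Hessian, is what distinguishes this family of methods from applying Newton's method to one component at a time.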
Date issued
2015-04
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science; Massachusetts Institute of Technology. Laboratory for Information and Decision Systems
Journal
Mathematical Programming
Publisher
Springer-Verlag
Citation
Gurbuzbalaban, M., A. Ozdaglar, and P. Parrilo. "A Globally Convergent Incremental Newton Method." Mathematical Programming 151, no. 1 (April 11, 2015): 283–313.
Version: Original manuscript
ISSN
0025-5610
1436-4646