Accelerating incremental gradient optimization with curvature information

Author(s)
Wai, Hoi-To; Shi, Wei; Uribe, César A; Nedić, Angelia; Scaglione, Anna
Download: 10589_2020_183_ReferencePDF.pdf (664.9Kb)
Publisher Policy

Terms of use
Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
Abstract
This paper studies an acceleration technique for the incremental aggregated gradient (IAG) method through the use of curvature information for solving strongly convex finite-sum optimization problems. Such optimization problems arise in large-scale learning applications. Our technique utilizes a curvature-aided gradient tracking step to produce accurate gradient estimates incrementally using Hessian information. We propose and analyze two methods utilizing the new technique, the curvature-aided IAG (CIAG) method and the accelerated CIAG (A-CIAG) method, which are analogous to the gradient method and Nesterov's accelerated gradient method, respectively. Letting $\kappa$ be the condition number of the objective function, we prove R-linear convergence rates of $1 - \frac{4 c_0 \kappa}{(\kappa+1)^2}$ for the CIAG method and $1 - \sqrt{\frac{c_1}{2\kappa}}$ for the A-CIAG method, where $c_0, c_1 \le 1$ are constants inversely proportional to the distance between the initial point and the optimal solution. When the initial iterate is close to the optimal solution, the R-linear convergence rates match those of the gradient and accelerated gradient methods, although CIAG and A-CIAG operate in an incremental setting with strictly lower computation complexity. Numerical experiments confirm our findings. The source code used for this paper can be found at http://github.com/hoitowai/ciag/.
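The central mechanism admits a compact sketch. Below is a minimal, hypothetical Python reading of the curvature-aided gradient tracking step as the abstract describes it, not the authors' reference implementation (that lives at http://github.com/hoitowai/ciag/): for $f(x) = \sum_{i=1}^m f_i(x)$, each iteration evaluates the gradient and Hessian of a single component, yet maintains a full-gradient estimate built from first-order Taylor expansions of every component around the last point at which it was refreshed. All names here (`grad_fns`, `hess_fns`, `step`) are illustrative assumptions, not identifiers from the paper.

```python
import numpy as np

def ciag_sketch(grad_fns, hess_fns, x0, step, n_iters):
    """Hypothetical sketch of curvature-aided incremental aggregated gradient (CIAG).

    grad_fns[i](x) -> gradient of f_i at x, shape (d,)
    hess_fns[i](x) -> Hessian  of f_i at x, shape (d, d)
    """
    m = len(grad_fns)
    x = x0.copy()
    snap = [x0.copy() for _ in range(m)]  # last iterate at which f_i was refreshed
    # Track H = sum_i hess_i(snap_i) and b = sum_i [grad_i(snap_i) - hess_i(snap_i) @ snap_i],
    # so that b + H @ x is the Taylor-expansion estimate of the full gradient at any x.
    H = sum(h(x0) for h in hess_fns)
    b = sum(g(x0) for g in grad_fns) - H @ x0
    for k in range(n_iters):
        i = k % m  # cyclic component selection; only f_i is touched this iteration
        # Swap component i's old contribution for one centered at the current iterate.
        Hi_old, Hi_new = hess_fns[i](snap[i]), hess_fns[i](x)
        b += (grad_fns[i](x) - Hi_new @ x) - (grad_fns[i](snap[i]) - Hi_old @ snap[i])
        H += Hi_new - Hi_old
        snap[i] = x.copy()
        x = x - step * (b + H @ x)  # gradient-style step with the tracked estimate
    return x
```

Two remarks connect this sketch to the rates quoted above. For quadratic $f_i$ the Taylor expansions are exact, so the tracked estimate equals $\nabla f(x)$ and the iteration reduces to exact gradient descent; for general strongly convex problems the estimate is accurate when the iterates stay close together, consistent with the constants $c_0, c_1$ shrinking as the initial point moves away from the optimum. And with $c_0 = 1$ the CIAG rate factors as $1 - \frac{4\kappa}{(\kappa+1)^2} = \left(\frac{\kappa-1}{\kappa+1}\right)^2$, the squared-distance contraction of gradient descent with the optimal step size, which is the sense in which the rates "match".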
Date issued
2020-03-07
URI
https://hdl.handle.net/1721.1/131862
Department
Massachusetts Institute of Technology. Laboratory for Information and Decision Systems
Publisher
Springer US

Collections
  • MIT Open Access Articles
