
dc.contributor.author: Gurbuzbalaban, Mert
dc.contributor.author: Ozdaglar, Asuman E.
dc.contributor.author: Parrilo, Pablo A.
dc.date.accessioned: 2018-03-19T14:01:47Z
dc.date.available: 2018-03-19T14:01:47Z
dc.date.issued: 2017-01
dc.date.submitted: 2015-11
dc.identifier.issn: 1052-6234
dc.identifier.issn: 1095-7189
dc.identifier.uri: http://hdl.handle.net/1721.1/114181
dc.description.abstract: Motivated by applications to distributed optimization over networks and large-scale data processing in machine learning, we analyze the deterministic incremental aggregated gradient method for minimizing a finite sum of smooth functions whose sum is strongly convex. This method processes the functions one at a time in a deterministic order and incorporates a memory of previous gradient values to accelerate convergence. The method performs well in practice; however, to our knowledge no theoretical analysis with explicit rate results was previously available in the literature, as most recent efforts have concentrated on randomized versions. In this paper, we show that this deterministic algorithm converges globally and linearly, and we characterize its convergence rate. We also consider an aggregated method with momentum and demonstrate its linear convergence. Our proofs rely on a careful choice of Lyapunov function that offers insight into the algorithm's behavior and simplifies the proofs considerably. Key words: convex optimization, first-order methods, convergence analysis, large-scale optimization [en_US]
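The abstract above describes the deterministic incremental aggregated gradient (IAG) iteration only in words. As a rough illustration, the following Python sketch implements one plausible reading of that description: components are visited in a fixed cyclic order, a memory of the most recently computed gradient of each component is kept, and each step moves along the sum of the stored gradients with a constant step size. All names, the quadratic test problem, and the step-size choice are illustrative assumptions, not taken from the paper, which should be consulted for the precise algorithm and the step-size conditions under which linear convergence is proved.

    import numpy as np

    def iag(grads, x0, step_size, n_passes=100):
        """Sketch of a cyclic incremental aggregated gradient (IAG) loop.

        grads     -- list of callables; grads[i](x) returns the gradient of f_i at x
        x0        -- starting point (NumPy array)
        step_size -- constant step size (assumed small enough for the smooth,
                     strongly convex setting described in the abstract)
        n_passes  -- number of full cyclic passes over the component functions
        """
        m = len(grads)
        x = np.array(x0, dtype=float)
        # Memory of the most recently computed gradient of each component.
        memory = [g(x) for g in grads]
        aggregate = np.sum(memory, axis=0)     # running sum of the stored gradients

        for _ in range(n_passes):
            for i in range(m):                 # deterministic (cyclic) order
                new_grad = grads[i](x)         # refresh only component i at the current point
                aggregate += new_grad - memory[i]
                memory[i] = new_grad
                x = x - step_size * aggregate  # step along the aggregated gradient
        return x

    # Toy usage: m quadratics f_i(x) = 0.5 * ||x - b_i||^2, whose sum is strongly convex.
    b = [np.array([1.0, 2.0]), np.array([3.0, -1.0]), np.array([0.0, 0.5])]
    grads = [lambda x, bi=bi: x - bi for bi in b]
    x_hat = iag(grads, x0=np.zeros(2), step_size=0.1, n_passes=200)
    # x_hat should end up close to the minimizer, i.e. the mean of the b_i vectors.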
dc.description.sponsorship: United States. Air Force. Office of Scientific Research. Multidisciplinary University Research Initiative (FA9550-09-1-0538) [en_US]
dc.description.sponsorship: United States. Office of Naval Research (Basic Research Challenge Grant N000141210997) [en_US]
dc.publisher: Society for Industrial & Applied Mathematics (SIAM) [en_US]
dc.relation.isversionof: http://dx.doi.org/10.1137/15M1049695 [en_US]
dc.rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use. [en_US]
dc.source: Society for Industrial and Applied Mathematics [en_US]
dc.title: On the Convergence Rate of Incremental Aggregated Gradient Algorithms [en_US]
dc.type: Article [en_US]
dc.identifier.citation: Gürbüzbalaban, M., et al. “On the Convergence Rate of Incremental Aggregated Gradient Algorithms.” SIAM Journal on Optimization, vol. 27, no. 2, Jan. 2017, pp. 1035–48. © 2017 Society for Industrial and Applied Mathematics. [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Laboratory for Information and Decision Systems [en_US]
dc.contributor.mitauthor: Gurbuzbalaban, Mert
dc.contributor.mitauthor: Ozdaglar, Asuman E.
dc.contributor.mitauthor: Parrilo, Pablo A.
dc.relation.journal: SIAM Journal on Optimization [en_US]
dc.eprint.version: Final published version [en_US]
dc.type.uri: http://purl.org/eprint/type/JournalArticle [en_US]
eprint.status: http://purl.org/eprint/status/PeerReviewed [en_US]
dc.date.updated: 2018-03-02T16:59:49Z
dspace.orderedauthors: Gürbüzbalaban, M.; Ozdaglar, A.; Parrilo, P. A. [en_US]
dspace.embargo.terms: N [en_US]
dc.identifier.orcid: https://orcid.org/0000-0002-0575-2450
dc.identifier.orcid: https://orcid.org/0000-0002-1827-1285
dc.identifier.orcid: https://orcid.org/0000-0003-1132-8477
mit.license: PUBLISHER_POLICY [en_US]

