Show simple item record

dc.contributor.author: Vanli, Nuri Denizcan
dc.contributor.author: Gurbuzbalaban, Mert
dc.contributor.author: Ozdaglar, Asuman E
dc.date.accessioned: 2019-07-01T17:59:18Z
dc.date.available: 2019-07-01T17:59:18Z
dc.date.issued: 2018-05-18
dc.date.submitted: 2016-09-16
dc.identifier.issn: 1052-6234
dc.identifier.issn: 1095-7189
dc.identifier.uri: https://hdl.handle.net/1721.1/121464
dc.description.abstract: We focus on the problem of minimizing the sum of smooth component functions (where the sum is strongly convex) and a nonsmooth convex function, which arises in regularized empirical risk minimization in machine learning and in distributed constrained optimization in wireless sensor networks and smart grids. We consider solving this problem using the proximal incremental aggregated gradient (PIAG) method, which at each iteration moves along an aggregated gradient (formed by incrementally updating gradients of component functions according to a deterministic order) and takes a proximal step with respect to the nonsmooth function. While the convergence properties of this method with randomized orders (in updating gradients of component functions) have been investigated, this paper, to the best of our knowledge, is the first study that establishes the convergence rate properties of the PIAG method for any deterministic order. In particular, we show that the PIAG algorithm is globally convergent with a linear rate provided that the step size is sufficiently small. We explicitly identify the rate of convergence and the corresponding step size to achieve this convergence rate. Our results improve upon the best known condition number and gradient delay bound dependence of the convergence rate of the incremental aggregated gradient methods used for minimizing a sum of smooth functions.
Key words: convex optimization, nonsmooth optimization, proximal incremental aggregated gradient method
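The abstract describes the PIAG iteration procedurally; the following minimal Python sketch illustrates it on an l1-regularized least-squares instance. It is not the authors' implementation: the quadratic component functions, the soft-thresholding proximal operator for the l1 term, and all problem sizes and step-size values are assumptions chosen only to make the example runnable.

import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1 (soft-thresholding), applied elementwise.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def piag(A, b, lam, step, iters=5000):
    # PIAG-style sketch for min_x sum_i 0.5*(a_i^T x - b_i)^2 + lam*||x||_1,
    # refreshing one component gradient per iteration in a fixed cyclic
    # (deterministic) order and taking a proximal step on the l1 term.
    n_comp, dim = A.shape
    x = np.zeros(dim)
    grads = (A @ x - b)[:, None] * A   # table of (possibly stale) component gradients
    agg = grads.sum(axis=0)            # aggregated gradient
    for k in range(iters):
        i = k % n_comp                            # deterministic cyclic order
        g_new = (A[i] @ x - b[i]) * A[i]          # gradient of f_i at the current point
        agg += g_new - grads[i]                   # incremental update of the aggregate
        grads[i] = g_new
        x = soft_threshold(x - step * agg, step * lam)   # proximal step
    return x

# Illustrative usage on a small synthetic problem.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
b = rng.standard_normal(50)
print(piag(A, b, lam=0.1, step=1e-3))

Consistent with the abstract, the sketch uses a fixed deterministic order over components and a small constant step size; the paper's guarantee of global linear convergence applies when the step size is chosen sufficiently small.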
dc.language.iso: en
dc.publisher: Society for Industrial & Applied Mathematics (SIAM)
dc.relation.isversionof: 10.1137/16m1094415
dc.rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
dc.source: SIAM
dc.title: Global Convergence Rate of Proximal Incremental Aggregated Gradient Methods
dc.type: Article
dc.identifier.citation: Vanli, N. D., et al. “Global Convergence Rate of Proximal Incremental Aggregated Gradient Methods.” SIAM Journal on Optimization 28, no. 2 (January 2018): 1282–1300. © 2018 Society for Industrial and Applied Mathematics
dc.contributor.department: Massachusetts Institute of Technology. Laboratory for Information and Decision Systems
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.eprint.version: Final published version
dc.type.uri: http://purl.org/eprint/type/JournalArticle
eprint.status: http://purl.org/eprint/status/PeerReviewed
dc.date.updated: 2019-06-28T16:27:32Z
dspace.date.submission: 2019-06-28T16:27:33Z
mit.journal.issue: SIAM Journal on Optimization

