Show simple item record

dc.contributor.author: Aybat, Necdet Serhat
dc.contributor.author: Fallah, Alireza
dc.contributor.author: Gürbüzbalaban, Mert
dc.contributor.author: Ozdaglar, Asuman
dc.date.accessioned: 2021-10-27T19:56:30Z
dc.date.available: 2021-10-27T19:56:30Z
dc.date.issued: 2020
dc.identifier.uri: https://hdl.handle.net/1721.1/133761
dc.description.abstract: © 2020 Society for Industrial and Applied Mathematics. We study the trade-offs between convergence rate and robustness to gradient errors in designing a first-order algorithm. We focus on gradient descent and accelerated gradient (AG) methods for minimizing strongly convex functions when the gradient has random errors in the form of additive white noise. With gradient errors, the function values of the iterates need not converge to the optimal value; hence, we define the robustness of an algorithm to noise as the asymptotic expected suboptimality of the iterate sequence relative to the input noise power. For this robustness measure, we provide exact expressions for the quadratic case using tools from robust control theory, and tight upper bounds for the smooth strongly convex case using Lyapunov functions certified through matrix inequalities. We use these characterizations within an optimization problem that selects the parameters of each algorithm to achieve a particular trade-off between rate and robustness. Our results show that AG can achieve acceleration while being more robust to random gradient errors. This behavior is quite different from what has previously been reported for the deterministic gradient noise setting. We also establish connections between the robustness of an algorithm and how quickly it can converge back to the optimal solution if it is perturbed from the optimal point with deterministic noise. Our framework also leads to practical algorithms that can perform better than other state-of-the-art methods in the presence of random gradient noise. (A minimal simulation sketch illustrating this rate-robustness trade-off follows the record below.)
dc.language.iso: en
dc.publisher: Society for Industrial & Applied Mathematics (SIAM)
dc.relation.isversionof: 10.1137/19M1244925
dc.rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
dc.source: SIAM
dc.title: Robust Accelerated Gradient Methods for Smooth Strongly Convex Functions
dc.type: Article
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.relation.journal: SIAM Journal on Optimization
dc.eprint.version: Final published version
dc.type.uri: http://purl.org/eprint/type/JournalArticle
eprint.status: http://purl.org/eprint/status/PeerReviewed
dc.date.updated: 2021-02-03T16:32:27Z
dspace.orderedauthors: Aybat, NS; Fallah, A; Gürbüzbalaban, M; Ozdaglar, A
dspace.date.submission: 2021-02-03T16:32:31Z
mit.journal.volume: 30
mit.journal.issue: 1
mit.license: PUBLISHER_POLICY
mit.metadata.status: Authority Work and Publication Information Needed
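
The sketch below is not the authors' code; it only illustrates the setting described in the abstract: gradient descent (GD) and Nesterov's accelerated gradient (AG) applied to a strongly convex quadratic when the gradient oracle is corrupted by additive white Gaussian noise. The quadratic objective, the noise level sigma, the 1/L step size for GD, and the textbook momentum parameter for AG are all assumptions made for illustration; the paper instead characterizes the asymptotic expected suboptimality exactly (quadratics) or via matrix-inequality-certified Lyapunov bounds (smooth strongly convex case) and tunes the parameters to achieve a prescribed rate-robustness trade-off.

```python
# Illustrative sketch (not the paper's algorithm or parameters): compare GD and
# Nesterov's AG under additive white gradient noise on a strongly convex quadratic,
# and estimate the asymptotic expected suboptimality of each method empirically.
import numpy as np

rng = np.random.default_rng(0)

# Strongly convex quadratic f(x) = 0.5 * x^T Q x, with mu <= eig(Q) <= L and f* = 0.
d, mu, L = 20, 1.0, 100.0
Q = np.diag(np.linspace(mu, L, d))
f = lambda x: 0.5 * x @ Q @ x
grad = lambda x: Q @ x

sigma = 0.1                 # assumed noise level: oracle returns grad(x) + sigma * N(0, I)
iters, burn_in = 5000, 2500

def noisy_grad(x):
    return grad(x) + sigma * rng.standard_normal(d)

# Gradient descent with the standard step size 1/L (assumed, not the paper's tuned choice).
x = np.ones(d)
gd_subopt = []
for _ in range(iters):
    x = x - (1.0 / L) * noisy_grad(x)
    gd_subopt.append(f(x))

# Nesterov's AG with the textbook parameters for strongly convex functions
# (alpha = 1/L, beta = (sqrt(kappa)-1)/(sqrt(kappa)+1)); again an assumed default.
kappa = L / mu
alpha = 1.0 / L
beta = (np.sqrt(kappa) - 1.0) / (np.sqrt(kappa) + 1.0)
x = x_prev = np.ones(d)
ag_subopt = []
for _ in range(iters):
    y = x + beta * (x - x_prev)
    x_prev, x = x, y - alpha * noisy_grad(y)
    ag_subopt.append(f(x))

# Crude proxy for the robustness measure: average suboptimality over the tail of the run.
print("GD asymptotic suboptimality ~", np.mean(gd_subopt[burn_in:]))
print("AG asymptotic suboptimality ~", np.mean(ag_subopt[burn_in:]))
```

The tail averages give only a rough empirical proxy for the robustness measure discussed in the abstract; repeating the experiment across different step-size and momentum choices mirrors, informally, the rate-robustness trade-off that the paper formalizes and optimizes over.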