
dc.contributor.author: Mokhtari, Aryan
dc.contributor.author: Ozdaglar, Asuman E
dc.contributor.author: Pattathil, Sarath
dc.date.accessioned: 2022-07-18T17:03:54Z
dc.date.available: 2022-07-18T17:03:54Z
dc.date.issued: 2020
dc.identifier.uri: https://hdl.handle.net/1721.1/143825
dc.description.abstract: © 2020 Society for Industrial and Applied Mathematics. We study the iteration complexity of the optimistic gradient descent-ascent (OGDA) method and the extragradient (EG) method for finding a saddle point of a convex-concave unconstrained min-max problem. To do so, we first show that both OGDA and EG can be interpreted as approximate variants of the proximal point method. This is similar to the approach taken in (A. Nemirovski (2004), SIAM J. Optim., 15, pp. 229-251), which analyzes EG as an approximation of the "conceptual mirror prox." In this paper, we highlight how the gradients used in OGDA and EG approximate the gradient of the proximal point method. We then exploit this interpretation to show that both algorithms produce iterates that remain within a bounded set. We further show that the primal-dual gap of the averaged iterates generated by both of these algorithms converges at a rate of O(1/k). Our theoretical analysis is of interest as it provides the first convergence rate estimate for OGDA in the general convex-concave setting. Moreover, it provides a simple convergence analysis for the EG algorithm in terms of function value without using a compactness assumption. [en_US]
dc.language.iso: en
dc.publisher: Society for Industrial & Applied Mathematics (SIAM) [en_US]
dc.relation.isversionof: 10.1137/19M127375X [en_US]
dc.rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use. [en_US]
dc.source: SIAM [en_US]
dc.title: Convergence Rate of O(1/k) for Optimistic Gradient and Extragradient Methods in Smooth Convex-Concave Saddle Point Problems [en_US]
dc.type: Article [en_US]
dc.identifier.citation: Mokhtari, Aryan, Ozdaglar, Asuman E and Pattathil, Sarath. 2020. "Convergence Rate of O(1/k) for Optimistic Gradient and Extragradient Methods in Smooth Convex-Concave Saddle Point Problems." SIAM Journal on Optimization, 30 (4).
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.relation.journal: SIAM Journal on Optimization [en_US]
dc.eprint.version: Final published version [en_US]
dc.type.uri: http://purl.org/eprint/type/JournalArticle [en_US]
eprint.status: http://purl.org/eprint/status/PeerReviewed [en_US]
dc.date.updated: 2022-07-18T17:00:17Z
dspace.orderedauthors: Mokhtari, A; Ozdaglar, AE; Pattathil, S [en_US]
dspace.date.submission: 2022-07-18T17:00:19Z
mit.journal.volume: 30 [en_US]
mit.journal.issue: 4 [en_US]
mit.license: PUBLISHER_POLICY
mit.metadata.status: Authority Work and Publication Information Needed [en_US]
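
For context, the OGDA and EG updates analyzed in the abstract above can be sketched as follows. This is a minimal illustrative sketch, not code from the paper: the bilinear toy objective f(x, y) = xᵀAy, the step size eta, and the iteration count are assumptions made here for demonstration, and the averaging mirrors the averaged iterates for which the O(1/k) primal-dual gap rate is stated.

```python
import numpy as np

# Illustrative convex-concave toy problem (not from the paper): f(x, y) = x^T A y,
# whose saddle point is (0, 0). Step size eta is an assumed small constant.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))

def grad_x(x, y):
    return A @ y          # gradient of f with respect to x

def grad_y(x, y):
    return A.T @ x        # gradient of f with respect to y

eta = 0.05                # assumed step size (kept small relative to the smoothness constant)

def extragradient(x, y, steps=2000):
    """EG: gradient step to a midpoint, then update the iterate using the midpoint gradient."""
    x_sum, y_sum = np.zeros_like(x), np.zeros_like(y)
    for _ in range(steps):
        x_mid = x - eta * grad_x(x, y)
        y_mid = y + eta * grad_y(x, y)
        x = x - eta * grad_x(x_mid, y_mid)
        y = y + eta * grad_y(x_mid, y_mid)
        x_sum += x
        y_sum += y
    # The O(1/k) primal-dual gap guarantee discussed in the abstract concerns averaged iterates.
    return x_sum / steps, y_sum / steps

def ogda(x, y, steps=2000):
    """OGDA: one new gradient per iteration plus a correction using the previous gradient."""
    gx_prev, gy_prev = grad_x(x, y), grad_y(x, y)
    x_sum, y_sum = np.zeros_like(x), np.zeros_like(y)
    for _ in range(steps):
        gx, gy = grad_x(x, y), grad_y(x, y)
        x = x - 2 * eta * gx + eta * gx_prev
        y = y + 2 * eta * gy - eta * gy_prev
        gx_prev, gy_prev = gx, gy
        x_sum += x
        y_sum += y
    return x_sum / steps, y_sum / steps

x0, y0 = rng.standard_normal(5), rng.standard_normal(5)
print("EG averaged iterate:  ", extragradient(x0.copy(), y0.copy()))
print("OGDA averaged iterate:", ogda(x0.copy(), y0.copy()))
```

A design point visible in the updates: EG evaluates two gradients per iteration (at the current point and at the midpoint), whereas OGDA reuses the previous iteration's gradient as a correction term, so it needs only one new gradient evaluation per step while still serving as an approximation of the proximal point step.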

