
Convergence Rate of O(1/k) for Optimistic Gradient and Extragradient Methods in Smooth Convex-Concave Saddle Point Problems

Author(s)
Mokhtari, Aryan; Ozdaglar, Asuman E; Pattathil, Sarath
Terms of use
Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
Abstract
© 2020 Society for Industrial and Applied Mathematics. We study the iteration complexity of the optimistic gradient descent-ascent (OGDA) method and the extragradient (EG) method for finding a saddle point of a convex-concave unconstrained min-max problem. To do so, we first show that both OGDA and EG can be interpreted as approximate variants of the proximal point method. This is similar to the approach taken in A. Nemirovski (2004), SIAM J. Optim., 15, pp. 229-251, which analyzes EG as an approximation of the "conceptual mirror prox." In this paper, we highlight how the gradients used in OGDA and EG approximate the gradient of the proximal point method. We then exploit this interpretation to show that both algorithms produce iterates that remain within a bounded set. We further show that the primal-dual gap of the averaged iterates generated by both of these algorithms converges at a rate of O(1/k). Our theoretical analysis is of interest as it provides the first convergence rate estimate for OGDA in the general convex-concave setting. Moreover, it provides a simple convergence analysis for the EG algorithm in terms of function value without using a compactness assumption.
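
For context, below is a minimal sketch of the standard extragradient and optimistic gradient descent-ascent updates that the abstract refers to, applied to an unconstrained convex-concave problem min_x max_y f(x, y). This is not code from the article; the bilinear objective, matrix A, step size eta, and function names are illustrative assumptions.

```python
import numpy as np

# Illustrative bilinear saddle point problem f(x, y) = x^T A y
# (an assumed example, not the setup used in the paper).
A = np.array([[1.0, 0.5], [-0.5, 1.0]])   # assumed problem data
eta = 0.1                                  # assumed step size

def grad_x(x, y):
    # Gradient of f with respect to x for f(x, y) = x^T A y.
    return A @ y

def grad_y(x, y):
    # Gradient of f with respect to y for f(x, y) = x^T A y.
    return A.T @ x

def extragradient_step(x, y, eta):
    # Extrapolation step: take a gradient step from (x, y) to a midpoint...
    x_mid = x - eta * grad_x(x, y)
    y_mid = y + eta * grad_y(x, y)
    # ...then update the original iterate using gradients at the midpoint.
    x_new = x - eta * grad_x(x_mid, y_mid)
    y_new = y + eta * grad_y(x_mid, y_mid)
    return x_new, y_new

def ogda_step(x, y, x_prev, y_prev, eta):
    # Optimistic update: current gradient plus a correction term built from
    # the previous gradient, which approximates the proximal point step.
    x_new = x - 2 * eta * grad_x(x, y) + eta * grad_x(x_prev, y_prev)
    y_new = y + 2 * eta * grad_y(x, y) - eta * grad_y(x_prev, y_prev)
    return x_new, y_new

# The O(1/k) primal-dual gap guarantee in the abstract is stated for the
# averaged iterates, e.g. x_bar_k = (1/k) * (x_1 + ... + x_k).
```
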
Date issued
2020
URI
https://hdl.handle.net/1721.1/143825
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Journal
SIAM Journal on Optimization
Publisher
Society for Industrial & Applied Mathematics (SIAM)
Citation
Mokhtari, Aryan, Ozdaglar, Asuman E., and Pattathil, Sarath. 2020. "Convergence Rate of O(1/k) for Optimistic Gradient and Extragradient Methods in Smooth Convex-Concave Saddle Point Problems." SIAM Journal on Optimization, 30 (4).
Version: Final published version

Collections
  • MIT Open Access Articles
