Show simple item record

dc.contributor.author: Fallah, Alireza
dc.contributor.author: Ozdaglar, Asuman
dc.contributor.author: Pattathil, Sarath
dc.date.accessioned: 2022-07-18T17:00:45Z
dc.date.available: 2022-07-18T17:00:45Z
dc.date.issued: 2020
dc.identifier.uri: https://hdl.handle.net/1721.1/143824
dc.description.abstract: © 2020 IEEE. In this paper, we study the minimax optimization problem in the smooth and strongly convex-strongly concave setting when we have access to noisy estimates of gradients. In particular, we first analyze the stochastic Gradient Descent Ascent (GDA) method with a constant stepsize and show that it converges to a neighborhood of the solution of the minimax problem. We further provide tight bounds on the convergence rate and the size of this neighborhood. Next, we propose a multistage variant of stochastic GDA (M-GDA) that runs in multiple stages with a particular learning rate decay schedule and converges to the exact solution of the minimax problem. We show that M-GDA achieves the lower bounds in terms of noise dependence without any assumptions on the knowledge of the noise characteristics. We also show that M-GDA obtains a linear decay rate with respect to the initial error, although its dependence on the condition number is suboptimal. To improve this dependence, we apply the multistage machinery to the stochastic Optimistic Gradient Descent Ascent (OGDA) algorithm and propose the M-OGDA algorithm, which also achieves the optimal linear decay rate with respect to the initial error. To the best of our knowledge, this method is the first to simultaneously achieve the best dependence on the noise characteristics as well as on the initial error and condition number. [en_US]
dc.language.iso: en
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE) [en_US]
dc.relation.isversionof: 10.1109/CDC42340.2020.9304033 [en_US]
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike [en_US]
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/ [en_US]
dc.source: arXiv [en_US]
dc.title: An Optimal Multistage Stochastic Gradient Method for Minimax Problems [en_US]
dc.type: Article [en_US]
dc.identifier.citation: Fallah, Alireza, Ozdaglar, Asuman and Pattathil, Sarath. 2020. "An Optimal Multistage Stochastic Gradient Method for Minimax Problems." Proceedings of the IEEE Conference on Decision and Control, 2020-December.
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.relation.journal: Proceedings of the IEEE Conference on Decision and Control [en_US]
dc.eprint.version: Original manuscript [en_US]
dc.type.uri: http://purl.org/eprint/type/ConferencePaper [en_US]
eprint.status: http://purl.org/eprint/status/NonPeerReviewed [en_US]
dc.date.updated: 2022-07-18T16:56:10Z
dspace.orderedauthors: Fallah, A; Ozdaglar, A; Pattathil, S [en_US]
dspace.date.submission: 2022-07-18T16:56:11Z
mit.journal.volume: 2020-December [en_US]
mit.license: OPEN_ACCESS_POLICY
mit.metadata.status: Authority Work and Publication Information Needed [en_US]
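The abstract describes the multistage scheme at a high level: run stochastic GDA with a constant stepsize within each stage (converging to a noise-dependent neighborhood of the saddle point), then shrink the stepsize between stages so the neighborhood contracts toward the exact solution. The sketch below illustrates that idea on a toy strongly convex-strongly concave quadratic; the stage lengths and the halving schedule here are hypothetical placeholders, not the specific decay schedule analyzed in the paper.

```python
import random


def multistage_sgda(grad_x, grad_y, x0, y0, noise_std=0.1,
                    eta0=0.1, num_stages=5, iters_per_stage=2000, seed=0):
    """Multistage stochastic Gradient Descent Ascent (illustrative sketch).

    Each stage runs plain stochastic GDA with a constant stepsize; the
    stepsize is halved between stages (a hypothetical schedule -- the
    paper specifies its own learning rate decay schedule).
    """
    rng = random.Random(seed)
    x, y, eta = x0, y0, eta0
    for _ in range(num_stages):
        for _ in range(iters_per_stage):
            # noisy gradient estimates (additive Gaussian noise)
            gx = grad_x(x, y) + rng.gauss(0.0, noise_std)
            gy = grad_y(x, y) + rng.gauss(0.0, noise_std)
            x -= eta * gx  # descent step on the min variable
            y += eta * gy  # ascent step on the max variable
        eta *= 0.5  # smaller stepsize => smaller noise neighborhood
    return x, y


# Toy objective f(x, y) = 0.5*x**2 + x*y - 0.5*y**2 (strongly
# convex in x, strongly concave in y), saddle point at (0, 0).
def fx(x, y):
    return x + y  # df/dx


def fy(x, y):
    return x - y  # df/dy


x_star, y_star = multistage_sgda(fx, fy, x0=5.0, y0=-3.0)
```

With a fixed stepsize the iterates would stall in a neighborhood whose radius scales with the stepsize and noise level; halving the stepsize stage by stage drives `(x_star, y_star)` progressively closer to the saddle point `(0, 0)`.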

