Generalized stochastic Frank–Wolfe algorithm with stochastic “substitute” gradient for structured convex optimization

Author(s)
Lu, Haihao; Freund, Robert M
Terms of use
Creative Commons Attribution-Noncommercial-Share Alike http://creativecommons.org/licenses/by-nc-sa/4.0/
Abstract
The stochastic Frank–Wolfe method has recently attracted much interest in the context of optimization for statistical and machine learning, due to its ability to work with more general feasible regions. However, there has been a complexity gap in the dependence on the optimality tolerance $\varepsilon$ in the guaranteed convergence rate for stochastic Frank–Wolfe compared to its deterministic counterpart. In this work, we present a new generalized stochastic Frank–Wolfe method that closes this gap for the class of structured optimization problems encountered in statistical and machine learning, characterized by empirical loss minimization with a certain type of "linear prediction" property (formally defined in the paper) that is typically present in loss minimization problems in practice. Our method also introduces the notion of a "substitute gradient," a not-necessarily-unbiased estimate of the gradient. We show that our new method is equivalent to a particular randomized coordinate mirror descent algorithm applied to the dual problem, which in turn provides a new interpretation of randomized dual coordinate descent in the primal space. In the special case of a strongly convex regularizer, our generalized stochastic Frank–Wolfe method (as well as the randomized dual coordinate descent method) exhibits linear convergence. Furthermore, we present computational experiments indicating that our method outperforms other stochastic Frank–Wolfe methods for sufficiently small optimality tolerances, consistent with the theory developed herein.
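
To make the "substitute gradient" idea concrete, here is a minimal, illustrative sketch of a stochastic Frank–Wolfe iteration that maintains a running, not-necessarily-unbiased gradient estimate by refreshing one per-sample derivative at a time, exploiting the linear-prediction structure f(x) = (1/n) Σ_i ℓ_i(aᵢᵀx). This is not the paper's exact algorithm or step-size rule; the logistic loss, the ℓ1-ball feasible region, the 2/(k+2) step size, and all function names here are our own assumptions for illustration.

```python
# Illustrative sketch only (not the paper's exact algorithm): stochastic
# Frank-Wolfe with a "substitute" gradient for f(x) = (1/n) sum_i l_i(a_i^T x).
# Assumed setup: logistic loss with labels b_i in {-1,+1}, l1-ball feasible
# region of radius r, and the classic 2/(k+2) step size.
import numpy as np

def lmo_l1_ball(g, r):
    """Linear minimization oracle for {x : ||x||_1 <= r}:
    argmin_{||v||_1 <= r} <g, v> is a signed, scaled coordinate vector."""
    j = np.argmax(np.abs(g))
    v = np.zeros_like(g)
    v[j] = -r * np.sign(g[j])
    return v

def stochastic_fw_substitute(A, b, r=10.0, iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    w = np.zeros(n)          # per-sample derivative estimates l_i'(a_i^T x)
    g = A.T @ w / n          # substitute gradient, kept in sync with w
    for k in range(iters):
        i = rng.integers(n)  # refresh one coordinate of w at the current x
        new_wi = -b[i] / (1.0 + np.exp(b[i] * (A[i] @ x)))  # logistic derivative
        g += (new_wi - w[i]) * A[i] / n   # O(d) update of the substitute gradient
        w[i] = new_wi
        v = lmo_l1_ball(g, r)             # Frank-Wolfe direction from the LMO
        gamma = 2.0 / (k + 2.0)
        x = (1 - gamma) * x + gamma * v
    return x

# Usage on a tiny synthetic logistic-regression instance.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 20))
b = np.sign(A @ rng.standard_normal(20) + 0.1 * rng.standard_normal(200))
x_hat = stochastic_fw_substitute(A, b)
print("l1 norm of final iterate:", np.abs(x_hat).sum())
```

Because only one entry of w is refreshed per iteration, g is generally not an unbiased estimate of the true gradient at the current iterate, which is precisely what the abstract's "substitute gradient" terminology signals; the stale entries decay in influence as iterates stabilize.
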
Date issued
2020-03-04
URI
https://hdl.handle.net/1721.1/136776
Department
Massachusetts Institute of Technology. Department of Mathematics; Sloan School of Management
Publisher
Springer Berlin Heidelberg

Collections
  • MIT Open Access Articles
