Stochastic methods for large-scale linear problems, variational inequalities, and convex optimization

Author(s)
Wang, Mengdi
Download
Full printable version (23.94 MB)
Other Contributors
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Advisor
Dimitri P. Bertsekas.
Terms of use
M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission. http://dspace.mit.edu/handle/1721.1/7582
Abstract
This thesis considers stochastic methods for large-scale linear systems, variational inequalities, and convex optimization problems. I focus on special structures that lend themselves to sampling, such as when the linear/nonlinear mapping or the objective function is an expected value or the sum of a large number of terms, and/or the constraint is the intersection of a large number of simpler sets. For linear systems, I propose modifications to deterministic methods that allow the use of random samples while maintaining convergence in a stochastic sense, which is particularly challenging when the unknown system is singular or nearly singular. For variational inequalities and optimization problems, I propose a class of methods that combine elements of incremental constraint projection, stochastic gradient/subgradient descent, and proximal algorithms. These methods can be applied with various sampling schemes suitable for applications involving distributed implementation, large data sets, or online learning. I use a unified framework to analyze the convergence and the rate of convergence of these methods. This framework is based on a pair of supermartingale bounds, which control the convergence to feasibility and the convergence to optimality, respectively, and are coupled at different time scales.
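The abstract's second family of methods combines incremental constraint projection with stochastic subgradient steps. The following is a minimal illustrative Python sketch of that general pattern, not the thesis's actual algorithm or analysis: it assumes a finite-sum objective sampled one component at a time and a feasible set given as an intersection of halfspaces a·x <= b, for which projections have a closed form. All names (random_projection_subgradient, subgrad_samples, constraints) are hypothetical.

    import numpy as np

    def random_projection_subgradient(subgrad_samples, constraints, x0,
                                      n_iters=20000, seed=0):
        # Hypothetical sketch (not the thesis's algorithm): at each iteration,
        # take a subgradient step using one randomly sampled component of the
        # objective, then project the result onto one randomly sampled
        # halfspace constraint a @ x <= b.
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        for k in range(n_iters):
            alpha = 1.0 / np.sqrt(k + 1)                   # diminishing stepsize
            g = subgrad_samples[rng.integers(len(subgrad_samples))](x)
            z = x - alpha * g                              # stochastic subgradient step
            a, b = constraints[rng.integers(len(constraints))]
            viol = a @ z - b
            if viol > 0:                                   # exact projection onto the sampled halfspace
                z = z - (viol / (a @ a)) * a
            x = z
        return x

    # Toy usage: minimize ||x - c||_1 over {x : x[0] <= 1, x[1] <= 0.5}.
    c = np.array([2.0, -1.0])
    subgrads = [lambda x: np.sign(x - c)]
    halfspaces = [(np.array([1.0, 0.0]), 1.0), (np.array([0.0, 1.0]), 0.5)]
    print(random_projection_subgradient(subgrads, halfspaces, x0=np.zeros(2)))

The sketch only samples constraints and objective components uniformly; the thesis studies broader sampling schemes and the coupled supermartingale analysis that controls feasibility and optimality at different time scales.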
Description
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013.
 
Cataloged from PDF version of thesis.
 
Includes bibliographical references (p. 201-207).
 
Date issued
2013
URI
http://hdl.handle.net/1721.1/82367
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology
Keywords
Electrical Engineering and Computer Science.

Collections
  • Doctoral Theses
