DSpace@MIT

Computing stationary distribution locally

Author(s)
Lee, Christina (Christina Esther)
Download: Full printable version (5.7 MB)
Other Contributors
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Advisor
Asuman Ozdaglar and Devavrat Shah.
Terms of use
M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission. http://dspace.mit.edu/handle/1721.1/7582
Abstract
Computing stationary probabilities of states in a large countable-state-space Markov chain (MC) has become central to many modern scientific disciplines, whether in statistical inference problems or in network analyses. Standard methods involve large matrix multiplications, as in power iteration, or long simulations of random walks to sample states from the stationary distribution, as in Markov chain Monte Carlo (MCMC). However, these approaches lack clear guarantees for convergence rates in the general setting. When the state space is prohibitively large, even algorithms that scale linearly in the size of the state space and require computation on behalf of every node in the state space are too expensive. In this thesis, we set out to address this outstanding challenge of computing the stationary probability of a given state in a Markov chain locally, efficiently, and with provable performance guarantees. We provide a novel algorithm that answers whether a given state has stationary probability smaller or larger than a given value Δ ∈ (0, 1). Our algorithm accesses only a local neighborhood of the given state of interest, with respect to the graph induced between states of the Markov chain through its transitions. The algorithm can be viewed as a truncated Monte Carlo method. We provide correctness and convergence rate guarantees for this method that highlight the dependence on the truncation threshold and the mixing properties of the graph. Simulation results complementing our theoretical guarantees suggest that this method is effective when our interest is in finding states with high stationary probability.
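The truncated Monte Carlo idea the abstract describes can be illustrated with a minimal sketch. This is not the thesis's exact algorithm; it only uses the standard identity π(i) = 1/E[T_i], where T_i is the return time to state i, together with a truncation threshold (called `theta` below). Because truncation can only shorten walks, 1/E[min(T_i, θ)] upper-bounds π(i), so comparing the estimate to Δ lets one answer "is π(i) smaller than Δ?" while touching only a local neighborhood of i. The toy 3-state chain `P` and all function names are hypothetical illustrations.

```python
import random

# Hypothetical toy Markov chain: state -> list of (next_state, probability).
P = {
    0: [(0, 0.5), (1, 0.3), (2, 0.2)],
    1: [(0, 0.2), (1, 0.6), (2, 0.2)],
    2: [(0, 0.3), (1, 0.3), (2, 0.4)],
}

def step(state, rng):
    """Sample one transition of the chain from `state`."""
    r = rng.random()
    acc = 0.0
    for nxt, p in P[state]:
        acc += p
        if r < acc:
            return nxt
    return P[state][-1][0]  # guard against floating-point round-off

def truncated_return_time(i, theta, rng):
    """Length of a random walk from i until it returns to i, capped at theta."""
    state = step(i, rng)
    t = 1
    while state != i and t < theta:
        state = step(state, rng)
        t += 1
    return t

def estimate_pi_upper(i, theta=100, n_samples=20000, seed=0):
    """Upper-bound estimate of pi(i) from truncated return-time samples."""
    rng = random.Random(seed)
    total = sum(truncated_return_time(i, theta, rng) for _ in range(n_samples))
    mean_t = total / n_samples
    # pi(i) = 1 / E[T_i]; truncation shortens walks, so this over-estimates pi(i).
    return 1.0 / mean_t
```

To answer the decision question in the abstract, one would compare `estimate_pi_upper(i)` against Δ: if even the (biased-upward) truncated estimate falls below Δ, the state's true stationary probability must as well. The thesis's guarantees concern how the truncation threshold and the chain's mixing properties control this bias; the sketch above makes no such claims.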
Description
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013.
 
Cataloged from PDF version of thesis.
 
Includes bibliographical references (p. 89-93).
 
Date issued
2013
URI
http://hdl.handle.net/1721.1/82410
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology
Keywords
Electrical Engineering and Computer Science.

Collections
  • Graduate Theses
