Near-Optimal Learning in Sequential Games

Author(s)
Yu, Tiancheng
Advisor
Sra, Suvrit
Terms of use
In Copyright - Educational Use Permitted Copyright retained by author(s) https://rightsstatements.org/page/InC-EDU/1.0/
Abstract
Decision making is ubiquitous, and some problems become particularly challenging due to their sequential nature, where later decisions depend on earlier ones. While humans have been attempting to solve sequential decision-making problems for a long time, modern computational and machine-learning techniques are needed to find the optimal decision rule. One popular approach is the reinforcement learning (RL) perspective, in which an agent learns the optimal decision rule by receiving rewards based on its actions. In the presence of multiple learning agents, sequential decision-making problems become sequential games. In this setting, the learning objective shifts from finding an optimal decision rule to finding a Nash equilibrium, where no agent can increase its reward by unilaterally switching to another decision rule. To handle both the sequential nature of the problem and the presence of other learning agents, multi-agent RL tasks require even more data than supervised learning and single-agent RL tasks. Consequently, sample efficiency becomes a critical concern for the success of multi-agent RL.

In this thesis, I study arguably the most fundamental problems of learning in sequential games:
1. (Lower bound) How many samples are necessary to find a Nash equilibrium in a sequential game, no matter what learning algorithm is used?
2. (Upper bound) How can one design (computationally) efficient learning algorithms with sharp sample complexity guarantees?
When the upper and lower bounds match, (minimax) optimal learning is achieved. It turns out that exploiting the structure of sequential games is the key to optimal learning.

In this thesis, we investigate near-optimal learning in two types of sequential games:
1. (Markov games) All agents can observe the underlying states (Chapter 2), and
2. (Extensive-form games) Different agents can have different observations given the same state (Chapter 5).

To achieve near-optimal learning, a series of novel algorithmic ideas and analytical tools is introduced, such as:
1. (Adaptive uncertainty quantification) Sharp uncertainty quantification of the value-function estimates to design near-optimal exploration bonuses (Chapter 3),
2. (Certified policies) A non-uniform, step-wise reweighting of historical policies to produce approximate Nash equilibrium policies (Chapter 4),
3. (Balanced exploration) Achieving optimal exploration of a game tree based on the sizes of its subtrees (Chapter 6),
4. (Log-partition function reformulation) Re-interpreting classical algorithms as computing gradients of a log-partition function (Chapter 7), which may be of independent interest.
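For reference, the Nash equilibrium condition described informally in the abstract ("no agent can increase its reward by unilaterally switching to another decision rule") can be written out as a short LaTeX sketch; the value notation V_i for agent i's expected reward under a joint policy (pi_i, pi_{-i}) is illustrative and not taken from the thesis:

% A joint policy \pi^\star is a Nash equilibrium if no agent can improve its
% expected reward by unilaterally deviating (V_i is illustrative notation):
\[
  V_i(\pi_i^\star, \pi_{-i}^\star) \;\ge\; V_i(\pi_i, \pi_{-i}^\star)
  \qquad \text{for every agent } i \text{ and every alternative policy } \pi_i .
\]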
Date issued
2023-06
URI
https://hdl.handle.net/1721.1/151570
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology

Collections
  • Doctoral Theses
