Near-Optimal Learning in Sequential Games
Author(s)
Yu, Tiancheng
Advisor
Sra, Suvrit
Abstract
Decision making is ubiquitous, and some problems become particularly challenging due to their sequential nature, where later decisions depend on earlier ones. While humans have been attempting to solve sequential decision making problems for a long time, modern computational and machine learning techniques are needed to find the optimal decision rule. One popular approach is the reinforcement learning (RL) perspective, in which an agent learns the optimal decision rule by receiving rewards based on its actions.
In the presence of multiple learning agents, sequential decision making problems become sequential games. In this setting, the learning objective shifts from finding an optimal decision rule to finding a Nash equilibrium, where none of the agents can increase their reward by unilaterally switching to another decision rule. To handle both the sequential nature of the problem and the presence of the other learning agents, multi-agent RL tasks require even more data than supervised learning and single-agent RL tasks. Consequently, sample efficiency becomes a critical concern for the success of multi-agent RL.
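For intuition, here is a minimal sketch of the unilateral-deviation check that defines a Nash equilibrium, written for a toy two-player zero-sum matrix game; the payoffs, function names, and tolerance are illustrative assumptions, not taken from the thesis (which treats sequential games).

    import numpy as np

    # Toy zero-sum matrix game (rock-paper-scissors): the row player receives x^T A y,
    # the column player receives x^T B y with B = -A. Payoffs are illustrative only.
    A = np.array([[ 0.0, -1.0,  1.0],
                  [ 1.0,  0.0, -1.0],
                  [-1.0,  1.0,  0.0]])
    B = -A

    def is_approx_nash(x, y, eps=1e-8):
        """Return True if no player can gain more than eps by deviating unilaterally."""
        row_gain = np.max(A @ y) - x @ A @ y   # best unilateral improvement for the row player
        col_gain = np.max(x @ B) - x @ B @ y   # best unilateral improvement for the column player
        return row_gain <= eps and col_gain <= eps

    # The uniform mixed strategy profile is the Nash equilibrium of this game.
    x = y = np.ones(3) / 3
    print(is_approx_nash(x, y))  # True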
In this thesis, I study arguably the most fundamental problems of learning in sequential games:
1. (Lower bound) How many samples are necessary to find a Nash equilibrium in a sequential game, no matter what learning algorithm is used?
2. (Upper bound) How can we design (computationally) efficient learning algorithms with sharp sample complexity guarantees?
When the upper and lower bounds match, (minimax) optimal learning is achieved. It turns out that exploiting the structure of sequential games is the key to optimal learning. In this thesis, we investigate near-optimal learning in two types of sequential games:
1. (Markov games) All agents can observe the underlying state (Chapter 2), and
2. (Extensive-form games) Different agents can have different observations given the same state (Chapter 5).
To achieve near-optimal learning, a series of novel algorithmic ideas and analytical tools is introduced, such as
1. (Adaptive uncertainty quantification) Sharp uncertainty quantification of the value function estimates to design near-optimal exploration bonuses (Chapter 3),
2. (Certified policy) A non-uniform and step-wise reweighting of historical policies to produce approximate Nash equilibrium policies (Chapter 4),
3. (Balanced exploration) Achieving optimal exploration of a game tree based on the size of its subtrees (Chapter 6),
4. (Log-partition function reformulation) Re-interpreting classical algorithms as computing gradients of a log-partition function (Chapter 7; a small numerical sketch of this identity follows the list),
which may be of independent interest.
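As a flavor of the last idea, here is a minimal numerical sketch (a generic log-sum-exp identity, not code from the thesis) checking that the gradient of the log-partition function log sum_a exp(x_a) equals the softmax distribution over x, which is exactly the distribution played by classical exponential-weights-style algorithms.

    import numpy as np

    def log_partition(x):
        """Log-partition (log-sum-exp) of a score vector x, shifted for numerical stability."""
        m = np.max(x)
        return m + np.log(np.sum(np.exp(x - m)))

    def softmax(x):
        z = np.exp(x - np.max(x))
        return z / z.sum()

    # Finite-difference check that the gradient of the log-partition function is softmax(x).
    x = np.array([0.3, -1.2, 2.0, 0.5])
    eps = 1e-6
    grad_fd = np.array([(log_partition(x + eps * e) - log_partition(x - eps * e)) / (2 * eps)
                        for e in np.eye(len(x))])
    print(np.allclose(grad_fd, softmax(x), atol=1e-6))  # True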
Date issued
2023-06
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology