On Reinforcement Learning for Turn-based Zero-sum Markov Games
Author(s)
Shah, D; Somani, V; Xie, Q; Xu, Z
Terms of use
Creative Commons Attribution
Abstract
© 2020 Owner/Author. We consider the problem of finding a Nash equilibrium for two-player turn-based zero-sum games. Inspired by the AlphaGo Zero (AGZ) algorithm, we develop a Reinforcement Learning-based approach. Specifically, we propose the Explore-Improve-Supervise (EIS) method, which combines "exploration", "policy improvement" and "supervised learning" to find the value function and policy associated with the Nash equilibrium. We identify sufficient conditions for the convergence and correctness of such an approach. For a concrete instance of EIS where a random policy is used for "exploration", Monte-Carlo Tree Search is used for "policy improvement" and Nearest Neighbors is used for "supervised learning", we establish that this method finds an $\varepsilon$-approximate value function of the Nash equilibrium in $\widetilde{O}(\varepsilon^{-(d+4)})$ steps when the underlying state space of the game is continuous and $d$-dimensional. This is nearly optimal, as we establish a lower bound of $\widetilde{\Omega}(\varepsilon^{-(d+2)})$ for any policy.
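As a rough illustration of the EIS loop described in the abstract, the following Python snippet alternates the three steps on a toy continuous state space in [0, 1]^d. It is a minimal sketch under stated assumptions, not the paper's implementation: `mcts_improve` is a hypothetical one-step-rollout stand-in for the MCTS-based "policy improvement" step, the action set and reward are invented, and scikit-learn's `KNeighborsRegressor` stands in for the Nearest Neighbors "supervised learning" step.

```python
# Minimal sketch of an Explore-Improve-Supervise (EIS) loop.
# All names, constants, and the toy dynamics below are illustrative
# assumptions, not the paper's code.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

d = 2            # dimension of the continuous state space (assumed)
n_rounds = 10    # number of EIS iterations (assumed)
n_samples = 256  # states sampled per round by the exploration policy (assumed)

rng = np.random.default_rng(0)

def mcts_improve(state, value_fn):
    """Placeholder for the 'improve' step. The paper uses Monte-Carlo Tree
    Search guided by the current value estimate; here a crude one-step
    rollout average stands in so the sketch runs end to end."""
    actions = rng.uniform(-0.1, 0.1, size=(8, d))         # toy action set
    next_states = np.clip(state + actions, 0.0, 1.0)
    rewards = -np.linalg.norm(next_states - 0.5, axis=1)  # toy reward
    # Turn-based zero-sum: the opponent moves next, so their value is negated.
    return float(np.max(rewards - value_fn(next_states)))

value = KNeighborsRegressor(n_neighbors=5)
value_fn = lambda s: np.zeros(len(s))  # round-0 value estimate: all zeros

for _ in range(n_rounds):
    # Explore: sample states with a random exploration policy.
    states = rng.uniform(0.0, 1.0, size=(n_samples, d))
    # Improve: query the (stubbed) MCTS oracle at each sampled state.
    targets = np.array([mcts_improve(s, value_fn) for s in states])
    # Supervise: fit nearest-neighbor regression to the improved values.
    value.fit(states, targets)
    value_fn = value.predict
```

Each round fits the regressor to the improved value targets, so the next round's "improve" step queries a progressively better value estimate; the paper's sample-complexity bounds concern how many such steps are needed to reach an ε-approximate value function.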
Date issued
2020
Department
Massachusetts Institute of Technology. Laboratory for Information and Decision Systems
Journal
FODS 2020 - Proceedings of the 2020 ACM-IMS Foundations of Data Science Conference
Publisher
ACM
Citation
Shah, D., Somani, V., Xie, Q. and Xu, Z. 2020. "On Reinforcement Learning for Turn-based Zero-sum Markov Games." FODS 2020 - Proceedings of the 2020 ACM-IMS Foundations of Data Science Conference.
Version: Final published version