Policy Distillation and Value Matching in Multiagent Reinforcement Learning
Author(s)
Wadhwania, Samir; Kim, Dong-Ki; Omidshafiei, Shayegan; How, Jonathan P.
Open Access Policy
Creative Commons Attribution-Noncommercial-Share Alike
Terms of use
Abstract
© 2019 IEEE. Multiagent reinforcement learning (MARL) algorithms have been demonstrated on complex tasks that require a team of agents to coordinate in order to succeed. Existing works share information between agents via centralized critics to stabilize learning, or via communication to improve performance, but generally do not consider how information sharing can address the curse of dimensionality in MARL. We posit that a multiagent problem can be decomposed into a multitask problem in which each agent explores only a subset of the state space rather than the entire state space. This paper introduces a multiagent actor-critic algorithm that combines knowledge from homogeneous agents through distillation and value matching; it outperforms policy distillation alone and allows further learning in both discrete and continuous action spaces.
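The combined objective described in the abstract can be illustrated with a small numeric sketch: a distillation term (KL divergence from each teacher agent's policy to a shared student policy) plus a value-matching term (squared error between value estimates). This is a hypothetical illustration assuming softmax policies over discrete actions; the function and argument names, and the weighting coefficient `beta`, are assumptions and not taken from the paper.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_value_match_loss(teacher_logits, teacher_values,
                             student_logits, student_values, beta=1.0):
    """Sketch of a combined distillation + value-matching objective.

    teacher_logits, student_logits: (n_agents, n_states, n_actions)
    teacher_values, student_values: (n_agents, n_states)
    beta: assumed weight on the value-matching term (hypothetical).
    """
    p_teacher = softmax(teacher_logits)
    p_student = softmax(student_logits)
    # KL(teacher || student), averaged over agents and states.
    eps = 1e-8
    kl = np.sum(p_teacher * (np.log(p_teacher + eps) - np.log(p_student + eps)),
                axis=-1).mean()
    # Value matching: mean squared error between value estimates.
    value_mse = np.mean((teacher_values - student_values) ** 2)
    return kl + beta * value_mse
```

With identical teacher and student parameters the loss is zero, and it grows as either the policies or the value estimates diverge, so minimizing it pulls the distilled agent toward the teachers in both policy and value space.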
Date issued
2019-11
Department
Massachusetts Institute of Technology. Laboratory for Information and Decision Systems
Journal
IEEE International Conference on Intelligent Robots and Systems
Publisher
IEEE
Citation
Wadhwania, Samir, Kim, Dong-Ki, Omidshafiei, Shayegan and How, Jonathan P. 2019. "Policy Distillation and Value Matching in Multiagent Reinforcement Learning." IEEE International Conference on Intelligent Robots and Systems.
Version: Original manuscript