
Learning to teach and meta-learning for sample-efficient multiagent reinforcement learning

Author(s)
Kim, Dong Ki (Aeronautics and astronautics scientist), Massachusetts Institute of Technology.
Download: 1201259574-MIT.pdf (10.11 MB)
Other Contributors
Massachusetts Institute of Technology. Department of Aeronautics and Astronautics.
Advisor
Jonathan P. How.
Terms of use
MIT theses may be protected by copyright. Please reuse MIT thesis content according to the MIT Libraries Permissions Policy, which is available through the URL provided. http://dspace.mit.edu/handle/1721.1/7582
Abstract
Learning optimal policies in the presence of the non-stationary policies of other, simultaneously learning agents is a major challenge in multiagent reinforcement learning (MARL). The difficulty is compounded by further challenges, including multiagent credit assignment, the high dimensionality of the problems, and the lack of convergence guarantees. As a result, many experiences are often required to learn effective multiagent policies. This thesis introduces two frameworks that reduce the sample complexity of MARL. The first framework reduces sample complexity by exchanging knowledge between agents. In particular, recent work on agents that learn to teach their teammates has demonstrated that action advising accelerates team-wide learning.
 
However, the prior work simplified the learning of advising policies by using simple function approximators and by advising only with primitive (low-level) actions, both of which limit the scalability of learning and teaching to more complex domains. This thesis introduces a novel learning-to-teach framework, hierarchical multiagent teaching (HMAT), that scales to complex environments by using a deep representation for student policies and by advising with more expressive extended-action sequences over multiple levels of temporal abstraction. Our empirical evaluations demonstrate that HMAT improves team-wide learning progress in large, complex domains where previous approaches fail. HMAT also learns teaching policies that effectively transfer knowledge to different teammates with knowledge of different tasks, even when the teammates have heterogeneous action spaces.
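As a rough illustration of the advising loop described above, here is a minimal Python sketch of budget-limited, extended-action advising; the teacher/student interfaces, subgoal names, rewards, and budget below are our own illustrative assumptions, not the thesis implementation:

    import random

    def student_policy(state):
        # The student's own high-level choice of an extended action (subgoal).
        return random.choice(["goto_A", "goto_B"])

    def teacher_policy(state, intended_subgoal):
        # Toy heuristic standing in for a learned advising policy:
        # override the student only when its choice looks poor.
        return "goto_A" if intended_subgoal == "goto_B" else None

    def run_episode(budget=3, horizon=5):
        state, ret = 0, 0.0
        for _ in range(horizon):
            intended = student_policy(state)
            advice = teacher_policy(state, intended) if budget > 0 else None
            if advice is not None:
                budget -= 1  # each piece of advice consumes budget
            subgoal = advice if advice is not None else intended
            # A low-level controller would execute `subgoal` over several
            # primitive steps here; we substitute a toy reward instead.
            ret += 1.0 if subgoal == "goto_A" else 0.0
            state += 1
        return ret

    print(run_episode())

In this framing the teacher is itself a learner whose reward reflects the student's learning progress, which is what makes advising a sequential decision problem in its own right.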
 
The second framework introduces the first meta-learning-based policy gradient theorem, which enables fast adaptation (i.e., within only a few iterations) to the non-stationary policies of fellow agents in MARL. The policy gradient theorem that we prove inherently includes both a self-shaping term, which accounts for the impact of a meta-agent's initial policy on its adapted policy, and an opponent-shaping term, which exploits the learning dynamics of the other agents. We demonstrate that our meta-policy gradient enables agents to meta-learn about the different sources of non-stationarity in the environment and thereby improve their learning performance.
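To make the two shaping terms concrete, here is a schematic one-inner-step derivation in our own notation (an illustrative sketch under simplifying assumptions, not the theorem as stated in the thesis). With meta-agent parameters $\theta$ and opponent parameters $\phi$ each adapted by one gradient step,

    \theta' = \theta + \alpha \nabla_\theta J(\theta, \phi), \qquad
    \phi'   = \phi   + \beta  \nabla_\phi J^{\mathrm{opp}}(\theta, \phi),

the gradient of the post-adaptation return decomposes by the chain rule as

    \nabla_\theta \, \mathbb{E}\!\left[ R(\theta', \phi') \right]
      = \underbrace{\left(\frac{\partial \theta'}{\partial \theta}\right)^{\!\top} \nabla_{\theta'} R}_{\text{self-shaping}}
      + \underbrace{\left(\frac{\partial \phi'}{\partial \theta}\right)^{\!\top} \nabla_{\phi'} R}_{\text{opponent-shaping}}.

The opponent-shaping term is nonzero because the opponent's inner-loop objective $J^{\mathrm{opp}}$ is estimated from trajectories that the meta-agent's initial policy influenced, so differentiating through the opponent's update exposes its learning dynamics to the meta-agent.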
 
Description
Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2020
 
Cataloged from PDF of thesis.
 
Includes bibliographical references (pages 89-97).
 
Date issued
2020
URI
https://hdl.handle.net/1721.1/128312
Department
Massachusetts Institute of Technology. Department of Aeronautics and Astronautics
Publisher
Massachusetts Institute of Technology
Keywords
Aeronautics and Astronautics.

Collections
  • Graduate Theses
