Decentralized teaching and learning in cooperative multiagent systems
Massachusetts Institute of Technology. Department of Aeronautics and Astronautics.
Jonathan P. How.
Cooperative multiagent decision making is a ubiquitous problem with many real-world applications, including organization of driverless car fleets [1, 2], target surveillance, and warehouse automation [4-6]. The unifying challenge in these real-world settings is the presence of domain stochasticity (due to noisy sensors and actuators) and partial observability (due to the local perspectives of agents), both of which can obfuscate the underlying state. In many practical applications, it is desirable for teams of agents to be capable of executing well-coordinated policies despite these uncertainty challenges.

The core assumption of standard multiagent planning approaches is knowledge of an accurate, high-fidelity environment model. In practice, models may be unavailable or inaccurate. In the former case, the models necessary for planning-based approaches must be generated or learned, which may be difficult and/or expensive. In the latter, execution of policies optimized for incorrect models may have dire economic and/or social consequences for systems deployed in the real world. While many works have introduced learning (rather than planning) approaches for multiagent systems, few address the partially observable setting, and even fewer do so in a manner scalable enough for deployment in real-world settings, such as multi-robot systems that face collections of tasks.

The primary objective of this thesis is to develop technologies for scalable learning-based coordination in multiagent settings. Specifically, this thesis introduces methods for hierarchical learning of models and policies that enable multiagent coordination with more realistic sensors, execution in settings where underlying environment contexts may be non-unique or non-stationary, and acceleration of cooperative learning using inter-agent advice exchange.
The algorithms developed are demonstrated in a variety of hardware and simulation settings, including those with complex sensory inputs and realistic dynamics and/or learning objectives, extending beyond the usual task-specific performance objectives to meta-learning (learning to learn) and multitask learning objectives.
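One of the mechanisms mentioned above, inter-agent advice exchange, can be illustrated with a teacher-student action-advising loop: a student agent with a limited advice budget queries a more experienced teacher in states where its own value estimates are uninformative. The sketch below is a minimal illustration of that general idea; the class name, the spread-based uncertainty heuristic, and all parameters are illustrative assumptions, not the thesis's actual algorithm.

```python
# Minimal sketch of teacher-student action advising, one way to
# accelerate cooperative learning via inter-agent advice exchange.
# All names and heuristics here are assumptions for illustration.

import random

class AdvisedLearner:
    """Student agent that may request an action from a teacher."""

    def __init__(self, actions, advice_budget=10, epsilon=0.2):
        self.q = {}                     # (state, action) -> value estimate
        self.actions = actions
        self.advice_budget = advice_budget
        self.epsilon = epsilon          # exploration rate when acting alone

    def uncertainty(self, state):
        # Heuristic: spread of Q-values in this state; a near-zero
        # spread means the student cannot discriminate between actions.
        vals = [self.q.get((state, a), 0.0) for a in self.actions]
        return max(vals) - min(vals)

    def act(self, state, teacher=None, threshold=0.05):
        # Spend the advice budget only in states the student finds uncertain.
        if teacher and self.advice_budget > 0 and self.uncertainty(state) < threshold:
            self.advice_budget -= 1
            return teacher(state)
        # Otherwise fall back to epsilon-greedy action selection.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q.get((state, a), 0.0))
```

For example, a fresh student (all Q-values zero, hence zero spread) would defer to the teacher until its budget is exhausted or its estimates separate, after which it acts on its own policy. The budget cap keeps teacher communication bounded, which matters for scalability in multi-robot deployments.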
Thesis: Ph.D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2018.
Cataloged from the PDF version of thesis.
Includes bibliographical references (pages 123-140).