
dc.contributor.advisor: Jonathan P. How. (en_US)
dc.contributor.author: Omidshafiei, Shayegan (en_US)
dc.contributor.other: Massachusetts Institute of Technology. Department of Aeronautics and Astronautics. (en_US)
dc.date.accessioned: 2019-02-14T15:50:09Z
dc.date.available: 2019-02-14T15:50:09Z
dc.date.copyright: 2018 (en_US)
dc.date.issued: 2018 (en_US)
dc.identifier.uri: http://hdl.handle.net/1721.1/120422
dc.description: Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2018. (en_US)
dc.description: Cataloged from PDF version of thesis. (en_US)
dc.description: Includes bibliographical references (pages 123-140). (en_US)
dc.description.abstract: Cooperative multiagent decision making is a ubiquitous problem with many real-world applications, including organization of driverless car fleets [1, 2], target surveillance [3], and warehouse automation [4-6]. The unifying challenge in these real-world settings is the presence of domain stochasticity (due to noisy sensors and actuators) and partial observability (due to local perspectives of agents), which can obfuscate the underlying state. In many practical applications, it is desirable for teams of agents to be capable of executing well-coordinated policies despite these uncertainty challenges. The core assumption of standard multiagent planning approaches is knowledge of an accurate, high-fidelity environment model. In practice, models may be unavailable or inaccurate. In the former case, models necessary for planning-based approaches must be generated or learned (which may be difficult and/or expensive). In the latter, execution of policies optimized for incorrect models may have dire economic and/or social consequences for systems deployed in the real world. While many works have introduced learning (rather than planning) approaches for multiagent systems, few address the partially observable setting, and even fewer do so in a scalable manner deployable to real-world settings, such as multi-robot systems that face collections of tasks [7]. The primary objective of this thesis is to develop technologies for scalable learning-based coordination in multiagent settings. Specifically, this thesis introduces methods for hierarchical learning of models and policies that enable multiagent coordination with more realistic sensors, execution in settings where underlying environment contexts may be non-unique or non-stationary, and acceleration of cooperative learning using inter-agent advice exchange. The algorithms developed are demonstrated in a variety of hardware and simulation settings, including those with complex sensory inputs and realistic dynamics and/or learning objectives, extending beyond the usual task-specific performance objectives to meta-learning (learning to learn) and multitask learning objectives. (en_US)
dc.description.statementofresponsibility: by Shayegan Omidshafiei. (en_US)
dc.format.extent: 140 pages (en_US)
dc.language.iso: eng (en_US)
dc.publisher: Massachusetts Institute of Technology (en_US)
dc.rights: MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission. (en_US)
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582 (en_US)
dc.subject: Aeronautics and Astronautics. (en_US)
dc.title: Decentralized teaching and learning in cooperative multiagent systems (en_US)
dc.type: Thesis (en_US)
dc.description.degree: Ph. D. (en_US)
dc.contributor.department: Massachusetts Institute of Technology. Department of Aeronautics and Astronautics
dc.identifier.oclc: 1084478480 (en_US)

