
dc.contributor.author: Amato, Christopher
dc.contributor.author: Liu, Miao
dc.contributor.author: Sivakumar, Kavinayan P
dc.contributor.author: Omidshafiei, Shayegan
dc.contributor.author: How, Jonathan P
dc.date.accessioned: 2018-04-13T22:28:08Z
dc.date.available: 2018-04-13T22:28:08Z
dc.date.issued: 2017-12
dc.date.submitted: 2017-09
dc.identifier.isbn: 978-1-5386-2682-5
dc.identifier.isbn: 978-1-5386-2681-8
dc.identifier.isbn: 978-1-5386-2683-2
dc.identifier.issn: 2153-0866
dc.identifier.uri: http://hdl.handle.net/1721.1/114739
dc.description.abstract: This paper presents a data-driven approach for multi-robot coordination in partially observable domains based on Decentralized Partially Observable Markov Decision Processes (Dec-POMDPs) and macro-actions (MAs). Dec-POMDPs provide a general framework for cooperative sequential decision making under uncertainty, and MAs allow temporally extended and asynchronous action execution. To date, most methods assume the underlying Dec-POMDP model is known a priori or that a full simulator is available during planning time. Previous methods that aim to address these issues suffer from local optimality and sensitivity to initial conditions. Additionally, few hardware demonstrations involving a large team of heterogeneous robots with long planning horizons exist. This work addresses these gaps by proposing an iterative sampling-based Expectation-Maximization algorithm (iSEM) to learn policies using only trajectory data containing observations, MAs, and rewards. Our experiments show the algorithm is able to achieve better solution quality than state-of-the-art learning-based methods. We implement two variants of multi-robot Search and Rescue (SAR) domains (with and without obstacles) on hardware to demonstrate that the learned policies can effectively control a team of distributed robots to cooperate in a partially observable stochastic environment. [en_US]
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE) [en_US]
dc.relation.isversionof: http://dx.doi.org/10.1109/IROS.2017.8206001 [en_US]
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike [en_US]
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/ [en_US]
dc.source: arXiv [en_US]
dc.title: Learning for multi-robot cooperation in partially observable stochastic environments with macro-actions [en_US]
dc.type: Article [en_US]
dc.identifier.citation: Liu, Miao, Kavinayan Sivakumar, Shayegan Omidshafiei, Christopher Amato, and Jonathan P. How. “Learning for Multi-Robot Cooperation in Partially Observable Stochastic Environments with Macro-Actions.” 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), September 2017, Vancouver, BC, Canada, 2017. [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Department of Aeronautics and Astronautics [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Department of Mechanical Engineering [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Laboratory for Information and Decision Systems [en_US]
dc.contributor.mitauthor: Liu, Miao
dc.contributor.mitauthor: Sivakumar, Kavinayan P
dc.contributor.mitauthor: Omidshafiei, Shayegan
dc.contributor.mitauthor: How, Jonathan P
dc.relation.journal: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) [en_US]
dc.eprint.version: Original manuscript [en_US]
dc.type.uri: http://purl.org/eprint/type/ConferencePaper [en_US]
eprint.status: http://purl.org/eprint/status/NonPeerReviewed [en_US]
dc.date.updated: 2018-03-21T16:14:11Z
dspace.orderedauthors: Liu, Miao; Sivakumar, Kavinayan; Omidshafiei, Shayegan; Amato, Christopher; How, Jonathan P. [en_US]
dspace.embargo.terms: N [en_US]
dc.identifier.orcid: https://orcid.org/0000-0002-1648-8325
dc.identifier.orcid: https://orcid.org/0000-0003-0903-0137
dc.identifier.orcid: https://orcid.org/0000-0001-8576-1930
mit.license: OPEN_ACCESS_POLICY [en_US]

