
dc.contributor.author: Amato, Christopher
dc.contributor.author: Konidaris, George
dc.contributor.author: Kaelbling, Leslie P
dc.contributor.author: How, Jonathan P
dc.date.accessioned: 2021-09-20T18:21:48Z
dc.date.available: 2021-09-20T18:21:48Z
dc.date.issued: 2019
dc.identifier.uri: https://hdl.handle.net/1721.1/132314
dc.description.abstract: © 2019 AI Access Foundation. All rights reserved. Decentralized partially observable Markov decision processes (Dec-POMDPs) are general models for decentralized multi-agent decision making under uncertainty. However, they typically model a problem at a low level of granularity, where each agent’s actions are primitive operations lasting exactly one time step. We address the case where each agent has macro-actions: temporally extended actions that may require different amounts of time to execute. We model macro-actions as options in a Dec-POMDP, focusing on actions that depend only on information directly available to the agent during execution. Therefore, we model systems where coordination decisions only occur at the level of deciding which macro-actions to execute. The core technical difficulty in this setting is that the options chosen by each agent no longer terminate at the same time. We extend three leading Dec-POMDP algorithms for policy generation to the macro-action case, and demonstrate their effectiveness in both standard benchmarks and a multi-robot coordination problem. The results show that our new algorithms retain agent coordination while allowing high-quality solutions to be generated for significantly longer horizons and larger state-spaces than previous Dec-POMDP methods. Furthermore, in the multi-robot domain, we show that, in contrast to most existing methods that are specialized to a particular problem class, our approach can synthesize control policies that exploit opportunities for coordination while balancing uncertainty, sensor information, and information about other agents.
dc.language.iso: en
dc.publisher: AI Access Foundation
dc.relation.isversionof: 10.1613/JAIR.1.11418
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/
dc.source: PMC
dc.title: Modeling and Planning with Macro-Actions in Decentralized POMDPs
dc.type: Article
dc.contributor.department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
dc.contributor.department: Massachusetts Institute of Technology. Laboratory for Information and Decision Systems
dc.relation.journal: Journal of Artificial Intelligence Research
dc.eprint.version: Author's final manuscript
dc.type.uri: http://purl.org/eprint/type/JournalArticle
eprint.status: http://purl.org/eprint/status/PeerReviewed
dc.date.updated: 2020-12-22T18:27:57Z
dspace.orderedauthors: Amato, C; Konidaris, G; Kaelbling, LP; How, JP
dspace.date.submission: 2020-12-22T18:28:02Z
mit.journal.volume: 64
mit.license: OPEN_ACCESS_POLICY
mit.metadata.status: Authority Work and Publication Information Needed
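
Note on the abstract: the paper models each agent's macro-actions as options, i.e. a local policy paired with a termination condition, executed using only information directly available to that agent; coordination happens only when choosing the next macro-action, so options may end at different times across agents. The following is a minimal, purely illustrative Python sketch of that idea; all names (MacroAction, run_macro_action, the toy observation encoding) are assumptions for illustration and do not come from the paper or its code.

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class MacroAction:
        """An option: a local policy run until a local termination condition fires."""
        name: str
        policy: Callable[[List[int]], int]       # local observation history -> primitive action
        terminates: Callable[[List[int]], bool]  # local observation history -> stop?

    def run_macro_action(m: MacroAction, step_env: Callable[[int], int]) -> List[int]:
        # Execute m using only this agent's own observations. Coordination with
        # other agents occurs only when the *next* macro-action is chosen, which
        # is why options need not terminate at the same time across agents.
        history: List[int] = []
        while not m.terminates(history):
            a = m.policy(history)            # pick a primitive (one-step) action
            history.append(step_env(a))      # take it, record the local observation
        return history

    # Toy usage: "move forward until a wall (observation 1) is seen".
    go_to_wall = MacroAction(
        name="go-to-wall",
        policy=lambda h: 0,                           # always take primitive action 0
        terminates=lambda h: bool(h) and h[-1] == 1,  # stop once a wall is observed
    )
    obs_stream = iter([0, 0, 1])
    print(run_macro_action(go_to_wall, lambda a: next(obs_stream)))  # -> [0, 0, 1]

Because both policy and terminates depend only on the agent's local history, a planner in this style only needs to reason over which macro-action each agent selects next, not over synchronized primitive time steps.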