
dc.contributor.author: Amato, Christopher
dc.contributor.author: Konidaris, George D.
dc.contributor.author: Kaelbling, Leslie P.
dc.date.accessioned: 2016-01-06T15:59:58Z
dc.date.available: 2016-01-06T15:59:58Z
dc.date.issued: 2014-05
dc.identifier.uri: http://hdl.handle.net/1721.1/100721
dc.description.abstract: Decentralized partially observable Markov decision processes (Dec-POMDPs) are general models for decentralized decision making under uncertainty. However, they typically model a problem at a low level of granularity, where each agent's actions are primitive operations lasting exactly one time step. We address the case where each agent has macro-actions: temporally extended actions which may require different amounts of time to execute. We model macro-actions as 'options' in a factored Dec-POMDP model, focusing on options which depend only on information available to an individual agent while executing. This enables us to model systems where coordination decisions only occur at the level of deciding which macro-actions to execute, and the macro-actions themselves can then be executed to completion. The core technical difficulty when using options in a Dec-POMDP is that the options chosen by the agents no longer terminate at the same time. We present extensions of two leading Dec-POMDP algorithms for generating a policy with options and discuss the resulting form of optimality. Our results show that these algorithms retain agent coordination while allowing near-optimal solutions to be generated for significantly longer horizons and larger state-spaces than previous Dec-POMDP methods.
dc.description.sponsorship: United States. Air Force Office of Scientific Research. Multidisciplinary University Research Initiative (Project FA9550-09-1-0538)
dc.language.iso: en_US
dc.publisher: Association for Computing Machinery (ACM)
dc.relation.isversionof: http://dl.acm.org/citation.cfm?id=2617451
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/
dc.source: MIT web domain
dc.title: Planning with Macro-Actions in Decentralized POMDPs
dc.type: Article
dc.identifier.citation: Christopher Amato, George D. Konidaris, and Leslie P. Kaelbling. 2014. Planning with macro-actions in decentralized POMDPs. In Proceedings of the 2014 international conference on Autonomous agents and multi-agent systems (AAMAS '14). International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC, 1273-1280.
dc.contributor.department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.contributor.mitauthor: Amato, Christopher
dc.contributor.mitauthor: Konidaris, George D.
dc.contributor.mitauthor: Kaelbling, Leslie P.
dc.relation.journal: Proceedings of the 2014 international conference on Autonomous agents and multi-agent systems (AAMAS '14)
dc.eprint.version: Author's final manuscript
dc.type.uri: http://purl.org/eprint/type/ConferencePaper
eprint.status: http://purl.org/eprint/status/NonPeerReviewed
dspace.orderedauthors: Amato, Christopher; Konidaris, George D.; Kaelbling, Leslie P.
dc.identifier.orcid: https://orcid.org/0000-0002-6786-7384
dc.identifier.orcid: https://orcid.org/0000-0001-6054-7145
mit.license: OPEN_ACCESS_POLICY
