Show simple item record

dc.contributor.author: Haupt, Andreas
dc.contributor.author: Christoffersen, Phillip
dc.contributor.author: Damani, Mehul
dc.contributor.author: Hadfield-Menell, Dylan
dc.date.accessioned: 2024-10-24T20:46:42Z
dc.date.available: 2024-10-24T20:46:42Z
dc.date.issued: 2024-10-18
dc.identifier.uri: https://hdl.handle.net/1721.1/157416
dc.description.abstract: Multi-agent Reinforcement Learning (MARL) is a powerful tool for training autonomous agents acting independently in a common environment. However, it can lead to sub-optimal behavior when individual incentives and group incentives diverge. Humans are remarkably capable of solving these social dilemmas. It is an open problem in MARL to replicate such cooperative behaviors in selfish agents. In this work, we draw upon the idea of formal contracting from economics to overcome diverging incentives between agents in MARL. We propose an augmentation to a Markov game in which agents voluntarily agree to binding transfers of reward under pre-specified conditions. Our contributions are theoretical and empirical. First, we show that this augmentation makes all subgame-perfect equilibria of all Fully Observable Markov Games exhibit socially optimal behavior, given a sufficiently rich space of contracts. Next, we show that for general contract spaces, and even under partial observability, richer contract spaces lead to higher welfare. Hence, contract space design solves an exploration-exploitation tradeoff, sidestepping incentive issues. We complement our theoretical analysis with experiments. Issues of exploration in the contracting augmentation are mitigated using a training methodology inspired by multi-objective reinforcement learning: Multi-Objective Contract Augmentation Learning. We test our methodology in static, single-move games, as well as in dynamic domains that simulate traffic, pollution management, and common pool resource management. [en_US]
dc.publisher: Springer US [en_US]
dc.relation.isversionof: https://doi.org/10.1007/s10458-024-09682-5 [en_US]
dc.rights: Creative Commons Attribution [en_US]
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/ [en_US]
dc.source: Springer US [en_US]
dc.title: Formal contracts mitigate social dilemmas in multi-agent reinforcement learning [en_US]
dc.type: Article [en_US]
dc.identifier.citation: Haupt, A., Christoffersen, P., Damani, M. et al. Formal contracts mitigate social dilemmas in multi-agent reinforcement learning. Auton Agent Multi-Agent Syst 38, 51 (2024). [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
dc.relation.journal: Autonomous Agents and Multi-Agent Systems [en_US]
dc.identifier.mitlicense: PUBLISHER_CC
dc.eprint.version: Final published version [en_US]
dc.type.uri: http://purl.org/eprint/type/JournalArticle [en_US]
eprint.status: http://purl.org/eprint/status/PeerReviewed [en_US]
dc.date.updated: 2024-10-20T03:22:42Z
dc.language.rfc3066: en
dc.rights.holder: The Author(s)
dspace.embargo.terms: N
dspace.date.submission: 2024-10-20T03:22:42Z
mit.journal.volume: 38 [en_US]
mit.journal.issue: 51 [en_US]
mit.license: PUBLISHER_CC
mit.metadata.status: Authority Work and Publication Information Needed [en_US]
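
The contract augmentation described in the abstract above, a binding transfer of reward that fires under a pre-specified condition, can be sketched in a few lines. The snippet below is a hypothetical illustration only: the function contract_transfers, the even split of the transfer among non-triggering agents, and the pollution example are assumptions made for exposition, not the paper's implementation or its contract space.

    # Hypothetical sketch of the abstract's reward-transfer idea: if an
    # agent's action triggers the contract's pre-specified condition, that
    # agent pays `amount`, which the remaining agents split evenly.
    # Assumes at least two agents.
    def contract_transfers(rewards, actions, condition, amount):
        adjusted = dict(rewards)          # base Markov-game stage rewards
        agents = list(rewards)
        for i in agents:
            if condition(actions[i]):     # contract condition triggered
                adjusted[i] -= amount                 # payer loses `amount`
                share = amount / (len(agents) - 1)    # others split it evenly
                for j in agents:
                    if j != i:
                        adjusted[j] += share
        return adjusted

    # Example: in a pollution game, polluting triggers the transfer.
    rewards = {"a": 3.0, "b": 1.0}
    actions = {"a": "pollute", "b": "abate"}
    print(contract_transfers(rewards, actions, lambda act: act == "pollute", 2.0))
    # {'a': 1.0, 'b': 3.0}

Note that the transfers in this sketch are zero-sum, so they do not change total welfare directly; per the abstract, the point is that an accepted contract changes each agent's best response, aligning individual incentives with the socially optimal behavior.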

