Cooperation and Fairness in Multi-Agent Reinforcement Learning
Author(s)
Aloor, Jasmine; Nayak, Siddharth Nagar; Dolan, Sydney; Balakrishnan, Hamsa
Publisher Policy
Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
Abstract
Multi-agent systems are trained to optimize shared cost objectives, which typically reflect system-level efficiency. However, in the resource-constrained environments of mobility and transportation systems, efficiency may be achieved at the expense of fairness: certain agents may incur significantly greater costs or lower rewards than others. Tasks could be distributed inequitably, giving some agents an unfair advantage while others incur disproportionately high costs. It is therefore important to consider the tradeoffs between efficiency and fairness in such settings. We consider the problem of fair multi-agent navigation for a group of decentralized agents using multi-agent reinforcement learning (MARL). We take the reciprocal of the coefficient of variation of the distances traveled by different agents as a measure of fairness and investigate whether agents can learn to be fair without significantly sacrificing efficiency (i.e., without significantly increasing the total distance traveled). We find that by training agents using min-max fair distance goal assignments, along with a reward term that incentivizes fairness as they move towards their goals, the agents (1) learn a fair assignment of goals and (2) achieve almost perfect goal coverage in navigation scenarios using only local observations. For goal coverage scenarios, the proposed model yields, on average, a 14% improvement in efficiency and a 5% improvement in fairness over a baseline model trained using random assignments. Furthermore, the proposed model achieves an average 21% improvement in fairness over a model trained on optimally efficient assignments; this increase in fairness comes at the expense of only a 7% decrease in efficiency. Finally, we extend our method to environments in which agents must complete coverage tasks in prescribed formations and show that it is possible to do so without tailoring the models to specific formation shapes.
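
The two quantities named in the abstract can be made concrete with a short sketch. The Python code below (not the authors' implementation; the function names fairness and minmax_fair_assignment are illustrative) computes the fairness metric described above, the reciprocal of the coefficient of variation of the distances traveled, and a brute-force min-max fair goal assignment that minimizes the largest distance any single agent must travel.

import itertools
import numpy as np

def fairness(distances) -> float:
    """Reciprocal of the coefficient of variation: mean / std.

    Higher is fairer; it diverges as the distances become equal.
    """
    d = np.asarray(distances, dtype=float)
    s = d.std()
    return np.inf if s == 0 else d.mean() / s

def minmax_fair_assignment(cost: np.ndarray) -> list[int]:
    """Assign agents (rows) to goals (columns) so as to minimize the
    maximum agent-to-goal distance.

    Brute force over all permutations: O(n!), suitable only for small n
    and meant to make the objective concrete; a bottleneck-assignment
    solver would be used in practice.
    """
    n = cost.shape[0]
    best, best_perm = np.inf, None
    for perm in itertools.permutations(range(n)):
        worst = max(cost[i, g] for i, g in enumerate(perm))
        if worst < best:
            best, best_perm = worst, list(perm)
    return best_perm

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    agents = rng.uniform(size=(4, 2))   # 4 agents in the unit square
    goals = rng.uniform(size=(4, 2))    # 4 goals
    # Pairwise Euclidean distances: cost[i, j] = dist(agent i, goal j)
    cost = np.linalg.norm(agents[:, None, :] - goals[None, :, :], axis=-1)
    assignment = minmax_fair_assignment(cost)
    d = np.array([cost[i, g] for i, g in enumerate(assignment)])
    print("assignment:", assignment, "fairness:", fairness(d))

The exhaustive enumeration is only there to pin down what "min-max fair" means as an objective; the paper's training setup additionally uses a fairness-incentivizing reward term during navigation, which this sketch does not model.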
Date issued
2024-10-29
Department
Massachusetts Institute of Technology. Department of Aeronautics and Astronautics
Journal
ACM Journal on Autonomous Transportation Systems
Publisher
ACM
Citation
Aloor, Jasmine; Nayak, Siddharth Nagar; Dolan, Sydney; and Balakrishnan, Hamsa. 2024. "Cooperation and Fairness in Multi-Agent Reinforcement Learning." ACM Journal on Autonomous Transportation Systems.
Version: Final published version
ISSN
2833-0528