
All learning is local: Multi-agent learning in global reward games

Research and Teaching Output of the MIT Community

dc.contributor.author Chang, Yu-Han
dc.contributor.author Ho, Tracey
dc.contributor.author Kaelbling, Leslie P.
dc.date.accessioned 2003-12-13T18:55:17Z
dc.date.available 2003-12-13T18:55:17Z
dc.date.issued 2004-01
dc.description.abstract In large multiagent games, partial observability, coordination, and credit assignment persistently plague attempts to design good learning algorithms. We provide a simple and efficient algorithm that in part uses a linear system to model the world from a single agent’s limited perspective, and takes advantage of Kalman filtering to allow an agent to construct a good training signal and effectively learn a near-optimal policy in a wide variety of settings. A sequence of increasingly complex empirical tests verifies the efficacy of this technique. en
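The abstract describes filtering a shared global reward so each agent can extract a personal training signal for Q-learning: the other agents' contribution is treated as a slowly drifting noise term and tracked with a Kalman filter. Below is a minimal illustrative sketch of that idea, not the paper's actual implementation; the scalar random-walk noise model, the class name `RewardFilter`, and all variance constants are assumptions chosen for clarity.

```python
import numpy as np

class RewardFilter:
    """Scalar Kalman filter that splits an observed global reward g into an
    estimated personal reward and a drifting noise term b (the other agents'
    contribution, modeled here as a random walk). Illustrative sketch only."""

    def __init__(self, process_var=0.01, obs_var=0.1):
        self.b = 0.0          # estimated noise term (others' contribution)
        self.p = 1.0          # variance of that estimate
        self.q = process_var  # assumed random-walk drift variance
        self.r = obs_var      # assumed observation noise variance

    def personal_reward(self, g, predicted_r):
        """Observe global reward g; predicted_r is the agent's own estimate
        of its personal reward (e.g., derived from its Q-values), so the
        filter tracks only the residual noise term."""
        self.p += self.q                       # predict: noise drifts
        innovation = (g - predicted_r) - self.b
        k = self.p / (self.p + self.r)         # Kalman gain
        self.b += k * innovation               # correct noise estimate
        self.p *= (1.0 - k)
        return g - self.b                      # filtered personal reward
```

The filtered value `g - b` would then replace the raw global reward in a standard Q-learning update, giving each agent a training signal that reflects mostly its own actions.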
dc.description.sponsorship Singapore-MIT Alliance (SMA) en
dc.format.extent 1408858 bytes
dc.format.mimetype application/pdf
dc.language.iso en_US
dc.relation.ispartofseries Computer Science (CS);
dc.subject Kalman filtering en
dc.subject multi-agent systems en
dc.subject Q-learning en
dc.subject reinforcement learning en
dc.title All learning is local: Multi-agent learning in global reward games en
dc.type Article en

Files in this item

Name Size Format Description
CS004.pdf 1.343 MB PDF

