DSpace@MIT

All learning is local: Multi-agent learning in global reward games

Research and Teaching Output of the MIT Community


dc.contributor.author Chang, Yu-Han
dc.contributor.author Ho, Tracey
dc.contributor.author Kaelbling, Leslie P.
dc.date.accessioned 2003-12-13T18:55:17Z
dc.date.available 2003-12-13T18:55:17Z
dc.date.issued 2004-01
dc.identifier.uri http://hdl.handle.net/1721.1/3851
dc.description.abstract In large multiagent games, partial observability, coordination, and credit assignment persistently plague attempts to design good learning algorithms. We provide a simple and efficient algorithm that in part uses a linear system to model the world from a single agent’s limited perspective, and takes advantage of Kalman filtering to allow an agent to construct a good training signal and effectively learn a near-optimal policy in a wide variety of settings. A sequence of increasingly complex empirical tests verifies the efficacy of this technique. en
dc.description.sponsorship Singapore-MIT Alliance (SMA) en
dc.format.extent 1408858 bytes
dc.format.mimetype application/pdf
dc.language.iso en_US
dc.relation.ispartofseries Computer Science (CS);
dc.subject Kalman filtering en
dc.subject multi-agent systems en
dc.subject Q-learning en
dc.subject reinforcement learning en
dc.title All learning is local: Multi-agent learning in global reward games en
dc.type Article en
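The abstract describes modeling the world as a linear system from a single agent's perspective and using a Kalman filter to recover a personal training signal from the global reward. The following is a minimal sketch of that idea, not the paper's exact algorithm: it assumes the global reward decomposes as g_t = r_t + b_t, where r_t is the agent's own (unknown) contribution and b_t, the other agents' contribution, drifts as a random walk. A scalar Kalman filter tracks b_t so the agent can use r̂_t = g_t − b̂_t as the reward in ordinary Q-learning. All variable names and noise parameters here are illustrative assumptions.

```python
import numpy as np

# Hedged sketch, assuming g_t = r_t + b_t with b_{t+1} = b_t + z_t,
# z_t ~ N(0, q): a scalar Kalman filter estimates the drifting noise
# term b_t so the agent can recover a personal training signal.

rng = np.random.default_rng(0)

T = 2000
q = 0.01      # assumed process-noise variance of the random walk b_t
r_var = 1.0   # assumed variance of the agent's own reward about its mean

# Simulated world (for illustration only): the agent's true personal
# reward, the other agents' drifting contribution, and the global reward.
true_r = rng.normal(loc=1.0, scale=1.0, size=T)     # agent's own reward
b = np.cumsum(rng.normal(0.0, np.sqrt(q), size=T))  # others' drift
g = true_r + b                                      # observed global reward

# Kalman filter state: estimate of b_t and its error variance.
b_hat, p = 0.0, 1.0
r_mean = 0.0           # running estimate of the agent's mean personal reward
filtered = np.empty(T)

for t in range(T):
    # Predict: a random walk leaves b_hat unchanged; variance grows by q.
    p += q
    # Update: with r_mean as the expected personal reward, the innovation
    # g_t - r_mean - b_hat attributes the residual drift to b_t.
    k = p / (p + r_var)                  # Kalman gain
    b_hat += k * (g[t] - r_mean - b_hat)
    p *= (1.0 - k)
    # Recovered personal training signal, usable in a Q-learning update.
    filtered[t] = g[t] - b_hat
    r_mean += 0.05 * (filtered[t] - r_mean)  # slow running mean of r_t

err_raw = np.mean(np.abs(g - true_r))        # error of using g_t directly
err_filt = np.mean(np.abs(filtered - true_r))  # error after filtering
print(err_filt, err_raw)
```

With the drift filtered out, the recovered signal tracks the agent's own reward far more closely than the raw global reward does, which is what lets standard Q-learning proceed as if the reward were local.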


Files in this item

Name Size Format Description
CS004.pdf 1.343Mb PDF

This item appears in the following Collection(s)


MIT-Mirage