
dc.contributor.author: Chang, Yu-Han
dc.contributor.author: Ho, Tracey
dc.contributor.author: Kaelbling, Leslie P.
dc.date.accessioned: 2003-12-13T18:55:17Z
dc.date.available: 2003-12-13T18:55:17Z
dc.date.issued: 2004-01
dc.identifier.uri: http://hdl.handle.net/1721.1/3851
dc.description.abstract: In large multiagent games, partial observability, coordination, and credit assignment persistently plague attempts to design good learning algorithms. We provide a simple and efficient algorithm that in part uses a linear system to model the world from a single agent’s limited perspective, and takes advantage of Kalman filtering to allow an agent to construct a good training signal and effectively learn a near-optimal policy in a wide variety of settings. A sequence of increasingly complex empirical tests verifies the efficacy of this technique.
dc.description.sponsorship: Singapore-MIT Alliance (SMA)
dc.format.extent: 1408858 bytes
dc.format.mimetype: application/pdf
dc.language.iso: en_US
dc.relation.ispartofseries: Computer Science (CS)
dc.subject: Kalman filtering
dc.subject: multi-agent systems
dc.subject: Q-learning
dc.subject: reinforcement learning
dc.title: All learning is local: Multi-agent learning in global reward games
dc.type: Article
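
The abstract above summarizes the paper's core idea: each agent treats the observed global reward as its own (unknown) personal reward plus a slowly drifting term contributed by the other agents, tracks that drifting term with a Kalman filter, and feeds the filtered signal to an otherwise standard Q-learner. The Python sketch below is a minimal illustration of that decomposition under a scalar random-walk noise model, not the authors' implementation; all names here (KalmanRewardFilter, FilteredQAgent, process_var, obs_var) are illustrative assumptions.

import random
from collections import defaultdict

class KalmanRewardFilter:
    """Scalar Kalman filter tracking the non-local component b_t of a
    global reward g_t = r_t + b_t, where r_t is the agent's personal
    reward and b_t is assumed (for this sketch) to follow a random walk.
    """
    def __init__(self, process_var=1.0, obs_var=1.0):
        self.b_est = 0.0              # current estimate of b_t
        self.p = 1.0                  # variance of that estimate
        self.process_var = process_var
        self.obs_var = obs_var

    def step(self, global_reward, personal_reward_estimate):
        # Predict: the random walk adds process variance each step.
        self.p += self.process_var
        # Update: the innovation is the part of the global reward not
        # explained by the agent's own reward estimate plus b_est.
        innovation = global_reward - personal_reward_estimate - self.b_est
        gain = self.p / (self.p + self.obs_var)
        self.b_est += gain * innovation
        self.p *= 1.0 - gain
        # Filtered training signal: global reward minus estimated noise.
        return global_reward - self.b_est

class FilteredQAgent:
    """Tabular Q-learner that trains on the Kalman-filtered reward."""
    def __init__(self, actions, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.actions = list(actions)
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = defaultdict(float)       # Q[(state, action)]
        self.r_hat = defaultdict(float)   # running personal-reward estimates
        self.visits = defaultdict(int)
        self.noise_filter = KalmanRewardFilter()

    def act(self, state):
        # Epsilon-greedy action selection.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, global_reward, next_state):
        sa = (state, action)
        # Strip the estimated non-local component from the global reward.
        r_filtered = self.noise_filter.step(global_reward, self.r_hat[sa])
        # Refine the running estimate of the personal reward for (s, a).
        self.visits[sa] += 1
        self.r_hat[sa] += (r_filtered - self.r_hat[sa]) / self.visits[sa]
        # Standard Q-learning update, driven by the filtered signal.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        self.q[sa] += self.alpha * (r_filtered + self.gamma * best_next - self.q[sa])

On each step the agent subtracts its current estimate of the drifting non-local term from the observed global reward, so the Q-update sees a signal closer to its personal contribution; this is the "all learning is local" intuition from the title, rendered here in the simplest scalar form.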

