
dc.contributor.advisor: How, Jonathan P.
dc.contributor.advisor: Agrawal, Pulkit
dc.contributor.advisor: Foerster, Jakob N.
dc.contributor.author: Kim, Dong Ki
dc.date.accessioned: 2023-03-31T14:37:46Z
dc.date.available: 2023-03-31T14:37:46Z
dc.date.issued: 2023-02
dc.date.submitted: 2023-02-15T14:05:24.284Z
dc.identifier.uri: https://hdl.handle.net/1721.1/150177
dc.description.abstract: Multiagent reinforcement learning (MARL) provides a principled framework for a group of artificial intelligence agents to learn collaborative and/or competitive behaviors at the level of human experts. Multiagent learning settings inherently pose much more complex problems than single-agent learning because an agent interacts both with the environment and with other agents. In particular, multiple agents learn simultaneously in MARL, leading to natural non-stationarity in the experiences encountered and thus requiring each agent to adapt its behavior with respect to potentially large changes in other agents' policies (see the illustrative sketch after this record). This thesis addresses the non-stationarity challenge in multiagent learning through three important topics: 1) adaptation, 2) convergence, and 3) state space. The first topic answers how an agent can learn effective adaptation strategies with respect to other agents' changing policies by developing a new meta-learning framework. The second topic answers how agents can adapt to and influence the joint learning process such that policies converge to more desirable limiting behaviors by the end of learning, based on a new game-theoretic solution concept. The third topic answers how the size of the state space can be reduced through knowledge sharing and context-specific abstraction so that learning complexity is less affected by non-stationarity. In summary, this thesis develops theoretical and algorithmic contributions that provide principled answers to the aforementioned topics on non-stationarity. The algorithms developed in this thesis demonstrate their effectiveness in a diverse suite of multiagent benchmark domains, spanning the full spectrum of cooperative, competitive, and mixed-incentive environments.
dc.publisher: Massachusetts Institute of Technology
dc.rights: In Copyright - Educational Use Permitted
dc.rights: Copyright MIT
dc.rights.uri: http://rightsstatements.org/page/InC-EDU/1.0/
dc.title: Effective Learning in Non-Stationary Multiagent Environments
dc.type: Thesis
dc.description.degree: Ph.D.
dc.contributor.department: Massachusetts Institute of Technology. Department of Aeronautics and Astronautics
mit.thesis.degree: Doctoral
thesis.degree.name: Doctor of Philosophy
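
To make the non-stationarity described in the abstract concrete, below is a minimal illustrative sketch (not code from the thesis, and not its method): two independent Q-learners playing iterated Matching Pennies. All names and parameter values here are assumptions chosen for illustration. Because each agent's "environment" includes the other learning agent, the return of a fixed action drifts whenever the opponent's policy moves, which is the moving-target problem the thesis addresses.

# Illustrative sketch only: independent Q-learning in iterated Matching Pennies.
# Each agent treats the game as a stationary bandit, but the opponent is also
# learning, so the reward distribution each agent faces keeps shifting.
import random

ACTIONS = [0, 1]        # heads, tails
ALPHA, EPS = 0.1, 0.2   # learning rate and exploration rate (assumed values)


def payoff(a1, a2):
    """Matching Pennies: agent 1 wins on a match, agent 2 wins on a mismatch."""
    return (1.0, -1.0) if a1 == a2 else (-1.0, 1.0)


def eps_greedy(q):
    """Pick a random action with probability EPS, else the greedy action."""
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[a])


q1 = {a: 0.0 for a in ACTIONS}
q2 = {a: 0.0 for a in ACTIONS}

for t in range(5000):
    a1, a2 = eps_greedy(q1), eps_greedy(q2)
    r1, r2 = payoff(a1, a2)
    # Each update assumes a stationary target, yet every update by one agent
    # changes the reward process the other agent is learning about.
    q1[a1] += ALPHA * (r1 - q1[a1])
    q2[a2] += ALPHA * (r2 - q2[a2])
    if t % 1000 == 0:
        print(t, {a: round(q1[a], 2) for a in ACTIONS},
                 {a: round(q2[a], 2) for a in ACTIONS})

Running the sketch prints Q-values that keep drifting rather than settling, since each agent's best response continually moves the other agent's learning target.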

