Show simple item record

dc.contributor.advisor: How, Jonathan P.
dc.contributor.author: Huang, Vivian
dc.date.accessioned: 2022-06-15T13:09:59Z
dc.date.available: 2022-06-15T13:09:59Z
dc.date.issued: 2022-02
dc.date.submitted: 2022-02-22T18:32:30.252Z
dc.identifier.uri: https://hdl.handle.net/1721.1/143288
dc.description.abstract: Deep reinforcement learning (RL) methods have made significant advancements in recent years toward mastering challenging problems. Because many real-world systems involve multiple agents interacting in a shared environment, multi-agent reinforcement learning (MARL) is a particularly active subfield of RL. Learning robust multi-agent policies in real-time strategy games, such as StarCraft II, is an important objective. In particular, quickly adapting game-playing agents to perturbations in the rules, and demonstrating that the agents can exploit such changes, can yield insights about properties such as game balance. However, progress in MARL research faces a major challenge: high sample complexity makes learning a complicated task from scratch computationally intensive. This thesis therefore details the design and implementation of a MARL framework that trains robust agents which adapt to perturbations in a multi-agent, StarCraft II-based real-time strategy game, so that the features that most affect game balance can be identified. The framework also includes an incremental warm-start approach that reduces the computational cost of agent adaptation. The results show that our approach achieves up to a 97% improvement in computation time compared to the standard approach of training the policy from a random initialization.
dc.publisher: Massachusetts Institute of Technology
dc.rights: In Copyright - Educational Use Permitted
dc.rights: Copyright MIT
dc.rights.uri: http://rightsstatements.org/page/InC-EDU/1.0/
dc.title: Warm-Starting Networks for Sample-Efficient Continuous Adaptation to Parameter Perturbations in Multi-Agent Reinforcement Learning
dc.type: Thesis
dc.description.degree: M.Eng.
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
mit.thesis.degree: Master
thesis.degree.name: Master of Engineering in Electrical Engineering and Computer Science
