Learning environment simulators from sparse signals
Author(s): Shavit, Yonadav Goldwasser
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Advisor: Leslie P. Kaelbling
Abstract: To allow planning in novel environments that have not been mapped out by hand, we need ways of learning environment models. While conventional work has focused on video prediction as a means for environment learning, this work instead seeks to learn from much sparser signals, like the agent's reward. In Chapter 1, we establish a taxonomy of environments and the attributes that make them easier or harder to model through learning. In Chapter 2, we review prior work in the field of environment learning. In Chapter 3, we propose a model-learning architecture based purely on reward prediction, and analyze its performance on illustrative problems. Finally, in Chapter 4, we propose and evaluate a model-learning architecture that uses both reward and sparse "features" extracted from the environment.
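To make the reward-prediction idea concrete, here is a minimal illustrative sketch (not the thesis's architecture): an agent that never observes environment dynamics directly, only `(state, action, reward)` tuples, fits a model that predicts reward from state-action features. All variable names and the linear reward structure are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical environment: state s in R^4, binary action a.
# The true reward is an (unknown to the learner) linear function
# of the state-action features, plus observation noise.
n, d = 500, 4
states = rng.normal(size=(n, d))
actions = rng.integers(0, 2, size=n)
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])  # last entry weights the action

features = np.hstack([states, actions[:, None].astype(float)])
rewards = features @ w_true + 0.01 * rng.normal(size=n)

# "Environment model" learned purely from the reward signal:
# a least-squares fit of reward as a function of (state, action).
w_hat, *_ = np.linalg.lstsq(features, rewards, rcond=None)

def predict_reward(state, action):
    """Predict the reward of taking `action` in `state` under the learned model."""
    return np.concatenate([state, [float(action)]]) @ w_hat
```

A planner could then score candidate action sequences with `predict_reward` alone, never needing a pixel-level prediction of the next observation; this is the sparse-signal contrast with video-prediction approaches that the abstract draws.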
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (pages 83-85).