
dc.contributor.advisor: Boris Katz. (en_US)
dc.contributor.author: Alverio, Julian (Julian A.) (en_US)
dc.contributor.other: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science. (en_US)
dc.date.accessioned: 2021-02-19T20:45:53Z
dc.date.available: 2021-02-19T20:45:53Z
dc.date.copyright: 2020 (en_US)
dc.date.issued: 2020 (en_US)
dc.identifier.uri: https://hdl.handle.net/1721.1/129898
dc.description: Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February, 2020 (en_US)
dc.description: Cataloged from student-submitted PDF of thesis. (en_US)
dc.description: Includes bibliographical references (pages 41-42). (en_US)
dc.description.abstract: This thesis explores multiple approaches for improving the state of the art in robotic planning with reinforcement learning. We are interested in designing a generalizable framework with several features: it should support zero-shot learning agents that are robust and resilient when they fail midway through a task, allow us to detect failures, and generalize well to new environments. Initially, we focused mostly on training agents that are resilient in the event of failure and robust to changing environments. To that end, we first explore the use of deep Q networks to control a robot. Upon finding deep Q learning too unstable, we determine that Q networks alone are insufficient for attaining true resilience. Second, we explore the use of more powerful actor-critic methods augmented with hindsight experience replay (HER). We determine that approaches requiring low-dimensional representations of the environment, such as HER, will not scale gracefully to more complex environments. Finally, we explore the use of generative models to learn a reward function that tightly couples the context of a linguistic command to the reward of a reinforcement learning agent. We hypothesize that a learned reward function will satisfy all of our criteria; this is part of our ongoing research. (en_US)
dc.description.statementofresponsibility: by Julian Alverio. (en_US)
dc.format.extent: 42 pages (en_US)
dc.language.iso: eng (en_US)
dc.publisher: Massachusetts Institute of Technology (en_US)
dc.rights: MIT theses may be protected by copyright. Please reuse MIT thesis content according to the MIT Libraries Permissions Policy, which is available through the URL provided. (en_US)
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582 (en_US)
dc.subject: Electrical Engineering and Computer Science. (en_US)
dc.title: Zero-shot learning to execute tasks with robots (en_US)
dc.type: Thesis (en_US)
dc.description.degree: M. Eng. (en_US)
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science (en_US)
dc.identifier.oclc: 1237279830 (en_US)
dc.description.collection: M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science (en_US)
dspace.imported: 2021-02-19T20:45:22Z (en_US)
mit.thesis.degree: Master (en_US)
mit.thesis.department: EECS (en_US)
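The hindsight experience replay (HER) technique mentioned in the abstract can be sketched briefly. HER relabels transitions from failed episodes so that an achieved goal is treated as if it had been the desired goal, turning failures into useful training signal. This is a minimal illustration of the general "future" relabeling strategy, not the thesis's implementation; all names (`her_relabel`, the transition tuple layout) are illustrative assumptions.

```python
import random

def her_relabel(episode, k=4):
    """Hindsight experience replay ('future' strategy) sketch.

    `episode` is a list of (state, action, goal, achieved_goal) tuples.
    For each transition, emit the original plus k relabeled copies whose
    desired goal is an achieved goal from the same or a later step, with
    the sparse reward recomputed against the substituted goal.
    """
    relabeled = []
    for t, (state, action, goal, achieved) in enumerate(episode):
        # Original transition: sparse reward of 1 only if the goal was met.
        relabeled.append((state, action, goal, float(achieved == goal)))
        for _ in range(k):
            # Substitute an achieved goal from step t or later as the goal.
            future_goal = random.choice(episode[t:])[3]
            relabeled.append((state, action, future_goal,
                              float(achieved == future_goal)))
    return relabeled
```

Note that the relabeled transitions depend on comparing full goal states for equality, which is only practical with the kind of low-dimensional environment representation the abstract identifies as HER's scaling limitation.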