DSpace@MIT


Zero-shot learning to execute tasks with robots

Author(s)
Alverio, Julian (Julian A.)
Download: 1237279830-MIT.pdf (2.040 MB)
Other Contributors
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Advisor
Boris Katz.
Terms of use
MIT theses may be protected by copyright. Please reuse MIT thesis content according to the MIT Libraries Permissions Policy, available at http://dspace.mit.edu/handle/1721.1/7582
Abstract
This thesis explores multiple approaches for improving the state of the art in robotic planning with reinforcement learning. We are interested in designing a generalizable framework with several features: it should support zero-shot learning agents that are robust and resilient in the event of failure midway through a task, allow us to detect failures, and generalize well to new environments. Initially, we focused mostly on training agents that are resilient in the event of failure and robust to changing environments. To that end, we first explore the use of deep Q networks to control a robot. Upon finding deep Q-learning too unstable, we determine that Q networks alone are insufficient for attaining true resilience. Second, we explore the use of more powerful actor-critic methods, augmented with hindsight experience replay (HER). We determine that approaches requiring low-dimensional representations of the environment, such as HER, will not scale gracefully to more complex environments. Finally, we explore the use of generative models to learn a reward function that tightly couples the context of a linguistic command to the reward of a reinforcement learning agent. We hypothesize that a learned reward function will satisfy all of our criteria; this is part of our ongoing research.
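As context for the deep Q-learning approach mentioned in the abstract, a minimal sketch of the Bellman target that a deep Q network is trained to regress toward; all names and values here are illustrative assumptions, not code from the thesis.

```python
import numpy as np

def q_learning_target(reward, next_q_values, gamma=0.99, done=False):
    """Bellman target: r + gamma * max_a' Q(s', a'), truncated at episode end.

    In a full DQN, next_q_values would come from a target network's forward
    pass on the next state; here it is just an array of action-value estimates.
    """
    if done:
        # Terminal transitions contribute only the immediate reward.
        return reward
    return reward + gamma * float(np.max(next_q_values))

# Example: immediate reward 1.0, estimated next-state action values [0.2, 0.5]
target = q_learning_target(1.0, np.array([0.2, 0.5]))  # 1.0 + 0.99 * 0.5
```

The squared difference between this target and the network's current estimate Q(s, a) is the per-transition loss that deep Q-learning minimizes; the instability the abstract refers to partly stems from this target moving as the network's own weights change.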
Description
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February 2020
 
Cataloged from student-submitted PDF of thesis.
 
Includes bibliographical references (pages 41-42).
 
Date issued
2020
URI
https://hdl.handle.net/1721.1/129898
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology
Keywords
Electrical Engineering and Computer Science.

Collections
  • Graduate Theses

Content created by the MIT Libraries, CC BY-NC unless otherwise noted.