Reinforcement learning for robots through efficient simulator sampling

Author(s)
Cutler, Mark Johnson
Download
Full printable version (25.36 MB)
Other Contributors
Massachusetts Institute of Technology. Department of Aeronautics and Astronautics.
Advisor
Jonathan P. How.
Terms of use
M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission. http://dspace.mit.edu/handle/1721.1/7582
Abstract
Reinforcement learning (RL) has great potential in robotic systems as a tool for developing policies and controllers in novel situations. However, the cost of real-world samples remains prohibitive, as most RL algorithms require a large number of samples before learning near-optimal or even useful policies. Simulators are one way to decrease the number of required real-world samples, but imperfect models make it difficult to decide when and how to trust samples from a simulator. Two frameworks are presented for efficient RL through the use of simulators. The first framework considers scenarios where multiple simulators of a target task are available, each with a different level of fidelity. It limits the number of samples used in each successively higher-fidelity (and higher-cost) simulator by allowing a learning agent to run trajectories at the lowest-level simulator that still provides useful information. Theoretical proofs of the framework's sample complexity are given, and empirical results are demonstrated on a robotic car with multiple simulators. The second framework focuses on problems represented with continuous states and actions, as are common in many robotics domains. Using probabilistic model-based policy search algorithms and principles of optimal control, it treats data from simulators as prior information for real-world learning. The framework is tested on a propeller-driven inverted pendulum and on a drifting robotic car. These frameworks enable RL algorithms to find near-optimal policies in physical robot domains with fewer expensive real-world samples than previous transfer approaches or learning without simulators.
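The multi-fidelity idea in the first framework can be illustrated with a toy sketch. The Python snippet below is a loose illustration under strong assumptions, not the algorithm from the thesis: it runs tabular Q-learning on a chain of increasingly accurate but increasingly expensive toy simulators, carrying the learned Q-values up the chain as priors so that fewer episodes are spent at the expensive levels. The Simulator class, the noise and cost values, and the fixed episode schedule are all hypothetical; the thesis's framework instead decides adaptively when to move between fidelity levels (including moving back down) based on how informative each level's samples still are.

    # Hypothetical sketch of learning across a chain of simulators with
    # increasing fidelity and cost. Illustrative only; not the thesis's
    # actual multi-fidelity RL algorithm.
    import random
    from collections import defaultdict

    class Simulator:
        """Toy stand-in for one fidelity level: a noisy 1-D chain MDP."""
        def __init__(self, noise, cost):
            self.noise = noise   # lower-fidelity simulators are noisier
            self.cost = cost     # cost charged per sample at this level

        def step(self, state, action):
            # action in {-1, +1}; reward peaks at state 5
            nxt = max(0, min(9, state + action))
            reward = (1.0 if nxt == 5 else 0.0) + random.gauss(0.0, self.noise)
            return nxt, reward

    def q_learn(sim, q, episodes, alpha=0.2, gamma=0.95, eps=0.2):
        """Epsilon-greedy Q-learning on one simulator; updates q in place
        and returns the total sample cost spent at this level."""
        spent = 0
        for _ in range(episodes):
            s = 0
            for _ in range(20):
                if random.random() < eps:
                    a = random.choice((-1, 1))
                else:
                    a = max((-1, 1), key=lambda b: q[(s, b)])
                s2, r = sim.step(s, a)
                best_next = max(q[(s2, -1)], q[(s2, 1)])
                q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
                spent += sim.cost
                s = s2
        return spent

    # Fidelity chain: cheap and noisy first, expensive and accurate last.
    levels = [Simulator(noise=0.5, cost=1),
              Simulator(noise=0.1, cost=10),
              Simulator(noise=0.0, cost=100)]

    q = defaultdict(float)   # Q-values transfer up the chain as priors
    total_cost = 0
    for i, sim in enumerate(levels):
        total_cost += q_learn(sim, q, episodes=200 // (2 ** i))
        print(f"level {i}: cumulative sample cost = {total_cost}")

The design point the sketch captures is that value estimates learned cheaply at low fidelity seed learning at the next level, so the expensive simulator is used mainly to correct residual model error rather than to learn from scratch. The thesis's second framework applies the analogous idea in continuous domains, using simulator data as a prior over the real-world dynamics model rather than over tabular values.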
Description
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2015.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 151-160).
Date issued
2015
URI
http://hdl.handle.net/1721.1/101441
Department
Massachusetts Institute of Technology. Department of Aeronautics and Astronautics
Publisher
Massachusetts Institute of Technology
Keywords
Aeronautics and Astronautics.

Collections
  • Doctoral Theses
