dc.contributor.advisor | Jonathan P. How. | en_US |
dc.contributor.author | Cutler, Mark Johnson | en_US |
dc.contributor.other | Massachusetts Institute of Technology. Department of Aeronautics and Astronautics. | en_US |
dc.date.accessioned | 2016-03-03T20:28:47Z | |
dc.date.available | 2016-03-03T20:28:47Z | |
dc.date.copyright | 2015 | en_US |
dc.date.issued | 2015 | en_US |
dc.identifier.uri | http://hdl.handle.net/1721.1/101441 | |
dc.description | Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2015. | en_US |
dc.description | This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. | en_US |
dc.description | Cataloged from student-submitted PDF version of thesis. | en_US |
dc.description | Includes bibliographical references (pages 151-160). | en_US |
dc.description.abstract | Reinforcement learning (RL) has great potential in robotic systems as a tool for developing policies and controllers in novel situations. However, the cost of real-world samples remains prohibitive, as most RL algorithms require a large number of samples before learning near-optimal or even useful policies. Simulators are one way to decrease the number of required real-world samples, but imperfect models make it difficult to decide when and how to trust samples from a simulator. Two frameworks are presented for efficient RL through the use of simulators. The first framework considers scenarios where multiple simulators of a target task are available, each with a different level of fidelity. It limits the number of samples used in each successively higher-fidelity (and higher-cost) simulator by allowing the learning agent to run trajectories at the lowest-fidelity simulator that will still provide it with useful information. Theoretical proofs of this framework's sample complexity are given, and empirical results are demonstrated on a robotic car with multiple simulators. The second framework focuses on problems represented with continuous states and actions, as are common in many robotics domains. Using probabilistic model-based policy search algorithms and principles of optimal control, this second framework uses data from simulators as prior information for real-world learning. The framework is tested on a propeller-driven inverted pendulum and on a drifting robotic car. These novel frameworks enable RL algorithms to find near-optimal policies in physical robot domains with fewer expensive real-world samples than previous transfer approaches or learning without simulators. | en_US |
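As an illustration of the first framework's sampling strategy, below is a minimal, hypothetical Python sketch of a multi-fidelity learning loop: episodes are run in the cheapest simulator that can still teach the agent something, and the agent falls back to the highest-fidelity level only once the cheaper ones stop yielding new information. All class names, thresholds, and the "known state" test here are illustrative assumptions, not the thesis's actual algorithm.

# Hypothetical sketch of the multi-fidelity selection loop described in the
# abstract above. Names and thresholds are illustrative assumptions only.
import random
from collections import defaultdict

class FidelityLevel:
    """One simulator in the low-to-high fidelity chain."""
    def __init__(self, name, cost, step_fn):
        self.name = name
        self.cost = cost          # relative cost of one sample at this level
        self.step = step_fn       # (state, action) -> (next_state, reward)
        self.visits = defaultdict(int)

def informative(level, state, actions, known_after=5):
    # Crude stand-in for a "known state" test in model-based RL: a level is
    # still informative if some action at this state has too few samples.
    return any(level.visits[(state, a)] < known_after for a in actions)

def run_episode(level, q, actions, state=0, horizon=20, alpha=0.5, gamma=0.95):
    # Greedy Q-learning episode with tiny random tie-breaking noise.
    for _ in range(horizon):
        action = max(actions, key=lambda a: q[(state, a)] + random.random() * 1e-3)
        nxt, reward = level.step(state, action)
        level.visits[(state, action)] += 1
        best_next = max(q[(nxt, a)] for a in actions)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = nxt
    return q

def multi_fidelity_learn(levels, actions, episodes=200):
    q = defaultdict(float)
    for _ in range(episodes):
        # Choose the lowest-cost level that can still provide useful samples
        # (checked here only at the start state, for simplicity).
        level = next((lv for lv in levels if informative(lv, 0, actions)), levels[-1])
        q = run_episode(level, q, actions)
    return q

if __name__ == "__main__":
    # Two toy "simulators": a cheap biased one and an expensive accurate one.
    def cheap(s, a):  return (min(s + a, 5), 1.0 if s + a >= 5 else 0.0)
    def costly(s, a): return (min(s + a, 5), 1.0 if s + a >= 5 else -0.1)
    chain = [FidelityLevel("low", 1, cheap), FidelityLevel("high", 100, costly)]
    q = multi_fidelity_learn(chain, actions=[0, 1, 2])
    print(max(q.items(), key=lambda kv: kv[1]))

The design point the sketch tries to capture is the one named in the abstract: the expensive simulator is consulted only after the cheap one has been effectively exhausted, so most samples are drawn at low cost.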
dc.description.statementofresponsibility | by Mark Johnson Cutler. | en_US |
dc.format.extent | 160 pages | en_US |
dc.language.iso | eng | en_US |
dc.publisher | Massachusetts Institute of Technology | en_US |
dc.rights | M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission. | en_US |
dc.rights.uri | http://dspace.mit.edu/handle/1721.1/7582 | en_US |
dc.subject | Aeronautics and Astronautics. | en_US |
dc.title | Reinforcement learning for robots through efficient simulator sampling | en_US |
dc.type | Thesis | en_US |
dc.description.degree | Ph. D. | en_US |
dc.contributor.department | Massachusetts Institute of Technology. Department of Aeronautics and Astronautics | |
dc.identifier.oclc | 939649427 | en_US |