Policy search approaches to reinforcement learning for quadruped locomotion

Author(s)
Jimenez, Antonio R
Download
Full printable version (4.544Mb)
Other Contributors
Massachusetts Institute of Technology. Dept. of Electrical Engineering and Computer Science.
Advisor
Leslie P. Kaelbling and Paul A. DeBitetto.
Terms of use
M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission. http://dspace.mit.edu/handle/1721.1/7582
Abstract
Legged locomotion is a challenging problem for machine learning to solve. A quadruped has 12 degrees of freedom, which results in a large state space for the resulting Markov Decision Problem (MDP). It is too difficult for computers to completely learn the state space, while it is too difficult for humans to fully understand the system dynamics and directly program the most efficient controller. This thesis combines these two approaches by integrating a model-based controller with reinforcement learning to develop an effective walk for a quadruped robot. We then evaluate different policy search approaches to reinforcement learning. To solve the Partially Observable Markov Decision Problem (POMDP), a deterministic simulation is developed that generates a model which allows us to conduct a direct policy search using dynamic programming. This is compared against using a nondeterministic simulation to generate a model that evaluates policies. We show that using deterministic transitions to allow the use of dynamic programming has little impact on the performance of our system. Two local policy search approaches are implemented.
 
A hill climbing algorithm is compared to a policy gradient algorithm to optimize parameters for the robot's model-based controller. The optimal machine-learned policy achieved a 155% increase in performance over the hand-tuned policy. The baseline hill climbing algorithm is shown to outperform the policy gradient algorithm with this particular gait.
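
The following is a minimal sketch, not the thesis code, of the two local policy search approaches the abstract compares: simple stochastic hill climbing versus a finite-difference policy gradient, each tuning a handful of gait parameters. The objective walk_speed and the parameter names (step_length, step_height, cycle_time) are illustrative assumptions standing in for the thesis's deterministic walk simulation.

import random

def walk_speed(params):
    # Hypothetical stand-in for the deterministic walk simulation:
    # a smooth objective that peaks at an arbitrary "good gait".
    target = {"step_length": 0.8, "step_height": 0.3, "cycle_time": 0.5}
    return -sum((params[k] - target[k]) ** 2 for k in target)

def hill_climb(params, step=0.05, iters=200):
    # Perturb all parameters randomly; keep the candidate only if it improves.
    best, best_score = dict(params), walk_speed(params)
    for _ in range(iters):
        candidate = {k: v + random.uniform(-step, step) for k, v in best.items()}
        score = walk_speed(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

def policy_gradient(params, step=0.05, lr=0.1, iters=200):
    # Estimate the gradient by finite differences, one parameter at a time,
    # then take a step in the ascent direction.
    theta = dict(params)
    for _ in range(iters):
        base = walk_speed(theta)
        grad = {}
        for k in theta:
            bumped = dict(theta)
            bumped[k] += step
            grad[k] = (walk_speed(bumped) - base) / step
        theta = {k: v + lr * grad[k] for k, v in theta.items()}
    return theta, walk_speed(theta)

init = {"step_length": 0.5, "step_height": 0.5, "cycle_time": 1.0}
print("hill climbing:   ", hill_climb(init))
print("policy gradient: ", policy_gradient(init))

A plausible reason hill climbing can win in this setting, consistent with the result reported above, is that finite-difference gradient estimates are easily corrupted when the walk objective is noisy or discontinuous, whereas hill climbing only needs a reliable better-or-worse comparison between candidate gaits.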
 
Description
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006.
 
Includes bibliographical references (leaves 63-65).
 
Date issued
2006
URI
http://hdl.handle.net/1721.1/36798
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology
Keywords
Electrical Engineering and Computer Science.

Collections
  • Graduate Theses
