
dc.contributor.advisor	Leslie P. Kaelbling and Paul A. DeBitetto.	en_US
dc.contributor.author	Jimenez, Antonio R	en_US
dc.contributor.other	Massachusetts Institute of Technology. Dept. of Electrical Engineering and Computer Science.	en_US
dc.date.accessioned	2007-03-12T17:54:43Z
dc.date.available	2007-03-12T17:54:43Z
dc.date.copyright	2006	en_US
dc.date.issued	2006	en_US
dc.identifier.uri	http://hdl.handle.net/1721.1/36798
dc.description	Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006.	en_US
dc.description	Includes bibliographical references (leaves 63-65).	en_US
dc.description.abstract	Legged locomotion is a challenging problem for machine learning to solve. A quadruped has 12 degrees of freedom, which results in a large state space for the resulting Markov Decision Problem (MDP). It is too difficult for computers to learn the state space completely, while it is too difficult for humans to fully understand the system dynamics and directly program the most efficient controller. This thesis combines these two approaches by integrating a model-based controller with reinforcement learning to develop an effective walk for a quadruped robot. We then evaluate different policy search approaches to reinforcement learning. To solve the Partially Observable Markov Decision Problem (POMDP), a deterministic simulation is developed that generates a model which allows us to conduct a direct policy search using dynamic programming. This is compared against using a nondeterministic simulation to generate a model that evaluates policies. We show that using deterministic transitions to allow the use of dynamic programming has little impact on the performance of our system. Two local policy search approaches are implemented.	en_US
dc.description.abstract	(cont.) A hill climbing algorithm is compared to a policy gradient algorithm to optimize parameters for the robot's model-based controller. The optimal machine-learned policy achieved a 155% increase in performance over the hand-tuned policy. The baseline hill climbing algorithm is shown to outperform the policy gradient algorithm with this particular gait.	en_US
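The abstract compares two local policy search methods: hill climbing and policy gradient. Below is a minimal, illustrative sketch of that comparison, not the thesis's code. The function evaluate_policy is a hypothetical stand-in for running the model-based gait controller in simulation and returning a scalar score (e.g., forward walking speed), and the choice of 12 parameters merely echoes the 12 degrees of freedom mentioned above, not the thesis's actual gait parameterization.

```python
# Sketch of two local policy search methods over gait-controller parameters.
# evaluate_policy is a placeholder objective; a real system would run the
# simulated quadruped with these parameters and measure walking performance.

import random

def evaluate_policy(params):
    """Hypothetical stand-in for the gait simulation: returns a scalar score."""
    return -sum((p - 0.5) ** 2 for p in params)

def hill_climb(params, step=0.05, iters=200):
    """Hill climbing: perturb the parameters, keep the change only if it helps."""
    best, best_score = list(params), evaluate_policy(params)
    for _ in range(iters):
        candidate = [p + random.uniform(-step, step) for p in best]
        score = evaluate_policy(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

def policy_gradient(params, step=0.05, lr=0.1, iters=200):
    """Finite-difference policy gradient: estimate d(score)/d(param) per
    dimension from paired evaluations, then step along the gradient."""
    theta = list(params)
    for _ in range(iters):
        grad = []
        for i in range(len(theta)):
            up = list(theta); up[i] += step
            down = list(theta); down[i] -= step
            grad.append((evaluate_policy(up) - evaluate_policy(down)) / (2 * step))
        theta = [t + lr * g for t, g in zip(theta, grad)]
    return theta, evaluate_policy(theta)

if __name__ == "__main__":
    init = [random.random() for _ in range(12)]  # e.g., 12 gait parameters
    print("hill climbing score:", hill_climb(init)[1])
    print("policy gradient score:", policy_gradient(init)[1])
```

One design note the sketch makes visible: per iteration, hill climbing needs a single policy evaluation, while the finite-difference gradient needs two per parameter, which is one plausible reason a simple hill climber can be competitive when each evaluation means running a full gait simulation.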
dc.description.statementofresponsibility	by Antonio R. Jimenez.	en_US
dc.format.extent	65 leaves	en_US
dc.language.iso	eng	en_US
dc.publisher	Massachusetts Institute of Technology	en_US
dc.rights	M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission.	en_US
dc.rights.uri	http://dspace.mit.edu/handle/1721.1/7582
dc.subject	Electrical Engineering and Computer Science.	en_US
dc.title	Policy search approaches to reinforcement learning for quadruped locomotion	en_US
dc.type	Thesis	en_US
dc.description.degree	M.Eng.	en_US
dc.contributor.department	Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.identifier.oclc	79650848	en_US

