dc.contributor.advisor: Kim, Sangbae
dc.contributor.author: Miller, Adam Joseph
dc.date.accessioned: 2023-01-19T18:42:41Z
dc.date.available: 2023-01-19T18:42:41Z
dc.date.issued: 2022-09
dc.date.submitted: 2022-10-19T18:57:53.967Z
dc.identifier.uri: https://hdl.handle.net/1721.1/147283
dc.description.abstract: The development of legged robots capable of navigating in and interacting with the world is advancing quickly as new methods for sensing, decision-making, and control expand the capabilities of state-of-the-art systems. Model-based methods, empowered by greater computing capacity and clever formulations, imbue systems with deeper physics-based understanding, while machine learning techniques, enabled by parallelized data generation and more efficient training, impart greater robustness to noise and the ability to handle poorly defined world features. Together these tools constitute the two major paradigms of legged robot research, and although both have shortcomings, their limitations are complementary and can be offset by the other paradigm's strengths. We propose MIMOC: Motion Imitation from Model-Based Optimal Control. MIMOC is a Reinforcement Learning (RL) locomotion controller that learns agile locomotion by imitating reference trajectories from model-based optimal control. MIMOC mitigates challenges faced by other motion-imitation RL approaches because the generated reference trajectories are dynamically consistent, require no motion retargeting, and include torque references that are essential for learning dynamic locomotion. As a result, MIMOC requires no fine-tuning to transfer the policy to real robots. MIMOC also overcomes key issues with model-based optimal controllers: because it is trained with simulated sensor noise and domain randomization, it is less sensitive to modeling and state-estimation inaccuracies. We validate MIMOC on the Mini-Cheetah in outdoor environments over a wide variety of challenging terrain and on the MIT Humanoid in simulation. We show that MIMOC transfers to the real world and to different legged platforms. We also show cases where MIMOC outperforms model-based optimal controllers and demonstrate the value of imitating torque references.
dc.publisher: Massachusetts Institute of Technology
dc.rights: In Copyright - Educational Use Permitted
dc.rights: Copyright MIT
dc.rights.uri: http://rightsstatements.org/page/InC-EDU/1.0/
dc.title: Learning Legged Locomotion by Physics-based Initialization: Motion Imitation from Model-Based Optimal Control
dc.type: Thesis
dc.description.degree: S.M.
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
mit.thesis.degree: Master
thesis.degree.name: Master of Science in Electrical Engineering and Computer Science
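
Illustrative note: the record does not spell out how the imitation objective or the training-time randomization are implemented. The Python sketch below only illustrates the ideas named in the abstract, namely tracking position, velocity, and torque references from a model-based optimal-control trajectory while training with simulated sensor noise and domain randomization. All function names, reward weights, and randomization ranges are assumptions for illustration, not values taken from the thesis.

import numpy as np

def imitation_reward(q, qd, tau, q_ref, qd_ref, tau_ref,
                     w_q=5.0, w_qd=0.1, w_tau=0.01):
    # Exponential tracking terms over joint positions, velocities, and
    # torques relative to the optimal-control reference; the torque term
    # reflects the abstract's emphasis on imitating torque references.
    # The weights here are illustrative placeholders.
    r_q = np.exp(-w_q * np.sum((q - q_ref) ** 2))
    r_qd = np.exp(-w_qd * np.sum((qd - qd_ref) ** 2))
    r_tau = np.exp(-w_tau * np.sum((tau - tau_ref) ** 2))
    return r_q + r_qd + r_tau

def add_sensor_noise(rng, q, qd, sigma_q=0.01, sigma_qd=0.1):
    # Corrupt joint measurements with Gaussian noise before they reach
    # the policy, mimicking state-estimation error during training.
    return (q + rng.normal(0.0, sigma_q, q.shape),
            qd + rng.normal(0.0, sigma_qd, qd.shape))

def randomize_dynamics(rng, nominal_mass, nominal_friction):
    # Per-episode domain randomization of dynamics parameters; the
    # ranges are placeholders, not values from the thesis.
    return (nominal_mass * rng.uniform(0.8, 1.2),
            nominal_friction * rng.uniform(0.5, 1.25))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 12  # e.g. 12 actuated joints on a quadruped such as Mini-Cheetah
    q_ref, qd_ref, tau_ref = rng.normal(size=(3, n))
    q, qd = add_sensor_noise(rng, q_ref, qd_ref)
    tau = tau_ref + rng.normal(0.0, 0.05, n)
    print(imitation_reward(q, qd, tau, q_ref, qd_ref, tau_ref))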

