
dc.contributor.advisor: Agrawal, Pulkit
dc.contributor.author: Jenkins, Andrew
dc.date.accessioned: 2024-09-24T18:23:54Z
dc.date.available: 2024-09-24T18:23:54Z
dc.date.issued: 2024-05
dc.date.submitted: 2024-07-11T15:30:33.981Z
dc.identifier.uri: https://hdl.handle.net/1721.1/156972
dc.description.abstract: Advancements in quadrupedal robot locomotion have yielded impressive results, enabling dynamic maneuvers such as climbing, ducking, and jumping. These successes are largely attributable to depth-based visual locomotion policies, known for their robust transfer between simulated and real-world environments (sim-to-real). However, depth data inherently lacks the semantic cues present in RGB images. This thesis investigates the application of an RGB visual locomotion policy to navigating complex environments, focusing on extreme parkour terrain. While RGB data offers a richer understanding of the scene through semantic cues, it complicates sim-to-real transfer due to the large domain gap between rendered and real images. This work proposes a novel approach for training an RGB parkour policy and demonstrates that it achieves performance comparable to depth-based approaches in simulation. Furthermore, we successfully deploy and evaluate our RGB policy on real-world parkour obstacles, showing its potential for practical applications.
dc.publisher: Massachusetts Institute of Technology
dc.rights: In Copyright - Educational Use Permitted
dc.rights: Copyright retained by author(s)
dc.rights.uri: https://rightsstatements.org/page/InC-EDU/1.0/
dc.title: Learning Sim-to-Real Robot Parkour from RGB Images
dc.type: Thesis
dc.description.degree: M.Eng.
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
mit.thesis.degree: Master
thesis.degree.name: Master of Engineering in Computation and Cognition

