Learning Sim-to-Real Robot Parkour from RGB Images
Author(s)
Jenkins, Andrew
Advisor
Agrawal, Pulkit
Abstract
Advancements in quadrupedal robot locomotion have yielded impressive results, achieving dynamic maneuvers like climbing, ducking, and jumping. These successes are largely attributed to depth-based visual locomotion policies, known for their robust transferability between simulated and real-world environments (sim-to-real). However, depth information inherently lacks the semantic cues present in RGB images. This thesis investigates the application of an RGB visual locomotion policy for navigating complex environments, specifically focusing on extreme parkour terrain. While RGB data offers a richer understanding of the scene through semantic cues, it poses challenges for sim-to-real transfer due to the large domain gap between rendered and real images. This work proposes a novel approach for training an RGB parkour policy and demonstrates that it achieves performance comparable to depth-based approaches in simulation. Furthermore, we successfully deploy and evaluate our RGB policy on real-world parkour obstacles, indicating its potential for practical applications.
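The abstract highlights the appearance gap between simulated and real RGB images as the central obstacle to transfer. A standard way to narrow such a gap is photometric domain randomization of the rendered frames during training. The sketch below is a minimal illustration of that general idea only; it does not describe the thesis's actual training pipeline, and the function and parameter values are hypothetical.

```python
import numpy as np

def randomize_rgb(frame: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Apply random photometric perturbations to a simulated RGB frame.

    frame: HxWx3 uint8 image rendered by the simulator.
    Returns a perturbed uint8 image of the same shape.
    (Illustrative sketch; not the method used in the thesis.)
    """
    img = frame.astype(np.float32) / 255.0

    # Random per-channel gain mimics lighting and white-balance shifts.
    gain = rng.uniform(0.7, 1.3, size=(1, 1, 3))
    # Random brightness offset plus additive pixel noise mimics sensor variation.
    offset = rng.uniform(-0.1, 0.1)
    noise = rng.normal(0.0, 0.02, size=img.shape)

    img = np.clip(img * gain + offset + noise, 0.0, 1.0)
    return (img * 255.0).astype(np.uint8)


# Example: perturb a dummy 64x64 rendered frame before it is fed to the policy.
rng = np.random.default_rng(0)
sim_frame = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
augmented = randomize_rgb(sim_frame, rng)
```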
Date issued
2024-05
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology