View Synthesis for Visuomotor Policy Learning
Author(s)
Lin, Yen-Chen
Advisor
Isola, Phillip
Abstract
Visuomotor policy learning is the problem of teaching machines to use visual information to decide how to interact with their environment. Recent approaches have harnessed deep learning models to demonstrate impressive results in multi-modal and multi-task generalization. However, these models often lack a comprehensive understanding of the 3D world because they are primarily trained on large-scale RGB image datasets. In this thesis, we present a new framework that equips visuomotor policies with a view synthesizer: a generative model that can render coherent novel viewpoints of the 3D environment. Unlike real-world data alone, a view synthesizer can produce views of a 3D scene in a controllable manner. This capability helps the policy exploit symmetries present in robotic tasks in two ways: learned utilization and designed utilization. Learned utilization expands the visuomotor policy’s training dataset, implicitly encouraging symmetric properties to emerge through learning. Designed utilization, in contrast, builds symmetries directly into the policy’s input representations and model architecture, establishing symmetric properties explicitly. We demonstrate that the proposed systems achieve better sample efficiency and generalization than visuomotor policies without view synthesis.
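As a concrete illustration of the learned-utilization idea, the sketch below shows how synthesized novel views might be used to augment a behavior-cloning dataset for a visuomotor policy. It is a minimal, hypothetical example rather than the thesis implementation: ViewSynthesizer, Policy, augment_with_synthesized_views, and the random pose sampling are all assumed stand-ins for the components described in the abstract.

# Minimal sketch (not the thesis code): "learned utilization" of a view
# synthesizer as data augmentation for a behavior-cloning visuomotor policy.
import torch
import torch.nn as nn


class ViewSynthesizer(nn.Module):
    """Placeholder generative model: renders the scene from a new camera pose."""
    def forward(self, image: torch.Tensor, pose: torch.Tensor) -> torch.Tensor:
        # A real synthesizer would re-render the scene from `pose`;
        # here we return the input unchanged to keep the sketch runnable.
        return image


class Policy(nn.Module):
    """Toy visuomotor policy: image -> action."""
    def __init__(self, action_dim: int = 7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, action_dim),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.net(image)


def augment_with_synthesized_views(synth, images, actions, n_views=4):
    """Expand a demonstration batch with novel-view renderings.

    Action labels are reused unchanged, which implicitly encourages the
    policy to become invariant to the sampled viewpoint changes.
    """
    aug_images, aug_actions = [images], [actions]
    for _ in range(n_views):
        poses = torch.randn(images.shape[0], 6)  # hypothetical pose perturbations
        aug_images.append(synth(images, poses))
        aug_actions.append(actions)
    return torch.cat(aug_images), torch.cat(aug_actions)


if __name__ == "__main__":
    synth, policy = ViewSynthesizer(), Policy()
    opt = torch.optim.Adam(policy.parameters(), lr=1e-4)
    images = torch.rand(8, 3, 64, 64)   # demonstration observations
    actions = torch.randn(8, 7)         # expert action labels
    x, y = augment_with_synthesized_views(synth, images, actions)
    loss = nn.functional.mse_loss(policy(x), y)  # behavior-cloning loss
    loss.backward()
    opt.step()
    print("augmented batch:", x.shape, "loss:", float(loss))

Designed utilization would instead bake the symmetry into the policy itself, for example by feeding it a fixed set of canonical synthesized viewpoints or by using an equivariant architecture; that variant is not sketched here.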
Date issued
2023-09
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology