Visual Transfer Learning for Robotic Manipulation
Author(s)
Lin, Yen-Chen
Advisor
Isola, Phillip J.
Abstract
Humans are remarkably adept at manipulating unfamiliar objects. Over the past decades, tremendous effort in robotics has been dedicated to endowing robot manipulation systems with similar capabilities. Because classic solutions typically require prior knowledge of the objects (e.g., 3D CAD models) that is not available in unstructured environments, data-driven solutions that learn from robot-environment interactions (e.g., trial and error) have emerged as a promising approach for autonomously acquiring complex manipulation skills. For data-driven methods, the ability to do more with less data is critically important, since data collection through physical interaction between a robot and its environment can be both time-consuming and expensive. In this thesis, we develop transfer learning algorithms for robotic manipulation that reduce the amount of robot-environment interaction needed to adapt to new environments. On real robot hardware, we show that our algorithms enable robots to learn to grasp and pick up arbitrary objects with 10 minutes of trial and error, and help robots learn to push unfamiliar objects with just 5 interactions.
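The transfer-learning recipe the abstract summarizes can be illustrated with a minimal sketch: a visual backbone pretrained on a generic vision task is reused as a frozen feature extractor, and only a small task-specific head is fit from a handful of robot interactions, which is what keeps the required interaction data small. Everything below is an illustrative assumption, not the thesis's actual implementation: the ResNet-18 backbone choice, the grasp-success head, and the toy data shapes are all hypothetical.

```python
# Minimal transfer-learning sketch (illustrative only; not the thesis code).
# Assumption: an ImageNet-pretrained backbone is frozen, and a small
# grasp-success head is fit from a few labeled grasp attempts.
import torch
import torch.nn as nn
import torchvision.models as models

# Load a pretrained visual backbone and freeze its weights, so the few
# robot-environment interactions only have to fit the small head.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()          # expose the 512-d feature vector
for p in backbone.parameters():
    p.requires_grad = False

# Hypothetical head: predicts grasp success probability from image features.
head = nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 1))

optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Toy stand-in for a handful of trial-and-error interactions:
# RGB images of grasp attempts and binary success labels.
images = torch.randn(16, 3, 224, 224)
labels = torch.randint(0, 2, (16, 1)).float()

backbone.eval()
for epoch in range(20):
    with torch.no_grad():            # frozen backbone: no gradients needed
        feats = backbone(images)
    logits = head(feats)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Freezing the backbone is the design choice that makes learning from minutes of trial and error plausible: the optimization problem shrinks from millions of backbone parameters to a head with a few thousand.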
Date issued
2021-06
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology