Learning Synergies Between Pushing and Grasping with Self-Supervised Deep Reinforcement Learning

Author(s)
Zeng, Andy; Song, Shuran; Welker, Stefan; Lee, Johnny; Rodriguez Garcia, Alberto; Funkhouser, Thomas
Accepted version (4.150 MB)
Terms of use
Creative Commons Attribution-Noncommercial-Share Alike http://creativecommons.org/licenses/by-nc-sa/4.0/
Abstract
Skilled robotic manipulation benefits from complex synergies between non-prehensile (e.g. pushing) and prehensile (e.g. grasping) actions: pushing can help rearrange cluttered objects to make space for arms and fingers; likewise, grasping can help displace objects to make pushing movements more precise and collision-free. In this work, we demonstrate that it is possible to discover and learn these synergies from scratch through model-free deep reinforcement learning. Our method involves training two fully convolutional networks that map from visual observations to actions: one infers the utility of pushes for a dense pixel-wise sampling of end-effector orientations and locations, while the other does the same for grasping. Both networks are trained jointly in a Q-learning framework and are entirely self-supervised by trial and error, where rewards are provided from successful grasps. In this way, our policy learns pushing motions that enable future grasps, while learning grasps that can leverage past pushes. During picking experiments in both simulation and real-world scenarios, we find that our system quickly learns complex behaviors even amid challenging cases of tightly packed clutter, and achieves better grasping success rates and picking efficiencies than baseline alternatives after a few hours of training. We further demonstrate that our method is capable of generalizing to novel objects. Qualitative results (videos), code, pre-trained models, and simulation environments are available at http://vpg.cs.princeton.edu .
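
To make the setup described in the abstract concrete, the sketch below illustrates the pixel-wise Q-learning idea: two fully convolutional networks output dense Q-value maps over end-effector locations for pushing and grasping, the greedy action is the best pixel across both maps, and the executed action is updated with a Huber loss against a bootstrapped target whose reward comes from grasp success. This is an illustrative sketch in PyTorch, not the authors' released implementation (see http://vpg.cs.princeton.edu for that); the network sizes, hyperparameters, and the names DenseQNet, greedy_action, and q_learning_step are assumptions, and sampling of end-effector orientations (handled in the paper by evaluating rotated copies of the observation) is omitted.

```python
import torch
import torch.nn as nn


class DenseQNet(nn.Module):
    """Fully convolutional network: maps a visual observation (e.g. an
    RGB-D heightmap with C input channels) to a dense H x W map of
    Q-values, one per candidate end-effector location."""

    def __init__(self, in_channels: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, kernel_size=1),       # one Q-value per pixel
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(1)              # (B, H, W)


push_net, grasp_net = DenseQNet(), DenseQNet()
params = list(push_net.parameters()) + list(grasp_net.parameters())
optimizer = torch.optim.SGD(params, lr=1e-4, momentum=0.9)  # assumed hyperparameters
huber = nn.SmoothL1Loss()
gamma = 0.5                                        # assumed discount on future rewards


def greedy_action(obs: torch.Tensor):
    """Pick the primitive (0 = push, 1 = grasp) and the pixel with the
    highest predicted Q-value across both dense maps."""
    with torch.no_grad():
        q = torch.stack([push_net(obs), grasp_net(obs)])   # (2, 1, H, W), batch size 1
    idx = int(q.argmax())
    h, w = q.shape[2], q.shape[3]
    primitive, rest = divmod(idx, h * w)
    y, x = divmod(rest, w)
    return primitive, y, x


def q_learning_step(obs, primitive, y, x, reward, next_obs):
    """One self-supervised update. Per the abstract, the reward signal comes
    from successful grasps (1 if the grasp succeeded, 0 otherwise); pushes are
    reinforced indirectly through the bootstrapped target when they enable
    future grasps."""
    with torch.no_grad():
        next_q = torch.max(push_net(next_obs).max(), grasp_net(next_obs).max())
        target = reward + gamma * next_q
    net = grasp_net if primitive == 1 else push_net
    q_sa = net(obs)[0, y, x]                       # Q-value of the executed action
    loss = huber(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```

In a full training loop, the robot would alternate greedy_action (plus some exploration) with q_learning_step after each executed primitive, so the labels come entirely from trial and error rather than human annotation.
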
Date issued
2019-01
URI
https://hdl.handle.net/1721.1/130010
Department
Massachusetts Institute of Technology. Department of Mechanical Engineering
Journal
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Publisher
Institute of Electrical and Electronics Engineers (IEEE)
Citation
Zeng, Andy et al. "Learning Synergies Between Pushing and Grasping with Self-Supervised Deep Reinforcement Learning." IEEE/RSJ International Conference on Intelligent Robots and Systems, October 2018, Madrid, Spain, Institute of Electrical and Electronics Engineers, January 2019. © 2018 IEEE
Version: Author's final manuscript
ISBN
9781538680940
ISSN
2153-0866

Collections
  • MIT Open Access Articles
