Show simple item record

dc.contributor.author: Zeng, Andy
dc.contributor.author: Song, Shuran
dc.contributor.author: Welker, Stefan
dc.contributor.author: Lee, Johnny
dc.contributor.author: Rodriguez Garcia, Alberto
dc.contributor.author: Funkhouser, Thomas
dc.date.accessioned: 2021-02-26T16:49:08Z
dc.date.available: 2021-02-26T16:49:08Z
dc.date.issued: 2019-01
dc.date.submitted: 2018-10
dc.identifier.isbn: 9781538680940
dc.identifier.issn: 2153-0866
dc.identifier.uri: https://hdl.handle.net/1721.1/130010
dc.description.abstract: Skilled robotic manipulation benefits from complex synergies between non-prehensile (e.g. pushing) and prehensile (e.g. grasping) actions: pushing can help rearrange cluttered objects to make space for arms and fingers; likewise, grasping can help displace objects to make pushing movements more precise and collision-free. In this work, we demonstrate that it is possible to discover and learn these synergies from scratch through model-free deep reinforcement learning. Our method involves training two fully convolutional networks that map from visual observations to actions: one infers the utility of pushes for a dense pixel-wise sampling of end-effector orientations and locations, while the other does the same for grasping. Both networks are trained jointly in a Q-learning framework and are entirely self-supervised by trial and error, where rewards are provided from successful grasps. In this way, our policy learns pushing motions that enable future grasps, while learning grasps that can leverage past pushes. During picking experiments in both simulation and real-world scenarios, we find that our system quickly learns complex behaviors even amid challenging cases of tightly packed clutter, and achieves better grasping success rates and picking efficiencies than baseline alternatives after a few hours of training. We further demonstrate that our method is capable of generalizing to novel objects. Qualitative results (videos), code, pre-trained models, and simulation environments are available at http://vpg.cs.princeton.edu.
dc.description.sponsorship: NSF (Grant VEC-1539014/1539099)
dc.language.iso: en
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.relation.isversionof: http://dx.doi.org/10.1109/iros.2018.8593986
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/
dc.source: arXiv
dc.title: Learning Synergies Between Pushing and Grasping with Self-Supervised Deep Reinforcement Learning
dc.type: Article
dc.identifier.citation: Zeng, Andy et al. "Learning Synergies Between Pushing and Grasping with Self-Supervised Deep Reinforcement Learning." IEEE/RSJ International Conference on Intelligent Robots and Systems, October 2018, Madrid, Spain, Institute of Electrical and Electronics Engineers, January 2019. © 2018 IEEE
dc.contributor.department: Massachusetts Institute of Technology. Department of Mechanical Engineering
dc.relation.journal: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
dc.eprint.version: Author's final manuscript
dc.type.uri: http://purl.org/eprint/type/ConferencePaper
eprint.status: http://purl.org/eprint/status/NonPeerReviewed
dc.date.updated: 2020-07-31T18:11:55Z
dspace.date.submission: 2020-07-31T18:11:57Z
mit.license: OPEN_ACCESS_POLICY
mit.metadata.status: Complete
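The abstract describes two fully convolutional networks that output dense pixel-wise Q maps (one map per end-effector rotation) for pushing and grasping, with the executed action taken as the argmax over both maps and training done by standard Q-learning. A minimal, hypothetical sketch of that action selection and target computation is below; it is not the authors' released code (available at http://vpg.cs.princeton.edu), and the function names, map shapes, and the `gamma=0.5` default are illustrative assumptions.

```python
import numpy as np

def select_action(q_push, q_grasp):
    """Greedy action: pick the primitive (push or grasp), the end-effector
    rotation, and the pixel location with the highest predicted Q value.
    Each Q map has shape (num_rotations, height, width)."""
    maps = {"push": q_push, "grasp": q_grasp}
    primitive = max(maps, key=lambda k: maps[k].max())
    q = maps[primitive]
    rot, y, x = np.unravel_index(np.argmax(q), q.shape)
    return primitive, (rot, y, x), q[rot, y, x]

def q_target(reward, q_push_next, q_grasp_next, gamma=0.5):
    """One-step Q-learning target: y = r + gamma * max_a' Q(s', a'),
    where the max runs over both primitives' pixel-wise Q maps."""
    return reward + gamma * max(q_push_next.max(), q_grasp_next.max())
```

In this sketch the synergy arises implicitly: since rewards come only from successful grasps, a push is reinforced only when it raises the maximum grasp Q value achievable in the next state.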

