Show simple item record

dc.contributor.author    Zeng, Andy
dc.contributor.author    Song, Shuran
dc.contributor.author    Yu, Kuan-Ting
dc.contributor.author    Donlon, Elliott S
dc.contributor.author    Hogan, Francois R.
dc.contributor.author    Bauza Villalonga, Maria
dc.contributor.author    Ma, Daolin
dc.contributor.author    Taylor, Orion Thomas
dc.contributor.author    Liu, Melody
dc.contributor.author    Romo, Eudald
dc.contributor.author    Fazeli, Nima
dc.contributor.author    Alet, Ferran
dc.contributor.author    Chavan Dafle, Nikhil Narsingh
dc.contributor.author    Holladay, Rachel
dc.contributor.author    Morona, Isabella
dc.contributor.author    Nair, Prem Qu
dc.contributor.author    Green, Druck
dc.contributor.author    Taylor, Ian
dc.contributor.author    Liu, Weber
dc.contributor.author    Funkhouser, Thomas
dc.contributor.author    Rodriguez, Alberto
dc.date.accessioned    2021-03-31T19:02:14Z
dc.date.available    2021-03-31T19:02:14Z
dc.date.issued    2019-08
dc.identifier.issn    0278-3649
dc.identifier.issn    1741-3176
dc.identifier.uri    https://hdl.handle.net/1721.1/130311
dc.description.abstract    This article presents a robotic pick-and-place system that is capable of grasping and recognizing both known and novel objects in cluttered environments. The key new feature of the system is that it handles a wide range of object categories without needing any task-specific training data for novel objects. To achieve this, it first uses an object-agnostic grasping framework to map from visual observations to actions: inferring dense pixel-wise probability maps of the affordances for four different grasping primitive actions. It then executes the action with the highest affordance and recognizes picked objects with a cross-domain image classification framework that matches observed images to product images. Since product images are readily available for a wide range of objects (e.g., from the web), the system works out-of-the-box for novel objects without requiring any additional data collection or re-training. Exhaustive experimental results demonstrate that our multi-affordance grasping achieves high success rates for a wide variety of objects in clutter, and our recognition algorithm achieves high accuracy for both known and novel grasped objects. The approach was part of the MIT–Princeton Team system that took first place in the stowing task at the 2017 Amazon Robotics Challenge. All code, datasets, and pre-trained models are available online at http://arc.cs.princeton.edu/    en_US
dc.description.sponsorship    NSF (Grants IIS-1251217, VEC 1539014/1539099)    en_US
dc.language.iso    en
dc.publisher    SAGE Publications    en_US
dc.relation.isversionof    10.1177/0278364919868017    en_US
dc.rights    Creative Commons Attribution-NonCommercial-NoDerivs License    en_US
dc.rights.uri    http://creativecommons.org/licenses/by-nc-nd/4.0/    en_US
dc.source    Sage    en_US
dc.title    Robotic pick-and-place of novel objects in clutter with multi-affordance grasping and cross-domain image matching    en_US
dc.type    Article    en_US
dc.identifier.citation    Zeng, Andy et al. "Robotic pick-and-place of novel objects in clutter with multi-affordance grasping and cross-domain image matching." International Journal of Robotics Research (August 2019): 1-16.    en_US
dc.contributor.department    Massachusetts Institute of Technology. Department of Mechanical Engineering    en_US
dc.relation.journal    International Journal of Robotics Research    en_US
dc.eprint.version    Final published version    en_US
dc.type.uri    http://purl.org/eprint/type/JournalArticle    en_US
eprint.status    http://purl.org/eprint/status/PeerReviewed    en_US
dc.date.updated    2020-08-03T13:55:36Z
dspace.date.submission    2020-08-03T13:55:39Z
mit.license    PUBLISHER_CC
mit.metadata.status    Complete
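
The abstract above describes a two-stage pipeline: dense pixel-wise affordance maps are predicted for four grasping primitives (suction-down, suction-side, grasp-down, and flush-grasp), the primitive and pixel with the highest affordance are executed, and the picked object is then recognized by matching an observed image against product images. The Python sketch below illustrates that control flow only; it is not the authors' released code, and predict_affordances and embed_image are hypothetical stand-ins for the learned networks described in the paper.

    # Minimal sketch of multi-affordance action selection and cross-domain
    # recognition, under the assumptions stated above (stubbed networks).
    import numpy as np

    PRIMITIVES = ["suction-down", "suction-side", "grasp-down", "flush-grasp"]

    def predict_affordances(rgbd_observation):
        """Stand-in for the learned model that returns one dense pixel-wise
        affordance map (H x W, values in [0, 1]) per grasping primitive."""
        h, w = rgbd_observation.shape[:2]
        return {p: np.random.rand(h, w) for p in PRIMITIVES}  # stub output

    def select_action(rgbd_observation):
        """Choose the primitive and pixel with the highest predicted affordance."""
        affordances = predict_affordances(rgbd_observation)
        primitive, (row, col), score = max(
            ((p, np.unravel_index(np.argmax(a), a.shape), float(a.max()))
             for p, a in affordances.items()),
            key=lambda t: t[2],
        )
        return primitive, (row, col), score

    def recognize(picked_image, product_embeddings, embed_image):
        """Match an observed image of the picked object to the closest product
        image in a shared embedding space (1-nearest neighbor)."""
        query = embed_image(picked_image)
        distances = {
            label: np.linalg.norm(query - emb)
            for label, emb in product_embeddings.items()
        }
        return min(distances, key=distances.get)

Because recognition is a nearest-neighbor lookup against product-image embeddings rather than a fixed classifier, adding a novel object in this sketch only requires adding its product-image embedding to product_embeddings, which mirrors the "no re-training for novel objects" claim in the abstract.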

