
dc.contributor.author: Zeng, Andy
dc.contributor.author: Song, Shuran
dc.contributor.author: Yu, Kuan-Ting
dc.contributor.author: Donlon, Elliott S.
dc.contributor.author: Hogan, Francois R.
dc.contributor.author: Bauza Villalonga, Maria
dc.contributor.author: Ma, Daolin
dc.contributor.author: Taylor, Orion Thomas
dc.contributor.author: Liu, Melody
dc.contributor.author: Romo, Eudald
dc.contributor.author: Fazeli, Nima
dc.contributor.author: Alet, Ferran
dc.contributor.author: Chavan Dafle, Nikhil Narsingh
dc.contributor.author: Holladay, Rachel
dc.contributor.author: Morena, Isabella
dc.contributor.author: Qu Nair, Prem
dc.contributor.author: Green, Druck
dc.contributor.author: Taylor, Ian
dc.contributor.author: Liu, Weber
dc.contributor.author: Funkhouser, Thomas
dc.contributor.author: Rodriguez, Alberto
dc.date.accessioned: 2020-09-01T16:02:35Z
dc.date.available: 2020-09-01T16:02:35Z
dc.date.issued: 2018-09
dc.date.submitted: 2018-05
dc.identifier.isbn: 9781538630815
dc.identifier.uri: https://hdl.handle.net/1721.1/126872
dc.description.abstract: This paper presents a robotic pick-and-place system that is capable of grasping and recognizing both known and novel objects in cluttered environments. The key new feature of the system is that it handles a wide range of object categories without needing any task-specific training data for novel objects. To achieve this, it first uses a category-agnostic affordance prediction algorithm to select and execute among four different grasping primitive behaviors. It then recognizes picked objects with a cross-domain image classification framework that matches observed images to product images. Since product images are readily available for a wide range of objects (e.g., from the web), the system works out-of-the-box for novel objects without requiring any additional training data. Exhaustive experimental results demonstrate that our multi-affordance grasping achieves high success rates for a wide variety of objects in clutter, and our recognition algorithm achieves high accuracy for both known and novel grasped objects. The approach was part of the MIT-Princeton Team system that took 1st place in the stowing task at the 2017 Amazon Robotics Challenge. All code, datasets, and pre-trained models are available online at http://arc.cs.princeton.edu.
dc.description.sponsorship: NSF (Grants IIS-1251217 and VEC 1539014/1539099)
dc.language.iso: en
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.relation.isversionof: http://dx.doi.org/10.1109/icra.2018.8461044
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/
dc.source: arXiv
dc.title: Robotic Pick-and-Place of Novel Objects in Clutter with Multi-Affordance Grasping and Cross-Domain Image Matching
dc.type: Article
dc.identifier.citation: Zeng, Andy et al. "Robotic Pick-and-Place of Novel Objects in Clutter with Multi-Affordance Grasping and Cross-Domain Image Matching." IEEE International Conference on Robotics and Automation, May 2018, Brisbane, Australia, Institute of Electrical and Electronics Engineers, September 2018. © 2018 IEEE
dc.contributor.department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.contributor.department: Massachusetts Institute of Technology. Department of Mechanical Engineering
dc.relation.journal: IEEE International Conference on Robotics and Automation (ICRA)
dc.eprint.version: Original manuscript
dc.type.uri: http://purl.org/eprint/type/ConferencePaper
eprint.status: http://purl.org/eprint/status/NonPeerReviewed
dc.date.updated: 2020-08-03T12:54:52Z
dspace.date.submission: 2020-08-03T12:54:55Z
mit.license: OPEN_ACCESS_POLICY
mit.metadata.status: Complete
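
The abstract above describes two algorithmic components: category-agnostic affordance prediction used to choose among four grasping primitives, and cross-domain image matching used to recognize picked objects against product images. The following minimal Python sketch illustrates that control flow. It is not the released code from http://arc.cs.princeton.edu; the function names, the four primitive labels, the 64-dimensional embedding, and the random stand-ins for the learned networks are illustrative assumptions.

```python
import numpy as np

PRIMITIVES = ["suction-down", "suction-side", "grasp-down", "flush-grasp"]

def predict_affordances(rgbd_heightmap, rng):
    # Stand-in for the learned affordance predictor: one dense map of
    # scores in [0, 1) per primitive. Random values keep the sketch
    # self-contained and runnable.
    h, w = rgbd_heightmap.shape[:2]
    return {p: rng.random((h, w)) for p in PRIMITIVES}

def select_primitive(affordance_maps):
    # Multi-affordance grasping: execute the primitive whose map contains
    # the single highest-scoring pixel, at that pixel's location.
    primitive, heat = max(affordance_maps.items(), key=lambda kv: kv[1].max())
    row, col = np.unravel_index(heat.argmax(), heat.shape)
    return primitive, (int(row), int(col)), float(heat.max())

def embed(image, rng):
    # Stand-in for the cross-domain embedding network that maps observed
    # images and product images into one shared feature space (64-d is an
    # arbitrary illustrative size).
    return rng.standard_normal(64)

def recognize(observed_image, product_embeddings, rng):
    # Cross-domain matching: label the pick with its nearest product image
    # in embedding space.
    query = embed(observed_image, rng)
    return min(product_embeddings,
               key=lambda name: np.linalg.norm(product_embeddings[name] - query))

if __name__ == "__main__":
    rng = np.random.default_rng(seed=0)
    bin_view = np.zeros((224, 224, 4))     # RGB-D heightmap of the cluttered bin
    primitive, pixel, score = select_primitive(predict_affordances(bin_view, rng))
    print(f"execute {primitive} at {pixel} (affordance {score:.2f})")

    # Product images embed once; no task-specific training for novel objects.
    products = {name: embed(np.zeros((224, 224, 3)), rng)
                for name in ["duct_tape", "scissors", "sponge"]}
    picked_view = np.zeros((224, 224, 3))  # camera image of the grasped object
    print("recognized as:", recognize(picked_view, products, rng))
```

Because product images can be embedded at lookup time without retraining, the same pipeline extends to novel objects out of the box, which is the property the abstract emphasizes.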

