DSpace@MIT
  • DSpace@MIT Home
  • MIT Open Access Articles
  • MIT Open Access Articles
  • View Item

Robotic pick-and-place of novel objects in clutter with multi-affordance grasping and cross-domain image matching

Author(s)
Zeng, Andy; Song, Shuran; Yu, Kuan-Ting; Donlon, Elliott S; Hogan, Francois R.; Bauza Villalonga, Maria; Ma, Daolin; Taylor, Orion Thomas; Liu, Melody; Romo, Eudald; Fazeli, Nima; Alet, Ferran; Chavan Dafle, Nikhil Narsingh; Holladay, Rachel; Morona, Isabella; Nair, Prem Qu; Green, Druck; Taylor, Ian; Liu, Weber; Funkhouser, Thomas; Rodriguez, Alberto
Download: Published version (3.029 MB)
Publisher with Creative Commons License: Creative Commons Attribution

Terms of use
Creative Commons Attribution-NonCommercial-NoDerivs License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Metadata
Abstract
This article presents a robotic pick-and-place system that is capable of grasping and recognizing both known and novel objects in cluttered environments. The key new feature of the system is that it handles a wide range of object categories without needing any task-specific training data for novel objects. To achieve this, it first uses an object-agnostic grasping framework to map from visual observations to actions: inferring dense pixel-wise probability maps of the affordances for four different grasping primitive actions. It then executes the action with the highest affordance and recognizes picked objects with a cross-domain image classification framework that matches observed images to product images. Since product images are readily available for a wide range of objects (e.g., from the web), the system works out-of-the-box for novel objects without requiring any additional data collection or re-training. Exhaustive experimental results demonstrate that our multi-affordance grasping achieves high success rates for a wide variety of objects in clutter, and our recognition algorithm achieves high accuracy for both known and novel grasped objects. The approach was part of the MIT–Princeton Team system that took first place in the stowing task at the 2017 Amazon Robotics Challenge. All code, datasets, and pre-trained models are available online at http://arc.cs.princeton.edu/
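The pipeline the abstract describes — infer a dense per-pixel affordance map for each grasping primitive, execute the action with the highest affordance, then recognize the picked object by matching it against product images — can be sketched as follows. This is a minimal illustration only: the primitive names, map shapes, and cosine-similarity matcher are assumptions for the sketch, not the paper's actual networks (the system uses learned models to produce the affordance maps and the cross-domain embeddings).

```python
import numpy as np

def select_grasp(affordance_maps):
    """Pick the grasping primitive and pixel with the highest affordance.

    affordance_maps: dict mapping a primitive name to a dense H x W
    per-pixel affordance map. Returns (primitive, (row, col), score).
    """
    best_prim, best_pixel, best_score = None, None, -np.inf
    for prim, amap in affordance_maps.items():
        idx = np.unravel_index(np.argmax(amap), amap.shape)
        if amap[idx] > best_score:
            best_prim, best_pixel, best_score = prim, idx, float(amap[idx])
    return best_prim, best_pixel, best_score

def match_product(obs_embedding, product_embeddings):
    """Recognize a picked object by nearest-neighbor matching of its
    observed-image embedding against product-image embeddings
    (cosine similarity stands in for the learned matching network)."""
    names = list(product_embeddings)
    mat = np.stack([product_embeddings[n] for n in names])
    sims = mat @ obs_embedding / (
        np.linalg.norm(mat, axis=1) * np.linalg.norm(obs_embedding) + 1e-9
    )
    return names[int(np.argmax(sims))]

# Toy run: four hypothetical primitives with random "affordance" maps.
rng = np.random.default_rng(0)
maps = {p: rng.random((48, 64))
        for p in ("suction-down", "suction-side", "grasp-down", "flush-grasp")}
primitive, pixel, score = select_grasp(maps)
```

Because product images are available for novel objects without retraining, only `match_product`'s embedding dictionary changes when new items are added; the grasping side is object-agnostic.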
Date issued
2019-08
URI
https://hdl.handle.net/1721.1/130311
Department
Massachusetts Institute of Technology. Department of Mechanical Engineering
Journal
International Journal of Robotics Research
Publisher
SAGE Publications
Citation
Zeng, Andy et al. "Robotic pick-and-place of novel objects in clutter with multi-affordance grasping and cross-domain image matching." International Journal of Robotics Research (August 2019): 1-16.
Version: Final published version
ISSN
0278-3649
1741-3176

Collections
  • MIT Open Access Articles

Content created by the MIT Libraries, CC BY-NC unless otherwise noted. Notify us about copyright concerns.