dc.contributor.author: Calandra, Roberto
dc.contributor.author: Owens, Andrew
dc.contributor.author: Jayaraman, Dinesh
dc.contributor.author: Lin, Justin
dc.contributor.author: Yuan, Wenzhen
dc.contributor.author: Malik, Jitendra
dc.contributor.author: Adelson, Edward H
dc.contributor.author: Levine, Sergey
dc.date.accessioned: 2020-08-25T19:21:29Z
dc.date.available: 2020-08-25T19:21:29Z
dc.date.issued: 2018-07
dc.identifier.issn: 2377-3766
dc.identifier.issn: 2377-3774
dc.identifier.uri: https://hdl.handle.net/1721.1/126806
dc.description.abstract: For humans, the process of grasping an object relies heavily on rich tactile feedback. Most recent robotic grasping work, however, has been based only on visual input, and thus cannot easily benefit from feedback after initiating contact. In this letter, we investigate how a robot can learn to use tactile information to iteratively and efficiently adjust its grasp. To this end, we propose an end-to-end action-conditional model that learns regrasping policies from raw visuo-tactile data. This model - a deep, multimodal convolutional network - predicts the outcome of a candidate grasp adjustment, and then executes a grasp by iteratively selecting the most promising actions. Our approach requires neither calibration of the tactile sensors nor any analytical modeling of contact forces, thus reducing the engineering effort required to obtain efficient grasping policies. We train our model with data from about 6450 grasping trials on a two-finger gripper equipped with GelSight high-resolution tactile sensors on each finger. Across extensive experiments, our approach outperforms a variety of baselines at 1) estimating grasp adjustment outcomes, 2) selecting efficient grasp adjustments for quick grasping, and 3) reducing the amount of force applied at the fingers, while maintaining competitive performance. Finally, we study the choices made by our model and show that it has successfully acquired useful and interpretable grasping behaviors. [en_US]
dc.language.iso: en
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE) [en_US]
dc.relation.isversionof: http://dx.doi.org/10.1109/lra.2018.2852779 [en_US]
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike [en_US]
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/ [en_US]
dc.source: arXiv [en_US]
dc.title: More Than a Feeling: Learning to Grasp and Regrasp Using Vision and Touch [en_US]
dc.type: Article [en_US]
dc.identifier.citation: Calandra, Roberto et al. "More Than a Feeling: Learning to Grasp and Regrasp Using Vision and Touch." IEEE Robotics and Automation Letters 3, 4 (October 2018): 3300-3307. © 2018 IEEE [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science [en_US]
dc.relation.journal: IEEE Robotics and Automation Letters [en_US]
dc.eprint.version: Author's final manuscript [en_US]
dc.type.uri: http://purl.org/eprint/type/JournalArticle [en_US]
eprint.status: http://purl.org/eprint/status/PeerReviewed [en_US]
dc.date.updated: 2019-09-27T17:11:34Z
dspace.date.submission: 2019-09-27T17:11:39Z
mit.journal.volume: 3 [en_US]
mit.journal.issue: 4 [en_US]
mit.metadata.status: Complete
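
The abstract above describes an action-conditional, multimodal convolutional model that scores candidate grasp adjustments from raw visuo-tactile input and then executes the most promising adjustment at each step. The following is a minimal sketch of that idea in PyTorch, assuming a simple two-stream encoder and a small set of sampled candidate actions; all module names, layer sizes, and the action dimensionality are hypothetical illustrations, not the authors' actual architecture or code.

import torch
import torch.nn as nn


class GraspOutcomePredictor(nn.Module):
    """Predicts the success probability of a candidate grasp adjustment
    from an RGB image, a tactile image, and the proposed action vector."""

    def __init__(self, action_dim: int = 4):
        super().__init__()
        # Separate convolutional encoders for the visual and tactile modalities.
        self.rgb_encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.tactile_encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Fuse both modalities with the candidate action and predict the
        # probability that the adjusted grasp will succeed.
        self.head = nn.Sequential(
            nn.Linear(32 + 32 + action_dim, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),
        )

    def forward(self, rgb, tactile, action):
        feats = torch.cat(
            [self.rgb_encoder(rgb), self.tactile_encoder(tactile), action],
            dim=-1,
        )
        return self.head(feats)


def select_regrasp_action(model, rgb, tactile, candidate_actions):
    """One step of iterative regrasping: score a batch of candidate
    adjustments for the current (1, 3, H, W) observations and return
    the candidate with the highest predicted success probability."""
    with torch.no_grad():
        n = candidate_actions.shape[0]
        scores = model(
            rgb.expand(n, -1, -1, -1),
            tactile.expand(n, -1, -1, -1),
            candidate_actions,
        ).squeeze(-1)
    return candidate_actions[scores.argmax()]

In the setting the abstract describes, such a predictor would be trained on binary grasp-outcome labels from the roughly 6450 recorded trials, and at execution time the selection step would be repeated, re-observing the GelSight and camera images after each adjustment, until the predicted success probability is high enough to lift the object.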