Show simple item record

dc.contributor.author  Yen-Chen, Lin
dc.contributor.author  Isola, Phillip John
dc.date.accessioned  2021-01-12T18:26:08Z
dc.date.available  2021-01-12T18:26:08Z
dc.date.issued  2020-05
dc.identifier.isbn  9781728173962
dc.identifier.uri  https://hdl.handle.net/1721.1/129384
dc.description.abstract  Does having visual priors (e.g. the ability to detect objects) facilitate learning to perform vision-based manipulation (e.g. picking up objects)? We study this problem under the framework of transfer learning, where the model is first trained on a passive vision task (i.e., the data distribution does not depend on the agent's decisions), then adapted to perform an active manipulation task (i.e., the data distribution does depend on the agent's decisions). We find that pre-training on vision tasks significantly improves generalization and sample efficiency for learning to manipulate objects. However, realizing these gains requires careful selection of which parts of the model to transfer. Our key insight is that outputs of standard vision models highly correlate with affordance maps commonly used in manipulation. Therefore, we explore directly transferring model parameters from vision networks to affordance prediction networks, and show that this can result in successful zero-shot adaptation, where a robot can pick up certain objects with zero robotic experience. With just a small amount of robotic experience, we can further fine-tune the affordance model to achieve better results. With just 10 minutes of suction experience or 1 hour of grasping experience, our method achieves ∼80% success rate at picking up novel objects.  en_US
dc.language.iso  en
dc.publisher  IEEE  en_US
dc.relation.isversionof  10.1109/ICRA40945.2020.9197331  en_US
dc.rights  Creative Commons Attribution-Noncommercial-Share Alike  en_US
dc.rights.uri  http://creativecommons.org/licenses/by-nc-sa/4.0/  en_US
dc.source  MIT web domain  en_US
dc.title  Learning to See before Learning to Act: Visual Pre-training for Manipulation  en_US
dc.type  Article  en_US
dc.identifier.citation  Lin, Yen-Chen et al. "Learning to See before Learning to Act: Visual Pre-training for Manipulation." Paper presented at the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May-31 Aug. 2020, IEEE © 2020 The Author(s)  en_US
dc.contributor.department  Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory  en_US
dc.relation.journal  Proceedings - IEEE International Conference on Robotics and Automation  en_US
dc.eprint.version  Author's final manuscript  en_US
dc.type.uri  http://purl.org/eprint/type/ConferencePaper  en_US
eprint.status  http://purl.org/eprint/status/NonPeerReviewed  en_US
dc.date.updated  2020-12-18T18:39:35Z
dspace.orderedauthors  Yen-Chen, L; Zeng, A; Song, S; Isola, P; Lin, TY  en_US
dspace.date.submission  2020-12-18T18:39:40Z
mit.license  OPEN_ACCESS_POLICY
mit.metadata.status  Complete
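The abstract above describes directly transferring model parameters from a pre-trained vision network into an affordance prediction network, where only the shared backbone carries over and the task head is replaced. As a rough illustration only (the layer names, shapes, and dict-based "model" here are invented for the sketch and are not the paper's architecture), the match-by-name-and-shape transfer idea can be written in plain Python:

```python
import random

random.seed(0)

def make_param(shape, fill=None):
    """A toy parameter tensor: a shape plus a flat list of values."""
    n = 1
    for d in shape:
        n *= d
    data = [0.0] * n if fill == 0 else [random.random() for _ in range(n)]
    return {"shape": shape, "data": data}

# Toy "pre-trained vision model": conv backbone + segmentation head.
vision_model = {
    "backbone.conv1": make_param((16, 3, 3, 3)),
    "backbone.conv2": make_param((32, 16, 3, 3)),
    "head.seg": make_param((21, 32, 1, 1)),  # e.g. 21 segmentation classes
}

# Affordance network: same backbone shapes, but a 1-channel affordance-map head.
affordance_model = {
    "backbone.conv1": make_param((16, 3, 3, 3), fill=0),
    "backbone.conv2": make_param((32, 16, 3, 3), fill=0),
    "head.affordance": make_param((1, 32, 1, 1), fill=0),  # per-pixel pick success
}

def transfer(src, dst):
    """Copy every parameter whose name and shape match; skip the rest."""
    copied = []
    for name, p in src.items():
        if name in dst and dst[name]["shape"] == p["shape"]:
            dst[name]["data"] = list(p["data"])
            copied.append(name)
    return copied

copied = transfer(vision_model, affordance_model)
print(copied)  # only the shared backbone transfers; heads stay task-specific
```

In a real framework this corresponds to partially loading a pre-trained checkpoint into a new network and leaving the freshly initialized affordance head to be trained (or fine-tuned) on the small amount of robot experience the abstract mentions.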

