
dc.contributor.author: Moses, Caris
dc.contributor.author: Noseworthy, Michael
dc.contributor.author: Kaelbling, Leslie P
dc.contributor.author: Lozano-Perez, Tomas
dc.contributor.author: Roy, Nicholas
dc.date.accessioned: 2021-03-02T19:17:57Z
dc.date.available: 2021-03-02T19:17:57Z
dc.date.issued: 2020-09
dc.date.submitted: 2020-05
dc.identifier.isbn: 9781728173955
dc.identifier.uri: https://hdl.handle.net/1721.1/130052
dc.description.abstract: Exploration in novel settings can be challenging without prior experience in similar domains. Humans, however, are able to build on prior experience quickly and efficiently. Children exhibit this behavior when playing with toys: given a toy with a yellow and a blue door, a child will explore with no clear objective, but once they have discovered how to open the yellow door, they will most likely open the blue door much faster. Adults exhibit the same behavior when entering new spaces such as kitchens. We develop a method, Contextual Prior Prediction, which provides a means of transferring knowledge between interactions in similar domains through vision. We develop agents that exhibit exploratory behavior with increasing efficiency by learning visual features that are shared across environments, and how those features correlate with actions. We formulate the problem as a Contextual Multi-Armed Bandit in which the contexts are images and the robot has access to a parameterized action space. Given a novel object, the objective is to maximize reward with few interactions. Kinematically constrained mechanisms are a domain that strongly exhibits correlations between visual features and motion, and we evaluate our method on simulated prismatic and revolute joints. [en_US]
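The abstract describes a contextual bandit in which a vision model supplies a prior over a parameterized action space, so that a novel object needs only a few interactions. As a rough illustration only, not the authors' implementation, the following minimal numpy sketch shows that structure; `visual_prior`, `interact`, and all numbers are hypothetical stand-ins (a toy prior derived from pixel statistics and a toy reward peaked at the mechanism's true articulation parameter).

```python
# Minimal sketch: bandit exploration seeded by an image-conditioned prior.
# Everything here is a hypothetical stand-in for the paper's learned models.
import numpy as np

rng = np.random.default_rng(0)

def visual_prior(image):
    """Placeholder for a learned model mapping an image to a prior
    (mean, std) over a 1-D action parameter, e.g. a pull direction."""
    return image.mean(), 0.3  # toy: prior mean derived from pixel stats

def interact(theta, true_theta):
    """Toy reward: high when the sampled parameter matches the
    mechanism's true articulation parameter."""
    return np.exp(-(theta - true_theta) ** 2 / 0.05)

def explore(image, true_theta, budget=10):
    mu, sigma = visual_prior(image)            # prior from vision
    best_theta, best_r = mu, interact(mu, true_theta)
    for _ in range(budget):                    # few-shot refinement
        theta = rng.normal(mu, sigma)          # sample near the prior
        r = interact(theta, true_theta)
        if r > best_r:
            best_theta, best_r = theta, r
            mu, sigma = theta, sigma * 0.8     # shrink search around best
    return best_theta, best_r

image = rng.uniform(0.2, 0.8, size=(64, 64))   # stand-in "observation"
print(explore(image, true_theta=image.mean() + 0.1))
```

The point of the sketch is only that a good visual prior lets the bandit concentrate its small interaction budget near promising action parameters, rather than exploring from scratch.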
dc.language.iso: en
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE) [en_US]
dc.relation.isversionof: http://dx.doi.org/10.1109/icra40945.2020.9196541 [en_US]
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike [en_US]
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/ [en_US]
dc.source: arXiv [en_US]
dc.title: Visual Prediction of Priors for Articulated Object Interaction [en_US]
dc.type: Article [en_US]
dc.identifier.citation: Moses, Caris et al. "Visual Prediction of Priors for Articulated Object Interaction." 2020 IEEE International Conference on Robotics and Automation, May-August 2020, virtual, Institute of Electrical and Electronics Engineers, September 2020. © 2020 IEEE [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science [en_US]
dc.relation.journal: 2020 IEEE International Conference on Robotics and Automation [en_US]
dc.eprint.version: Author's final manuscript [en_US]
dc.type.uri: http://purl.org/eprint/type/ConferencePaper [en_US]
eprint.status: http://purl.org/eprint/status/NonPeerReviewed [en_US]
dc.date.updated: 2020-12-22T19:04:36Z
dspace.orderedauthors: Moses, C; Noseworthy, M; Kaelbling, LP; Lozano-Perez, T; Roy, N [en_US]
dspace.date.submission: 2020-12-22T19:04:41Z
mit.license: OPEN_ACCESS_POLICY
mit.metadata.status: Complete

