
dc.contributor.author | Pillai, Sudeep
dc.contributor.author | Leonard, John J
dc.date.accessioned | 2019-01-09T19:09:47Z
dc.date.available | 2019-01-09T19:09:47Z
dc.date.issued | 2017-12
dc.date.submitted | 2017-09
dc.identifier.isbn | 978-1-5386-2682-5
dc.identifier.uri | http://hdl.handle.net/1721.1/119893
dc.description.abstract | Many model-based Visual Odometry (VO) algorithms have been proposed in the past decade, often restricted to the type of camera optics, or the underlying motion manifold observed. We envision robots to be able to learn and perform these tasks, in a minimally supervised setting, as they gain more experience. To this end, we propose a fully trainable solution to visual ego-motion estimation for varied camera optics. We propose a visual ego-motion learning architecture that maps observed optical flow vectors to an ego-motion density estimate via a Mixture Density Network (MDN). By modeling the architecture as a Conditional Variational Autoencoder (C-VAE), our model is able to provide introspective reasoning and prediction for ego-motion induced scene-flow. Additionally, our proposed model is especially amenable to bootstrapped ego-motion learning in robots where the supervision in ego-motion estimation for a particular camera sensor can be obtained from standard navigation-based sensor fusion strategies (GPS/INS and wheel-odometry fusion). Through experiments, we show the utility of our proposed approach in enabling the concept of self-supervised learning for visual ego-motion estimation in autonomous robots. | en_US
dc.description.sponsorship | United States. Office of Naval Research (Grant N00014-11-1-0688) | en_US
dc.description.sponsorship | United States. Office of Naval Research (Grant N00014-13-1-0588) | en_US
dc.description.sponsorship | National Science Foundation (U.S.) (Grant IIS-1318392) | en_US
dc.publisher | Institute of Electrical and Electronics Engineers (IEEE) | en_US
dc.relation.isversionof | http://dx.doi.org/10.1109/IROS.2017.8206441 | en_US
dc.rights | Creative Commons Attribution-Noncommercial-Share Alike | en_US
dc.rights.uri | http://creativecommons.org/licenses/by-nc-sa/4.0/ | en_US
dc.source | arXiv | en_US
dc.title | Towards visual ego-motion learning in robots | en_US
dc.type | Article | en_US
dc.identifier.citation | Pillai, Sudeep, and John J. Leonard. “Towards Visual Ego-Motion Learning in Robots.” 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 24-28 September, 2017, Vancouver, BC, Canada, IEEE, 2017. © 2017 IEEE | en_US
dc.contributor.department | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory | en_US
dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | en_US
dc.contributor.department | Massachusetts Institute of Technology. Department of Mechanical Engineering | en_US
dc.contributor.mitauthor | Pillai, Sudeep
dc.contributor.mitauthor | Leonard, John J
dc.relation.journal | 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) | en_US
dc.eprint.version | Original manuscript | en_US
dc.type.uri | http://purl.org/eprint/type/ConferencePaper | en_US
eprint.status | http://purl.org/eprint/status/NonPeerReviewed | en_US
dc.date.updated | 2018-12-12T15:08:20Z
dspace.orderedauthors | Pillai, Sudeep; Leonard, John J. | en_US
dspace.embargo.terms | N | en_US
dc.identifier.orcid | https://orcid.org/0000-0001-7198-1772
dc.identifier.orcid | https://orcid.org/0000-0002-8863-6550
mit.license | OPEN_ACCESS_POLICY | en_US
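
The abstract above describes an architecture that maps observed optical-flow vectors to an ego-motion density estimate via a Mixture Density Network (MDN). The code below is a minimal illustrative sketch of that idea in Python (PyTorch), not the authors' implementation: the dimensions FLOW_DIM and EGO_DIM, the number of mixture components N_MIX, the hidden-layer sizes, and the names EgoMotionMDN and mdn_nll are assumptions made for illustration, and the C-VAE component and bootstrapped supervision discussed in the abstract are omitted.

# Minimal sketch (assumed names and sizes, not the authors' code): an MDN that
# maps optical-flow vectors to a Gaussian-mixture density over ego-motion.
import torch
import torch.nn as nn
import torch.nn.functional as F

FLOW_DIM = 4   # assumed input per flow vector: (x, y, dx, dy)
EGO_DIM = 6    # assumed ego-motion parameterization: translation + rotation
N_MIX = 5      # assumed number of mixture components

class EgoMotionMDN(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(FLOW_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
        )
        # Mixture weights, means, and log standard deviations per component.
        self.pi = nn.Linear(64, N_MIX)
        self.mu = nn.Linear(64, N_MIX * EGO_DIM)
        self.log_sigma = nn.Linear(64, N_MIX * EGO_DIM)

    def forward(self, flow):
        h = self.backbone(flow)
        log_pi = F.log_softmax(self.pi(h), dim=-1)
        mu = self.mu(h).view(-1, N_MIX, EGO_DIM)
        sigma = torch.exp(self.log_sigma(h)).view(-1, N_MIX, EGO_DIM)
        return log_pi, mu, sigma

def mdn_nll(log_pi, mu, sigma, target):
    """Negative log-likelihood of the target ego-motion under the mixture."""
    dist = torch.distributions.Normal(mu, sigma)
    # Sum log-probs over ego-motion dimensions, then log-sum-exp over components.
    log_prob = dist.log_prob(target.unsqueeze(1)).sum(dim=-1)  # (batch, N_MIX)
    return -torch.logsumexp(log_pi + log_prob, dim=-1).mean()

Training such a sketch would minimize mdn_nll over pairs of flow vectors and ego-motion targets, with the supervision obtained, as the abstract notes, from standard navigation-based sensor fusion (GPS/INS or wheel-odometry fusion).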

