Show simple item record

dc.contributor.author: Carlone, Luca
dc.contributor.author: Karaman, Sertac
dc.date.accessioned: 2020-08-14T19:18:50Z
dc.date.available: 2020-08-14T19:18:50Z
dc.date.issued: 2017-07
dc.identifier.uri: https://hdl.handle.net/1721.1/126592
dc.description.abstract: We study a visual-inertial navigation (VIN) problem in which a robot needs to estimate its state using an on-board camera and an inertial sensor, without any prior knowledge of the external environment. We consider the case in which the robot can allocate limited resources to VIN, due to tight computational constraints. Therefore, we answer the following question: under limited resources, what are the most relevant visual cues to maximize the performance of VIN? Our approach has four key ingredients. First, it is task-driven, in that the selection of the visual cues is guided by a metric quantifying the VIN performance. Second, it exploits the notion of anticipation, since it uses a simplified model for forward-simulation of robot dynamics, predicting the utility of a set of visual cues over a future time horizon. Third, it is efficient and easy to implement, since it leads to a greedy algorithm for the selection of the most relevant visual cues. Fourth, it provides formal performance guarantees: we leverage submodularity to prove that the greedy selection cannot be far from the optimal (combinatorial) selection. Simulations and real experiments on agile drones show that our approach ensures state-of-the-art VIN performance while maintaining a lean processing time. In the easy scenarios, our approach outperforms appearance-based feature selection in terms of localization errors. In the most challenging scenarios, it enables accurate VIN while appearance-based feature selection fails to track the robot's motion during aggressive maneuvers.
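The greedy selection with a submodularity guarantee described in the abstract can be sketched as follows. This is an illustrative toy, not the paper's implementation: the paper's utility is a task-driven VIN performance metric predicted over a future horizon, while here a simple coverage function (also monotone submodular, so the same (1 − 1/e) greedy guarantee applies) stands in, and the feature names and covered "state directions" are made up for the example.

```python
def greedy_select(features, utility, budget):
    """Pick up to `budget` features, each round adding the one with the
    largest marginal gain in `utility` (a monotone submodular set function)."""
    selected = []
    for _ in range(budget):
        best, best_gain = None, 0.0
        current = utility(selected)
        for f in features:
            if f in selected:
                continue
            gain = utility(selected + [f]) - current
            if gain > best_gain:
                best, best_gain = f, gain
        if best is None:  # no remaining feature improves the utility
            break
        selected.append(best)
    return selected

# Toy stand-in utility: each visual feature "observes" a set of state
# directions; utility = number of distinct directions covered.
coverage = {
    "f1": {"x", "y"},
    "f2": {"y", "z"},
    "f3": {"z", "roll"},
}

def utility(subset):
    covered = set()
    for f in subset:
        covered |= coverage[f]
    return float(len(covered))

print(greedy_select(list(coverage), utility, budget=2))  # → ['f1', 'f3']
```

Note how greedy avoids the redundant pick: after `f1` covers {x, y}, `f3` has the larger marginal gain even though `f2` looked equally good in isolation.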
dc.language.iso: en
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.relation.isversionof: 10.1109/TRO.2018.2872402
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/
dc.source: arXiv
dc.title: Attention and Anticipation in Fast Visual-Inertial Navigation
dc.type: Article
dc.identifier.citation: Carlone, Luca and Sertac Karaman. "Attention and Anticipation in Fast Visual-Inertial Navigation." Paper presented at the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May-3 June 2017, IEEE © 2017 The Author(s)
dc.contributor.department: Massachusetts Institute of Technology. Department of Aeronautics and Astronautics
dc.contributor.department: Massachusetts Institute of Technology. Laboratory for Information and Decision Systems
dc.relation.journal: 2017 IEEE International Conference on Robotics and Automation (ICRA)
dc.eprint.version: Original manuscript
dc.type.uri: http://purl.org/eprint/type/JournalArticle
eprint.status: http://purl.org/eprint/status/NonPeerReviewed
dc.date.updated: 2019-10-29T15:16:49Z
dspace.date.submission: 2019-10-29T15:16:56Z
mit.journal.volume: 2017
mit.metadata.status: Complete

