Show simple item record

dc.contributor.author: Zhang, Zhengdong
dc.contributor.author: Suleiman, Amr AbdulZahir
dc.contributor.author: Carlone, Luca
dc.contributor.author: Sze, Vivienne
dc.contributor.author: Karaman, Sertac
dc.date.accessioned: 2017-06-01T21:09:22Z
dc.date.available: 2017-06-01T21:09:22Z
dc.date.issued: 2017-07
dc.identifier.uri: http://hdl.handle.net/1721.1/109522
dc.description.abstract: Autonomous navigation of miniaturized robots (e.g., nano/pico aerial vehicles) is currently a grand challenge for robotics research, due to the need to process a large amount of sensor data (e.g., camera frames) with limited on-board computational resources. In this paper we focus on the design of a visual-inertial odometry (VIO) system in which the robot estimates its ego-motion (and a landmark-based map) from on-board camera and IMU data. We argue that scaling down VIO to miniaturized platforms (without sacrificing performance) requires a paradigm shift in the design of perception algorithms, and we advocate a co-design approach in which algorithmic and hardware design choices are tightly coupled. Our contribution is four-fold. First, we discuss the VIO co-design problem, in which one tries to attain a desired resource-performance trade-off by making suitable design choices (in terms of hardware, algorithms, implementation, and parameters). Second, we characterize the design space by discussing how a relevant set of design choices affects the resource-performance trade-off in VIO. Third, we provide a systematic experiment-driven way to explore the design space, towards a design that meets the desired trade-off. Fourth, we demonstrate the result of the co-design process by providing a VIO implementation on specialized hardware and showing that such an implementation has the same accuracy and speed as a desktop implementation, while requiring a fraction of the power. [en_US]
dc.description.sponsorship: United States. Air Force Office of Scientific Research. Young Investigator Program (FA9550-16-1-0228) [en_US]
dc.description.sponsorship: National Science Foundation (U.S.) (NSF CAREER 1350685) [en_US]
dc.language.iso: en_US
dc.relation.isversionof: http://rss2017.personalrobotics.ri.cmu.edu/program/papers/ [en_US]
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike [en_US]
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/ [en_US]
dc.source: Sze [en_US]
dc.title: Visual-Inertial Odometry on Chip: An Algorithm-and-Hardware Co-design Approach [en_US]
dc.type: Article [en_US]
dc.identifier.citation: Zhang, Zhengdong, Amr Suleiman, Luca Carlone, Vivienne Sze, and Sertac Karaman. "Visual-Inertial Odometry on Chip: An Algorithm-and-Hardware Co-design Approach." Robotics: Science and Systems XIII, Cambridge, Massachusetts, 2017. [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Department of Aeronautics and Astronautics [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Microsystems Technology Laboratories [en_US]
dc.contributor.approver: Sze, Vivienne [en_US]
dc.contributor.mitauthor: Zhang, Zhengdong
dc.contributor.mitauthor: Suleiman, Amr AbdulZahir
dc.contributor.mitauthor: Carlone, Luca
dc.contributor.mitauthor: Sze, Vivienne
dc.contributor.mitauthor: Karaman, Sertac
dc.relation.journal: Robotics: Science and Systems [en_US]
dc.eprint.version: Author's final manuscript [en_US]
dc.type.uri: http://purl.org/eprint/type/ConferencePaper [en_US]
eprint.status: http://purl.org/eprint/status/NonPeerReviewed [en_US]
dspace.embargo.terms: N [en_US]
dc.identifier.orcid: https://orcid.org/0000-0002-0619-8199
dc.identifier.orcid: https://orcid.org/0000-0002-0376-4220
dc.identifier.orcid: https://orcid.org/0000-0003-1884-5397
dc.identifier.orcid: https://orcid.org/0000-0003-4841-3990
dc.identifier.orcid: https://orcid.org/0000-0002-2225-7275
mit.license: OPEN_ACCESS_POLICY [en_US]
mit.metadata.status: Complete

