DSpace@MIT
MIT Libraries

Performance Enhancements to Visual-Inertial SLAM for Robots and Autonomous Vehicles

Author(s)
Abate, Marcus
Thesis PDF (16.78 MB)
Advisor
Carlone, Luca
Terms of use
In Copyright - Educational Use Permitted Copyright retained by author(s) https://rightsstatements.org/page/InC-EDU/1.0/
Metadata
Show full item record
Abstract
Spatial perception is a key enabler for the effective and safe operation of robots and autonomous vehicles in unstructured environments. Two key components of a complete spatial perception system are identifying where the robot is in space and constructing a representation of the world around it. In this thesis, we study Visual-Inertial Simultaneous Localization and Mapping (VI-SLAM) and present several findings on its application to a variety of robotic platforms, obtaining globally consistent localization for a robot as well as a dense map of its surroundings. In particular, we extend Kimera, an open-source VI-SLAM pipeline, to be more effective in traditional use cases (e.g., stereo-inertial VI-SLAM) and more broadly applicable to different platforms and sensor modalities. Our first contribution is a system built around Kimera for autonomous valet parking of self-driving cars, which we test on real-world self-driving car datasets. This system uses a modified version of Kimera that supports multi-camera VI-SLAM and performs dense free-space mapping using multiple cameras with non-overlapping fields of view. Our second contribution describes recent updates to Kimera and showcases their beneficial effect on localization and mapping performance, while also comparing against the state of the art on extensive datasets collected on a variety of platforms. Finally, we present a novel method for detecting and tracking humans in the scene in order to build 3D Dynamic Scene Graphs for high-level perception tasks, and we evaluate this method in a photorealistic simulation environment. We conclude by commenting on the advantages of Kimera and identifying areas for future work.
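The abstract names the two components of spatial perception — estimating where the robot is (localization) and correcting accumulated drift when a place is revisited (the role of loop closures in SLAM). The toy sketch below, which is purely illustrative and not Kimera's actual API (all function names here are hypothetical), shows the idea in one dimension: odometry integration drifts, and a loop-closure constraint redistributes the residual along the trajectory, a crude stand-in for the factor-graph optimization a real VI-SLAM back end performs.

```python
def integrate_odometry(start, increments):
    """Dead-reckon a 1-D pose chain from relative odometry increments.
    Each increment carries a little error, so the final pose drifts."""
    poses = [start]
    for d in increments:
        poses.append(poses[-1] + d)
    return poses

def close_loop(poses, loop_target):
    """Spread the loop-closure residual evenly along the chain.
    Real back ends solve a nonlinear least-squares problem instead;
    this linear correction just illustrates the effect."""
    n = len(poses) - 1
    residual = loop_target - poses[-1]
    return [p + residual * (i / n) for i, p in enumerate(poses)]

# Four unit steps with drift: dead reckoning ends at 4.0,
# but a loop-closure measurement says the true endpoint is 3.6.
poses = integrate_odometry(0.0, [1.0, 1.0, 1.0, 1.0])
corrected = close_loop(poses, loop_target=3.6)
```

After the correction the start pose is untouched, the endpoint matches the loop-closure measurement, and intermediate poses are adjusted proportionally — the same qualitative behavior that makes a SLAM trajectory globally consistent rather than locally drifting.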
Date issued
2023-06
URI
https://hdl.handle.net/1721.1/151684
Department
Massachusetts Institute of Technology. Department of Aeronautics and Astronautics
Publisher
Massachusetts Institute of Technology

Collections
  • Graduate Theses

Content created by the MIT Libraries, CC BY-NC unless otherwise noted.