Fusing visual odometry and depth completion
Author(s): Venturelli Cavalheiro, Guilherme
Massachusetts Institute of Technology. Department of Aeronautics and Astronautics.
Recent advances in technology indicate that autonomous vehicles, and self-driving cars in particular, may become commonplace in the near future. This thesis contributes to that scenario by studying the problem of depth perception based on sequences of camera images. We start by presenting a sensor fusion framework that achieves state-of-the-art performance when completing depth from sparse LiDAR measurements and a camera. Then, we study how the system performs under a variety of modifications of the sparse input until we ultimately replace LiDAR measurements with triangulations from a typical sparse visual odometry pipeline. We are then able to achieve a small improvement over the single-image baseline and chart guidelines to assist in designing a system with even more substantial gains.
Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2019
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 57-62).