Motion-aware monocular depth completion for aerial vehicles with deep neural networks
Author(s)
Lin, Jing (Jing C.), M. Eng., Massachusetts Institute of Technology.
Other Contributors
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Advisor
Sertac Karaman.
Abstract
Depth data is critical for autonomous robots such as cars and aerial vehicles to understand their environments for obstacle avoidance and path planning. Classically, depth data for these robotics applications is obtained with stereo cameras, structured-light cameras, or light detection and ranging (LIDAR) sensors. That is feasible for autonomous cars, which can be equipped with additional sensors, but poses significant challenges for aerial vehicles: more sensors mean more weight, which restricts mobility and flight time. Furthermore, on drones at the 10-50 centimeter scale, mounting such depth sensors is impossible. To address this, we explore the depth completion problem using only a monocular camera, which can be readily mounted on a drone. Our work builds on a prior state-of-the-art encoder-decoder network architecture for depth completion. Our model performs accurate depth completion on the Blackbird dataset, a drone dataset, and adding scaled depth data from visual-inertial odometry (VIO) further improves performance.
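To make the setup concrete, below is a minimal PyTorch sketch of an encoder-decoder that consumes an RGB frame together with a sparse depth map (such as one rasterized from VIO feature tracks). The class name DepthCompletionNet, the layer sizes, and the input resolution are illustrative assumptions, not the architecture used in the thesis.

import torch
import torch.nn as nn

class DepthCompletionNet(nn.Module):
    """Illustrative encoder-decoder for RGB + sparse-depth depth completion.

    Layer sizes are hypothetical; the thesis builds on a prior
    state-of-the-art architecture whose exact layers are not shown here.
    """
    def __init__(self):
        super().__init__()
        # Encoder: downsample the 4-channel input (RGB + sparse depth).
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        # Decoder: upsample back to input resolution, predict one depth channel.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),  # depths are non-negative
        )

    def forward(self, rgb, sparse_depth):
        # Concatenate RGB with the sparse depth map (zeros where no sample exists).
        x = torch.cat([rgb, sparse_depth], dim=1)
        return self.decoder(self.encoder(x))

# Usage: a 224x224 frame with a mostly-empty sparse depth channel from VIO.
rgb = torch.rand(1, 3, 224, 224)
sparse = torch.zeros(1, 1, 224, 224)  # nonzero only at VIO-tracked feature points
net = DepthCompletionNet()
dense_depth = net(rgb, sparse)  # shape: (1, 1, 224, 224)

The key design point this sketch captures is that sparse metric depth enters the network as an extra input channel, so the model can anchor its per-pixel predictions to the absolute scale that VIO provides.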
Description
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May 2020. Cataloged from the official PDF of the thesis. Includes bibliographical references (pages 61-64).
Date issued
2020
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology
Keywords
Electrical Engineering and Computer Science.