Show simple item record

dc.contributor.advisor: Sertac Karaman (en_US)
dc.contributor.author: Lin, Jing (Jing C.), M. Eng., Massachusetts Institute of Technology (en_US)
dc.contributor.other: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science (en_US)
dc.date.accessioned: 2020-09-15T21:59:47Z
dc.date.available: 2020-09-15T21:59:47Z
dc.date.copyright: 2020 (en_US)
dc.date.issued: 2020 (en_US)
dc.identifier.uri: https://hdl.handle.net/1721.1/127478
dc.description: Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May 2020 (en_US)
dc.description: Cataloged from the official PDF of the thesis. (en_US)
dc.description: Includes bibliographical references (pages 61-64). (en_US)
dc.description.abstract: Depth data is critical for autonomous robots such as cars and aerial vehicles to understand their environments for obstacle avoidance and path planning. Classically, depth data for these robotics applications is obtained with stereo cameras, structured-light cameras, or light detection and ranging (LIDAR) sensors. That is possible for cars and other ground vehicles, which can be equipped with additional sensors, but it poses significant challenges for aerial vehicles: more sensors mean more weight, which restricts mobility and flight time. Furthermore, it is impossible to mount such depth sensors on drones at the 10-50 centimeter scale. To that end, we explore the depth completion problem using only a monocular camera, which can be readily mounted on a drone. Our work builds on a prior state-of-the-art encoder-decoder network architecture for depth completion. Our model performs accurate depth completion on the Blackbird dataset, a drone dataset, and adding scaled depth data from visual-inertial odometry (VIO) further improves performance (see the illustrative sketch after this record). (en_US)
dc.description.statementofresponsibility: by Jing Lin. (en_US)
dc.format.extent: 64 pages (en_US)
dc.language.iso: eng (en_US)
dc.publisher: Massachusetts Institute of Technology (en_US)
dc.rights: MIT theses may be protected by copyright. Please reuse MIT thesis content according to the MIT Libraries Permissions Policy, which is available through the URL provided. (en_US)
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582 (en_US)
dc.subject: Electrical Engineering and Computer Science. (en_US)
dc.title: Motion-aware monocular depth completion for aerial vehicles with deep neural networks (en_US)
dc.type: Thesis (en_US)
dc.description.degree: M. Eng. (en_US)
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science (en_US)
dc.identifier.oclc: 1193020224 (en_US)
dc.description.collection: M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science (en_US)
dspace.imported: 2020-09-15T21:59:47Z (en_US)
mit.thesis.degree: Master (en_US)
mit.thesis.department: EECS (en_US)
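
The abstract above describes an encoder-decoder network that fuses a monocular RGB image with sparse, metrically scaled depth from visual-inertial odometry (VIO) to predict a dense depth map. The following is a minimal sketch of that general sparse-to-dense input pattern, not the thesis's actual architecture: the use of PyTorch, the SparseToDenseDepthNet name, the layer widths, and the input resolution are all illustrative assumptions.

# Minimal sketch, assuming PyTorch: an encoder-decoder that takes RGB plus a
# sparse, scaled depth channel (e.g. from VIO landmarks) and regresses dense depth.
# Not the thesis's model; layer sizes and resolution are illustrative.
import torch
import torch.nn as nn

class SparseToDenseDepthNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: downsample the 4-channel input (RGB + sparse depth) by 8x.
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Decoder: upsample back to the input resolution and predict one depth channel.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, rgb, sparse_depth):
        # rgb: (N, 3, H, W); sparse_depth: (N, 1, H, W), zero where no VIO depth exists.
        x = torch.cat([rgb, sparse_depth], dim=1)
        return self.decoder(self.encoder(x))

if __name__ == "__main__":
    net = SparseToDenseDepthNet()
    rgb = torch.rand(1, 3, 192, 256)       # monocular camera frame
    sparse = torch.zeros(1, 1, 192, 256)   # mostly empty metric depth from VIO
    sparse[:, :, ::16, ::16] = 2.0         # a few scaled depth samples
    print(net(rgb, sparse).shape)          # torch.Size([1, 1, 192, 256])

Concatenating the sparse depth as an extra input channel is one common way to expose metric scale from VIO to such a network; the thesis may use a different fusion strategy.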

