Show simple item record

dc.contributor.author	Ma, Fangchang
dc.contributor.author	Venturelli Cavalheiro, Guilherme
dc.contributor.author	Karaman, Sertac
dc.date.accessioned	2020-08-12T17:05:50Z
dc.date.available	2020-08-12T17:05:50Z
dc.date.issued	2019-05
dc.identifier.uri	https://hdl.handle.net/1721.1/126545
dc.description.abstract	© 2019 IEEE. Depth completion, the technique of estimating a dense depth image from sparse depth measurements, has a variety of applications in robotics and autonomous driving. However, depth completion faces three main challenges: the irregularly spaced pattern of the sparse depth input, the difficulty of handling multiple sensor modalities (when color images are available), and the lack of dense, pixel-level ground-truth depth labels for training. In this work, we address all of these challenges. Specifically, we develop a deep regression model to learn a direct mapping from sparse depth (and color image) input to dense depth prediction. We also propose a self-supervised training framework that requires only sequences of color and sparse depth images, without the need for dense depth labels. Our experiments demonstrate that the self-supervised framework outperforms a number of existing solutions trained with semi-dense annotations. Furthermore, when trained with semi-dense annotations, our network attains state-of-the-art accuracy and was the winning approach on the KITTI depth completion benchmark at the time of submission.	en_US
dc.description.sponsorship	United States. Office of Naval Research (Grant N00014-17-1-2670)	en_US
dc.language.iso	en
dc.publisher	IEEE	en_US
dc.relation.isversionof	10.1109/ICRA.2019.8793637	en_US
dc.rights	Creative Commons Attribution-NonCommercial-ShareAlike	en_US
dc.rights.uri	http://creativecommons.org/licenses/by-nc-sa/4.0/	en_US
dc.source	arXiv	en_US
dc.title	Self-supervised sparse-to-dense: Self-supervised depth completion from LiDAR and monocular camera	en_US
dc.type	Article	en_US
dc.identifier.citation	Ma, Fangchang, Guilherme Venturelli Cavalheiro, and Sertac Karaman. "Self-supervised sparse-to-dense: Self-supervised depth completion from LiDAR and monocular camera." Paper presented at the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20-24 May 2019. IEEE © 2019 The Author(s)	en_US
dc.contributor.department	Massachusetts Institute of Technology. Department of Aeronautics and Astronautics	en_US
dc.relation.journal	2019 International Conference on Robotics and Automation (ICRA)	en_US
dc.eprint.version	Original manuscript	en_US
dc.type.uri	http://purl.org/eprint/type/ConferencePaper	en_US
eprint.status	http://purl.org/eprint/status/NonPeerReviewed	en_US
dc.date.updated	2019-10-29T16:03:10Z
dspace.date.submission	2019-10-29T16:03:22Z
mit.journal.issue	2019	en_US
mit.metadata.status	Complete
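The abstract above describes depth completion: regressing a dense depth map from irregularly spaced sparse depth measurements (optionally alongside a color image). As a hedged illustration of the input side only — not the authors' code or network — the sketch below simulates how a sparse depth map can be constructed from a dense one by keeping a small random subset of pixels, with zeros marking pixels that carry no LiDAR reading. The function name `make_sparse_depth` and the sampling scheme are illustrative assumptions, not from the paper.

```python
import numpy as np

def make_sparse_depth(dense_depth, num_samples, rng):
    """Keep `num_samples` randomly chosen pixels of `dense_depth`; zero out the rest.

    Zeros denote "no measurement", mimicking the irregular sparsity of
    projected LiDAR points. (Illustrative helper, not the authors' code.)
    """
    h, w = dense_depth.shape
    # Sample distinct pixel indices without replacement.
    flat_idx = rng.choice(h * w, size=num_samples, replace=False)
    sparse = np.zeros_like(dense_depth)
    sparse.flat[flat_idx] = dense_depth.flat[flat_idx]
    return sparse

rng = np.random.default_rng(0)
# Synthetic dense depth in metres, roughly KITTI-like range (1–80 m).
dense = rng.uniform(1.0, 80.0, size=(64, 64))
sparse = make_sparse_depth(dense, num_samples=200, rng=rng)

valid = sparse > 0  # mask of pixels that carry a measurement
print(valid.sum())  # → 200
```

A depth-completion network then takes `sparse` (and, in the RGBd setting, the color image) as input and is trained to regress a map close to `dense`; in the paper's self-supervised setting, the dense label is replaced by photometric and sparse-depth consistency losses over image sequences.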

