dc.contributor.author | Ma, Fangchang | |
dc.contributor.author | Venturelli Cavalheiro, Guilherme | |
dc.contributor.author | Karaman, Sertac | |
dc.date.accessioned | 2020-08-12T17:05:50Z | |
dc.date.available | 2020-08-12T17:05:50Z | |
dc.date.issued | 2019-05 | |
dc.identifier.uri | https://hdl.handle.net/1721.1/126545 | |
dc.description.abstract | © 2019 IEEE. Depth completion, the technique of estimating a dense depth image from sparse depth measurements, has a variety of applications in robotics and autonomous driving. However, depth completion faces three main challenges: the irregularly spaced pattern of the sparse depth input, the difficulty of handling multiple sensor modalities (when color images are available), and the lack of dense, pixel-level ground truth depth labels for training. In this work, we address all of these challenges. Specifically, we develop a deep regression model to learn a direct mapping from sparse depth (and color image) input to dense depth prediction. We also propose a self-supervised training framework that requires only sequences of color and sparse depth images, without the need for dense depth labels. Our experiments demonstrate that the self-supervised framework outperforms a number of existing solutions trained with semi-dense annotations. Furthermore, when trained with semi-dense annotations, our network attains state-of-the-art accuracy and is the winning approach on the KITTI depth completion benchmark at the time of submission. | en_US
dc.description.sponsorship | United States. Office of Naval Research (Grant N00014-17-1-2670) | en_US |
dc.language.iso | en | |
dc.publisher | IEEE | en_US |
dc.relation.isversionof | 10.1109/ICRA.2019.8793637 | en_US |
dc.rights | Creative Commons Attribution-Noncommercial-Share Alike | en_US |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-sa/4.0/ | en_US |
dc.source | arXiv | en_US |
dc.title | Self-supervised sparse-to-dense: Self-supervised depth completion from LiDAR and monocular camera | en_US |
dc.type | Article | en_US |
dc.identifier.citation | Ma, Fangchang, Guilherme Venturelli Cavalheiro, and Sertac Karaman. “Self-supervised sparse-to-dense: Self-supervised depth completion from LiDAR and monocular camera.” Paper presented at the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20-24 May 2019. IEEE © 2019 The Author(s) | en_US
dc.contributor.department | Massachusetts Institute of Technology. Department of Aeronautics and Astronautics | en_US |
dc.relation.journal | 2019 International Conference on Robotics and Automation (ICRA) | en_US |
dc.eprint.version | Original manuscript | en_US |
dc.type.uri | http://purl.org/eprint/type/ConferencePaper | en_US |
eprint.status | http://purl.org/eprint/status/NonPeerReviewed | en_US |
dc.date.updated | 2019-10-29T16:03:10Z | |
dspace.date.submission | 2019-10-29T16:03:22Z | |
mit.journal.issue | 2019 | en_US |
mit.metadata.status | Complete | |