Show simple item record

dc.contributor.author: Sudhakar, Soumya
dc.contributor.author: Sze, Vivienne
dc.contributor.author: Karaman, Sertac
dc.date.accessioned: 2022-04-06T15:22:45Z
dc.date.available: 2022-03-29T12:01:20Z
dc.date.available: 2022-04-06T15:22:45Z
dc.date.issued: 2022-05-23
dc.identifier.uri: https://hdl.handle.net/1721.1/141382.2
dc.description.abstract: Deployment of deep neural networks (DNNs) for monocular depth estimation in safety-critical scenarios on resource-constrained platforms requires well-calibrated and efficient uncertainty estimates. However, many popular uncertainty estimation techniques, including state-of-the-art ensembles and popular sampling-based methods, require multiple inferences per input, making them difficult to deploy in latency-constrained or energy-constrained scenarios. We propose a new algorithm, called Uncertainty from Motion (UfM), that requires only one inference per input. UfM exploits the temporal redundancy in video inputs by incrementally merging the per-pixel depth prediction and per-pixel aleatoric uncertainty prediction of points that are seen in multiple views in the video sequence. When UfM is applied to ensembles, we show that UfM can retain the uncertainty quality of ensembles at a fraction of the energy by running only a single ensemble member at each frame and fusing the uncertainty over the sequence of frames. In a set of representative experiments using FCDenseNet and eight in-distribution and out-of-distribution video sequences, UfM offers comparable uncertainty quality to an ensemble of size 10 while consuming only 11.3% of the ensemble's energy and running 6.4× faster on a single Nvidia RTX 2080 Ti GPU, enabling near-ensemble uncertainty quality for resource-constrained, real-time scenarios. (en_US)
dc.description.sponsorship: National Science Foundation (Grants 1837212, 1937501) (en_US)
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike (en_US)
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/ (en_US)
dc.source: Prof. Sze (en_US)
dc.title: Uncertainty from Motion for DNN Monocular Depth Estimation (en_US)
dc.type: Article (en_US)
dc.identifier.citation: Sze, Vivienne, Karaman, Sertac and Sudhakar, Soumya. 2022. "Uncertainty from Motion for DNN Monocular Depth Estimation." IEEE International Conference on Robotics and Automation (ICRA). (en_US)
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science (en_US)
dc.contributor.department: Massachusetts Institute of Technology. Department of Aeronautics and Astronautics (en_US)
dc.relation.journal: IEEE International Conference on Robotics and Automation (ICRA) (en_US)
dc.eprint.version: Author's final manuscript (en_US)
dc.type.uri: http://purl.org/eprint/type/ConferencePaper (en_US)
eprint.status: http://purl.org/eprint/status/NonPeerReviewed (en_US)
dspace.date.submission: 2022-03-26T16:43:59Z
mit.license: OPEN_ACCESS_POLICY
mit.metadata.status: Publication Information Needed (en_US)
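The abstract describes UfM as incrementally merging per-pixel depth and aleatoric uncertainty predictions for points seen across multiple frames. The record does not state the exact merge rule, but a common way to fuse two Gaussian per-pixel estimates is inverse-variance (product-of-Gaussians) weighting; the sketch below illustrates that idea only, and the function name `fuse_gaussian` and the choice of fusion rule are assumptions, not the paper's confirmed method.

```python
import numpy as np

def fuse_gaussian(mu_a, var_a, mu_b, var_b):
    """Fuse two per-pixel Gaussian depth estimates (mean, variance).

    Inverse-variance weighting: the fused variance is the harmonic
    combination of the inputs, and the fused mean weights each
    prediction by its precision. This is a standard fusion rule,
    assumed here for illustration -- UfM's actual merge may differ.
    """
    var = 1.0 / (1.0 / var_a + 1.0 / var_b)
    mu = var * (mu_a / var_a + mu_b / var_b)
    return mu, var

# Toy example: one pixel's depth seen in two frames.
# Agreeing predictions shrink the variance; the mean stays put.
mu, var = fuse_gaussian(np.float64(2.0), np.float64(0.5),
                        np.float64(2.0), np.float64(0.5))
```

Running a single ensemble member per frame and accumulating such a fused estimate over the sequence is consistent with the abstract's claim of one inference per input while approaching ensemble-level uncertainty quality.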
