Combining recognition and geometry for data-driven 3D reconstruction
Author(s)
Owens, Andrew (Andrew Hale)
Other Contributors
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Advisor
William T. Freeman and Antonio Torralba.
Abstract
Today's multi-view 3D reconstruction techniques rely almost exclusively on depth cues that come from multiple view geometry. While these cues can be used to produce highly accurate reconstructions, the resulting point clouds are often noisy and incomplete. Due to these issues, it may also be difficult to answer higher-level questions about the geometry, such as whether two surfaces meet at a right angle or whether a surface is planar. Furthermore, state-of-the-art reconstruction techniques generally cannot learn from training data, so having the ground-truth geometry for one scene does not aid in reconstructing similar scenes. In this work, we make two contributions toward data-driven 3D reconstruction. First, we present a dataset containing hundreds of RGBD videos that can be used as a source of training data for reconstruction algorithms. Second, we introduce the concept of the Shape Anchor, a region for which the combination of recognition and multiple view geometry allows us to accurately predict the latent, dense point cloud. We propose a technique to detect these regions and to predict their shapes, and we demonstrate it on our dataset.
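To make the shape-anchor idea concrete, the sketch below illustrates one plausible data-driven step: given a sparse, noisy depth patch produced by multi-view geometry, retrieve the most similar training patch and transfer its dense ground-truth depth as the predicted latent shape. This is an informal illustration only, not the method described in the thesis; all names (patch_descriptor, predict_dense_patch, train_patches) are hypothetical.

# Illustrative sketch only, not the thesis's actual algorithm: a
# nearest-neighbour "recognition" step that predicts a dense patch from a
# sparse multi-view reconstruction by transferring training-set geometry.
import numpy as np

def patch_descriptor(depth_patch, bins=16):
    """Crude shape descriptor: normalised histogram of the finite depth
    values in a patch, so patches of different density stay comparable."""
    valid = depth_patch[np.isfinite(depth_patch)]
    if valid.size == 0:
        return np.zeros(bins)
    hist, _ = np.histogram(valid, bins=bins,
                           range=(valid.min(), valid.max() + 1e-6))
    return hist / hist.sum()

def predict_dense_patch(sparse_patch, train_patches, train_dense):
    """Match the sparse input against training patches and return the dense
    ground-truth depth of the best match as the shape prediction."""
    query = patch_descriptor(sparse_patch)
    dists = [np.linalg.norm(query - patch_descriptor(p)) for p in train_patches]
    best = int(np.argmin(dists))
    return train_dense[best]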
Description
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013. Cataloged from PDF version of thesis. Includes bibliographical references (p. 47-50).
Date issued
2013
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology
Keywords
Electrical Engineering and Computer Science.