Shape Anchors for Data-Driven Multi-view Reconstruction
Author(s)
Xiao, Jianxiong; Torralba, Antonio; Owens, Andrew Hale; Freeman, William T.
Terms of use
Open Access Policy: Creative Commons Attribution-Noncommercial-Share Alike
Abstract
We present a data-driven method for building dense 3D reconstructions using a combination of recognition and multi-view cues. Our approach is based on the idea that there are image patches that are so distinctive that we can accurately estimate their latent 3D shapes solely using recognition. We call these patches shape anchors, and we use them as the basis of a multi-view reconstruction system that transfers dense, complex geometry between scenes. We "anchor" our 3D interpretation on these patches, using them to predict geometry for parts of the scene that are relatively ambiguous. The resulting algorithm produces dense reconstructions from stereo point clouds that are sparse and noisy, and we demonstrate it on a challenging dataset of real-world, indoor scenes.
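The abstract describes the pipeline only at a high level. The sketch below illustrates what a single shape-anchor transfer step might look like, assuming precomputed patch descriptors, a database of exemplar (descriptor, dense depth) pairs, and a sparse stereo depth map per patch. The function name, thresholds, and nearest-neighbour matching scheme are illustrative assumptions, not the authors' implementation.

```python
# Minimal, hypothetical sketch of the shape-anchor idea: match image patches
# against a database of (descriptor, depth) exemplars, keep only distinctive
# matches, and require agreement with the sparse stereo depth before
# transferring dense geometry. Names and thresholds are assumptions.
import numpy as np

def transfer_shape_anchors(patch_descs, patch_depths_sparse,
                           db_descs, db_depths,
                           match_thresh=0.5, agree_thresh=0.1):
    """For each query patch, find its nearest exemplar; if the match is
    distinctive enough and consistent with the available sparse stereo
    depth, transfer the exemplar's dense depth patch."""
    transferred = {}
    for i, desc in enumerate(patch_descs):
        dists = np.linalg.norm(db_descs - desc, axis=1)
        j = int(np.argmin(dists))
        if dists[j] > match_thresh:
            continue  # patch not distinctive enough to act as an anchor
        candidate = db_depths[j]
        sparse = patch_depths_sparse[i]
        valid = ~np.isnan(sparse)
        # Check agreement with the sparse, noisy stereo depth where it exists.
        if valid.any():
            err = np.median(np.abs(candidate[valid] - sparse[valid]))
            if err > agree_thresh:
                continue
        transferred[i] = candidate  # dense depth predicted from recognition
    return transferred

# Toy usage with random data, just to show the interface.
rng = np.random.default_rng(0)
db_descs = rng.normal(size=(100, 32))
db_depths = rng.uniform(1.0, 5.0, size=(100, 8, 8))
patch_descs = db_descs[:5] + 0.01 * rng.normal(size=(5, 32))
patch_depths_sparse = np.full((5, 8, 8), np.nan)
patch_depths_sparse[:, ::4, ::4] = db_depths[:5, ::4, ::4]  # a few stereo points
anchors = transfer_shape_anchors(patch_descs, patch_depths_sparse, db_descs, db_depths)
print(sorted(anchors))
```

The key design point this sketch tries to capture is the two-part test: a patch only becomes an anchor when its recognition match is unusually confident and the transferred geometry does not contradict the multi-view evidence.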
Date issued
2013-12
Department
Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory; Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Journal
Proceedings of the 2013 IEEE International Conference on Computer Vision
Publisher
Institute of Electrical and Electronics Engineers (IEEE)
Citation
Owens, Andrew, Jianxiong Xiao, Antonio Torralba, and William Freeman. “Shape Anchors for Data-Driven Multi-View Reconstruction.” 2013 IEEE International Conference on Computer Vision (December 2013).
Version: Author's final manuscript
ISBN
978-1-4799-2840-8
ISSN
1550-5499