SUN3D: A Database of Big Spaces Reconstructed Using SfM and Object Labels
Author(s)
Xiao, Jianxiong; Torralba, Antonio; Owens, Andrew Hale
Download: Torralba_SUN3D.pdf (10.51 MB)
Open Access Policy
Terms of use
Creative Commons Attribution-Noncommercial-Share Alike
Abstract
Existing scene understanding datasets contain only a limited set of views of a place, and they lack representations of complete 3D spaces. In this paper, we introduce SUN3D, a large-scale RGB-D video database with camera pose and object labels, capturing the full 3D extent of many places. The tasks that go into constructing such a dataset are difficult in isolation -- hand-labeling videos is painstaking, and structure from motion (SfM) is unreliable for large spaces. But when we combine them, the dataset construction task becomes much easier. First, we introduce an intuitive labeling tool that uses a partial reconstruction to propagate labels from one frame to another. Then we use the object labels to fix errors in the reconstruction. For this, we introduce a generalization of bundle adjustment that incorporates object-to-object correspondences. This algorithm works by constraining points for the same object from different frames to lie inside a fixed-size bounding box, parameterized by its rotation and translation. The SUN3D database, the source code for the generalized bundle adjustment, and the web-based 3D annotation tool are all available at http://sun3d.cs.princeton.edu.
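To make the bounding-box constraint described in the abstract concrete, the following is a minimal sketch in Python/NumPy of what such a per-object residual might look like: points assigned to one object are expressed in the box's own coordinate frame and penalized only by how far they fall outside its fixed half-extents. The function name, parameterization, and hinge form are illustrative assumptions for this page, not the authors' released generalized bundle adjustment, which jointly optimizes camera poses, 3D points, and the box rotation and translation.

    # Minimal sketch (not the authors' released code) of the object-box
    # constraint: points belonging to one object, observed in different
    # frames, are encouraged to lie inside a single fixed-size 3D box
    # parameterized by its rotation and translation.
    import numpy as np

    def box_residuals(points_world, box_rotation, box_center, box_half_extents):
        # points_world     : (N, 3) 3D points assigned to one object (world frame)
        # box_rotation     : (3, 3) rotation taking world axes to box axes
        # box_center       : (3,)   box translation (center in world coordinates)
        # box_half_extents : (3,)   fixed half-sizes of the box along its axes
        #
        # Express the points in the box's coordinate frame.
        local = (points_world - box_center) @ box_rotation.T
        # Penalize only the amount by which each coordinate exceeds the box.
        overshoot = np.abs(local) - box_half_extents
        return np.maximum(overshoot, 0.0).ravel()

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        pts = rng.uniform(-1.0, 1.0, size=(50, 3))   # hypothetical object points
        R = np.eye(3)                                # axis-aligned box for the demo
        center = np.zeros(3)
        half_extents = np.array([0.8, 0.8, 0.8])
        print("total box violation:", box_residuals(pts, R, center, half_extents).sum())

In a full solver, residuals of this form would be stacked with the usual reprojection errors and minimized over camera poses, points, and box poses together; the fixed box size is what ties observations of the same object across frames.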
Date issued
2013-12
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Journal
Proceedings of the 2013 IEEE International Conference on Computer Vision
Publisher
Institute of Electrical and Electronics Engineers (IEEE)
Citation
Xiao, Jianxiong, Andrew Owens, and Antonio Torralba. “SUN3D: A Database of Big Spaces Reconstructed Using SfM and Object Labels.” 2013 IEEE International Conference on Computer Vision (December 2013).
Version: Author's final manuscript
ISBN
978-1-4799-2840-8
ISSN
1550-5499