Building 3D Generative Models from Minimal Data
Author(s)
Sutherland, Skylar; Egger, Bernhard; Tenenbaum, Joshua
Download: 11263_2023_Article_1870.pdf (12.84 MB)
Terms of use
Creative Commons Attribution (Publisher with Creative Commons License)
Abstract
We propose a method for constructing generative models of 3D objects from a single 3D mesh and improving them through unsupervised low-shot learning from 2D images. Our method produces a 3D morphable model that represents shape and albedo in terms of Gaussian processes. Whereas previous approaches have typically built 3D morphable models from multiple high-quality 3D scans through principal component analysis, we build 3D morphable models from a single scan or template. As we demonstrate in the face domain, these models can be used to infer 3D reconstructions from 2D data (inverse graphics) or 3D data (registration). Specifically, we show that our approach can be used to perform face recognition using only a single 3D template (one scan total, not one per person). We extend our model to a preliminary unsupervised learning framework that enables the learning of the distribution of 3D faces using one 3D template and a small number of 2D images. Our approach is motivated as a potential model for the origins of face perception in human infants, who appear to start with an innate face template and subsequently develop a flexible system for perceiving the 3D structure of any novel face from experience with only 2D images of a relatively small number of familiar faces.
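The abstract describes representing shape and albedo as Gaussian processes defined over a single template mesh, in place of the PCA basis built from many scans in prior morphable models. The sketch below illustrates that idea for shape only: it builds a low-rank deformation basis from a squared-exponential kernel over a toy template's vertices and samples novel shapes from it. The kernel choice, hyperparameters, function names, and the toy grid standing in for a face scan are illustrative assumptions, not the authors' actual model or data.

```python
import numpy as np

# Minimal sketch of a Gaussian-process morphable model built from a single
# template mesh (shape only). Kernel form, hyperparameters, and the toy
# template are assumptions for illustration.

def squared_exp_kernel(X, Y, sigma2=0.01, length=0.5):
    """Squared-exponential covariance over template vertex positions."""
    d2 = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    return sigma2 * np.exp(-d2 / (2.0 * length ** 2))

def build_low_rank_basis(template_vertices, rank=10):
    """Eigendecompose the kernel matrix to get a low-rank deformation basis."""
    K = squared_exp_kernel(template_vertices, template_vertices)
    evals, evecs = np.linalg.eigh(K)
    idx = np.argsort(evals)[::-1][:rank]          # keep leading components
    evals = np.clip(evals[idx], 0.0, None)
    return evecs[:, idx] * np.sqrt(evals)         # n_vertices x rank

def sample_shape(template_vertices, basis, rng):
    """Draw one random shape: template plus a GP deformation per coordinate."""
    rank = basis.shape[1]
    coeffs = rng.standard_normal((rank, 3))       # independent x/y/z deformations
    return template_vertices + basis @ coeffs

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy "template": a coarse planar grid standing in for a single 3D scan.
    g = np.linspace(-1, 1, 5)
    xx, yy = np.meshgrid(g, g)
    template = np.stack([xx.ravel(), yy.ravel(), np.zeros(xx.size)], axis=1)
    basis = build_low_rank_basis(template, rank=10)
    novel_shape = sample_shape(template, basis, rng)
    print(novel_shape.shape)                      # (25, 3): one sampled shape
```

In this reading, the low-rank eigenbasis of the kernel plays the role that PCA components play in scan-based morphable models, and the same construction would extend to a dense face template and, per the abstract, to albedo as well.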
Date issued
2023-09-13
Department
Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences
Publisher
Springer US
Citation
Sutherland, Skylar, Egger, Bernhard, and Tenenbaum, Joshua. 2023. "Building 3D Generative Models from Minimal Data."
Version: Final published version