Notice

This is not the latest version of this item. The latest version can be found at: https://dspace.mit.edu/handle/1721.1/137462.2


dc.contributor.author	Wu, Jiajun
dc.contributor.author	Zhang, Chengkai
dc.contributor.author	Zhang, Xiuming
dc.contributor.author	Zhang, Zhoutong
dc.contributor.author	Freeman, William T.
dc.contributor.author	Tenenbaum, Joshua B.
dc.date.accessioned	2021-11-05T13:57:41Z
dc.date.available	2021-11-05T13:57:41Z
dc.date.issued	2018
dc.identifier.issn	0302-9743
dc.identifier.issn	1611-3349
dc.identifier.uri	https://hdl.handle.net/1721.1/137462
dc.description.abstract	© 2018, Springer Nature Switzerland AG. The problem of single-view 3D shape completion or reconstruction is challenging, because among the many possible shapes that explain an observation, most are implausible and do not correspond to natural objects. Recent research in the field has tackled this problem by exploiting the expressiveness of deep convolutional networks. In fact, there is another level of ambiguity that is often overlooked: among plausible shapes, there are still multiple shapes that fit the 2D image equally well; i.e., the ground truth shape is non-deterministic given a single-view input. Existing fully supervised approaches fail to address this issue, and often produce blurry mean shapes with smooth surfaces but no fine details. In this paper, we propose ShapeHD, pushing the limit of single-view shape completion and reconstruction by integrating deep generative models with adversarially learned shape priors. The learned priors serve as a regularizer, penalizing the model only if its output is unrealistic, not if it deviates from the ground truth. Our design thus overcomes both aforementioned levels of ambiguity. Experiments demonstrate that ShapeHD outperforms the state of the art by a large margin in both shape completion and shape reconstruction on multiple real datasets.	en_US
dc.language.iso	en
dc.publisher	Springer International Publishing	en_US
dc.relation.isversionof	10.1007/978-3-030-01252-6_40	en_US
dc.rights	Creative Commons Attribution-Noncommercial-Share Alike	en_US
dc.rights.uri	http://creativecommons.org/licenses/by-nc-sa/4.0/	en_US
dc.source	arXiv	en_US
dc.title	Learning Shape Priors for Single-View 3D Completion And Reconstruction	en_US
dc.type	Article	en_US
dc.identifier.citation	Wu, Jiajun, Zhang, Chengkai, Zhang, Xiuming, Zhang, Zhoutong, Freeman, William T. et al. 2018. "Learning Shape Priors for Single-View 3D Completion And Reconstruction."
dc.contributor.department	Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory	en_US
dc.eprint.version	Author's final manuscript	en_US
dc.type.uri	http://purl.org/eprint/type/ConferencePaper	en_US
eprint.status	http://purl.org/eprint/status/NonPeerReviewed	en_US
dc.date.updated	2019-05-28T13:12:40Z
dspace.date.submission	2019-05-28T13:12:42Z
mit.metadata.status	Authority Work and Publication Information Needed	en_US
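The abstract's central idea — a learned shape prior that penalizes the model only when its output is unrealistic, not when it merely deviates from the ground truth — can be sketched as a combined training objective. The sketch below is an illustrative simplification, not the paper's exact formulation: the function name `shapehd_loss`, the hinge-style penalty, the realism threshold of 1, and the weighting are all assumed for the example.

```python
def shapehd_loss(recon_loss, realism_score, weight=0.5):
    """Illustrative combined loss (hypothetical names, not the paper's exact form).

    recon_loss:    supervised reconstruction term (e.g. voxel-wise error
                   against the ground-truth shape).
    realism_score: score from an adversarially learned shape prior
                   (a discriminator); here, >= 1 means "realistic".
    The adversarial term is a hinge penalty: it is zero whenever the
    output is judged realistic, so a plausible shape that differs from
    the ground truth is not punished twice.
    """
    naturalness_penalty = max(0.0, 1.0 - realism_score)
    return recon_loss + weight * naturalness_penalty
```

The design choice the abstract emphasizes falls out of the hinge: for a realistic output (`realism_score >= 1`) the second term vanishes and only the reconstruction term remains, which is how the prior acts as a regularizer rather than a second supervision signal.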

