Show simple item record

dc.contributor.author: Yao, Shunyu
dc.contributor.author: Hsu, Tzu Ming
dc.contributor.author: Zhu, Jun-Yan
dc.contributor.author: Wu, Jiajun
dc.contributor.author: Torralba, Antonio
dc.contributor.author: Freeman, William T.
dc.contributor.author: Tenenbaum, Joshua B.
dc.date.accessioned: 2020-04-07T20:28:53Z
dc.date.available: 2020-04-07T20:28:53Z
dc.date.issued: 2018
dc.identifier.uri: https://hdl.handle.net/1721.1/124516
dc.description.abstract: We aim to obtain an interpretable, expressive, and disentangled scene representation that contains comprehensive structural and textural information for each object. Previous scene representations learned by neural networks are often uninterpretable, limited to a single object, or lacking in 3D knowledge. In this work, we propose 3D scene de-rendering networks (3D-SDN) to address these issues by integrating disentangled representations for semantics, geometry, and appearance into a deep generative model. Our scene encoder performs inverse graphics, translating a scene into a structured object-wise representation. Our decoder has two components: a differentiable shape renderer and a neural texture generator. The disentanglement of semantics, geometry, and appearance supports 3D-aware scene manipulation, e.g., rotating and moving objects freely while keeping consistent shape and texture, and changing an object's appearance without affecting its shape. Experiments demonstrate that our editing scheme based on 3D-SDN is superior to its 2D counterpart. © 2018. Poster presentation at the 32nd annual Conference on Neural Information Processing Systems (NIPS 2018), December 3-5, 2018, Montréal, Québec.
dc.description.sponsorship: NSF (no. 1231216)
dc.description.sponsorship: NSF (no. 1447476)
dc.description.sponsorship: NSF (no. 1524817)
dc.description.sponsorship: ONR MURI (no. N00014-16-1-2007)
dc.language.iso: en
dc.publisher: Neural Information Processing Systems Foundation, Inc.
dc.relation.isversionof: https://papers.nips.cc/paper/7459-3d-aware-scene-manipulation-via-inverse-graphics
dc.rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
dc.source: Neural Information Processing Systems (NIPS)
dc.title: 3D-aware scene manipulation via inverse graphics
dc.type: Article
dc.identifier.citation: Yao, Shunyu, et al. "3D-aware scene manipulation via inverse graphics." Advances in Neural Information Processing Systems 31 (2018). https://papers.nips.cc/book/advances-in-neural-information-processing-systems-31-2018
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.contributor.department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
dc.relation.journal: Advances in Neural Information Processing Systems
dc.eprint.version: Final published version
dc.type.uri: http://purl.org/eprint/type/ConferencePaper
eprint.status: http://purl.org/eprint/status/NonPeerReviewed
dc.date.updated: 2019-05-28T12:47:10Z
dspace.date.submission: 2019-05-28T12:47:11Z
mit.journal.volume: 31

