Show simple item record

dc.contributor.author	Wu, Jiajun
dc.contributor.author	Wang, Yifan
dc.contributor.author	Xue, Tianfan
dc.contributor.author	Sun, Xingyuan
dc.contributor.author	Freeman, William T.
dc.contributor.author	Tenenbaum, Joshua B.
dc.date.accessioned	2020-04-22T19:21:17Z
dc.date.available	2020-04-22T19:21:17Z
dc.date.issued	2017
dc.identifier.uri	https://hdl.handle.net/1721.1/124813
dc.description.abstract	3D object reconstruction from a single image is a highly under-determined problem, requiring strong prior knowledge of plausible 3D shapes. This introduces challenges for learning-based approaches, as 3D object annotations are scarce in real images. Previous work chose to train on synthetic data with ground-truth 3D information, but suffered from the domain gap when tested on real data. In this work, we propose MarrNet, an end-to-end trainable model that sequentially estimates 2.5D sketches and 3D object shape. Our disentangled, two-step formulation has three advantages. First, compared to full 3D shape, 2.5D sketches are much easier to recover from a 2D image; models that recover 2.5D sketches are also more likely to transfer from synthetic to real data. Second, for 3D reconstruction from 2.5D sketches, systems can learn purely from synthetic data, because we can easily render realistic 2.5D sketches without modeling object appearance variations in real images, such as lighting and texture. This further relieves the domain adaptation problem. Third, we derive differentiable projective functions from 3D shape to 2.5D sketches; the framework is therefore end-to-end trainable on real images, requiring no human annotations. Our model achieves state-of-the-art performance on 3D shape reconstruction. ©2017 Presented as a poster at the 31st Conference on Neural Information Processing Systems (NeurIPS 2017), December 4-9, 2017, Long Beach, California.	en_US
dc.language.iso	en
dc.publisher	Neural Information Processing Systems Foundation, Inc.	en_US
dc.relation.isversionof	https://papers.nips.cc/paper/6657-marrnet-3d-shape-reconstruction-via-25d-sketches	en_US
dc.rights	Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.	en_US
dc.source	Neural Information Processing Systems (NIPS)	en_US
dc.title	MarrNet: 3D shape reconstruction via 2.5D sketches	en_US
dc.type	Article	en_US
dc.identifier.citation	Wu, Jiajun, et al., "MarrNet: 3D shape reconstruction via 2.5D sketches." In Guyon, I., et al., eds., Advances in Neural Information Processing Systems 30 (San Diego, California: Neural Information Processing Systems Foundation, Inc., 2017). url https://papers.nips.cc/paper/6657-marrnet-3d-shape-reconstruction-via-25d-sketches ©2017 Author(s)	en_US
dc.contributor.department	Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory	en_US
dc.relation.journal	Advances in Neural Information Processing Systems	en_US
dc.eprint.version	Final published version	en_US
dc.type.uri	http://purl.org/eprint/type/ConferencePaper	en_US
eprint.status	http://purl.org/eprint/status/NonPeerReviewed	en_US
dc.date.updated	2019-05-28T12:52:50Z
dspace.date.submission	2019-05-28T12:52:51Z
mit.journal.volume	30	en_US
mit.metadata.status	Complete
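The abstract describes projecting a predicted 3D shape back to 2.5D sketches and using the result as a consistency signal, which makes the model trainable on real images without annotations. A minimal, hypothetical NumPy sketch of that idea follows; the function names (`project_depth`, `reprojection_loss`) and the simplified orthographic projection are assumptions for illustration, not the paper's actual formulation:

```python
import numpy as np

def project_depth(voxels, threshold=0.5):
    """Project a voxel occupancy grid (D, H, W) to a depth map (H, W).

    For each (h, w) ray along the first axis, the depth is the index of
    the first occupied voxel; rays that hit nothing get depth D
    (background). This is a simplified orthographic stand-in for the
    paper's differentiable projective functions.
    """
    D, _, _ = voxels.shape
    occupied = voxels > threshold
    hit = occupied.any(axis=0)              # which rays hit the shape
    first = np.argmax(occupied, axis=0)     # index of first occupied voxel
    return np.where(hit, first, D).astype(float)

def reprojection_loss(pred_voxels, sketch_depth):
    """Mean-squared consistency between the depth reprojected from the
    predicted 3D shape and the estimated 2.5D depth sketch."""
    return float(np.mean((project_depth(pred_voxels) - sketch_depth) ** 2))

# Toy example: a 4x4x4 grid with a slab occupied at depth index 1.
vox = np.zeros((4, 4, 4))
vox[1] = 1.0
depth = project_depth(vox)                   # every ray hits at depth 1
loss = reprojection_loss(vox, depth)         # self-consistent, so 0.0
```

In the paper this projection is differentiable, so the loss can be backpropagated through the 3D shape estimator; the NumPy version above only illustrates the forward computation.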

