Show simple item record

dc.contributor.author: Rao, Pramod
dc.contributor.author: Mallikarjun, B. R.
dc.contributor.author: Fox, Gereon
dc.contributor.author: Weyrich, Tim
dc.contributor.author: Bickel, Bernd
dc.contributor.author: Pfister, Hanspeter
dc.contributor.author: Matusik, Wojciech
dc.contributor.author: Zhan, Fangneng
dc.contributor.author: Tewari, Ayush
dc.contributor.author: Theobalt, Christian
dc.contributor.author: Elgharib, Mohamed
dc.date.accessioned: 2023-11-06T16:33:58Z
dc.date.available: 2023-11-06T16:33:58Z
dc.date.issued: 2023-10-31
dc.identifier.uri: https://hdl.handle.net/1721.1/152909
dc.description.abstract: Portrait viewpoint and illumination editing is an important problem with several applications in VR/AR, movies, and photography. Comprehensive knowledge of geometry and illumination is critical for obtaining photorealistic results. Existing methods cannot explicitly model the head in 3D while handling both viewpoint and illumination editing from a single image. In this paper, we propose VoRF, a novel approach that can take even a single portrait image as input and relight human heads under novel illuminations that can be viewed from arbitrary viewpoints. VoRF represents a human head as a continuous volumetric field and learns a prior model of human heads using a coordinate-based MLP with individual latent spaces for identity and illumination. The prior model is learned in an auto-decoder manner over a diverse class of head shapes and appearances, allowing VoRF to generalize to novel test identities from a single input image. Additionally, VoRF has a reflectance MLP that uses the intermediate features of the prior model to render One-Light-at-A-Time (OLAT) images under novel views. We synthesize novel illuminations by combining these OLAT images with target environment maps. Qualitative and quantitative evaluations demonstrate the effectiveness of VoRF for relighting and novel view synthesis, even when applied to unseen subjects under uncontrolled illumination. This work is an extension of Rao et al. (VoRF: Volumetric Relightable Faces 2022). We provide extensive evaluations and ablation studies of our model, and also present an application in which any face can be relit using textual input.
dc.publisher: Springer US
dc.relation.isversionof: https://doi.org/10.1007/s11263-023-01899-3
dc.rights: Creative Commons Attribution
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.source: Springer US
dc.title: A Deeper Analysis of Volumetric Relightable Faces
dc.type: Article
dc.identifier.citation: Rao, Pramod, Mallikarjun, B. R., Fox, Gereon, Weyrich, Tim, Bickel, Bernd et al. 2023. "A Deeper Analysis of Volumetric Relightable Faces."
dc.contributor.department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
dc.identifier.mitlicense: PUBLISHER_CC
dc.eprint.version: Final published version
dc.type.uri: http://purl.org/eprint/type/JournalArticle
eprint.status: http://purl.org/eprint/status/PeerReviewed
dc.date.updated: 2023-11-05T04:12:09Z
dc.language.rfc3066: en
dc.rights.holder: The Author(s)
dspace.embargo.terms: N
dspace.date.submission: 2023-11-05T04:12:09Z
mit.license: PUBLISHER_CC
mit.metadata.status: Authority Work and Publication Information Needed
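The abstract's final rendering step — combining OLAT images with a target environment map — relies on the linearity of light transport: a relit image is a weighted sum of the One-Light-at-A-Time renders, with weights sampled from the environment map. The sketch below illustrates that principle only; it is not the authors' implementation, and the function name, array shapes, and toy data are assumptions for illustration.

```python
import numpy as np

# Illustrative sketch (not the paper's code): relight by a weighted sum of
# OLAT renders, with weights taken from the target environment map's
# intensity at each light direction.
def relight_from_olat(olat_images, env_weights):
    """olat_images: (L, H, W, 3) array, one render per light direction.
    env_weights: (L,) environment-map intensities at those directions.
    Names and shapes are assumptions for this sketch."""
    olat = np.asarray(olat_images, dtype=np.float64)
    w = np.asarray(env_weights, dtype=np.float64)
    # Sum over the light axis: relit[h, x, c] = sum_l w[l] * olat[l, h, x, c]
    return np.tensordot(w, olat, axes=(0, 0))

# Toy demo: two constant OLAT "renders" and two environment-map weights.
olat = np.stack([np.full((2, 2, 3), 1.0), np.full((2, 2, 3), 2.0)])
relit = relight_from_olat(olat, [0.5, 0.25])  # every pixel = 0.5*1 + 0.25*2
```

In practice the environment map would be sampled (or projected) at the studio light directions used to capture the OLAT basis, but the relighting itself remains this single linear combination.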

