
dc.contributor.author: Sun, Tiancheng
dc.contributor.author: Xu, Zexiang
dc.contributor.author: Zhang, Xiuming
dc.contributor.author: Fanello, Sean
dc.contributor.author: Rhemann, Christoph
dc.contributor.author: Debevec, Paul
dc.contributor.author: Tsai, Yun-Ta
dc.contributor.author: Barron, Jonathan
dc.contributor.author: Ramamoorthi, Ravi
dc.date.accessioned: 2025-02-18T18:18:23Z
dc.date.available: 2025-02-18T18:18:23Z
dc.date.issued: 2020-11-26
dc.identifier.isbn: 978-1-4503-8107-9
dc.identifier.uri: https://hdl.handle.net/1721.1/158233
dc.description.abstract: The light stage has been widely used in computer graphics for the past two decades, primarily to enable the relighting of human faces. By capturing the appearance of the human subject under different light sources, one obtains the light transport matrix of that subject, which enables image-based relighting in novel environments. However, due to the finite number of lights in the stage, the light transport matrix only represents a sparse sampling on the entire sphere. As a consequence, relighting the subject with a point light or a directional source that does not coincide exactly with one of the lights in the stage requires interpolation and resampling of the images corresponding to nearby lights, and this leads to ghosting shadows, aliased specularities, and other artifacts. To ameliorate these artifacts and produce better results under arbitrary high-frequency lighting, this paper proposes a learning-based solution for the "super-resolution" of scans of human faces taken from a light stage. Given an arbitrary "query" light direction, our method aggregates the captured images corresponding to neighboring lights in the stage, and uses a neural network to synthesize a rendering of the face that appears to be illuminated by a "virtual" light source at the query location. This neural network must circumvent the inherent aliasing and regularity of the light stage data that was used for training, which we accomplish through the use of regularized traditional interpolation methods within our network. Our learned model is able to produce renderings for arbitrary light directions that exhibit realistic shadows and specular highlights, and is able to generalize across a wide variety of subjects. Our super-resolution approach enables more accurate renderings of human subjects under detailed environment maps, or the construction of simpler light stages that contain fewer light sources while still yielding renderings of quality comparable to light stages with more densely sampled lights.
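The interpolation-and-resampling baseline that the abstract says causes ghosting shadows and aliased specularities can be sketched as follows. This is an illustrative implementation only, not the authors' code: the function name, the choice of k nearest lights, and the inverse-angular-distance weighting are all assumptions made for the sketch.

```python
import numpy as np

def interpolate_relighting(query_dir, light_dirs, images, k=4):
    """Blend the k captured images whose stage-light directions are
    closest (by angle) to the query light direction.

    query_dir:  (3,) desired light direction
    light_dirs: (n, 3) directions of the stage's physical lights
    images:     (n, H, W) or (n, H, W, 3) captures, one per light
    """
    # Normalize directions so dot products are cosines of angles.
    query_dir = query_dir / np.linalg.norm(query_dir)
    dirs = light_dirs / np.linalg.norm(light_dirs, axis=1, keepdims=True)
    # Angular distance between the query and each stage light.
    angles = np.arccos(np.clip(dirs @ query_dir, -1.0, 1.0))
    nearest = np.argsort(angles)[:k]
    # Inverse-distance weights; the epsilon avoids division by zero
    # when the query coincides exactly with a stage light.
    w = 1.0 / (angles[nearest] + 1e-8)
    w /= w.sum()
    # Weighted blend of the neighboring captures.
    return np.tensordot(w, images[nearest], axes=1)
```

Because the blend simply cross-fades fixed shadow and highlight positions from the neighboring captures, sharp shadows ghost and specular highlights alias as the query direction moves, which is exactly the failure mode the paper's learned super-resolution network is designed to address.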
dc.publisher: Association for Computing Machinery
dc.relation.isversionof: https://doi.org/10.1145/3414685.3417821
dc.rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
dc.source: Association for Computing Machinery
dc.title: Light Stage Super-Resolution: Continuous High-Frequency Relighting
dc.type: Article
dc.identifier.citation: Sun, Tiancheng, Xu, Zexiang, Zhang, Xiuming, Fanello, Sean, Rhemann, Christoph et al. 2020. "Light Stage Super-Resolution: Continuous High-Frequency Relighting." ACM Transactions on Graphics, 39 (6).
dc.contributor.department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
dc.relation.journal: ACM Transactions on Graphics
dc.identifier.mitlicense: PUBLISHER_POLICY
dc.eprint.version: Final published version
dc.type.uri: http://purl.org/eprint/type/JournalArticle
eprint.status: http://purl.org/eprint/status/PeerReviewed
dc.date.updated: 2025-02-01T08:50:23Z
dc.language.rfc3066: en
dc.rights.holder: The author(s)
dspace.date.submission: 2025-02-01T08:50:24Z
mit.journal.volume: 39
mit.journal.issue: 6
mit.license: PUBLISHER_POLICY
mit.metadata.status: Authority Work and Publication Information Needed

