Show simple item record

dc.contributor.author	Mendiratta, Mohit
dc.contributor.author	Pan, Xingang
dc.contributor.author	Elgharib, Mohamed
dc.contributor.author	Teotia, Kartik
dc.contributor.author	B R, Mallikarjun
dc.contributor.author	Tewari, Ayush
dc.contributor.author	Golyanik, Vladislav
dc.contributor.author	Kortylewski, Adam
dc.contributor.author	Theobalt, Christian
dc.date.accessioned	2024-01-04T14:04:55Z
dc.date.available	2024-01-04T14:04:55Z
dc.date.issued	2023-12-04
dc.identifier.issn	0730-0301
dc.identifier.uri	https://hdl.handle.net/1721.1/153278
dc.description.abstract	Capturing and editing full-head performances enables the creation of virtual characters with various applications such as extended reality and media production. The past few years witnessed a steep rise in the photorealism of human head avatars. Such avatars can be controlled through different input data modalities, including RGB, audio, depth, IMUs, and others. While these data modalities provide effective means of control, they mostly focus on editing the head movements such as the facial expressions, head pose, and/or camera viewpoint. In this paper, we propose AvatarStudio, a text-based method for editing the appearance of a dynamic full head avatar. Our approach builds on existing work to capture dynamic performances of human heads using Neural Radiance Field (NeRF) and edits this representation with a text-to-image diffusion model. Specifically, we introduce an optimization strategy for incorporating multiple keyframes representing different camera viewpoints and time stamps of a video performance into a single diffusion model. Using this personalized diffusion model, we edit the dynamic NeRF by introducing view-and-time-aware Score Distillation Sampling (VT-SDS) following a model-based guidance approach. Our method edits the full head in a canonical space and then propagates these edits to the remaining time steps via a pre-trained deformation network. We evaluate our method visually and numerically via a user study, and results show that our method outperforms existing approaches. Our experiments validate the design choices of our method and highlight that our edits are genuine, personalized, as well as 3D- and time-consistent.	en_US
dc.publisher	ACM	en_US
dc.relation.isversionof	https://doi.org/10.1145/3618368	en_US
dc.rights	Creative Commons Attribution	en_US
dc.rights.uri	https://creativecommons.org/licenses/by/4.0/	en_US
dc.source	Association for Computing Machinery	en_US
dc.title	AvatarStudio: Text-driven Editing of 3D Dynamic Human Head Avatars	en_US
dc.type	Article	en_US
dc.identifier.citation	Mendiratta, Mohit, Pan, Xingang, Elgharib, Mohamed, Teotia, Kartik, B R, Mallikarjun et al. 2023. "AvatarStudio: Text-driven Editing of 3D Dynamic Human Head Avatars." ACM Transactions on Graphics, 42 (6).
dc.contributor.department	Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
dc.relation.journal	ACM Transactions on Graphics	en_US
dc.identifier.mitlicense	PUBLISHER_CC
dc.eprint.version	Final published version	en_US
dc.type.uri	http://purl.org/eprint/type/JournalArticle	en_US
eprint.status	http://purl.org/eprint/status/PeerReviewed	en_US
dc.date.updated	2024-01-01T08:50:01Z
dc.language.rfc3066	en
dc.rights.holder	The author(s)
dspace.date.submission	2024-01-01T08:50:02Z
mit.journal.volume	42	en_US
mit.journal.issue	6	en_US
mit.license	PUBLISHER_CC
mit.metadata.status	Authority Work and Publication Information Needed	en_US
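
Note: the abstract above describes editing a dynamic NeRF with a personalized text-to-image diffusion model via view-and-time-aware Score Distillation Sampling (VT-SDS). The PyTorch sketch below only illustrates the generic score-distillation update that this family of methods builds on; the denoiser interface, the view/time conditioning arguments, and the weighting choice are assumptions for illustration, not the authors' implementation.

# Minimal sketch of a view/time-conditioned score-distillation update.
# All names (`denoiser`, the conditioning embeddings, the weighting) are
# hypothetical; this shows the generic SDS idea, not the AvatarStudio code.
import torch

def sds_step(denoiser, rendered, text_emb, view_emb, time_emb, alphas_cumprod):
    """Push gradients from a frozen diffusion denoiser into a rendered image.

    rendered:        NeRF render, shape (1, 3, H, W), differentiable w.r.t. the NeRF
    *_emb:           conditioning vectors (text prompt, camera view, video time)
    alphas_cumprod:  diffusion noise schedule, shape (T,)
    """
    T = alphas_cumprod.shape[0]
    t = torch.randint(0, T, (1,), device=rendered.device)           # random diffusion step
    a_bar = alphas_cumprod[t].view(1, 1, 1, 1)
    noise = torch.randn_like(rendered)
    noisy = a_bar.sqrt() * rendered + (1.0 - a_bar).sqrt() * noise   # forward diffusion

    with torch.no_grad():                                            # diffusion model stays frozen
        noise_pred = denoiser(noisy, t, text_emb, view_emb, time_emb)

    weight = 1.0 - a_bar                                             # one common SDS weighting choice
    grad = weight * (noise_pred - noise)                             # score-distillation gradient
    rendered.backward(gradient=grad)                                 # flows back into the NeRF parameters

In the setting the abstract describes, `rendered` would come from rendering the head avatar at a sampled camera view and time stamp, and the accumulated gradients would update the canonical-space NeRF rather than the image itself, with edits propagated to other time steps by the pre-trained deformation network.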

