Speech2Face: Learning the Face Behind a Voice
Author(s)
Oh, Tae-Hyun; Dekel, Tali; Kim, Changil; Mosseri, Inbar; Freeman, William T.; Rubinstein, Michael; Matusik, Wojciech
Download
Submitted version (5.054 MB)
Open Access Policy
Terms of use
Creative Commons Attribution-Noncommercial-Share Alike
Abstract
How much can we infer about a person's looks from the way they speak? In this paper, we study the task of reconstructing a facial image of a person from a short audio recording of that person speaking. We design and train a deep neural network to perform this task using millions of natural Internet/YouTube videos of people speaking. During training, our model learns voice-face correlations that allow it to produce images capturing various physical attributes of the speakers, such as age, gender, and ethnicity. This is done in a self-supervised manner, by exploiting the natural co-occurrence of faces and speech in Internet videos, without the need to model attributes explicitly. We evaluate and numerically quantify how, and in what manner, our Speech2Face reconstructions, obtained directly from audio, resemble the true face images of the speakers.
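To make the self-supervised setup concrete, the sketch below (a minimal illustration, not the authors' released code) shows one plausible PyTorch training step: a voice encoder regresses the face feature that a pretrained face-recognition network extracts from the video frame co-occurring with the audio, so faces and speech supervise each other and no attribute labels are needed. The layer sizes, the stand-in `face_feature_net`, and the L1 loss are illustrative assumptions.

```python
# A minimal sketch of self-supervised voice-to-face-feature regression,
# as described in the abstract. All shapes and module choices are assumed.
import torch
import torch.nn as nn

class VoiceEncoder(nn.Module):
    """Maps a speech spectrogram to a face-feature vector (dimension assumed)."""
    def __init__(self, feat_dim: int = 4096):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, feat_dim)

    def forward(self, spectrogram: torch.Tensor) -> torch.Tensor:
        h = self.conv(spectrogram).flatten(1)
        return self.fc(h)

def train_step(voice_enc, face_feature_net, spec, frame, opt):
    """One self-supervised step: the regression target comes from the video
    frame that co-occurs with the audio, not from any human annotation."""
    with torch.no_grad():
        target = face_feature_net(frame)        # pretrained face-recognition net (stand-in)
    pred = voice_enc(spec)
    loss = nn.functional.l1_loss(pred, target)  # illustrative; the paper's loss is richer
    opt.zero_grad()
    loss.backward()
    opt.step()
    # At test time, a pretrained face decoder would render pred as a canonical face image.
    return loss.item()

if __name__ == "__main__":
    enc = VoiceEncoder()
    # Hypothetical stand-in for a frozen face-recognition feature extractor.
    face_net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 4096))
    opt = torch.optim.Adam(enc.parameters(), lr=1e-4)
    spec = torch.randn(8, 1, 257, 400)   # batch of spectrograms (shape assumed)
    frame = torch.randn(8, 3, 64, 64)    # co-occurring video frames
    print(train_step(enc, face_net, spec, frame, opt))
```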
Date issued
2020-01
Department
Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory; Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Journal
2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition
Publisher
Institute of Electrical and Electronics Engineers (IEEE)
Citation
Oh, Tae-Hyun et al. "Speech2Face: Learning the Face Behind a Voice." 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, June 2019, Long Beach, California, Institute of Electrical and Electronics Engineers, January 2020. © 2019 IEEE
Version: Original manuscript
ISBN
9781728132938
ISSN
2575-7075