On learning associations of faces and voices
Author(s)
Kim, Changil; Shin, Hijung Valentina; Oh, Tae-Hyun; Kaspar, Alexandre; Elgharib, Mohamed; Matusik, Wojciech
Abstract
In this paper, we study the associations between human faces and voices. Audiovisual integration, specifically the integration of facial and vocal information, is a well-researched area in neuroscience. It has been shown that the overlapping information between the two modalities plays a significant role in perceptual tasks such as speaker identification. Through an online study on a new dataset we created, we confirm previous findings that people can associate unseen faces with corresponding voices, and vice versa, with greater-than-chance accuracy. We computationally model the overlapping information between faces and voices and show that the learned cross-modal representation contains enough information to identify matching faces and voices with performance similar to that of humans. Our representation exhibits correlations with certain demographic attributes and with features obtained from either the visual or the aural modality alone. We release the dataset of audiovisual recordings and demographic annotations of people reading out short text that was used in our studies. ©2019

Keywords: face-voice association; multi-modal representation learning
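The abstract only summarizes the approach; this record contains no implementation details. As a rough illustration of what a cross-modal face-voice matching model of this kind typically looks like, the sketch below projects face and voice feature vectors into a shared embedding space with two small encoders and scores candidate pairs by cosine similarity. All names, dimensions, and the triplet-style objective are assumptions made for illustration, not the authors' architecture.

```python
# Illustrative sketch of cross-modal face-voice matching (assumed setup; not the
# authors' method). Two small encoders project modality-specific features into a
# shared embedding space; matching is scored by cosine similarity.
import torch
import torch.nn as nn
import torch.nn.functional as F

FACE_DIM, VOICE_DIM, EMBED_DIM = 512, 256, 128  # assumed feature sizes


class Encoder(nn.Module):
    """Projects a modality-specific feature vector into the shared space."""

    def __init__(self, in_dim: int, out_dim: int = EMBED_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, out_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.net(x), dim=-1)  # unit-length embeddings


face_enc, voice_enc = Encoder(FACE_DIM), Encoder(VOICE_DIM)

# Toy batch: face_i corresponds to voice_i (random data, purely for illustration).
faces, voices = torch.randn(8, FACE_DIM), torch.randn(8, VOICE_DIM)
f, v = face_enc(faces), voice_enc(voices)

# Triplet-style objective: a face embedding should lie closer to its own voice
# embedding than to a randomly permuted (mismatched) one.
neg = v[torch.randperm(v.size(0))]
loss = F.triplet_margin_loss(anchor=f, positive=v, negative=neg, margin=0.2)
loss.backward()

# At test time, a 1-out-of-N matching task picks the candidate voice whose
# embedding has the highest cosine similarity with the probe face.
sim = f @ v.t()           # pairwise cosine similarities (embeddings are normalized)
pred = sim.argmax(dim=1)  # index of best-matching voice for each face
print(loss.item(), pred.tolist())
```

In such a setup, chance accuracy on a two-alternative matching task is 50%, which is the baseline against which both human and model performance are compared.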
Date issued
2018-12
Department
Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Journal
Lecture notes in computer science
Publisher
Springer Nature
Citation
Kim, Changil, et al., "On learning associations of faces and voices." In Jawahar, C., H. Li, G. Mori, and K. Schindler, eds., Computer vision: 14th Asian Conference on Computer Vision (ACCV 2018), December 2–6, 2018, Perth, Western Australia. Lecture notes in computer science 11365 (Cham: Springer Nature, 2018): pp. 276–292. doi: 10.1007/978-3-030-20873-8_18. ©2018 Author(s)
Version: Original manuscript
ISBN
978-3-030-20873-8
978-3-030-20872-1
ISSN
1611-3349
0302-9743