Generative Modeling of Audible Shapes for Object Perception
Author(s)
Zhang, Zhoutong; Wu, Jiajun; Li, Qiujia; Huang, Zhengjia; Traer, James; McDermott, Josh H.; Tenenbaum, Joshua B.; Freeman, William T.
Abstract
© 2017 IEEE. Humans infer rich knowledge of objects from both auditory and visual cues. Building a machine with such competency, however, is very challenging, because it is difficult to capture large-scale, clean data of objects together with both their appearance and the sounds they make. In this paper, we present a novel, open-source pipeline that generates audio-visual data purely from 3D object shapes and their physical properties. Through comparison with audio recordings and human behavioral studies, we validate the accuracy of the sounds it generates. Using this generative model, we construct a synthetic audio-visual dataset, Sound-20K, for object perception tasks. We demonstrate that auditory and visual information play complementary roles in object perception, and further, that the representation learned on synthetic audio-visual data can transfer to real-world scenarios.
Date issued
2017-10
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science; Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory; Center for Brains, Minds, and Machines; Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences
Publisher
Institute of Electrical and Electronics Engineers (IEEE)
Citation
Zhang, Zhoutong, Wu, Jiajun, Li, Qiujia, Huang, Zhengjia, Traer, James, et al. 2017. "Generative Modeling of Audible Shapes for Object Perception." 2017 IEEE International Conference on Computer Vision (ICCV).
Version: Author's final manuscript