The Sound of Pixels
Author(s)
Zhao, Hang; Gan, Chuang; Rouditchenko, Andrew; Vondrick, Carl Martin; McDermott, Joshua Hartman; Torralba, Antonio
Abstract
We introduce PixelPlayer, a system that leverages large amounts of unlabeled videos to learn to locate image regions that produce sounds and to separate the input sounds into a set of components, each representing the sound from one pixel. Our approach capitalizes on the natural synchronization of the visual and audio modalities to learn models that jointly parse sounds and images, without requiring additional manual supervision. Experimental results on a newly collected MUSIC dataset show that our proposed Mix-and-Separate framework outperforms several baselines on source separation. Qualitative results suggest our model learns to ground sounds in vision, enabling applications such as independently adjusting the volume of sound sources.
Keywords: Cross-modal learning; Sound separation and localization
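The Mix-and-Separate idea can be sketched in a few lines: mix the audio of two videos, then train a network to recover each source from the mixture, using masks derived from the individual spectrograms as free supervision. The snippet below is a minimal NumPy illustration of how such training targets can be built (it assumes ratio masks, one common choice; it is not the authors' implementation):

```python
import numpy as np

def mix_and_separate_targets(spec_a, spec_b, eps=1e-8):
    """Given magnitude spectrograms of two separate videos, form the
    synthetic mixture and per-source ratio masks. The masks serve as
    self-supervised targets for a separation network."""
    mixture = spec_a + spec_b
    mask_a = spec_a / (mixture + eps)  # fraction of energy from source A
    mask_b = spec_b / (mixture + eps)  # fraction of energy from source B
    return mixture, mask_a, mask_b

# Toy spectrograms (frequency x time) standing in for two videos' audio.
rng = np.random.default_rng(0)
spec_a = rng.random((4, 5)) + 0.1
spec_b = rng.random((4, 5)) + 0.1

mixture, mask_a, mask_b = mix_and_separate_targets(spec_a, spec_b)

# Applying a mask to the mixture recovers the corresponding source,
# so a network predicting mask_a from (mixture, video A frames) is
# supervised without any manual labels.
recovered_a = mask_a * mixture
```

Because the ground-truth sources are known before mixing, separation quality can be measured exactly during training, even though no human ever annotated the sounds.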
Date issued
2018-10-06
Department
MIT-IBM Watson AI Lab; Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Springer Nature
Citation
Zhao, Hang et al. "The Sound of Pixels." Computer Vision – European Conference on Computer Vision (ECCV 2018), September 8-14, 2018, Munich, Germany, edited by V. Ferrari, M. Hebert, C. Sminchisescu, and Y. Weiss. Lecture Notes in Computer Science, vol. 11205, pages 587-604. Springer, Cham, 2018. © 2018 Springer Nature
Version: Author's final manuscript
ISBN
9783030012458
9783030012465
ISSN
0302-9743
1611-3349