Jointly Discovering Visual Objects and Spoken Words from Raw Sensory Input
Author(s)
Harwath, David F.; Recasens, Adria; Suris Coll-Vinent, Didac; Chuang, Galen; Torralba, Antonio; Glass, James R.
Abstract
In this paper, we explore neural network models that learn to associate segments of spoken audio captions with the semantically relevant portions of natural images that they refer to. We demonstrate that these audio-visual associative localizations emerge from network-internal representations learned as a by-product of training to perform an image-audio retrieval task. Our models operate directly on the image pixels and speech waveform, and do not rely on any conventional supervision in the form of labels, segmentations, or alignments between the modalities during training. We perform analysis using the Places 205 and ADE20k datasets, demonstrating that our models implicitly learn semantically-coupled object and word detectors.
Keywords
vision and language; sound; speech; convolutional networks; multimodal learning; unsupervised learning
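The abstract describes localizations that emerge from network-internal representations trained only for image-audio retrieval. A minimal sketch of the underlying idea is shown below: dot products between a convolutional image feature map and a convolutional audio feature sequence yield a 3-D similarity volume ("matchmap") whose pooled value can serve as a retrieval score and whose peaks associate image regions with caption segments. This is an illustrative PyTorch sketch, not the authors' released code; the tensor shapes and the specific pooling variants are assumptions.

```python
# Sketch of a matchmap-style audio-visual similarity, assuming
# conv features of shape (D, H, W) for the image and (D, T) for the caption.
import torch

def matchmap(image_feats: torch.Tensor, audio_feats: torch.Tensor) -> torch.Tensor:
    """Return an (H, W, T) volume of dot products between every image
    location and every audio frame."""
    return torch.einsum('dhw,dt->hwt', image_feats, audio_feats)

def sisa(mm: torch.Tensor) -> torch.Tensor:
    """Sum (here, mean) over both image locations and audio frames."""
    return mm.mean()

def misa(mm: torch.Tensor) -> torch.Tensor:
    """Max over image locations, then mean over audio frames: each frame
    is matched to its best spatial location."""
    h, w, t = mm.shape
    return mm.reshape(h * w, t).max(dim=0).values.mean()

# Toy example with random features (512 channels, 14x14 spatial map, 128 frames).
img = torch.randn(512, 14, 14)
aud = torch.randn(512, 128)
mm = matchmap(img, aud)
print(mm.shape, sisa(mm).item(), misa(mm).item())
```

In a retrieval setup, a score such as misa(mm) would be driven higher for matched image-caption pairs than for mismatched ones (e.g., via a ranking loss), and the intermediate matchmap is what exposes the object and word localizations without any explicit labels or alignments.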
Date issued
2018-10-06
Department
Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory; Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Journal
Computer Vision – ECCV 2018
Publisher
Springer International Publishing
Citation
Harwath, David et al. "Jointly Discovering Visual Objects and Spoken Words from Raw Sensory Input." Computer Vision – ECCV 2018, September 8–14, 2018, Munich, Germany, edited by V. Ferrari et al., Springer, 2018.
Version: Original manuscript
ISBN
9783030012304
9783030012311
ISSN
0302-9743
1611-3349