Unsupervised learning of spoken language with visual context
Author(s)
Harwath, David; Torralba, Antonio; Glass, James R.
Abstract
Humans learn to speak before they can read or write, so why can't computers do the same? In this paper, we present a deep neural network model capable of rudimentary spoken language acquisition using untranscribed audio training data, whose only supervision comes in the form of contextually relevant visual images. We describe the collection of our dataset, comprising over 120,000 spoken audio captions for the Places image dataset, and evaluate our model on an image search and annotation task. We also provide visualizations suggesting that our model is learning to recognize meaningful words within the caption spectrograms.
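The abstract does not spell out the architecture or training objective. The sketch below shows one plausible realization of such audio-visual learning: a two-branch model that projects image features and caption spectrograms into a shared embedding space and is trained with a margin ranking loss over matched and mismatched pairs. The encoder shapes, the margin_ranking_loss helper, and all hyperparameters are illustrative assumptions, not the authors' exact model.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageBranch(nn.Module):
    """Maps a precomputed image feature vector into the shared embedding space."""
    def __init__(self, feat_dim=4096, embed_dim=1024):
        super().__init__()
        self.proj = nn.Linear(feat_dim, embed_dim)

    def forward(self, x):
        return F.normalize(self.proj(x), dim=-1)

class AudioBranch(nn.Module):
    """1-D convolutional encoder over caption spectrogram frames, pooled over time."""
    def __init__(self, n_mels=40, embed_dim=1024):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_mels, 256, kernel_size=11, padding=5), nn.ReLU(),
            nn.Conv1d(256, embed_dim, kernel_size=17, padding=8), nn.ReLU(),
        )

    def forward(self, spec):                      # spec: (batch, n_mels, frames)
        h = self.conv(spec)                       # (batch, embed_dim, frames)
        return F.normalize(h.mean(dim=-1), dim=-1)

def margin_ranking_loss(img_emb, aud_emb, margin=1.0):
    """Push matched image/caption pairs above mismatched ones by a fixed margin."""
    sim = img_emb @ aud_emb.t()                   # (batch, batch) similarity matrix
    pos = sim.diag().unsqueeze(1)                 # matched pairs lie on the diagonal
    loss_i = F.relu(margin + sim - pos)           # impostor captions for each image
    loss_a = F.relu(margin + sim - pos.t())       # impostor images for each caption
    mask = 1.0 - torch.eye(sim.size(0), device=sim.device)
    return ((loss_i + loss_a) * mask).sum() / mask.sum()

# Retrieval (image search and annotation) would rank candidates by the same
# dot-product similarity between image and audio embeddings.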
Date issued
2017
Department
Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory; Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Journal
Advances in Neural Information Processing Systems 29 (NIPS 2016)
Publisher
Neural Information Processing Systems Foundation, Inc.
Citation
Harwath, David et al. "Unsupervised Learning of Spoken Language with Visual Context." Advances in Neural Information Processing Systems 29 (NIPS 2016), December 2016, Barcelona, Spain, NIPS, 2017. © 2016 NIPS Foundation.
Version: Final published version
ISSN
1049-5258