DSpace@MIT

Learning Aligned Cross-Modal Representations from Weakly Aligned Data

Author(s)
Castrejon, Lluis; Pirsiavash, Hamed; Aytar, Yusuf; Vondrick, Carl Martin; Torralba, Antonio
Download
Torralba_Learning aligned.pdf (6.239 MB)

Open Access Policy

Terms of use
Creative Commons Attribution-Noncommercial-Share Alike http://creativecommons.org/licenses/by-nc-sa/4.0/
Abstract
People can recognize scenes across many different modalities beyond natural images. In this paper, we investigate how to learn cross-modal scene representations that transfer across modalities. To study this problem, we introduce a new cross-modal scene dataset. While convolutional neural networks can categorize cross-modal scenes well, they also learn an intermediate representation that is not aligned across modalities, which is undesirable for cross-modal transfer applications. We present methods to regularize cross-modal convolutional neural networks so that they have a shared representation that is agnostic of the modality. Our experiments suggest that our scene representation can help transfer representations across modalities for retrieval. Moreover, our visualizations suggest that units emerge in the shared representation that tend to activate on consistent concepts independently of the modality.
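To make the idea of a modality-agnostic shared representation concrete, the following is a minimal PyTorch-style sketch, not the authors' implementation: two modality-specific encoders feed one shared classifier head, and an illustrative alignment penalty pushes the batch statistics of the two modalities' features together. The architecture, the mean/std matching loss, and all names (CrossModalNet, alignment_loss, the 0.1 weight) are assumptions for illustration; the paper's actual regularizers may differ.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossModalNet(nn.Module):
    """Hypothetical two-modality network with a shared classification head."""
    def __init__(self, num_classes: int, feat_dim: int = 512):
        super().__init__()
        def encoder():
            # Modality-specific early layers (e.g., photos vs. sketches).
            return nn.Sequential(
                nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, feat_dim))
        self.enc_a, self.enc_b = encoder(), encoder()
        # Shared head: the same weights classify features from either modality.
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, x, modality: str):
        feat = self.enc_a(x) if modality == "a" else self.enc_b(x)
        return feat, self.head(feat)

def alignment_loss(feat_a, feat_b):
    # Illustrative regularizer: match per-dimension mean and std of the
    # shared representation across modalities, nudging it to be
    # modality-agnostic. A stand-in, not the paper's exact penalty.
    return (F.mse_loss(feat_a.mean(0), feat_b.mean(0))
            + F.mse_loss(feat_a.std(0), feat_b.std(0)))

# One training step on synthetic batches from two modalities.
model = CrossModalNet(num_classes=205)
xa, ya = torch.randn(8, 3, 64, 64), torch.randint(0, 205, (8,))
xb, yb = torch.randn(8, 3, 64, 64), torch.randint(0, 205, (8,))
fa, logits_a = model(xa, "a")
fb, logits_b = model(xb, "b")
loss = (F.cross_entropy(logits_a, ya) + F.cross_entropy(logits_b, yb)
        + 0.1 * alignment_loss(fa, fb))
loss.backward()

Because the head is shared and the features are regularized toward common statistics, a retrieval system can compare features from different modalities directly, which is the cross-modal transfer setting the abstract describes.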
Date issued
2016-12
URI
http://hdl.handle.net/1721.1/112989
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Journal
2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Publisher
Institute of Electrical and Electronics Engineers (IEEE)
Citation
Castrejon, Lluis, et al. "Learning Aligned Cross-Modal Representations from Weakly Aligned Data." 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 27-30 June 2016, Las Vegas, NV, IEEE, 2016, pp. 2940–49.
Version: Original manuscript
ISBN
978-1-4673-8851-1

Collections
  • MIT Open Access Articles
