Show simple item record

dc.contributor.author    Castrejon, Lluis
dc.contributor.author    Pirsiavash, Hamed
dc.contributor.author    Aytar, Yusuf
dc.contributor.author    Vondrick, Carl Martin
dc.contributor.author    Torralba, Antonio
dc.date.accessioned    2017-12-29T19:43:54Z
dc.date.available    2017-12-29T19:43:54Z
dc.date.issued    2016-12
dc.date.submitted    2016-06
dc.identifier.isbn    978-1-4673-8851-1
dc.identifier.uri    http://hdl.handle.net/1721.1/112989
dc.description.abstract    People can recognize scenes across many different modalities beyond natural images. In this paper, we investigate how to learn cross-modal scene representations that transfer across modalities. To study this problem, we introduce a new cross-modal scene dataset. While convolutional neural networks can categorize cross-modal scenes well, they also learn an intermediate representation not aligned across modalities, which is undesirable for cross-modal transfer applications. We present methods to regularize cross-modal convolutional neural networks so that they have a shared representation that is agnostic of the modality. Our experiments suggest that our scene representation can help transfer representations across modalities for retrieval. Moreover, our visualizations suggest that units emerge in the shared representation that tend to activate on consistent concepts independently of the modality.    en_US
dc.description.sponsorship    National Science Foundation (U.S.) (Grant IIS-1524817)    en_US
dc.description.sponsorship    Google (Firm) (Faculty Research Award)    en_US
dc.description.sponsorship    Google (Firm) (Ph.D. Fellowship)    en_US
dc.language.iso    en_US
dc.publisher    Institute of Electrical and Electronics Engineers (IEEE)    en_US
dc.relation.isversionof    http://dx.doi.org/10.1109/CVPR.2016.321    en_US
dc.rights    Creative Commons Attribution-Noncommercial-Share Alike    en_US
dc.rights.uri    http://creativecommons.org/licenses/by-nc-sa/4.0/    en_US
dc.source    arXiv    en_US
dc.title    Learning Aligned Cross-Modal Representations from Weakly Aligned Data    en_US
dc.type    Article    en_US
dc.identifier.citation    Castrejon, Lluis, et al. "Learning Aligned Cross-Modal Representations from Weakly Aligned Data." 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 27-30 June 2016, Las Vegas, NV, IEEE, 2016, pp. 2940–49.    en_US
dc.contributor.department    Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science    en_US
dc.contributor.mitauthor    Aytar, Yusuf
dc.contributor.mitauthor    Vondrick, Carl Martin
dc.contributor.mitauthor    Torralba, Antonio
dc.relation.journal    2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)    en_US
dc.eprint.version    Original manuscript    en_US
dc.type.uri    http://purl.org/eprint/type/ConferencePaper    en_US
eprint.status    http://purl.org/eprint/status/NonPeerReviewed    en_US
dspace.orderedauthors    Castrejon, Lluis; Aytar, Yusuf; Vondrick, Carl; Pirsiavash, Hamed; Torralba, Antonio    en_US
dspace.embargo.terms    N    en_US
dc.identifier.orcid    https://orcid.org/0000-0003-1631-4525
dc.identifier.orcid    https://orcid.org/0000-0001-5676-2387
dc.identifier.orcid    https://orcid.org/0000-0003-4915-0256
mit.license    OPEN_ACCESS_POLICY    en_US

