Show simple item record

dc.contributor.author: Yildirim, Ilker
dc.contributor.author: Jacobs, Robert A.
dc.date.accessioned: 2016-06-28T21:10:28Z
dc.date.available: 2016-06-28T21:10:28Z
dc.date.issued: 2014-10
dc.date.submitted: 2014-08
dc.identifier.issn: 1069-9384
dc.identifier.issn: 1531-5320
dc.identifier.uri: http://hdl.handle.net/1721.1/103377
dc.description.abstract: If a person is trained to recognize or categorize objects or events using one sensory modality, the person can often recognize or categorize those same (or similar) objects and events via a novel modality. This phenomenon is an instance of cross-modal transfer of knowledge. Here, we study the Multisensory Hypothesis which states that people extract the intrinsic, modality-independent properties of objects and events, and represent these properties in multisensory representations. These representations underlie cross-modal transfer of knowledge. We conducted an experiment evaluating whether people transfer sequence category knowledge across auditory and visual domains. Our experimental data clearly indicate that we do. We also developed a computational model accounting for our experimental results. Consistent with the probabilistic language of thought approach to cognitive modeling, our model formalizes multisensory representations as symbolic “computer programs” and uses Bayesian inference to learn these representations. Because the model demonstrates how the acquisition and use of amodal, multisensory representations can underlie cross-modal transfer of knowledge, and because the model accounts for subjects’ experimental performances, our work lends credence to the Multisensory Hypothesis. Overall, our work suggests that people automatically extract and represent objects’ and events’ intrinsic properties, and use these properties to process and understand the same (and similar) objects and events when they are perceived through novel sensory modalities.
dc.description.sponsorship: National Science Foundation (U.S.) (DRL-0817250)
dc.description.sponsorship: National Science Foundation (U.S.) (BCS-1400784)
dc.description.sponsorship: United States. Air Force Office of Scientific Research (FA9550-12-1-0303)
dc.publisher: Springer US
dc.relation.isversionof: http://dx.doi.org/10.3758/s13423-014-0734-y
dc.rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
dc.source: Springer US
dc.title: Learning multisensory representations for auditory-visual transfer of sequence category knowledge: a probabilistic language of thought approach
dc.type: Article
dc.identifier.citation: Yildirim, Ilker, and Robert A. Jacobs. “Learning Multisensory Representations for Auditory-Visual Transfer of Sequence Category Knowledge: a Probabilistic Language of Thought Approach.” Psychon Bull Rev 22, no. 3 (October 23, 2014): 673–686.
dc.contributor.department: Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences
dc.contributor.mitauthor: Yildirim, Ilker
dc.relation.journal: Psychonomic Bulletin & Review
dc.eprint.version: Author's final manuscript
dc.type.uri: http://purl.org/eprint/type/JournalArticle
eprint.status: http://purl.org/eprint/status/PeerReviewed
dc.date.updated: 2016-05-23T12:18:08Z
dc.language.rfc3066: en
dc.rights.holder: Psychonomic Society, Inc.
dspace.orderedauthors: Yildirim, Ilker; Jacobs, Robert A.
dspace.embargo.terms: N
dc.identifier.orcid: https://orcid.org/0000-0001-6262-399X
mit.license: PUBLISHER_POLICY

