Show simple item record

dc.contributor.author: Erdogan, Goker
dc.contributor.author: Yildirim, Ilker
dc.contributor.author: Jacobs, Robert A.
dc.date.accessioned: 2016-01-04T14:40:30Z
dc.date.available: 2016-01-04T14:40:30Z
dc.date.issued: 2015-11
dc.date.submitted: 2015-04
dc.identifier.issn: 1553-7358
dc.identifier.issn: 1553-734X
dc.identifier.uri: http://hdl.handle.net/1721.1/100572
dc.description.abstract: People learn modality-independent, conceptual representations from modality-specific sensory signals. Here, we hypothesize that any system that accomplishes this feat will include three components: a representational language for characterizing modality-independent representations, a set of sensory-specific forward models for mapping from modality-independent representations to sensory signals, and an inference algorithm for inverting forward models—that is, an algorithm for using sensory signals to infer modality-independent representations. To evaluate this hypothesis, we instantiate it in the form of a computational model that learns object shape representations from visual and/or haptic signals. The model uses a probabilistic grammar to characterize modality-independent representations of object shape, uses a computer graphics toolkit and a human hand simulator to map from object representations to visual and haptic features, respectively, and uses a Bayesian inference algorithm to infer modality-independent object representations from visual and/or haptic signals. Simulation results show that the model infers identical object representations when an object is viewed, grasped, or both. That is, the model’s percepts are modality invariant. We also report the results of an experiment in which different subjects rated the similarity of pairs of objects in different sensory conditions, and show that the model provides a very accurate account of subjects’ ratings. Conceptually, this research significantly contributes to our understanding of modality invariance, an important type of perceptual constancy, by demonstrating how modality-independent representations can be acquired and used. Methodologically, it provides an important contribution to cognitive modeling, particularly an emerging probabilistic language-of-thought approach, by showing how symbolic and statistical approaches can be combined in order to understand aspects of human perception.
dc.description.sponsorship: United States. Air Force Office of Scientific Research (Grant FA9550-12-1-0303)
dc.description.sponsorship: National Science Foundation (U.S.) (Grant BCS-1400784)
dc.language.iso: en_US
dc.publisher: Public Library of Science
dc.relation.isversionof: http://dx.doi.org/10.1371/journal.pcbi.1004610
dc.rights: Creative Commons Attribution
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.source: Public Library of Science
dc.title: From Sensory Signals to Modality-Independent Conceptual Representations: A Probabilistic Language of Thought Approach
dc.type: Article
dc.identifier.citation: Erdogan, Goker, Ilker Yildirim, and Robert A. Jacobs. “From Sensory Signals to Modality-Independent Conceptual Representations: A Probabilistic Language of Thought Approach.” Edited by Paul Schrater. PLoS Comput Biol 11, no. 11 (November 10, 2015): e1004610.
dc.contributor.department: Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences
dc.contributor.mitauthor: Yildirim, Ilker
dc.relation.journal: PLOS Computational Biology
dc.eprint.version: Final published version
dc.type.uri: http://purl.org/eprint/type/JournalArticle
eprint.status: http://purl.org/eprint/status/PeerReviewed
dspace.orderedauthors: Erdogan, Goker; Yildirim, Ilker; Jacobs, Robert A.
dc.identifier.orcid: https://orcid.org/0000-0001-6262-399X
mit.license: OPEN_ACCESS_POLICY
mit.metadata.status: Complete
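
The abstract above describes a three-component architecture: a representational language (a probabilistic grammar over object shapes), modality-specific forward models (a graphics toolkit for vision, a hand simulator for touch), and a Bayesian inference algorithm that inverts the forward models. The sketch below is a minimal, hypothetical illustration of that pattern, not the authors' implementation: the toy "grammar" (sample_shape), the stand-in forward models (render_visual, render_haptic), the noise levels, and the Metropolis-Hastings settings are all assumptions made for illustration.

    import math
    import random

    random.seed(0)

    # -- 1. Representational language: a toy "grammar" in which an object is a
    #       sequence of 1-4 parts, each with a size in [0.5, 2.0].
    def sample_shape():
        return [random.uniform(0.5, 2.0) for _ in range(random.randint(1, 4))]

    def log_prior(shape):
        if not 1 <= len(shape) <= 4:
            return -math.inf
        if any(not 0.5 <= s <= 2.0 for s in shape):
            return -math.inf
        return -math.log(4) - len(shape) * math.log(1.5)

    # -- 2. Modality-specific forward models mapping the modality-independent
    #       representation to "visual" and "haptic" features (toy stand-ins for
    #       the graphics toolkit and hand simulator mentioned in the abstract).
    def render_visual(shape):
        return [s for s in shape]            # e.g. projected part lengths

    def render_haptic(shape):
        return [2.0 * s for s in shape]      # e.g. felt girth of each part

    def log_likelihood(obs, pred, sigma=0.1):
        if obs is None:
            return 0.0
        if len(obs) != len(pred):
            return -math.inf
        return sum(-0.5 * ((o - p) / sigma) ** 2 for o, p in zip(obs, pred))

    def log_posterior(shape, vis_obs, hap_obs):
        return (log_prior(shape)
                + log_likelihood(vis_obs, render_visual(shape))
                + log_likelihood(hap_obs, render_haptic(shape)))

    # -- 3. Inference: random-walk Metropolis-Hastings over part sizes.  (The
    #       number of parts is read off the observation here; a fuller model
    #       would also search over the grammar's structural choices.)
    def infer(vis_obs=None, hap_obs=None, steps=20000):
        n = len(vis_obs) if vis_obs is not None else len(hap_obs)
        shape = [random.uniform(0.5, 2.0) for _ in range(n)]
        lp = log_posterior(shape, vis_obs, hap_obs)
        for _ in range(steps):
            cand = list(shape)
            i = random.randrange(n)
            cand[i] += random.gauss(0.0, 0.1)        # symmetric proposal
            cand_lp = log_posterior(cand, vis_obs, hap_obs)
            if math.log(random.random()) < cand_lp - lp:
                shape, lp = cand, cand_lp
        return shape

    # The same modality-independent representation should be recovered whether
    # the object is "viewed", "grasped", or both -- the modality invariance the
    # paper demonstrates with its (much richer) model.
    true_shape = [1.2, 0.8, 1.6]
    vis = [v + random.gauss(0, 0.05) for v in render_visual(true_shape)]
    hap = [h + random.gauss(0, 0.05) for h in render_haptic(true_shape)]
    for label, v, h in [("vision only ", vis, None),
                        ("haptics only", None, hap),
                        ("vision+touch", vis, hap)]:
        print(label, [round(s, 2) for s in infer(v, h)])

Under these toy assumptions, all three sensory conditions recover approximately the same part sizes, which is the point of the demonstration; the paper's actual model operates over far richer shape grammars and sensory features.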

