
dc.contributor.author: Majaj, Najib J.
dc.contributor.author: Hong, Ha
dc.contributor.author: Solomon, Ethan A.
dc.contributor.author: DiCarlo, James
dc.date.accessioned: 2016-04-04T17:48:47Z
dc.date.available: 2016-04-04T17:48:47Z
dc.date.issued: 2015-09
dc.date.submitted: 2015-07
dc.identifier.issn: 0270-6474
dc.identifier.issn: 1529-2401
dc.identifier.uri: http://hdl.handle.net/1721.1/102140
dc.description.abstract: To go beyond qualitative models of the biological substrate of object recognition, we ask: can a single ventral stream neuronal linking hypothesis quantitatively account for core object recognition performance over a broad range of tasks? We measured human performance in 64 object recognition tests using thousands of challenging images that explore shape similarity and identity preserving object variation. We then used multielectrode arrays to measure neuronal population responses to those same images in visual areas V4 and inferior temporal (IT) cortex of monkeys and simulated V1 population responses. We tested leading candidate linking hypotheses and control hypotheses, each postulating how ventral stream neuronal responses underlie object recognition behavior. Specifically, for each hypothesis, we computed the predicted performance on the 64 tests and compared it with the measured pattern of human performance. All tested hypotheses based on low- and mid-level visually evoked activity (pixels, V1, and V4) were very poor predictors of the human behavioral pattern. However, simple learned weighted sums of distributed average IT firing rates exactly predicted the behavioral pattern. More elaborate linking hypotheses relying on IT trial-by-trial correlational structure, finer IT temporal codes, or ones that strictly respect the known spatial substructures of IT (“face patches”) did not improve predictive power. Although these results do not reject those more elaborate hypotheses, they suggest a simple, sufficient quantitative model: each object recognition task is learned from the spatially distributed mean firing rates (100 ms) of ∼60,000 IT neurons and is executed as a simple weighted sum of those firing rates.
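The "learned weighted sum" linking hypothesis described in the abstract can be sketched numerically. The toy example below is an illustration only, not the authors' code or data: IT mean firing rates are simulated, the population is far smaller than the ∼60,000 sites in the paper, and the readout is a regularized least-squares linear classifier standing in for the learned weighted sum.

```python
import numpy as np

# Hypothetical sketch of a "learned weighted sum" readout of IT firing rates.
# All rates here are simulated; nothing below is recorded neuronal data.
rng = np.random.default_rng(0)

n_neurons = 200          # stand-in for the paper's ~60,000 IT sites
n_train, n_test = 500, 200

# A small subset of neurons carries a class-dependent shift in mean rate.
signal = np.zeros(n_neurons)
signal[rng.choice(n_neurons, 40, replace=False)] = 0.5

def simulate(n):
    labels = rng.integers(0, 2, n)                  # object identity (0 or 1)
    rates = rng.normal(5.0, 1.0, (n, n_neurons))    # baseline mean firing rates
    rates += np.outer(2 * labels - 1, signal)       # class-dependent rate shift
    return rates, labels

X_train, y_train = simulate(n_train)
X_test, y_test = simulate(n_test)

# Learn the weights per task by regularized least squares (bias term appended).
Xb = np.hstack([X_train, np.ones((n_train, 1))])
w = np.linalg.solve(Xb.T @ Xb + 1e-3 * np.eye(n_neurons + 1),
                    Xb.T @ (2 * y_train - 1))

# The decision is simply the sign of the weighted sum of firing rates.
scores = np.hstack([X_test, np.ones((n_test, 1))]) @ w
accuracy = np.mean((scores > 0) == (y_test == 1))
print(f"simulated weighted-sum readout accuracy: {accuracy:.2f}")
```

With a class-dependent rate shift spread over 40 neurons, the linear readout separates the two simulated objects reliably, which is the sense in which a distributed weighted sum can decode identity even though no single neuron is very informative.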
dc.description.sponsorship: United States. Defense Advanced Research Projects Agency (DARPA Neovision2)
dc.description.sponsorship: National Institutes of Health (U.S.) (Grant NEI-R01 EY014970)
dc.description.sponsorship: National Science Foundation (U.S.) (Grant IIS-0964269)
dc.description.sponsorship: Samsung (Firm) (Fellowship)
dc.language.iso: en_US
dc.publisher: Society for Neuroscience
dc.relation.isversionof: http://dx.doi.org/10.1523/jneurosci.5181-14.2015
dc.rights: Creative Commons Attribution
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.source: Society for Neuroscience
dc.title: Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance
dc.type: Article
dc.identifier.citation: Majaj, N. J., H. Hong, E. A. Solomon, and J. J. DiCarlo. “Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance.” Journal of Neuroscience 35, no. 39 (September 30, 2015): 13402–13418.
dc.contributor.department: Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences
dc.contributor.department: McGovern Institute for Brain Research at MIT
dc.contributor.mitauthor: Majaj, Najib J.
dc.contributor.mitauthor: Hong, Ha
dc.contributor.mitauthor: Solomon, Ethan A.
dc.contributor.mitauthor: DiCarlo, James
dc.relation.journal: Journal of Neuroscience
dc.eprint.version: Final published version
dc.type.uri: http://purl.org/eprint/type/JournalArticle
eprint.status: http://purl.org/eprint/status/PeerReviewed
dspace.orderedauthors: Majaj, N. J.; Hong, H.; Solomon, E. A.; DiCarlo, J. J.
dc.identifier.orcid: https://orcid.org/0000-0001-9910-5627
dc.identifier.orcid: https://orcid.org/0000-0002-1592-5896
mit.license: PUBLISHER_CC
mit.metadata.status: Complete

