Show simple item record

dc.contributor.author	Jozwik, Kamila Maria
dc.contributor.author	Lee, Hyo-Dong
dc.contributor.author	Kanwisher, Nancy
dc.contributor.author	DiCarlo, James
dc.date.accessioned	2021-04-01T15:13:07Z
dc.date.available	2021-04-01T15:13:07Z
dc.date.issued	2019-12
dc.date.submitted	2019-09
dc.identifier.uri	https://hdl.handle.net/1721.1/130332
dc.description.abstract	Neural computations along the ventral visual stream, which culminates in the inferior temporal (IT) cortex, enable humans and monkeys to recognize objects quickly. Primate IT is organized topographically: nearby neurons have similar response properties. Yet the best models of the ventral visual stream, deep artificial neural networks (ANNs), have "IT" layers that lack topography. We built Topographic Deep ANNs (TDANNs) by incorporating a proxy wiring cost alongside the standard ImageNet categorization cost in the two "IT-like" layers of AlexNet (Lee et al., 2018), specifying that "neurons" with similar response properties should be physically close to each other. This cost both induced topographic structure and altered the tuning characteristics of model IT neurons. We presented 2560 naturalistic images to monkeys and to ANNs. We found that, relative to the base (non-topographic) model, the "neurons" in the "IT" layer of some TDANN models matched actual IT neurons slightly better, and the dimensionality of the TDANN "IT" neural population was much closer to that of the measured monkey IT neural population. While TDANNs did not show a statistically significantly better match to human object discrimination behavior, detailed analysis suggests a trend in that direction. Taken together, TDANNs may better capture properties of IT cortex, and wiring costs may be the cause of topographic organization in primate IT.	en_US
dc.description.sponsorship	Wellcome Trust (Award 206521/Z/17/Z)	en_US
dc.description.sponsorship	National Institutes of Health (Grant DP1HD091947)	en_US
dc.description.sponsorship	Simons Foundation (Grants SCGB 325500, 542965)	en_US
dc.description.sponsorship	NSF (Award CCF-1231216)	en_US
dc.language.iso	en
dc.publisher	Cognitive Computational Neuroscience	en_US
dc.relation.isversionof	http://dx.doi.org/10.32470/ccn.2019.1019-0	en_US
dc.rights	Creative Commons Attribution 3.0 Unported license	en_US
dc.rights.uri	https://creativecommons.org/licenses/by/3.0/	en_US
dc.source	Cognitive Computational Neuroscience	en_US
dc.title	Are Topographic Deep Convolutional Neural Networks Better Models of the Ventral Visual Stream?	en_US
dc.type	Article	en_US
dc.identifier.citation	Jozwik, Kamila Maria et al. "Are Topographic Deep Convolutional Neural Networks Better Models of the Ventral Visual Stream?" 2019 Conference on Cognitive Computational Neuroscience, September 2019, Berlin, Germany, Cognitive Computational Neuroscience, December 2019.	en_US
dc.contributor.department	McGovern Institute for Brain Research at MIT	en_US
dc.contributor.department	Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences	en_US
dc.contributor.department	Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science	en_US
dc.relation.journal	2019 Conference on Cognitive Computational Neuroscience	en_US
dc.eprint.version	Final published version	en_US
dc.type.uri	http://purl.org/eprint/type/ConferencePaper	en_US
eprint.status	http://purl.org/eprint/status/NonPeerReviewed	en_US
dc.date.updated	2021-03-29T16:15:20Z
dspace.orderedauthors	Jozwik, KM; Lee, H; Kanwisher, N; DiCarlo, J	en_US
dspace.date.submission	2021-03-29T16:15:21Z
mit.license	PUBLISHER_CC
mit.metadata.status	Complete

