Notice

This is not the latest version of this item. The latest version can be found at: https://dspace.mit.edu/handle/1721.1/133685.2

dc.contributor.author	Zhuang, Chengxu
dc.contributor.author	Yan, Siming
dc.contributor.author	Nayebi, Aran
dc.contributor.author	Schrimpf, Martin
dc.contributor.author	Frank, Michael C
dc.contributor.author	DiCarlo, James J
dc.contributor.author	Yamins, Daniel LK
dc.date.accessioned	2021-10-27T19:54:08Z
dc.date.available	2021-10-27T19:54:08Z
dc.date.issued	2021
dc.identifier.uri	https://hdl.handle.net/1721.1/133685
dc.description.abstract	© 2021 National Academy of Sciences. All rights reserved. Deep neural networks currently provide the best quantitative models of the response patterns of neurons throughout the primate ventral visual stream. However, such networks have remained implausible as a model of the development of the ventral stream, in part because they are trained with supervised methods requiring many more labels than are accessible to infants during development. Here, we report that recent rapid progress in unsupervised learning has largely closed this gap. We find that neural network models learned with deep unsupervised contrastive embedding methods achieve neural prediction accuracy in multiple ventral visual cortical areas that equals or exceeds that of models derived using today’s best supervised methods and that the mapping of these neural network models’ hidden layers is neuroanatomically consistent across the ventral stream. Strikingly, we find that these methods produce brain-like representations even when trained solely with real human child developmental data collected from head-mounted cameras, despite the fact that these datasets are noisy and limited. We also find that semisupervised deep contrastive embeddings can leverage small numbers of labeled examples to produce representations with substantially improved error-pattern consistency to human behavior. Taken together, these results illustrate a use of unsupervised learning to provide a quantitative model of a multiarea cortical brain system and present a strong candidate for a biologically plausible computational theory of primate sensory learning.
dc.language.iso	en
dc.publisher	Proceedings of the National Academy of Sciences
dc.relation.isversionof	10.1073/pnas.2014196118
dc.rights	Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
dc.source	PNAS
dc.title	Unsupervised neural network models of the ventral visual stream
dc.type	Article
dc.relation.journal	Proceedings of the National Academy of Sciences of the United States of America
dc.eprint.version	Final published version
dc.type.uri	http://purl.org/eprint/type/JournalArticle
eprint.status	http://purl.org/eprint/status/PeerReviewed
dc.date.updated	2021-03-16T12:03:40Z
dspace.orderedauthors	Zhuang, C; Yan, S; Nayebi, A; Schrimpf, M; Frank, MC; DiCarlo, JJ; Yamins, DLK
dspace.date.submission	2021-03-16T12:03:42Z
mit.journal.volume	118
mit.journal.issue	3
mit.license	PUBLISHER_POLICY
mit.metadata.status	Authority Work and Publication Information Needed
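
The abstract refers to deep unsupervised contrastive embedding methods. As an illustration only, the following is a minimal sketch of one common loss in that family (an InfoNCE-style objective of the kind used by SimCLR-like methods), written in PyTorch. It is not the authors' implementation; the function name, batch size, embedding dimension, and temperature below are illustrative assumptions.

    # Minimal sketch of an InfoNCE-style contrastive embedding loss
    # (illustrative only; not the implementation evaluated in the paper).
    import torch
    import torch.nn.functional as F

    def info_nce_loss(z1, z2, temperature=0.1):
        """Contrastive loss over a batch of paired embeddings.

        z1, z2: (batch, dim) embeddings of two augmented views of the
        same images. Each (z1[i], z2[i]) pair is a positive; every other
        pairing in the batch serves as a negative.
        """
        z1 = F.normalize(z1, dim=1)  # project embeddings onto the unit sphere
        z2 = F.normalize(z2, dim=1)
        logits = z1 @ z2.t() / temperature   # (batch, batch) similarity matrix
        targets = torch.arange(z1.size(0))   # positives lie on the diagonal
        # Symmetrized cross-entropy: each view must identify its partner.
        return 0.5 * (F.cross_entropy(logits, targets)
                      + F.cross_entropy(logits.t(), targets))

    # Usage sketch with random stand-in embeddings (hypothetical shapes):
    z1, z2 = torch.randn(32, 128), torch.randn(32, 128)
    print(info_nce_loss(z1, z2))

The key design property shared by losses of this kind is that no category labels are needed: the training signal comes entirely from matching two views of the same image against the other images in the batch, which is why the abstract describes them as a developmentally plausible alternative to supervised training.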

