Show simple item record

dc.contributor.author: Kubilius, Jonas
dc.contributor.author: Kar, Kohitij
dc.contributor.author: DiCarlo, James
dc.date.accessioned: 2020-08-20T13:02:13Z
dc.date.available: 2020-08-20T13:02:13Z
dc.date.issued: 2018-12
dc.identifier.uri: https://hdl.handle.net/1721.1/126698
dc.description.abstract: Feed-forward convolutional neural networks (CNNs) are currently state-of-the-art for object classification tasks such as ImageNet. Further, they are quantitatively accurate models of temporally-averaged responses of neurons in the primate brain's visual system. However, biological visual systems have two ubiquitous architectural features not shared with typical CNNs: local recurrence within cortical areas, and long-range feedback from downstream areas to upstream areas. Here we explored the role of recurrence in improving classification performance. We found that standard forms of recurrence (vanilla RNNs and LSTMs) do not perform well within deep CNNs on the ImageNet task. In contrast, novel cells that incorporated two structural features, bypassing and gating, were able to boost task accuracy substantially. We extended these design principles in an automated search over thousands of model architectures, which identified novel local recurrent cells and long-range feedback connections useful for object recognition. Moreover, these task-optimized ConvRNNs matched the dynamics of neural activity in the primate visual system better than feedforward networks, suggesting a role for the brain's recurrent connections in performing difficult visual behaviors. [en_US]
dc.description.sponsorship: Simons Foundation (Grant 325500/542965) [en_US]
dc.description.sponsorship: European Union. Horizon 2020 Research and Innovation Programme (Grant 705498) [en_US]
dc.description.sponsorship: National Institutes of Health (U.S.). National Eye Institute (Grant R01-EY014970) [en_US]
dc.description.sponsorship: United States. Office of Naval Research. Multidisciplinary University Research Initiative (Grant MURI-114407) [en_US]
dc.language.iso: en
dc.publisher: IEEE [en_US]
dc.relation.isversionof: http://papers.neurips.cc/paper/7775-task-driven-convolutional-recurrent-models-of-the-visual-system [en_US]
dc.rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use. [en_US]
dc.source: Neural Information Processing Systems (NIPS) [en_US]
dc.title: Task-driven convolutional recurrent models of the visual system [en_US]
dc.type: Article [en_US]
dc.identifier.citation: Nayebi, Aran et al. “Task-driven convolutional recurrent models of the visual system.” NIPS'18: Proceedings of the 32nd International Conference on Neural Information Processing Systems, vol. 2018, 2018, pp. 5295–5306. © 2018 The Author(s) [en_US]
dc.contributor.department: McGovern Institute for Brain Research at MIT [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences [en_US]
dc.relation.journal: NIPS'18: Proceedings of the 32nd International Conference on Neural Information Processing Systems [en_US]
dc.eprint.version: Final published version [en_US]
dc.type.uri: http://purl.org/eprint/type/ConferencePaper [en_US]
eprint.status: http://purl.org/eprint/status/NonPeerReviewed [en_US]
dc.date.updated: 2019-09-30T17:20:02Z
dspace.date.submission: 2019-09-30T17:20:05Z
mit.journal.volume: 2018 [en_US]
mit.metadata.status: Complete
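The abstract attributes the accuracy gains to two structural features of the recurrent cells, bypassing and gating. As a rough illustration only, the sketch below shows a toy fully-connected (not convolutional) cell combining a gated state update with a bypass connection that adds the feedforward input directly to the output. All names, weight shapes, and the exact update rule here are assumptions for illustration, not the paper's ConvRNN cells.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class GatedBypassCell:
    """Toy recurrent cell illustrating two motifs from the abstract:
    - gating: a learned gate mixes the previous state with a candidate state,
    - bypassing: the feedforward input is added to the output, skipping
      the recurrent transformation.
    Hypothetical simplification; the paper's cells are convolutional."""

    def __init__(self, dim, seed=0):
        rng = np.random.default_rng(seed)
        scale = 1.0 / np.sqrt(dim)
        self.W_x = rng.normal(0.0, scale, (dim, dim))  # input weights
        self.W_h = rng.normal(0.0, scale, (dim, dim))  # recurrent weights
        self.W_g = rng.normal(0.0, scale, (dim, dim))  # gate weights

    def step(self, x, h):
        g = sigmoid(self.W_g @ x)                      # gate from current input
        cand = np.tanh(self.W_x @ x + self.W_h @ h)    # candidate state
        h_new = g * h + (1.0 - g) * cand               # gated update
        return h_new + x                               # bypass connection

# Unroll the cell for a few timesteps on a constant input.
dim = 8
cell = GatedBypassCell(dim)
h = np.zeros(dim)
x = np.ones(dim)
for t in range(4):
    h = cell.step(x, h)
print(h.shape)  # (8,)
```

In a convolutional version, the matrix products would become convolutions over feature maps, and the gate would let each unit interpolate between keeping its previous state and adopting new evidence, while the bypass keeps a direct feedforward path intact.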

