Show simple item record

dc.contributor.author: Schrimpf, Martin
dc.contributor.author: Blank, Idan Asher
dc.contributor.author: Tuckute, Greta
dc.contributor.author: Kauf, Carina
dc.contributor.author: Hosseini, Eghbal A
dc.contributor.author: Kanwisher, Nancy
dc.contributor.author: Tenenbaum, Joshua B
dc.contributor.author: Fedorenko, Evelina
dc.date.accessioned: 2021-11-23T17:26:58Z
dc.date.available: 2021-11-23T17:26:58Z
dc.date.issued: 2021-11-09
dc.identifier.uri: https://hdl.handle.net/1721.1/138214
dc.description.abstract: The neuroscience of perception has recently been revolutionized with an integrative modeling approach in which computation, brain function, and behavior are linked across many datasets and many computational models. By revealing trends across models, this approach yields novel insights into cognitive and neural mechanisms in the target domain. We here present a systematic study taking this approach to higher-level cognition: human language processing, our species’ signature cognitive skill. We find that the most powerful “transformer” models predict nearly 100% of explainable variance in neural responses to sentences and generalize across different datasets and imaging modalities (functional MRI and electrocorticography). Models’ neural fits (“brain score”) and fits to behavioral responses are both strongly correlated with model accuracy on the next-word prediction task (but not other language tasks). Model architecture appears to substantially contribute to neural fit. These results provide computationally explicit evidence that predictive processing fundamentally shapes the language comprehension mechanisms in the human brain.
dc.language.iso: en
dc.publisher: Proceedings of the National Academy of Sciences
dc.relation.isversionof: 10.1073/pnas.2105646118
dc.rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
dc.source: PNAS
dc.title: The neural architecture of language: Integrative modeling converges on predictive processing
dc.type: Article
dc.identifier.citation: Schrimpf, Martin, Blank, Idan Asher, Tuckute, Greta, Kauf, Carina, Hosseini, Eghbal A et al. 2021. "The neural architecture of language: Integrative modeling converges on predictive processing." Proceedings of the National Academy of Sciences, 118 (45).
dc.contributor.department: Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences
dc.contributor.department: McGovern Institute for Brain Research at MIT
dc.contributor.department: Center for Brains, Minds, and Machines
dc.relation.journal: Proceedings of the National Academy of Sciences
dc.eprint.version: Final published version
dc.type.uri: http://purl.org/eprint/type/JournalArticle
eprint.status: http://purl.org/eprint/status/PeerReviewed
dc.date.updated: 2021-11-23T17:22:16Z
dspace.orderedauthors: Schrimpf, M; Blank, IA; Tuckute, G; Kauf, C; Hosseini, EA; Kanwisher, N; Tenenbaum, JB; Fedorenko, E
dspace.date.submission: 2021-11-23T17:22:18Z
mit.journal.volume: 118
mit.journal.issue: 45
mit.license: PUBLISHER_POLICY
mit.metadata.status: Authority Work and Publication Information Needed


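Note on the "brain score" mentioned in the abstract: it measures how well a model's internal representations predict measured neural responses to the same sentences. The sketch below is an illustrative assumption only, not the authors' released pipeline; the array names model_activations (sentences × model features) and neural_responses (sentences × recording sites) are hypothetical, and a cross-validated ridge regression scored by Pearson correlation is one common way such a score is computed.

import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

def brain_score(model_activations, neural_responses, n_splits=5):
    # model_activations: (n_sentences, n_features) hidden states from a language model
    # neural_responses:  (n_sentences, n_sites) fMRI voxels or ECoG electrodes
    fold_scores = []
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train, test in kf.split(model_activations):
        # fit a regularized linear map from model features to neural responses
        reg = RidgeCV(alphas=np.logspace(-3, 3, 7))
        reg.fit(model_activations[train], neural_responses[train])
        pred = reg.predict(model_activations[test])
        # score each recording site by Pearson r between predicted and measured
        # responses on held-out sentences, then average across sites
        site_r = [pearsonr(pred[:, i], neural_responses[test, i])[0]
                  for i in range(neural_responses.shape[1])]
        fold_scores.append(np.nanmean(site_r))
    return float(np.mean(fold_scores))

Across a set of models, such brain scores could then be correlated with each model's next-word-prediction accuracy to probe the relationship reported in the abstract.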