Computational models of category-selective brain regions enable high-throughput tests of selectivity
Author(s)
Ratan Murty, N Apurva; Bashivan, Pouya; Abate, Alex; DiCarlo, James J; Kanwisher, Nancy
Published version (6.474Mb)
Publisher with Creative Commons License
Creative Commons Attribution
Terms of use
Metadata
Abstract
Cortical regions apparently selective to faces, places, and bodies have provided important evidence for domain-specific theories of human cognition, development, and evolution. But claims of category selectivity are not quantitatively precise and remain vulnerable to empirical refutation. Here we develop artificial neural network-based encoding models that accurately predict the response to novel images in the fusiform face area, parahippocampal place area, and extrastriate body area, outperforming descriptive models and experts. We use these models to subject claims of category selectivity to strong tests, by screening for and synthesizing images predicted to produce high responses. We find that these high-response-predicted images are all unambiguous members of the hypothesized preferred category for each region. These results provide accurate, image-computable encoding models of each category-selective region, strengthen evidence for domain specificity in the brain, and point the way for future research characterizing the functional organization of the brain with unprecedented computational precision.
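The abstract describes fitting image-computable encoding models (ANN features mapped to ROI responses) and then screening candidate images by predicted response. As a rough illustration only, and not the authors' actual pipeline, the sketch below simulates that two-stage idea with random vectors standing in for ANN layer features and a ridge-regression readout standing in for the learned response mapping; all sizes, the noise level, and the regularization strength are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each image is represented by a feature vector from
# a pretrained ANN layer (simulated here as random vectors), and the ROI
# response is modeled as a noisy linear readout of those features.
n_train, n_test, n_features = 1000, 50, 64
W_true = rng.normal(size=n_features)                 # unknown "true" mapping (simulation only)
X_train = rng.normal(size=(n_train, n_features))     # training-image features
y_train = X_train @ W_true + rng.normal(scale=0.5, size=n_train)

# Stage 1: fit the encoding model with closed-form ridge regression.
lam = 1.0                                            # arbitrary regularization strength
W_hat = np.linalg.solve(
    X_train.T @ X_train + lam * np.eye(n_features),
    X_train.T @ y_train,
)

# Stage 2: screen novel images by ranking predicted ROI responses;
# the top-ranked images are the model's predicted strongest drivers.
X_test = rng.normal(size=(n_test, n_features))
pred = X_test @ W_hat
top_images = np.argsort(pred)[::-1][:5]              # indices of predicted top drivers
print(top_images)
```

In the paper's setting, one would then inspect whether the top-ranked (or synthesized) images are unambiguous members of the region's hypothesized preferred category; here the ranking is over simulated features, so it only illustrates the mechanics.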
Date issued
2021-12
Department
Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences; McGovern Institute for Brain Research at MIT; Center for Brains, Minds, and Machines
Journal
Nature Communications
Publisher
Springer Science and Business Media LLC
Citation
Ratan Murty, N Apurva, Bashivan, Pouya, Abate, Alex, DiCarlo, James J and Kanwisher, Nancy. 2021. "Computational models of category-selective brain regions enable high-throughput tests of selectivity." Nature Communications, 12 (1).
Version: Final published version