Notice

This is not the latest version of this item. The latest version can be found at: https://dspace.mit.edu/handle/1721.1/138360.2

dc.contributor.author: Yildirim, Ilker
dc.contributor.author: Belledonne, Mario
dc.contributor.author: Freiwald, Winrich
dc.contributor.author: Tenenbaum, Josh
dc.date.accessioned: 2021-12-07T19:17:09Z
dc.date.available: 2021-12-07T19:17:09Z
dc.date.issued: 2020
dc.identifier.uri: https://hdl.handle.net/1721.1/138360
dc.description.abstract: © 2020 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works. Distributed under a Creative Commons Attribution NonCommercial License 4.0 (CC BY-NC). Vision not only detects and recognizes objects, but performs rich inferences about the underlying scene structure that causes the patterns of light we see. Inverting generative models, or “analysis-by-synthesis”, presents a possible solution, but its mechanistic implementations have typically been too slow for online perception, and their mapping to neural circuits remains unclear. Here we present a neurally plausible efficient inverse graphics model and test it in the domain of face recognition. The model is based on a deep neural network that learns to invert a three-dimensional face graphics program in a single fast feedforward pass. It explains human behavior qualitatively and quantitatively, including the classic “hollow face” illusion, and it maps directly onto a specialized face-processing circuit in the primate brain. The model fits both behavioral and neural data better than state-of-the-art computer vision models, and suggests an interpretable reverse-engineering account of how the brain transforms images into percepts. [en_US]
dc.language.iso: en
dc.publisher: American Association for the Advancement of Science (AAAS) [en_US]
dc.relation.isversionof: 10.1126/SCIADV.AAX5979 [en_US]
dc.rights: Creative Commons Attribution NonCommercial License 4.0 [en_US]
dc.rights.uri: https://creativecommons.org/licenses/by-nc/4.0/ [en_US]
dc.source: Science Advances [en_US]
dc.title: Efficient inverse graphics in biological face processing [en_US]
dc.type: Article [en_US]
dc.identifier.citation: Yildirim, Ilker, Belledonne, Mario, Freiwald, Winrich and Tenenbaum, Josh. 2020. "Efficient inverse graphics in biological face processing." Science Advances, 6 (10).
dc.relation.journal: Science Advances [en_US]
dc.eprint.version: Final published version [en_US]
dc.type.uri: http://purl.org/eprint/type/JournalArticle [en_US]
eprint.status: http://purl.org/eprint/status/PeerReviewed [en_US]
dc.date.updated: 2021-12-07T19:13:37Z
dspace.orderedauthors: Yildirim, I; Belledonne, M; Freiwald, W; Tenenbaum, J [en_US]
dspace.date.submission: 2021-12-07T19:13:39Z
mit.journal.volume: 6 [en_US]
mit.journal.issue: 10 [en_US]
mit.license: PUBLISHER_CC
mit.metadata.status: Authority Work and Publication Information Needed [en_US]
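
The abstract above describes the model only at a high level: a deep network is trained to invert a three-dimensional face graphics program, recovering scene latents (face shape, texture, and pose) from an image in a single fast feedforward pass. The sketch below is an illustrative reconstruction of that idea in Python/PyTorch, not the authors' code: the layer sizes, the latent dimensions (N_SHAPE, N_TEXTURE, N_POSE), and the render_fn graphics program are assumptions introduced here for illustration.

# Minimal sketch (assumed architecture, not the published model): a feedforward
# encoder that maps a face image to the latent parameters of a 3D face
# graphics program, trained on images rendered from sampled latents.
import torch
import torch.nn as nn

# Assumed latent dimensions for shape, texture, and pose coefficients.
N_SHAPE, N_TEXTURE, N_POSE = 200, 200, 6

class InverseGraphicsEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutional feature extractor followed by a linear readout of latents.
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),
        )
        self.head = nn.Linear(128 * 4 * 4, N_SHAPE + N_TEXTURE + N_POSE)

    def forward(self, image):
        # One feedforward pass: image -> (shape, texture, pose) latents.
        latents = self.head(self.features(image))
        return torch.split(latents, [N_SHAPE, N_TEXTURE, N_POSE], dim=1)

def training_step(encoder, render_fn, optimizer, batch_size=32):
    # Sample scene latents, render them with a (hypothetical) graphics
    # program render_fn, and regress the latents back from the images.
    true_latents = torch.randn(batch_size, N_SHAPE + N_TEXTURE + N_POSE)
    images = render_fn(true_latents)
    pred = torch.cat(encoder(images), dim=1)
    loss = nn.functional.mse_loss(pred, true_latents)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

Because the training data come from the generative graphics program itself, recovering a percept at test time needs only a single forward pass, in contrast to the slower iterative search usually associated with analysis-by-synthesis.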


