Show simple item record

dc.contributor.author    Cichy, Radoslaw
dc.contributor.author    Khosla, Aditya
dc.contributor.author    Pantazis, Dimitrios
dc.contributor.author    Torralba, Antonio
dc.contributor.author    Oliva, Aude
dc.date.accessioned    2016-07-13T15:28:55Z
dc.date.available    2016-07-13T15:28:55Z
dc.date.issued    2016-06
dc.date.submitted    2016-01
dc.identifier.issn    2045-2322
dc.identifier.uri    http://hdl.handle.net/1721.1/103585
dc.description.abstract    The complex multi-stage architecture of cortical visual pathways provides the neural basis for efficient visual object recognition in humans. However, the stage-wise computations therein remain poorly understood. Here, we compared temporal (magnetoencephalography) and spatial (functional MRI) visual brain representations with representations in an artificial deep neural network (DNN) tuned to the statistics of real-world visual recognition. We showed that the DNN captured the stages of human visual processing in both time and space from early visual areas towards the dorsal and ventral streams. Further investigation of crucial DNN parameters revealed that while model architecture was important, training on real-world categorization was necessary to enforce spatio-temporal hierarchical relationships with the brain. Together our results provide an algorithmically informed view on the spatio-temporal dynamics of visual object recognition in the human visual brain.    en_US
dc.description.sponsorship    National Eye Institute (EY020484)    en_US
dc.description.sponsorship    Google (Firm) (Google Research Faculty Award)    en_US
dc.description.sponsorship    Alexander von Humboldt-Stiftung (Feodor Lynen Postdoctoral Fellowship)    en_US
dc.description.sponsorship    Deutsche Forschungsgemeinschaft (Emmy Noether Program, CI 241/1-1)    en_US
dc.description.sponsorship    McGovern Institute Neurotechnology (MINT) program    en_US
dc.description.sponsorship    National Science Foundation (U.S.) (NSF Award 1532591)    en_US
dc.language.iso    en_US
dc.publisher    Springer Nature    en_US
dc.relation.isversionof    http://dx.doi.org/10.1038/srep27755    en_US
dc.rights    Creative Commons Attribution 4.0 International License    en_US
dc.rights.uri    http://creativecommons.org/licenses/by/4.0/    en_US
dc.source    Scientific Reports    en_US
dc.title    Comparison of deep neural networks to spatio-temporal cortical dynamics of human visual object recognition reveals hierarchical correspondence    en_US
dc.type    Article    en_US
dc.identifier.citation    Cichy, Radoslaw Martin, Aditya Khosla, Dimitrios Pantazis, Antonio Torralba, and Aude Oliva. "Comparison of deep neural networks to spatio-temporal cortical dynamics of human visual object recognition reveals hierarchical correspondence." Scientific Reports 6, Article number: 27755 (2016), pp. 1-12.    en_US
dc.contributor.department    Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory    en_US
dc.contributor.department    Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science    en_US
dc.contributor.department    McGovern Institute for Brain Research at MIT    en_US
dc.contributor.mitauthor    Cichy, Radoslaw    en_US
dc.contributor.mitauthor    Khosla, Aditya    en_US
dc.contributor.mitauthor    Pantazis, Dimitrios    en_US
dc.contributor.mitauthor    Torralba, Antonio    en_US
dc.contributor.mitauthor    Oliva, Aude    en_US
dc.relation.journal    Scientific Reports    en_US
dc.eprint.version    Final published version    en_US
dc.type.uri    http://purl.org/eprint/type/JournalArticle    en_US
eprint.status    http://purl.org/eprint/status/PeerReviewed    en_US
dspace.orderedauthors    Cichy, Radoslaw Martin; Khosla, Aditya; Pantazis, Dimitrios; Torralba, Antonio; Oliva, Aude    en_US
dspace.embargo.terms    N    en_US
dc.identifier.orcid    https://orcid.org/0000-0002-0007-3352
dc.identifier.orcid    https://orcid.org/0000-0003-4915-0256
mit.license    PUBLISHER_CC    en_US

