Show simple item record

dc.contributor.author: Isik, Leyla
dc.contributor.author: Tacchetti, Andrea
dc.contributor.author: Poggio, Tomaso A
dc.date.accessioned: 2018-01-19T15:06:33Z
dc.date.available: 2018-01-19T15:06:33Z
dc.date.issued: 2017-10
dc.date.submitted: 2017-04
dc.identifier.issn: 1553-7358
dc.identifier.issn: 1553-734X
dc.identifier.uri: http://hdl.handle.net/1721.1/113231
dc.description.abstract: Recognizing the actions of others from visual stimuli is a crucial aspect of human perception that allows individuals to respond to social cues. Humans are able to discriminate between similar actions despite transformations, like changes in viewpoint or actor, that substantially alter the visual appearance of a scene. This ability to generalize across complex transformations is a hallmark of human visual intelligence. Advances in understanding action recognition at the neural level have not always translated into precise accounts of the computational principles that determine which representations of action sequences are constructed by human visual cortex. Here we test the hypothesis that invariant action discrimination might fill this gap. Recently, the study of artificial systems for static object perception has produced models, Convolutional Neural Networks (CNNs), that achieve human-level performance in complex discriminative tasks. Within this class, architectures that better support invariant object recognition also produce image representations that better match those implied by human and primate neural data. However, whether these models produce representations of action sequences that support recognition across complex transformations and closely follow neural representations of actions remains unknown. Here we show that spatiotemporal CNNs accurately categorize video stimuli into action classes, and that deliberate model modifications that improve performance on an invariant action recognition task lead to data representations that better match human neural recordings. Our results support our hypothesis that performance on invariant discrimination dictates the neural representations of actions computed in the brain. These results broaden the scope of the invariant recognition framework for understanding visual intelligence from the perception of inanimate objects and faces in static images to the study of human perception of action sequences.
dc.description.sponsorship: Eugene McDermott Foundation
dc.description.sponsorship: NVIDIA Corporation
dc.description.sponsorship: McGovern Institute for Brain Research at MIT
dc.publisher: Public Library of Science
dc.relation.isversionof: http://dx.doi.org/10.1371/journal.pcbi.1005859
dc.rights: Creative Commons Attribution 4.0 International License
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.source: PLoS
dc.title: Invariant recognition drives neural representations of action sequences
dc.type: Article
dc.identifier.citation: Tacchetti, Andrea, Leyla Isik, and Tomaso Poggio. “Invariant Recognition Drives Neural Representations of Action Sequences.” Edited by Max Berniker. PLOS Computational Biology 13, no. 12 (December 18, 2017): e1005859.
dc.contributor.department: Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences
dc.contributor.department: McGovern Institute for Brain Research at MIT
dc.contributor.mitauthor: Isik, Leyla
dc.contributor.mitauthor: Tacchetti, Andrea
dc.contributor.mitauthor: Poggio, Tomaso A
dc.relation.journal: PLOS Computational Biology
dc.eprint.version: Final published version
dc.type.uri: http://purl.org/eprint/type/JournalArticle
eprint.status: http://purl.org/eprint/status/PeerReviewed
dc.date.updated: 2018-01-19T14:22:05Z
dspace.orderedauthors: Tacchetti, Andrea; Isik, Leyla; Poggio, Tomaso
dspace.embargo.terms: N
dc.identifier.orcid: https://orcid.org/0000-0002-9255-0151
dc.identifier.orcid: https://orcid.org/0000-0001-9311-9171
dc.identifier.orcid: https://orcid.org/0000-0002-3944-0455
mit.license: PUBLISHER_CC


