Fast, invariant representation for human action in the visual system
Author(s)
Isik, Leyla; Tacchetti, Andrea; Poggio, Tomaso
Abstract
The ability to recognize the actions of others from visual input is essential to humans' daily lives. The neural computations underlying action recognition, however, are still poorly understood. We use magnetoencephalography (MEG) decoding and a computational model to study action recognition from a novel dataset of well-controlled, naturalistic videos of five actions (run, walk, jump, eat, drink) performed by five actors at five viewpoints. We show for the first time that actor- and view-invariant representations for action arise in the human brain as early as 200 ms. We next extend a class of biologically inspired hierarchical computational models of object recognition to recognize actions from videos and explain the computations underlying our MEG findings. This model achieves 3D viewpoint invariance by the same biologically inspired computational mechanism it uses to build invariance to position and scale. These results suggest that robustness to complex transformations, such as 3D viewpoint invariance, does not require special neural architectures, and further provide a mechanistic explanation of the computations driving invariant action recognition.
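The invariance mechanism the abstract refers to can be illustrated with a minimal sketch (names and details here are illustrative assumptions, not the paper's implementation): in HMAX-style hierarchical models, a unit takes dot products of the stimulus with stored copies of a template seen under many transformations (the template's "orbit"), then max-pools over the orbit. Pooling over position-shifted templates yields position invariance, and the same pooling over view-transformed templates would yield viewpoint invariance.

```python
import numpy as np

def invariant_signature(x, template_orbits):
    """Hypothetical simple-to-complex stage: x is a 1-D stimulus vector;
    template_orbits is a list of 2-D arrays, each row one transformed
    copy of a template. Returns one max-pooled response per template,
    invariant to the transformations spanned by each orbit."""
    return np.array([np.max(orbit @ x) for orbit in template_orbits])

# Toy check: pooling over all circular shifts of a template makes the
# signature invariant to circular shifts of the stimulus.
rng = np.random.default_rng(0)
t = rng.standard_normal(8)
orbit = np.stack([np.roll(t, k) for k in range(8)])  # the template's orbit
x = rng.standard_normal(8)
s1 = invariant_signature(x, [orbit])
s2 = invariant_signature(np.roll(x, 3), [orbit])     # shifted stimulus
assert np.allclose(s1, s2)                           # same signature
```

The shift here stands in for any group of transformations; the paper's point is that the identical pooling mechanism, applied to templates of actors seen from multiple views, can produce view-invariant action representations.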
Date issued
2016-01-06
Publisher
Center for Brains, Minds and Machines (CBMM), arXiv
Series/Report no.
CBMM Memo Series;042
Keywords
Magnetoencephalography (MEG), Invariance, Computer vision