Learning invariant representations of actions and faces
Author(s)
Tacchetti, Andrea
Other Contributors
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Advisor
Tomaso A. Poggio.
Abstract
Recognizing other people and their actions from visual input is a crucial aspect of human perception that allows individuals to respond to social cues. Humans effortlessly identify familiar faces and make fine distinctions between others' behaviors, despite transformations, such as changes in viewpoint, lighting, or facial expression, that substantially alter the appearance of a visual scene. The ability to generalize across these complex transformations is a hallmark of human visual intelligence, and the neural mechanisms supporting it have been the subject of wide-ranging investigation in systems and computational neuroscience. However, advances in understanding the neural machinery of visual perception have not always translated into precise accounts of the computational principles dictating which representations of sensory input the human visual system learns to compute, nor of how our visual system acquires the information necessary to support this learning process. Here we present results in support of the hypothesis that invariant discrimination and time continuity might fill these gaps. In particular, we use magnetoencephalography decoding and a dataset of well-controlled, naturalistic videos to study invariant action recognition, and we find that representations of action sequences that support invariant recognition can be measured in the human brain. Moreover, we establish a direct link between how well artificial video representations support invariant action recognition and the extent to which they match neural correlation patterns. Finally, we show that representations of visual input that are robust to changes in appearance can be learned by exploiting time continuity in video sequences.
Taken as a whole, our results suggest that supporting invariant discrimination tasks is the computational principle dictating which representations of sensory input are computed by human visual cortex, and that time continuity in visual scenes is sufficient to learn such representations.
Description
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (pages 125-139).
Date issued
2017
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology
Keywords
Electrical Engineering and Computer Science.