Learning invariant representations of actions and faces

Author(s)
Tacchetti, Andrea
Full printable version (28.51 MB)
Other Contributors
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Advisor
Tomaso A. Poggio.
Terms of use
MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission. http://dspace.mit.edu/handle/1721.1/7582
Abstract
Recognizing other people and their actions from visual input is a crucial aspect of human perception that allows individuals to respond to social cues. Humans effortlessly identify familiar faces and make fine distinctions between others' behaviors, despite transformations, such as changes in viewpoint, lighting, or facial expression, that substantially alter the appearance of a visual scene. The ability to generalize across these complex transformations is a hallmark of human visual intelligence, and the neural mechanisms supporting it have been the subject of wide-ranging investigation in systems and computational neuroscience. However, advances in understanding the neural machinery of visual perception have not always translated into precise accounts of the computational principles dictating which representations of sensory input the human visual system learns to compute, nor of how our visual system acquires the information necessary to support this learning process. Here we present results in support of the hypothesis that invariant discrimination and time continuity might fill these gaps. In particular, we use magnetoencephalography (MEG) decoding and a dataset of well-controlled, naturalistic videos to study invariant action recognition, and find that representations of action sequences that support invariant recognition can be measured in the human brain. Moreover, we establish a direct link between how well artificial video representations support invariant action recognition and the extent to which they match neural correlation patterns. Finally, we show that representations of visual input that are robust to changes in appearance can be learned by exploiting time continuity in video sequences. Taken as a whole, our results suggest that supporting invariant discrimination tasks is the computational principle dictating which representations of sensory input are computed by human visual cortex, and that time continuity in visual scenes is sufficient to learn such representations.
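
The abstract's final claim, that appearance-robust representations can be learned by exploiting time continuity in video, is in the spirit of slowness or temporal-coherence learning. The following is a minimal illustrative sketch of that idea only, not the models developed in the thesis: a linear slow-feature-analysis example that finds projections of a frame-descriptor time series whose outputs change as little as possible between adjacent frames. The function name and toy data are hypothetical.

```python
# Illustrative sketch of temporal-continuity (slowness) learning, not the
# thesis's actual models: find a linear map whose outputs vary slowly over
# time, so features are stable across adjacent video frames.
import numpy as np

def slow_features(X, n_features=2):
    """X: (T, d) array of frame descriptors ordered in time.
    Returns a (d, n_features) projection whose outputs vary slowly."""
    X = X - X.mean(axis=0)                          # center the data
    cov = X.T @ X / (len(X) - 1)
    evals, evecs = np.linalg.eigh(cov)
    keep = evals > 1e-10                            # drop near-null directions
    whiten = evecs[:, keep] / np.sqrt(evals[keep])  # whitening matrix
    Z = X @ whiten                                  # unit-variance, decorrelated
    dZ = np.diff(Z, axis=0)                         # frame-to-frame differences
    dcov = dZ.T @ dZ / (len(dZ) - 1)
    devals, devecs = np.linalg.eigh(dcov)           # ascending eigenvalues
    return whiten @ devecs[:, :n_features]          # slowest directions first

# Toy usage: a slowly drifting latent signal buried in fast noise dimensions.
rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 500)
slow = np.sin(t)
fast = rng.standard_normal((500, 5))
X = np.column_stack([slow + 0.1 * rng.standard_normal(500), fast])
W = slow_features(X, n_features=1)
recovered = (X - X.mean(axis=0)) @ W                # tracks the slow sinusoid
```

The design choice here, penalizing change between temporally adjacent frames while keeping outputs decorrelated and unit-variance, is one simple way to turn "time continuity" into a learning objective; the thesis studies this principle in the context of invariant action and face recognition with richer models.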
Description
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
 
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
 
Cataloged from student-submitted PDF version of thesis.
 
Includes bibliographical references (pages 125-139).
 
Date issued
2017
URI
http://hdl.handle.net/1721.1/113935
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology
Keywords
Electrical Engineering and Computer Science.

Collections
  • Doctoral Theses
