dc.contributor.advisor	Tomaso A. Poggio.	en_US
dc.contributor.author	Tacchetti, Andrea	en_US
dc.contributor.other	Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.	en_US
dc.date.accessioned	2018-03-02T21:39:55Z
dc.date.available	2018-03-02T21:39:55Z
dc.date.copyright	2017	en_US
dc.date.issued	2017	en_US
dc.identifier.uri	http://hdl.handle.net/1721.1/113935
dc.description	Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.	en_US
dc.description	This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.	en_US
dc.description	Cataloged from student-submitted PDF version of thesis.	en_US
dc.description	Includes bibliographical references (pages 125-139).	en_US
dc.description.abstract	Recognizing other people and their actions from visual input is a crucial aspect of human perception that allows individuals to respond to social cues. Humans effortlessly identify familiar faces and make fine distinctions between others' behaviors, despite transformations, such as changes in viewpoint, lighting, or facial expression, that substantially alter the appearance of a visual scene. The ability to generalize across these complex transformations is a hallmark of human visual intelligence, and the neural mechanisms supporting it have been the subject of wide-ranging investigation in systems and computational neuroscience. However, advances in understanding the neural machinery of visual perception have not always translated into precise accounts of the computational principles dictating which representations of sensory input the human visual system learns to compute, nor of how our visual system acquires the information necessary to support this learning process. Here we present results in support of the hypothesis that invariant discrimination and time continuity might fill these gaps. In particular, we use magnetoencephalography (MEG) decoding and a dataset of well-controlled, naturalistic videos to study invariant action recognition, and we find that representations of action sequences that support invariant recognition can be measured in the human brain. Moreover, we establish a direct link between how well artificial video representations support invariant action recognition and the extent to which they match neural correlation patterns. Finally, we show that representations of visual input that are robust to changes in appearance can be learned by exploiting time continuity in video sequences. Taken as a whole, our results suggest that supporting invariant discrimination tasks is the computational principle dictating which representations of sensory input are computed by human visual cortex, and that time continuity in visual scenes is sufficient to learn such representations.	en_US
dc.description.statementofresponsibility	by Andrea Tacchetti.	en_US
dc.format.extent	139 pages	en_US
dc.language.iso	eng	en_US
dc.publisher	Massachusetts Institute of Technology	en_US
dc.rights	MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission.	en_US
dc.rights.uri	http://dspace.mit.edu/handle/1721.1/7582	en_US
dc.subject	Electrical Engineering and Computer Science.	en_US
dc.title	Learning invariant representations of actions and faces	en_US
dc.type	Thesis	en_US
dc.description.degree	Ph. D.	en_US
dc.contributor.department	Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.identifier.oclc	1023862026	en_US
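
The abstract above attributes appearance-robust visual representations to time continuity in video sequences. As a purely illustrative aid, and not the method used in the thesis, the short Python (PyTorch) sketch below shows one common way a temporal-continuity (slowness) objective can be written: an encoder is trained so that consecutive frames of a clip map to nearby points in representation space, with a variance term to rule out the trivial constant solution. The encoder architecture, the dimensions, and the random stand-in frames are all assumptions made for this example.

# Illustrative sketch only: a temporal-continuity (slowness) objective for
# learning frame representations that change little between consecutive frames.
# The network, dimensions, and synthetic "frames" are assumptions for the example.
import torch
import torch.nn as nn

torch.manual_seed(0)

class Encoder(nn.Module):
    """Maps a flattened frame to a low-dimensional representation."""
    def __init__(self, frame_dim=1024, rep_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(frame_dim, 256), nn.ReLU(),
            nn.Linear(256, rep_dim),
        )

    def forward(self, x):
        return self.net(x)

def temporal_continuity_loss(z_t, z_tp1):
    # Penalize changes in the representation across adjacent frames.
    return ((z_tp1 - z_t) ** 2).sum(dim=1).mean()

def variance_penalty(z):
    # Keep each representation dimension's standard deviation from collapsing
    # to zero, which would trivially minimize the continuity term.
    return torch.relu(1.0 - z.std(dim=0)).mean()

encoder = Encoder()
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

for step in range(100):
    # Stand-in for pairs of consecutive frames drawn from the same video clip;
    # the second frame is the first plus a small appearance perturbation.
    frames_t = torch.randn(64, 1024)
    frames_tp1 = frames_t + 0.1 * torch.randn(64, 1024)

    z_t, z_tp1 = encoder(frames_t), encoder(frames_tp1)
    loss = temporal_continuity_loss(z_t, z_tp1) + variance_penalty(torch.cat([z_t, z_tp1]))

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

In this toy setting the objective pushes the encoder toward features that are stable across the small frame-to-frame perturbation, which is the sense in which time continuity can stand in for explicit invariance labels.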

