
dc.contributor.author: Liao, Qianli
dc.contributor.author: Leibo, Joel Z.
dc.contributor.author: Poggio, Tomaso A.
dc.date.accessioned: 2014-12-16T15:01:38Z
dc.date.available: 2014-12-16T15:01:38Z
dc.date.issued: 2013
dc.identifier.issn: 1049-5258
dc.identifier.uri: http://hdl.handle.net/1721.1/92318
dc.description.abstract: One approach to computer object recognition and modeling the brain's ventral stream involves unsupervised learning of representations that are invariant to common transformations. However, applications of these ideas have usually been limited to 2D affine transformations, e.g., translation and scaling, since they are easiest to solve via convolution. In accord with a recent theory of transformation-invariance, we propose a model that, while capturing other common convolutional networks as special cases, can also be used with arbitrary identity-preserving transformations. The model's wiring can be learned from videos of transforming objects, or any other grouping of images into sets by their depicted object. Through a series of successively more complex empirical tests, we study the invariance/discriminability properties of this model with respect to different transformations. First, we empirically confirm theoretical predictions for the case of 2D affine transformations. Next, we apply the model to non-affine transformations: as expected, it performs well on face verification tasks requiring invariance to the relatively smooth transformations of 3D rotation-in-depth and changes in illumination direction. Surprisingly, it can also tolerate "clutter transformations", which map an image of a face on one background to an image of the same face on a different background. Motivated by these empirical findings, we tested the same model on face verification benchmark tasks from the computer vision literature: Labeled Faces in the Wild, PubFig, and a new dataset we gathered, achieving strong performance in these highly unconstrained cases as well.
dc.language.iso: en_US
dc.publisher: Neural Information Processing Systems Foundation
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/
dc.source: MIT Web Domain
dc.title: Learning invariant representations and applications to face verification
dc.type: Article
dc.identifier.citation: Liao, Qianli, Joel Z. Leibo, and Tomaso Poggio. "Learning invariant representations and applications to face verification." Advances in Neural Information Processing Systems 26 (NIPS 2013).
dc.contributor.department: Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.contributor.department: McGovern Institute for Brain Research at MIT
dc.contributor.mitauthor: Liao, Qianli
dc.contributor.mitauthor: Leibo, Joel Z.
dc.contributor.mitauthor: Poggio, Tomaso A.
dc.relation.journal: Advances in Neural Information Processing Systems (NIPS)
dc.eprint.version: Author's final manuscript
dc.type.uri: http://purl.org/eprint/type/ConferencePaper
eprint.status: http://purl.org/eprint/status/NonPeerReviewed
dspace.orderedauthors: Liao, Qianli; Leibo, Joel Z.; Poggio, Tomaso.
dc.identifier.orcid: https://orcid.org/0000-0002-3153-916X
dc.identifier.orcid: https://orcid.org/0000-0002-3944-0455
dc.identifier.orcid: https://orcid.org/0000-0003-0076-621X
mit.license: OPEN_ACCESS_POLICY
mit.metadata.status: Complete
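
The abstract above describes a model that gains invariance by grouping images of the same object under different transformations (for example, frames of a video) and pooling the input's responses within each group. The following is a minimal illustrative sketch of that general idea, not the authors' implementation; the function names, the mean/variance pooling statistics, and the toy data are assumptions introduced here.

# Minimal sketch (not the authors' code) of the invariant-signature idea
# described in the abstract: stored views of each template object under its
# transformations are grouped together, the input is projected onto every
# stored view, and the projections are pooled within each group, giving a
# signature that is approximately invariant to those transformations.
# All names and the choice of mean/variance pooling are illustrative assumptions.

import numpy as np

def signature(x, template_groups):
    """x: flattened input image (1-D array).
    template_groups: list of 2-D arrays; each row is one flattened view
    (e.g., one video frame) of a single template object."""
    x = x / (np.linalg.norm(x) + 1e-12)          # normalize the input
    feats = []
    for group in template_groups:
        # normalized dot products of the input with every stored view
        g = group / (np.linalg.norm(group, axis=1, keepdims=True) + 1e-12)
        dots = g @ x
        # pool over the group's transformations; here mean and variance
        # (any statistic of the distribution of dots would serve)
        feats.extend([dots.mean(), dots.var()])
    return np.array(feats)

# toy usage: two "objects", each seen under a few random "transformations"
rng = np.random.default_rng(0)
groups = [rng.standard_normal((5, 64)) for _ in range(2)]
print(signature(rng.standard_normal(64), groups).shape)  # (4,)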

