dc.contributor.author	Anselmi, Fabio
dc.contributor.author	Leibo, Joel Z
dc.contributor.author	Rosasco, Lorenzo
dc.contributor.author	Mutch, James Vincent
dc.contributor.author	Tacchetti, Andrea
dc.contributor.author	Poggio, Tomaso A
dc.date.accessioned	2018-06-06T14:01:56Z
dc.date.available	2018-06-06T14:01:56Z
dc.date.issued	2015-06
dc.date.submitted	2015-04
dc.identifier.issn	0304-3975
dc.identifier.uri	http://hdl.handle.net/1721.1/116137
dc.description.abstract	The present phase of Machine Learning is characterized by supervised learning algorithms relying on large sets of labeled examples (n → ∞). The next phase is likely to focus on algorithms capable of learning from very few labeled examples (n → 1), like humans seem able to do. We propose an approach to this problem and describe the underlying theory, based on the unsupervised, automatic learning of a “good” representation for supervised learning, characterized by small sample complexity. We consider the case of visual object recognition, though the theory also applies to other domains like speech. The starting point is the conjecture, proved in specific cases, that image representations which are invariant to translation, scaling and other transformations can considerably reduce the sample complexity of learning. We prove that an invariant and selective signature can be computed for each image or image patch: the invariance can be exact in the case of group transformations and approximate under non-group transformations. A module performing filtering and pooling, like the simple and complex cells described by Hubel and Wiesel, can compute such a signature. The theory offers novel unsupervised learning algorithms for “deep” architectures for image and speech recognition. We conjecture that the main computational goal of the ventral stream of visual cortex is to provide a hierarchical representation of new objects/images which is invariant to transformations, stable, and selective for recognition, and show how this representation may be continuously learned in an unsupervised way during development and visual experience.	en_US
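The abstract's central mechanism, a signature obtained by filtering an input against stored templates and pooling over their transformed versions, can be illustrated with a toy computation. The following is a minimal sketch, not taken from the paper or any released code, assuming the simplest setting: a 1D signal, the finite group of circular translations, random templates standing in for the "simple cell" filters, and histogram pooling as the "complex cell" stage. NumPy, the signature function name, and the bin/range choices are illustrative assumptions. Because translating the input only permutes the set of dot products with the shifted templates, the pooled histogram is exactly invariant to the group action while remaining selective across different inputs.

# Minimal sketch (illustrative, not the authors' implementation): an invariant
# and selective signature via filtering + pooling over a finite group, here the
# circular translations of a 1D signal.
import numpy as np

rng = np.random.default_rng(0)

def signature(x, templates, n_bins=10):
    # For each template t, collect the dot products of all circular shifts of x
    # with t, then pool them into a histogram. Translating x only permutes these
    # values, so the histogram is exactly translation invariant.
    sig = []
    for t in templates:
        dots = np.array([np.dot(np.roll(x, g), t) for g in range(len(x))])
        hist, _ = np.histogram(dots, bins=n_bins, range=(-5.0, 5.0), density=True)
        sig.append(hist)
    return np.concatenate(sig)

d = 64
templates = [rng.standard_normal(d) / np.sqrt(d) for _ in range(3)]  # random "cells"
x = rng.standard_normal(d)

s_x       = signature(x, templates)
s_shifted = signature(np.roll(x, 17), templates)          # same signal, translated
s_other   = signature(rng.standard_normal(d), templates)  # a different signal

print(np.allclose(s_x, s_shifted))  # True: invariance under the group action
print(np.allclose(s_x, s_other))    # False (almost surely): selectivity

This sketch captures only the exact-invariance, single-layer case for a finite group; the approximate invariance under non-group transformations and the hierarchical ("deep") version discussed in the abstract are beyond its scope.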
dc.description.sponsorship	National Science Foundation (U.S.) (Award CCF-1231216)	en_US
dc.language.iso	en_US
dc.publisher	Elsevier	en_US
dc.relation.isversionof	http://dx.doi.org/10.1016/j.tcs.2015.06.048	en_US
dc.rights	Creative Commons Attribution-Noncommercial-Share Alike	en_US
dc.rights.uri	http://creativecommons.org/licenses/by-nc-sa/4.0/	en_US
dc.source	arXiv	en_US
dc.title	Unsupervised learning of invariant representations	en_US
dc.type	Article	en_US
dc.identifier.citation	Anselmi, Fabio, et al. “Unsupervised Learning of Invariant Representations.” Theoretical Computer Science, vol. 633, June 2016, pp. 112–21.	en_US
dc.contributor.department	Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences	en_US
dc.contributor.department	McGovern Institute for Brain Research at MIT	en_US
dc.contributor.mitauthor	Anselmi, Fabio
dc.contributor.mitauthor	Leibo, Joel Z
dc.contributor.mitauthor	Rosasco, Lorenzo
dc.contributor.mitauthor	Mutch, James Vincent
dc.contributor.mitauthor	Tacchetti, Andrea
dc.contributor.mitauthor	Poggio, Tomaso A
dc.relation.journal	Theoretical Computer Science	en_US
dc.eprint.version	Original manuscript	en_US
dc.type.uri	http://purl.org/eprint/type/JournalArticle	en_US
eprint.status	http://purl.org/eprint/status/NonPeerReviewed	en_US
dspace.orderedauthors	Anselmi, Fabio; Leibo, Joel Z.; Rosasco, Lorenzo; Mutch, Jim; Tacchetti, Andrea; Poggio, Tomaso	en_US
dspace.embargo.terms	N	en_US
dc.identifier.orcid	https://orcid.org/0000-0002-0264-4761
dc.identifier.orcid	https://orcid.org/0000-0002-3153-916X
dc.identifier.orcid	https://orcid.org/0000-0001-6376-4786
dc.identifier.orcid	https://orcid.org/0000-0001-6130-5631
dc.identifier.orcid	https://orcid.org/0000-0001-9311-9171
dc.identifier.orcid	https://orcid.org/0000-0002-3944-0455
mit.license	OPEN_ACCESS_POLICY	en_US

