dc.contributor.advisor | Tomaso Poggio | |
dc.contributor.author | Leibo, Joel Z | en_US |
dc.contributor.author | Mutch, Jim | en_US |
dc.contributor.author | Rosasco, Lorenzo | en_US |
dc.contributor.author | Ullman, Shimon | en_US |
dc.contributor.author | Poggio, Tomaso | en_US |
dc.contributor.other | Center for Biological and Computational Learning (CBCL) | en_US |
dc.date.accessioned | 2011-01-04T22:30:15Z | |
dc.date.available | 2011-01-04T22:30:15Z | |
dc.date.issued | 2010-12-30 | |
dc.identifier.uri | http://hdl.handle.net/1721.1/60378 | |
dc.description.abstract | Invariance to various transformations is key to object recognition, but existing definitions of invariance are somewhat confusing and discussions of invariance are often confused. In this report, we provide an operational definition of invariance by formally defining perceptual tasks as classification problems. The definition should be appropriate for physiology, psychophysics and computational modeling. For any specific object, invariance can be trivially "learned" by memorizing a sufficient number of example images of the transformed object. While our formal definition of invariance also covers such cases, this report focuses instead on invariance obtained from very few example images, mostly from a single one. Image-plane invariances -- such as translation, rotation and scaling -- can be computed from a single image for any object. They are called generic since in principle they can be hardwired or learned (during development) for any object. From this perspective, we characterize the invariance range of a class of feedforward architectures for visual recognition that mimic the hierarchical organization of the ventral stream. We show that this class of models achieves essentially perfect translation and scaling invariance for novel images. In this architecture, a new image is represented in terms of weights of "templates" (e.g. "centers" or "basis functions") at each level of the hierarchy. Such a representation inherits the invariance of each template, which is implemented through replication of the corresponding "simple" units across positions or scales and their "association" in a "complex" unit. We show simulations on real images that characterize the type and number of templates needed to support the invariant recognition of novel objects. We find that (1) the templates need not be visually similar to the target objects and (2) a very small number of them is sufficient for good recognition. These somewhat surprising empirical results have intriguing implications for the learning of invariant recognition during the development of a biological organism, such as a human baby. In particular, we conjecture that invariance to translation and scale may be learned through the association -- via temporal contiguity -- of a small number of primal templates, that is, patches extracted from the images of an object moving across positions and scales on the retina. The number of templates can later be augmented by bootstrapping mechanisms that use the correspondence provided by the primal templates, without the need for temporal contiguity. | en_US |
dc.format.extent | 27 p. | en_US |
dc.relation.ispartofseries | MIT-CSAIL-TR-2010-061 | |
dc.relation.ispartofseries | CBCL-294 | |
dc.relation.replaces | CBCL-291 | |
dc.rights | Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported | en |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/3.0/ | |
dc.subject | vision | en_US |
dc.subject | object recognition | en_US |
dc.subject | generic transformations | en_US |
dc.subject | selectivity-invariance trade-off | en_US |
dc.subject | primal templates | en_US |
dc.title | Learning Generic Invariances in Object Recognition: Translation and Scale | en_US |
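The architecture described in the abstract (simple units replicating a template across positions, a complex unit associating their responses, and an image represented by its weights on a small set of templates) can be illustrated with a minimal sketch. This is not the authors' implementation: the normalized dot-product similarity, the max as the association operation, and all function names are assumptions chosen for illustration, and only translation is covered.

import numpy as np

def simple_unit_responses(image, template):
    # "Simple" units: the same template replicated at every image position,
    # each responding with a normalized dot product to its local patch.
    th, tw = template.shape
    t = template.ravel()
    t = t / (np.linalg.norm(t) + 1e-9)
    H, W = image.shape
    responses = np.empty((H - th + 1, W - tw + 1))
    for i in range(H - th + 1):
        for j in range(W - tw + 1):
            patch = image[i:i + th, j:j + tw].ravel()
            responses[i, j] = t @ (patch / (np.linalg.norm(patch) + 1e-9))
    return responses

def complex_unit(image, template):
    # "Complex" unit: associates the replicated simple units by taking their
    # maximum, so the response does not depend on where the template matches.
    return simple_unit_responses(image, template).max()

def signature(image, templates):
    # A new image is represented by its weights on a small set of templates;
    # each entry inherits the translation invariance of its complex unit.
    return np.array([complex_unit(image, t) for t in templates])

# Usage: shifting the object leaves the signature essentially unchanged, so a
# classifier trained on a single example image can recognize translated copies.
rng = np.random.default_rng(0)
obj = rng.random((8, 8))
scene_a = np.zeros((32, 32)); scene_a[2:10, 3:11] = obj
scene_b = np.zeros((32, 32)); scene_b[15:23, 20:28] = obj
templates = [rng.random((6, 6)) for _ in range(5)]  # templates need not resemble the object
print(signature(scene_a, templates))
print(signature(scene_b, templates))  # same vector, up to boundary effects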