Unsupervised learning of clutter-resistant visual representations from natural videos
Author(s)
Liao, Qianli; Leibo, Joel Z; Poggio, Tomaso
Abstract
Populations of neurons in inferotemporal cortex (IT) maintain an explicit code for object identity that also tolerates transformations of object appearance, e.g., position, scale, and viewing angle [1, 2, 3]. Though the learning rules are not known, recent results [4, 5, 6] suggest the operation of an unsupervised temporal-association-based method, e.g., Foldiak's trace rule [7]. Such methods exploit the temporal continuity of the visual world by assuming that visual experience over short timescales will tend to have invariant identity content. Thus, by associating representations of frames from nearby times, a representation that tolerates whatever transformations occurred in the video may be achieved. Many previous studies have verified that such rules can work in simple situations without background clutter, but the presence of visual clutter has remained problematic for this approach. Here we show that temporal association based on large class-specific filters (templates) avoids the problem of clutter. Our system learns in an unsupervised way from natural videos gathered from the internet, and is able to perform a difficult unconstrained face recognition task on natural images (Labeled Faces in the Wild [8]).
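As background for the temporal-association idea the abstract describes, the following is a minimal sketch of a trace-rule update in the spirit of Foldiak [7]. It is an illustrative assumption, not the paper's actual method (which uses large class-specific templates rather than a single learned unit); the function name, learning rate, trace constant, and toy data are all hypothetical.

import numpy as np

rng = np.random.default_rng(0)

def trace_rule(frames, lr=0.01, delta=0.2):
    # Illustrative sketch: learn one template w whose response stays
    # consistent across temporally adjacent frames, exploiting the
    # temporal-continuity assumption described in the abstract.
    w = rng.standard_normal(frames.shape[1]) * 0.01
    y_trace = 0.0  # low-pass filtered ("trace") activity
    for x in frames:
        y = w @ x                                    # instantaneous response
        y_trace = (1 - delta) * y_trace + delta * y  # temporal trace of y
        w += lr * y_trace * (x - w)                  # Hebbian update gated by the trace
        w /= np.linalg.norm(w) + 1e-12               # keep the template normalized
    return w

# Toy usage: a smooth random-walk "video" of 100 flattened 64-pixel frames.
frames = np.cumsum(rng.standard_normal((100, 64)), axis=0)
template = trace_rule(frames)

Because the update is gated by the smoothed trace of past activity rather than the instantaneous response alone, frames that follow one another (and so likely share identity content) pull the template in a common direction, which is the mechanism the paper builds on with class-specific templates.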
Date issued
2015-04-27
Publisher
Center for Brains, Minds and Machines (CBMM), arXiv
Citation
arXiv:1409.3879v2
Series/Report no.
CBMM Memo Series; 023
Keywords
Object recognition, computer vision, machine learning, artificial intelligence