Transfer learning for image classification with sparse prototype representations
Author(s)
Quattoni, Ariadna; Collins, Michael; Darrell, Trevor
Other Contributors
Vision
Advisor
Trevor Darrell
Abstract
To learn a new visual category from few examples, prior knowledge from unlabeled data as well as from previous related categories may be useful. We develop a new method for transfer learning which exploits available unlabeled data and an arbitrary kernel function; we form a representation based on kernel distances to a large set of unlabeled data points. To transfer knowledge from previous related problems, we observe that a category might be learnable using only a small subset of reference prototypes. Related problems may share a significant number of relevant prototypes; we find such a reduced representation by performing a joint loss minimization over the training sets of related problems with a shared regularization penalty that minimizes the total number of prototypes involved in the approximation. This optimization problem can be formulated as a linear program that can be solved efficiently. We conduct experiments on a news-topic prediction task where the goal is to predict whether an image belongs to a particular news topic. Our results show that when only a few examples are available for training a target topic, leveraging knowledge learnt from other topics can significantly improve performance.
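A minimal sketch of the prototype representation described above: each image is re-represented by its kernel values against a set of unlabeled reference points. The function names, the RBF kernel choice, and the random data are illustrative assumptions; the method itself accepts an arbitrary kernel.

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    # Gaussian RBF kernel; an illustrative choice -- the method
    # described in the abstract works with any kernel function.
    return np.exp(-gamma * np.sum((x - y) ** 2))

def prototype_representation(x, unlabeled, kernel=rbf_kernel):
    # Represent x by its kernel similarities to a large set of
    # unlabeled reference prototypes (hypothetical helper name).
    return np.array([kernel(x, u) for u in unlabeled])

# Toy example: 5 unlabeled prototypes in a 3-D feature space.
rng = np.random.default_rng(0)
unlabeled = rng.normal(size=(5, 3))
x = rng.normal(size=3)
phi = prototype_representation(x, unlabeled)  # 5-dimensional representation
```

A sparse classifier over `phi` would then, per the abstract, be found by a joint loss minimization across related tasks with a shared sparsity penalty, solvable as a linear program.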
Date issued
2008-03-03
Other identifiers
MIT-CSAIL-TR-2008-012
Keywords
transfer learning, image classification