Transferring Nonlinear Representations using Gaussian Processes with a Shared Latent Space
Author(s)
Urtasun, Raquel; Quattoni, Ariadna; Darrell, Trevor
Download: MIT-CSAIL-TR-2007-053.pdf (319.7 KB)
Other Contributors
Vision
Advisor
Trevor Darrell
Abstract
When a series of problems are related, representations derived from learning earlier tasks may be useful in solving later problems. In this paper we propose a novel approach to transfer learning with low-dimensional, non-linear latent spaces. We show how such representations can be jointly learned across multiple tasks in a discriminative probabilistic regression framework. When transferred to new tasks with relatively few training examples, learning can be faster and/or more accurate. Experiments on a digit recognition task show significantly improved performance when compared to baseline performance with the original feature representation or with a representation derived from a semi-supervised learning approach.
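The core idea above is to learn a shared low-dimensional latent space from related tasks, then do regression in that space for a new task that has only a few labeled examples. The following is a minimal numpy sketch of that pipeline, with two labeled simplifications: PCA stands in for the paper's jointly learned GP latent space, and the toy data, length scale, and task labels are all hypothetical, chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 10-D observations whose true structure lies on a 2-D manifold
# (a hypothetical stand-in for features shared across related tasks).
Z_true = rng.normal(size=(60, 2))                  # hidden low-dim coordinates
W = rng.normal(size=(2, 10))
X = Z_true @ W + 0.05 * rng.normal(size=(60, 10))  # observed high-dim features

# Stand-in for the jointly learned latent space: linear PCA instead of a
# discriminatively trained GP latent variable model.
X_c = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(X_c, full_matrices=False)
Z = X_c @ Vt[:2].T                                 # 2-D latent representation

def rbf(A, B, ell=1.0):
    """Squared-exponential (RBF) kernel between row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

# New task with few labels: GP regression in the transferred latent space.
train = np.arange(8)                               # only 8 labeled examples
y = np.sin(Z_true[:, 0])                           # hypothetical task output
K = rbf(Z[train], Z[train]) + 1e-4 * np.eye(len(train))  # kernel + jitter
alpha = np.linalg.solve(K, y[train])               # GP posterior mean weights
y_pred = rbf(Z, Z[train]) @ alpha                  # predictions for all points
```

Because the target function is smooth in the latent coordinates, a handful of labeled points suffices for the GP to interpolate; with the raw 10-D features, the same kernel would need far more data to cover the space.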
Date issued
2007-11-06
Other identifiers
MIT-CSAIL-TR-2007-053
Keywords
transfer learning, latent variable models
Collections