
dc.contributor.advisor: Trevor Darrell
dc.contributor.author: Urtasun, Raquel
dc.contributor.author: Quattoni, Ariadna
dc.contributor.author: Lawrence, Neil
dc.contributor.author: Darrell, Trevor
dc.contributor.other: Vision
dc.date.accessioned: 2008-05-05T15:46:05Z
dc.date.available: 2008-05-05T15:46:05Z
dc.date.issued: 2008-04-11
dc.identifier.other: MIT-CSAIL-TR-2008-020
dc.identifier.uri: http://hdl.handle.net/1721.1/41517
dc.description.abstract: When a series of problems is related, representations derived from learning earlier tasks may be useful in solving later ones. In this paper we propose a novel approach to transfer learning with low-dimensional, non-linear latent spaces. We show how such representations can be learned jointly across multiple tasks in a Gaussian Process framework. When transferred to new tasks with relatively few training examples, learning can be faster and/or more accurate. Experiments on digit recognition and newsgroup classification tasks show significantly improved performance compared to baselines that use a representation derived from a semi-supervised learning approach, or a discriminative approach trained only on the target data.
dc.format.extent: 10 p.
dc.relation: Massachusetts Institute of Technology Computer Science and Artificial Intelligence Laboratory
dc.title: Transferring Nonlinear Representations using Gaussian Processes with a Shared Latent Space
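The abstract describes jointly learning one low-dimensional, non-linear latent space over several related source tasks and reusing it for a new task with few labels. Below is a minimal sketch of that idea, not the authors' method or released code: it pools data from the source tasks and fits a single GP-LVM (via the GPy library) as a simplification of the paper's joint learning, and it assigns latent coordinates to target points by nearest neighbor as a cheap stand-in for the paper's inference. All data and variable names (source_tasks, Y_target, and so on) are hypothetical.

# Sketch only: shared non-linear latent space from pooled source tasks,
# reused as the representation for a data-poor target task.
# Assumes GPy and scikit-learn are installed.
import numpy as np
import GPy
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Stand-in data: three source tasks observed in the same 64-d space.
source_tasks = [rng.normal(size=(100, 64)) for _ in range(3)]
Y_pool = np.vstack(source_tasks)          # (300, 64) pooled observations

# One GP-LVM over the pooled data forces a single shared latent space.
latent_dim = 2
model = GPy.models.GPLVM(Y_pool, latent_dim,
                         kernel=GPy.kern.RBF(latent_dim, ARD=True))
model.optimize(messages=False, max_iters=200)
Z_pool = np.asarray(model.X)              # learned latent coordinates

# Target task: only 20 labelled examples. Give each target point the
# latent coordinate of its nearest pooled source point (a stand-in for
# inferring latent positions under the model).
Y_target = rng.normal(size=(20, 64))
y_target = rng.integers(0, 2, size=20)
nearest = ((Y_target[:, None, :] - Y_pool[None, :, :]) ** 2).sum(-1).argmin(1)
Z_target = Z_pool[nearest]

# Train on the transferred low-dimensional representation.
clf = KNeighborsClassifier(n_neighbors=3).fit(Z_target, y_target)

The point of the sketch is the transfer step: the classifier for the new task sees only the 2-d coordinates of the shared latent space learned from the source tasks, rather than the raw 64-d inputs, which is what the abstract credits for faster and more accurate learning with few target examples.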

