
dc.contributor.advisor: Trevor Darrell (en_US)
dc.contributor.author: Quattoni, Ariadna (en_US)
dc.contributor.author: Carreras, Xavier (en_US)
dc.contributor.author: Collins, Michael (en_US)
dc.contributor.author: Darrell, Trevor (en_US)
dc.contributor.other: Vision (en_US)
dc.date.accessioned: 2008-07-24T20:00:14Z
dc.date.available: 2008-07-24T20:00:14Z
dc.date.issued: 2008-07-23 (en_US)
dc.identifier.other: MIT-CSAIL-TR-2008-045 (en_US)
dc.identifier.uri: http://hdl.handle.net/1721.1/41888
dc.description.abstract: Recent approaches to multi-task learning have investigated a variety of matrix-norm regularization schemes for promoting feature sharing across tasks. In essence, these approaches aim to extend the l1 framework for sparse single-task approximation to the multi-task setting. In this paper we focus on the computational complexity of training a jointly regularized model and propose an optimization algorithm whose complexity is linear in the number of training examples and O(n log n) in n, the number of parameters of the joint model. Our algorithm casts jointly regularized loss minimization as a convex constrained optimization problem, for which we develop an efficient projected gradient method. The main contribution of this paper is the derivation of a gradient projection step with l1,∞ constraints that can be performed efficiently and for which convergence rates can be established. (en_US)
dc.format.extent: 8 p. (en_US)
dc.relation: Massachusetts Institute of Technology Computer Science and Artificial Intelligence Laboratory (en_US)
dc.title: A Projected Subgradient Method for Scalable Multi-Task Learning (en_US)
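The abstract above turns on a single computational primitive: Euclidean projection of the joint parameter matrix onto an l1,∞ ball, inside an otherwise standard projected (sub)gradient loop. Below is a minimal numpy sketch of that projection under the usual convention that W has one row per feature and one column per task, so the l1,∞ norm is the sum over rows of each row's maximum absolute entry. The function names and the nested-bisection strategy are illustrative assumptions on my part; the report derives an exact O(n log n) sort-based projection, which this sketch does not reproduce.

import numpy as np


def _row_mu(row_abs, theta, tol=1e-10):
    """Water level mu >= 0 for one row: sum_j max(row_abs[j] - mu, 0) = theta."""
    if row_abs.sum() <= theta:
        return 0.0  # dual price exceeds the row's mass: the row is zeroed
    lo, hi = 0.0, row_abs.max()
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if np.maximum(row_abs - mid, 0.0).sum() > theta:
            lo = mid  # too much mass above the level: raise it
        else:
            hi = mid
    return 0.5 * (lo + hi)


def project_l1inf(W, C, tol=1e-8):
    """Project W onto the l1,inf ball {A : sum_i max_j |A_ij| <= C}."""
    A = np.abs(W)
    if A.max(axis=1).sum() <= C:
        return W.copy()  # already feasible: projection is the identity
    # Bisect the dual variable theta: sum_i mu_i(theta) falls continuously
    # from ||W||_{1,inf} at theta = 0 toward 0 as theta grows.
    lo, hi = 0.0, A.sum(axis=1).max()
    while hi - lo > tol:
        theta = 0.5 * (lo + hi)
        if sum(_row_mu(a, theta) for a in A) > C:
            lo = theta
        else:
            hi = theta
    theta = 0.5 * (lo + hi)
    mu = np.array([_row_mu(a, theta) for a in A])
    # Clip each row into the box [-mu_i, mu_i], preserving signs.
    return np.sign(W) * np.minimum(A, mu[:, None])

With this primitive, the outer loop described in the abstract is plain projected subgradient descent: take a step along a subgradient of the joint training loss, then call project_l1inf(W, C) to restore feasibility. Capping the l1,∞ norm drives entire feature rows to zero at once, which is what yields feature sharing across tasks; the bisection here trades the report's stated O(n log n) complexity for brevity.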

