Transfer in Reinforcement Learning via Shared Features
Author(s)
Konidaris, George; Scheidwasser, Ilya; Barto, Andrew G.
Terms of use
Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
Abstract
We present a framework for transfer in reinforcement learning based on the idea that related tasks share some common features, and that transfer can be achieved via those shared features. The framework attempts to capture the notion of tasks that are related but distinct, and provides some insight into when transfer can be usefully applied to a problem sequence and when it cannot. We apply the framework to the knowledge transfer problem, and show that an agent can learn a portable shaping function from experience in a sequence of tasks to significantly improve performance in a later related task, even given a very brief training period. We also apply the framework to skill transfer, to show that agents can learn portable skills across a sequence of tasks that significantly improve performance on later related tasks, approaching the performance of agents given perfectly learned problem-specific skills.
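The portable shaping function mentioned in the abstract can be illustrated with a small sketch. The idea, under the standard potential-based shaping scheme F(s, s') = γΦ(s') − Φ(s), is that the potential Φ is defined over features shared across tasks rather than over any single task's state space, so it can be carried into a new task. The linear potential, the regression-style update, and the synthetic data below are illustrative assumptions for this sketch, not the paper's exact experimental setup.

```python
# Illustrative sketch only: the shared features, learning rule, and data here
# are assumptions, not the paper's experimental setup.
import numpy as np

GAMMA = 0.99   # discount factor
ALPHA = 0.1    # learning rate for the potential

def shaping_reward(phi_weights, shared_s, shared_s_next):
    """Potential-based shaping F(s, s') = gamma * Phi(s') - Phi(s),
    with Phi a linear potential over shared (task-independent) features."""
    return GAMMA * (phi_weights @ shared_s_next) - (phi_weights @ shared_s)

def update_potential(phi_weights, shared_s, value_estimate):
    """Regress value estimates from a source task onto the shared features,
    so the resulting potential can be reused in later, related tasks."""
    prediction = phi_weights @ shared_s
    return phi_weights + ALPHA * (value_estimate - prediction) * shared_s

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_shared = 4                      # number of features common to all tasks
    phi_weights = np.zeros(n_shared)

    # Stand-in for source-task experience: shared-feature vectors paired with
    # the values a source-task learner assigned to those states.
    for _ in range(1000):
        s = rng.random(n_shared)
        v = 2.0 * s[0] - 0.5 * s[1]   # synthetic value estimate
        phi_weights = update_potential(phi_weights, s, v)

    # In a later task, the learned potential supplies an extra shaping term
    # added to the environment reward at each transition.
    s, s_next = rng.random(n_shared), rng.random(n_shared)
    print("shaping bonus:", shaping_reward(phi_weights, s, s_next))
```

Because the shaping term is potential-based, adding it to the target task's reward does not change that task's optimal policy; it only redirects early exploration, which is what lets experience from earlier tasks speed up learning in the later one.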
Date issued
2012-06
Department
Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Journal
Journal of Machine Learning Research
Publisher
Journal of Machine Learning Research
Citation
George Konidaris, Ilya Scheidwasser, and Andrew Barto. 2012. Transfer in Reinforcement Learning via Shared Features. Journal of Machine Learning Research 13 (June 2012), 1333-1371.
Version: Final published version
ISSN
1532-4435
1533-7928