Automatic shaping and decomposition of reward functions
Author(s): Marthi, Bhaskara
Other Contributors: Learning and Intelligent Systems
Advisor: Leslie Kaelbling
Abstract
This paper investigates the problem of automatically learning how to restructure the reward function of a Markov decision process so as to speed up reinforcement learning. We begin by describing a method that learns a shaped reward function given a set of state and temporal abstractions. Next, we consider decomposition of the per-timestep reward in multieffector problems, in which the overall agent can be decomposed into multiple units that are concurrently carrying out various tasks. We show by example that to find a good reward decomposition, it is often necessary to first shape the rewards appropriately. We then give a function approximation algorithm for solving both problems together. Standard reinforcement learning algorithms can be augmented with our methods, and we show experimentally that in each case, significantly faster learning results.
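The shaping half of the abstract builds on potential-based reward shaping, in which adding F(s, a, s') = gamma * Phi(s') - Phi(s) to the per-step reward is guaranteed not to change the optimal policy (Ng, Harada, and Russell, 1999). The sketch below illustrates that general idea inside a tabular Q-learning loop; it is not the report's method, which learns the shaping function from state and temporal abstractions rather than hand-coding it. The gridworld, the potential function phi, and all constants here are illustrative assumptions.

import random
from collections import defaultdict

GAMMA = 0.95          # discount factor (illustrative)
ALPHA = 0.1           # learning rate (illustrative)
GOAL = (4, 4)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def step(state, action):
    """Deterministic 5x5 gridworld: reward 1 at the goal, 0 elsewhere."""
    x, y = state
    dx, dy = action
    nxt = (min(max(x + dx, 0), 4), min(max(y + dy, 0), 4))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def phi(state):
    """Hand-coded potential: negative Manhattan distance to the goal.
    The paper learns this kind of function instead of assuming it."""
    return -(abs(state[0] - GOAL[0]) + abs(state[1] - GOAL[1]))

def q_learning(episodes=500, shaped=True):
    Q = defaultdict(float)
    for _ in range(episodes):
        s, done = (0, 0), False
        while not done:
            # epsilon-greedy action selection
            if random.random() < 0.1:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: Q[(s, act)])
            s2, r, done = step(s, a)
            if shaped:
                # F(s, a, s') = gamma * phi(s') - phi(s); adding this term
                # densifies the reward while preserving the optimal policy.
                r += GAMMA * phi(s2) - phi(s)
            target = r + (0.0 if done else GAMMA * max(Q[(s2, a2)] for a2 in ACTIONS))
            Q[(s, a)] += ALPHA * (target - Q[(s, a)])
            s = s2
    return Q

With shaped=True, the agent receives informative feedback on every step rather than only at the goal, which is the source of the learning speedups the abstract reports for the learned analogue of phi.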
Date issued: 2007-02-13
Other identifiers: MIT-CSAIL-TR-2007-010
Series/Report no.: Massachusetts Institute of Technology Computer Science and Artificial Intelligence Laboratory