Scalable reward learning from demonstration
Author(s): Michini, Bernard J.; How, Jonathan P.; Cutler, Mark Johnson
Reward learning from demonstration is the task of inferring the intents or goals of an agent from its demonstrations of a task. Inverse reinforcement learning (IRL) methods use the Markov decision process (MDP) framework to learn rewards, but typically scale poorly because they rely on computing optimal value functions. Several key modifications are made to a previously developed Bayesian nonparametric inverse reinforcement learning algorithm that avoid computing an optimal value function and no longer require discretization of the state or action spaces. Experimental results demonstrate that the resulting algorithm scales to larger problems and learns in domains with continuous demonstrations.
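The paper's own algorithm is not reproduced here, but the general idea it builds on — scoring candidate reward functions by how likely they make the demonstrated actions, while estimating action values from sampled rollouts instead of an exact optimal value function — can be illustrated with a toy sketch. Everything below (the chain MDP, the Boltzmann action likelihood, all parameter names) is an illustrative assumption, not the authors' method:

```python
import math
import random

random.seed(0)

# Toy 5-state chain MDP: action 0 moves left, action 1 moves right.
N_STATES, HORIZON, BETA, N_ROLLOUTS = 5, 6, 5.0, 200


def step(s, a):
    """Deterministic chain dynamics, clipped at the ends."""
    return max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))


def q_estimate(s, a, reward):
    """Monte Carlo Q-estimate under a uniformly random rollout policy.

    Rollout sampling stands in for the expensive dynamic-programming
    sweep over the full state space that exact IRL methods require.
    """
    total = 0.0
    for _ in range(N_ROLLOUTS):
        ss = step(s, a)
        ret = reward[ss]
        for _ in range(HORIZON - 1):
            ss = step(ss, random.choice((0, 1)))
            ret += reward[ss]
        total += ret
    return total / N_ROLLOUTS


def log_likelihood(demos, reward):
    """Log-likelihood of (state, action) demonstrations under a
    Boltzmann policy P(a|s) proportional to exp(BETA * Q(s, a))."""
    ll = 0.0
    for s, a in demos:
        qs = [q_estimate(s, b, reward) for b in (0, 1)]
        m = max(BETA * q for q in qs)  # log-sum-exp stabilization
        log_z = m + math.log(sum(math.exp(BETA * q - m) for q in qs))
        ll += BETA * qs[a] - log_z
    return ll


# Demonstrator always moves right, suggesting a goal at the right end.
demos = [(0, 1), (1, 1), (2, 1), (3, 1)]
r_right = [0.0, 0.0, 0.0, 0.0, 1.0]  # candidate: reward at right end
r_left = [1.0, 0.0, 0.0, 0.0, 0.0]   # candidate: reward at left end

# The right-end reward explains the demonstrations far better.
print(log_likelihood(demos, r_right) > log_likelihood(demos, r_left))
```

A Bayesian nonparametric method would additionally partition the demonstrations among multiple candidate subgoal rewards; this sketch only shows the likelihood comparison that such methods repeat many times, which is why avoiding an exact value-function solve at each step matters for scalability.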
Department: Massachusetts Institute of Technology. Aerospace Controls Laboratory; Massachusetts Institute of Technology. Department of Aeronautics and Astronautics
Journal: Proceedings of the 2013 IEEE International Conference on Robotics and Automation
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Citation: Michini, Bernard, Mark Cutler, and Jonathan P. How. “Scalable Reward Learning from Demonstration.” 2013 IEEE International Conference on Robotics and Automation (May 2013).
Version: Author's final manuscript