Nonparametric Bayesian Policy Priors for Reinforcement Learning
Author(s): Doshi-Velez, Finale P.; Wingate, David; Roy, Nicholas; Tenenbaum, Joshua B.
Abstract: We consider reinforcement learning in partially observable domains where the agent can query an expert for demonstrations. Our nonparametric Bayesian approach combines model knowledge, inferred from expert information and independent exploration, with policy knowledge inferred from expert trajectories. We introduce priors that bias the agent towards models with both simple representations and simple policies, resulting in improved policy and model learning.
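The central idea, scoring candidate models by a prior that penalizes both representation complexity and policy complexity, in addition to the data likelihood, can be sketched generically. This is a minimal illustration of the biasing principle only; the function, the additive complexity penalties, and the weights `alpha`/`beta` are assumptions for exposition, not the paper's actual nonparametric inference procedure:

```python
def log_posterior_weight(model_size, policy_size, log_likelihood,
                         alpha=1.0, beta=1.0):
    """Unnormalized log-posterior for a candidate model.

    Illustrative sketch: the log-prior penalizes both the size of the
    model's representation (e.g., number of latent states) and the size
    of the policy it induces, mirroring a prior that favors models that
    are simple AND admit simple policies. alpha and beta are assumed
    complexity weights, not quantities from the paper.
    """
    log_prior = -alpha * model_size - beta * policy_size
    return log_prior + log_likelihood

# Hypothetical candidates: (num_states, policy_size, data log-likelihood)
candidates = [(4, 3, -10.0), (12, 3, -9.5), (4, 10, -9.8)]
weights = [log_posterior_weight(s, p, ll) for s, p, ll in candidates]
best = candidates[weights.index(max(weights))]
# The small model with the small policy wins despite a slightly
# lower likelihood, because both complexity penalties favor it.
```

Under this toy scoring, a marginally better data fit does not overcome the combined penalty for a larger representation or a larger policy, which is the qualitative effect the abstract describes.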
Department: Massachusetts Institute of Technology. Department of Aeronautics and Astronautics; Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences; Massachusetts Institute of Technology. Laboratory for Information and Decision Systems
Proceedings of the 24th Annual Conference on Neural Information Processing Systems (NIPS 2010)
Neural Information Processing Systems Foundation
Citation: Doshi-Velez, Finale, David Wingate, Nicholas Roy, and Joshua Tenenbaum. "Nonparametric Bayesian Policy Priors for Reinforcement Learning." Proceedings of the 24th Annual Conference on Neural Information Processing Systems, NIPS 2010, December 6-9, 2010, Vancouver, British Columbia.
Version: Author's final manuscript