Approximate Dynamic Programming Using Bellman Residual Elimination and Gaussian Process Regression
Author(s): How, Jonathan P.; Bethke, Brett M.
This paper presents an approximate policy iteration algorithm for solving infinite-horizon, discounted Markov decision processes (MDPs) for which a model of the system is available. The algorithm is similar in spirit to Bellman residual minimization methods. However, by using Gaussian process regression with nondegenerate kernel functions as the underlying cost-to-go function approximation architecture, the algorithm is able to explicitly construct cost-to-go solutions for which the Bellman residuals are identically zero at a set of chosen sample states. For this reason, we have named our approach Bellman residual elimination (BRE). Since the Bellman residuals are zero at the sample states, our BRE algorithm can be proven to reduce to exact policy iteration in the limit of sampling the entire state space. Furthermore, the algorithm can automatically optimize the choice of any free kernel parameters and provide error bounds on the resulting cost-to-go solution. Computational results on a classic reinforcement learning problem indicate that the algorithm yields a high-quality policy and cost approximation.
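To make the core idea concrete, here is a minimal sketch (not code from the paper) of eliminating Bellman residuals at sample states with a kernel-based cost-to-go representation. It assumes a hypothetical 5-state chain MDP with a fixed policy, a known transition matrix, and an RBF kernel; representing J(s) as a kernel expansion and forcing the fixed-policy Bellman residual to zero at every sample state reduces to solving a linear system.

```python
import numpy as np

# Hypothetical toy problem (not from the paper): a 5-state chain MDP under a
# fixed policy, with known transition matrix P and stage cost g.
n = 5
states = np.arange(n, dtype=float)   # sample states (here, the whole state space)
alpha = 0.9                          # discount factor

# Fixed-policy transitions: move right with prob 0.8, stay put with prob 0.2.
P = np.zeros((n, n))
for i in range(n):
    P[i, min(i + 1, n - 1)] += 0.8
    P[i, i] += 0.2
g = states / n                       # stage cost g(s)

# Nondegenerate RBF kernel k(s, s'), a common GP covariance choice.
def kernel(a, b, length=1.0):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

# Represent the cost-to-go as J(s) = sum_j lam_j * k(s, s_j) and require the
# Bellman residual  J(s_i) - g(s_i) - alpha * E[J(s') | s_i]  to be exactly
# zero at every sample state s_i.  This yields the linear system A lam = g,
# where A = (I - alpha * P) K and K is the kernel (Gram) matrix.
K = kernel(states, states)
A = K - alpha * P @ K
lam = np.linalg.solve(A, g)

J = K @ lam                          # cost-to-go at the sample states
residual = J - (g + alpha * P @ J)   # fixed-policy Bellman residuals
print(np.max(np.abs(residual)))      # zero up to floating-point error
```

Because `I - alpha * P` is invertible for `alpha < 1` and the RBF Gram matrix `K` is positive definite on distinct points, the system has a unique solution, and the residuals vanish at the sampled states by construction; sampling the entire space, as here, recovers the exact fixed-policy evaluation.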
Department: Massachusetts Institute of Technology. Department of Aeronautics and Astronautics
American Control Conference, 2009. ACC '09
Institute of Electrical and Electronics Engineers
Bethke, B., and J.P. How. “Approximate dynamic programming using Bellman residual elimination and Gaussian process regression.” American Control Conference, 2009. ACC '09. 2009. 745-750. © 2009 Institute of Electrical and Electronics Engineers.
Final published version
INSPEC Accession Number: 10775650