dc.contributor.author | Bethke, Brett M. | |
dc.contributor.author | How, Jonathan P. | |
dc.date.accessioned | 2010-10-05T19:42:03Z | |
dc.date.available | 2010-10-05T19:42:03Z | |
dc.date.issued | 2009-07 | |
dc.date.submitted | 2009-06 | |
dc.identifier.isbn | 978-1-4244-4523-3 | |
dc.identifier.issn | 0743-1619 | |
dc.identifier.other | INSPEC Accession Number: 10775650 | |
dc.identifier.uri | http://hdl.handle.net/1721.1/58878 | |
dc.description.abstract | This paper presents an approximate policy iteration algorithm for solving infinite-horizon, discounted Markov decision processes (MDPs) for which a model of the system is available. The algorithm is similar in spirit to Bellman residual minimization methods. However, by using Gaussian process regression with nondegenerate kernel functions as the underlying cost-to-go function approximation architecture, the algorithm is able to explicitly construct cost-to-go solutions for which the Bellman residuals are identically zero at a set of chosen sample states. For this reason, we have named our approach Bellman residual elimination (BRE). Since the Bellman residuals are zero at the sample states, our BRE algorithm can be proven to reduce to exact policy iteration in the limit of sampling the entire state space. Furthermore, the algorithm can automatically optimize the choice of any free kernel parameters and provide error bounds on the resulting cost-to-go solution. Computational results on a classic reinforcement learning problem indicate that the algorithm yields a high-quality policy and cost approximation. | en_US |
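The abstract above describes the BRE idea at a high level; as a rough illustration only (not the paper's implementation), the Python sketch below shows the core mechanism: represent the cost-to-go as a kernel expansion over sampled states and solve a linear system so the Bellman residual is exactly zero at those samples. The function names, the RBF kernel choice, the small finite-state model, and all parameter values are illustrative assumptions, not taken from the paper.

```python
# Minimal BRE-style policy-evaluation sketch (illustrative assumptions, not
# the paper's algorithm): finite state space, fixed policy given by its
# transition matrix P and stage costs g, and an RBF kernel standing in for
# the paper's Gaussian process / kernel machinery.
import numpy as np

def rbf_kernel(X, Y, length_scale=0.5):
    # k(x, y) = exp(-||x - y||^2 / (2 * length_scale^2))
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * length_scale ** 2))

def bre_policy_evaluation(states, samples, P, g, gamma, length_scale=0.5):
    """Cost-to-go estimate J with zero Bellman residual at the sampled states.

    states  : (N, d) array of all states (finite model, so expectations are exact)
    samples : index array of the sampled states x_i
    P       : (N, N) transition matrix under the fixed policy
    g       : (N,) stage costs under the fixed policy
    gamma   : discount factor in (0, 1)
    """
    # Represent J(x) = sum_i alpha_i * k(x, x_i) over the sampled states.
    K = rbf_kernel(states, states[samples], length_scale)      # (N, n)
    # Bellman residual at sample x_j: J(x_j) - g(x_j) - gamma * (P J)(x_j).
    # Forcing it to zero at every sample gives a linear system in alpha.
    A = K[samples] - gamma * (P @ K)[samples]                  # (n, n)
    alpha = np.linalg.solve(A, g[samples])
    return K @ alpha                                           # J at all states

# Toy usage on a random 5-state chain with every state sampled.
rng = np.random.default_rng(0)
N = 5
states = np.arange(N, dtype=float)[:, None]
P = rng.random((N, N)); P /= P.sum(axis=1, keepdims=True)
g = rng.random(N)
J = bre_policy_evaluation(states, np.arange(N), P, g, gamma=0.95)
# Sampling the entire state space recovers exact policy evaluation,
# consistent with the abstract's limiting claim.
J_exact = np.linalg.solve(np.eye(N) - 0.95 * P, g)
assert np.allclose(J, J_exact)
```

Note the design choice the abstract emphasizes: the residuals at the sample states are zero by construction (solved for exactly), rather than minimized as in Bellman residual minimization methods.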
dc.description.sponsorship | Boeing Aerospace Company | en_US |
dc.description.sponsorship | United States. Air Force Office of Scientific Research (grant FA9550-08-1-0086) | en_US |
dc.description.sponsorship | American Society for Engineering Education | en_US |
dc.language.iso | en_US | |
dc.publisher | Institute of Electrical and Electronics Engineers | en_US |
dc.relation.isversionof | http://dx.doi.org/10.1109/ACC.2009.5160344 | en_US |
dc.rights | Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use. | en_US |
dc.source | IEEE | en_US |
dc.title | Approximate Dynamic Programming Using Bellman Residual Elimination and Gaussian Process Regression | en_US |
dc.type | Article | en_US |
dc.identifier.citation | Bethke, B., and J.P. How. “Approximate dynamic programming using Bellman residual elimination and Gaussian process regression.” American Control Conference (ACC '09), 2009, pp. 745-750. © 2010 IEEE | en_US
dc.contributor.department | Massachusetts Institute of Technology. Department of Aeronautics and Astronautics | en_US |
dc.contributor.approver | How, Jonathan P. | |
dc.contributor.mitauthor | Bethke, Brett M. | |
dc.contributor.mitauthor | How, Jonathan P. | |
dc.relation.journal | Proceedings of the 2009 American Control Conference | en_US
dc.eprint.version | Final published version | en_US |
dc.type.uri | http://purl.org/eprint/type/ConferencePaper | en_US |
dspace.orderedauthors | Bethke, Brett; How, Jonathan P. | en |
dc.identifier.orcid | https://orcid.org/0000-0001-8576-1930 | |
mit.license | PUBLISHER_POLICY | en_US |
mit.metadata.status | Complete | |