Show simple item record

dc.contributor.author: Bethke, Brett M.
dc.contributor.author: How, Jonathan P.
dc.date.accessioned: 2010-10-05T19:42:03Z
dc.date.available: 2010-10-05T19:42:03Z
dc.date.issued: 2009-07
dc.date.submitted: 2009-06
dc.identifier.isbn: 978-1-4244-4523-3
dc.identifier.issn: 0743-1619
dc.identifier.other: INSPEC Accession Number: 10775650
dc.identifier.uri: http://hdl.handle.net/1721.1/58878
dc.description.abstract: This paper presents an approximate policy iteration algorithm for solving infinite-horizon, discounted Markov decision processes (MDPs) for which a model of the system is available. The algorithm is similar in spirit to Bellman residual minimization methods. However, by using Gaussian process regression with nondegenerate kernel functions as the underlying cost-to-go function approximation architecture, the algorithm is able to explicitly construct cost-to-go solutions for which the Bellman residuals are identically zero at a set of chosen sample states. For this reason, we have named our approach Bellman residual elimination (BRE). Since the Bellman residuals are zero at the sample states, our BRE algorithm can be proven to reduce to exact policy iteration in the limit of sampling the entire state space. Furthermore, the algorithm can automatically optimize the choice of any free kernel parameters and provide error bounds on the resulting cost-to-go solution. Computational results on a classic reinforcement learning problem indicate that the algorithm yields a high-quality policy and cost approximation. [en_US]
dc.description.sponsorship: Boeing Aerospace Company [en_US]
dc.description.sponsorship: United States. Air Force Office of Scientific Research (grant FA9550-08-1-0086) [en_US]
dc.description.sponsorship: American Society for Engineering Education [en_US]
dc.language.iso: en_US
dc.publisher: Institute of Electrical and Electronics Engineers [en_US]
dc.relation.isversionof: http://dx.doi.org/10.1109/ACC.2009.5160344 [en_US]
dc.rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use. [en_US]
dc.source: IEEE [en_US]
dc.title: Approximate Dynamic Programming Using Bellman Residual Elimination and Gaussian Process Regression [en_US]
dc.type: Article [en_US]
dc.identifier.citation: Bethke, B., and J.P. How. “Approximate dynamic programming using Bellman residual elimination and Gaussian process regression.” American Control Conference, 2009. ACC '09. 2009. 745-750. © Copyright 2010 [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Department of Aeronautics and Astronautics [en_US]
dc.contributor.approver: How, Jonathan P.
dc.contributor.mitauthor: Bethke, Brett M.
dc.contributor.mitauthor: How, Jonathan P.
dc.relation.journal: Proceedings of the 2009 conference on American Control Conference [en_US]
dc.eprint.version: Final published version [en_US]
dc.type.uri: http://purl.org/eprint/type/ConferencePaper [en_US]
dspace.orderedauthors: Bethke, Brett; How, Jonathan P. [en]
dc.identifier.orcid: https://orcid.org/0000-0001-8576-1930
mit.license: PUBLISHER_POLICY [en_US]
mit.metadata.status: Complete
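
The abstract above outlines the core of the BRE policy-evaluation step: represent the cost-to-go with a kernel (Gaussian process) expansion and choose the weights so that the Bellman residuals vanish at every sampled state. Below is a minimal, illustrative sketch of that idea for a small finite MDP with a known transition model in which every state is sampled; the function names, the RBF kernel, and the plain linear solve (standing in for the paper's full GP machinery with kernel-parameter optimization and error bounds) are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

def rbf_kernel(x, y, lengthscale=1.0):
    """Squared-exponential (RBF) kernel; a nondegenerate kernel as BRE requires."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * lengthscale ** 2))

def evaluate_policy_bre(states, P, g, policy, gamma=0.95, lengthscale=1.0):
    """Policy evaluation: fit a kernel cost-to-go whose Bellman residuals
    are exactly zero at the sampled states (the core BRE idea).

    states : (n, d) array of sampled state feature vectors
    P      : (n_actions, n, n) transition probabilities, P[a, i, j] = Pr(x_j | x_i, a)
    g      : (n, n_actions) stage costs
    policy : (n,) action index chosen in each sampled state
    """
    n = len(states)
    # Gram matrix K[i, j] = k(x_i, x_j)
    K = np.array([[rbf_kernel(states[i], states[j], lengthscale)
                   for j in range(n)] for i in range(n)])
    # Expected next-state kernel values under the fixed policy:
    # EK[i, j] = E[ k(x', x_j) | x = x_i, a = policy(x_i) ]
    P_pi = np.array([P[policy[i], i] for i in range(n)])   # (n, n)
    EK = P_pi @ K
    # With J(x) = sum_j alpha_j k(x, x_j), forcing a zero Bellman residual
    # at each sample gives the linear system (K - gamma * EK) alpha = g_pi.
    g_pi = np.array([g[i, policy[i]] for i in range(n)])
    alpha = np.linalg.solve(K - gamma * EK, g_pi)
    return K @ alpha, alpha          # cost-to-go at the samples, kernel weights

def improve_policy(J, P, g, gamma=0.95):
    """Greedy (cost-minimizing) policy improvement from the current cost-to-go."""
    n_actions = P.shape[0]
    Q = np.stack([g[:, a] + gamma * P[a] @ J for a in range(n_actions)], axis=1)
    return np.argmin(Q, axis=1)
```

Alternating evaluate_policy_bre with improve_policy until the policy stops changing gives an approximate policy iteration loop; because the residuals are zero at every sampled state, sampling the entire state space recovers exact policy iteration, consistent with the claim in the abstract.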

