dc.contributor.author | How, Jonathan P. | |
dc.contributor.author | Bertuccelli, Luca F. | |
dc.contributor.author | Bethke, Brett M. | |
dc.date.accessioned | 2010-10-06T17:10:04Z | |
dc.date.available | 2010-10-06T17:10:04Z | |
dc.date.issued | 2009-07 | |
dc.date.submitted | 2009-06 | |
dc.identifier.isbn | 978-1-4244-4523-3 | |
dc.identifier.issn | 0743-1619 | |
dc.identifier.other | INSPEC Accession Number: 10775888 | |
dc.identifier.uri | http://hdl.handle.net/1721.1/58906 | |
dc.description.abstract | This paper presents a new robust and adaptive framework for Markov decision processes that accounts for errors in the transition probabilities. Robust policies are typically found off-line, but can be extremely conservative when implemented in the real system. Adaptive policies, on the other hand, are specifically suited for on-line implementation, but may display undesirable transient performance as the model is updated through learning. A new method that exploits the individual strengths of the two approaches is presented in this paper. This robust and adaptive framework protects the adaptation process from exhibiting worst-case performance during the model updating, and is shown to converge to the true optimal value function in the limit of a large number of state transition observations. The proposed framework is investigated in simulation and actual flight experiments, and shown to improve transient behavior in the adaptation process and overall mission performance. | en_US
dc.description.sponsorship | United States. Air Force Office of Scientific Research (grant FA9550-08-1-0086) | en_US |
dc.language.iso | en_US | |
dc.publisher | Institute of Electrical and Electronics Engineers | en_US |
dc.relation.isversionof | http://dx.doi.org/10.1109/ACC.2009.5160511 | en_US |
dc.rights | Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use. | en_US |
dc.source | IEEE | en_US |
dc.title | Robust Adaptive Markov Decision Processes in Multi-vehicle Applications | en_US |
dc.type | Article | en_US |
dc.identifier.citation | Bertuccelli, L.F., B. Bethke, and J.P. How. “Robust adaptive Markov Decision Processes in multi-vehicle applications.” American Control Conference (ACC '09), 2009, pp. 1304-1309. ©2009 Institute of Electrical and Electronics Engineers. | en_US
dc.contributor.department | Massachusetts Institute of Technology. Aerospace Controls Laboratory | en_US |
dc.contributor.department | Massachusetts Institute of Technology. Department of Aeronautics and Astronautics | en_US |
dc.contributor.approver | How, Jonathan P. | |
dc.contributor.mitauthor | How, Jonathan P. | |
dc.contributor.mitauthor | Bertuccelli, Luca F. | |
dc.contributor.mitauthor | Bethke, Brett M. | |
dc.relation.journal | American Control Conference, 2009. ACC '09 | en_US |
dc.eprint.version | Final published version | en_US |
dc.type.uri | http://purl.org/eprint/type/JournalArticle | en_US |
eprint.status | http://purl.org/eprint/status/PeerReviewed | en_US |
dspace.orderedauthors | Bertuccelli, Luca F.; Bethke, Brett M.; How, Jonathan P. | en
dc.identifier.orcid | https://orcid.org/0000-0001-8576-1930 | |
mit.license | PUBLISHER_POLICY | en_US |
mit.metadata.status | Complete | |