Show simple item record

dc.contributor.author	Banerjee, Taposh
dc.contributor.author	Liu, Miao
dc.contributor.author	How, Jonathan P
dc.date.accessioned	2018-04-13T20:12:50Z
dc.date.available	2018-04-13T20:12:50Z
dc.date.issued	2017-07
dc.date.submitted	2017-05
dc.identifier.isbn	978-1-5090-5992-8
dc.identifier.uri	http://hdl.handle.net/1721.1/114735
dc.description.abstract	Optimal control in non-stationary Markov decision processes (MDP) is a challenging problem. The aim in such a control problem is to maximize the long-term discounted reward when the transition dynamics or the reward function can change over time. When prior knowledge of the change statistics is available, the standard Bayesian approach to this problem is to reformulate it as a partially observable MDP (POMDP) and solve it using approximate POMDP solvers, which are typically computationally demanding. In this paper, the problem is analyzed through the viewpoint of quickest change detection (QCD), a set of tools for detecting a change in the distribution of a sequence of random variables. Current methods applying QCD to such problems only passively detect changes by following prescribed policies, without optimizing the choice of actions for long-term performance. We demonstrate that ignoring the reward-detection trade-off can cause a significant loss in long-term rewards, and propose a two-threshold switching strategy to address this issue. A non-Bayesian problem formulation is also proposed for scenarios where a Bayesian formulation cannot be defined. The performance of the proposed two-threshold strategy is examined through numerical analysis on a non-stationary MDP task, and the strategy outperforms the state-of-the-art QCD methods in both Bayesian and non-Bayesian settings.	en_US
dc.description.sponsorship	Lincoln Laboratory	en_US
dc.description.sponsorship	Northrop Grumman Corporation	en_US
dc.publisher	Institute of Electrical and Electronics Engineers (IEEE)	en_US
dc.relation.isversionof	http://dx.doi.org/10.23919/ACC.2017.7962986	en_US
dc.rights	Creative Commons Attribution-Noncommercial-Share Alike	en_US
dc.rights.uri	http://creativecommons.org/licenses/by-nc-sa/4.0/	en_US
dc.source	arXiv	en_US
dc.title	Quickest change detection approach to optimal control in Markov decision processes with model changes	en_US
dc.type	Article	en_US
dc.identifier.citation	Banerjee, Taposh, Miao Liu, and Jonathan P. How. “Quickest Change Detection Approach to Optimal Control in Markov Decision Processes with Model Changes.” 2017 American Control Conference (ACC), Seattle, WA, USA, May 2017.	en_US
dc.contributor.department	Massachusetts Institute of Technology. Laboratory for Information and Decision Systems	en_US
dc.contributor.mitauthor	Banerjee, Taposh
dc.contributor.mitauthor	Liu, Miao
dc.contributor.mitauthor	How, Jonathan P
dc.relation.journal	2017 American Control Conference (ACC)	en_US
dc.eprint.version	Original manuscript	en_US
dc.type.uri	http://purl.org/eprint/type/ConferencePaper	en_US
eprint.status	http://purl.org/eprint/status/NonPeerReviewed	en_US
dc.date.updated	2018-03-21T16:35:37Z
dspace.orderedauthors	Banerjee, Taposh; Liu, Miao; How, Jonathan P.	en_US
dspace.embargo.terms	N	en_US
dc.identifier.orcid	https://orcid.org/0000-0002-1648-8325
dc.identifier.orcid	https://orcid.org/0000-0001-8576-1930
mit.license	OPEN_ACCESS_POLICY	en_US
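The abstract above describes a two-threshold switching strategy driven by a quickest change detection statistic. As a toy illustration only (not the paper's algorithm), the sketch below pairs a CUSUM-style change statistic with a two-threshold rule; the threshold values, the "hedge" middle region, and the policy names are all assumptions made for this example.

```python
# Illustrative sketch: a CUSUM-style change statistic with a two-threshold
# switching rule, as a toy analogue of the strategy described in the
# abstract. Thresholds and the hedging behaviour are assumptions, not the
# paper's actual algorithm.

def cusum_update(stat: float, log_likelihood_ratio: float) -> float:
    """One CUSUM step: accumulate evidence for a change, floored at zero."""
    return max(0.0, stat + log_likelihood_ratio)

def choose_policy(stat: float, low: float, high: float) -> str:
    """Two-threshold rule: exploit the pre-change policy while evidence is
    weak, hedge in the intermediate region, commit once evidence is strong."""
    if stat < low:
        return "pre-change"
    if stat < high:
        return "hedge"
    return "post-change"

# Example: the statistic stays near zero while observations favour the
# pre-change model (negative log-likelihood ratios), then climbs once
# they favour the post-change model.
stat = 0.0
for llr in [-0.5, -0.2, 0.8, 0.9, 1.1]:
    stat = cusum_update(stat, llr)
print(choose_policy(stat, low=1.0, high=2.5))  # prints "post-change"
```

The intermediate "hedge" region is what distinguishes a two-threshold rule from a plain stopping rule: between the thresholds the controller can trade immediate reward for faster, more reliable detection.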

