
dc.contributor.author: Cheung, Wang Chi
dc.contributor.author: Simchi-Levi, David
dc.contributor.author: Zhu, Ruihao
dc.date.accessioned: 2021-11-03T17:29:34Z
dc.date.available: 2021-11-03T17:29:34Z
dc.date.issued: 2020
dc.identifier.uri: https://hdl.handle.net/1721.1/137255
dc.description.abstract: We consider undiscounted reinforcement learning (RL) in Markov decision processes (MDPs) under drifting non-stationarity, i.e., both the reward and state transition distributions are allowed to evolve over time, as long as their respective total variations, quantified by suitable metrics, do not exceed certain variation budgets. We first develop the Sliding Window Upper-Confidence bound for Reinforcement Learning with Confidence Widening (SWUCRL2-CW) algorithm and establish its dynamic regret bound when the variation budgets are known. In addition, we propose the Bandit-over-Reinforcement Learning (BORL) algorithm to adaptively tune the SWUCRL2-CW algorithm and achieve the same dynamic regret bound in a parameter-free manner, i.e., without knowing the variation budgets. Notably, learning non-stationary MDPs via the conventional optimistic exploration technique presents a unique challenge absent in existing (non-stationary) bandit learning settings. We overcome this challenge with a novel confidence-widening technique that incorporates additional optimism. [en_US]
dc.language.iso: en
dc.relation.isversionof: https://proceedings.mlr.press/v119/cheung20a.html [en_US]
dc.rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use. [en_US]
dc.source: Proceedings of Machine Learning Research [en_US]
dc.title: Reinforcement Learning for Non-Stationary Markov Decision Processes: The Blessing of (More) Optimism [en_US]
dc.type: Article [en_US]
dc.identifier.citation: Cheung, Wang Chi, Simchi-Levi, David and Zhu, Ruihao. 2020. "Reinforcement Learning for Non-Stationary Markov Decision Processes: The Blessing of (More) Optimism." INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 119, 119.
dc.contributor.department: Massachusetts Institute of Technology. Institute for Data, Systems, and Society
dc.relation.journal: INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 119 [en_US]
dc.eprint.version: Final published version [en_US]
dc.type.uri: http://purl.org/eprint/type/ConferencePaper [en_US]
eprint.status: http://purl.org/eprint/status/NonPeerReviewed [en_US]
dc.date.updated: 2021-10-21T16:30:27Z
dspace.orderedauthors: Cheung, WC; Simchi-Levi, D; Zhu, R [en_US]
dspace.date.submission: 2021-10-21T16:30:29Z
mit.journal.volume: 119 [en_US]
mit.license: PUBLISHER_POLICY
mit.metadata.status: Authority Work and Publication Information Needed [en_US]
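The abstract's core idea — estimate from a sliding window of recent observations (so stale, drifted data is forgotten) and then widen the optimistic confidence bound with extra optimism — can be illustrated with a minimal sketch. This is not the paper's SWUCRL2-CW algorithm; the function name, the Hoeffding-style radius, and the widening parameter `eta` are assumptions chosen for illustration:

```python
import math

def sliding_window_widened_ucb(rewards, window, delta=0.05, eta=0.1):
    """Illustrative sliding-window upper confidence bound with widening.

    Keeps only the most recent `window` observations (forgetting data
    that may predate a distribution drift), adds a Hoeffding-style
    confidence radius, then widens the bound by an extra optimism
    term `eta` (the "confidence widening" idea).
    """
    recent = rewards[-window:]                        # sliding window
    n = max(len(recent), 1)
    mean = sum(recent) / n                            # windowed empirical mean
    radius = math.sqrt(2 * math.log(1 / delta) / n)   # Hoeffding-style radius
    return mean + radius + eta                        # eta = extra optimism

# The widened bound is strictly more optimistic than the plain windowed UCB.
plain = sliding_window_widened_ucb([1, 0, 1, 1], window=4, eta=0.0)
widened = sliding_window_widened_ucb([1, 0, 1, 1], window=4, eta=0.1)
```

The window length and `eta` play the roles that, in the paper, are tuned using the variation budgets (or adaptively by BORL when the budgets are unknown).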

