Reinforcement Learning for Non-Stationary Markov Decision Processes: The Blessing of (More) Optimism
Author(s)
Cheung, Wang Chi; Simchi-Levi, David; Zhu, Ruihao
Publisher Policy
Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
Abstract
We consider un-discounted reinforcement learning (RL) in Markov decision processes (MDPs) under drifting non-stationarity, i.e., both the reward and state transition distributions are allowed to evolve over time, as long as their respective total variations, quantified by suitable metrics, do not exceed certain variation budgets. We first develop the Sliding Window Upper-Confidence bound for Reinforcement Learning with Confidence Widening (SWUCRL2-CW) algorithm, and establish its dynamic regret bound when the variation budgets are known. In addition, we propose the Bandit-over-Reinforcement Learning (BORL) algorithm to adaptively tune the SWUCRL2-CW algorithm to achieve the same dynamic regret bound, but in a parameter-free manner, i.e., without knowing the variation budgets. Notably, learning non-stationary MDPs via the conventional optimistic exploration technique presents a unique challenge absent in existing (non-stationary) bandit learning settings. We overcome the challenge by a novel confidence widening technique that incorporates additional optimism.
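
The confidence widening idea mentioned in the abstract can be sketched concretely: on top of a UCRL2-style confidence radius for the sliding-window transition estimate, an extra widening term is added so the algorithm stays optimistic even as the dynamics drift. The snippet below is a minimal illustrative sketch in Python; the constant in the radius, the window length, and the choice of the widening parameter `eta` are assumptions in the usual UCRL2 style, not the exact quantities from the paper.

```python
import math

def widened_confidence_radius(n_visits, n_states, window, delta, eta):
    """UCRL2-style L1 confidence radius for an empirical transition
    distribution p_hat(.|s, a), estimated from the last `window` steps,
    enlarged by an additional widening term `eta` (extra optimism).

    NOTE: the constants follow the usual UCRL2 form and are illustrative;
    they are not the exact constants from the paper.
    """
    n = max(n_visits, 1)  # avoid division by zero before the first visit
    radius = math.sqrt(2.0 * n_states * math.log(2.0 * window / delta) / n)
    return radius + eta  # confidence widening on top of the standard radius

# Example: a sparsely visited (s, a) pair inside a window of 500 steps.
print(widened_confidence_radius(n_visits=12, n_states=6, window=500,
                                delta=0.05, eta=0.1))
```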
Date issued
2020
Department
Massachusetts Institute of Technology. Institute for Data, Systems, and Society
Journal
INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 119
Citation
Cheung, Wang Chi, Simchi-Levi, David and Zhu, Ruihao. 2020. "Reinforcement Learning for Non-Stationary Markov Decision Processes: The Blessing of (More) Optimism." INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 119, 119.
Version: Final published version