Universal Reinforcement Learning
Author(s)
Farias, Vivek F.; Moallemi, Ciamac C.; Van Roy, Benjamin; Weissman, Tsachy
Download: Farias-2009-Universal Reinforcement Learning.pdf (338.3 KB)
Terms of use
Publisher Policy: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
Abstract
We consider an agent interacting with an unmodeled environment. At each time, the agent makes an observation, takes an action, and incurs a cost. Its actions can influence future observations and costs. The goal is to minimize the long-term average cost. We propose a novel algorithm, known as the active LZ algorithm, for optimal control based on ideas from the Lempel-Ziv scheme for universal data compression and prediction. We establish that, under the active LZ algorithm, if there exists an integer K such that the future is conditionally independent of the past given a window of K consecutive actions and observations, then the average cost converges to the optimum. Experimental results involving the game of Rock-Paper-Scissors illustrate the merits of the algorithm.
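The abstract only sketches the method, so the following Python snippet is an illustrative toy rather than the paper's active LZ algorithm: it grows an LZ78-style parse tree over an opponent's Rock-Paper-Scissors moves, predicts the next move from counts at the current context, and best-responds with epsilon-greedy exploration. All identifiers (LZContextTree, play, the biased opponent) are invented for this sketch; the actual algorithm, per the abstract, parses joint action-observation sequences and couples the resulting probability estimates with dynamic-programming value estimates to minimize long-term average cost, which this sketch omits.

```python
import random
from collections import defaultdict

class LZContextTree:
    """LZ78-style parse tree with next-symbol counts at each node."""

    def __init__(self):
        self.children = {}              # symbol -> child node
        self.counts = defaultdict(int)  # next-symbol counts at this context

    def step(self, symbol):
        """Record `symbol` at this context and descend.

        Returns (child, created): the child node for `symbol`, and
        whether it was newly created. As in the LZ78 incremental
        parse, callers restart from the root after a creation.
        """
        self.counts[symbol] += 1
        created = symbol not in self.children
        if created:
            self.children[symbol] = LZContextTree()
        return self.children[symbol], created

    def predict(self, alphabet):
        """Laplace-smoothed distribution over `alphabet` at this context."""
        total = sum(self.counts[s] for s in alphabet) + len(alphabet)
        return {s: (self.counts[s] + 1) / total for s in alphabet}


MOVES = ("rock", "paper", "scissors")
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def play(n_rounds=20000, epsilon=0.05, seed=0):
    """Win rate against a biased opponent (hypothetical experiment)."""
    rng = random.Random(seed)
    root = LZContextTree()
    node = root
    wins = 0
    for _ in range(n_rounds):
        # Best-respond to the predicted opponent move, with
        # epsilon-greedy exploration so every context keeps being visited.
        dist = node.predict(MOVES)
        if rng.random() < epsilon:
            action = rng.choice(MOVES)
        else:
            action = BEATS[max(dist, key=dist.get)]
        # A stationary, biased opponent that the tree can exploit.
        opponent = rng.choices(MOVES, weights=(0.5, 0.3, 0.2))[0]
        wins += action == BEATS[opponent]
        node, created = node.step(opponent)
        if created:
            node = root  # restart the parse at the root, as in LZ78
    return wins / n_rounds

if __name__ == "__main__":
    print(f"win rate: {play():.3f}")
```

Restarting at the root whenever a new leaf is created mirrors the Lempel-Ziv incremental parse, so deeper contexts (longer windows of past moves) are learned only as their shorter prefixes accumulate visits.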
Date issued
2010-04
Department
Sloan School of Management
Journal
IEEE Transactions on Information Theory
Publisher
Institute of Electrical and Electronics Engineers
Citation
Farias, V.F. et al. “Universal Reinforcement Learning.” IEEE Transactions on Information Theory 56.5 (2010): 2441–2454. © 2010 IEEE
Version: Final published version
Other identifiers
INSPEC Accession Number: 11256626
ISSN
0018-9448
Keywords
value iteration, reinforcement learning, optimal control, dynamic programming, Lempel-Ziv, context tree