Exploring and Learning in Sparse Linear MDPs without Computationally Intractable Oracles
Author(s)
Golowich, Noah; Moitra, Ankur; Rohatgi, Dhruv
Terms of use
Creative Commons Attribution (publisher version with a Creative Commons license)
Abstract
The key assumption underlying linear Markov Decision Processes (MDPs) is that the learner has access to a known feature map φ(x, a) that maps state-action pairs to d-dimensional vectors, and that the rewards and transition probabilities are linear functions in this representation. But where do these features come from? In the absence of expert domain knowledge, a tempting strategy is to use the “kitchen sink” approach and hope that the true features are included in a much larger set of potential features. In this paper we revisit linear MDPs from the perspective of feature selection. In a k-sparse linear MDP, there is an unknown subset S ⊂ [d] of size k containing all the relevant features, and the goal is to learn a near-optimal policy in only poly(k, log d) interactions with the environment. Our main result is the first polynomial-time algorithm for this problem. In contrast, earlier works either made prohibitively strong assumptions that obviated the need for exploration, or required solving computationally intractable optimization problems. Along the way we introduce the notion of an emulator: a succinct approximate representation of the transitions that still suffices for computing certain Bellman backups. Since linear MDPs are a non-parametric model, it is not even obvious whether polynomial-sized emulators exist. We show that they do exist, and moreover can be computed efficiently via convex programming. As a corollary of our main result, we give an algorithm for learning a near-optimal policy in block MDPs whose decoding function is a low-depth decision tree; the algorithm runs in quasi-polynomial time and takes a polynomial number of samples (in the size of the decision tree). This can be seen as a reinforcement learning analogue of classic results in computational learning theory. Furthermore, it gives a …
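For readers unfamiliar with the setting, the following is a minimal sketch of the model the abstract refers to, written in the standard linear MDP notation; the symbols μ, θ, and r are illustrative and not taken from the abstract. In a linear MDP, transitions and rewards factor through the known feature map φ, and in the k-sparse variant only the coordinates in the unknown set S carry signal:

\[
P(x' \mid x, a) = \langle \phi(x, a), \mu(x') \rangle,
\qquad
r(x, a) = \langle \phi(x, a), \theta \rangle,
\]
\[
\operatorname{supp}(\mu(\cdot)) \cup \operatorname{supp}(\theta) \subseteq S,
\qquad |S| = k \ll d,
\]

and the learner's goal is a near-optimal policy using only \(\mathrm{poly}(k, \log d)\) interactions with the environment, rather than \(\mathrm{poly}(d)\).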
Description
STOC ’24, June 24–28, 2024, Vancouver, BC, Canada
Date issued
2024-06-10
Department
Massachusetts Institute of Technology. Department of Mathematics; Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
ACM
Citation
Golowich, Noah, Moitra, Ankur, and Rohatgi, Dhruv. 2024. "Exploring and Learning in Sparse Linear MDPs without Computationally Intractable Oracles." In STOC '24, June 24–28, 2024, Vancouver, BC, Canada. ACM.
Version: Final published version
ISBN
979-8-4007-0383-6