Show simple item record

dc.contributor.advisor: Roozbehani, Mardavij
dc.contributor.advisor: Dahleh, Munther A.
dc.contributor.author: Alharbi, Meshal
dc.date.accessioned: 2024-07-08T18:56:03Z
dc.date.available: 2024-07-08T18:56:03Z
dc.date.issued: 2024-05
dc.date.submitted: 2024-06-06T19:54:18.706Z
dc.identifier.uri: https://hdl.handle.net/1721.1/155510
dc.description.abstract: The problem of sample complexity of online reinforcement learning is often studied in the literature without taking into account any partial knowledge about the system dynamics that could potentially accelerate the learning process. In this thesis, we study the sample complexity of online Q-learning methods when some prior knowledge about the dynamics is available or can be learned efficiently. We focus on systems that evolve according to an additive disturbance model, where the underlying dynamics are described by a deterministic function of states and actions, along with an unknown additive disturbance that is independent of states and actions. In the setting of finite Markov decision processes, we present an optimistic Q-learning algorithm that achieves Õ(√T) regret without polynomial dependence on the number of states and actions under perfect knowledge of the dynamics function. This is in contrast to the typical Õ(√SAT) regret for existing Q-learning methods. Further, if only a noisy estimate of the dynamics function is available, our method can learn an approximately optimal policy in a number of samples that is independent of the cardinalities of the state and action spaces. The sub-optimality gap depends on the approximation error of the noisy estimate, as well as the Lipschitz constant of the corresponding optimal value function. Our approach does not require modeling of the transition probabilities and enjoys the same memory complexity as model-free methods.
dc.publisher: Massachusetts Institute of Technology
dc.rights: In Copyright - Educational Use Permitted
dc.rights: Copyright retained by author(s)
dc.rights.uri: https://rightsstatements.org/page/InC-EDU/1.0/
dc.title: Sample Efficient Reinforcement Learning with Partial Dynamics Knowledge
dc.type: Thesis
dc.description.degree: S.M.
dc.description.degree: S.M.
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.contributor.department: Massachusetts Institute of Technology. Center for Computational Science and Engineering
mit.thesis.degree: Master
thesis.degree.name: Master of Science in Electrical Engineering and Computer Science
thesis.degree.name: Master of Science in Computational Science and Engineering
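The abstract's key structural assumption can be illustrated with a small sketch. This is a hypothetical toy example, not the thesis's actual algorithm: in an additive-disturbance MDP the next state is s' = f(s, a) + w, with the disturbance w independent of (s, a). When f is known, estimating the dynamics reduces to estimating the single disturbance distribution, which is shared by every state-action pair — the intuition behind sample complexity that does not scale with the sizes of the state and action spaces. All names below (the chain MDP, `f`, `p_true`) are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

S, A = 10, 2                        # toy chain MDP: states 0..9, two actions
W = np.array([-1, 0, 1])            # disturbance support (illustrative)
p_true = np.array([0.2, 0.5, 0.3])  # true disturbance law, unknown to learner

def f(s, a):
    """Known deterministic dynamics: action 0 steps left, action 1 steps right."""
    return int(np.clip(s + (1 if a == 1 else -1), 0, S - 1))

def step(s, a):
    """Environment transition: s' = f(s, a) + w, clipped to the state space."""
    w = rng.choice(W, p=p_true)
    return int(np.clip(f(s, a) + w, 0, S - 1))

# 1) Estimate the disturbance distribution from observed residuals
#    w_t = s_{t+1} - f(s_t, a_t). We sample states away from the boundary
#    so clipping never distorts the residual in this simple sketch.
counts = np.zeros(len(W))
for _ in range(5000):
    s = int(rng.integers(2, S - 2))
    a = int(rng.integers(A))
    s_next = step(s, a)
    counts[np.searchsorted(W, s_next - f(s, a))] += 1
p_hat = counts / counts.sum()

# 2) Value iteration using the known f and the estimated disturbance law.
#    Note the estimate p_hat is reused for every (s, a) pair.
r = np.zeros(S); r[S - 1] = 1.0     # reward for being at the right end
gamma = 0.9
Q = np.zeros((S, A))
for _ in range(200):
    V = Q.max(axis=1)
    for s in range(S):
        for a in range(A):
            nxt = np.clip(f(s, a) + W, 0, S - 1).astype(int)
            Q[s, a] = r[s] + gamma * (p_hat * V[nxt]).sum()

policy = Q.argmax(axis=1)           # greedy policy: always step right
```

The sketch uses plain value iteration on the estimated model for clarity; the thesis's optimistic Q-learning method and its regret analysis are not reproduced here.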

