DSpace@MIT
Data-Efficient Offline Reinforcement Learning with Heterogeneous Agents

Author(s)
Alumootil, Varkey
Download: Thesis PDF (1.225 MB)
Advisor
Shah, Devavrat
Terms of use
In Copyright - Educational Use Permitted. Copyright MIT. http://rightsstatements.org/page/InC-EDU/1.0/
Abstract
The performance of state-of-the-art offline and model-based reinforcement learning (RL) algorithms deteriorates significantly under severe data scarcity and in the presence of heterogeneous agents. In this work, we propose a model-based offline RL method for this setting. Using all available data from the various agents, we construct personalized simulators for each individual agent, which are then used to train RL policies. We do so by modeling the transition dynamics of the agents as a low-rank tensor decomposition of latent factors associated with agents, states, and actions. We perform experiments on various benchmark environments and demonstrate improvement over existing offline approaches in the scarce-data regime.
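
One way to make the low-rank modeling step concrete (a minimal sketch; the rank r, the function f, and the latent factors u, v, w below are illustrative notation assumed here, not taken from the thesis): collect the transition dynamics into a tensor indexed by agent a, state s, and action x, and posit a CP-style factorization

\[
f(a, s, x) \;\approx\; \sum_{k=1}^{r} u_a(k)\, v_s(k)\, w_x(k),
\]

where f(a, s, x) stands for agent a's transition dynamics (for example, the expected next state) at the state-action pair (s, x). Under such a factorization each agent's personalized simulator is determined by a low-dimensional latent factor u_a, so data pooled across all agents can be used to estimate the shared state and action factors, which is what makes the approach viable when data per agent is scarce.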
Date issued
2021-06
URI
https://hdl.handle.net/1721.1/139143
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology

Collections
  • Graduate Theses
