DSpace@MIT

Learning to Plan by Learning Rules

Author(s)
Araki, Minoru Brandon
Download: Thesis PDF (51.17 MB)
Advisor
Rus, Daniela
Terms of use
In Copyright - Educational Use Permitted. Copyright MIT. http://rightsstatements.org/page/InC-EDU/1.0/
Abstract
Many environments involve following rules while performing tasks; for example, a chef cooking a dish follows a recipe, and a person driving follows rules of the road. People are naturally fluent with rules: we can learn rules efficiently; we can follow rules; we can interpret rules and explain them to others; and we can rapidly adjust to modified rules such as a new recipe without needing to relearn everything from scratch. By contrast, deep reinforcement learning (DRL) algorithms are ill-suited to learning policies in rule-based environments, as satisfying rules often involves executing lengthy tasks with sparse rewards. Furthermore, learned DRL policies are difficult, if not impossible, to interpret and are not composable. The aim of this thesis is to develop a reinforcement learning framework for rule-based environments that can efficiently learn policies that are interpretable, satisfying, and composable. We achieve interpretability by representing rules as automata or Linear Temporal Logic (LTL) formulas in a hierarchical Markov Decision Process (MDP). We achieve satisfaction by planning over the hierarchical MDP using a modified version of value iteration. We achieve composability by building on a hierarchical reinforcement learning (HRL) framework called the options framework, in which low-level options can be composed arbitrarily. Lastly, we achieve data-efficient learning by integrating our HRL framework into a Bayesian model that can infer a distribution over LTL formulas given a low-level environment and a set of expert trajectories. We demonstrate the effectiveness of our approach via a number of rule-learning and planning experiments in both simulated and real-world environments.
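
The planning step described in the abstract, satisfying a rule by running value iteration over a hierarchical MDP built from an automaton, can be illustrated with a small toy. The following Python sketch is not code from the thesis; it is a minimal product-MDP example under assumed names and a made-up 3x3 grid, showing value iteration over the product of a grid world and a hand-coded three-state automaton for a rule of the form "reach the goal while never entering the hazard" (roughly F goal & G !hazard in LTL).

# Minimal illustrative sketch (assumptions, not the thesis's implementation):
# value iteration on the product of a 3x3 grid MDP and a three-state rule
# automaton. Grid layout, automaton encoding, and all names are invented here.

from itertools import product

SIZE = 3                               # 3x3 grid
GOAL, HAZARD = (2, 2), (1, 1)          # labelled cells
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]

# Rule automaton states: 0 = task pending, 1 = accepted (goal reached),
# 2 = violated (hazard entered). States 1 and 2 are absorbing.
def automaton_step(q, cell):
    if q == 2 or cell == HAZARD:
        return 2
    if q == 1 or cell == GOAL:
        return 1
    return 0

GAMMA = 0.95
cells = list(product(range(SIZE), range(SIZE)))
V = {(cell, q): 0.0 for cell in cells for q in range(3)}

# Value iteration on the product state (cell, automaton state). A reward of 1
# is granted on the transition into the accepting state, so maximizing
# discounted return means satisfying the rule as quickly as possible.
for _ in range(100):
    for cell in cells:
        best = 0.0
        for dr, dc in MOVES:
            nxt = (min(max(cell[0] + dr, 0), SIZE - 1),
                   min(max(cell[1] + dc, 0), SIZE - 1))
            q_next = automaton_step(0, nxt)
            reward = 1.0 if q_next == 1 else 0.0
            best = max(best, reward + GAMMA * V[(nxt, q_next)])
        V[(cell, 0)] = best

# The value at the start cell reflects the discounted length of the shortest
# rule-satisfying path, which detours around the hazard at (1, 1).
print(round(V[((0, 0), 0)], 3))        # ~0.857 = 0.95 ** 3

In the full framework the abstract describes, the automaton would come from an LTL formula inferred by the Bayesian model, and the low-level actions would be composable options learned by HRL; this toy only shows the product-construction and value-iteration mechanics.
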
Date issued
2021-09
URI
https://hdl.handle.net/1721.1/139998
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology

Collections
  • Doctoral Theses
