Planning under uncertainty with Bayesian nonparametric models
Author(s)
Klein, Robert H. (Robert Henry)
Other Contributors
Massachusetts Institute of Technology. Department of Aeronautics and Astronautics.
Advisor
Jonathan P. How.
Abstract
Autonomous agents are increasingly called upon to perform challenging tasks in complex settings with little information about the underlying environment dynamics. To complete such tasks successfully, an agent must learn from its interactions with the environment. Many existing techniques make assumptions about problem structure to remain tractable, such as limiting the class of possible models or fixing the model's expressive power in advance. Complicating matters, in many scenarios the environment exhibits multiple underlying sets of dynamics; in these cases, most existing approaches either assume the number of underlying models is known a priori or ignore the possibility of multiple models altogether. Bayesian nonparametric (BNP) methods provide the flexibility to address both problems, but their high inference complexity has limited their adoption.

This thesis provides several methods for tractably planning under uncertainty with BNP models. The first is Simultaneous Clustering on Representation Expansion (SCORE), a method for learning Markov Decision Processes (MDPs) that exhibit an underlying multiple-model structure; SCORE addresses the co-dependence between observation clustering and model expansion. The second contribution is a real-time, non-myopic, risk-aware planning solution for camera surveillance scenarios in which the number of underlying target behaviors and their parameterization are unknown. A BNP model is used to capture target behaviors, and a camera-allocation solution is presented that reduces uncertainty only as needed to accomplish the mission. The final contribution is RLPy, a reinforcement learning (RL) software framework intended to promote collaboration and speed innovation in the RL community. RLPy provides a library of learning agents, function approximators, and problem domains for performing RL experiments, along with a suite of tools that automate tasks throughout the experiment pipeline, from initial prototyping through hyperparameter optimization, parallelization of large-scale experiments, and final publication-ready plotting.
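The flexibility the abstract refers to, namely letting the number of clusters grow with the data rather than fixing it a priori, is commonly obtained from a Chinese Restaurant Process (CRP) prior, a standard BNP construction. The sketch below is a generic illustration of that idea, not code from the thesis: each observation joins an existing cluster with probability proportional to that cluster's size, or opens a new cluster with probability proportional to a concentration parameter `alpha`.

```python
import random

def crp_partition(n, alpha, seed=0):
    """Sample a partition of n items from a Chinese Restaurant Process.

    The number of clusters is not specified in advance; it grows with
    the data, which is the flexibility BNP models offer when the number
    of underlying behaviors or models is unknown a priori.
    """
    rng = random.Random(seed)
    counts = []   # current size of each cluster
    labels = []   # cluster assignment of each item
    for i in range(n):
        # Item i joins cluster k with weight counts[k], or starts a
        # new cluster with weight alpha.
        weights = counts + [alpha]
        r = rng.uniform(0, sum(weights))
        acc = 0.0
        for k, w in enumerate(weights):
            acc += w
            if r <= acc:
                break
        if k == len(counts):
            counts.append(1)   # open a new cluster
        else:
            counts[k] += 1
        labels.append(k)
    return labels
```

Larger values of `alpha` yield more clusters on average; inference in a full BNP model amounts to reasoning over such partitions given observed data.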
Description
Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2014. Cataloged from PDF version of thesis. Includes bibliographical references (pages 111-119).
Date issued
2014
Department
Massachusetts Institute of Technology. Department of Aeronautics and Astronautics
Publisher
Massachusetts Institute of Technology
Keywords
Aeronautics and Astronautics.