Show simple item record

dc.contributor.advisor	Jonathan P. How.	en_US
dc.contributor.author	Grande, Robert Conlin	en_US
dc.contributor.other	Massachusetts Institute of Technology. Department of Aeronautics and Astronautics.	en_US
dc.date.accessioned	2014-10-08T15:21:42Z
dc.date.available	2014-10-08T15:21:42Z
dc.date.copyright	2014	en_US
dc.date.issued	2014	en_US
dc.identifier.uri	http://hdl.handle.net/1721.1/90670
dc.description	Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2014.	en_US
dc.description	Cataloged from PDF version of thesis.	en_US
dc.description	Includes bibliographical references (pages 150-160).	en_US
dc.description.abstract	Most existing Gaussian Process (GP) regression algorithms assume a single generative model, leading to poor performance when data are nonstationary, i.e., generated by multiple switching processes. Existing methods for GP regression over nonstationary data include clustering and changepoint detection algorithms. However, these methods require significant computation, do not come with provable guarantees on correctness and speed, and most work only in batch settings. This thesis presents an efficient online GP framework, GP-NBC, that leverages the generalized likelihood ratio test to detect changepoints and learn multiple Gaussian Process models from streaming data. Furthermore, GP-NBC can quickly recognize and reuse previously seen models. The algorithm is shown to be theoretically sample efficient in terms of bounding the number of mistaken predictions. Empirical results on two real-world datasets and one synthetic dataset show that GP-NBC outperforms state-of-the-art methods for nonstationary regression in both regression error and computational efficiency. The second part of the thesis introduces a Reinforcement Learning (RL) algorithm, UCRL-GP-CPD, for multi-task RL when the reward function is nonstationary. First, a novel algorithm, UCRL-GP, is introduced for stationary reward functions. UCRL-GP is then combined with GP-NBC to create UCRL-GP-CPD, an algorithm for nonstationary reward functions. Unlike previous work in the literature, UCRL-GP-CPD does not make distributional assumptions about task generation, does not assume changepoint times are known, and does not assume that all tasks have been experienced a priori in a training phase. It is proven that UCRL-GP-CPD is sample efficient in the stationary case, detects changepoints in the environment with high probability, and is theoretically guaranteed to prevent negative transfer. UCRL-GP-CPD is demonstrated empirically on a variety of simulated and real domains. (An illustrative sketch of the changepoint-detection idea follows this record.)	en_US
dc.description.statementofresponsibility	by Robert Conlin Grande.	en_US
dc.format.extent	160 pages	en_US
dc.language.iso	eng	en_US
dc.publisher	Massachusetts Institute of Technology	en_US
dc.rights	M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission.	en_US
dc.rights.uri	http://dspace.mit.edu/handle/1721.1/7582	en_US
dc.subject	Aeronautics and Astronautics.	en_US
dc.title	Computationally efficient Gaussian Process changepoint detection and regression	en_US
dc.type	Thesis	en_US
dc.description.degree	S.M.	en_US
dc.contributor.department	Massachusetts Institute of Technology. Department of Aeronautics and Astronautics
dc.identifier.oclc	890462047	en_US
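
A minimal sketch of the changepoint-detection idea described in the abstract, assuming a sliding-window generalized likelihood ratio (GLR) test over scikit-learn GP models; the window size, stride, threshold, and RBF-plus-noise kernel are illustrative choices rather than the GP-NBC settings from the thesis. It compares how well a GP fit only to the most recent window explains that window versus the GP for the current segment, and declares a changepoint when the log-likelihood ratio exceeds a threshold.

# Illustrative sketch only: sliding-window GLR changepoint detection with GPs.
# Window, stride, threshold, and kernel are assumptions, not the thesis's values.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def log_likelihood(gp, X, y):
    # Gaussian predictive log-likelihood of (X, y) under a fitted GP.
    mu, std = gp.predict(X, return_std=True)
    var = std ** 2 + 1e-9
    return float(np.sum(-0.5 * np.log(2.0 * np.pi * var) - 0.5 * (y - mu) ** 2 / var))

def detect_changepoints(X, y, window=20, stride=5, threshold=30.0):
    # Flag a changepoint when a GP fit only to the latest window explains that
    # window much better than the GP for the current segment does.
    kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
    changepoints = []
    start, t = 0, 2 * window
    while t <= len(X):
        current = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        current.fit(X[start:t - window], y[start:t - window])    # current segment model
        Xw, yw = X[t - window:t], y[t - window:t]                # most recent window
        candidate = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        candidate.fit(Xw, yw)                                    # candidate model
        glr = log_likelihood(candidate, Xw, yw) - log_likelihood(current, Xw, yw)
        if glr > threshold:
            changepoints.append(t - window)      # new generative process starts here
            start = t - window                   # begin a fresh segment
            t = start + 2 * window
        else:
            t += stride
    return changepoints

# Usage example: the generative process switches from sin to cos mid-stream.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 10.0, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(0.0, 0.1, 200)
y[100:] = np.cos(X[100:]).ravel() + rng.normal(0.0, 0.1, 100)
print(detect_changepoints(X, y))   # expect one detected changepoint near index 100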

