
dc.contributor.author: Ayton, Benjamin James
dc.contributor.author: Williams, Brian C
dc.contributor.author: Camilli, Richard
dc.date.accessioned: 2021-11-04T16:44:09Z
dc.date.available: 2021-11-04T16:44:09Z
dc.date.issued: 2019-07
dc.identifier.uri: https://hdl.handle.net/1721.1/137367
dc.description.abstract: © 2019, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. In autonomous exploration, a mobile agent must adapt to new measurements to seek high reward, but disturbances cause a probability of collision that must be traded off against expected reward. This paper considers an autonomous agent tasked with maximizing measurements from a Gaussian Process while subject to unbounded disturbances. We seek an adaptive policy in which the maximum allowed probability of failure is constrained as a function of the expected reward. The policy is found using an extension to Monte Carlo Tree Search (MCTS) which bounds the probability of failure. We apply MCTS to a sequence of approximating problems, which allows constraint-satisfying actions to be found in an anytime manner. Our innovation lies in defining the approximating problems and replanning strategy such that the probability-of-failure constraint is guaranteed to be satisfied over the true policy. The approach does not need to plan for all measurements explicitly or constrain planning based only on the measurements that were observed. To the best of our knowledge, our approach is the first to enforce probability-of-failure constraints in adaptive sampling. Through experiments on real bathymetric data and simulated measurements, we show our algorithm allows an agent to take dangerous actions only when the reward justifies the risk. We then verify through Monte Carlo simulations that failure bounds are satisfied. [en_US]
dc.language.iso: en
dc.publisher: Association for the Advancement of Artificial Intelligence (AAAI) [en_US]
dc.relation.isversionof: http://dx.doi.org/10.1609/AAAI.V33I01.33017511 [en_US]
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike [en_US]
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/ [en_US]
dc.source: MIT web domain [en_US]
dc.title: Measurement Maximizing Adaptive Sampling with Risk Bounding Functions [en_US]
dc.type: Article [en_US]
dc.identifier.citation: Ayton, Benjamin James, Williams, Brian C and Camilli, Richard. 2019. "Measurement Maximizing Adaptive Sampling with Risk Bounding Functions." Proceedings of the AAAI Conference on Artificial Intelligence, 33.
dc.contributor.department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory [en_US]
dc.contributor.department: Woods Hole Oceanographic Institution [en_US]
dc.relation.journal: Proceedings of the AAAI Conference on Artificial Intelligence [en_US]
dc.eprint.version: Author's final manuscript [en_US]
dc.type.uri: http://purl.org/eprint/type/ConferencePaper [en_US]
eprint.status: http://purl.org/eprint/status/NonPeerReviewed [en_US]
dc.date.updated: 2021-05-05T12:50:25Z
dspace.orderedauthors: Ayton, B; Williams, B; Camilli, R [en_US]
dspace.date.submission: 2021-05-05T12:50:26Z
mit.journal.volume: 33 [en_US]
mit.license: OPEN_ACCESS_POLICY
mit.metadata.status: Publication Information Needed [en_US]

