Provably Safe Robot Navigation with Obstacle Uncertainty
Author(s)
Axelrod, Brian; Kaelbling, Leslie; Lozano-Perez, Tomas
Open Access Policy
Terms of use
Creative Commons Attribution-Noncommercial-Share Alike
Abstract
As drones and autonomous cars become more widespread, it is increasingly important that robots operate safely under realistic conditions. Because real systems receive noisy information, robots must plan navigation using estimates of the environment. Efficiently guaranteeing that the resulting motion plans are safe under these circumstances has proved difficult. We examine how to guarantee that a trajectory or policy is safe given only imperfect observations of the environment. We study the implications of various mathematical formalisms of safety and arrive at a notion of safety for a long-term execution, even when conditioned on observational information. We present efficient algorithms that can prove that trajectories or policies are safe with much tighter bounds than in previous work. Notably, the complexity of the environment does not affect our method's ability to evaluate whether a trajectory or policy is safe. We then use these safety-checking methods to design a safe variant of the RRT planning algorithm.
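The abstract describes composing a probabilistic safety check with an RRT-style planner. The sketch below is only a rough illustration of that style of check, not the paper's algorithm: it assumes a 2-D workspace, uncertain obstacles modeled as half-planes a·x ≤ b with Gaussian-distributed offsets b, and a fixed total risk budget epsilon enforced by a union bound; the names Obstacle, point_safe, edge_safe, and safe_rrt are invented for this example.

```python
import math
import random

class Obstacle:
    """Half-plane obstacle a.x <= b with uncertain offset b ~ N(mu_b, sigma_b^2)."""
    def __init__(self, a, mu_b, sigma_b):
        self.a, self.mu_b, self.sigma_b = a, mu_b, sigma_b

    def collision_prob(self, x):
        # P[a . x <= b] when b is Gaussian: Phi((mu_b - a.x) / sigma_b)
        ax = self.a[0] * x[0] + self.a[1] * x[1]
        return 0.5 * (1.0 + math.erf((self.mu_b - ax) / (self.sigma_b * math.sqrt(2.0))))

def point_safe(x, obstacles, epsilon):
    # Union bound: the summed collision probability must stay below the risk budget.
    return sum(o.collision_prob(x) for o in obstacles) < epsilon

def edge_safe(p, q, obstacles, epsilon, n_checks=10):
    # Check interpolated points along the edge; a rigorous method would bound
    # the whole segment rather than sample it.
    return all(
        point_safe((p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])), obstacles, epsilon)
        for t in (i / n_checks for i in range(n_checks + 1))
    )

def safe_rrt(start, goal, obstacles, epsilon, iters=500, step=0.5, goal_tol=0.5):
    """Grow an RRT, adding only edges that pass the probabilistic safety check."""
    nodes, parent = [start], {0: None}
    for _ in range(iters):
        sample = goal if random.random() < 0.1 else (random.uniform(-10, 10), random.uniform(-10, 10))
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        near = nodes[i]
        d = math.dist(near, sample)
        if d == 0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if edge_safe(near, new, obstacles, epsilon):
            parent[len(nodes)] = i
            nodes.append(new)
            if math.dist(new, goal) < goal_tol:
                # Reconstruct the path back to the start node.
                path, j = [], len(nodes) - 1
                while j is not None:
                    path.append(nodes[j])
                    j = parent[j]
                return path[::-1]
    return None

if __name__ == "__main__":
    # One uncertain "wall" occupying roughly the region x <= -3.
    obstacles = [Obstacle(a=(1.0, 0.0), mu_b=-3.0, sigma_b=0.5)]
    print(safe_rrt(start=(0.0, 0.0), goal=(5.0, 5.0), obstacles=obstacles, epsilon=0.05))
```

The key design point this sketch tries to convey is that the safety test depends only on the obstacle estimates and the risk budget, so the planner itself is a standard RRT loop with the collision check swapped for a probabilistic one.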
Date issued
2017-07-12
Department
Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory; Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Robotics: Science and Systems Foundation
Citation
Axelrod, Brian, Kaelbling, Leslie and Lozano-Perez, Tomas. 2017. "Provably Safe Robot Navigation with Obstacle Uncertainty."
Version: Author's final manuscript