Provably safe robot navigation with obstacle uncertainty
Author(s)
Axelrod, Brian; Kaelbling, Leslie Pack; Lozano-Pérez, Tomás
Abstract
As drones and autonomous cars become more widespread, it is increasingly important that robots can operate safely under realistic conditions. Because real systems receive noisy observations, robots must plan navigation from estimates of the environment, and efficiently guaranteeing that the resulting motion plans are safe has proved difficult. We examine how to guarantee that a trajectory or policy has collision probability at most ϵ (is ϵ-safe) given only imperfect observations of the environment. We examine the implications of various mathematical formalisms of safety and arrive at a notion of safety for a long-term execution, even when conditioned on observational information. We explore the idea of shadows, which generalize the notion of a confidence set to estimated shapes, and present a theorem relating shadows to their classical statistical counterparts, such as confidence and credible sets. We present efficient algorithms that use shadows to prove that trajectories or policies are safe with much tighter bounds than in previous work. Notably, the complexity of the environment does not affect our method's ability to evaluate whether a trajectory or policy is safe. We then use these safety-checking methods to design a safe variant of the rapidly exploring random tree (RRT) planning algorithm.
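To make the abstract's central idea concrete, the sketch below illustrates one way a shadow-based ϵ-safety check could work. It is a minimal illustration under strong assumptions of our own, not the paper's construction: each obstacle is modeled as a disc whose radius estimate has sub-Gaussian noise of known scale, the shadow is the (1 − δ)-confidence disc obtained from a standard tail bound, and the budget ϵ is split across obstacles with a union bound. The function name and obstacle representation are hypothetical.

```python
import numpy as np

def is_trajectory_eps_safe(trajectory, obstacles, eps):
    """Certify that a discretized trajectory is eps-safe under a toy model.

    trajectory: (T, 2) array of waypoints (checking waypoints only is a
                simplification; a full check would also cover the segments).
    obstacles:  list of (center, mean_radius, sigma) tuples; each obstacle is
                a disc whose estimated radius has sub-Gaussian noise with
                scale sigma (a modeling assumption for this sketch).
    eps:        total allowed collision probability.
    """
    # Union bound: allocate an equal share of the risk budget per obstacle.
    delta = eps / len(obstacles)
    for center, mean_radius, sigma in obstacles:
        # Shadow: a disc containing the true obstacle with probability
        # >= 1 - delta, from the sub-Gaussian tail bound
        # P(r > mean + sigma * sqrt(2 ln(1/delta))) <= delta.
        shadow_radius = mean_radius + sigma * np.sqrt(2.0 * np.log(1.0 / delta))
        dists = np.linalg.norm(trajectory - np.asarray(center), axis=1)
        if np.any(dists <= shadow_radius):
            return False  # a waypoint enters this shadow; cannot certify
    return True  # all shadows avoided => collision probability <= eps

# Example: two uncertain disc obstacles, a straight-line trajectory.
traj = np.linspace([0.0, 0.0], [10.0, 0.0], num=50)
obs = [((5.0, 2.0), 1.0, 0.3), ((8.0, -2.5), 1.2, 0.2)]
print(is_trajectory_eps_safe(traj, obs, eps=0.01))
```

In the spirit of the abstract, a safe RRT variant could call a check like this on each candidate tree extension, accepting only edges whose certified collision probability stays within the overall budget ϵ.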
Date issued
2018
Department
Massachusetts Institute of Technology. Department of Mechanical Engineering; Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory; Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Journal
International Journal of Robotics Research
Publisher
SAGE Publications