Safe Visual Navigation via Deep Learning and Novelty Detection
Author(s)
Richter, Charles Andrew; Roy, Nicholas
Download: p64.pdf (3.430Mb)
Terms of use
Open Access Policy: Creative Commons Attribution-Noncommercial-Share Alike
Abstract
Robots that use learned perceptual models in the real world must be able to safely handle cases where they are forced to make decisions in scenarios that are unlike any of their training examples. However, state-of-the-art deep learning methods are known to produce erratic or unsafe predictions when faced with novel inputs. Furthermore, recent ensemble, bootstrap and dropout methods for quantifying neural network uncertainty may not efficiently provide accurate uncertainty estimates when queried with inputs that are very different from their training data. Rather than unconditionally trusting the predictions of a neural network for unpredictable real-world data, we use an autoencoder to recognize when a query is novel, and revert to a safe prior behavior. With this capability, we can deploy an autonomous deep learning system in arbitrary environments, without concern for whether it has received the appropriate training. We demonstrate our method with a vision-guided robot that can leverage its deep neural network to navigate 50% faster than a safe baseline policy in familiar types of environments, while reverting to the prior behavior in novel environments so that it can safely collect additional training data and continually improve. A video illustrating our approach is available at: http://groups.csail.mit.edu/rrg/videos/safe visual navigation.
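The abstract describes gating a learned policy with an autoencoder-based novelty check: if a camera image reconstructs poorly, it is treated as out-of-distribution and the robot reverts to its safe prior behavior. The following is a minimal sketch of that gating idea, not the authors' implementation; the network sizes, the names (ConvAutoencoder, choose_action, novelty_threshold, safe_prior_action), and the use of mean squared reconstruction error as the novelty score are illustrative assumptions.

```python
# Sketch of autoencoder-gated action selection (illustrative, not the paper's code).
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Small convolutional autoencoder over single-channel camera images."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def reconstruction_error(autoencoder, image):
    """Per-image mean squared reconstruction error, used here as a novelty score."""
    with torch.no_grad():
        recon = autoencoder(image)
    return torch.mean((recon - image) ** 2).item()

def choose_action(autoencoder, policy_net, image, novelty_threshold, safe_prior_action):
    """Trust the learned policy only when the input resembles the training data."""
    if reconstruction_error(autoencoder, image) > novelty_threshold:
        # Novel scene: fall back to the conservative prior behavior.
        return safe_prior_action
    with torch.no_grad():
        return policy_net(image)
```

In this sketch the autoencoder would be trained on the same images as the policy network, and the threshold chosen from reconstruction errors observed on held-out training data, so that familiar scenes pass through to the fast learned policy while unfamiliar ones trigger the safe fallback.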
Date issued
2017-07
Department
Massachusetts Institute of Technology. Department of Aeronautics and Astronautics
Journal
Robotics: Science and Systems XIII
Publisher
Robotics: Science and Systems Foundation
Citation
Richter, Charles, and Nicholas Roy. “Safe Visual Navigation via Deep Learning and Novelty Detection.” Robotics: Science and Systems XIII (July 12, 2017).
Version: Author's final manuscript
ISBN
978-0-9923747-3-0