Show simple item record

dc.contributor.advisor: Nicholas Roy
dc.contributor.author: Richter, Charles Andrew
dc.contributor.other: Massachusetts Institute of Technology. Department of Aeronautics and Astronautics
dc.date.accessioned: 2017-12-05T19:13:40Z
dc.date.available: 2017-12-05T19:13:40Z
dc.date.copyright: 2017
dc.date.issued: 2017
dc.identifier.uri: http://hdl.handle.net/1721.1/112457
dc.description: Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2017.
dc.description: Cataloged from PDF version of thesis.
dc.description: Includes bibliographical references (pages 165-175).
dc.description.abstract:
In this thesis, we explore the problem of high-speed autonomous navigation for a dynamic mobile robot in unknown environments. Our objective is to navigate from start to goal in minimum time, given no prior knowledge of the map, nor any explicit knowledge of the environment distribution. Faced with this challenge, most practical receding-horizon navigation methods simply restrict their action choices to the known portions of the map, and ignore the effects that future observations will have on their map knowledge, sacrificing performance as a result. In this thesis, we overcome these limitations by efficiently extending the robot's reasoning into unknown parts of the environment through supervised learning. We predict key contributors to the navigation cost before the relevant portions of the environment have been observed, using training examples from similar planning scenarios of interest. Our first contribution is to develop a model of collision probability to predict the outcomes of actions that extend beyond the perceptual horizon. We use this collision probability model as a data-driven replacement for conventional safety constraints in a receding-horizon planner, resulting in collision-free navigation at speeds up to twice as fast as conventional planners. We make these predictions using a Bayesian approach, leveraging training data for performance in familiar situations, and automatically reverting to safe prior behavior in novel situations for which our model is untrained. Our second contribution is to develop a model of future measurement utility, efficiently enabling information-gathering behaviors that can extend the robot's visibility far into unknown regions of the environment, thereby lengthening the perceptual horizon and resulting in faster navigation even under conventional safety constraints. Our third contribution is to adapt our collision prediction methods to operate on raw camera images, using deep neural networks. By making predictions directly from images, we take advantage of rich appearance-based information well beyond the range to which dense, accurate environment geometry can be reliably estimated. Pairing this neural network with novelty detection and a self-supervised labeling technique, we show that we can deploy our system initially with no training, and it will continually improve with experience and expand the set of environment types with which it is familiar.
dc.description.statementofresponsibility: by Charles Andrew Richter
dc.format.extent: 175 pages
dc.language.iso: eng
dc.publisher: Massachusetts Institute of Technology
dc.rights: MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source, but further reproduction or distribution in any format is prohibited without written permission.
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582
dc.subject: Aeronautics and Astronautics
dc.title: Autonomous navigation in unknown environments using machine learning
dc.type: Thesis
dc.description.degree: Ph. D.
dc.contributor.department: Massachusetts Institute of Technology. Department of Aeronautics and Astronautics
dc.identifier.oclc: 1010818089
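
The abstract describes replacing hand-coded safety constraints in a receding-horizon planner with a learned collision-probability model that reverts to a conservative prior in unfamiliar situations. The sketch below is a minimal, hypothetical illustration of that idea and is not the thesis code: candidate actions are scored by travel time plus a risk penalty, and the collision probability comes from a simple Beta-Bernoulli estimate over nearby training examples, so sparse data pushes the estimate back toward a cautious prior. All function names, features, and parameters here are illustrative assumptions.

import math
import random

def _simulate_outcome(speed, clearance):
    # Toy data generator (assumption): faster actions with less clearance
    # from known obstacles collide more often.
    return random.random() < 0.8 * (speed / 10.0) * (1.0 - clearance / 5.0)

# Training set: (feature vector, collided?) pairs from earlier runs.
# Features here are (commanded speed [m/s], clearance to nearest obstacle [m]).
TRAIN = []
for _ in range(200):
    s = random.uniform(0.0, 10.0)
    c = random.uniform(0.0, 5.0)
    TRAIN.append(((s, c), _simulate_outcome(s, c)))

# Conservative Beta prior: with no relevant training data, assume collision
# is likely, so the planner falls back to slow, safe behavior when novel.
ALPHA0, BETA0 = 4.0, 1.0
RADIUS = 1.0  # neighborhood size in feature space

def collision_probability(features):
    """Posterior mean of collision probability for one candidate action."""
    hits = 0
    misses = 0
    for x, collided in TRAIN:
        if math.dist(x, features) < RADIUS:  # crude hard-window kernel
            if collided:
                hits += 1
            else:
                misses += 1
    # Beta-Bernoulli posterior mean; dominated by the prior when data is sparse.
    return (ALPHA0 + hits) / (ALPHA0 + BETA0 + hits + misses)

def plan_step(candidate_actions, collision_penalty=100.0):
    """Pick the action minimizing expected cost = travel time + risk penalty."""
    best, best_cost = None, float("inf")
    for action in candidate_actions:
        p = collision_probability(action["features"])
        cost = action["time_to_goal"] + collision_penalty * p
        if cost < best_cost:
            best, best_cost = action, cost
    return best

if __name__ == "__main__":
    actions = [
        {"name": "fast straight", "time_to_goal": 5.0,  "features": (9.0, 0.5)},
        {"name": "moderate arc",  "time_to_goal": 7.0,  "features": (5.0, 2.0)},
        {"name": "slow and safe", "time_to_goal": 12.0, "features": (1.0, 4.0)},
    ]
    print("chosen action:", plan_step(actions)["name"])

In this toy version, an aggressive action far from any training data is penalized almost as heavily as a known-bad one, which is the reverting-to-prior behavior the abstract attributes to the Bayesian model; the thesis itself develops this idea with far richer features, including predictions made directly from camera images.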

