Guiding Navigation of Unknown Environments with Distant Visual Cues
Author(s)
Fahnestock, Ethan Kendall
Advisor
Roy, Nicholas
Abstract
While navigating unknown environments, robots rely primarily on proximate features to guide decision making, such as depth information from lidar or stereo used to build a costmap, or local semantic information from images. The limited range of these features can result in poor robot behavior when the assumptions motion planning makes about map costs beyond that range misguide the robot. Integrating “far-field” image features, which originate beyond these proximate features, into the mapping pipeline promises more intelligent and aware navigation through unknown terrain. Navigating with far-field features requires overcoming key challenges. Because far-field features are typically too distant to localize precisely, they are difficult to place in a map. Additionally, the large distance between the robot and these features makes connecting them to their navigation implications more challenging. In this thesis, we propose FITAM, an approach that learns from previous experience, in a self-supervised manner, to predict navigation costs from far-field features and uses those predictions to guide navigation through unknown environments. Unlike previous work, our approach does not rely on flat-ground-plane assumptions or range sensors to localize observations. We demonstrate the benefits of our approach through simulated trials and real-world deployment on a Clearpath Robotics Warthog navigating through a forest environment.
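The core self-supervised idea described above can be sketched as follows: far-field image features are paired with traversal costs the robot measures later, after it reaches the corresponding terrain, and a regressor is fit to predict cost from appearance alone. The sketch below is purely illustrative and is not the thesis's actual pipeline; in particular, all names (`FarFieldObservation`, `build_training_set`, and so on) are hypothetical, and the feature-to-map-cell association, which the thesis handles without flat-ground assumptions or range sensors, is assumed to be given.

```python
# Minimal illustrative sketch of self-supervised cost learning from
# far-field features. Hypothetical names throughout; not the thesis code.
from dataclasses import dataclass
import numpy as np

@dataclass
class FarFieldObservation:
    feature: np.ndarray    # appearance descriptor of a distant image region
    cell: tuple[int, int]  # map cell this region was (somehow) associated with

def build_training_set(observations, measured_cost):
    """Pair each far-field feature with the traversal cost the robot
    later measured in its associated cell (the self-supervised label)."""
    X, y = [], []
    for obs in observations:
        if obs.cell in measured_cost:  # only cells the robot actually visited
            X.append(obs.feature)
            y.append(measured_cost[obs.cell])
    return np.asarray(X), np.asarray(y)

def fit_cost_regressor(X, y, reg=1e-3):
    """Ridge regression from feature to cost (closed form, with bias)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.linalg.solve(Xb.T @ Xb + reg * np.eye(Xb.shape[1]), Xb.T @ y)

def predict_cost(w, feature):
    """Predicted traversal cost for an unvisited region seen from afar."""
    return float(np.append(feature, 1.0) @ w)

# Toy usage: two distant regions observed, costs measured after traversal.
obs = [FarFieldObservation(np.array([0.9, 0.1]), (3, 4)),   # "trail-like"
       FarFieldObservation(np.array([0.1, 0.8]), (7, 2))]   # "brush-like"
costs = {(3, 4): 1.0, (7, 2): 5.0}
X, y = build_training_set(obs, costs)
w = fit_cost_regressor(X, y)
print(predict_cost(w, np.array([0.8, 0.2])))  # low cost for trail-like terrain
```

At planning time, predictions like these would populate map cells beyond the range of proximate sensing, replacing the fixed unknown-space cost assumption the abstract identifies as a source of poor behavior.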
Date issued
2024-09
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology