Stochastic constraints for vision-aided inertial navigation
Author(s)
Diel, David D., 1979-
Other Contributors
Massachusetts Institute of Technology. Dept. of Mechanical Engineering.
Advisor
Paul DeBitetto and Derek Rowell.
Abstract
This thesis describes a new method to improve inertial navigation using feature-based constraints from one or more video cameras. The proposed method lengthens the period of time during which a human or vehicle can navigate in GPS-deprived environments. Our approach integrates well with existing navigation systems, because we invoke general sensor models that represent a wide range of available hardware. The inertial model includes errors in bias, scale, and random walk. Any camera and tracking algorithm may be used, as long as the visual output can be expressed as ray vectors extending from known locations on the sensor body. A modified linear Kalman filter performs the data fusion. Unlike traditional Simultaneous Localization and Mapping (SLAM/CML), our state vector contains only inertial sensor errors related to position. This choice allows uncertainty to be properly represented by a covariance matrix. We do not augment the state with feature coordinates. Instead, image data contributes stochastic epipolar constraints over a broad baseline in time and space, resulting in improved observability of the IMU error states. The constraints lead to a relative residual and associated relative covariance, defined partly by the state history. Navigation results are presented using high-quality synthetic data and real fisheye imagery.
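The core mechanism the abstract describes, folding an epipolar coplanarity constraint into a Kalman filter over inertial position-error states, can be illustrated with a short sketch. This is not the author's implementation: the function names, the two-pose error state [dp1, dp2], and the scalar measurement noise R_meas are illustrative assumptions, and the sketch omits the state-history and relative-covariance machinery the thesis develops.

```python
import numpy as np

def epipolar_residual(p1, p2, r1, r2):
    """Coplanarity residual h = r1 . ((p2 - p1) x r2).
    Zero when a static feature lies on both rays r1, r2 (unit bearing
    vectors in the world frame) seen from camera positions p1, p2."""
    return r1 @ np.cross(p2 - p1, r2)

def epipolar_update(x, P, p1, p2, r1, r2, R_meas):
    """One scalar Kalman update on a hypothetical position-error state
    x = [dp1, dp2] (shape (6,)) with covariance P, treating the
    epipolar constraint h = 0 as a pseudo-measurement."""
    h = epipolar_residual(p1, p2, r1, r2)
    # h = t . (r2 x r1) with t = p2 - p1, so dh/dp2 = r2 x r1 = -dh/dp1.
    g = np.cross(r2, r1)
    H = np.hstack([-g, g])        # Jacobian [dh/dp1, dh/dp2], shape (6,)
    S = H @ P @ H + R_meas        # scalar innovation variance
    K = P @ H / S                 # Kalman gain, shape (6,)
    x = x + K * (0.0 - h)        # drive the residual toward zero
    P = P - np.outer(K, H @ P)
    return x, P

# Example: two poses 1 m apart viewing a feature at (2, 1, 5).
p1, p2 = np.zeros(3), np.array([1.0, 0.0, 0.0])
feat = np.array([2.0, 1.0, 5.0])
r1 = (feat - p1) / np.linalg.norm(feat - p1)
r2 = (feat - p2) / np.linalg.norm(feat - p2)
x, P = np.zeros(6), np.eye(6) * 0.01
x, P = epipolar_update(x, P, p1, p2, r1, r2, R_meas=1e-6)
```

Treating the constraint as a zero-valued pseudo-measurement is what allows the filter to improve the observability of the inertial error states without ever adding feature coordinates to the state vector, consistent with the abstract's contrast with traditional SLAM.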
Description
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2005. Includes bibliographical references (p. 107-110).
Date issued
2005
Department
Massachusetts Institute of Technology. Department of Mechanical Engineering
Publisher
Massachusetts Institute of Technology
Keywords
Mechanical Engineering.