Omnidirectional obstacle detection using minimal sensing
Author(s)
Cloitre, Audren Damien Prigent.
Other Contributors
Massachusetts Institute of Technology. Department of Mechanical Engineering.
Advisor
Nicholas M. Patrikalakis.
Abstract
An integrated approach to visual obstacle detection for aerial multi-rotor vehicles (drones) is introduced. The approach achieves omnidirectional detection of obstacles through a suitable synergy of hardware and software. The drone requires a specific arrangement of two cameras, facing in opposite directions and mounted above and below the drone. Total coverage of the drone's surroundings is achieved by fitting each camera with a fisheye lens whose field of view is significantly greater than 180 degrees. The combined field of view of the cameras is omnidirectional and may be conceptually subdivided into three regions: the monocular portion of each camera (centered at the north and south poles of the drone) and the stereo portion common to both cameras (circling the drone's equator). To use both the stereo and monocular data, a special image projection is developed, based on a model of the world as a 'capsule'. The capsule projection consists of a cylindrical perspective projection in the stereo portion and a planar projection in each of the two monocular portions. Fisheye images warped by the capsule projection are called capsule images. A stereo algorithm is applied to the cylindrical portion of the capsule images to produce a stereo point cloud. Image features are tracked on the capsule images, since the projection is continuous across the stereo and monocular portions. The tracked features are used in a structure-from-motion algorithm that estimates their 3D locations and produces a point cloud representing landmarks. The landmark and stereo point clouds are merged into a single set and projected onto the unit sphere centered at the origin of the drone's coordinate frame. A 2D spherical Delaunay triangulation algorithm is used to build a triangular mesh from the projected points. The vertices of the mesh are then back-projected to their original 3D locations, creating a 3D triangulated surface that represents the obstacles surrounding the drone. The overall method is validated via field experiments conducted with a drone whose design implements our specific camera arrangement. The drone system design is detailed, and the experimental results show that this drone can effectively detect obstacles in arbitrary directions with satisfactory accuracy.
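The final meshing step described in the abstract can be sketched concretely. The snippet below is an illustration, not the thesis implementation: the function and variable names are hypothetical, and SciPy's ConvexHull is assumed as the spherical Delaunay backend, relying on the fact that the convex hull of unit vectors spread over the sphere yields a spherical Delaunay triangulation. It merges the stereo and landmark point clouds, projects them to the unit sphere, triangulates, and back-projects the mesh vertices to their original 3D locations.

```python
# Illustrative sketch (not the thesis code): spherical Delaunay meshing of
# merged obstacle points in drone-centered coordinates, assuming NumPy/SciPy.
import numpy as np
from scipy.spatial import ConvexHull

def build_obstacle_mesh(stereo_points, landmark_points):
    """Merge two (N, 3) point clouds, triangulate their directions on the
    unit sphere, and return (vertices, triangles) with vertices kept at the
    original 3D locations."""
    points = np.vstack([stereo_points, landmark_points])   # merge the clouds
    radii = np.linalg.norm(points, axis=1)                  # distance from drone
    points = points[radii > 1e-6]                           # drop degenerate points
    directions = points / np.linalg.norm(points, axis=1, keepdims=True)

    # For points covering the whole sphere, the convex hull of their unit
    # direction vectors is a Delaunay triangulation of the sphere.
    hull = ConvexHull(directions)
    triangles = hull.simplices                               # (M, 3) vertex indices

    # Back-project: keep the hull connectivity but use the original 3D points
    # as mesh vertices, giving a triangulated surface around the drone.
    return points, triangles

# Hypothetical usage with random data standing in for the two point clouds.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    stereo = rng.normal(size=(500, 3)) * 5.0
    landmarks = rng.normal(size=(200, 3)) * 5.0
    vertices, faces = build_obstacle_mesh(stereo, landmarks)
    print(vertices.shape, faces.shape)
```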
Description
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019. Cataloged from PDF version of thesis. Includes bibliographical references (pages 159-168).
Date issued
2019
Department
Massachusetts Institute of Technology. Department of Mechanical Engineering
Publisher
Massachusetts Institute of Technology
Keywords
Mechanical Engineering.