
dc.contributor.advisor  Nicholas M. Patrikalakis.  en_US
dc.contributor.author  Cloitre, Audren Damien Prigent.  en_US
dc.contributor.other  Massachusetts Institute of Technology. Department of Mechanical Engineering.  en_US
dc.date.accessioned  2020-04-13T18:34:08Z
dc.date.available  2020-04-13T18:34:08Z
dc.date.copyright  2019  en_US
dc.date.issued  2019  en_US
dc.identifier.uri  https://hdl.handle.net/1721.1/124595
dc.description  Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019  en_US
dc.description  Cataloged from PDF version of thesis.  en_US
dc.description  Includes bibliographical references (pages 159-168).  en_US
dc.description.abstract  An integrated approach to visual obstacle detection for aerial multi-rotor vehicles (drones) is introduced. The approach achieves omnidirectional detection of obstacles through a suitable synergy of hardware and software. The drone requires a specific arrangement of two cameras, opposing each other, placed below and above the drone. Total coverage of the drone's surroundings is achieved by fitting each camera with a fisheye lens whose field of view is significantly greater than 180 degrees. The combined field of view of the cameras is omnidirectional and may be conceptually subdivided into three regions: the monocular portion of each camera (centered at the north and south poles of the drone) and the stereo portion common to both cameras (circling the drone's equator). To use both the stereo and monocular data, a special image projection is developed, based on a model of the world as a 'capsule'.  en_US
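
For concreteness, the sketch below illustrates one way such a capsule projection could map a viewing direction to image coordinates: a perspective cylindrical projection for directions near the equator and a planar (pinhole) projection for the polar caps. The function name capsule_project, the focal scale f, the z-up drone frame, and the ±45-degree stereo-band boundary are illustrative assumptions, not the thesis's actual parameters.

    import numpy as np

    def capsule_project(direction, f=200.0, band_lat=np.radians(45.0)):
        """Map a 3D viewing direction (drone frame, z = up) to capsule
        image coordinates (u, v). Hypothetical sketch: band latitude,
        focal scale, and sign conventions are assumptions."""
        d = np.asarray(direction, dtype=float)
        d /= np.linalg.norm(d)
        x, y, z = d
        lat = np.arcsin(z)                  # elevation above the equator
        if abs(lat) <= band_lat:
            # Stereo band: perspective cylindrical projection onto a
            # unit cylinder around the drone's vertical axis.
            u = f * np.arctan2(y, x)        # azimuth unrolled horizontally
            v = f * z / np.hypot(x, y)      # perspective height on cylinder
        else:
            # Monocular caps: planar projection onto a plane
            # perpendicular to the nearer pole.
            u = f * x / abs(z)
            v = f * y / abs(z)
        return u, v

Because the two branches agree at the band boundary only up to the chosen scales, a real implementation would pick f and band_lat so the mapping stays continuous, which is what allows features to be tracked across the stereo and monocular portions.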
dc.description.abstract  The capsule projection consists of a perspective cylindrical projection for the stereo portion and a planar projection for each of the two monocular portions. Fisheye images warped by the capsule projection are called capsule images. A stereo algorithm is applied to the cylindrical portion of the capsule images to produce a stereo point cloud. Image features are tracked across the capsule images, since the projection is continuous between the stereo and monocular portions. The tracked features feed a structure-from-motion algorithm that estimates their 3D locations, producing a point cloud of landmarks. The landmark and stereo point clouds are merged into a single set and projected onto the unit sphere centered at the drone's coordinate frame. A 2D spherical Delaunay triangulation algorithm then builds a triangular mesh from the projected points.  en_US
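
A well-known way to obtain a spherical Delaunay triangulation is through the convex hull of the points after projection onto the unit sphere: for points on a sphere, the hull's facets are exactly the spherical Delaunay triangles. The sketch below uses that equivalence with SciPy; the function name obstacle_mesh and the merging step are illustrative assumptions, not the thesis's actual implementation.

    import numpy as np
    from scipy.spatial import ConvexHull

    def obstacle_mesh(stereo_pts, landmark_pts):
        """Merge the stereo and landmark point clouds (N x 3 arrays in
        the drone's frame), triangulate on the unit sphere, and return
        the back-projected 3D surface. Hypothetical sketch."""
        pts = np.vstack([stereo_pts, landmark_pts])
        radii = np.linalg.norm(pts, axis=1)
        unit = pts / radii[:, None]        # project onto the unit sphere
        # Convex hull of spherical points == spherical Delaunay mesh.
        tri = ConvexHull(unit).simplices   # (M, 3) vertex indices
        # Each vertex keeps its original range, so (pts, tri) is the 3D
        # triangulated surface surrounding the drone.
        return pts, tri

With, e.g., vertices, faces = obstacle_mesh(stereo_cloud, landmark_cloud), each row of faces indexes three rows of vertices, giving the triangulated obstacle surface described next.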
dc.description.abstract  The vertices of the mesh are then back-projected to their original 3D locations, creating a 3D triangulated surface that represents the obstacles surrounding the drone. The overall method is validated through field experiments conducted with a drone whose design implements our specific camera arrangement. The drone's system design is detailed, and the experimental results show that the drone can effectively detect obstacles in arbitrary directions with satisfactory accuracy.  en_US
dc.description.statementofresponsibility  by Audren Damien Prigent Cloitre.  en_US
dc.format.extent  168 pages  en_US
dc.language.iso  eng  en_US
dc.publisher  Massachusetts Institute of Technology  en_US
dc.rights  MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission.  en_US
dc.rights.uri  http://dspace.mit.edu/handle/1721.1/7582  en_US
dc.subject  Mechanical Engineering.  en_US
dc.title  Omnidirectional obstacle detection using minimal sensing  en_US
dc.type  Thesis  en_US
dc.description.degree  Ph. D.  en_US
dc.contributor.department  Massachusetts Institute of Technology. Department of Mechanical Engineering  en_US
dc.identifier.oclc  1149391280  en_US
dc.description.collection  Ph.D. Massachusetts Institute of Technology, Department of Mechanical Engineering  en_US
dspace.imported  2020-04-13T18:33:39Z  en_US
mit.thesis.degree  Doctoral  en_US
mit.thesis.department  MechE  en_US

