Show simple item record

dc.contributor.advisor: Nicholas Roy and Ted J. Steiner. [en_US]
dc.contributor.author: Greene, W. Nicholas (William Nicholas) [en_US]
dc.contributor.other: Massachusetts Institute of Technology. Department of Aeronautics and Astronautics. [en_US]
dc.date.accessioned: 2017-02-22T19:01:14Z
dc.date.available: 2017-02-22T19:01:14Z
dc.date.copyright: 2016 [en_US]
dc.date.issued: 2016 [en_US]
dc.identifier.uri: http://hdl.handle.net/1721.1/107051
dc.description: Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2016. [en_US]
dc.description: Cataloged from PDF version of thesis. [en_US]
dc.description: Includes bibliographical references (pages 91-100). [en_US]
dc.description.abstract: Cameras are powerful sensors for robotic navigation as they provide high-resolution environment information (color, shape, texture, etc.), while being lightweight, low-power, and inexpensive. Exploiting such sensor data for navigation tasks typically falls into the realm of monocular simultaneous localization and mapping (SLAM), where both the robot's pose and a map of the environment are estimated concurrently from the imagery produced by a single camera mounted on the robot. This thesis presents a monocular SLAM solution capable of reconstructing dense 3D geometry online without the aid of a graphics processing unit (GPU). The key contribution is a multi-resolution depth estimation and spatial smoothing process that exploits the correlation between low-texture image regions and simple planar structure to adaptively scale the complexity of the generated keyframe depthmaps to the quality of the input imagery. High-texture image regions are represented at higher resolutions to capture fine detail, while low-texture regions are represented at coarser resolutions for smooth surfaces. This approach allows for significant computational savings while simultaneously increasing reconstruction density and quality when compared to the state-of-the-art. Preliminary qualitative results are also presented for an adaptive meshing technique that generates dense reconstructions using only the pixels necessary to represent the scene geometry, which further reduces the computational requirements for fully dense reconstructions. [en_US]
dc.description.statementofresponsibility: by W. Nicholas Greene. [en_US]
dc.format.extent: 100 pages [en_US]
dc.language.iso: eng [en_US]
dc.publisher: Massachusetts Institute of Technology [en_US]
dc.rights: MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission. [en_US]
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582 [en_US]
dc.subject: Aeronautics and Astronautics. [en_US]
dc.title: Real-time dense simultaneous localization and mapping using monocular cameras [en_US]
dc.type: Thesis [en_US]
dc.description.degree: S.M. [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Department of Aeronautics and Astronautics
dc.identifier.oclc: 971021875 [en_US]
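
The abstract describes a texture-adaptive, multi-resolution depthmap representation: high-texture image regions keep fine detail while low-texture, near-planar regions are represented coarsely. The snippet below is only a minimal illustrative sketch of that general idea, assuming a hypothetical gradient-variance texture score, block size, and thresholds; it is not the estimator implemented in the thesis.

# Illustrative sketch (not the thesis implementation): assign a pyramid level
# to each block of a keyframe image based on local texture, so depth can be
# estimated densely where texture is rich and coarsely where the scene is
# near-planar. Block size and thresholds are hypothetical choices.
import numpy as np

def gradient_magnitude(image):
    """Finite-difference gradient magnitude of a grayscale image."""
    gy, gx = np.gradient(image.astype(np.float64))
    return np.hypot(gx, gy)

def resolution_levels(image, block=32, thresholds=(4.0, 1.0)):
    """Per-block pyramid level: 0 = full resolution (high texture),
    1 = half resolution, 2 = coarse (near-textureless, smooth surface)."""
    grad = gradient_magnitude(image)
    h, w = image.shape
    levels = np.zeros((h // block, w // block), dtype=np.int32)
    for i in range(h // block):
        for j in range(w // block):
            patch = grad[i * block:(i + 1) * block, j * block:(j + 1) * block]
            texture = patch.std()        # texture score for this block
            if texture >= thresholds[0]:
                levels[i, j] = 0         # fine: keep a depth estimate per pixel
            elif texture >= thresholds[1]:
                levels[i, j] = 1         # medium: estimate depth at half resolution
            else:
                levels[i, j] = 2         # coarse: treat the block as one smooth patch
    return levels

if __name__ == "__main__":
    # Synthetic keyframe: textured noise on the left, a flat "wall" on the right.
    rng = np.random.default_rng(0)
    img = np.hstack([rng.normal(128, 20, (128, 128)), np.full((128, 128), 128.0)])
    print(resolution_levels(img))

In this sketch the textured half of the synthetic image is assigned the fine level and the flat half the coarse level, mirroring the abstract's point that per-keyframe computation can be concentrated where the imagery actually supports detailed depth estimation.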

