Show simple item record

dc.contributor.advisor: John W. Fisher III
dc.contributor.author: Cabezas, Randi
dc.contributor.other: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.date.accessioned: 2018-09-17T14:51:33Z
dc.date.available: 2018-09-17T14:51:33Z
dc.date.copyright: 2018
dc.date.issued: 2018
dc.identifier.uri: http://hdl.handle.net/1721.1/117832
dc.description: Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
dc.description: This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
dc.description: Cataloged from student-submitted PDF version of thesis.
dc.description: Includes bibliographical references (pages 153-167).
dc.description.abstract: While much emphasis has been placed on large-scale 3D scene reconstruction from a single data source, such as images or distance sensors, models that jointly utilize multiple data types remain largely unexplored. In this work, we present a Bayesian formulation of scene reconstruction from multi-modal data, along with two critical components that enable large-scale reconstructions with adaptive resolution and high-level scene understanding with meaningful prior-probability distributions. Our first contribution is to formulate the 3D reconstruction problem within the Bayesian framework. We develop an integrated probabilistic model that allows us to naturally represent uncertainty and to fuse complementary information provided by different sensor modalities (imagery and LiDAR). Maximum-a-posteriori inference within this model leverages GPGPUs for efficient likelihood evaluations. Our dense reconstructions (triangular meshes with texture information) remain feasible with fewer observations of a given modality by relying on the others, without sacrificing quality. Second, to enable large-scale reconstructions, our formulation supports adaptive resolution in both appearance and geometry. This change is motivated by the need for a representation that can adjust to wide variability in data quality and availability. By coupling edge transformations within a reversible-jump MCMC framework, we allow changes in the number of triangles and in mesh connectivity. We demonstrate that these data-driven updates lead to more accurate representations while reducing modeling assumptions and using fewer triangles. Lastly, to enable high-level scene understanding, we include a categorization of reconstruction elements in our formulation. This scene-specific classification of triangles is estimated from semantic annotations (which are noisy and incomplete) and other scene features (e.g., geometry and appearance). The categorization provides a class-specific prior-probability distribution, thus helping to obtain more accurate and interpretable representations by regularizing the reconstruction. Collectively, these models enable complex reasoning about urban scenes by fusing all available data across modalities, a crucial necessity for future autonomous agents and large-scale augmented-reality applications.
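The multi-modal fusion described in the abstract can be illustrated with a minimal sketch. Assuming Gaussian noise models for each sensor and a Gaussian prior (an illustrative simplification, not the thesis's actual model; all names and numbers below are hypothetical), the MAP estimate of a surface depth is the precision-weighted average of the prior and each modality's measurement, so a more precise modality (e.g., LiDAR) compensates for sparser or noisier imagery-based estimates:

```python
# Hedged sketch: Bayesian fusion of depth estimates from two modalities
# (e.g., a stereo-imagery estimate and a LiDAR return) under Gaussian
# noise. With Gaussian prior and likelihoods, the MAP depth is the
# precision-weighted average of prior mean and observations.

def fuse_depth(prior_mu, prior_var, obs):
    """obs: list of (measurement, noise_variance) pairs, one per modality."""
    precision = 1.0 / prior_var          # prior precision
    weighted = prior_mu / prior_var      # precision-weighted prior mean
    for z, var in obs:
        precision += 1.0 / var           # each modality adds its precision
        weighted += z / var
    post_var = 1.0 / precision
    post_mu = weighted * post_var        # MAP (= posterior mean) estimate
    return post_mu, post_var

# The low-variance (LiDAR-like) measurement dominates the fused depth,
# while the noisier imagery-based estimate still contributes.
mu, var = fuse_depth(prior_mu=10.0, prior_var=100.0,
                     obs=[(12.0, 4.0), (11.0, 0.25)])
```

The same precision-weighting intuition is what lets the reconstruction stay dense when one modality is under-observed: the posterior simply leans more heavily on whichever sensors are available and reliable at that location.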
dc.description.statementofresponsibility: by Randi Cabezas
dc.format.extent: xvi, 167 pages
dc.language.iso: eng
dc.publisher: Massachusetts Institute of Technology
dc.rights: MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source, but further reproduction or distribution in any format is prohibited without written permission.
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582
dc.subject: Electrical Engineering and Computer Science
dc.title: Large-scale probabilistic aerial reconstruction
dc.type: Thesis
dc.description.degree: Ph. D.
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.identifier.oclc: 1052123619


