
dc.contributor.advisor: Vivienne Sze. [en_US]
dc.contributor.author: Cheng, Alan, M. Eng. (Alan D.) Massachusetts Institute of Technology. [en_US]
dc.contributor.other: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science. [en_US]
dc.date.accessioned: 2020-09-15T21:55:25Z
dc.date.available: 2020-09-15T21:55:25Z
dc.date.copyright: 2020 [en_US]
dc.date.issued: 2020 [en_US]
dc.identifier.uri: https://hdl.handle.net/1721.1/127389
dc.description: Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May 2020 [en_US]
dc.description: Cataloged from the official PDF of thesis. [en_US]
dc.description: Includes bibliographical references (pages 107-110). [en_US]
dc.description.abstract: Mobile augmented reality (AR) technology has seen immense growth in recent years, with devices such as the Microsoft HoloLens 2 allowing users to interact with virtual objects placed in the real world around them. AR applications often use 3D data to determine surface normals, allowing virtual objects to be placed and oriented correctly in the real-world scene. Time-of-flight (ToF) cameras can acquire this 3D data by emitting light and measuring its round-trip time to obtain depth. However, continuously acquiring high-quality depth maps requires the ToF camera to expend significant power on illumination, shortening the battery life of the underlying device. To reduce power consumption, 3D motion computed from the 2D pixel-wise motion of consecutive RGB images, captured alongside the ToF camera, can be used to obtain a new depth map without illuminating the scene. [en_US]
dc.description.abstract: In this thesis, we propose depth map reconstruction, which limits ToF camera usage by estimating depth maps from previously captured ones. Our algorithm represents previously captured depth maps as a point cloud called the scene map. Each time the ToF camera is used, the captured depth data is added to the scene map, building up a representation of the captured scene. The ToF camera is used only when the 3D motion cannot be obtained, or when the reconstructed depth map contains too many zero-depth (invalid) pixels. To evaluate our algorithm for use in AR applications, we measure the accuracy of surface normals and trajectory estimation on our depth maps, in addition to the mean relative error (MRE). Using RGB-D datasets, we show that our algorithm reduces ToF camera usage by up to 97% with negligible impact on surface normals and trajectory estimation, while obtaining depth maps with at least 70% valid pixels. [en_US]
dc.description.abstract: We further demonstrate our algorithm through integration into an AR application. Finally, we explore an implementation of depth map reconstruction on a CPU-FPGA co-processing architecture to achieve real-time performance. [en_US]
dc.description.statementofresponsibility: by Alan Cheng. [en_US]
dc.format.extent: 110 pages [en_US]
dc.language.iso: eng [en_US]
dc.publisher: Massachusetts Institute of Technology [en_US]
dc.rights: MIT theses may be protected by copyright. Please reuse MIT thesis content according to the MIT Libraries Permissions Policy, which is available through the URL provided. [en_US]
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582 [en_US]
dc.subject: Electrical Engineering and Computer Science. [en_US]
dc.title: Low power time-of-flight imaging for augmented reality [en_US]
dc.type: Thesis [en_US]
dc.description.degree: M. Eng. [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science [en_US]
dc.identifier.oclc: 1192543563 [en_US]
dc.description.collection: M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science [en_US]
dspace.imported: 2020-09-15T21:55:23Z [en_US]
mit.thesis.degree: Master [en_US]
mit.thesis.department: EECS [en_US]
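
The first abstract paragraph above describes obtaining a new depth map without illuminating the scene, by reprojecting previously captured depth through camera motion estimated from the RGB stream. The following is a minimal sketch of that idea, not the thesis's actual implementation: it assumes a pinhole camera with intrinsics K and a relative pose (R, t) already estimated from consecutive RGB frames.

```python
import numpy as np

def reproject_depth(depth_prev, K, R, t):
    """Synthesize a depth map for the current frame by reprojecting the
    previous depth map through the estimated camera motion (R, t),
    instead of firing the ToF illuminator.

    depth_prev : (H, W) float array, depth in meters; 0 marks invalid pixels.
    K          : (3, 3) camera intrinsic matrix.
    R, t       : relative rotation (3, 3) and translation (3,) taking points
                 from the previous camera frame to the current one.
    """
    H, W = depth_prev.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    valid = depth_prev > 0

    # Back-project valid pixels to 3D points in the previous camera frame.
    z = depth_prev[valid]
    pix = np.stack([u[valid] * z, v[valid] * z, z], axis=0)  # (3, N)
    pts = np.linalg.inv(K) @ pix

    # Move the points into the current camera frame.
    pts = R @ pts + t[:, None]
    front = pts[2] > 1e-6          # keep only points in front of the camera
    proj = K @ pts[:, front]

    # Project to pixel coordinates in the current frame.
    u_new = np.round(proj[0] / proj[2]).astype(int)
    v_new = np.round(proj[1] / proj[2]).astype(int)
    z_new = proj[2]

    # Scatter depths into the new map; pixels nothing lands on stay 0
    # (invalid), which is exactly what the ToF-usage policy checks for.
    depth_new = np.zeros_like(depth_prev)
    inb = (u_new >= 0) & (u_new < W) & (v_new >= 0) & (v_new < H)
    depth_new[v_new[inb], u_new[inb]] = z_new[inb]
    return depth_new
```

A real implementation would also z-buffer collisions so the nearest surface wins when several points land on the same pixel; this sketch simply lets the last write win.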
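The second abstract paragraph describes when the ToF camera actually fires: only when the 3D motion cannot be obtained, or when the reconstructed depth map has too many invalid pixels. Below is a hedged sketch of that per-frame policy; estimate_motion, render_depth, and capture_tof are hypothetical callables standing in for the thesis's motion estimation, scene-map rendering, and ToF capture components, and the 70% threshold is taken from the validity figure reported in the abstract.

```python
import numpy as np

def valid_fraction(depth):
    """Fraction of non-zero (valid) pixels in a depth map."""
    return np.count_nonzero(depth) / depth.size

def next_depth_frame(scene_map, estimate_motion, render_depth, capture_tof,
                     min_valid_fraction=0.70):
    """One step of the ToF-usage policy sketched from the abstract.

    scene_map       : mutable list of (pose, depth) captures; stands in for
                      the thesis's point-cloud scene map.
    estimate_motion : returns the current camera pose from the RGB stream,
                      or None when motion cannot be obtained.
    render_depth    : projects the scene map into a depth map at a pose.
    capture_tof     : fires the ToF camera and returns a measured depth map.
    Returns (depth, tof_used).
    """
    pose = estimate_motion()
    if pose is not None:
        depth = render_depth(scene_map, pose)
        # Accept the reconstruction only if enough pixels are valid.
        if valid_fraction(depth) >= min_valid_fraction:
            return depth, False

    # Otherwise fall back to the ToF camera and grow the scene map with the
    # freshly captured depth data. (If pose is None here, a real system
    # would recover it by other means before adding to the scene map.)
    depth = capture_tof()
    scene_map.append((pose, depth))
    return depth, True
```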

