dc.contributor.advisor | Vivienne Sze. | en_US |
dc.contributor.author | Cheng, Alan, M. Eng. (Alan D.) Massachusetts Institute of Technology. | en_US |
dc.contributor.other | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science. | en_US |
dc.date.accessioned | 2020-09-15T21:55:25Z | |
dc.date.available | 2020-09-15T21:55:25Z | |
dc.date.copyright | 2020 | en_US |
dc.date.issued | 2020 | en_US |
dc.identifier.uri | https://hdl.handle.net/1721.1/127389 | |
dc.description | Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May 2020 | en_US |
dc.description | Cataloged from the official PDF of thesis. | en_US |
dc.description | Includes bibliographical references (pages 107-110). | en_US |
dc.description.abstract | Mobile augmented reality (AR) technology has seen immense growth in recent years, exemplified by the release of the Microsoft HoloLens 2, which allows users to interact with virtual objects around them in the real world. 3D data is often used in AR applications to determine surface normals, allowing for the correct placement and orientation of virtual objects in the real-world scene. Time-of-flight (ToF) cameras can be used to acquire this 3D data by emitting light and measuring its round-trip time to obtain depth. However, continuous acquisition of high-quality depth maps requires the ToF camera to expend significant power emitting light, lowering the battery life of the underlying device. To reduce power consumption, we can instead obtain a new depth map without illuminating the scene, using 3D motion computed from the 2D pixel-wise motion of consecutive RGB images captured alongside the ToF camera. | en_US |
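The two ideas in this paragraph, depth from round-trip time and reusing a captured depth map under estimated 3D motion, can be summarized in a short sketch. The Python below is illustrative only and is not taken from the thesis: the pinhole intrinsics matrix K, rotation R, and translation t are assumed inputs, and occlusion handling is simplified (a real implementation would use a z-buffer).

```python
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def tof_depth(round_trip_time_s: np.ndarray) -> np.ndarray:
    """Depth from measured round-trip time: light travels out and back."""
    return C * round_trip_time_s / 2.0

def reproject_depth(depth: np.ndarray, K: np.ndarray,
                    R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Warp a depth map into a new camera pose (R, t) without re-illuminating.

    Pixels that receive no sample stay 0, mirroring the zero-depth
    (invalid) pixels discussed in the abstract.
    """
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    z = depth.ravel()
    valid = z > 0
    # Back-project only valid pixels to 3D camera coordinates.
    pix = np.stack([u.ravel()[valid], v.ravel()[valid],
                    np.ones(int(valid.sum()))])
    pts = np.linalg.inv(K) @ (pix * z[valid])
    # Apply the rigid 3D motion, then project into the new view.
    pts_new = R @ pts + t.reshape(3, 1)
    front = pts_new[2] > 0                      # keep points in front of camera
    proj = K @ pts_new[:, front]
    u_new = np.round(proj[0] / proj[2]).astype(int)
    v_new = np.round(proj[1] / proj[2]).astype(int)
    inb = (u_new >= 0) & (u_new < w) & (v_new >= 0) & (v_new < h)
    out = np.zeros_like(depth)
    out[v_new[inb], u_new[inb]] = proj[2][inb]  # last write wins (no z-buffer)
    return out
```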
dc.description.abstract | In this thesis, we propose depth map reconstruction to limit ToF camera usage by estimating depth maps from previously captured ones. In our algorithm, we represent previously captured depth maps as a point cloud called the scene map. Each time the ToF camera is used, the captured depth data is added to the scene map to build a representation of the captured scene. The ToF camera is used only when the 3D motion cannot be obtained, or when the reconstructed depth map contains too many zero-depth (invalid) pixels. To evaluate our algorithm for use in AR applications, we evaluate the accuracy of surface normal and trajectory estimates computed from our depth maps, in addition to mean relative error (MRE). Using RGB-D datasets, we show that our algorithm reduces ToF camera usage by up to 97% with negligible impact on surface normal and trajectory estimation, while obtaining depth maps with at least 70% valid pixels. | en_US |
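The acquisition policy described here (fire the ToF camera only when 3D motion is unavailable or the reconstructed map has too many invalid pixels) and the MRE metric admit a compact sketch. In the Python below, the scene_map and tof_camera interfaces are hypothetical stand-ins, and the 70% valid-pixel threshold is inferred from the reported results rather than a stated design parameter.

```python
import numpy as np

VALID_FRACTION_MIN = 0.70  # assumed threshold, based on the abstract's 70% figure

def mean_relative_error(est: np.ndarray, gt: np.ndarray) -> float:
    """MRE over pixels where ground-truth depth is valid (nonzero)."""
    m = gt > 0
    return float(np.mean(np.abs(est[m] - gt[m]) / gt[m]))

def next_depth_map(scene_map, motion, tof_camera):
    """One step of a hypothetical acquisition loop.

    `scene_map` (point-cloud representation), `motion` (estimated 3D
    motion, or None), and `tof_camera` are assumed interfaces; the ToF
    camera is fired only as a fallback.
    """
    if motion is not None:
        depth = scene_map.render(motion)      # reconstruct from the scene map
        if np.count_nonzero(depth) / depth.size >= VALID_FRACTION_MIN:
            return depth                      # enough valid pixels: ToF stays off
    depth = tof_camera.capture()              # fall back to active illumination
    scene_map.insert(depth, motion)           # grow the scene map
    return depth
```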
dc.description.abstract | We further demonstrate our algorithm by integrating it into an AR application. Finally, we explore an implementation of depth map reconstruction on a CPU-FPGA co-processing architecture to achieve real-time performance. | en_US |
dc.description.statementofresponsibility | by Alan Cheng. | en_US |
dc.format.extent | 110 pages | en_US |
dc.language.iso | eng | en_US |
dc.publisher | Massachusetts Institute of Technology | en_US |
dc.rights | MIT theses may be protected by copyright. Please reuse MIT thesis content according to the MIT Libraries Permissions Policy, which is available through the URL provided. | en_US |
dc.rights.uri | http://dspace.mit.edu/handle/1721.1/7582 | en_US |
dc.subject | Electrical Engineering and Computer Science. | en_US |
dc.title | Low power time-of-flight imaging for augmented reality | en_US |
dc.type | Thesis | en_US |
dc.description.degree | M. Eng. | en_US |
dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | en_US |
dc.identifier.oclc | 1192543563 | en_US |
dc.description.collection | M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science | en_US |
dspace.imported | 2020-09-15T21:55:23Z | en_US |
mit.thesis.degree | Master | en_US |
mit.thesis.department | EECS | en_US |