dc.contributor.advisor | Christopher M. Schmandt and Vivek K. Goyal. | en_US |
dc.contributor.author | Colaço, Andrea B. (Andrea Brazilin Immaculate Danielle) | en_US |
dc.contributor.other | Massachusetts Institute of Technology. Department of Architecture. Program in Media Arts and Sciences. | en_US |
dc.date.accessioned | 2014-11-04T21:36:40Z | |
dc.date.available | 2014-11-04T21:36:40Z | |
dc.date.copyright | 2014 | en_US |
dc.date.issued | 2014 | en_US |
dc.identifier.uri | http://hdl.handle.net/1721.1/91437 | |
dc.description | Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2014. | en_US |
dc.description | 87 | en_US |
dc.description | Cataloged from PDF version of thesis. | en_US |
dc.description | Includes bibliographical references (pages 141-150). | en_US |
dc.description.abstract | Mobile devices have evolved into powerful computing platforms. As computing capabilities grow and size shrinks, the most pronounced limitation with mobile devices is display size. With the adoption of touch as the de facto input, the mobile screen doubles as a display and an input device. Touchscreen interfaces have several limitations: the act of touching the screen occludes the display, interface elements like on-screen keyboards consume precious display real estate, and navigation through content often requires repeated actions like pinch-and-zoom. This thesis is motivated by these inherent limitations of using touch input to interact with mobile devices. Thus, the primary focus of this thesis is on using the space around the device for touchless gestural input to devices with small or no displays. Capturing gestural input in this volume requires localization of the human hand in 3D. We present a real-time system for doing so as a culmination of an exploration of novel methods for 3D capture. First, two related systems for 3D imaging are presented, both relying on modeling and algorithms from parametric sampling theory and compressed sensing. Then, a separate system for 3D localization, without full 3D imaging, is presented. This system, Mime, is built using standard, low-cost opto-electronic components - a single LED and three baseline-separated photodiodes. We demonstrate fast and accurate 3D motion tracking at low power, enabled by parametric scene response modeling. We combine this low-power 3D tracking with RGB image-based computer vision algorithms for finer gestural control. We demonstrate a variety of application scenarios developed using our sensor, including 3D spatial input using close-range gestures, gaming, on-the-move interaction, and operation in cluttered environments and in broad daylight conditions. | en_US |
dc.description.statementofresponsibility | by Andrea B. Colaço. | en_US |
dc.format.extent | 150 pages | en_US |
dc.language.iso | eng | en_US |
dc.publisher | Massachusetts Institute of Technology | en_US |
dc.rights | M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission. | en_US |
dc.rights.uri | http://dspace.mit.edu/handle/1721.1/7582 | en_US |
dc.subject | Architecture. Program in Media Arts and Sciences. | en_US |
dc.title | Compact and low-power computational 3D sensors for gestural input | en_US |
dc.type | Thesis | en_US |
dc.description.degree | Ph. D. | en_US |
dc.contributor.department | Program in Media Arts and Sciences (Massachusetts Institute of Technology) | |
dc.identifier.oclc | 893671437 | en_US |