Show simple item record

dc.contributor.advisor: Christopher M. Schmandt and Vivek K. Goyal.
dc.contributor.author: Colaço, Andrea B. (Andrea Brazilin Immaculate Danielle)
dc.contributor.other: Massachusetts Institute of Technology. Department of Architecture. Program in Media Arts and Sciences.
dc.date.accessioned: 2014-11-04T21:36:40Z
dc.date.available: 2014-11-04T21:36:40Z
dc.date.copyright: 2014
dc.date.issued: 2014
dc.identifier.uri: http://hdl.handle.net/1721.1/91437
dc.description: Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2014.
dc.description: Cataloged from PDF version of thesis.
dc.description: Includes bibliographical references (pages 141-150).
dc.description.abstract: Mobile devices have evolved into powerful computing platforms. As computing capabilities grow and device sizes shrink, the most pronounced limitation of mobile devices is display size. With the adoption of touch as the de facto input, the mobile screen doubles as a display and an input device. Touchscreen interfaces have several limitations: the act of touching the screen occludes the display, interface elements like on-screen keyboards consume precious display real estate, and navigating content often requires repeated actions like pinch-and-zoom. This thesis is motivated by these inherent limitations of using touch input to interact with mobile devices. Its primary focus is therefore on using the space around the device for touchless gestural input to devices with small or no displays. Capturing gestural input in this volume requires localizing the human hand in 3D. We present a real-time system for doing so as the culmination of an exploration of novel methods for 3D capture. First, two related systems for 3D imaging are presented, both relying on modeling and algorithms from parametric sampling theory and compressed sensing. Then, a separate system for 3D localization, without full 3D imaging, is presented. This system, Mime, is built using standard, low-cost opto-electronic components: a single LED and three baseline-separated photodiodes. We demonstrate fast and accurate 3D motion tracking at low power, enabled by parametric scene response modeling. We combine this low-power 3D tracking with RGB image-based computer vision algorithms for finer gestural control. We demonstrate a variety of application scenarios developed using our sensor, including 3D spatial input using close-range gestures, gaming, on-the-move interaction, and operation in cluttered environments and in broad daylight.
dc.description.statementofresponsibility: by Andrea B. Colaço.
dc.format.extent: 150 pages
dc.language.iso: eng
dc.publisher: Massachusetts Institute of Technology
dc.rights: M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission.
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582
dc.subject: Architecture. Program in Media Arts and Sciences.
dc.title: Compact and low-power computational 3D sensors for gestural input
dc.type: Thesis
dc.description.degree: Ph. D.
dc.contributor.department: Program in Media Arts and Sciences (Massachusetts Institute of Technology)
dc.identifier.oclc: 893671437
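
Note: The abstract describes Mime's 3D localization using a single LED and three baseline-separated photodiodes. The sketch below illustrates only the underlying geometry of such a setup: each LED-to-hand-to-photodiode round-trip time constrains the hand to an ellipsoid whose foci are the LED and that photodiode, and three such constraints pin down a 3D position. This is plain time-of-flight trilateration with invented sensor positions, not the thesis's parametric scene-response algorithm.

```python
# Hypothetical sketch: recovering a 3D point from round-trip time-of-flight
# measurements between one LED and three baseline-separated photodiodes.
import numpy as np
from scipy.optimize import least_squares

C = 3.0e8  # speed of light (m/s)

# Invented geometry (metres): LED at the origin, three photodiodes on
# small baselines, as they might sit along a mobile device's top edge.
LED = np.array([0.0, 0.0, 0.0])
PDS = np.array([
    [-0.05, 0.00, 0.0],
    [ 0.05, 0.00, 0.0],
    [ 0.00, 0.05, 0.0],
])

def residuals(p, tofs):
    # Each round-trip time t_i constrains |LED - p| + |p - PD_i| = c * t_i,
    # i.e. the hand lies on an ellipsoid with foci LED and PD_i.
    d_led = np.linalg.norm(p - LED)
    d_pds = np.linalg.norm(PDS - p, axis=1)
    return (d_led + d_pds) - C * tofs

def locate(tofs, p0=(0.0, 0.0, 0.3)):
    # Solve the three ellipsoid constraints for (x, y, z) by nonlinear least
    # squares. A front-facing initial guess (z > 0) picks the physical
    # solution over its mirror image behind the coplanar sensors.
    return least_squares(residuals, np.asarray(p0), args=(tofs,)).x

if __name__ == "__main__":
    # Simulate a hand 30 cm in front of the device, then recover its position.
    hand = np.array([0.02, -0.01, 0.30])
    tofs = (np.linalg.norm(hand - LED) + np.linalg.norm(PDS - hand, axis=1)) / C
    print(locate(tofs))  # ~ [0.02, -0.01, 0.30]
```

Trilateration from three receivers is the minimal case: three unknowns, three ellipsoid constraints. The thesis's actual contribution, per the abstract, is doing this at low power via parametric modeling of the scene response rather than by direct geometric solving.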

