Show simple item record

dc.contributor.advisor: V. Michael Bove. (en_US)
dc.contributor.author: Parthiban, Vikraman. (en_US)
dc.contributor.other: Program in Media Arts and Sciences (Massachusetts Institute of Technology) (en_US)
dc.date.accessioned: 2020-03-23T18:11:18Z
dc.date.available: 2020-03-23T18:11:18Z
dc.date.copyright: 2019 (en_US)
dc.date.issued: 2019 (en_US)
dc.identifier.uri: https://hdl.handle.net/1721.1/124196
dc.description: Thesis: S.M., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2019 (en_US)
dc.description: Cataloged from PDF version of thesis. (en_US)
dc.description: Includes bibliographical references (pages 83-84). (en_US)
dc.description.abstract: With the rise of augmented and virtual reality, new interactive technologies are incorporating immersive user interfaces that leverage gesture and voice recognition in addition to existing controller inputs. However, state-of-the-art interfaces remain rudimentary and are not widely accessible: they require a significant number of sensors, extensive calibration, and/or suffer high latency in the gestural commands. LUI (Large User Interface) is a scalable, multimodal interface that uses a framework of nondiscrete, free-handed gestures and voice to control modular applications with a single stereo camera and voice assistant. The gesture and voice inputs are mapped to web UI elements to provide a highly responsive and accessible user experience. The menu screen consists of an extendable list of applications, currently including photos, YouTube, etc., which are navigated through the input framework. This interface can be deployed on AR or VR systems, heads-up displays for autonomous vehicles, and everyday large displays. (en_US)
dc.description.statementofresponsibility: by Vikraman Parthiban. (en_US)
dc.format.extent: 84 pages (en_US)
dc.language.iso: eng (en_US)
dc.publisher: Massachusetts Institute of Technology (en_US)
dc.rights: MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission. (en_US)
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582 (en_US)
dc.subject: Program in Media Arts and Sciences (en_US)
dc.title: LUI : a scalable, multimodal gesture- and voice-interface for large displays (en_US)
dc.title.alternative: Large User Interface : a scalable, multimodal gesture- and voice-interface for large displays (en_US)
dc.title.alternative: Scalable, multimodal gesture- and voice-interface for large displays (en_US)
dc.type: Thesis (en_US)
dc.description.degree: S.M. (en_US)
dc.contributor.department: Program in Media Arts and Sciences (Massachusetts Institute of Technology) (en_US)
dc.identifier.oclc: 1145278341 (en_US)
dc.description.collection: S.M. Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences (en_US)
dspace.imported: 2020-03-23T18:11:17Z (en_US)
mit.thesis.degree: Master (en_US)
mit.thesis.department: Media (en_US)
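The abstract describes gesture and voice events being mapped to web UI elements for navigating an extendable application list. As an illustration only, the following is a minimal hypothetical sketch of such a dispatcher; the gesture names, handler API, and UI model are assumptions for this example and are not taken from the thesis implementation:

```javascript
// Hypothetical dispatcher: maps recognized gesture/voice event names
// to web UI actions (illustrative names, not the thesis's actual API).
const actions = {
  'swipe-left':  (ui) => ui.next(),            // advance in the app list
  'swipe-right': (ui) => ui.prev(),            // go back in the app list
  'open-photos': (ui) => ui.launch('photos'),  // voice command example
  'open-youtube': (ui) => ui.launch('youtube'),
};

// Dispatch a recognized input event against the UI model.
function dispatch(event, ui) {
  const handler = actions[event.name];
  if (handler) handler(ui);
  return ui.state;
}

// Minimal stand-in for the menu screen's extendable application list.
function makeUI(apps) {
  let index = 0;
  let active = null;
  return {
    next: () => { index = (index + 1) % apps.length; },
    prev: () => { index = (index - 1 + apps.length) % apps.length; },
    launch: (name) => { active = name; },
    get state() { return { selected: apps[index], active }; },
  };
}
```

In a real system the `event` objects would come from a stereo-camera gesture recognizer and a voice assistant rather than being constructed by hand; the table-driven mapping is what makes the application list extendable without changing the dispatch logic.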

