
dc.contributor.advisor: Joseph A. Paradiso.
dc.contributor.author: Benbasat, Ari Yosef, 1975-
dc.contributor.other: Massachusetts Institute of Technology. Dept. of Architecture. Program In Media Arts and Sciences
dc.date.accessioned: 2007-08-03T19:30:07Z
dc.date.available: 2007-08-03T19:30:07Z
dc.date.copyright: 2000
dc.date.issued: 2000
dc.identifier.uri: http://hdl.handle.net/1721.1/38451
dc.description: Thesis (S.M.)--Massachusetts Institute of Technology, Program in Media Arts & Sciences, 2000.
dc.description: Includes bibliographical references (p. 131-135).
dc.description.abstract: Inertial measurement components, which sense either acceleration or angular rate, are being embedded into common user interface devices more frequently as their cost continues to drop dramatically. These devices hold a number of advantages over other sensing technologies: they measure relevant parameters for human interfaces and can easily be embedded into wireless, mobile platforms. The work in this dissertation demonstrates that inertial measurement can be used to acquire rich data about human gestures, that we can derive efficient algorithms for using this data in gesture recognition, and that the concept of parameterized atomic gesture recognition has merit. Further, we show that a framework combining these three levels of description can be easily used by designers to create robust applications. A wireless six degree-of-freedom inertial measurement unit (IMU), with a cubical form factor (1.25 inches on a side), was constructed to collect the data, providing updates at 15 ms intervals. This data is analyzed for periods of activity using a windowed variance algorithm, whose thresholds can be set analytically. These segments are then examined by the gesture recognition algorithms, which are applied on an axis-by-axis basis to the data. The recognized gestures are considered atomic (i.e., they cannot be decomposed) and are parameterized in terms of magnitude and duration. Given these atomic gestures, a simple scripting language is developed to allow designers to combine them into full gestures of interest. It allows matching of recognized atomic gestures to prototypes based on their type, parameters and time of occurrence. Because our goal is to eventually create stand-alone devices, the algorithms designed for this framework have both low algorithmic complexity and low latency, at the price of a small loss in generality. To demonstrate this system, the gesture recognition portion of (void*): A Cast of Characters, an installation which used a pair of hand-held IMUs to capture gestural inputs, was implemented using this framework. This version ran much faster than the original version (based on Hidden Markov Models), used less processing power, and performed at least as well.
dc.description.statementofresponsibility: by Ari Yosef Benbasat.
dc.format.extent: 135 p.
dc.language.iso: eng
dc.publisher: Massachusetts Institute of Technology
dc.rights: M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission.
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582
dc.subject: Architecture. Program In Media Arts and Sciences
dc.title: An inertial measurement unit for user interfaces
dc.type: Thesis
dc.description.degree: S.M.
dc.contributor.department: Program in Media Arts and Sciences (Massachusetts Institute of Technology)
dc.identifier.oclc: 48591488
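
Note: the abstract above describes segmenting the inertial data stream into periods of activity with a windowed variance test, whose thresholds can be set analytically, before per-axis gesture recognition is applied. The short Python sketch below illustrates only that windowed-variance segmentation step. It is not code from the thesis; the window length, threshold value, and (N, 6) sample layout are assumptions chosen for the example.

# Illustrative sketch of windowed-variance activity detection on 6-DOF IMU data.
# Window length and threshold are assumed values, not taken from the thesis.
import numpy as np

SAMPLE_PERIOD_S = 0.015   # IMU updates every 15 ms (from the abstract)
WINDOW = 16               # sliding-window length in samples (assumed)
THRESHOLD = 0.05          # per-axis variance threshold (assumed)

def active_segments(samples: np.ndarray) -> np.ndarray:
    """Return a boolean mask marking samples that fall in an active window.

    `samples` is an (N, 6) array: three accelerometer and three gyro axes.
    A window is flagged active if any axis's variance exceeds the threshold.
    """
    n = samples.shape[0]
    mask = np.zeros(n, dtype=bool)
    for start in range(0, n - WINDOW + 1):
        window = samples[start:start + WINDOW]
        if np.any(window.var(axis=0) > THRESHOLD):
            mask[start:start + WINDOW] = True
    return mask

if __name__ == "__main__":
    # Synthetic data: quiet sensor noise with a burst of activity in the middle.
    rng = np.random.default_rng(0)
    data = rng.normal(0.0, 0.01, size=(1000, 6))
    data[400:500] += rng.normal(0.0, 0.5, size=(100, 6))
    mask = active_segments(data)
    print(f"Active samples: {mask.sum()} of {len(mask)}")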

