
dc.contributor.author: Adler, Aaron D.
dc.date.accessioned: 2004-10-20T20:31:48Z
dc.date.available: 2004-10-20T20:31:48Z
dc.date.issued: 2003-02-01
dc.identifier.other: AITR-2003-004
dc.identifier.uri: http://hdl.handle.net/1721.1/7103
dc.description.abstract: Sketches are commonly used in the early stages of design. Our previous system allows users to sketch mechanical systems that the computer interprets. However, some parts of the mechanical system might be too hard or too complicated to express in the sketch. Adding speech recognition to create a multimodal system would move us toward our goal of creating a more natural user interface. This thesis examines the relationship between the verbal and sketch input, particularly how to segment and align the two inputs. Toward this end, subjects were recorded while they sketched and talked. These recordings were transcribed, and a set of rules to perform segmentation and alignment was created. These rules represent the knowledge that the computer needs to perform segmentation and alignment. The rules successfully interpreted the 24 data sets that they were given.
dc.format.extent: 193 p.
dc.format.extent: 34430522 bytes
dc.format.extent: 46149955 bytes
dc.format.mimetype: application/postscript
dc.format.mimetype: application/pdf
dc.language.iso: en_US
dc.relation.ispartofseries: AITR-2003-004
dc.subject: AI
dc.subject: sketch
dc.subject: design
dc.subject: multimodal
dc.subject: disambiguation
dc.subject: segmentation
dc.subject: alignment
dc.title: Segmentation and Alignment of Speech and Sketching in a Design Environment

