Gesture in Automatic Discourse Processing
Author(s)
Eisenstein, Jacob
Other Contributors
Natural Language Processing
Advisor
Randall Davis
Abstract
Computers cannot fully understand spoken language without access to the wide range of modalities that accompany speech. This thesis addresses the particularly expressive modality of hand gesture, and focuses on building structured statistical models at the intersection of speech, vision, and meaning.

My approach is distinguished in two key respects. First, gestural patterns are leveraged to discover parallel structures in the meaning of the associated speech. This differs from prior work that attempted to interpret individual gestures directly, an approach that generalized poorly across speakers. Second, I present novel, structured statistical models for multimodal language processing, which enable learning about gesture in its linguistic context, rather than in the abstract.

These ideas find successful application in a variety of language processing tasks: resolving ambiguous noun phrases, segmenting speech into topics, and producing keyframe summaries of spoken language. In all three cases, the addition of gestural features -- extracted automatically from video -- yields significantly improved performance over a state-of-the-art text-only alternative. This marks the first demonstration that hand gesture improves automatic discourse processing.
Date issued
2008-05-07
Other identifiers
MIT-CSAIL-TR-2008-027