Show simple item record

dc.contributor.advisor: Justine Cassell.
dc.contributor.author: Yan, Hao, 1973-
dc.contributor.other: Massachusetts Institute of Technology. Dept. of Architecture. Program In Media Arts and Sciences.
dc.date.accessioned: 2012-05-15T21:07:32Z
dc.date.available: 2012-05-15T21:07:32Z
dc.date.copyright: 2000
dc.date.issued: 2000
dc.identifier.uri: http://hdl.handle.net/1721.1/70733
dc.description: Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2000.
dc.description: Includes bibliographical references (p. 68-71).
dc.description.abstract: Using face-to-face conversation as an interface metaphor, an embodied conversational agent is likely to be easier to use and learn than a traditional graphical user interface. To make a believable agent that has, to some extent, the same social and conversational skills as humans, an embodied conversational agent system must be able to handle user input from different communication modalities, such as speech and gesture, and to generate appropriate behaviors in those same modalities. In this thesis, I address the problem of paired speech and gesture generation in embodied conversational agents. I propose a real-time generation framework capable of producing a comprehensive description of communicative actions, including speech, gesture, and intonation, in the real-estate domain. The generation of speech, gesture, and intonation is based on a shared underlying representation of real-estate properties, discourse information structure, intentional and attentional structures, and a mechanism for updating the common ground between the user and the agent. Algorithms have been implemented to analyze discourse information structure, contrast, and surprising semantic features, which together determine the intonation contour of a speech utterance and where gestures occur. I also investigate, through a correlational study, the role of communicative goals in determining how semantic features are distributed across the speech and gesture modalities. (A hypothetical sketch of this intonation and gesture decision logic follows the record below.)
dc.description.statementofresponsibility: by Hao Yan.
dc.format.extent: 71 p.
dc.language.iso: eng
dc.publisher: Massachusetts Institute of Technology
dc.rights: M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission.
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582
dc.subject: Architecture. Program In Media Arts and Sciences.
dc.title: Paired speech and gesture generation in embodied conversational agents
dc.type: Thesis
dc.description.degree: S.M.
dc.contributor.department: Program in Media Arts and Sciences (Massachusetts Institute of Technology)
dc.identifier.oclc: 47934332
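
The abstract above describes intonation contours and gesture placement as being determined jointly by discourse information structure, contrast, and the common ground shared with the user. The sketch below is a minimal, hypothetical Python illustration of that kind of decision logic, not the thesis's actual implementation: the Concept class, the rules, and the ToBI-style accent labels (H* for new rhematic material, L+H* for contrastive material) are all illustrative assumptions.

    # Hypothetical sketch: deciding pitch accents and gesture placement from
    # discourse information structure and a common-ground set. All names and
    # rules are assumptions, not the thesis's code.
    from dataclasses import dataclass

    @dataclass
    class Concept:
        word: str
        features: frozenset          # semantic features, e.g. frozenset({"style:victorian"})
        rhematic: bool = False       # belongs to the rheme (the new-information part)?
        contrastive: bool = False    # contrasts with a previously mentioned item?

    def plan_utterance(concepts, common_ground):
        """Assign a pitch accent and a gesture flag to each concept.

        Simplified, assumed rules: contrastive items get an L+H* accent;
        rhematic items whose features are not yet in the common ground get
        H*; gestures co-occur with rhematic, not-yet-grounded items. The
        common ground is updated as each concept is realized.
        """
        plan = []
        for c in concepts:
            new_info = not c.features <= common_ground   # any feature still unknown to the user?
            if c.contrastive:
                accent = "L+H*"
            elif c.rhematic and new_info:
                accent = "H*"
            else:
                accent = None
            gesture = c.rhematic and new_info            # place a gesture on new rhematic material
            plan.append((c.word, accent, gesture))
            common_ground |= c.features                  # features are now shared knowledge
        return plan

    # Example: describing a house to a user with an empty common ground.
    ground = set()
    utterance = [
        Concept("house", frozenset({"type:house"})),
        Concept("Victorian", frozenset({"style:victorian"}), rhematic=True, contrastive=True),
        Concept("porch", frozenset({"part:porch"}), rhematic=True),
    ]
    print(plan_utterance(utterance, ground))
    # -> [('house', None, False), ('Victorian', 'L+H*', True), ('porch', 'H*', True)]

In the system the abstract describes, these decisions would draw on a much richer representation (real-estate property descriptions, intentional and attentional structures) than the flat feature sets used here.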

