
dc.contributor.advisor | Cynthia Breazeal. | en_US
dc.contributor.author | Lee, Jin Joo | en_US
dc.contributor.other | Program in Media Arts and Sciences (Massachusetts Institute of Technology) | en_US
dc.date.accessioned | 2017-12-20T17:25:22Z |
dc.date.available | 2017-12-20T17:25:22Z |
dc.date.copyright | 2017 | en_US
dc.date.issued | 2017 | en_US
dc.identifier.uri | http://hdl.handle.net/1721.1/112851 |
dc.description | Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2017. | en_US
dc.description | This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. | en_US
dc.description | Cataloged from student-submitted PDF version of thesis. | en_US
dc.description | Includes bibliographical references (pages 115-122). | en_US
dc.description.abstract | Much of human social communication is channeled through our facial expressions, body language, gaze directions, and many other nonverbal behaviors. A robot's ability to express and recognize the emotional states of people through these nonverbal channels is at the core of artificial social intelligence. The purpose of this thesis is to define a computational framework for nonverbal communication in human-robot interactions. We address both sides of nonverbal communication, the decoding and the encoding of social-emotional states through nonverbal behaviors, and demonstrate their shared underlying representation. We use our computational framework to model engagement/attention in storytelling interactions. Storytelling is an interaction form mutually regulated between storytellers and listeners, where a key dynamic is the back-and-forth process of speaker cues and listener responses. Listeners convey attentiveness through nonverbal back-channels, while storytellers use nonverbal cues to elicit this feedback. We demonstrate that storytellers employ plans, albeit short ones, to influence and infer the attentive state of listeners using these speaker cues. We computationally model the intentional inference of storytellers as a planning problem of getting listeners to pay attention. When accounting for this intentional context of storytellers, our attention estimator outperforms current state-of-the-art approaches to emotion recognition. By formulating emotion recognition as a planning problem, we apply a recent artificial intelligence method of inverting planning models to perform belief inference. We computationally model emotion expression as a combined process of estimating a person's beliefs through inference inversion and then producing nonverbal expressions to affect those beliefs. We demonstrate that a robotic agent operating under our belief manipulation paradigm more effectively communicates an attentive state compared to current state-of-the-art approaches that cannot dynamically capture how the robot's expressions are interpreted by the human partner. | en_US
dc.description.statementofresponsibility | Jin Joo Lee. | en_US
dc.format.extent | 137, [2] pages | en_US
dc.language.iso | eng | en_US
dc.publisher | Massachusetts Institute of Technology | en_US
dc.rights | MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission. | en_US
dc.rights.uri | http://dspace.mit.edu/handle/1721.1/7582 | en_US
dc.subject | Program in Media Arts and Sciences | en_US
dc.title | A Bayesian theory of mind approach to nonverbal communication for human-robot interactions : a computational formulation of intentional inference and belief manipulation | en_US
dc.type | Thesis | en_US
dc.description.degree | Ph. D. | en_US
dc.contributor.department | Program in Media Arts and Sciences (Massachusetts Institute of Technology) | en_US
dc.identifier.oclc | 1015248523 | en_US
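The abstract describes the decoding side of nonverbal communication as Bayesian belief inference over a listener's latent attentive state, driven by observed back-channel responses. The sketch below is a toy illustration of that idea only: the state names, transition probabilities, and observation likelihoods are all invented for demonstration, and the thesis's actual approach inverts a richer planning model rather than running a plain Bayes filter.

```python
# Toy Bayesian filter over a listener's latent attention state.
# All states and probabilities are illustrative assumptions,
# not parameters from the thesis.

ATTENTION_STATES = ["attentive", "distracted"]

# P(next_state | state): attention tends to persist over time.
TRANSITION = {
    "attentive":  {"attentive": 0.9, "distracted": 0.1},
    "distracted": {"attentive": 0.2, "distracted": 0.8},
}

# P(observation | state): attentive listeners back-channel more often.
OBSERVATION = {
    "attentive":  {"backchannel": 0.7, "no_response": 0.3},
    "distracted": {"backchannel": 0.2, "no_response": 0.8},
}

def update_belief(belief, observation):
    """One predict-then-correct step of Bayesian belief updating."""
    # Predict: push the current belief through the transition model.
    predicted = {
        s: sum(belief[p] * TRANSITION[p][s] for p in ATTENTION_STATES)
        for s in ATTENTION_STATES
    }
    # Correct: weight each state by the likelihood of the observed response.
    unnormalized = {s: predicted[s] * OBSERVATION[s][observation]
                    for s in ATTENTION_STATES}
    total = sum(unnormalized.values())
    return {s: v / total for s, v in unnormalized.items()}

# Starting from a uniform prior, each listener response shifts the
# storyteller's belief about the listener's attention.
belief = {"attentive": 0.5, "distracted": 0.5}
for response in ["no_response", "no_response", "backchannel"]:
    belief = update_belief(belief, response)
print(belief)
```

Under this framing, the encoding side the abstract mentions (belief manipulation) would choose expressive actions so that the partner's belief, updated by the same kind of filter, moves toward the state the robot wants to communicate.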

