
dc.contributor.advisor: Cynthia Breazeal. [en_US]
dc.contributor.author: Lee, Jin Joo [en_US]
dc.contributor.other: Massachusetts Institute of Technology. Dept. of Architecture. Program in Media Arts and Sciences. [en_US]
dc.date.accessioned: 2012-02-28T18:49:01Z
dc.date.available: 2012-02-28T18:49:01Z
dc.date.copyright: 2011 [en_US]
dc.date.issued: 2011 [en_US]
dc.identifier.uri: http://hdl.handle.net/1721.1/69244
dc.description: Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2011. [en_US]
dc.description: This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. [en_US]
dc.description: Cataloged from student submitted PDF version of thesis. [en_US]
dc.description: Includes bibliographical references (p. 105-108). [en_US]
dc.description.abstract: We describe the design, implementation, and validation of a computational model for recognizing interpersonal trust in social interactions. We begin by leveraging pre-existing datasets to understand how synchronous movement, mimicry, and gestural cues relate to trust. We found that although synchronous movement was not predictive of trust, it was positively correlated with mimicry: people who mimicked each other more frequently also moved more synchronously in time. Revealing the versatile nature of unconscious mimicry, we found mimicry to be predictive of liking between participants rather than trust. We reconfirmed that four negative gestural cues (leaning backward, face-touching, hand-touching, and crossing arms), taken together, are predictive of lower levels of trust, while three positive gestural cues (leaning forward, arms-in-lap, and open arms) are predictive of higher levels of trust. We train and validate a probabilistic graphical model using natural social interaction data from 74 participants. By observing how these seven gestures unfold throughout a social interaction, our Trust Hidden Markov Model predicts with 94% accuracy whether an individual is willing to behave cooperatively or uncooperatively with a novel partner. By simulating the resulting model, we found that not only the frequency with which the predictive gestures are emitted matters, but also the sequence in which negative and positive cues occur. We attempt to automate this recognition process by detecting the trust-related behaviors through 3D motion-capture technology and gesture-recognition algorithms. Finally, we test how accurately the entire system, from low-level gesture recognition to high-level trust recognition, can predict whether an individual finds another trustworthy or untrustworthy. [en_US]
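The abstract's core idea (classifying a partner as cooperative or uncooperative from the sequence of emitted gestures via a Hidden Markov Model) can be illustrated with a minimal sketch: one toy HMM per outcome class, with classification by comparing forward-algorithm likelihoods. The state structure, all parameter values, and the gesture encoding below are illustrative assumptions, not the thesis's trained Trust HMM.

```python
def forward_likelihood(obs, start, trans, emit):
    """P(obs | model) via the forward algorithm for a discrete-emission HMM."""
    n = len(start)
    # Initialize with the first observation.
    alpha = [start[s] * emit[s][obs[0]] for s in range(n)]
    # Recurse over the remaining observations.
    for o in obs[1:]:
        alpha = [sum(alpha[sp] * trans[sp][s] for sp in range(n)) * emit[s][o]
                 for s in range(n)]
    return sum(alpha)

# Hypothetical gesture vocabulary (indices), loosely following the seven
# cues named in the abstract: 0=lean-forward, 1=arms-in-lap, 2=open-arms,
# 3=lean-backward, 4=face-touch, 5=hand-touch, 6=crossed-arms.
# Two toy 2-state models; state 0 favors positive cues, state 1 negative.
COOP = dict(start=[0.8, 0.2],
            trans=[[0.9, 0.1], [0.3, 0.7]],
            emit=[[0.3, 0.3, 0.3, 0.025, 0.025, 0.025, 0.025],
                  [0.1, 0.1, 0.1, 0.175, 0.175, 0.175, 0.175]])
UNCOOP = dict(start=[0.2, 0.8],
              trans=[[0.7, 0.3], [0.1, 0.9]],
              emit=[[0.3, 0.3, 0.3, 0.025, 0.025, 0.025, 0.025],
                    [0.1, 0.1, 0.1, 0.175, 0.175, 0.175, 0.175]])

def classify(gestures):
    """Label a gesture sequence by the higher-likelihood class model."""
    p_coop = forward_likelihood(gestures, **COOP)
    p_uncoop = forward_likelihood(gestures, **UNCOOP)
    return "cooperative" if p_coop > p_uncoop else "uncooperative"

print(classify([0, 1, 2, 0]))  # mostly positive cues
print(classify([3, 6, 4, 6]))  # mostly negative cues
```

Because the forward algorithm sums over state paths, the order of cues (not just their counts) shifts the likelihoods, which is consistent with the abstract's finding that the sequence of negative and positive cues matters, not only their frequency.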
dc.description.statementofresponsibility: by Jin Joo Lee. [en_US]
dc.format.extent: 108 p. [en_US]
dc.language.iso: eng [en_US]
dc.publisher: Massachusetts Institute of Technology [en_US]
dc.rights: M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission. [en_US]
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582 [en_US]
dc.subject: Architecture. Program in Media Arts and Sciences. [en_US]
dc.title: Modeling the dynamics of nonverbal behavior on interpersonal trust for human-robot interactions [en_US]
dc.type: Thesis [en_US]
dc.description.degree: S.M. [en_US]
dc.contributor.department: Program in Media Arts and Sciences (Massachusetts Institute of Technology)
dc.identifier.oclc: 776155780 [en_US]

