Show simple item record

dc.contributor.author: Rui Li
dc.contributor.author: Hoque, Mohammed Ehsan
dc.contributor.author: Curhan, Jared R
dc.date.accessioned: 2017-05-10T18:55:57Z
dc.date.available: 2017-05-10T18:55:57Z
dc.date.issued: 2015-07
dc.date.submitted: 2015-05
dc.identifier.isbn: 978-1-4799-6026-2
dc.identifier.isbn: 978-1-4799-6027-9
dc.identifier.uri: http://hdl.handle.net/1721.1/108792
dc.description.abstract: Effective video-conferencing conversations are heavily influenced by each speaker's facial expressions. In this study, we propose a novel probabilistic model of interactional synchrony between conversation partners' facial expressions in video-conferencing communication. In particular, we use a hidden Markov model (HMM) to capture the temporal properties of each speaker's facial expression sequence. Based on the assumption of mutual influence between conversation partners, we couple their HMMs as two interacting processes. Furthermore, we summarize the multiple coupled HMMs with a stochastic-process prior to discover a set of facial synchronization templates shared among the conversation pairs. We validate the model by using the exhibition of these facial synchronization templates to predict the outcomes of video-conferencing conversations. The dataset comprises 75 video-conferencing conversations between 150 Amazon Mechanical Turk workers in the context of a new-recruit negotiation. The results show that our proposed model achieves higher accuracy in predicting negotiation winners than support vector machines and canonical HMMs. Further analysis indicates that some synchronized nonverbal templates contribute more than others to predicting negotiation outcomes.
dc.language.iso: en_US
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.relation.isversionof: http://dx.doi.org/10.1109/FG.2015.7163102
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/
dc.source: Other univ. web domain
dc.title: Predicting video-conferencing conversation outcomes based on modeling facial expression synchronization
dc.type: Article
dc.identifier.citation: Rui Li; Curhan, Jared; and Hoque, Mohammed Ehsan. “Predicting Video-Conferencing Conversation Outcomes Based on Modeling Facial Expression Synchronization.” 2015 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), May 2015, Ljubljana, Slovenia. Institute of Electrical and Electronics Engineers (IEEE), July 2015.
dc.contributor.department: Sloan School of Management
dc.contributor.mitauthor: Curhan, Jared R
dc.relation.journal: 2015 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG)
dc.eprint.version: Author's final manuscript
dc.type.uri: http://purl.org/eprint/type/ConferencePaper
eprint.status: http://purl.org/eprint/status/NonPeerReviewed
dspace.orderedauthors: Rui Li; Curhan, Jared; Hoque, Mohammed Ehsan
dspace.embargo.terms: N
dc.identifier.orcid: https://orcid.org/0000-0003-0625-1831
mit.license: OPEN_ACCESS_POLICY
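
The abstract's core idea — coupling two speakers' HMMs so that each speaker's next hidden state depends on both speakers' previous states — can be illustrated with a minimal forward-pass sketch. This is not the paper's implementation: the actual model additionally places a stochastic-process prior over many coupled HMMs to extract shared synchronization templates. All state counts, parameters, and observations below are hypothetical, randomly generated stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 3   # hidden facial-expression states per speaker (hypothetical size)
M = 4   # discrete observation symbols, e.g. quantized expression features
T = 10  # number of video frames

def normalize(a):
    """Normalize along the last axis so each distribution sums to 1."""
    return a / a.sum(axis=-1, keepdims=True)

# Coupled transitions: each speaker's next state depends on BOTH speakers'
# previous states -- the "mutual influence" assumption from the abstract.
trans_A = normalize(rng.random((K, K, K)))  # P(a_t | a_{t-1}, b_{t-1})
trans_B = normalize(rng.random((K, K, K)))  # P(b_t | a_{t-1}, b_{t-1})
emit_A = normalize(rng.random((K, M)))      # P(obs_A | a)
emit_B = normalize(rng.random((K, M)))      # P(obs_B | b)
init = rng.random((K, K))
init /= init.sum()                          # joint initial distribution

# Synthetic observation sequences for the two conversation partners.
obs_A = rng.integers(0, M, T)
obs_B = rng.integers(0, M, T)

# Scaled forward recursion over the joint state (a, b).
alpha = init * emit_A[:, obs_A[0]][:, None] * emit_B[:, obs_B[0]][None, :]
loglik = 0.0
for t in range(1, T):
    scale = alpha.sum()
    loglik += np.log(scale)
    alpha /= scale  # rescale to avoid numerical underflow
    # Joint transition factorizes as trans_A * trans_B given both previous states.
    alpha = np.einsum("ij,ijk,ijl->kl", alpha, trans_A, trans_B)
    alpha *= emit_A[:, obs_A[t]][:, None] * emit_B[:, obs_B[t]][None, :]
loglik += np.log(alpha.sum())
print(f"coupled-HMM log-likelihood: {loglik:.3f}")
```

In a prediction setting like the one described above, such per-conversation log-likelihoods (one per candidate outcome model) could serve as features or classification scores; the paper's reported comparison against SVMs and canonical HMMs suggests the coupling itself carries the predictive signal.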

