| dc.contributor.author | Li, Rui | |
| dc.contributor.author | Hoque, Mohammed Ehsan | |
| dc.contributor.author | Curhan, Jared R | |
| dc.date.accessioned | 2017-05-10T18:55:57Z | |
| dc.date.available | 2017-05-10T18:55:57Z | |
| dc.date.issued | 2015-07 | |
| dc.date.submitted | 2015-05 | |
| dc.identifier.isbn | 978-1-4799-6026-2 | |
| dc.identifier.isbn | 978-1-4799-6027-9 | |
| dc.identifier.uri | http://hdl.handle.net/1721.1/108792 | |
| dc.description.abstract | Effective video-conferencing conversations are heavily influenced by each speaker's facial expressions. In this study, we propose a novel probabilistic model to represent the interactional synchrony of conversation partners' facial expressions in video-conferencing communication. In particular, we use a hidden Markov model (HMM) to capture the temporal properties of each speaker's facial expression sequence. Based on the assumption of mutual influence between conversation partners, we couple their HMMs as two interacting processes. Furthermore, we summarize the multiple coupled HMMs with a stochastic process prior in order to discover a set of facial synchronization templates shared among the multiple conversation pairs. We validate the model by using the exhibition of these facial synchronization templates to predict the outcomes of video-conferencing conversations. The dataset comprises 75 video-conferencing conversations between 150 Amazon Mechanical Turk workers in the context of a new-recruit negotiation. The results show that our proposed model achieves higher accuracy in predicting negotiation winners than a support vector machine and canonical HMMs. Further analysis indicates that some synchronized nonverbal templates contribute more than others to predicting the negotiation outcomes. | en_US |
| dc.language.iso | en_US | |
| dc.publisher | Institute of Electrical and Electronics Engineers (IEEE) | en_US |
| dc.relation.isversionof | http://dx.doi.org/10.1109/FG.2015.7163102 | en_US |
| dc.rights | Creative Commons Attribution-Noncommercial-Share Alike | en_US |
| dc.rights.uri | http://creativecommons.org/licenses/by-nc-sa/4.0/ | en_US |
| dc.source | Other univ. web domain | en_US |
| dc.title | Predicting video-conferencing conversation outcomes based on modeling facial expression synchronization | en_US |
| dc.type | Article | en_US |
| dc.identifier.citation | Li, Rui; Curhan, Jared R. and Hoque, Mohammed Ehsan. “Predicting Video-Conferencing Conversation Outcomes Based on Modeling Facial Expression Synchronization.” 2015 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), May 2015, Ljubljana, Slovenia, Institute of Electrical and Electronics Engineers (IEEE), July 2015. | en_US |
| dc.contributor.department | Sloan School of Management | en_US |
| dc.contributor.mitauthor | Curhan, Jared R | |
| dc.relation.journal | 2015 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG) | en_US |
| dc.eprint.version | Author's final manuscript | en_US |
| dc.type.uri | http://purl.org/eprint/type/ConferencePaper | en_US |
| eprint.status | http://purl.org/eprint/status/NonPeerReviewed | en_US |
| dspace.orderedauthors | Li, Rui; Curhan, Jared; Hoque, Mohammed Ehsan | en_US |
| dspace.embargo.terms | N | en_US |
| dc.identifier.orcid | https://orcid.org/0000-0003-0625-1831 | |
| mit.license | OPEN_ACCESS_POLICY | en_US |