Predicting video-conferencing conversation outcomes based on modeling facial expression synchronization
Author(s): Rui Li; Mohammed Ehsan Hoque; Jared R. Curhan
Effective video-conferencing conversations are heavily influenced by each speaker's facial expressions. In this study, we propose a novel probabilistic model to represent the interactional synchrony of conversation partners' facial expressions in video-conferencing communication. In particular, we use a hidden Markov model (HMM) to capture the temporal properties of each speaker's facial expression sequence. Based on the assumption of mutual influence between conversation partners, we couple their HMMs as two interacting processes. Furthermore, we summarize the multiple coupled HMMs with a stochastic process prior to discover a set of facial synchronization templates shared among the multiple conversation pairs. We validate the model by using the exhibition of these facial synchronization templates to predict the outcomes of video-conferencing conversations. The dataset includes 75 video-conferencing conversations from 150 Amazon Mechanical Turk workers in the context of a new-recruit negotiation. The results show that our proposed model achieves higher accuracy in predicting negotiation winners than support vector machines and canonical HMMs. Further analysis indicates that some synchronized nonverbal templates contribute more than others to predicting the negotiation outcomes.
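To illustrate the coupling assumption described above, the sketch below samples from two Markov chains whose transitions depend on both partners' previous states, which is the core idea behind coupling two HMMs. This is a minimal illustration, not the paper's model: the number of facial-expression states, the random transition tensors, and the sampling helper are all hypothetical, and the emission distributions, the stochastic process prior, and the template discovery step are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

S = 3  # hypothetical number of facial-expression states per speaker


def random_coupled_transitions(n_states, rng):
    """Build a coupled transition tensor T[a_prev, b_prev, a_next]:
    a speaker's next state depends on BOTH partners' previous states."""
    T = rng.random((n_states, n_states, n_states))
    return T / T.sum(axis=-1, keepdims=True)  # normalize over next state


T_a = random_coupled_transitions(S, rng)  # transitions for speaker A
T_b = random_coupled_transitions(S, rng)  # transitions for speaker B


def sample_coupled_chain(T_a, T_b, length, rng):
    """Sample paired state sequences from two mutually coupled chains."""
    a, b = [int(rng.integers(S))], [int(rng.integers(S))]
    for _ in range(length - 1):
        prev_a, prev_b = a[-1], b[-1]
        # Each speaker's next state is drawn conditioned on both
        # partners' states at the previous time step.
        a.append(int(rng.choice(S, p=T_a[prev_a, prev_b])))
        b.append(int(rng.choice(S, p=T_b[prev_a, prev_b])))
    return np.array(a), np.array(b)


seq_a, seq_b = sample_coupled_chain(T_a, T_b, length=20, rng=rng)
```

In a full coupled HMM, these hidden state sequences would additionally emit observed facial-expression features, and inference would recover the coupled transition structure from data rather than sampling from fixed random tensors.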
Department: Sloan School of Management
2015 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG)
Institute of Electrical and Electronics Engineers (IEEE)
Li, Rui, Jared R. Curhan, and Mohammed Ehsan Hoque. "Predicting Video-Conferencing Conversation Outcomes Based on Modeling Facial Expression Synchronization." 2015 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), May 2015, Ljubljana, Slovenia. Institute of Electrical and Electronics Engineers (IEEE), July 2015.
Author's final manuscript