Role of Speaker Cues in Attention Inference
Author(s): Lee, Jin Joo; DeSteno, David; Breazeal, Cynthia L.
Current state-of-the-art approaches to emotion recognition primarily model an individual's nonverbal expressions in isolation, without reference to contextual elements such as the co-presence of an interaction partner. In this paper, we demonstrate that accurately inferring a listener's social-emotional state of attention depends on accounting for the nonverbal behaviors of their storytelling partner, namely their speaker cues. To gain a deeper understanding of the role of speaker cues in attention inference, we investigate real-world interactions in which children (5–6 years old) tell stories to their peers. Through in-depth analysis of human–human interaction data, we first identify nonverbal speaker cues (i.e., backchannel-inviting cues) and listener responses (i.e., backchannel feedback). We then demonstrate how speaker cues can modify the interpretation of attention-related backchannels and serve as a means to regulate the responsiveness of listeners. We discuss the design implications of our findings toward our primary goal of developing attention recognition models for storytelling robots, and we argue that social robots can proactively use speaker cues to form more accurate inferences about the attentive state of their human partners.
Department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science; Program in Media Arts and Sciences (Massachusetts Institute of Technology)
Journal: Frontiers in Robotics and AI
Publisher: Frontiers Media SA
Citation: Lee, Jin Joo, Cynthia Breazeal, and David DeSteno. “Role of Speaker Cues in Attention Inference.” Frontiers in Robotics and AI 4 (October 31, 2017).
Version: Final published version