Toward a One-interaction Data-driven Guide: Putting co-Speech Gesture Evidence to Work for Ambiguous Route Instructions
Author(s)
DePalma, Nicholas; Smith, H; Chernova, Sonia; Hodgins, Jessica
Download: 3434074.3447223.pdf (20.82 MB)
Publisher with Creative Commons License
Terms of use
Creative Commons Attribution
Abstract
While recent work on gesture synthesis in the agent and robot literature has treated gesture as co-speech, and thus as dependent on verbal utterances, we present evidence that gesture may also leverage model context (i.e., the navigational task) and is not solely dependent on the verbal utterance. This effect is particularly evident in ambiguous verbal utterances. Decoupling this dependency may allow future systems to synthesize gestures that clarify an ambiguous verbal utterance, while also enabling research toward a better understanding of gesture semantics. We bring together evidence from our own experience in this domain that shows, for the first time, the end-to-end concerns that models must address to synthesize gesture for one-shot interactions while preserving user outcomes and still permitting ambiguous utterances by the robot. We discuss these issues in the context of "cardinal direction gesture plans," which represent instructions referring to the actions the human must take in the future.
Description
HRI ’21 Companion, March 8–11, 2021, Boulder, CO, USA
Date issued
2021-03-08
Department
Massachusetts Institute of Technology. Media Laboratory
Publisher
ACM | Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction
Citation
DePalma, Nicholas, H. Smith, Sonia Chernova, and Jessica Hodgins. 2021. "Toward a One-interaction Data-driven Guide: Putting co-Speech Gesture Evidence to Work for Ambiguous Route Instructions." In Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’21 Companion). ACM.
Version: Final published version
ISBN
978-1-4503-8290-8