Acume: A New Visualization Tool for Understanding Facial Expression and Gesture Data
Author(s)
McDuff, Daniel Jonathan; el Kaliouby, Rana; Kassam, Karim; Picard, Rosalind W.
Open Access Policy
Terms of use
Creative Commons Attribution-Noncommercial-Share Alike
Abstract
Facial and head actions contain significant affective information. To date, these actions have mostly been studied in isolation because the space of naturalistic combinations is vast. Interactive visualization tools could enable new explorations of dynamically changing combinations of actions as people interact with natural stimuli. This paper describes a new open-source tool for navigating and interacting with dynamic face and gesture data across large groups of people, making it easy to see when multiple facial actions co-occur and how these patterns compare and cluster across groups of participants. We present two case studies that demonstrate how the tool lets researchers quickly view an entire corpus of data for single or multiple participants, stimuli and actions. Acume revealed patterns of actions across participants and across stimuli, and provided insight into how our automated facial analysis methods could be better designed. The results of these case studies demonstrate the efficacy of the tool. The open-source code is designed to directly address the needs of the face and gesture research community, while remaining extensible and flexible enough to accommodate other kinds of behavioral data. Source code, application and documentation are available at http://affect.media.mit.edu/acume.
Date issued
2011-05
Department
Massachusetts Institute of Technology. Media Laboratory
Journal
2011 IEEE International Conference on Automatic Face & Gesture Recognition and Workshops (FG 2011)
Publisher
Institute of Electrical and Electronics Engineers
Citation
McDuff, Daniel et al. “Acume: A New Visualization Tool for Understanding Facial Expression and Gesture Data.” Face and Gesture 2011. Santa Barbara, CA, USA, 2011. 591-596.
Version: Author's final manuscript
ISBN
978-1-4244-9140-7