dc.contributor.author | McDuff, Daniel Jonathan | |
dc.contributor.author | el Kaliouby, Rana | |
dc.contributor.author | Kassam, Karim | |
dc.contributor.author | Picard, Rosalind W. | |
dc.date.accessioned | 2011-12-06T18:03:15Z | |
dc.date.available | 2011-12-06T18:03:15Z | |
dc.date.issued | 2011-05 | |
dc.date.submitted | 2011-03 | |
dc.identifier.isbn | 978-1-4244-9140-7 | |
dc.identifier.uri | http://hdl.handle.net/1721.1/67459 | |
dc.description.abstract | Facial and head actions contain significant affective information. To date, these actions have mostly been studied in isolation because the space of naturalistic combinations is vast. Interactive visualization tools could enable new explorations of dynamically changing combinations of actions as people interact with natural stimuli. This paper describes Acume, a new open-source tool that enables navigation of and interaction with dynamic face and gesture data across large groups of people, making it easy to see when multiple facial actions co-occur and how these patterns compare and cluster across groups of participants. We share two case studies that demonstrate how the tool allows researchers to quickly view an entire corpus of data for single or multiple participants, stimuli, and actions. Acume revealed patterns of actions across participants and across stimuli, and provided insight into how our automated facial analysis methods could be better designed. The results of these case studies demonstrate the efficacy of the tool. The open-source code is designed to directly address the needs of the face and gesture research community, while also being extensible and flexible enough to accommodate other kinds of behavioral data. Source code, application, and documentation are available at http://affect.media.mit.edu/acume. | en_US
dc.description.sponsorship | Procter & Gamble Company | en_US |
dc.language.iso | en_US | |
dc.publisher | Institute of Electrical and Electronics Engineers | en_US |
dc.relation.isversionof | http://dx.doi.org/10.1109/FG.2011.5771464 | en_US |
dc.rights | Creative Commons Attribution-Noncommercial-Share Alike 3.0 | en_US |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-sa/3.0/ | en_US |
dc.source | Javier Hernandez Rivera | en_US |
dc.title | Acume: A New Visualization Tool for Understanding Facial Expression and Gesture Data | en_US |
dc.type | Article | en_US |
dc.identifier.citation | McDuff, Daniel et al. “Acume: A New Visualization Tool for Understanding Facial Expression and Gesture Data.” Face and Gesture 2011. Santa Barbara, CA, USA, 2011. 591-596. | en_US |
dc.contributor.department | Massachusetts Institute of Technology. Media Laboratory | en_US |
dc.contributor.mitauthor | McDuff, Daniel Jonathan | |
dc.contributor.mitauthor | el Kaliouby, Rana | |
dc.contributor.mitauthor | Picard, Rosalind W. | |
dc.relation.journal | 2011 IEEE International Conference on Automatic Face & Gesture Recognition and Workshops (FG 2011) | en_US |
dc.eprint.version | Author's final manuscript | en_US |
dc.type.uri | http://purl.org/eprint/type/ConferencePaper | en_US |
dspace.orderedauthors | McDuff, Daniel; Kaliouby, Rana el; Kassam, Karim; Picard, Rosalind | en |
dc.identifier.orcid | https://orcid.org/0000-0002-5661-0022 | |
mit.license | OPEN_ACCESS_POLICY | en_US |
mit.metadata.status | Complete | |