Crowdsourced data collection of facial responses
Author(s)
McDuff, Daniel Jonathan; Picard, Rosalind W.; El Kaliouby, Rana
Download: Picard_Crowdsourced data.pdf (4.877 MB)
Terms of use
Creative Commons Attribution-Noncommercial-Share Alike (Open Access Policy)
Abstract
In the past, collecting data to train facial expression and affect recognition systems has been time-consuming and has often yielded data that do not include spontaneous expressions. We present the first crowdsourced collection of dynamic, natural, and spontaneous facial responses recorded as viewers watch media online. This system allowed a corpus of 3,268 videos to be collected in under two months.
We characterize the data in terms of viewer demographics, position, scale, pose and movement of the viewer within the frame, and illumination of the facial region. We compare statistics from this corpus to those from the CK+ and MMI databases and show that distributions of position, scale, pose, movement and luminance of the facial region are significantly different from those represented in these datasets.
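As a rough illustration of the kind of facial-region statistics being compared, the sketch below computes a per-frame mean luminance for a face bounding box and applies a two-sample Kolmogorov-Smirnov test to two sets of samples. This is only a minimal sketch in Python with NumPy and SciPy; the function name, the placeholder data, and the choice of test are assumptions for illustration, not the analysis pipeline used in the paper.

    # Illustrative sketch (not the paper's pipeline): compare distributions
    # of a per-frame facial statistic between two corpora.
    import numpy as np
    from scipy.stats import ks_2samp

    def mean_face_luminance(frame, box):
        """Mean Rec. 601 luma of the face bounding box (x, y, w, h) in an RGB frame."""
        x, y, w, h = box
        face = frame[y:y + h, x:x + w]
        return float(np.dot(face[..., :3].mean(axis=(0, 1)), [0.299, 0.587, 0.114]))

    # Placeholder luminance samples standing in for two corpora
    # (e.g., crowdsourced webcam videos vs. lab-recorded videos).
    rng = np.random.default_rng(0)
    webcam_lum = rng.normal(100.0, 30.0, 5000)
    lab_lum = rng.normal(140.0, 10.0, 5000)

    stat, p = ks_2samp(webcam_lum, lab_lum)
    print(f"KS statistic = {stat:.3f}, p = {p:.2e}")  # a small p suggests the distributions differ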
We demonstrate that it is possible to efficiently collect massive amounts of ecologically valid responses to known stimuli from a diverse population using such a system. In addition, facial feature points within the videos can be tracked for over 90% of the frames. These responses were collected without the need for scheduling, payment, or recruitment. Finally, we describe a subset of the data (over 290 videos) that will be made available to the research community.
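The 90% figure above is a coverage rate over frames. A minimal sketch of how such a rate could be computed is shown below; track_points is a hypothetical stand-in for whatever feature-point tracker is applied, returning a point list on success and None on failure, and is not the tracker used in the paper.

    # Illustrative sketch (assumed tracker interface): fraction of frames
    # in which facial feature points were successfully tracked.
    def tracking_coverage(frames, track_points):
        total = 0
        tracked = 0
        for frame in frames:
            total += 1
            if track_points(frame):  # a non-empty point list counts as a success
                tracked += 1
        return tracked / total if total else 0.0

    # A video clears the reported threshold when tracking_coverage(...) > 0.9.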
Date issued
2011-11
Department
Massachusetts Institute of Technology. Media Laboratory; Program in Media Arts and Sciences (Massachusetts Institute of Technology)
Journal
Proceedings of the 13th International Conference on Multimodal Interfaces (ICMI '11)
Publisher
Association for Computing Machinery (ACM)
Citation
Daniel McDuff, Rana el Kaliouby, and Rosalind Picard. 2011. Crowdsourced data collection of facial responses. In Proceedings of the 13th International Conference on Multimodal Interfaces (ICMI '11). ACM, New York, NY, USA, 11–18.
Version: Author's final manuscript
ISBN
9781450306416