Show simple item record

dc.contributor.author: McDuff, Daniel Jonathan
dc.contributor.author: Picard, Rosalind W.
dc.contributor.author: El Kaliouby, Rana
dc.date.accessioned: 2013-08-21T17:17:40Z
dc.date.available: 2013-08-21T17:17:40Z
dc.date.issued: 2011-11
dc.identifier.isbn: 9781450306416
dc.identifier.uri: http://hdl.handle.net/1721.1/79895
dc.description.abstract: In the past, collecting data to train facial expression and affect recognition systems has been time consuming and often led to data that do not include spontaneous expressions. We present the first crowdsourced data collection of dynamic, natural and spontaneous facial responses as viewers watch media online. This system allowed a corpus of 3,268 videos to be collected in under two months. We characterize the data in terms of viewer demographics, position, scale, pose and movement of the viewer within the frame, and illumination of the facial region. We compare statistics from this corpus to those from the CK+ and MMI databases and show that distributions of position, scale, pose, movement and luminance of the facial region are significantly different from those represented in these datasets. We demonstrate that it is possible to efficiently collect massive amounts of ecologically valid responses, to known stimuli, from a diverse population using such a system. In addition, facial feature points within the videos can be tracked for over 90% of the frames. These responses were collected without need for scheduling, payment or recruitment. Finally, we describe a subset of data (over 290 videos) that will be available for the research community.
dc.description.sponsorship: Things That Think Consortium
dc.description.sponsorship: Procter & Gamble Company
dc.language.iso: en_US
dc.publisher: Association for Computing Machinery (ACM)
dc.relation.isversionof: http://dx.doi.org/10.1145/2070481.2070486
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike 3.0
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/3.0/
dc.source: MIT Web Domain
dc.title: Crowdsourced data collection of facial responses
dc.type: Article
dc.identifier.citation: Daniel McDuff, Rana el Kaliouby, and Rosalind Picard. 2011. Crowdsourced data collection of facial responses. In Proceedings of the 13th international conference on multimodal interfaces (ICMI '11). ACM, New York, NY, USA, 11-18.
dc.contributor.department: Massachusetts Institute of Technology. Media Laboratory
dc.contributor.department: Program in Media Arts and Sciences (Massachusetts Institute of Technology)
dc.contributor.mitauthor: McDuff, Daniel Jonathan
dc.contributor.mitauthor: el Kaliouby, Rana
dc.contributor.mitauthor: Picard, Rosalind W.
dc.relation.journal: Proceedings of the 13th international conference on multimodal interfaces (ICMI '11)
dc.eprint.version: Author's final manuscript
dc.type.uri: http://purl.org/eprint/type/ConferencePaper
eprint.status: http://purl.org/eprint/status/NonPeerReviewed
dspace.orderedauthors: McDuff, Daniel; el Kaliouby, Rana; Picard, Rosalind
dc.identifier.orcid: https://orcid.org/0000-0002-5661-0022
mit.license: OPEN_ACCESS_POLICY
mit.metadata.status: Complete

