Show simple item record

dc.contributor.author: McDuff, Daniel
dc.contributor.author: el Kaliouby, Rana
dc.contributor.author: Picard, Rosalind W.
dc.date.accessioned: 2017-07-19T14:21:13Z
dc.date.available: 2017-07-19T14:21:13Z
dc.date.issued: 2015-12
dc.date.submitted: 2015-09
dc.identifier.isbn: 978-1-4799-9953-8
dc.identifier.uri: http://hdl.handle.net/1721.1/110774
dc.description.abstract: Traditional observational research methods required an experimenter's presence to record videos of participants, limiting data collection to typically fewer than a few hundred people in a single location. To make a significant leap forward in affective expression data collection and the insights based on it, our work created and validated a novel framework for collecting and analyzing facial responses over the Internet. The first experiment using this framework collected and analyzed 3,268 trackable face videos in under two months. Each participant viewed one or more commercials while their facial response was recorded and analyzed. Our data showed significantly different patterns of smile intensity and dynamics between subgroups who reported liking the commercials and those who did not. Since this framework first appeared in 2011, we have collected over three million videos of facial responses in over 75 countries using the same methodology, making facial analytics significantly more accurate and validated across five continents. Many new insights have been discovered from crowd-sourced facial data, establishing Internet-based measurement of facial responses as reliable and proven. We are now able to provide large-scale evidence for gender, cultural, and age differences in behavior. Today such methods are part of standard industry practice for copy-testing advertisements and are increasingly used for online media evaluation, distance learning, and mobile applications. [en_US]
dc.language.iso: en_US
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE) [en_US]
dc.relation.isversionof: http://dx.doi.org/10.1109/ACII.2015.7344618 [en_US]
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike [en_US]
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/ [en_US]
dc.source: MIT web domain [en_US]
dc.title: Crowdsourcing facial responses to online videos: Extended abstract [en_US]
dc.type: Article [en_US]
dc.identifier.citation: McDuff, Daniel, Rana el Kaliouby, and Rosalind W. Picard. “Crowdsourcing Facial Responses to Online Videos: Extended Abstract.” 2015 International Conference on Affective Computing and Intelligent Interaction (ACII), Xi'an, China, 21-24 September 2015. IEEE, 2015. 512–518. [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Media Laboratory [en_US]
dc.contributor.department: Program in Media Arts and Sciences (Massachusetts Institute of Technology) [en_US]
dc.contributor.mitauthor: Picard, Rosalind W.
dc.relation.journal: 2015 International Conference on Affective Computing and Intelligent Interaction (ACII) [en_US]
dc.eprint.version: Author's final manuscript [en_US]
dc.type.uri: http://purl.org/eprint/type/ConferencePaper [en_US]
eprint.status: http://purl.org/eprint/status/NonPeerReviewed [en_US]
dspace.orderedauthors: McDuff, Daniel; el Kaliouby, Rana; Picard, Rosalind W. [en_US]
dspace.embargo.terms: N [en_US]
dc.identifier.orcid: https://orcid.org/0000-0002-5661-0022
mit.license: OPEN_ACCESS_POLICY [en_US]

