dc.contributor.author: McDuff, Daniel Jonathan
dc.contributor.author: Senechal, Thibaud
dc.contributor.author: Amr, May
dc.contributor.author: Cohn, Jeffrey F.
dc.contributor.author: Picard, Rosalind W.
dc.contributor.author: el Kaliouby, Rana
dc.date.accessioned: 2013-09-16T13:12:16Z
dc.date.available: 2013-09-16T13:12:16Z
dc.date.issued: 2013-06
dc.identifier.isbn: 9780769549903
dc.identifier.uri: http://hdl.handle.net/1721.1/80733
dc.description.abstract: Computer classification of facial expressions requires large amounts of data, and this data must reflect the diversity of conditions seen in real applications. Public datasets help accelerate the progress of research by providing researchers with a benchmark resource. We present a comprehensively labeled dataset of ecologically valid spontaneous facial responses recorded in natural settings over the Internet. To collect the data, online viewers watched one of three intentionally amusing Super Bowl commercials while being simultaneously filmed by their webcam, and then answered three self-report questions about their experience. A subset of viewers additionally gave consent for their data to be shared publicly with other researchers. This subset consists of 242 facial videos (168,359 frames) recorded in real-world conditions. The dataset is comprehensively labeled for the following: 1) frame-by-frame labels for the presence of 10 symmetrical FACS action units, 4 asymmetric (unilateral) FACS action units, 2 head movements, smile, general expressiveness, feature tracker fails, and gender; 2) the locations of 22 automatically detected landmark points; 3) self-report responses of familiarity with, liking of, and desire to watch again for the stimulus videos; and 4) baseline performance of detection algorithms on this dataset. This data is available for distribution to researchers online; the EULA can be found at: http://www.affectiva.com/facial-expression-dataset-am-fed/. (A sketch of reading such frame-level labels appears after this record.)
dc.language.iso: en_US
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike 3.0
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/3.0/
dc.source: MIT Web Domain
dc.title: Affectiva-MIT Facial Expression Dataset (AM-FED): Naturalistic and Spontaneous Facial Expressions Collected In-the-Wild
dc.type: Article
dc.identifier.citation: McDuff, Daniel Jonathan; el Kaliouby, Rana; Senechal, Thibaud; Amr, May; Cohn, Jeffrey F.; Picard, Rosalind W. "Affectiva-MIT Facial Expression Dataset (AM-FED): Naturalistic and Spontaneous Facial Expressions Collected In-the-Wild." Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2013.
dc.contributor.department: Massachusetts Institute of Technology. Media Laboratory
dc.contributor.department: Program in Media Arts and Sciences (Massachusetts Institute of Technology)
dc.contributor.mitauthor: McDuff, Daniel Jonathan
dc.contributor.mitauthor: el Kaliouby, Rana
dc.contributor.mitauthor: Picard, Rosalind W.
dc.relation.journal: Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
dc.eprint.version: Author's final manuscript
dc.type.uri: http://purl.org/eprint/type/ConferencePaper
eprint.status: http://purl.org/eprint/status/NonPeerReviewed
dspace.orderedauthors: McDuff, Daniel Jonathan; el Kaliouby, Rana; Senechal, Thibaud; Amr, May; Cohn, Jeffrey F.; Picard, Rosalind W.
dc.identifier.orcid: https://orcid.org/0000-0002-5661-0022
mit.license: OPEN_ACCESS_POLICY
mit.metadata.status: Complete
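
The abstract describes frame-by-frame labels for FACS action units. Below is a minimal sketch, in Python, of how such labels might be read and summarized. The file name AU_labels.csv and the column name AU12 are assumptions for illustration only; the dataset's actual label format is specified in its accompanying documentation and EULA materials.

    # Minimal sketch of reading frame-by-frame action unit labels of the
    # kind described in the abstract. The file name and column names are
    # hypothetical; consult the AM-FED documentation for the real layout.
    import pandas as pd

    # Assumed layout: one row per video frame, one column per label
    # (e.g. AU02, AU04, AU12, ...), with 1 where the label is present.
    labels = pd.read_csv("AU_labels.csv")

    # Example summary: fraction of frames in which AU12 (lip corner
    # puller, the main component of a smile) is marked present.
    au12_rate = labels["AU12"].mean()
    print(f"AU12 present in {au12_rate:.1%} of frames")

A per-frame layout like this pairs naturally with the 22 automatically detected landmark points the abstract mentions, since both can be indexed by frame number.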

