dc.contributor.author | McDuff, Daniel Jonathan | |
dc.contributor.author | Senechal, Thibaud | |
dc.contributor.author | Amr, May | |
dc.contributor.author | Cohn, Jeffrey F. | |
dc.contributor.author | Picard, Rosalind W. | |
dc.contributor.author | El Kaliouby, Rana | |
dc.date.accessioned | 2013-09-16T13:12:16Z | |
dc.date.available | 2013-09-16T13:12:16Z | |
dc.date.issued | 2013-06 | |
dc.identifier.isbn | 9780769549903 | |
dc.identifier.uri | http://hdl.handle.net/1721.1/80733 | |
dc.description.abstract | Computer classification of facial expressions requires large amounts of data, and this data needs to reflect the diversity of conditions seen in real applications. Public datasets help accelerate the progress of research by providing researchers with a benchmark resource. We present a comprehensively labeled dataset of ecologically valid spontaneous facial responses recorded in natural settings over the Internet. To collect the data, online viewers watched one of three intentionally amusing Super Bowl commercials and were simultaneously filmed using their webcam. They answered three self-report questions about their experience. A subset of viewers additionally gave consent for their data to be shared publicly with other researchers. This subset consists of 242 facial videos (168,359 frames) recorded in real-world conditions. The dataset is comprehensively labeled for the following: 1) frame-by-frame labels for the presence of 10 symmetrical FACS action units, 4 asymmetric (unilateral) FACS action units, 2 head movements, smile, general expressiveness, feature tracker failures, and gender; 2) the location of 22 automatically detected landmark points; 3) self-report responses of familiarity with, liking of, and desire to watch again for the stimuli videos; and 4) baseline performance of detection algorithms on this dataset. This data is available for distribution to researchers online; the EULA can be found at: http://www.affectiva.com/facial-expression-dataset-am-fed/. | en_US |
dc.language.iso | en_US | |
dc.publisher | Institute of Electrical and Electronics Engineers (IEEE) | en_US |
dc.rights | Creative Commons Attribution-Noncommercial-Share Alike 3.0 | en_US |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-sa/3.0/ | en_US |
dc.source | MIT Web Domain | en_US |
dc.title | Affectiva-MIT Facial Expression Dataset (AM-FED): Naturalistic and Spontaneous Facial Expressions Collected In-the-Wild | en_US |
dc.type | Article | en_US |
dc.identifier.citation | McDuff, Daniel Jonathan; el Kaliouby, Rana; Senechal, Thibaud; Amr, May; Cohn, Jeffrey F.; Picard, Rosalind W. "Affectiva-MIT Facial Expression Dataset (AM-FED): Naturalistic and Spontaneous Facial Expressions Collected In-the-Wild." Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2013. | en_US |
dc.contributor.department | Massachusetts Institute of Technology. Media Laboratory | en_US |
dc.contributor.department | Program in Media Arts and Sciences (Massachusetts Institute of Technology) | en_US |
dc.contributor.mitauthor | McDuff, Daniel Jonathan | en_US |
dc.contributor.mitauthor | el Kaliouby, Rana | en_US |
dc.contributor.mitauthor | Picard, Rosalind W. | en_US |
dc.relation.journal | Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) | en_US |
dc.eprint.version | Author's final manuscript | en_US |
dc.type.uri | http://purl.org/eprint/type/ConferencePaper | en_US |
eprint.status | http://purl.org/eprint/status/NonPeerReviewed | en_US |
dspace.orderedauthors | McDuff, Daniel Jonathan; el Kaliouby, Rana; Senechal, Thibaud; Amr, May; Cohn, Jeffrey F.; Picard, Rosalind W. | en_US |
dc.identifier.orcid | https://orcid.org/0000-0002-5661-0022 | |
mit.license | OPEN_ACCESS_POLICY | en_US |
mit.metadata.status | Complete | |