Show simple item record

dc.contributor.author: Picard, Rosalind W.
dc.contributor.author: McDuff, Daniel Jonathan
dc.contributor.author: Hoque, Mohammed Ehasanul
dc.date.accessioned: 2013-08-21T18:04:24Z
dc.date.available: 2013-08-21T18:04:24Z
dc.date.issued: 2012-04
dc.date.submitted: 2012-01
dc.identifier.issn: 1949-3045
dc.identifier.uri: http://hdl.handle.net/1721.1/79899
dc.description.abstract: We created two experimental situations to elicit two affective states: frustration and delight. In the first experiment, participants were asked to recall situations while expressing either delight or frustration; the second experiment elicited these states naturally, through a frustrating experience and through a delightful video. There were two significant differences between the acted and the natural occurrences of the expressions. First, the acted instances were much easier for the computer to classify. Second, in 90 percent of the acted cases, participants did not smile when frustrated, whereas in 90 percent of the natural cases, participants smiled during the frustrating interaction despite self-reporting significant frustration with the experience. As a follow-up study, we developed an automated system to distinguish between naturally occurring spontaneous smiles under frustrating and delightful stimuli by exploring their temporal patterns, given video of both. We extracted local and global features related to human smile dynamics, then evaluated and compared two variants of Support Vector Machines (SVM), Hidden Markov Models (HMM), and Hidden-state Conditional Random Fields (HCRF) for binary classification. While human classification of the smile videos under frustrating stimuli was below chance, an accuracy of 92 percent in distinguishing smiles under frustrating and delighted stimuli was obtained using a dynamic SVM classifier.
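The abstract mentions extracting "local and global features related to human smile dynamics" before classification. As a rough illustration of what such temporal features can look like, the sketch below computes a few global summaries (peak, rise speed, duration) and segment-wise local gradients from a per-frame smile-intensity signal. This is a hypothetical example, not the paper's actual feature set, and it assumes a 1-D intensity track per video has already been obtained from a face tracker.

```python
import numpy as np

def smile_dynamics_features(intensity, fps=30.0, n_segments=4):
    """Toy temporal features from a per-frame smile-intensity signal.

    `intensity` is a 1-D array (one value per video frame, roughly in [0, 1]).
    Illustrative only -- not the feature set used in the paper.
    """
    x = np.asarray(intensity, dtype=float)
    t = np.arange(len(x)) / fps  # timestamps in seconds

    # Global features: summarize the whole smile episode.
    peak = x.max()
    rise_frame = int(x.argmax())
    rise_speed = peak / max(t[rise_frame], 1.0 / fps)  # intensity units per second
    duration = (x > 0.5 * peak).sum() / fps            # seconds spent above half-peak

    # Local features: mean intensity gradient within equal-length temporal
    # segments, capturing how the smile evolves over the clip.
    grads = np.gradient(x) * fps
    local = [seg.mean() for seg in np.array_split(grads, n_segments)]

    return np.array([peak, rise_speed, duration, *local])
```

One such fixed-length vector per video could then be fed to a binary classifier (e.g., an SVM, as in the study) to separate frustrated from delighted smiles.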
dc.description.sponsorship: MIT Media Lab Consortium
dc.description.sponsorship: Procter & Gamble Company
dc.language.iso: en_US
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.relation.isversionof: http://dx.doi.org/10.1109/T-AFFC.2012.11
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike 3.0
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/3.0/
dc.source: MIT Web Domain
dc.title: Exploring Temporal Patterns in Classifying Frustrated and Delighted Smiles
dc.type: Article
dc.identifier.citation: Hoque, Mohammed Ehsan, Daniel J. McDuff, and Rosalind W. Picard. “Exploring Temporal Patterns in Classifying Frustrated and Delighted Smiles.” IEEE Transactions on Affective Computing 3, no. 3 (July 2012): 323-334.
dc.contributor.department: Massachusetts Institute of Technology. Media Laboratory
dc.contributor.department: Program in Media Arts and Sciences (Massachusetts Institute of Technology)
dc.contributor.mitauthor: Hoque, Mohammed Ehasanul
dc.contributor.mitauthor: McDuff, Daniel Jonathan
dc.contributor.mitauthor: Picard, Rosalind W.
dc.relation.journal: IEEE Transactions on Affective Computing
dc.eprint.version: Author's final manuscript
dc.type.uri: http://purl.org/eprint/type/JournalArticle
eprint.status: http://purl.org/eprint/status/PeerReviewed
dspace.orderedauthors: Hoque, Mohammed Ehsan; McDuff, Daniel J.; Picard, Rosalind W.
dc.identifier.orcid: https://orcid.org/0000-0002-5661-0022
mit.license: OPEN_ACCESS_POLICY
mit.metadata.status: Complete
