DSpace@MIT

When Human Coders (and Machines) Disagree on the Meaning of Facial Affect in Spontaneous Videos

Author(s)
Picard, Rosalind W.; Hoque, Mohammed Ehasanul; El Kaliouby, Rana
Download: Picard_When Human Coders.pdf (76.48 kB)

Open Access Policy

Creative Commons Attribution-Noncommercial-Share Alike

Terms of use
Attribution-Noncommercial-Share Alike 3.0 Unported http://creativecommons.org/licenses/by-nc-sa/3.0/
Abstract
This paper describes the challenges of getting ground truth affective labels for spontaneous video, and presents implications for systems such as virtual agents that have automated facial analysis capabilities. We first present a dataset from an intelligent tutoring application and describe the most prevalent approach to labeling such data. We then present an alternative labeling approach, which closely models how the majority of automated facial analysis systems are designed. We show that while participants, peers, and trained judges report high inter-rater agreement on expressions of delight, confusion, flow, frustration, boredom, surprise, and neutral when shown the entire 30 minutes of video for each participant, inter-rater agreement drops below chance when human coders are asked to watch and label short 8-second clips for the same set of labels. We also perform discriminative analysis of facial action units for each affective state represented in the clips. The results emphasize that human coders rely heavily on factors such as familiarity with the person and the context of the interaction to correctly infer a person's affective state; without this information, the reliability of both humans and machines in attributing affective labels to spontaneous facial and head movements drops significantly.
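The below-chance result in the abstract refers to chance-corrected inter-rater agreement statistics, which can be negative when raters agree less often than category base rates would predict. As an illustration only (the paper does not specify its agreement statistic or code), a minimal sketch of Fleiss' kappa for fixed-size rater panels follows; the clip counts, rater count, and seven affect categories in the example are assumed for demonstration.

import numpy as np

# Illustrative sketch (not the authors' analysis code): Fleiss' kappa
# for panels where every item is labeled by the same number of raters.
# ratings[i, j] counts how many raters assigned clip i to category j.
# Kappa below 0 indicates worse-than-chance agreement.
def fleiss_kappa(ratings: np.ndarray) -> float:
    n_items, _ = ratings.shape
    n_raters = int(ratings[0].sum())
    # Per-item observed agreement: fraction of rater pairs that agree.
    p_i = (np.square(ratings).sum(axis=1) - n_raters) / (
        n_raters * (n_raters - 1)
    )
    p_bar = p_i.mean()
    # Expected chance agreement from marginal category proportions.
    p_j = ratings.sum(axis=0) / (n_items * n_raters)
    p_e = np.square(p_j).sum()
    return float((p_bar - p_e) / (1 - p_e))

# Hypothetical example: 4 clips, 3 raters, 7 categories in the order
# delight, confusion, flow, frustration, boredom, surprise, neutral.
counts = np.array([
    [3, 0, 0, 0, 0, 0, 0],  # unanimous: delight
    [0, 1, 1, 0, 0, 0, 1],  # full disagreement
    [0, 0, 0, 2, 0, 0, 1],  # partial agreement
    [0, 0, 0, 0, 0, 0, 3],  # unanimous: neutral
])
print(f"Fleiss' kappa = {fleiss_kappa(counts):.3f}")

On this toy input the statistic comes out around 0.42; driving it negative requires raters to disagree more than the marginal label frequencies alone would predict, which is the situation the abstract reports for the short, context-free clips.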
Date issued
2009-09
URI
http://hdl.handle.net/1721.1/56633
Department
Massachusetts Institute of Technology. Media Laboratory; Program in Media Arts and Sciences (Massachusetts Institute of Technology)
Journal
Intelligent Virtual Agents: 9th International Conference, IVA 2009, Amsterdam, The Netherlands, September 14-16, 2009, Proceedings
Publisher
Springer Berlin
Citation
Hoque, M. E., R. El Kaliouby, and R. W. Picard. "When Human Coders (and Machines) Disagree on the Meaning of Facial Affect in Spontaneous Videos." Intelligent Virtual Agents, Proceedings 5773 (2009): 337-43.
Version: Original manuscript
ISBN
978-3-642-04379-6

Collections
  • MIT Open Access Articles
