dc.contributor.author | Baltrusaitis, Tadas | |
dc.contributor.author | McDuff, Daniel Jonathan | |
dc.contributor.author | Banda, Ntombikayise | |
dc.contributor.author | Mahmoud, Marwa | |
dc.contributor.author | el Kaliouby, Rana | |
dc.contributor.author | Robinson, Peter | |
dc.contributor.author | Picard, Rosalind W. | |
dc.date.accessioned | 2011-12-06T17:57:00Z | |
dc.date.available | 2011-12-06T17:57:00Z | |
dc.date.issued | 2011-03 | |
dc.identifier.isbn | 978-1-4244-9140-7 | |
dc.identifier.uri | http://hdl.handle.net/1721.1/67458 | |
dc.description.abstract | We present a real-time system for detecting facial action units and inferring emotional states from head and shoulder gestures and facial expressions. The dynamic system uses three levels of inference on progressively longer time scales. Firstly, facial action units and head orientation are identified from 22 feature points and Gabor filters. Secondly, Hidden Markov Models are used to classify sequences of actions into head and shoulder gestures. Finally, a multi-level Dynamic Bayesian Network is used to model the unfolding emotional state based on probabilities of different gestures. The most probable state over a given video clip is chosen as the label for that clip. The average F1 score for 12 action units (AUs 1, 2, 4, 6, 7, 10, 12, 15, 17, 18, 25, 26), labelled on a frame-by-frame basis, was 0.461. The average classification rate for five emotional states (anger, fear, joy, relief, sadness) was 0.440. Sadness had the highest rate, 0.64, and anger the lowest, 0.11. | en_US
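The abstract describes a three-stage pipeline: action-unit detection, HMM-based gesture classification, and Dynamic Bayesian Network inference of the emotional state. As an illustrative aid only, the Python sketch below approximates the second and third stages: per-gesture discrete HMMs scored with the forward algorithm, and the clip label chosen as the most probable emotional state. All function names and parameters are assumptions, and the simple weighted-sum mapping from gestures to states is a stand-in for the paper's multi-level DBN, not the authors' implementation.

    # Minimal sketch (not the authors' implementation): score a sequence of
    # discrete action symbols against per-gesture HMMs via the forward
    # algorithm, then pick the most probable emotional state over a clip.
    # All model parameters and class names here are illustrative assumptions.
    import numpy as np

    def forward_log_likelihood(obs, start_p, trans_p, emit_p):
        """Log-likelihood of a discrete observation sequence under one HMM."""
        alpha = np.log(start_p) + np.log(emit_p[:, obs[0]])
        for o in obs[1:]:
            # log-sum-exp over previous states for each current state
            alpha = np.logaddexp.reduce(
                alpha[:, None] + np.log(trans_p), axis=0
            ) + np.log(emit_p[:, o])
        return np.logaddexp.reduce(alpha)

    def classify_gesture(obs, hmms):
        """Posterior over gestures (uniform prior) for one observation sequence."""
        names = list(hmms)
        scores = np.array([forward_log_likelihood(obs, *hmms[g]) for g in names])
        probs = np.exp(scores - scores.max())
        return dict(zip(names, probs / probs.sum()))

    def most_probable_state(gesture_probs, state_given_gesture):
        """Label the clip with the state of highest accumulated probability."""
        states = {}
        for gesture, p in gesture_probs.items():
            for state, w in state_given_gesture[gesture].items():
                states[state] = states.get(state, 0.0) + p * w
        return max(states, key=states.get)

A caller would supply hmms as a mapping from gesture name to (start_p, trans_p, emit_p) arrays and state_given_gesture as per-gesture weights over the five emotional states (anger, fear, joy, relief, sadness); these inputs are hypothetical and would come from the training procedure described in the paper.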
dc.description.sponsorship | Thales Research and Technology (UK) | en_US |
dc.description.sponsorship | Bradlow Foundation Trust | en_US |
dc.description.sponsorship | Procter & Gamble Company | en_US |
dc.language.iso | en_US | |
dc.publisher | Institute of Electrical and Electronics Engineers | en_US |
dc.relation.isversionof | http://dx.doi.org/10.1109/FG.2011.5771372 | en_US |
dc.rights | Creative Commons Attribution-Noncommercial-Share Alike 3.0 | en_US |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-sa/3.0/ | en_US |
dc.source | Javier Hernandez Rivera | en_US |
dc.title | Real-Time Inference of Mental States from Facial Expressions and Upper Body Gestures | en_US |
dc.type | Article | en_US |
dc.identifier.citation | Baltrusaitis, Tadas et al. “Real-time Inference of Mental States from Facial Expressions and Upper Body Gestures.” Face and Gesture 2011. Santa Barbara, CA, USA, 2011. 909-914. | en_US |
dc.contributor.department | Massachusetts Institute of Technology. Media Laboratory | en_US |
dc.contributor.approver | Picard, Rosalind W. | |
dc.contributor.mitauthor | McDuff, Daniel Jonathan | |
dc.contributor.mitauthor | el Kaliouby, Rana | |
dc.contributor.mitauthor | Picard, Rosalind W. | |
dc.relation.journal | 2011 IEEE International Conference on Automatic Face & Gesture Recognition and Workshops (FG 2011) | en_US |
dc.eprint.version | Author's final manuscript | en_US |
dc.type.uri | http://purl.org/eprint/type/ConferencePaper | en_US |
dspace.orderedauthors | Baltrusaitis, Tadas; McDuff, Daniel; Banda, Ntombikayise; Mahmoud, Marwa; Kaliouby, Rana el; Robinson, Peter; Picard, Rosalind | en |
dc.identifier.orcid | https://orcid.org/0000-0002-5661-0022 | |
mit.license | OPEN_ACCESS_POLICY | en_US |
mit.metadata.status | Complete | |