Your Face Mirrors Your Deepest Beliefs--Predicting Personality and Morals through Facial Emotion Recognition
Author(s)
Gloor, Peter A.; Fronzetti Colladon, Andrea; Altuntas, Erkin; Cetinkaya, Cengiz; Kaiser, Maximilian F.; Ripperger, Lukas; Schaefer, Tim; ...
futureinternet-14-00005-v2.pdf (9.45 MB)
Publisher with Creative Commons License
Terms of use
Creative Commons Attribution
Abstract
Can we really “read the mind in the eyes”? Moreover, can AI assist us in this task? This paper answers these two questions by introducing a machine learning system that predicts the personality characteristics of individuals from their faces. It does so by tracking an individual’s emotional response through facial emotion recognition (FER) while they watch a series of 15 short videos of different genres. To calibrate the system, we invited 85 people to watch the videos while their emotional responses were analyzed through their facial expressions. The same individuals also took four well-validated surveys of personality characteristics and moral values: the revised NEO FFI personality inventory, the Haidt moral foundations test, the Schwartz personal value system, and the domain-specific risk-taking scale (DOSPERT). We found that an individual’s personality characteristics and moral values can be predicted from the emotional responses shown in their face while watching the videos, with an accuracy of up to 86% using gradient-boosted trees. We also found that different personality characteristics are better predicted by different videos; in other words, no single video provides accurate predictions for all personality characteristics, but the response to the mix of different videos allows for accurate prediction.
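The pipeline the abstract describes — per-video FER features fed to gradient-boosted trees to classify a personality trait — can be sketched as below. This is a minimal illustration, not the authors' code: the feature layout (mean intensity of 7 basic emotions per video), the synthetic data, and the binary high/low trait label are all assumptions made for the example.

```python
# Hedged sketch: predicting one personality trait from per-video emotion
# features with gradient-boosted trees. Feature names, dimensions, and the
# synthetic data are illustrative placeholders, not the study's dataset.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

N_SUBJECTS, N_VIDEOS, N_EMOTIONS = 85, 15, 7  # 85 participants, 15 videos,
                                              # 7 basic FER emotion channels
# One row per subject: mean intensity of each emotion during each video.
X = rng.random((N_SUBJECTS, N_VIDEOS * N_EMOTIONS))
# Binary label, e.g. high vs. low score on one NEO-FFI dimension
# (synthetic here: tied to two features so the model has signal to learn).
y = (X[:, 0] + X[:, 7] > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0
)
clf = GradientBoostingClassifier(n_estimators=200, max_depth=2,
                                 random_state=0)
clf.fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
print(f"held-out accuracy: {accuracy:.2f}")
```

With only 85 subjects, cross-validation (as is typical for datasets of this size) would give a less noisy accuracy estimate than the single train/test split shown here.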
Date issued
2021-12-22
Department
Massachusetts Institute of Technology. Center for Collective Intelligence
Publisher
Multidisciplinary Digital Publishing Institute
Citation
Future Internet 14 (1): 5 (2022)
Version: Final published version