Understanding Non-Verbal Irony Markers: Machine Learning Insights Versus Human Judgment
Author(s)
Spitale, Micol; Catania, Fabio; Panzeri, Francesca
Publisher Policy
Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
Abstract
Irony detection is a complex task that often stumps both humans, who frequently misinterpret ironic statements, and artificial intelligence (AI) systems. While the majority of AI research on irony detection has concentrated on linguistic cues, the role of non-verbal cues such as facial expressions and auditory signals has been largely overlooked. This paper investigates the effectiveness of machine learning models in recognizing irony using solely non-verbal cues. To this end, we conducted the following experiments and analyses: (i) we trained and evaluated several machine learning models to detect irony; (ii) we compared their results with human interpretations; and (iii) we analysed and identified multi-modal non-verbal irony markers. Our research demonstrates that machine learning models trained on non-verbal data show significant promise in detecting irony, outperforming human judgments on this task. Specifically, we found that certain facial action units and acoustic characteristics of speech are key indicators of irony expression. These non-verbal cues, often overlooked in traditional irony detection methods, were effectively identified by the machine learning models, leading to improved accuracy in detecting irony.
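The paper's own implementation is not reproduced here; as a rough illustration of experiment (i) and analysis (iii), the sketch below trains a classifier on non-verbal features and ranks them by importance. All specifics are assumptions: the feature names (action unit intensities, prosodic statistics), the random forest model, and the synthetic placeholder data stand in for whatever features, model, and corpus the authors actually used.

```python
# Minimal sketch (not the authors' code): a binary irony classifier
# trained on non-verbal features only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical per-utterance features: facial action unit intensities
# (e.g., AU06 cheek raiser, AU12 lip corner puller) concatenated with
# acoustic statistics (e.g., F0 mean/range, intensity, speaking rate).
# Random placeholder data; a real run would use extracted features.
n_samples, n_features = 200, 20
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, 2, size=n_samples)  # 1 = ironic, 0 = sincere

# Model choice is illustrative; the paper does not mandate a random forest.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"5-fold CV accuracy: {scores.mean():.2f} ± {scores.std():.2f}")

# Feature importances can then be inspected to identify which action
# units and acoustic characteristics act as irony markers (analysis iii).
clf.fit(X, y)
top = np.argsort(clf.feature_importances_)[::-1][:5]
print("Most informative feature indices:", top)
```

On real data, the cross-validated accuracy would be compared against human annotators' accuracy on the same utterances, which is the comparison the abstract reports the models winning.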
Description
ICMI ’24, November 04–08, 2024, San Jose, Costa Rica
Date issued
2024-11-04
Department
McGovern Institute for Brain Research at MIT
Publisher
ACM | International Conference on Multimodal Interaction
Citation
Spitale, Micol, Catania, Fabio and Panzeri, Francesca. 2024. "Understanding Non-Verbal Irony Markers: Machine Learning Insights Versus Human Judgment."
Version: Final published version
ISBN
979-8-4007-0462-8