| dc.contributor.author | Alghowinem, Sharifa | |
| dc.contributor.author | Caldwell, Sabrina | |
| dc.contributor.author | Radwan, Ibrahim | |
| dc.contributor.author | Wagner, Michael | |
| dc.contributor.author | Gedeon, Tom | |
| dc.date.accessioned | 2025-01-31T18:31:38Z | |
| dc.date.available | 2025-01-31T18:31:38Z | |
| dc.date.issued | 2024-12-26 | |
| dc.identifier.uri | https://hdl.handle.net/1721.1/158142 | |
| dc.description.abstract | Detecting deceptive behaviour for surveillance and border protection is critical for a country’s security. With advances in sensor technology and artificial intelligence, deceptive behaviour could be recognised automatically. Following the success of affective computing in emotion recognition from verbal and nonverbal cues, we aim to apply a similar concept to deception detection. Recognising deceptive behaviour has been attempted before; however, only a few studies have analysed this behaviour from gait and body movement. This research presents a multimodal approach to deception detection from gait, in which we fuse body-movement features extracted from a video signal, acoustic features of walking steps extracted from an audio signal, and the dynamics of walking movement captured by an accelerometer sensor. Using the walking recordings from the Whodunnit deception dataset, which contains 49 subjects performing scenarios that elicit deceptive behaviour, we conduct multimodal two-category (guilty/not guilty) subject-independent classification. The classification accuracy reached up to 88% through feature fusion, with an average of 60% across both single and multimodal signals. Analysing body movement using a single modality showed that the visual signal performed best, followed by the accelerometer and acoustic signals. Several fusion techniques were explored, including early, late, and hybrid fusion; hybrid fusion not only achieved the highest classification results but also increased the confidence of those results. Moreover, using a systematic framework for selecting the most distinguishing features of guilty gait behaviour, we were able to interpret the performance of our models. From these baseline results, we conclude that pattern recognition techniques could help characterise deceptive behaviour; future work will focus on tuning and enhancing these techniques and results. | en_US |
| dc.publisher | Multidisciplinary Digital Publishing Institute | en_US |
| dc.relation.isversionof | http://dx.doi.org/10.3390/info16010006 | en_US |
| dc.rights | Creative Commons Attribution | en_US |
| dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | en_US |
| dc.source | Multidisciplinary Digital Publishing Institute | en_US |
| dc.title | The Walk of Guilt: Multimodal Deception Detection from Nonverbal Motion Behaviour | en_US |
| dc.type | Article | en_US |
| dc.identifier.citation | Alghowinem, S.; Caldwell, S.; Radwan, I.; Wagner, M.; Gedeon, T. The Walk of Guilt: Multimodal Deception Detection from Nonverbal Motion Behaviour. Information 2025, 16, 6. | en_US |
| dc.contributor.department | Massachusetts Institute of Technology. Media Laboratory | en_US |
| dc.relation.journal | Information | en_US |
| dc.identifier.mitlicense | PUBLISHER_CC | |
| dc.eprint.version | Final published version | en_US |
| dc.type.uri | http://purl.org/eprint/type/JournalArticle | en_US |
| eprint.status | http://purl.org/eprint/status/PeerReviewed | en_US |
| dc.date.updated | 2025-01-24T13:15:58Z | |
| dspace.date.submission | 2025-01-24T13:15:58Z | |
| mit.journal.volume | 16 | en_US |
| mit.journal.issue | 1 | en_US |
| mit.license | PUBLISHER_CC | |
| mit.metadata.status | Authority Work and Publication Information Needed | en_US |