
dc.contributor.author: Alghowinem, Sharifa
dc.contributor.author: Caldwell, Sabrina
dc.contributor.author: Radwan, Ibrahim
dc.contributor.author: Wagner, Michael
dc.contributor.author: Gedeon, Tom
dc.date.accessioned: 2025-01-31T18:31:38Z
dc.date.available: 2025-01-31T18:31:38Z
dc.date.issued: 2024-12-26
dc.identifier.uri: https://hdl.handle.net/1721.1/158142
dc.description.abstract: Detecting deceptive behaviour for surveillance and border protection is critical for a country's security. With advances in sensor technology and artificial intelligence, recognising deceptive behaviour could be performed automatically. Following the success of affective computing in emotion recognition from verbal and nonverbal cues, we aim to apply a similar concept to deception detection. Recognising deceptive behaviour has been attempted before; however, only a few studies have analysed this behaviour from gait and body movement. This research presents a multimodal approach to deception detection from gait, in which we fuse features extracted from body movement behaviours in a video signal, acoustic features of walking steps from an audio signal, and the dynamics of walking movement from an accelerometer sensor. Using the video recordings of walking from the Whodunnit deception dataset, which contains 49 subjects performing scenarios that elicit deceptive behaviour, we conduct multimodal two-category (guilty/not guilty) subject-independent classification. The classification results reached an accuracy of up to 88% through feature fusion, with an average of 60% across both single and multimodal signals. Analysing body movement using a single modality showed that the visual signal had the highest performance, followed by the accelerometer and acoustic signals. Several fusion techniques were explored, including early, late, and hybrid fusion, where hybrid fusion not only achieved the highest classification results but also increased the confidence of the results. Moreover, using a systematic framework for selecting the most distinguishing features of guilty gait behaviour, we were able to interpret the performance of our models. From these baseline results, we conclude that pattern recognition techniques can help characterise deceptive behaviour; future work will focus on tuning and enhancing the results and techniques.
dc.publisher: Multidisciplinary Digital Publishing Institute
dc.relation.isversionof: http://dx.doi.org/10.3390/info16010006
dc.rights: Creative Commons Attribution
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.source: Multidisciplinary Digital Publishing Institute
dc.title: The Walk of Guilt: Multimodal Deception Detection from Nonverbal Motion Behaviour
dc.type: Article
dc.identifier.citation: Alghowinem, S.; Caldwell, S.; Radwan, I.; Wagner, M.; Gedeon, T. The Walk of Guilt: Multimodal Deception Detection from Nonverbal Motion Behaviour. Information 2025, 16, 6.
dc.contributor.department: Massachusetts Institute of Technology. Media Laboratory
dc.relation.journal: Information
dc.identifier.mitlicense: PUBLISHER_CC
dc.eprint.version: Final published version
dc.type.uri: http://purl.org/eprint/type/JournalArticle
eprint.status: http://purl.org/eprint/status/PeerReviewed
dc.date.updated: 2025-01-24T13:15:58Z
dspace.date.submission: 2025-01-24T13:15:58Z
mit.journal.volume: 16
mit.journal.issue: 1
mit.license: PUBLISHER_CC
mit.metadata.status: Authority Work and Publication Information Needed

