Show simple item record

dc.contributor.author	Danry, Valdemar
dc.contributor.author	Pataranutaporn, Pat
dc.contributor.author	Groh, Matthew
dc.contributor.author	Epstein, Ziv
dc.date.accessioned	2025-09-22T18:02:08Z
dc.date.available	2025-09-22T18:02:08Z
dc.date.issued	2025-04-25
dc.identifier.isbn	979-8-4007-1394-1
dc.identifier.uri	https://hdl.handle.net/1721.1/162775
dc.description	CHI ’25, Yokohama, Japan	en_US
dc.description.abstract	Advanced Artificial Intelligence (AI) systems, specifically large language models (LLMs), can generate not only misinformation but also deceptive explanations that justify and propagate false information and discredit true information. We examined the impact of deceptive AI-generated explanations on individuals’ beliefs in a pre-registered online experiment with 11,780 observations from 589 participants. We found that, in addition to being more persuasive than accurate and honest explanations, AI-generated deceptive explanations can significantly amplify belief in false news headlines and undermine belief in true ones, compared to AI systems that simply classify a headline incorrectly as true or false. Moreover, our results show that logically invalid explanations are deemed less credible, diminishing the effects of deception. This underscores the importance of teaching logical reasoning and critical thinking skills to identify logically invalid arguments, fostering greater resilience against advanced AI-driven misinformation.	en_US
dc.publisher	ACM|CHI Conference on Human Factors in Computing Systems	en_US
dc.relation.isversionof	https://doi.org/10.1145/3706598.3713408	en_US
dc.rights	Creative Commons Attribution	en_US
dc.rights.uri	https://creativecommons.org/licenses/by/4.0/	en_US
dc.source	Association for Computing Machinery	en_US
dc.title	Deceptive Explanations by Large Language Models Lead People to Change their Beliefs About Misinformation More Often than Honest Explanations	en_US
dc.type	Article	en_US
dc.identifier.citation	Valdemar Danry, Pat Pataranutaporn, Matthew Groh, and Ziv Epstein. 2025. Deceptive Explanations by Large Language Models Lead People to Change their Beliefs About Misinformation More Often than Honest Explanations. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI '25). Association for Computing Machinery, New York, NY, USA, Article 933, 1–31.	en_US
dc.contributor.department	Massachusetts Institute of Technology. Media Laboratory	en_US
dc.identifier.mitlicense	PUBLISHER_POLICY
dc.eprint.version	Final published version	en_US
dc.type.uri	http://purl.org/eprint/type/ConferencePaper	en_US
eprint.status	http://purl.org/eprint/status/NonPeerReviewed	en_US
dc.date.updated	2025-08-01T08:08:29Z
dc.language.rfc3066	en
dc.rights.holder	The author(s)
dspace.date.submission	2025-08-01T08:08:30Z
mit.license	PUBLISHER_CC
mit.metadata.status	Authority Work and Publication Information Needed	en_US

