Slip Through the Chat: Subtle Injection of False Information in LLM Chatbot Conversations Increases False Memory Formation
Author(s)
Pataranutaporn, Pat; Archiwaranguprok, Chayapatr; Chan, Samantha; Loftus, Elizabeth; Maes, Pattie
Publisher with Creative Commons License
Creative Commons Attribution
Terms of use
Abstract
This study examines the potential for malicious generative chatbots to induce false memories by injecting subtle misinformation during user interactions. An experiment involving 180 participants explored five intervention conditions following the presentation of an article: (1) no intervention, (2) reading an honest or (3) misleading article summary, (4) discussing the article with an honest or (5) misleading chatbot. Results revealed that while the misleading summary condition increased false memory occurrence, misleading chatbot interactions led to significantly higher rates of false recollection. These findings highlight the emerging risks associated with conversational AI as it becomes more prevalent. The paper concludes by discussing implications and proposing future research directions to address this concerning phenomenon.
Description
IUI ’25, Cagliari, Italy
Date issued
2025-03-24
Department
Massachusetts Institute of Technology. Media Laboratory
Publisher
ACM | 30th International Conference on Intelligent User Interfaces
Citation
Pat Pataranutaporn, Chayapatr Archiwaranguprok, Samantha W. T. Chan, Elizabeth Loftus, and Pattie Maes. 2025. Slip Through the Chat: Subtle Injection of False Information in LLM Chatbot Conversations Increases False Memory Formation. In Proceedings of the 30th International Conference on Intelligent User Interfaces (IUI '25). Association for Computing Machinery, New York, NY, USA, 1297–1313.
Version: Final published version
ISBN
979-8-4007-1306-4