Mitigating LLM Hallucination in the Banking Domain
Author(s)
Sert, Deniz Bilge
Advisor
Gupta, Amar
Abstract
Large Language Models (LLMs) offer significant potential in the banking sector, particularly for applications such as fraud detection, credit approval, and customer experience enhancement. However, their tendency to "hallucinate", generating plausible but inaccurate information, poses a critical challenge. This thesis examines existing strategies for mitigating LLM hallucinations and proposes a novel approach for reducing hallucinations when LLMs are applied to customer churn prediction.
Date issued
2025-05
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology