Show simple item record

dc.contributor.advisor: Gupta, Amar
dc.contributor.author: Sert, Deniz Bilge
dc.date.accessioned: 2025-10-06T17:36:08Z
dc.date.available: 2025-10-06T17:36:08Z
dc.date.issued: 2025-05
dc.date.submitted: 2025-06-23T14:03:33.728Z
dc.identifier.uri: https://hdl.handle.net/1721.1/162944
dc.description.abstract: Large Language Models (LLMs) offer significant potential in the banking sector, particularly for applications such as fraud detection, credit approval, and customer-experience enhancement. However, their tendency to "hallucinate" (generating plausible but inaccurate information) poses a critical challenge. This thesis examines existing strategies for mitigating LLM hallucinations and proposes a novel approach to reduce hallucinations in the context of predicting customer churn with LLMs.
dc.publisher: Massachusetts Institute of Technology
dc.rights: In Copyright - Educational Use Permitted
dc.rights: Copyright retained by author(s)
dc.rights.uri: https://rightsstatements.org/page/InC-EDU/1.0/
dc.title: Mitigating LLM Hallucination in the Banking Domain
dc.type: Thesis
dc.description.degree: M.Eng.
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
mit.thesis.degree: Master
thesis.degree.name: Master of Engineering in Electrical Engineering and Computer Science

