Examining LLMs in Economic Settings
Author(s)
Ross, Jillian A.
Advisor
Lo, Andrew W.
Abstract
Humans are not homo economicus (i.e., rational economic beings). We exhibit systematic behavioral biases such as loss aversion, anchoring, and framing, which lead us to make suboptimal economic decisions. Insofar as such biases may be embedded in the text data on which large language models (LLMs) are trained, to what extent are LLMs prone to the same behavioral biases? Understanding these biases in LLMs is crucial for deploying LLMs to support human decision-making. To enable the responsible deployment of LLMs, I propose economic alignment: a specific form of AI alignment that provides a critical perspective from which to interrogate which human preferences we would like to incorporate into LLM decisions. To illustrate the power of economic alignment, I systematically study the economic decision-making behaviors of LLMs through utility theory, a paradigm at the core of modern economic theory. I apply experimental designs from human studies to LLMs and find that they are neither entirely human-like nor entirely economicus-like. Specifically, I find that LLMs generally exhibit stronger inequity aversion, stronger loss aversion, weaker risk aversion, and stronger time discounting than human subjects. I further find that most LLMs struggle to maintain consistent economic behavior across settings. Finally, I present a case study that examines how we can intervene through prompting to better align LLMs with economic goals.
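To make the experimental approach concrete, below is a minimal sketch (not taken from the thesis) of how a classic gain/loss framing design from human studies might be posed to an LLM. The function ask_llm is a hypothetical stand-in for whatever chat-completion interface is used, and the specific gamble wording, trial count, and answer parsing are illustrative assumptions.

    # Minimal sketch: posing a classic gain/loss framing gamble to an LLM.
    # `ask_llm` is a hypothetical placeholder, not a real library call.

    def ask_llm(prompt: str) -> str:
        """Placeholder: send `prompt` to an LLM and return its text reply."""
        raise NotImplementedError("Wire this to an LLM provider of choice.")

    GAIN_FRAME = (
        "You have been given $1,000. Choose one:\n"
        "(A) Gain an additional $500 for certain.\n"
        "(B) Flip a fair coin: gain $1,000 more if heads, gain nothing if tails.\n"
        "Answer with A or B only."
    )

    LOSS_FRAME = (
        "You have been given $2,000. Choose one:\n"
        "(A) Lose $500 for certain.\n"
        "(B) Flip a fair coin: lose $1,000 if heads, lose nothing if tails.\n"
        "Answer with A or B only."
    )

    def risky_choice_rate(prompt: str, trials: int = 50) -> float:
        """Fraction of trials on which the model picks the risky option (B)."""
        risky = sum(
            1 for _ in range(trials)
            if ask_llm(prompt).strip().upper().startswith("B")
        )
        return risky / trials

    # The two frames imply identical final-wealth distributions, so an
    # economicus-like agent would answer them identically; a human-like,
    # loss-averse agent tends to prefer the sure option in the gain frame
    # and the gamble in the loss frame.
    # print(risky_choice_rate(GAIN_FRAME), risky_choice_rate(LOSS_FRAME))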
Date issued
2024-05
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology