Preserving Human Autonomy in AI-Mediated Negotiations
Author(s)
Chen, J. Alvin
Advisor
Susskind, Lawrence
Abstract
The rapid integration of generative artificial intelligence (AI) into negotiation and conflict resolution processes raises critical ethical concerns about the erosion of human autonomy, particularly when AI systems navigate irreconcilable “sacred” values (non-negotiable moral principles) alongside transactional “mundane” interests. This thesis investigates whether generative AI can be designed to recognize and respect important values and beliefs while preserving human agency in decision-making. Drawing on datasets from a repository of large language model (LLM) prompts tested in simulated negotiation scenarios, this study employs a mixed-methods approach to evaluating AI’s efficacy in balancing efficiency with ethical imperatives in negotiation. Quantitative metrics (enumerating the outcomes of two-party negotiations) are analyzed alongside qualitative assessments of values such as transparency and consent, drawn from Kantian ethical frameworks.
My analysis reveals that while AI negotiating bots excel at trades across mundane, tradable interests, they struggle to navigate beliefs and values without oversimplifying moral reasoning or obscuring cultural considerations. These findings inform policy recommendations, including a call for human-in-the-loop validation and technical safeguards to protect important values in efforts to incorporate AI assistance into negotiations. By bridging technical analysis and ethical theory, I hope this research contributes to improvements in designing autonomy-preserving AI systems for use in a range of negotiating settings, prioritizing human dignity alongside computational efficiency.
Date issued
2025-05
Department
Massachusetts Institute of Technology. Institute for Data, Systems, and Society
Publisher
Massachusetts Institute of Technology