
dc.contributor.advisor: Sontag, David
dc.contributor.advisor: Satyanarayan, Arvind
dc.contributor.advisor: Glassman, Elena
dc.contributor.advisor: Horvitz, Eric
dc.contributor.author: Mozannar, Hussein
dc.date.accessioned: 2024-07-08T18:55:54Z
dc.date.available: 2024-07-08T18:55:54Z
dc.date.issued: 2024-05
dc.date.submitted: 2024-06-03T20:46:12.190Z
dc.identifier.uri: https://hdl.handle.net/1721.1/155508
dc.description.abstract: AI systems are augmenting human capabilities in settings such as healthcare and programming, forming human-AI teams. To enable more accurate and timely decisions, we need to optimize the performance of the human-AI team directly. In this thesis, we use a mathematical framing of the human-AI team and propose a set of methods that optimize the AI, the human, and the interface through which they communicate to improve team performance. We first show how to provably train AI classifiers that complement humans and defer decisions to them when it is best to do so. In certain settings, however, the AI cannot make decisions autonomously and instead only advises the human. For those settings, we build onboarding procedures that train humans to form an accurate mental model of the AI, enabling appropriate reliance. Finally, we study how humans interact with large language models (LLMs) to write code. To understand current inefficiencies, we develop a taxonomy that categorizes programmers' interactions with the LLM. Motivated by insights from the taxonomy, we leverage human feedback to learn when it is best to display LLM suggestions.
dc.publisher: Massachusetts Institute of Technology
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)
dc.rights: Copyright retained by author(s)
dc.rights.uri: https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.title: Training Human-AI Teams
dc.type: Thesis
dc.description.degree: Ph.D.
dc.contributor.department: Massachusetts Institute of Technology. Institute for Data, Systems, and Society
mit.thesis.degree: Doctoral
thesis.degree.name: Doctor of Philosophy

