Show simple item record

dc.contributor.author: Park, Charlotte
dc.contributor.author: Donahue, Kate
dc.contributor.author: Raghavan, Manish
dc.date.accessioned: 2025-12-18T22:11:35Z
dc.date.available: 2025-12-18T22:11:35Z
dc.date.issued: 2025-06-12
dc.identifier.isbn: 979-8-4007-1399-6
dc.identifier.uri: https://hdl.handle.net/1721.1/164413
dc.description: UMAP Adjunct ’25, New York City, NY, USA
dc.description.abstract: Generative AI tools (GAITs) fundamentally differ from traditional machine learning tools in that they allow users to provide as much or as little information as they choose in their inputs. This flexibility often leads users to omit certain details, relying on the GAIT to infer and fill in less critical information based on distributional knowledge of user preferences. Inferences about preferences lead to natural questions about fairness, since a GAIT’s “best guess” may skew towards the preferences of larger groups at the expense of smaller ones. Unlike more traditional recommender systems, GAITs can acquire additional information about a user’s preferences through feedback or by explicitly soliciting it. This creates an interesting communication challenge: the user is aware of their specific preference, while the GAIT has knowledge of the overall distribution of preferences, and both parties can only exchange a limited amount of information. In this work, we present a mathematical model to describe human-AI co-creation of content under information asymmetry. Our results suggest that GAITs can use distributional information about overall preferences to determine the “right” questions to ask to maximize both welfare and fairness, opening up a rich design space in human-AI collaboration.
dc.publisher: ACM | Adjunct Proceedings of the 33rd ACM Conference on User Modeling, Adaptation and Personalization
dc.relation.isversionof: https://doi.org/10.1145/3708319.3733711
dc.rights: Creative Commons Attribution-Noncommercial-ShareAlike
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/
dc.source: Association for Computing Machinery
dc.title: When to Ask a Question: Understanding Communication Strategies in Generative AI Tools
dc.type: Article
dc.identifier.citation: Charlotte Park, Kate Donahue, and Manish Raghavan. 2025. When to Ask a Question: Understanding Communication Strategies in Generative AI Tools. In Adjunct Proceedings of the 33rd ACM Conference on User Modeling, Adaptation and Personalization (UMAP Adjunct '25). Association for Computing Machinery, New York, NY, USA, 288–299.
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.contributor.department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
dc.contributor.department: Sloan School of Management
dc.identifier.mitlicense: PUBLISHER_POLICY
dc.eprint.version: Final published version
dc.type.uri: http://purl.org/eprint/type/ConferencePaper
eprint.status: http://purl.org/eprint/status/NonPeerReviewed
dc.date.updated: 2025-08-01T08:30:01Z
dc.language.rfc3066: en
dc.rights.holder: The author(s)
dspace.date.submission: 2025-08-01T08:30:01Z
mit.license: PUBLISHER_CC
mit.metadata.status: Authority Work and Publication Information Needed

