| DC Field | Value | Language |
| --- | --- | --- |
| dc.contributor.author | Park, Charlotte | |
| dc.contributor.author | Donahue, Kate | |
| dc.contributor.author | Raghavan, Manish | |
| dc.date.accessioned | 2025-12-18T22:11:35Z | |
| dc.date.available | 2025-12-18T22:11:35Z | |
| dc.date.issued | 2025-06-12 | |
| dc.identifier.isbn | 979-8-4007-1399-6 | |
| dc.identifier.uri | https://hdl.handle.net/1721.1/164413 | |
| dc.description | UMAP Adjunct ’25, New York City, NY, USA | en_US |
| dc.description.abstract | Generative AI tools (GAITs) fundamentally differ from traditional machine learning tools in that they allow users to provide as much or as little information as they choose in their inputs. This flexibility often leads users to omit certain details, relying on the GAIT to infer and fill in less critical information based on distributional knowledge of user preferences. Inferences about preferences lead to natural questions about fairness, since a GAIT’s “best guess” may skew towards the preferences of larger groups at the expense of smaller ones. Unlike more traditional recommender systems, GAITs can acquire additional information about a user’s preferences through feedback or by explicitly soliciting it. This creates an interesting communication challenge: the user is aware of their specific preference, while the GAIT has knowledge of the overall distribution of preferences, and both parties can only exchange a limited amount of information. In this work, we present a mathematical model to describe human-AI co-creation of content under information asymmetry. Our results suggest that GAITs can use distributional information about overall preferences to determine the “right” questions to ask to maximize both welfare and fairness, opening up a rich design space in human-AI collaboration. | en_US |
| dc.publisher | ACM, Adjunct Proceedings of the 33rd ACM Conference on User Modeling, Adaptation and Personalization | en_US |
| dc.relation.isversionof | https://doi.org/10.1145/3708319.3733711 | en_US |
| dc.rights | Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International | en_US |
| dc.rights.uri | http://creativecommons.org/licenses/by-nc-sa/4.0/ | en_US |
| dc.source | Association for Computing Machinery | en_US |
| dc.title | When to Ask a Question: Understanding Communication Strategies in Generative AI Tools | en_US |
| dc.type | Article | en_US |
| dc.identifier.citation | Charlotte Park, Kate Donahue, and Manish Raghavan. 2025. When to Ask a Question: Understanding Communication Strategies in Generative AI Tools. In Adjunct Proceedings of the 33rd ACM Conference on User Modeling, Adaptation and Personalization (UMAP Adjunct '25). Association for Computing Machinery, New York, NY, USA, 288–299. https://doi.org/10.1145/3708319.3733711 | en_US |
| dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | en_US |
| dc.contributor.department | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory | en_US |
| dc.contributor.department | Sloan School of Management | en_US |
| dc.identifier.mitlicense | PUBLISHER_POLICY | |
| dc.eprint.version | Final published version | en_US |
| dc.type.uri | http://purl.org/eprint/type/ConferencePaper | en_US |
| eprint.status | http://purl.org/eprint/status/NonPeerReviewed | en_US |
| dc.date.updated | 2025-08-01T08:30:01Z | |
| dc.language.rfc3066 | en | |
| dc.rights.holder | The author(s) | |
| dspace.date.submission | 2025-08-01T08:30:01Z | |
| mit.license | PUBLISHER_CC | |
| mit.metadata.status | Authority Work and Publication Information Needed | en_US |
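
The abstract describes a "guess vs. ask" tradeoff: the GAIT can either act on its distributional best guess or spend part of a limited communication budget on a clarifying question, with consequences for both average welfare and fairness across preference groups. Below is a minimal Python sketch of that tradeoff; the preference space, population distribution, question cost, and utility structure are all illustrative assumptions, not the paper's actual model.

```python
import itertools

# Toy preference space and population distribution (illustrative assumptions).
prefs = ["formal", "casual", "playful"]
p = {"formal": 0.6, "casual": 0.3, "playful": 0.1}

ASK_COST = 0.15  # assumed fixed cost (user effort) of one clarifying question


def guess_utilities():
    """No question: the GAIT outputs the modal preference, so a user's
    utility is 1 if their preference matches the guess and 0 otherwise."""
    guess = max(p, key=p.get)
    return {t: float(t == guess) for t in prefs}


def ask_utilities(question):
    """One binary question splits the preference space into two cells;
    the GAIT then guesses the answering user's cell-conditional mode."""
    utils = {}
    for cell in question:
        guess = max(cell, key=lambda t: p[t])
        for t in cell:
            utils[t] = float(t == guess) - ASK_COST
    return utils


def welfare(utils):
    """Population-average utility under the distribution p."""
    return sum(p[t] * u for t, u in utils.items())


def fairness(utils):
    """Min-group fairness: the utility of the worst-off preference group."""
    return min(utils.values())


# Candidate questions: all binary partitions of the preference space
# (fixing prefs[0] to one side so mirror-image splits aren't counted twice).
questions = [
    (left, tuple(t for t in prefs if t not in left))
    for r in range(1, len(prefs))
    for left in itertools.combinations(prefs, r)
    if prefs[0] in left
]

base = guess_utilities()
print(f"guess only: welfare={welfare(base):.2f} fairness={fairness(base):.2f}")
for q in questions:
    u = ask_utilities(q)
    print(f"ask {q}: welfare={welfare(u):.2f} fairness={fairness(u):.2f}")
```

Even in this toy setting, the comparison surfaces the tension the abstract raises: guessing the mode maximizes no-cost welfare for the majority but leaves minority groups with zero utility, while asking a question raises both average welfare and the best question's choice depends on the distribution, yet every user, including those whose preference no binary question can pin down, pays the cost of answering.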