dc.contributor.author | Sanneman, Lindsay | |
dc.contributor.author | Tucker, Mycal | |
dc.contributor.author | Shah, Julie A. | |
dc.date.accessioned | 2024-07-24T17:13:05Z | |
dc.date.available | 2024-07-24T17:13:05Z | |
dc.date.issued | 2024-06-03 | |
dc.identifier.isbn | 979-8-4007-0450-5 | |
dc.identifier.uri | https://hdl.handle.net/1721.1/155782 | |
dc.description | FAccT ’24, June 03–06, 2024, Rio de Janeiro, Brazil | en_US |
dc.description.abstract | Recent advances in artificial intelligence (AI) have underscored the need for explainable AI (XAI) to support human understanding of AI systems. Consideration of human factors that impact explanation efficacy, such as mental workload and human understanding, is central to effective XAI design. Existing work in XAI has demonstrated a tradeoff between understanding and the workload induced by different types of explanations. Explaining complex concepts through abstractions (hand-crafted groupings of related problem features) has been shown to effectively balance this workload-understanding tradeoff. In this work, we characterize the workload-understanding balance via the Information Bottleneck method: an information-theoretic approach that automatically generates abstractions that maximize informativeness and minimize complexity. In particular, we establish empirical connections between workload and complexity and between understanding and informativeness through human-subject experiments. This empirical link between human factors and information-theoretic concepts provides an important mathematical characterization of the workload-understanding tradeoff, enabling user-tailored XAI design. | en_US |
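For reference, the abstract's description of the Information Bottleneck method corresponds to its standard objective. A minimal sketch of that formulation, assuming the usual notation (X the problem features, T the generated abstraction, Y the quantity the explanation must convey, β ≥ 0 a tradeoff weight, and I(·;·) mutual information); the specific variable names are illustrative, not taken from the paper:

\min_{p(t \mid x)} \; I(X; T) \;-\; \beta \, I(T; Y)

Here I(X;T) measures abstraction complexity (which the abstract links empirically to workload) and I(T;Y) measures informativeness (linked to understanding); sweeping β traces out the workload-understanding tradeoff described above.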
dc.publisher | ACM|The 2024 ACM Conference on Fairness, Accountability, and Transparency | en_US |
dc.relation.isversionof | 10.1145/3630106.3659032 | en_US |
dc.rights | Creative Commons Attribution | en_US |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | en_US |
dc.source | Association for Computing Machinery | en_US |
dc.title | An Information Bottleneck Characterization of the Understanding-Workload Tradeoff in Human-Centered Explainable AI | en_US |
dc.type | Article | en_US |
dc.identifier.citation | Sanneman, Lindsay, Mycal Tucker, and Julie A. Shah. 2024. "An Information Bottleneck Characterization of the Understanding-Workload Tradeoff in Human-Centered Explainable AI." In The 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT '24), June 03–06, 2024, Rio de Janeiro, Brazil. ACM. https://doi.org/10.1145/3630106.3659032. | |
dc.contributor.department | Massachusetts Institute of Technology. Department of Aeronautics and Astronautics | |
dc.contributor.department | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory | |
dc.identifier.mitlicense | PUBLISHER_CC | |
dc.eprint.version | Final published version | en_US |
dc.type.uri | http://purl.org/eprint/type/ConferencePaper | en_US |
eprint.status | http://purl.org/eprint/status/NonPeerReviewed | en_US |
dc.date.updated | 2024-07-01T07:56:29Z | |
dc.language.rfc3066 | en | |
dc.rights.holder | The author(s) | |
dspace.date.submission | 2024-07-01T07:56:29Z | |
mit.license | PUBLISHER_CC | |
mit.metadata.status | Authority Work and Publication Information Needed | en_US |