Show simple item record

dc.contributor.author: Sanneman, Lindsay
dc.contributor.author: Tucker, Mycal
dc.contributor.author: Shah, Julie A.
dc.date.accessioned: 2024-07-24T17:13:05Z
dc.date.available: 2024-07-24T17:13:05Z
dc.date.issued: 2024-06-03
dc.identifier.isbn: 979-8-4007-0450-5
dc.identifier.uri: https://hdl.handle.net/1721.1/155782
dc.description: FAccT ’24, June 03–06, 2024, Rio de Janeiro
dc.description.abstract: Recent advances in artificial intelligence (AI) have underscored the need for explainable AI (XAI) to support human understanding of AI systems. Consideration of human factors that impact explanation efficacy, such as mental workload and human understanding, is central to effective XAI design. Existing work in XAI has demonstrated a tradeoff between understanding and workload induced by different types of explanations. Explaining complex concepts through abstractions (hand-crafted groupings of related problem features) has been shown to effectively address and balance this workload-understanding tradeoff. In this work, we characterize the workload-understanding balance via the Information Bottleneck method: an information-theoretic approach which automatically generates abstractions that maximize informativeness and minimize complexity. In particular, we establish empirical connections between workload and complexity and between understanding and informativeness through human-subject experiments. This empirical link between human factors and information-theoretic concepts provides an important mathematical characterization of the workload-understanding tradeoff which enables user-tailored XAI design.
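The abstract frames abstraction design via the Information Bottleneck method, which trades off complexity I(X;T) against informativeness I(T;Y). As an illustrative sketch (not the paper's implementation), the standard IB Lagrangian I(X;T) − β·I(T;Y) can be evaluated for a discrete encoder p(t|x) over a joint distribution p(x, y); the function and variable names below are hypothetical:

```python
import numpy as np

def mutual_info(p_joint):
    """Mutual information I(A;B) in nats from a joint distribution p(a, b)."""
    pa = p_joint.sum(axis=1, keepdims=True)   # marginal p(a)
    pb = p_joint.sum(axis=0, keepdims=True)   # marginal p(b)
    mask = p_joint > 0                        # skip zero cells (0 * log 0 = 0)
    return float((p_joint[mask] * np.log(p_joint[mask] / (pa @ pb)[mask])).sum())

def ib_objective(p_xy, p_t_given_x, beta):
    """Information Bottleneck Lagrangian: I(X;T) - beta * I(T;Y).

    Lower values favor compact (low-complexity) abstractions T;
    beta weights how much predictive information about Y is retained.
    """
    p_x = p_xy.sum(axis=1)                    # marginal p(x)
    p_xt = p_t_given_x * p_x[:, None]         # joint p(x, t) = p(t|x) p(x)
    p_ty = p_t_given_x.T @ p_xy               # joint p(t, y) = sum_x p(t|x) p(x, y)
    return mutual_info(p_xt) - beta * mutual_info(p_ty)
```

A maximally compressive encoder that maps every x to a single abstract element drives both terms to zero, while an identity encoder pays the full complexity cost I(X;T) = H(X) in exchange for all available informativeness I(X;Y); intermediate encoders trace out the workload-understanding balance the paper studies.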
dc.publisher: ACM | The 2024 ACM Conference on Fairness, Accountability, and Transparency
dc.relation.isversionof: 10.1145/3630106.3659032
dc.rights: Creative Commons Attribution
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.source: Association for Computing Machinery
dc.title: An Information Bottleneck Characterization of the Understanding-Workload Tradeoff in Human-Centered Explainable AI
dc.type: Article
dc.identifier.citation: Sanneman, Lindsay, Tucker, Mycal and Shah, Julie A. 2024. "An Information Bottleneck Characterization of the Understanding-Workload Tradeoff in Human-Centered Explainable AI." In The 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’24), June 03–06, 2024, Rio de Janeiro.
dc.contributor.department: Massachusetts Institute of Technology. Department of Aeronautics and Astronautics
dc.contributor.department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
dc.identifier.mitlicense: PUBLISHER_CC
dc.eprint.version: Final published version
dc.type.uri: http://purl.org/eprint/type/ConferencePaper
eprint.status: http://purl.org/eprint/status/NonPeerReviewed
dc.date.updated: 2024-07-01T07:56:29Z
dc.language.rfc3066: en
dc.rights.holder: The author(s)
dspace.date.submission: 2024-07-01T07:56:29Z
mit.license: PUBLISHER_CC
mit.metadata.status: Authority Work and Publication Information Needed


