| dc.contributor.author | Lim, Brian | |
| dc.contributor.author | Cahaly, Joseph | |
| dc.contributor.author | Sng, Chester | |
| dc.contributor.author | Chew, Adam | |
| dc.date.accessioned | 2025-09-30T16:48:16Z | |
| dc.date.available | 2025-09-30T16:48:16Z | |
| dc.date.issued | 2025-04-25 | |
| dc.identifier.isbn | 979-8-4007-1394-1 | |
| dc.identifier.uri | https://hdl.handle.net/1721.1/162841 | |
| dc.description | CHI ’25, Yokohama, Japan | en_US |
| dc.description.abstract | Many visualizations have been developed for explainable AI (XAI), but they often require further reasoning by users to interpret. Investigating XAI for high-stakes medical diagnosis, we propose improving domain alignment with diagrammatic and abductive reasoning to reduce the interpretability gap. We developed DiagramNet to predict cardiac diagnoses from heart auscultation, select the best-fitting hypothesis based on criteria evaluation, and explain with clinically-relevant murmur diagrams. The ante-hoc interpretable model leverages a domain-relevant ontology, representation, and reasoning process to increase trust among expert users. In modeling studies, we found that DiagramNet not only provides faithful murmur shape explanations, but also has better performance than baseline models. We demonstrate the interpretability and trustworthiness of diagrammatic, abductive explanations in a qualitative user study with medical students, showing that clinically-relevant, diagrammatic explanations are preferred over technical saliency map explanations. This work contributes insights into providing domain-aligned explanations for user-centric XAI in complex domains. | en_US |
| dc.publisher | ACM \| CHI Conference on Human Factors in Computing Systems | en_US |
| dc.relation.isversionof | https://doi.org/10.1145/3706598.3714058 | en_US |
| dc.rights | Creative Commons Attribution | en_US |
| dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | en_US |
| dc.source | Association for Computing Machinery | en_US |
| dc.title | Diagrammatization and Abduction to Improve AI Interpretability With Domain-Aligned Explanations for Medical Diagnosis | en_US |
| dc.type | Article | en_US |
| dc.identifier.citation | Brian Y. Lim, Joseph P. Cahaly, Chester Y. F. Sng, and Adam Chew. 2025. Diagrammatization and Abduction to Improve AI Interpretability With Domain-Aligned Explanations for Medical Diagnosis. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI '25). Association for Computing Machinery, New York, NY, USA, Article 419, 1–25. | en_US |
| dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | en_US |
| dc.identifier.mitlicense | PUBLISHER_POLICY | |
| dc.eprint.version | Final published version | en_US |
| dc.type.uri | http://purl.org/eprint/type/ConferencePaper | en_US |
| eprint.status | http://purl.org/eprint/status/NonPeerReviewed | en_US |
| dc.date.updated | 2025-08-01T08:14:23Z | |
| dc.language.rfc3066 | en | |
| dc.rights.holder | The author(s) | |
| dspace.date.submission | 2025-08-01T08:14:23Z | |
| mit.license | PUBLISHER_CC | |
| mit.metadata.status | Authority Work and Publication Information Needed | en_US |