| dc.contributor.author | Boggust, Angie | |
| dc.contributor.author | Bang, Hyemin | |
| dc.contributor.author | Strobelt, Hendrik | |
| dc.contributor.author | Satyanarayan, Arvind | |
| dc.date.accessioned | 2025-09-19T18:02:54Z | |
| dc.date.available | 2025-09-19T18:02:54Z | |
| dc.date.issued | 2025-04-25 | |
| dc.identifier.isbn | 979-8-4007-1394-1 | |
| dc.identifier.uri | https://hdl.handle.net/1721.1/162767 | |
| dc.description | CHI ’25, Yokohama, Japan | en_US |
| dc.description.abstract | While interpretability methods identify a model’s learned concepts, they overlook the relationships between concepts that make up its abstractions and inform its ability to generalize to new data. To assess whether models have learned human-aligned abstractions, we introduce abstraction alignment, a methodology to compare model behavior against formal human knowledge. Abstraction alignment externalizes domain-specific human knowledge as an abstraction graph, a set of pertinent concepts spanning levels of abstraction. Using the abstraction graph as a ground truth, abstraction alignment measures the alignment of a model’s behavior by determining how much of its uncertainty is accounted for by the human abstractions. By aggregating abstraction alignment across entire datasets, users can test alignment hypotheses, such as which human concepts the model has learned and where misalignments recur. In evaluations with experts, abstraction alignment differentiates seemingly similar errors, improves the verbosity of existing model-quality metrics, and uncovers improvements to current human abstractions. | en_US |
| dc.publisher | ACM|CHI Conference on Human Factors in Computing Systems | en_US |
| dc.relation.isversionof | https://doi.org/10.1145/3706598.3713406 | en_US |
| dc.rights | Creative Commons Attribution | en_US |
| dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | en_US |
| dc.source | Association for Computing Machinery | en_US |
| dc.title | Abstraction Alignment: Comparing Model-Learned and Human-Encoded Conceptual Relationships | en_US |
| dc.type | Article | en_US |
| dc.identifier.citation | Angie Boggust, Hyemin Bang, Hendrik Strobelt, and Arvind Satyanarayan. 2025. Abstraction Alignment: Comparing Model-Learned and Human-Encoded Conceptual Relationships. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI '25). Association for Computing Machinery, New York, NY, USA, Article 417, 1–20. | en_US |
| dc.contributor.department | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory | en_US |
| dc.identifier.mitlicense | PUBLISHER_POLICY | |
| dc.eprint.version | Final published version | en_US |
| dc.type.uri | http://purl.org/eprint/type/ConferencePaper | en_US |
| eprint.status | http://purl.org/eprint/status/NonPeerReviewed | en_US |
| dc.date.updated | 2025-08-01T08:07:59Z | |
| dc.language.rfc3066 | en | |
| dc.rights.holder | The author(s) | |
| dspace.date.submission | 2025-08-01T08:07:59Z | |
| mit.license | PUBLISHER_CC | |
| mit.metadata.status | Authority Work and Publication Information Needed | en_US |