Notice
This is not the latest version of this item. The latest version can be found at: https://dspace.mit.edu/handle/1721.1/137669.3
Towards robust interpretability with self-explaining neural networks
Author(s)
Jaakkola, Tommi; Alvarez Melis, David
Terms of use
Publisher Policy: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
Abstract
© 2018 Curran Associates Inc. All rights reserved. Most recent work on interpretability of complex machine learning models has focused on estimating a posteriori explanations for previously trained models around specific predictions. Self-explaining models where interpretability plays a key role already during learning have received much less attention. We propose three desiderata for explanations in general - explicitness, faithfulness, and stability - and show that existing methods do not satisfy them. In response, we design self-explaining models in stages, progressively generalizing linear classifiers to complex yet architecturally explicit models. Faithfulness and stability are enforced via regularization specifically tailored to such models. Experimental results across various benchmark datasets show that our framework offers a promising direction for reconciling model complexity and interpretability.
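The abstract describes the approach only at a high level. Purely as an illustration (not the authors' released code), the PyTorch sketch below shows one way a generalized-linear self-explaining classifier with a gradient-based stability regularizer could be set up; the class name, the choice of raw input features as the interpretable concepts, the network sizes, and the penalty weight are all assumptions made for the example.

import torch
import torch.nn as nn

class SelfExplainingClassifier(nn.Module):
    """Illustrative generalized-linear self-explaining model: each class score is
    a dot product <theta_c(x), h(x)>, with h(x) taken to be the raw input features."""
    def __init__(self, in_dim, n_classes, hidden=64):
        super().__init__()
        # theta network: produces one relevance score per (class, feature) pair
        self.theta_net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, in_dim * n_classes),
        )
        self.in_dim, self.n_classes = in_dim, n_classes

    def forward(self, x):
        theta = self.theta_net(x).view(-1, self.n_classes, self.in_dim)
        logits = torch.einsum("bcd,bd->bc", theta, x)  # linear aggregation of concepts
        return logits, theta

def stability_penalty(x, logits, theta):
    # Penalize the gap between the true input gradient of each class score and
    # theta(x), so theta behaves as a locally faithful linear explanation.
    grads = [torch.autograd.grad(logits[:, c].sum(), x, create_graph=True)[0]
             for c in range(logits.shape[1])]
    return ((torch.stack(grads, dim=1) - theta) ** 2).mean()

# Usage sketch: classification loss plus a (hypothetical) stability weight of 1e-2.
model = SelfExplainingClassifier(in_dim=20, n_classes=3)
x = torch.randn(8, 20, requires_grad=True)
logits, theta = model(x)
loss = nn.functional.cross_entropy(logits, torch.randint(0, 3, (8,)))
loss = loss + 1e-2 * stability_penalty(x, logits, theta)
loss.backward()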
Date issued
2018
Citation
Jaakkola, Tommi and Alvarez Melis, David. 2018. "Towards robust interpretability with self-explaining neural networks."
Version: Final published version