Interpretable classifiers using rules and Bayesian analysis: Building a better stroke prediction model
Author(s)
McCormick, Tyler H.; Madigan, David; Letham, Benjamin; Rudin, Cynthia
Publisher Policy
Terms of use
Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
Abstract
We aim to produce predictive models that are not only accurate, but are also interpretable to human experts. Our models are decision lists, which consist of a series of if-then statements (e.g., if high blood pressure, then stroke) that discretize a high-dimensional, multivariate feature space into a series of simple, readily interpretable decision statements. We introduce a generative model called Bayesian Rule Lists that yields a posterior distribution over possible decision lists. It employs a novel prior structure to encourage sparsity. Our experiments show that Bayesian Rule Lists has predictive accuracy on par with the current top algorithms for prediction in machine learning. Our method is motivated by recent developments in personalized medicine, and can be used to produce highly accurate and interpretable medical scoring systems. We demonstrate this by producing an alternative to the CHADS₂ score, actively used in clinical practice for estimating the risk of stroke in patients who have atrial fibrillation. Our model is as interpretable as CHADS₂, but more accurate.
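The decision-list structure described in the abstract can be made concrete with a minimal sketch. This is not the authors' Bayesian Rule Lists implementation (which learns the rules and their order from data via a posterior distribution); it only illustrates how an ordered list of if-then rules classifies a case: the first rule whose condition matches determines the prediction, and a default applies if none match. The feature names and thresholds below are hypothetical.

```python
def predict(rules, default, patient):
    """Return the label of the first rule whose condition matches,
    or the default label if no rule applies."""
    for condition, label in rules:
        if condition(patient):
            return label
    return default

# A toy stroke-risk decision list in the spirit of the paper's
# "if high blood pressure, then stroke" example; rules are checked in order.
rules = [
    (lambda p: p["hemiplegia"] and p["age"] > 60, "high risk"),
    (lambda p: p["high_blood_pressure"], "elevated risk"),
]

patient = {"hemiplegia": True, "age": 72, "high_blood_pressure": False}
print(predict(rules, "low risk", patient))  # first rule matches: "high risk"
```

In Bayesian Rule Lists, both the choice of conditions and their ordering are inferred from data, with a prior that favors short lists of short rules; the sketch above only shows how a fixed list is evaluated.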
Date issued
2015-09
Department
Sloan School of Management
Journal
The Annals of Applied Statistics
Publisher
Institute of Mathematical Statistics
Citation
Letham, Benjamin et al. “Interpretable Classifiers Using Rules and Bayesian Analysis: Building a Better Stroke Prediction Model.” The Annals of Applied Statistics 9, 3 (September 2015): 1350–1371 © 2015 Institute of Mathematical Statistics
Version: Final published version
ISSN
1932-6157