Addressing two issues in machine learning: interpretability and dataset shift
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
In this thesis, I address two problems. In the first, I address the fact that many machine learning models are not interpretable by creating a new form of classifier, the Falling Rule List: a decision list classifier in which the predicted probabilities decrease down the list. Experiments show that the gain in interpretability need not be accompanied by a large sacrifice in accuracy on real-world datasets. I then briefly discuss possible extensions that allow one to directly optimize rank statistics over rule lists and to handle ordinal data. In the second, I address a shortcoming of a popular approach to handling covariate shift, the setting in which the training distribution and the distribution for which predictions must be made differ in their covariate distributions. In particular, the existing importance weighting approach to handling covariate shift suffers from high variance when the two covariate distributions are very different. I develop a dimension reduction procedure that reduces this variance at the expense of increased bias. Experiments show that this tradeoff can be worthwhile in some situations.
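The prediction rule of a falling rule list, as described in the abstract, can be illustrated with a minimal sketch. This is not the thesis's implementation or its learning algorithm; the rules, probabilities, and feature names below are invented for illustration. The only property taken from the abstract is that the list is traversed in order and the attached probabilities are non-increasing down the list.

```python
# Illustrative sketch of prediction with a falling rule list: an ordered
# list of (condition, probability) pairs whose probabilities decrease
# down the list, followed by a default probability. All rules and
# numbers here are hypothetical, not from the thesis.

def predict_proba(rules, default_prob, x):
    """Return the predicted probability for feature dict x.

    rules: list of (condition_fn, prob) pairs with non-increasing probs.
    default_prob: probability when no rule fires (the smallest of all).
    """
    for condition, prob in rules:
        if condition(x):
            return prob
    return default_prob

# Made-up example rules; probabilities fall monotonically: 0.9 > 0.6 > 0.4 > 0.1.
rules = [
    (lambda x: x["age"] > 60 and x["smoker"], 0.9),
    (lambda x: x["age"] > 60, 0.6),
    (lambda x: x["smoker"], 0.4),
]
default_prob = 0.1

print(predict_proba(rules, default_prob, {"age": 70, "smoker": True}))   # first rule fires: 0.9
print(predict_proba(rules, default_prob, {"age": 30, "smoker": False}))  # no rule fires: 0.1
```

Because the probabilities fall down the list, the position at which an example first matches directly conveys its risk ranking, which is the source of the interpretability gain the abstract refers to.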
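The high-variance phenomenon in the second problem can also be sketched. Importance weighting corrects covariate shift by weighting each training point by w(x) = p_test(x) / p_train(x); when the two covariate distributions differ substantially, these weights become highly skewed and a few points dominate any weighted estimator. The Gaussian densities below are a toy assumption chosen only to make the effect visible, not the setting used in the thesis.

```python
import numpy as np

# Toy illustration of importance weights under covariate shift:
# training covariates from N(0, 1), test covariates from N(2, 1).
# The weights w(x) = p_test(x) / p_train(x) then have mean 1 but
# large variance, since a handful of training points in the test
# distribution's region receive very large weights.

rng = np.random.default_rng(0)

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

x_train = rng.normal(0.0, 1.0, size=10_000)
weights = gaussian_pdf(x_train, 2.0, 1.0) / gaussian_pdf(x_train, 0.0, 1.0)

# Mean is near 1 (importance weights integrate to 1), but the variance
# is large: analytically Var[w] = e^4 - 1 for this pair of Gaussians.
print(weights.mean(), weights.var())
```

Shrinking the gap between the two covariate distributions, for instance by projecting onto fewer dimensions as the thesis proposes, tames these weights at the cost of some bias in the correction.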
Thesis: Ph.D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 71-77).