
dc.contributor.advisor: Cynthia Rudin
dc.contributor.author: Wang, Fulton
dc.contributor.other: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.date.accessioned: 2019-11-12T17:40:25Z
dc.date.available: 2019-11-12T17:40:25Z
dc.date.copyright: 2018
dc.date.issued: 2018
dc.identifier.uri: https://hdl.handle.net/1721.1/122870
dc.description: Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018
dc.description: Cataloged from PDF version of thesis.
dc.description: Includes bibliographical references (pages 71-77).
dc.description.abstract: In this thesis, I create solutions to two problems. In the first, I address the problem that many machine learning models are not interpretable by creating a new form of classifier, the Falling Rule List: a decision list classifier whose predicted probabilities decrease down the list. Experiments show that the gain in interpretability need not come with a large sacrifice in accuracy on real-world datasets. I then briefly discuss possible extensions that allow one to directly optimize rank statistics over rule lists and to handle ordinal data. In the second, I address a shortcoming of a popular approach to handling covariate shift, the setting in which the training distribution and the distribution for which predictions must be made have different covariate distributions. In particular, the existing importance weighting approach to handling covariate shift suffers from high variance when the two covariate distributions are very different. I develop a dimension reduction procedure that reduces this variance at the expense of increased bias. Experiments show that this tradeoff can be worthwhile in some situations.
dc.description.statementofresponsibility: by Fulton Wang
dc.format.extent: 77 pages
dc.language.iso: eng
dc.publisher: Massachusetts Institute of Technology
dc.rights: MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source, but further reproduction or distribution in any format is prohibited without written permission.
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582
dc.subject: Electrical Engineering and Computer Science
dc.title: Addressing two issues in machine learning: interpretability and dataset shift
dc.type: Thesis
dc.description.degree: Ph. D.
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.identifier.oclc: 1126649834
dc.description.collection: Ph.D. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
dspace.imported: 2019-11-12T17:40:24Z
mit.thesis.degree: Doctoral
mit.thesis.department: EECS
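The abstract's first contribution is a decision list whose predicted probabilities fall monotonically down the list. A minimal sketch of that data structure follows; the rules, feature names, and probabilities here are invented for illustration and are not taken from the thesis.

```python
# A falling rule list: an ordered list of IF-THEN rules whose attached
# probabilities must decrease down the list. All rules and numbers below
# are hypothetical examples, not results from the thesis.
RULES = [
    (lambda p: p["chest_pain"] and p["age"] > 60, 0.85),
    (lambda p: p["chest_pain"],                   0.40),
    (lambda p: p["age"] > 60,                     0.25),
]
DEFAULT_PROB = 0.05  # predicted probability when no rule fires

def predict(patient):
    """Return the probability attached to the first rule that fires."""
    for condition, prob in RULES:
        if condition(patient):
            return prob
    return DEFAULT_PROB

# The defining constraint: probabilities are non-increasing down the list,
# so the earlier a patient matches, the higher the predicted risk.
assert all(p1 >= p2 for (_, p1), (_, p2) in zip(RULES, RULES[1:]))
```

The monotonicity constraint is what makes the model easy to read: the list doubles as a risk ranking, since position in the list directly encodes predicted probability.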
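The abstract's second contribution concerns the high variance of importance weighting under covariate shift. A minimal sketch of the phenomenon, assuming Gaussian covariates and an arbitrary target functional (none of this is the thesis's own procedure):

```python
import numpy as np

def gaussian_pdf(x, mu, sigma=1.0):
    """Density of N(mu, sigma^2) evaluated pointwise."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def iw_estimate(rng, mu_train, mu_test, n=10_000):
    """Estimate E_test[f(x)] using only samples from the training distribution.

    Returns the importance-weighted estimate and the empirical spread of the
    weights, a rough proxy for the estimator's variance.
    """
    x = rng.normal(mu_train, 1.0, size=n)                      # draws from p_train
    w = gaussian_pdf(x, mu_test) / gaussian_pdf(x, mu_train)   # importance weights
    f = x ** 2                                                 # illustrative functional
    return np.mean(w * f), np.std(w)

rng = np.random.default_rng(0)
close = iw_estimate(rng, mu_train=0.0, mu_test=0.5)  # mild covariate shift
far = iw_estimate(rng, mu_train=0.0, mu_test=3.0)    # severe covariate shift
# Under mild shift the estimate is close to the true value (mu_test^2 + 1);
# under severe shift the weight spread blows up and the estimate is dominated
# by a few huge weights.
```

This weight blow-up is exactly the failure mode the abstract refers to; a dimension reduction step that makes the two covariate distributions look more alike trades some bias for a large reduction in this variance.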

