Show simple item record

dc.contributor.advisor: David K. Gifford (en_US)
dc.contributor.author: Carter, Brandon M. (Machine learning scientist) (en_US)
dc.contributor.other: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science (en_US)
dc.date.accessioned: 2019-11-22T00:01:53Z
dc.date.available: 2019-11-22T00:01:53Z
dc.date.copyright: 2019 (en_US)
dc.date.issued: 2019 (en_US)
dc.identifier.uri: https://hdl.handle.net/1721.1/123008
dc.description: This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. (en_US)
dc.description: Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019 (en_US)
dc.description: Cataloged from student-submitted PDF version of thesis. (en_US)
dc.description: Includes bibliographical references (pages 73-77). (en_US)
dc.description.abstract: Recent progress in machine learning has come at the cost of interpretability, earning the field a reputation for producing opaque, "black-box" models. While deep neural networks are often able to achieve superior predictive accuracy over traditional models, the functions and representations they learn are usually highly nonlinear and difficult to interpret. This lack of interpretability hinders the adoption of deep learning methods in fields such as medicine, where understanding why a model made a decision is crucial. Existing techniques for explaining the decisions of black-box models are often either restricted to a specific type of predictor or undesirably sensitive to factors unrelated to the model's decision-making process. In this thesis, we propose sufficient input subsets: minimal subsets of input features whose values form the basis for a model's decision. Our technique can rationalize decisions made by a black-box function on individual inputs and can also explain the basis for misclassifications. Moreover, general principles that globally govern a model's decision-making can be revealed by searching for clusters of such input patterns across many data points. Our approach is conceptually straightforward, entirely model-agnostic, simply implemented using instance-wise backward selection, and able to produce more concise rationales than existing techniques. We demonstrate the utility of our interpretation method on various neural network models trained on text, genomic, and image data. (en_US)
dc.description.statementofresponsibility: by Brandon M. Carter. (en_US)
dc.format.extent: 77 pages (en_US)
dc.language.iso: eng (en_US)
dc.publisher: Massachusetts Institute of Technology (en_US)
dc.rights: MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission. (en_US)
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582 (en_US)
dc.subject: Electrical Engineering and Computer Science. (en_US)
dc.title: Interpreting black-box models through sufficient input subsets (en_US)
dc.type: Thesis (en_US)
dc.description.degree: M. Eng. (en_US)
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science (en_US)
dc.identifier.oclc: 1127567462 (en_US)
dc.description.collection: M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science (en_US)
dspace.imported: 2019-11-22T00:01:53Z (en_US)
mit.thesis.degree: Master (en_US)
mit.thesis.department: EECS (en_US)
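
The abstract above notes that sufficient input subsets are "simply implemented using instance-wise backward selection." The sketch below shows roughly how that idea could look for a single input; it is not the thesis's implementation, and the names sufficient_input_subset, f, threshold, and mask_value are illustrative assumptions (f is taken to be a black-box scoring function over a fixed-length feature vector, and threshold the score a subset must preserve to count as sufficient).

import numpy as np

def sufficient_input_subset(f, x, threshold, mask_value=0.0):
    """Sketch of instance-wise backward selection for one input x.

    f          -- black-box function mapping a 1-D feature vector to a score (assumed)
    x          -- input whose decision we want to rationalize, with f(x) >= threshold
    threshold  -- score the masked input must reach for the subset to be "sufficient"
    mask_value -- value used to hide a feature from the model (assumed masking scheme)
    """
    x = np.asarray(x, dtype=float)
    remaining = list(range(len(x)))
    order = []  # features in the order backward selection removes them

    # Backward selection: repeatedly mask the feature whose removal
    # hurts the already-masked input's score the least.
    masked = x.copy()
    while remaining:
        scores = []
        for i in remaining:
            trial = masked.copy()
            trial[i] = mask_value
            scores.append(f(trial))
        best = remaining[int(np.argmax(scores))]
        masked[best] = mask_value
        remaining.remove(best)
        order.append(best)

    # Rebuild a small subset: add features back in reverse removal order
    # (most important first) until the score clears the threshold again.
    subset = []
    trial = np.full_like(x, mask_value)
    for i in reversed(order):
        trial[i] = x[i]
        subset.append(i)
        if f(trial) >= threshold:
            return sorted(subset)
    return None  # no subset reaches the threshold

With a sentiment classifier f over a bag-of-words vector, for instance, the returned indices would identify the words the model needs in order to keep its predicted score above threshold; everything else can be masked without changing the decision.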

