DSpace@MIT
Interpreting black-box models through sufficient input subsets

Author(s)
Carter, Brandon M. (Machine learning scientist)
Download: 1127567462-MIT.pdf (2.944 MB)
Other Contributors
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Advisor
David K. Gifford.
Terms of use
MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission. http://dspace.mit.edu/handle/1721.1/7582
Abstract
Recent progress in machine learning has come at the cost of interpretability, earning the field a reputation for producing opaque, "black-box" models. While deep neural networks often achieve superior predictive accuracy over traditional models, the functions and representations they learn are usually highly nonlinear and difficult to interpret. This lack of interpretability hinders the adoption of deep learning methods in fields such as medicine, where understanding why a model made a decision is crucial. Existing techniques for explaining the decisions made by black-box models are often either restricted to a specific type of predictor or undesirably sensitive to factors unrelated to the model's decision-making process. In this thesis, we propose sufficient input subsets: minimal subsets of input features whose values form the basis for a model's decision. Our technique can rationalize decisions made by a black-box function on individual inputs and can also explain the basis for misclassifications. Moreover, general principles that globally govern a model's decision-making can be revealed by searching for clusters of such input patterns across many data points. Our approach is conceptually straightforward, entirely model-agnostic, simple to implement using instance-wise backward selection, and able to produce more concise rationales than existing techniques. We demonstrate the utility of our interpretation method on various neural network models trained on text, genomic, and image data.
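
The abstract's mention of instance-wise backward selection suggests a simple greedy procedure: repeatedly mask the input feature whose removal hurts the model's score least, and stop just before the score would fall below a decision threshold. The sketch below is a minimal illustration of that idea, not the thesis's exact algorithm; the function name sufficient_subset, the mask_value baseline, and the fixed threshold are assumptions introduced here for illustration.

import numpy as np

def sufficient_subset(f, x, threshold, mask_value=0.0):
    """Greedy instance-wise backward selection (illustrative sketch only).

    f          : black-box scoring function mapping a 1-D feature vector to a scalar
    x          : the input to rationalize (1-D NumPy array)
    threshold  : minimum score the masked input must still achieve
    mask_value : value used to "remove" a feature (an assumption; a neutral
                 baseline such as a feature mean could be used instead)
    Returns the indices of a small feature subset that keeps f at or above threshold.
    """
    remaining = set(range(len(x)))
    masked = x.copy()

    while len(remaining) > 1:
        # Try masking each remaining feature; keep the removal that hurts the score least.
        best_i, best_score = None, -np.inf
        for i in remaining:
            trial = masked.copy()
            trial[i] = mask_value
            score = f(trial)
            if score > best_score:
                best_i, best_score = i, score
        # Stop once any further removal would drop the prediction below the threshold.
        if best_score < threshold:
            break
        masked[best_i] = mask_value
        remaining.remove(best_i)

    return sorted(remaining)

# Toy usage with a hypothetical linear scorer (illustration only):
# w = np.array([0.1, 0.9, 0.05, 0.8])
# f = lambda v: float(v @ w)
# sufficient_subset(f, np.ones(4), threshold=1.5)   # -> [1, 3]

Because the procedure only queries f on masked copies of the input, it is model-agnostic in the sense described in the abstract: it needs prediction access to the black box, not gradients or internal structure.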
Description
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
 
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
 
Cataloged from student-submitted PDF version of thesis.
 
Includes bibliographical references (pages 73-77).
 
Date issued
2019
URI
https://hdl.handle.net/1721.1/123008
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology
Keywords
Electrical Engineering and Computer Science.

Collections
  • Graduate Theses

Content created by the MIT Libraries, CC BY-NC unless otherwise noted.