DSpace@MIT

Explaining machine learning predictions : rationales and effective modifications

Author(s)
Mishra, Sudhanshu Nath.
Download
1098174801-MIT.pdf (12.22 MB)
Other Contributors
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Advisor
Randall Davis and Andrew W. Lo.
Terms of use
MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission. http://dspace.mit.edu/handle/1721.1/7582
Abstract
Deep learning models have demonstrated unprecedented accuracy in wide-ranging tasks such as object and speech recognition. These models can outperform techniques traditionally used in credit risk modeling, such as logistic regression. However, deep learning models operate as black boxes, which can limit their use and impact. Regulation mandates that a lender be able to disclose up to four factors that adversely affected a rejected credit applicant. We argue, however, that knowing why an applicant was turned down is not enough: an applicant would also want actionable advice that enables them to reach a favorable classification. Our research therefore focuses both on explaining why a machine learning model predicted the classification it did and on finding small changes to an input point that can reverse its classification. In this thesis, we evaluate two variants of LIME, a local model-approximation technique, and use them in a generate-and-test algorithm to produce mathematically effective modifications. We demonstrate that such modifications may not be pragmatically useful, and we show how numerical analyses can be supplemented with domain knowledge to generate explanations of pragmatic utility. Our work can help accelerate the adoption of deep learning in domains that would benefit from interpreting machine learning predictions.
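The generate-and-test idea in the abstract can be sketched minimally. This is an illustrative sketch, not the thesis's implementation: `credit_model` is a hypothetical stand-in for the black-box classifier, and the feature names, weights, and step sizes are invented for the example. The search simply proposes increasingly large single-feature tweaks and tests each against the model until one flips the classification.

```python
def credit_model(x):
    """Hypothetical stand-in for a black-box classifier.
    Features: [income, debt_ratio, num_late_payments]; approve iff score >= 0."""
    weights, bias = [1.0, -1.0, -0.5], 0.0
    return sum(w * xi for w, xi in zip(weights, x)) + bias >= 0

def generate_and_test(model, x, steps, max_multiplier=3):
    """Generate candidate modifications of one feature at a time, in
    growing increments, and test each; return the first approved point."""
    for k in range(1, max_multiplier + 1):
        for i, step in enumerate(steps):
            candidate = list(x)
            candidate[i] += k * step       # tweak a single feature
            if model(candidate):           # test against the black box
                return candidate
    return None                            # no flip found within the budget

applicant = [0.3, 0.9, 2.0]                # rejected under credit_model
fix = generate_and_test(credit_model, applicant,
                        steps=[1.0, -0.3, -1.0])
```

Note that a modification found this way can be mathematically effective yet pragmatically useless (here, the flip comes from a large jump in income that a real applicant may be unable to make), which is exactly the gap the thesis addresses by bringing in domain knowledge.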
Description
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018
 
Cataloged from PDF version of thesis.
 
Includes bibliographical references (pages 129-131).
 
Date issued
2018
URI
https://hdl.handle.net/1721.1/121599
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology
Keywords
Electrical Engineering and Computer Science.

Collections
  • Graduate Theses

Content created by the MIT Libraries, CC BY-NC unless otherwise noted.