dc.contributor.advisor: Randall Davis and Andrew W. Lo. [en_US]
dc.contributor.author: Mishra, Sudhanshu Nath. [en_US]
dc.contributor.other: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science. [en_US]
dc.date.accessioned: 2019-07-12T17:40:46Z
dc.date.available: 2019-07-12T17:40:46Z
dc.date.copyright: 2018 [en_US]
dc.date.issued: 2018 [en_US]
dc.identifier.uri: https://hdl.handle.net/1721.1/121599
dc.description: Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018 [en_US]
dc.description: Cataloged from PDF version of thesis. [en_US]
dc.description: Includes bibliographical references (pages 129-131). [en_US]
dc.description.abstract: Deep learning models have demonstrated unprecedented accuracy in wide-ranging tasks such as object and speech recognition. These models can outperform techniques traditionally used in credit risk modeling, such as logistic regression. However, deep learning models operate as black boxes, which can limit their use and impact. Regulation mandates that a lender be able to disclose up to four factors that adversely affected a rejected credit applicant. But we argue that knowing why an applicant was turned down is not enough: an applicant would also want actionable advice that can enable them to reach a favorable classification. Our research thus focuses both on explaining why a machine learning model predicted the classification it did and on finding small changes to an input point that can reverse its classification. In this thesis, we evaluate two variants of LIME, a local model-approximation technique, and use them in a generate-and-test algorithm to produce mathematically effective modifications. We demonstrate that such modifications may not be pragmatically useful, and we show how numerical analyses can be supplemented with domain knowledge to generate explanations of pragmatic utility. Our work can help accelerate the adoption of deep learning in domains that would benefit from interpreting machine learning predictions. [en_US]
dc.description.statementofresponsibility: by Sudhanshu Nath Mishra. [en_US]
dc.format.extent: 131 pages [en_US]
dc.language.iso: eng [en_US]
dc.publisher: Massachusetts Institute of Technology [en_US]
dc.rights: MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission. [en_US]
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582 [en_US]
dc.subject: Electrical Engineering and Computer Science. [en_US]
dc.title: Explaining machine learning predictions : rationales and effective modifications [en_US]
dc.type: Thesis [en_US]
dc.description.degree: M. Eng. [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science [en_US]
dc.identifier.oclc: 1098174801 [en_US]
dc.description.collection: M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science [en_US]
dspace.imported: 2019-07-12T17:40:44Z [en_US]
mit.thesis.degree: Master [en_US]
mit.thesis.department: EECS [en_US]