Show simple item record

dc.contributor.advisor: Shah, Julie A.
dc.contributor.author: Zheng, Yiming
dc.date.accessioned: 2023-07-31T19:37:55Z
dc.date.available: 2023-07-31T19:37:55Z
dc.date.issued: 2023-06
dc.date.submitted: 2023-06-06T16:35:27.418Z
dc.identifier.uri: https://hdl.handle.net/1721.1/151413
dc.description.abstract: In recent years, increasingly powerful machine learning models have shown remarkable performance on a wide variety of tasks, and their use is therefore becoming more prevalent, including deployment in high-stakes settings such as medical and legal applications. Because these models are complex, their decision processes are hard to understand, suggesting a need for model interpretability. Interpretability can be deceptively challenging. First, explanations for a model's decisions on example inputs may appear understandable; however, if the underlying explanation method is not itself interpretable, more care must be taken before claiming that the method produces interpretable explanations. Second, it can be difficult to apply interpretability techniques efficiently to large models with many parameters. Through the lens of the first challenge, we examine neural rationale models, which are popular for interpretable prediction in natural language processing (NLP) tasks. In these models, a selector extracts segments of the input text, called rationales, and passes them to a classifier for prediction. Since the rationale is the only information accessible to the classifier, it is plausibly defined to be the explanation. However, through both philosophical perspectives and empirical studies, we argue that rationale models may be less interpretable than expected. We call for more rigorous evaluations of these models to ensure that the desired properties of interpretability are indeed achieved. Through the lens of the second challenge, we study influence functions, which explain a model's output by tracing its decision process back to the training data. Given a test point, influence functions compute an influence score for each training point representing how influential that training point is on the model's decision for the test point. While influence functions are expensive to compute for large models with many parameters, we aim to build intuition for them in low-dimensional settings and develop simple, cheap-to-compute heuristics that are competitive with influence functions.
dc.publisher: Massachusetts Institute of Technology
dc.rights: In Copyright - Educational Use Permitted
dc.rights: Copyright retained by author(s)
dc.rights.uri: https://rightsstatements.org/page/InC-EDU/1.0/
dc.title: An Analysis of Neural Rationale Models and Influence Functions for Interpretable Machine Learning
dc.type: Thesis
dc.description.degree: M.Eng.
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
mit.thesis.degree: Master
thesis.degree.name: Master of Engineering in Electrical Engineering and Computer Science
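
For readers unfamiliar with the select-then-predict architecture described in the abstract, the sketch below illustrates the general idea in PyTorch: a selector produces a token mask (the rationale), and a classifier is allowed to see only the selected tokens. The module names, dimensions, and the straight-through selection trick are illustrative assumptions, not the specific architecture studied in the thesis.

```python
# Minimal select-then-predict sketch of a neural rationale model (illustrative only;
# module names, sizes, and the straight-through selection are assumptions, not the
# architecture used in the thesis).
import torch
import torch.nn as nn


class Selector(nn.Module):
    """Scores each token and produces a (near-)binary mask over the input."""

    def __init__(self, vocab_size: int, embed_dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.score = nn.Linear(embed_dim, 1)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq_len) -> selection probabilities in [0, 1]
        probs = torch.sigmoid(self.score(self.embed(token_ids))).squeeze(-1)
        # Straight-through estimator: hard 0/1 mask in the forward pass,
        # gradients flow through the soft probabilities in the backward pass.
        hard = (probs > 0.5).float()
        return hard + probs - probs.detach()


class Classifier(nn.Module):
    """Predicts only from the masked (rationale-only) token embeddings."""

    def __init__(self, vocab_size: int, embed_dim: int = 64, num_classes: int = 2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.out = nn.Linear(embed_dim, num_classes)

    def forward(self, token_ids: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # Zero out embeddings of unselected tokens, then mean-pool and classify.
        masked = self.embed(token_ids) * mask.unsqueeze(-1)
        return self.out(masked.mean(dim=1))


if __name__ == "__main__":
    vocab_size = 1000
    selector, classifier = Selector(vocab_size), Classifier(vocab_size)
    tokens = torch.randint(0, vocab_size, (4, 20))   # toy batch of 4 "sentences"
    rationale_mask = selector(tokens)                # which tokens the classifier may see
    logits = classifier(tokens, rationale_mask)
    print(rationale_mask.shape, logits.shape)        # (4, 20) and (4, 2)
```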
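Similarly, as a reminder of the quantity the abstract refers to, the standard first-order influence-function score of a training point z on a test point z_test (Koh and Liang, 2017) is shown below; the thesis's exact formulation and approximations may differ.

```latex
% Standard first-order influence of a training point z on the test loss at z_test
% (Koh & Liang, 2017); included only as background, not taken from the thesis.
\[
  \mathcal{I}(z, z_{\text{test}})
    = -\,\nabla_{\theta} L(z_{\text{test}}, \hat{\theta})^{\top}
      H_{\hat{\theta}}^{-1}\,
      \nabla_{\theta} L(z, \hat{\theta}),
  \qquad
  H_{\hat{\theta}} = \frac{1}{n} \sum_{i=1}^{n} \nabla_{\theta}^{2} L(z_i, \hat{\theta})
\]
```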

