DSpace@MIT

A Test Suite for Saliency Method Evaluation Metrics

Author(s)
Kaspar, Moulinrouge
Download
Thesis PDF (3.879 MB)
Advisor
Satyanarayan, Arvind
Terms of use
In Copyright - Educational Use Permitted Copyright retained by author(s) https://rightsstatements.org/page/InC-EDU/1.0/
Abstract
This thesis introduces a structured test suite designed to evaluate the input sensitivity of saliency methods, a crucial factor when interpreting machine learning models, particularly in high-stakes environments. Saliency methods, by highlighting the input features influencing model decisions, serve as a key tool for understanding model behavior. Yet their effectiveness can vary, often presenting challenges in selection due to their inconsistent reliability and the potential for unfaithful representations of model dynamics. To address these challenges, our work enhances the process of selecting and applying saliency methods by rigorously testing their response to input perturbations, from adversarial modifications to minor variations. This test suite specifically assesses aspects such as completeness, deletion, faithfulness, and robustness across various data types—including textual and image data—and model architectures like convolutional and transformer models. We demonstrate the utility of the test suite by using it to compare how different saliency methods, as well as the same method across different architectures, behave under varied conditions. Our findings reveal significant variations in how these methods respond to changes in input data, providing insights that guide users in choosing more reliable techniques for interpreting model decisions. This facilitates a deeper understanding of which methods are best suited for specific tasks and promotes the selection of techniques that enhance the transparency and accountability of AI systems. Ultimately, this thesis contributes to advancing ethical compliance and fostering trust in automated decision-making processes by providing a comprehensive evaluation platform for saliency methods.
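As a rough illustration of the deletion-style evaluation mentioned in the abstract, the sketch below progressively removes the most-salient features from an input and records how the model's score changes; a faithful saliency map should cause the score to drop quickly. The names here (model_fn, saliency, deletion_curve) are hypothetical stand-ins for the thesis's actual test-suite components, which are not shown on this record page.

import numpy as np

def deletion_curve(model_fn, x, saliency, steps=10, baseline=0.0):
    """Zero out the most-salient features first and record the model score
    after each removal step."""
    order = np.argsort(saliency.ravel())[::-1]            # most salient first
    x_flat = x.ravel().copy()
    scores = [model_fn(x_flat.reshape(x.shape))]           # score on intact input
    chunk = max(1, len(order) // steps)
    for i in range(0, len(order), chunk):
        x_flat[order[i:i + chunk]] = baseline               # "delete" a batch of features
        scores.append(model_fn(x_flat.reshape(x.shape)))
    return np.array(scores)

if __name__ == "__main__":
    # Toy linear "model" and attribution map, purely for demonstration.
    rng = np.random.default_rng(0)
    x = rng.normal(size=(8, 8))
    weights = rng.normal(size=(8, 8))
    model_fn = lambda inp: float((inp * weights).sum())
    saliency = np.abs(weights)
    print(deletion_curve(model_fn, x, saliency))

A steeply decreasing curve suggests the saliency map ranks genuinely influential features first; the thesis's suite applies analogous perturbation tests (completeness, faithfulness, robustness) across image and text models.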
Date issued
2024-05
URI
https://hdl.handle.net/1721.1/156781
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology

Collections
  • Graduate Theses
