Systems for Usable Machine Learning

Author(s)
Zytek, Alexandra
Thesis PDF (11.55 MB)
Advisor
Veeramachaneni, Kalyan
Terms of use
In Copyright - Educational Use Permitted. Copyright retained by author(s). https://rightsstatements.org/page/InC-EDU/1.0/
Abstract
Many real-world decision problems are complex, with outcomes that are difficult to measure and evaluate. The impact of decisions made in these domains is nuanced and takes a long time to be fully realized. Individual mistakes can lead to significant costs, and computational tools such as ML models must be integrated alongside existing, well-established human workflows. These properties mean that ML solutions must be usable in order to be effective; in other words, they must be developed and deployed in such a way that humans actually use them in decision-making and outcomes improve. To improve ML usability, developers create ML tools: diverse kinds of interfaces that allow users to understand ML models and their predictions. In this thesis, we use real-world case studies to synthesize generalizable lessons for applying usable ML tools to complex, real-world decision problems. Based on experience developing ML tools for child welfare screening, we propose a formal taxonomy of feature properties related to usability and interpretability. We then describe the design and development of a system that makes it more effective to generate ML explanations using such interpretable features: Pyreal, a framework and Python library that uses updated data transformers to generate explanations of ML models and predictions in terms of interpretable features. Motivated by the development and customization effort required to build ML tools for new applications, we present Sibyl, a configurable and comprehensive system for generating usable ML interfaces for a wide range of applications, along with a case study applying Sibyl to the decision problem of wind turbine monitoring. We then introduce Explingo, our system for transforming traditional ML explanations into natural-language narratives to further improve the usability of ML outputs. We conclude with the practical lessons this work demonstrates about the need for usable ML, the challenges specific to these complex applications, ethical questions, and future directions.
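To illustrate the general idea behind transformer-based interpretable explanations as described in the abstract, the sketch below maps model-space feature importances back to the original, human-readable features by summing importances across the columns a data transformer produced from each feature. It is a minimal sketch using only scikit-learn; the toy dataset, column names, and aggregation loop are invented for illustration and this is not Pyreal's actual API.

    # Illustrative sketch: aggregate model-space importances back to interpretable features.
    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.preprocessing import OneHotEncoder
    from sklearn.ensemble import RandomForestClassifier

    # Toy data: one interpretable categorical feature and one numeric feature (hypothetical).
    X = pd.DataFrame({"referral_source": ["school", "neighbor", "school", "clinic"],
                      "num_prior_reports": [0, 2, 1, 3]})
    y = [0, 1, 0, 1]

    # Data transformer that expands the categorical feature into model-ready columns.
    ct = ColumnTransformer(
        [("onehot", OneHotEncoder(), ["referral_source"])], remainder="passthrough")
    X_model = ct.fit_transform(X)

    model = RandomForestClassifier(random_state=0).fit(X_model, y)

    # An explanation computed in model space: one importance score per transformed column.
    model_feature_names = ct.get_feature_names_out()   # e.g. "onehot__referral_source_school"
    importances = dict(zip(model_feature_names, model.feature_importances_))

    # Map each model-space score back to the interpretable feature it was derived from,
    # summing across all columns that originated from the same feature.
    interpretable_importance = {}
    for name, score in importances.items():
        original = name.split("__", 1)[1]               # strip the transformer prefix
        for col in X.columns:
            if original.startswith(col):
                interpretable_importance[col] = interpretable_importance.get(col, 0.0) + score

    print(interpretable_importance)  # importances expressed over the two original features

The design point this illustrates is that the same transformers used to prepare data for the model carry enough bookkeeping to translate an explanation back into the feature space users actually understand.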
Date issued
2025-02
URI
https://hdl.handle.net/1721.1/158953
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology

Collections
  • Doctoral Theses
