
Evaluating Bias in Machine Learning-Enabled Radiology Image Classification

Author(s)
Atia, Dina
Download
Thesis PDF (1.682 MB)
Advisor
Ghassemi, Marzyeh
Terms of use
In Copyright - Educational Use Permitted Copyright retained by author(s) https://rightsstatements.org/page/InC-EDU/1.0/
Abstract
As machine learning grows more prevalent in the medical field, it is important that fairness be treated as a central criterion in the evaluation of algorithms and models. Building on previous work, we study a set of machine learning models used to detect spinal fractures, comparing their performance across age, sex, and geographic groups. This serves not only as an audit of this particular set of models but also contributes to the development of a meaningful standard for fairness in machine learning for healthcare. We analyze the ten highest-performing models from a 2022 competition hosted by the Radiological Society of North America, in which teams designed and trained machine learning models to accurately detect and locate cervical spine fractures, a severe injury with high mortality rates. We split the data into subgroups by sex, age, and continent, then compare the subgroups across seven performance metrics. We find the models to be fair overall, with similar performance across the given metrics. We also perform an intersectional analysis, comparing the same metrics across intersections of the above attributes, and again find fair overall performance. Taken as a whole, the results suggest the models are fair under a variety of comparative metrics. However, future work is needed to determine whether the models we studied would in fact be fair for a more representative population.
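To illustrate the kind of subgroup comparison the abstract describes, here is a minimal sketch in Python. The column names, the 0.5 decision threshold, the tiny hypothetical table, and the three metrics shown are assumptions for illustration only; they are not the thesis's actual seven metrics or the RSNA 2022 competition data.

```python
# A minimal sketch of a subgroup fairness comparison, assuming a hypothetical
# predictions table with per-exam labels, model scores, and demographic
# attributes. All names and values below are illustrative assumptions.
import pandas as pd
from sklearn.metrics import roc_auc_score, recall_score, precision_score

# Hypothetical data: one row per exam, with the model's fracture probability
# and two demographic attributes.
df = pd.DataFrame({
    "label":     [1, 0, 1, 0, 1, 0, 1, 0],                   # fracture present?
    "score":     [0.9, 0.2, 0.8, 0.3, 0.7, 0.4, 0.6, 0.1],   # model probability
    "sex":       ["F", "F", "F", "F", "M", "M", "M", "M"],
    "age_group": ["<65", "<65", "65+", "65+", "<65", "<65", "65+", "65+"],
})
df["pred"] = (df["score"] >= 0.5).astype(int)                 # fixed decision threshold

def subgroup_metrics(g: pd.DataFrame) -> dict:
    """A few example performance metrics computed within one subgroup."""
    return {
        "n": len(g),
        "auroc": roc_auc_score(g["label"], g["score"]),
        "sensitivity": recall_score(g["label"], g["pred"]),
        "precision": precision_score(g["label"], g["pred"]),
    }

# Per-attribute comparison (e.g., by sex) ...
for sex, g in df.groupby("sex"):
    print(sex, subgroup_metrics(g))

# ... and an intersectional comparison (sex x age group).
for keys, g in df.groupby(["sex", "age_group"]):
    print(keys, subgroup_metrics(g))
```

A full audit of this kind would repeat the same per-group and intersectional computation for each model and each metric, then compare the resulting values across subgroups.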
Date issued
2023-06
URI
https://hdl.handle.net/1721.1/151662
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology

Collections
  • Graduate Theses
