Fair, Robust, and Calibrated Deep Learning with Heavy-Tailed Subgroups
Author(s)
Hampton, Lelia Marie
Advisor
Pentland, Alexander P.
Abstract
To deploy safe machine learning systems in the real world, we must ensure they are fair, robust, and calibrated. However, heavy tails pose a challenge to this mandate, especially since real-world data is often imbalanced and marginalized subgroups tend to be underrepresented. To move toward safer systems, we present two studies, one on fair pre-processing and one on ensemble learning. We show that fair pre-processing comes with a fairness-robustness-calibration tradeoff, and we present a novel adaptive sampling algorithm to overcome it. Furthermore, we demonstrate that ensemble learning on its own increases the fairness, robustness, and calibration of machine learning models. Together, the adaptive sampling algorithm and ensemble learning give practitioners ways to overcome this tradeoff in practice.
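
To make the two ideas named in the abstract concrete, here is a minimal sketch, not the thesis's actual algorithm: it resamples training data with inverse-subgroup-frequency weights so underrepresented subgroups are drawn more often, then trains a small ensemble whose predicted probabilities are averaged. The synthetic data, the 1/frequency weighting rule, and the logistic-regression base learners are all illustrative assumptions.

```python
# Minimal sketch (not the thesis's adaptive sampling algorithm):
# subgroup-balanced bootstrap resampling feeding a probability-averaging ensemble.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy imbalanced data: subgroup 1 is heavily underrepresented (assumption).
n_major, n_minor = 950, 50
X = np.vstack([rng.normal(0, 1, (n_major, 5)), rng.normal(0.5, 1, (n_minor, 5))])
y = rng.integers(0, 2, n_major + n_minor)
group = np.array([0] * n_major + [1] * n_minor)

def subgroup_balanced_resample(X, y, group, rng):
    """Bootstrap sample where each example's draw probability is inversely
    proportional to its subgroup's frequency, upweighting rare subgroups."""
    _, counts = np.unique(group, return_counts=True)
    weights = 1.0 / counts[group]
    p = weights / weights.sum()
    idx = rng.choice(len(y), size=len(y), replace=True, p=p)
    return X[idx], y[idx]

# Ensemble: each member is trained on a differently resampled view of the data.
members = []
for _ in range(10):
    Xb, yb = subgroup_balanced_resample(X, y, group, rng)
    members.append(LogisticRegression(max_iter=1000).fit(Xb, yb))

# Average predicted probabilities across members.
proba = np.mean([m.predict_proba(X)[:, 1] for m in members], axis=0)
print("mean predicted probability on minority subgroup:", proba[group == 1].mean())
```

In practice one would evaluate the ensemble's per-subgroup error, robustness, and calibration (for example, expected calibration error) rather than the single summary printed here; the sketch only shows how subgroup-aware resampling and ensembling compose.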
Date issued
2023-06
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology