dc.contributor.advisor: Pentland, Alexander P.
dc.contributor.author: Hampton, Lelia Marie
dc.date.accessioned: 2023-07-31T19:57:49Z
dc.date.available: 2023-07-31T19:57:49Z
dc.date.issued: 2023-06
dc.date.submitted: 2023-07-13T14:21:23.444Z
dc.identifier.uri: https://hdl.handle.net/1721.1/151670
dc.description.abstract: To deploy safe machine learning systems in the real world, we must ensure they are fair, robust, and calibrated. However, heavy tails pose a challenge to this mandate, especially since real-world data is often imbalanced and marginalized subgroups tend to be underrepresented. To move toward safer systems, we present two studies, on fair pre-processing and ensemble learning, respectively. We show that fair pre-processing comes with a fairness-robustness-calibration tradeoff, and we present a novel adaptive sampling algorithm to overcome it. Furthermore, we demonstrate that ensemble learning on its own increases the fairness, robustness, and calibration of machine learning models. The adaptive sampling algorithm and ensemble learning present opportunities for practitioners to overcome this tradeoff in practice.
dc.publisher: Massachusetts Institute of Technology
dc.rights: In Copyright - Educational Use Permitted
dc.rights: Copyright retained by author(s)
dc.rights.uri: https://rightsstatements.org/page/InC-EDU/1.0/
dc.title: Fair, Robust, and Calibrated Deep Learning with Heavy-Tailed Subgroups
dc.type: Thesis
dc.description.degree: S.M.
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
mit.thesis.degree: Master
thesis.degree.name: Master of Science in Electrical Engineering and Computer Science
