Fair Selective Regression
Author(s)
Qu, Xiaoran (Steven)
Advisor
Wornell, Gregory W.
Abstract
Selective regression allows a model to abstain from prediction when uncertainty is high, creating a tradeoff between coverage rate and prediction error. In this thesis, we consider how selective regression interacts with data partitioned into subgroups by a sensitive attribute. Specifically, we define two notions of fairness with respect to these subgroups: prediction error that is monotonic in the coverage rate, and prediction error that is similar across subgroups. In each case, we develop and analyze a fairness constraint on the feature set that yields fair selective regression: a calibration condition for the former, and a local differential privacy condition for the latter.
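As a point of reference, selective regression is commonly formalized with a regressor f and a selection rule gamma that decides whether to predict; the notation below is a standard sketch, assumed here rather than taken from the thesis:

    \text{coverage}(\gamma) = \Pr[\gamma(X) = 1],
    \qquad
    R_{\mathrm{sel}}(f, \gamma) = \mathbb{E}\!\left[(f(X) - Y)^2 \,\middle|\, \gamma(X) = 1\right].

Under this notation, the first fairness notion asks that the subgroup-conditional error \mathbb{E}[(f(X) - Y)^2 \mid \gamma(X) = 1, D = d] not increase as coverage is reduced, and the second asks that this quantity be close across values of the sensitive attribute d.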
Based on our theoretical results, we design two novel inference algorithms for fair selective regression that enforce their respective feature-set constraints via regularization in a neural network: calibration is enforced with a contrastive loss on subgroup mean-squared error, and local differential privacy is enforced with a mutual information approximation. We find that our algorithms effectively enforce fairness without significantly compromising accuracy on a variety of synthetic and real-world datasets.
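To make the regularization idea concrete, the PyTorch sketch below attaches a simple fairness penalty (the squared gap between per-subgroup mean-squared errors) to a selective regression network; the architecture, names, and penalty are illustrative assumptions standing in for the thesis's contrastive and mutual-information terms, which the abstract does not specify.

import torch
import torch.nn as nn

class SelectiveRegressor(nn.Module):
    # Illustrative network: a shared body, a regression head, and a selection
    # head whose score can be thresholded at test time to trade coverage for error.
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.predict = nn.Linear(hidden, 1)
        self.select = nn.Linear(hidden, 1)

    def forward(self, x):
        h = self.body(x)
        return self.predict(h).squeeze(-1), torch.sigmoid(self.select(h)).squeeze(-1)

def subgroup_gap_penalty(pred, target, group):
    # Hypothetical regularizer: mean squared difference between per-subgroup MSEs.
    # This stands in for the thesis's actual contrastive / mutual-information terms.
    mses = torch.stack([((pred[group == g] - target[group == g]) ** 2).mean()
                        for g in torch.unique(group)])
    return ((mses.unsqueeze(0) - mses.unsqueeze(1)) ** 2).mean()

def training_loss(pred, target, group, lam=1.0):
    # Standard MSE plus the weighted fairness penalty.
    return ((pred - target) ** 2).mean() + lam * subgroup_gap_penalty(pred, target, group)

At inference, predictions would be kept only where the selection score exceeds a chosen threshold, which traces out the coverage/error tradeoff; how the selection head itself is trained is not described in the abstract and is omitted here.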
Date issued
2023-06
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology