
dc.contributor.author: Bakker, M
dc.contributor.author: Valdés, HR
dc.contributor.author: Patrick Tu, D
dc.contributor.author: Gummadi, KP
dc.contributor.author: Varshney, KR
dc.contributor.author: Weller, A
dc.contributor.author: Pentland, AS
dc.date.accessioned: 2021-11-02T13:07:41Z
dc.date.available: 2021-11-02T13:07:41Z
dc.date.issued: 2020
dc.identifier.uri: https://hdl.handle.net/1721.1/137071
dc.description.abstract: © 2020 for this paper by its authors. Increasing concern about discrimination and bias in data-driven decision-making systems has led to a growth in academic and popular interest in algorithmic fairness. Prior work on fairness in machine learning has focused primarily on the setting in which all the information (features) needed to make a confident decision about an individual is readily available. In practice, however, many applications allow further information to be acquired at a feature-specific cost. For example, when diagnosing a patient, the doctor starts with only a handful of symptoms but progressively improves the diagnosis by acquiring additional information before making a final decision. We show that we can achieve fairness by leveraging a natural affordance of this setting: the decision of when to stop acquiring more features and proceed to prediction. First, we show that by setting a single set of confidence thresholds for stopping, we can attain equal error rates across arbitrary groups. Second, we extend the framework to a set of group-specific confidence thresholds which ensure that a classifier achieves equal opportunity (equal false-positive or false-negative rates). The confidence thresholds naturally achieve fairness by redistributing the budget across individuals. This not only leads to statistical fairness across groups but also addresses the limitation that current statistical fairness methods fail to provide any guarantees to individuals. Finally, using two public datasets, we confirm the effectiveness of our methods empirically and investigate their limitations.
dc.language.iso: en
dc.relation.isversionof: http://ceur-ws.org/Vol-2560/paper24.pdf
dc.rights: Creative Commons Attribution 4.0 International license
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.source: CEUR Workshop Proceedings
dc.title: Fair enough: Improving fairness in budget-constrained decision making using confidence thresholds
dc.type: Article
dc.identifier.citation: Bakker, M, Valdés, HR, Patrick Tu, D, Gummadi, KP, Varshney, KR et al. 2020. "Fair enough: Improving fairness in budget-constrained decision making using confidence thresholds." CEUR Workshop Proceedings, 2560.
dc.contributor.department: MIT-IBM Watson AI Lab
dc.contributor.department: Massachusetts Institute of Technology. Media Laboratory
dc.relation.journal: CEUR Workshop Proceedings
dc.eprint.version: Final published version
dc.type.uri: http://purl.org/eprint/type/ConferencePaper
eprint.status: http://purl.org/eprint/status/NonPeerReviewed
dc.date.updated: 2021-07-01T15:27:17Z
dspace.orderedauthors: Bakker, M; Valdés, HR; Patrick Tu, D; Gummadi, KP; Varshney, KR; Weller, A; Pentland, AS
dspace.date.submission: 2021-07-01T15:27:18Z
mit.journal.volume: 2560
mit.license: PUBLISHER_CC
mit.metadata.status: Authority Work and Publication Information Needed
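
The abstract above rests on a simple mechanism: keep acquiring features while the per-individual budget allows, and stop as soon as the classifier's confidence crosses a threshold. Below is a minimal, self-contained Python sketch of that stopping rule; the function name, the toy likelihood-ratio update, and the specific threshold values are illustrative assumptions, not the authors' implementation.

# Illustrative sketch (not the paper's code) of the confidence-threshold
# stopping rule described in the abstract: buy features one at a time, each
# at a cost, and stop once confidence crosses a threshold or the budget runs out.

def acquire_and_predict(posterior, update, features, costs, budget,
                        upper=0.9, lower=0.1):
    """Sequentially acquire features until confidence crosses a threshold.

    posterior   -- current P(y=1 | acquired features), starting from the prior
    update      -- callable(posterior, feature_value) -> new posterior (toy model)
    features    -- feature values available for acquisition, in acquisition order
    costs       -- cost of acquiring each corresponding feature
    budget      -- total acquisition budget for this individual
    upper/lower -- confidence thresholds; the paper tunes (possibly group-specific)
                   values of these to equalise error rates across groups
    """
    spent = 0.0
    for value, cost in zip(features, costs):
        # Stop early once we are confident enough in either direction.
        if posterior >= upper or posterior <= lower:
            break
        if spent + cost > budget:          # cannot afford the next feature
            break
        spent += cost
        posterior = update(posterior, value)
    return int(posterior >= 0.5), posterior, spent


if __name__ == "__main__":
    # Toy likelihood-ratio update so the example runs end to end:
    # each positive feature doubles the odds, each negative one halves them.
    def update(p, x):
        lr = 2.0 if x else 0.5
        odds = lr * p / (1.0 - p)
        return odds / (1.0 + odds)

    label, conf, spent = acquire_and_predict(
        posterior=0.5,
        update=update,
        features=[1, 1, 0, 1],
        costs=[1.0, 1.0, 2.0, 2.0],
        budget=5.0,
        upper=0.9, lower=0.1,
    )
    print(label, round(conf, 3), spent)

The sketch only shows the per-individual stopping behaviour; the fairness results in the paper come from choosing the thresholds (a single shared pair, or group-specific pairs) so that error rates, or false-positive/false-negative rates, equalise across groups.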

