dc.contributor.author: Balagopalan, Aparna
dc.contributor.author: Wang, Kai
dc.contributor.author: Salaudeen, Olawale
dc.contributor.author: Biega, Asia
dc.contributor.author: Ghassemi, Marzyeh
dc.date.accessioned: 2025-09-16T19:40:41Z
dc.date.available: 2025-09-16T19:40:41Z
dc.date.issued: 2025-04-22
dc.identifier.isbn: 979-8-4007-1274-6
dc.identifier.uri: https://hdl.handle.net/1721.1/162663
dc.description: WWW '25, April 28-May 2, 2025, Sydney, NSW, Australia
dc.description.abstract: Machine learning-driven rankings, where individuals (or items) are ranked in response to a query, mediate search exposure or attention in a variety of safety-critical settings. Thus, it is important to ensure that such rankings are fair. Under the goal of equal opportunity, attention allocated to an individual on a ranking interface should be proportional to their relevance across search queries. In this work, we examine amortized fair ranking -- where relevance and attention are cumulated over a sequence of user queries to make fair ranking more feasible in practice. Unlike prior methods that operate on expected amortized attention for each individual, we define new divergence-based measures for attention distribution-based fairness in ranking (DistFaiR), characterizing unfairness as the divergence between the distribution of attention and relevance corresponding to an individual over time. This allows us to propose new definitions of unfairness, which are more reliable at test time. Second, we prove that group fairness is upper-bounded by individual fairness under this definition for a useful class of divergence measures, and experimentally show that maximizing individual fairness through an integer linear programming-based optimization is often beneficial to group fairness. Lastly, we find that prior research in amortized fair ranking ignores critical information about queries, potentially leading to a fairwashing risk in practice by making rankings appear more fair than they actually are.
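The abstract characterizes unfairness as a divergence between an individual's attention and relevance distributions over a query sequence. The Python sketch below illustrates that idea under stated assumptions only: the function name, the choice of total-variation divergence, and the toy per-query values are illustrative, not the paper's exact DistFaiR measure or its integer-linear-programming optimization.

# Minimal sketch of a distribution-based unfairness measure for one individual,
# assuming per-query attention and relevance values accumulated over time.
import numpy as np

def distributional_unfairness(attention, relevance):
    """Divergence between an individual's attention and relevance distributions over queries."""
    p = np.asarray(attention, dtype=float)
    q = np.asarray(relevance, dtype=float)
    # Normalize each sequence into a probability distribution over the query sequence.
    p = p / p.sum()
    q = q / q.sum()
    # Total-variation divergence: 0 means attention exactly tracks relevance.
    return 0.5 * np.abs(p - q).sum()

# Toy example: attention concentrated on the first query while relevance is uniform.
attention_over_queries = [0.9, 0.05, 0.05]
relevance_over_queries = [1.0, 1.0, 1.0]
print(distributional_unfairness(attention_over_queries, relevance_over_queries))  # ~0.57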
dc.publisher: ACM | Proceedings of the ACM Web Conference 2025
dc.relation.isversionof: https://doi.org/10.1145/3696410.3714660
dc.rights: Creative Commons Attribution
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.source: Association for Computing Machinery
dc.title: What's in a Query: Polarity-Aware Distribution-Based Fair Ranking
dc.type: Article
dc.identifier.citation: Aparna Balagopalan, Kai Wang, Olawale Salaudeen, Asia Biega, and Marzyeh Ghassemi. 2025. What's in a Query: Polarity-Aware Distribution-Based Fair Ranking. In Proceedings of the ACM on Web Conference 2025 (WWW '25). Association for Computing Machinery, New York, NY, USA, 3716–3730.
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.identifier.mitlicense: PUBLISHER_POLICY
dc.eprint.version: Final published version
dc.type.uri: http://purl.org/eprint/type/ConferencePaper
eprint.status: http://purl.org/eprint/status/NonPeerReviewed
dc.date.updated: 2025-08-01T07:57:31Z
dc.language.rfc3066: en
dc.rights.holder: The author(s)
dspace.date.submission: 2025-08-01T07:57:31Z
mit.license: PUBLISHER_CC
mit.metadata.status: Authority Work and Publication Information Needed

