Comparing Fair Ranking Metrics
Amifa Raj, Connor Wood, Ananda Montoly, and Michael D. Ekstrand. 2020. Comparing Fair Ranking Metrics. Presented at the 3rd FAccTRec Workshop on Responsible Recommendation at RecSys 2020 (peer-reviewed but not archived). arXiv:2009.01311 [cs.IR].
This paper was led by my Ph.D. student Amifa Raj.

Abstract

Ranking is a fundamental aspect of recommender systems. However, ranked outputs can be susceptible to various biases, some of which may disadvantage members of protected groups. Several metrics have been proposed to quantify the (un)fairness of rankings, but to date there has been no direct comparison of these metrics. This makes it difficult to decide which fairness metrics are applicable to a specific scenario and to assess the extent to which the metrics agree or disagree. In this paper, we describe several fair ranking metrics in a common notation, enabling direct comparison of their approaches and assumptions, and we empirically compare them on the same experimental setup and data set. Our work provides a direct comparative analysis identifying similarities and differences among the fair ranking metrics selected for our study.