Ensuring that information access systems, such as recommender systems and search
engines, are “fair” is a complex, multi-faceted problem. Significant progress
has been made in recent years on identifying and measuring important forms of
unfair recommendation and retrieval, but there are still many ways recommender
systems can replicate, exacerbate, or mitigate potentially discriminatory harms
that need careful study. These harms can affect different stakeholders — such
as the producers and consumers of information, among others — in many different
ways, including denying them access to the system's benefits, misrepresenting
them, or reinforcing unhelpful stereotypes.
In this talk, I will provide an overview of the landscape of fairness and
anti-discrimination in information access systems, discussing both the state of
the art in measuring relatively well-understood harms and new directions and
open problems in defining and measuring fairness.
Beutel, Alex, Ed H. Chi, Cristos Goodrow, Jilin Chen, Tulsee Doshi, Hai Qian, Li Wei, et al. 2019. “Fairness in Recommendation Ranking through Pairwise Comparisons.” In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. doi:10.1145/3292500.3330745.
Biega, Asia J., Krishna P. Gummadi, and Gerhard Weikum. 2018. “Equity of Attention: Amortizing Individual Fairness in Rankings.” In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, 405–14. ACM. doi:10.1145/3209978.3210063.
Binns, Reuben. 2020. “On the Apparent Conflict between Individual and Group Fairness.” In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 514–24. FAT* ’20. doi:10.1145/3351095.3372864.
Chouldechova, Alexandra. 2017. “Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments.” Big Data 5(2): 153–63. doi:10.1089/big.2016.0047.
Friedler, Sorelle A., Carlos Scheidegger, and Suresh Venkatasubramanian. 2021. “The (Im)possibility of Fairness.” Communications of the ACM 64(4): 136–43. doi:10.1145/3433949.
Friedman, Batya, and Helen Nissenbaum. 1996. “Bias in Computer Systems.” ACM Transactions on Information Systems 14(3): 330–47. doi:10.1145/230538.230561.
Kamishima, Toshihiro, Shotaro Akaho, Hideki Asoh, and Jun Sakuma. 2018. “Recommendation Independence.” In Proceedings of the 1st Conference on Fairness, Accountability and Transparency, 81:187–201. Proceedings of Machine Learning Research. New York, NY, USA: PMLR. http://proceedings.mlr.press/v81/kamishima18a.html.
Mehrotra, Rishabh, Ashton Anderson, Fernando Diaz, Amit Sharma, Hanna Wallach, and Emine Yilmaz. 2017. “Auditing Search Engines for Differential Satisfaction Across Demographics.” In Proceedings of the 26th International Conference on World Wide Web Companion, 626–33. doi:10.1145/3041021.3054197.
Mitchell, Shira, Eric Potash, Solon Barocas, Alexander D’Amour, and Kristian Lum. 2020. “Algorithmic Fairness: Choices, Assumptions, and Definitions.” Annual Review of Statistics and Its Application 8 (November). doi:10.1146/annurev-statistics-042720-125902.
Noble, Safiya Umoja. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press.
Selbst, Andrew D., Danah Boyd, Sorelle A. Friedler, Suresh Venkatasubramanian, and Janet Vertesi. 2019. “Fairness and Abstraction in Sociotechnical Systems.” In Proceedings of the Conference on Fairness, Accountability, and Transparency, 59–68. FAT* ’19. doi:10.1145/3287560.3287598.
Wang, Lequn, and Thorsten Joachims. 2021. “User Fairness, Item Fairness, and Diversity for Rankings in Two-Sided Markets.” In Proceedings of the 2021 ACM SIGIR International Conference on Theory of Information Retrieval, 23–41. ICTIR ’21. doi:10.1145/3471158.3472260.