This is the resource page for my keynote talk at the AI & Society Conference on June 28 in Zhuhai, China, entitled “Searching for Fairness: Grounding and Measuring Fairness and Social Impacts in Information Access”.
Abstract
Information access systems, such as search engines, recommender systems, and conversational agents, are used daily by billions of Internet users and have a profound impact on users’ information experiences, access to knowledge, and understanding of the world and people around them. These systems differ in crucial ways from the kinds of systems most frequently studied in the algorithmic fairness literature, requiring new techniques to properly understand and measure their social impacts. In this talk, I will discuss what makes these systems different and interesting; ground the quest for fairness and the mitigation of social harms in the varying goals of recommender systems; and describe several specific approaches and a general philosophy for measuring and mitigating harms in information access and other AI systems.
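As a rough illustration of the exposure-based family of ranking metrics referenced below (e.g., Biega et al. 2018), here is a minimal Python sketch that computes position-weighted exposure shares per provider group in a single ranking. The logarithmic attention model and the group labels are illustrative assumptions, not the specific approach presented in the talk.

```python
# Minimal, illustrative sketch of an exposure-based group fairness check
# for a single ranking, in the spirit of Biega et al. (2018).
# The log-discount position model and group labels are assumptions for
# illustration only, not anything prescribed by the talk.
import math
from collections import defaultdict


def position_weight(rank: int) -> float:
    """Attention assumed to decay logarithmically with rank (1-indexed)."""
    return 1.0 / math.log2(rank + 1)


def group_exposure(ranking: list[tuple[str, str]]) -> dict[str, float]:
    """Return each provider group's share of total position-weighted exposure.

    `ranking` is a list of (item_id, group_label) pairs in ranked order.
    """
    exposure = defaultdict(float)
    for rank, (_item, group) in enumerate(ranking, start=1):
        exposure[group] += position_weight(rank)
    total = sum(exposure.values())
    return {group: value / total for group, value in exposure.items()}


if __name__ == "__main__":
    # Toy ranking with items from two hypothetical provider groups, A and B.
    ranking = [("i1", "A"), ("i2", "A"), ("i3", "B"), ("i4", "A"), ("i5", "B")]
    print(group_exposure(ranking))
    # Group A captures most of the attention because it holds the top positions;
    # comparing such shares to a target (e.g., relevance share) is one way to
    # surface provider-side disparities in a ranking.
```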
References
Anthis, Jacy Reese, Kristian Lum, Michael Ekstrand, Avi Feller, Alexander D’Amour, and Chenhao Tan. 2024. “The Impossibility of Fair LLMs.” In HEAL: Human-Centered Evaluation and Auditing of Language Models (non-archival workshop at CHI 2024), May 12, 2024. arXiv:2406.03198v1 [cs.CL].
Belkin, Nicholas J., and Stephen E. Robertson. 1976. “Some Ethical and Political Implications of Theoretical Research in Information Science.” In Proceedings of the ASIS Annual Meeting. https://www.researchgate.net/publication/255563562.
Beutel, Alex, Ed H. Chi, Cristos Goodrow, Jilin Chen, Tulsee Doshi, Hai Qian, Li Wei, et al. 2019. “Fairness in Recommendation Ranking through Pairwise Comparisons.” In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. doi:10.1145/3292500.3330745.
Biega, Asia J., Krishna P. Gummadi, and Gerhard Weikum. 2018. “Equity of Attention: Amortizing Individual Fairness in Rankings.” In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, 405–14. ACM. doi:10.1145/3209978.3210063.
Binns, Reuben. 2020. “On the Apparent Conflict between Individual and Group Fairness.” In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 514–24. FAT* ’20. doi:10.1145/3351095.3372864.
Burke, Robin, and Morgan Sylvester. 2024. “Post-Userist Recommender Systems: A Manifesto.” arXiv:2410.11870.
Chouldechova, Alexandra. 2017. “Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments.” Big Data 5(2): 153–63. doi:10.1089/big.2016.0047.
Dwork, Cynthia, and Christina Ilvento. 2019. “Fairness under Composition.” In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). doi:10.4230/LIPICS.ITCS.2019.33.
Fish, Benjamin, Ashkan Bashardoust, Danah Boyd, Sorelle Friedler, Carlos Scheidegger, and Suresh Venkatasubramanian. 2019. “Gaps in Information Access in Social Networks?” In WWW ’19: The World Wide Web Conference, 480–90. doi:10.1145/3308558.3313680.
Friedler, Sorelle A., Carlos Scheidegger, and Suresh Venkatasubramanian. 2021. “The (Im)possibility of Fairness.” Communications of the ACM 64(4): 136–43. doi:10.1145/3433949.
Friedman, Batya, and Helen Nissenbaum. 1996. “Bias in Computer Systems.” ACM Transactions on Information Systems 14(3): 330–47. doi:10.1145/230538.230561.
Hill, William, Larry Stead, Mark Rosenstein, and George Furnas. 1995. “Recommending and Evaluating Choices in a Virtual Community of Use.” In CHI ’95: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 194–201. doi:10.1145/223904.223929.
Kamishima, Toshihiro, Shotaro Akaho, Hideki Asoh, and Jun Sakuma. 2018. “Recommendation Independence.” In Proceedings of the 1st Conference on Fairness, Accountability and Transparency, 81:187–201. Proceedings of Machine Learning Research. New York, NY, USA: PMLR. http://proceedings.mlr.press/v81/kamishima18a.html.
Mehrotra, Rishabh, Ashton Anderson, Fernando Diaz, Amit Sharma, Hanna Wallach, and Emine Yilmaz. 2017. “Auditing Search Engines for Differential Satisfaction Across Demographics.” In Proceedings of the 26th International Conference on World Wide Web Companion, 626–33. doi:10.1145/3041021.3054197.
Mitchell, Shira, Eric Potash, Solon Barocas, Alexander D’Amour, and Kristian Lum. 2020. “Algorithmic Fairness: Choices, Assumptions, and Definitions.” Annual Review of Statistics and Its Application 8 (November). doi:10.1146/annurev-statistics-042720-125902.
Selbst, Andrew D., Danah Boyd, Sorelle A. Friedler, Suresh Venkatasubramanian, and Janet Vertesi. 2019. “Fairness and Abstraction in Sociotechnical Systems.” In Proceedings of the Conference on Fairness, Accountability, and Transparency, 59–68. FAT* ’19. doi:10.1145/3287560.3287598.
Smith, Jessie J., Lex Beattie, and Henriette Cramer. 2023. “Scoping Fairness Objectives and Identifying Fairness Metrics for Recommender Systems: The Practitioners’ Perspective.” In Proceedings of the ACM Web Conference 2023, 3648–59. doi:10.1145/3543507.3583204.
Smith, Jessie J., and Lex Beattie. 2022. “RecSys Fairness Metrics: Many to Use but Which One to Choose?” arXiv:2209.04011 [cs.HC]. http://arxiv.org/abs/2209.04011.
Wang, Lequn, and Thorsten Joachims. 2021. “User Fairness, Item Fairness, and Diversity for Rankings in Two-Sided Markets.” In Proceedings of the 2021 ACM SIGIR International Conference on Theory of Information Retrieval, 23–41. ICTIR ’21. doi:10.1145/3471158.3472260.