ECIR 2025 Tutorial on Fair IR

Michael D. Ekstrand. 2025. Fairness in Information Access: Conceptual Foundations and New Directions. Tutorial presented at the 47th European Conference on Information Retrieval (ECIR 2025), April 6, 2025. DOI 10.1007/978-3-031-88720-8_41.

Abstract

As information access systems have a profound impact through their mediation of users’ experiences of information spaces, it is vital to ensure that their benefits, costs, and other impacts are fair: that people have equitable access to information, relevant results, and opportunities, and are not systematically excluded, underserved, or harmed. Fairness is also a broad and complex topic, with many divergent and sometimes competing definitions. This tutorial will provide an introduction to fairness in machine learning and information access, a broad survey of the landscape of fairness and fairness-related harms, and a review of key results and both established and emerging techniques for measuring and improving the fairness of information retrieval technologies.
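As a concrete illustration of what “measuring fairness” can mean for a ranked result list, the sketch below computes the share of position-discounted exposure each provider group receives in a single ranking, in the spirit of fairness-of-exposure work such as Singh and Joachims (2018) and Diaz et al. (2020). This is illustrative material only, not code from the tutorial; the function name, the logarithmic position discount, and the toy data are assumptions made for the example.

```python
# Illustrative sketch: position-discounted exposure share per provider group
# for a single ranking. The log2 discount and all names here are assumptions
# for this example, not the tutorial's own implementation.
import math
from collections import defaultdict

def group_exposure(ranking, item_groups):
    """Return each group's share of position-discounted exposure.

    ranking     -- list of item ids, ordered best-first
    item_groups -- dict mapping item id -> provider group label
    """
    exposure = defaultdict(float)
    total = 0.0
    for rank, item in enumerate(ranking, start=1):
        weight = 1.0 / math.log2(rank + 1)  # higher positions receive more exposure
        exposure[item_groups[item]] += weight
        total += weight
    return {group: amount / total for group, amount in exposure.items()}

# Toy usage: a 50/50 catalog whose top positions all come from group G1.
ranking = ["a", "b", "c", "d"]
groups = {"a": "G1", "b": "G1", "c": "G2", "d": "G2"}
print(group_exposure(ranking, groups))  # G1 receives roughly 64% of the exposure
```

Comparing such exposure shares against a target (for example, each group's share of the relevant items) is one common way this kind of measurement is turned into a fairness metric; the tutorial and the references below cover this design space in much more depth.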

Key Resources

This tutorial reuses content from the 2019 tutorials that I authored with Robin Burke and Fernando Diaz; much of the material is covered in our monograph:

Michael D. Ekstrand, Anubrata Das, Robin Burke, and Fernando Diaz. 2022. Fairness in Information Access Systems. Foundations and Trends® in Information Retrieval 16(1–2) (July 2022), 1–177. DOI 10.1561/1500000079. arXiv:2105.05779 [cs.IR]. NSF PAR 10347630.

Slides

Bibliography

Abbasi, Mohsen, Sorelle A Friedler, Carlos Scheidegger, and Suresh Venkatasubramanian. 2019. “Fairness in Representation: Quantifying Stereotyping as a Representational Harm.” In Proceedings of the 2019 SIAM International Conference on Data Mining (SDM). Society for Industrial and Applied Mathematics. https://doi.org/10.1137/1.9781611975673.90.
Agudelo-España, Diego, Sebastian Gomez-Gonzalez, Stefan Bauer, Bernhard Schölkopf, and Jan Peters. 2020. “Bayesian Online Prediction of Change Points.” Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI), August 27, 320–29. https://proceedings.mlr.press/v124/agudelo-espana20a.html.
Akutsu, Hiromi, Gordon E. Legge, Julie A. Ross, and Kurt J. Schuebel. 1991. “Psychophysics of Reading: Effects of Age-related Changes in Vision.” Journal of Gerontology 46 (6): P325–31. https://doi.org/10.1093/geronj/46.6.P325.
Amendola, Maddalena, Carlos Castillo, Andrea Passarella, and Raffaele Perego. 2024. “Understanding and Addressing Gender Bias in Expert Finding Task.” Pre-published July 7. http://arxiv.org/abs/2407.05335.
Anderson, Chris. 2006. The Long Tail: Why the Future of Business Is Selling Less of More. Hyperion. http://www.librarything.com/work/3394404/249998764.
Balagopalan, Aparna, Abigail Z. Jacobs, and Asia J. Biega. 2023. “The Role of Relevance in Fair Ranking.” Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval (New York, NY, USA), SIGIR ’23, July 18, 2650–60. https://doi.org/10.1145/3539618.3591933.
Barocas, Solon, Anhong Guo, Ece Kamar, et al. 2021. “Designing Disaggregated Evaluations of AI Systems: Choices, Considerations, and Tradeoffs.” Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (New York, NY, USA), July 21, 368–78. https://doi.org/10.1145/3461702.3462610.
Beattie, Lex, Dan Taber, and Henriette Cramer. 2022. “Challenges in Translating Research to Practice for Evaluating Fairness and Bias in Recommendation Systems.” Proceedings of the 16th ACM Conference on Recommender Systems (New York, NY, USA), September 18, 528–30. https://doi.org/10.1145/3523227.3547403.
Beutel, Alex, Jilin Chen, Zhe Zhao, and Ed H Chi. 2017. “Data Decisions and Theoretical Implications When Adversarially Learning Fair Representations.” Pre-published July 1. http://arxiv.org/abs/1707.00075.
Beutel, Alex, Ed H Chi, Cristos Goodrow, et al. 2019. “Fairness in Recommendation Ranking Through Pairwise Comparisons.” Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD ’19. https://doi.org/10.1145/3292500.3330745.
Biega, Asia J, Krishna P Gummadi, and Gerhard Weikum. 2018. “Equity of Attention: Amortizing Individual Fairness in Rankings.” Proceedings of the 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, SIGIR ’18, June 27, 405–14. https://doi.org/10.1145/3209978.3210063.
Billey, Amber, Matthew Haugen, John Hostage, Nancy Sack, and Adam L Schiff. 2016. Report of the PCC Ad Hoc Task Group on Gender in Name Authority Records. Program for Cooperative Cataloging. https://www.loc.gov/aba/pcc/documents/Gender_375%20field_RecommendationReport.pdf.
Bolukbasi, Tolga, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai. 2016. “Man Is to Computer Programmer as Woman Is to Homemaker? Debiasing Word Embeddings.” In Advances in Neural Information Processing Systems 29 (NIPS 2016), edited by D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett. Curran Associates, Inc. http://papers.nips.cc/paper/6227-man-is-to-computer-programmer-as-woman-is-to-homemaker-debiasing-word-embeddings.
Burke, Robin, Nasim Sonboli, and Aldo Ordonez-Gauger. 2018. “Balanced Neighborhoods for Multi-Sided Fairness in Recommendation.” In Proceedings of the 1st Conference on Fairness, Accountability and Transparency, edited by Sorelle A Friedler and Christo Wilson, vol. 81. Proceedings of Machine Learning Research. PMLR. http://proceedings.mlr.press/v81/burke18a.html.
Chouldechova, Alexandra. 2017. “Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments.” Big Data 5 (2): 153–63. https://doi.org/10.1089/big.2016.0047.
Corbett-Davies, Sam, Emma Pierson, Avi Feller, Sharad Goel, and Aziz Huq. 2017. “Algorithmic Decision Making and the Cost of Fairness.” KDD ’17, August 13, 797–806. https://doi.org/10.1145/3097983.3098095.
Diaz, Fernando, Bhaskar Mitra, Michael D Ekstrand, Asia J Biega, and Ben Carterette. 2020. “Evaluating Stochastic Rankings with Expected Exposure.” Proceedings of the 29th ACM International Conference on Information and Knowledge Management, CIKM ’20, October 21. https://doi.org/10.1145/3340531.3411962.
Dwork, Cynthia, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. 2012. “Fairness Through Awareness.” ITCS ’12 (New York, NY, USA), January 8, 214–26. https://doi.org/10.1145/2090236.2090255.
Ekstrand, Michael D., Lex Beattie, Maria Soledad Pera, and Henriette Cramer. 2024. “Not Just Algorithms: Strategically Addressing Consumer Impacts in Information Retrieval.” Proceedings of the 46th European Conference on Information Retrieval, Lecture Notes in Computer Science, vol. 14611 (March). https://doi.org/10.1007/978-3-031-56066-8_25.
Ekstrand, Michael D., Ben Carterette, and Fernando Diaz. 2024. “Distributionally-Informed Recommender System Evaluation.” ACM Transactions on Recommender Systems 2 (1): 6:1–27. https://doi.org/10.1145/3613455.
Ekstrand, Michael D, Anubrata Das, Robin Burke, and Fernando Diaz. 2022a. “Fairness in Information Access Systems.” Foundations and Trends® in Information Retrieval 16 (1–2): 1–177. https://doi.org/10.1561/1500000079.
Ekstrand, Michael D, Anubrata Das, Robin Burke, and Fernando Diaz. 2022b. “Fairness in Recommender Systems.” In Recommender Systems Handbook, edited by Francesco Ricci, Lior Rokach, and Bracha Shapira. Springer US. https://doi.org/10.1007/978-1-0716-2197-4_18.
Ekstrand, Michael D, and Daniel Kluver. 2021. “Exploring Author Gender in Book Rating and Recommendation.” User Modeling and User-Adapted Interaction 31 (3): 377–420. https://doi.org/10.1007/s11257-020-09284-2.
Ekstrand, Michael D, and Maria Soledad Pera. 2022. “Matching Consumer Fairness Objectives & Strategies for RecSys.” Pre-published September 6. http://arxiv.org/abs/2209.02662.
Ekstrand, Michael D, Mucun Tian, Ion Madrazo Azpiazu, et al. 2018. “All the Cool Kids, How Do They Fit in?: Popularity and Demographic Biases in Recommender Evaluation and Effectiveness.” In Proceedings of the 1st Conference on Fairness, Accountability and Transparency, edited by Sorelle A Friedler and Christo Wilson, vol. 81. Proceedings of Machine Learning Research. PMLR. https://proceedings.mlr.press/v81/ekstrand18b.html.
Epps-Darling, Avriel, Romain Takeo Bouyer, and Henriette Cramer. 2020. “Artist Gender Representation in Music Streaming.” Proceedings of the 21st International Society for Music Information Retrieval Conference, October 12, 248–54. https://program.ismir2020.net/poster_2-11.html.
Feldman, Michael, Sorelle A Friedler, John Moeller, Carlos Scheidegger, and Suresh Venkatasubramanian. 2015. “Certifying and Removing Disparate Impact.” KDD ’15, August 10, 259–68. https://doi.org/10.1145/2783258.2783311.
Ferraro, Andres, Michael D. Ekstrand, and Christine Bauer. 2024. “It’s Not You, It’s Me: The Impact of Choice Models and Ranking Strategies on Gender Imbalance in Music Recommendation.” Proceedings of the 18th ACM Conference on Recommender Systems, RecSys ’24, August 22. https://doi.org/10.1145/3640457.3688163.
Ferraro, Andres, Xavier Serra, and Christine Bauer. 2021. “Break the Loop: Gender Imbalance in Music Recommenders.” Proceedings of the 2021 Conference on Human Information Interaction and Retrieval, CHIIR ’21, March 14, 249–54. https://doi.org/10.1145/3406522.3446033.
Friedler, Sorelle A, Carlos Scheidegger, and Suresh Venkatasubramanian. 2021. “The (Im)possibility of Fairness.” Communications of the ACM 64 (4): 136–43. https://doi.org/10.1145/3433949.
García-Soriano, David, and Francesco Bonchi. 2021. “Maxmin-Fair Ranking: Individual Fairness Under Group-Fairness Constraints.” KDD ’21 (New York, NY, USA), August 14, 436–46. https://doi.org/10.1145/3447548.3467349.
Green, Ben, and Yiling Chen. 2019. “Disparate Interactions: An Algorithm-in-the-Loop Analysis of Fairness in Risk Assessments.” FAT* ’19 (New York, NY, USA), January 29, 90–99. https://doi.org/10.1145/3287560.3287563.
Green, Ben, and Salomé Viljoen. 2020. “Algorithmic Realism: Expanding the Boundaries of Algorithmic Thought.” FAT* ’20 (New York, NY, USA), January 27, 19–31. https://doi.org/10.1145/3351095.3372840.
Hardt, Moritz, Eric Price, and Nati Srebro. 2016. “Equality of Opportunity in Supervised Learning.” In Advances in Neural Information Processing Systems 29 (NIPS 2016). Curran Associates, Inc., 3315–23. http://papers.nips.cc/paper/6373-equality-of-opportunity-in-supervised-learning.
Hoffmann, Anna Lauren. 2018. “Data Violence and How Bad Engineering Choices Can Damage Society.” April 30. https://medium.com/s/story/data-violence-and-how-bad-engineering-choices-can-damage-society-39e44150e1d4.
Hutchinson, Ben, and Margaret Mitchell. 2019. “50 Years of Test (Un)fairness: Lessons for Machine Learning.” FAT* ’19 (New York, NY, USA), January 29, 49–58. https://doi.org/10.1145/3287560.3287600.
Kamishima, T, S Akaho, H Asoh, and J Sakuma. 2012. “Considerations on Fairness-Aware Data Mining.” 2012 IEEE 12th International Conference on Data Mining Workshops (ICDMW), December, 378–85. https://doi.org/10.1109/ICDMW.2012.101.
Kamishima, T, S Akaho, H Asoh, and I Sato. 2016. “Model-Based Approaches for Independence-Enhanced Recommendation.” 2016 IEEE 16th International Conference on Data Mining Workshops (ICDMW), December, 860–67. https://doi.org/10.1109/ICDMW.2016.0127.
Kleinberg, Jon, Sendhil Mullainathan, and Manish Raghavan. 2017. “Inherent Trade-Offs in the Fair Determination of Risk Scores.” In Leibniz International Proceedings in Informatics (LIPIcs), edited by Christos H Papadimitriou, vol. 67. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik GmbH, Wadern/Saarbruecken, Germany. https://doi.org/10.4230/LIPICS.ITCS.2017.43.
Lazovich, Tomo, Luca Belli, Aaron Gonzales, et al. 2022. “Measuring Disparate Outcomes of Content Recommendation Algorithms with Distributional Inequality Metrics.” Patterns (N Y) 3 (8): 100568. https://doi.org/10.1016/j.patter.2022.100568.
Lipton, Zachary, Julian McAuley, and Alexandra Chouldechova. 2018. “Does Mitigating ML’s Impact Disparity Require Treatment Disparity?” In Advances in Neural Information Processing Systems 31, edited by S Bengio, H Wallach, H Larochelle, K Grauman, N Cesa-Bianchi, and R Garnett. Curran Associates, Inc. http://papers.nips.cc/paper/8035-does-mitigating-mls-impact-disparity-require-treatment-disparity.pdf.
Liu, Weiwen, Jun Guo, Nasim Sonboli, Robin Burke, and Shengyu Zhang. 2019. “Personalized Fairness-aware Re-ranking for Microlending.” Proceedings of the 13th ACM Conference on Recommender Systems, RecSys ’19. https://doi.org/10.1145/3298689.3347016.
Mehrotra, Rishabh, Ashton Anderson, Fernando Diaz, Amit Sharma, Hanna Wallach, and Emine Yilmaz. 2017. “Auditing Search Engines for Differential Satisfaction Across Demographics.” Proceedings of the 26th International Conference on World Wide Web Companion, 626–33. https://doi.org/10.1145/3041021.3054197.
Mitchell, Shira, Eric Potash, Solon Barocas, Alexander D’Amour, and Kristian Lum. 2020. “Algorithmic Fairness: Choices, Assumptions, and Definitions.” Annual Review of Statistics and Its Application 8 (November). https://doi.org/10.1146/annurev-statistics-042720-125902.
Olteanu, Alexandra, Carlos Castillo, Fernando Diaz, and Emre Kıcıman. 2019. “Social Data: Biases, Methodological Pitfalls, and Ethical Boundaries.” Frontiers in Big Data 2 (July): 13. https://doi.org/10.3389/fdata.2019.00013.
Patro, Gourab K, Lorenzo Porcaro, Laura Mitchell, Qiuyue Zhang, Meike Zehlike, and Nikhil Garg. 2022. “Fair Ranking: A Critical Review, Challenges, and Future Directions.” Pre-published January 29. http://arxiv.org/abs/2201.12662.
Pinney, Christine, Amifa Raj, Alex Hanna, and Michael D Ekstrand. 2023. “Much Ado About Gender: Current Practices and Future Recommendations for Appropriate Gender-Aware Information Access.” Proceedings of the 2023 Conference on Human Information Interaction and Retrieval, March 20, 269–79. https://doi.org/10.1145/3576840.3578316.
Raj, Amifa, and Michael Ekstrand. 2023. “Unified Browsing Models for Linear and Grid Layouts.” Pre-published October 19. https://doi.org/10.48550/arXiv.2310.12524.
Raj, Amifa, and Michael D Ekstrand. 2022. “Fire Dragon and Unicorn Princess: Gender Stereotypes and Children’s Products in Search Engine Responses.” Proceedings of the 2022 SIGIR Workshop on eCommerce, SIGIR-eCom ’22, June 28. http://arxiv.org/abs/2206.13747.
Raj, Amifa, and Michael D. Ekstrand. 2024. “Towards Optimizing Ranking in Grid-Layout for Provider-Side Fairness.” Proceedings of the 46th European Conference on Information Retrieval, LNCS, vol. 14612 (March): 90–105. https://doi.org/10.1007/978-3-031-56069-9_7.
Riedl, John, and Joseph Konstan. 2002. Word of Mouse. Warner Books.
Sapiezynski, Piotr, Wesley Zeng, Ronald E Robertson, Alan Mislove, and Christo Wilson. 2019. “Quantifying the Impact of User Attention on Fair Group Representation in Ranked Lists.” Companion Proceedings of The 2019 World Wide Web Conference, WWW ’19 Companion, May 13, 553–62. https://doi.org/10.1145/3308560.3317595.
Selbst, Andrew D, Danah Boyd, Sorelle Friedler, Suresh Venkatasubramanian, and Janet Vertesi. 2018. “Fairness and Abstraction in Sociotechnical Systems.” August 23. https://papers.ssrn.com/abstract=3265913.
Seyedsalehi, Shirin, Amin Bigdeli, Negar Arabzadeh, et al. 2025. “Understanding and Mitigating Gender Bias in Information Retrieval Systems.” Foundations and Trends® in Information Retrieval 19 (3): 191–364. https://doi.org/10.1561/1500000103.
Singh, Ashudeep, and Thorsten Joachims. 2018. “Fairness of Exposure in Rankings.” Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (New York, NY, USA), 2219–28. https://doi.org/10.1145/3219819.3220088.
Smith, Jessie J., Lex Beattie, and Henriette Cramer. 2023. “Scoping Fairness Objectives and Identifying Fairness Metrics for Recommender Systems: The Practitioners’ Perspective.” Proceedings of the ACM Web Conference 2023 (New York, NY, USA), April 30, 3648–59. https://doi.org/10.1145/3543507.3583204.
Smith, Jessie J, and Lex Beattie. 2022. “RecSys Fairness Metrics: Many to Use but Which One to Choose?” Pre-published September 8. http://arxiv.org/abs/2209.04011.
Ungruh, Robin, Murtadha Al Nahadi, and Maria Soledad Pera. 2025. “Mirror, Mirror: Exploring Stereotype Presence Among Top-N Recommendations That May Reach Children.” ACM Transactions on Recommender Systems, ahead of print, March 6. https://doi.org/10.1145/3721987.
Vardasbi, Ali, Maarten de Rijke, Fernando Diaz, and Mostafa Dehghani. 2024. “The Impact of Group Membership Bias on the Quality and Fairness of Exposure in Ranking.” Pre-published April 29. http://arxiv.org/abs/2308.02887.
Wang, Lequn, and Thorsten Joachims. 2021. “User Fairness, Item Fairness, and Diversity for Rankings in Two-Sided Markets.” Proceedings of the 2021 ACM SIGIR International Conference on Theory of Information Retrieval (New York, NY, USA), July 11, 23–41. https://doi.org/10.1145/3471158.3472260.
Wu, Haolun, Bhaskar Mitra, Chen Ma, Fernando Diaz, and Xue Liu. 2022. “Joint Multisided Exposure Fairness for Recommendation.” Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, July 6, 703–14. https://doi.org/10.1145/3477495.3532007.
Zehlike, Meike, Francesco Bonchi, Carlos Castillo, Sara Hajian, Mohamed Megahed, and Ricardo Baeza-Yates. 2017. “FA*IR: A Fair Top-k Ranking Algorithm.” CIKM ’17, November 6, 1569–78. https://doi.org/10.1145/3132847.3132938.
Zehlike, Meike, Tom Sühr, Ricardo Baeza-Yates, Francesco Bonchi, Carlos Castillo, and Sara Hajian. 2022. “Fair Top-k Ranking with Multiple Protected Groups.” Information Processing & Management 59 (1): 102707. https://doi.org/10.1016/j.ipm.2021.102707.
Zehlike, Meike, Ke Yang, and Julia Stoyanovich. 2022a. “Fairness in Ranking, Part I: Score-Based Ranking.” ACM Computing Surveys (New York, NY, USA) 55 (6): 118:1–36. https://doi.org/10.1145/3533379.
Zehlike, Meike, Ke Yang, and Julia Stoyanovich. 2022b. “Fairness in Ranking, Part II: Learning-to-Rank and Recommender Systems.” ACM Computing Surveys (New York, NY, USA), ahead of print, April 23. https://doi.org/10.1145/3533380.