EvalRS Keynote

The cover slide of the talk, showing a sea monster from a historical map along with the title information.

I will be giving a keynote talk, “Do You Want To Hunt A Kraken? Mapping and Expanding Recommendation Fairness”, at the EvalRS workshop at CIKM 2022.

The video is now available.


Fair recommendation (and related problems, such as fair information retrieval) is a complex, multi-faceted problem. Significant progress has been made in recent years on identifying and measuring important forms of unfair recommendation, but there are still many ways recommender systems can replicate, exacerbate, or mitigate potentially discriminatory harms that need careful study.

In this talk, I will provide an overview of the landscape of fairness and anti-discrimination in information access systems, discussing both the state of the art in measuring relatively well-understood harms and new directions and open problems in defining and measuring fairness problems. This will set the workshop’s metrics and objectives in a broad context and hopefully catalyze discussion about what the next iteration of EvalRS, and research following up on the workshop’s outcomes, might look like.


The maps are from the Carta marina et descriptio septentrionalium terrarum by Olaus Magnus (1539), redrawn by Antoine Lafréry in 1572.

My Research

I discuss the following papers in this talk:


Michael D. Ekstrand, Anubrata Das, Robin Burke, and Fernando Diaz. 2022. Fairness in Information Access Systems. Foundations and Trends® in Information Retrieval 16(1–2) (July 11th, 2022), 1–177. DOI 10.1561/1500000079. arXiv:2105.05779 [cs.IR]. NSF PAR 10347630.


Michael D. Ekstrand and Maria Soledad Pera. 2022. Matching Consumer Fairness Objectives & Strategies for RecSys. Presented at the 5th FAccTrec Workshop on Responsible Recommendation (peer-reviewed but not archived). arXiv:2209.02662 [cs.IR].


Amifa Raj and Michael D. Ekstrand. 2022. Measuring Fairness in Ranked Results: An Analytical and Empirical Comparison. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ’22), 726–736. DOI 10.1145/3477495.3532018. NSF PAR 10329880.


Michael D. Ekstrand and Daniel Kluver. 2021. Exploring Author Gender in Book Rating and Recommendation. User Modeling and User-Adapted Interaction 31(3) (February 4th, 2021), 377–420. DOI 10.1007/s11257-020-09284-2. arXiv:1808.07586v2. NSF PAR 10218853.


Other research cited:

  • Lequn Wang and Thorsten Joachims. 2021. “User Fairness, Item Fairness, and Diversity for Rankings in Two-Sided Markets.” In Proceedings of the 2021 ACM SIGIR International Conference on Theory of Information Retrieval (ICTIR ’21). ACM, 23–41. doi:10.1145/3471158.3472260.
  • Lex Beattie, Dan Taber, and Henriette Cramer. 2022. “Challenges in Translating Research to Practice for Evaluating Fairness and Bias in Recommendation Systems.” In Proceedings of the 16th ACM Conference on Recommender Systems (RecSys ’22). doi:10.1145/3523227.3547403.
  • Alex Beutel, Ed H. Chi, Cristos Goodrow, Jilin Chen, Tulsee Doshi, Hai Qian, Li Wei, et al. 2019. “Fairness in Recommendation Ranking through Pairwise Comparisons.” In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. ACM Press. doi:10.1145/3292500.3330745.
  • Alex Beutel, Jilin Chen, Zhe Zhao, and Ed H. Chi. 2017. “Data Decisions and Theoretical Implications When Adversarially Learning Fair Representations.” arXiv [cs.LG]. http://arxiv.org/abs/1707.00075.
  • Benjamin Fish, Ashkan Bashardoust, Danah Boyd, Sorelle Friedler, Carlos Scheidegger, and Suresh Venkatasubramanian. 2019. “Gaps in Information Access in Social Networks?” In The World Wide Web Conference (WWW ’19). doi:10.1145/3308558.3313680.
  • Shira Mitchell, Eric Potash, Solon Barocas, Alexander D’Amour, and Kristian Lum. 2020. “Algorithmic Fairness: Choices, Assumptions, and Definitions.” Annual Review of Statistics and Its Application 8 (November). doi:10.1146/annurev-statistics-042720-125902.
  • Asia J. Biega, Krishna P. Gummadi, and Gerhard Weikum. 2018. “Equity of Attention: Amortizing Individual Fairness in Rankings.” In Proceedings of the 41st International ACM SIGIR Conference on Research & Development in Information Retrieval. ACM. doi:10.1145/3209978.3210063.
  • Kate Crawford. 2017. “The Trouble with Bias.” Keynote at NeurIPS 2017.
  • Ben Green and Salomé Viljoen. 2020. “Algorithmic Realism: Expanding the Boundaries of Algorithmic Thought.” In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* ’20). doi:10.1145/3351095.3372840.
  • Andrew D. Selbst, Danah Boyd, Sorelle A. Friedler, Suresh Venkatasubramanian, and Janet Vertesi. 2019. “Fairness and Abstraction in Sociotechnical Systems.” In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* ’19). doi:10.1145/3287560.3287598.
  • Sorelle A. Friedler, Carlos Scheidegger, and Suresh Venkatasubramanian. 2021. “The (Im)possibility of Fairness.” Communications of the ACM 64 (4): 136–43. doi:10.1145/3433949.
  • Alexandra Chouldechova. 2017. “Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments.” Big Data 5 (2): 153–63. doi:10.1089/big.2016.0047.
  • Reuben Binns. 2020. “On the Apparent Conflict between Individual and Group Fairness.” In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* ’20). doi:10.1145/3351095.3372864.