Fair Recommender Systems
In this project, we are investigating several questions of fairness and bias in recommender systems:
- What does it mean for a recommender to be fair, unfair, or biased?
- What potentially discriminatory biases are present in the recommender’s input data, algorithmic structure, or output?
- How do these biases change over time through the recommender-user feedback loop?
This is part of our broader, ongoing goal of making recommenders (and other AI systems) work better for the people they affect. The best starting point for reading about this research is our monograph, Fairness in Information Access Systems (listed below). I have also been involved with several workshops, sessions, and related activities on fair recommendation.
Publications
- 2023. Towards Measuring Fairness in Grid Layout in Recommender Systems. Presented at the 6th FAccTRec Workshop on Responsible Recommendation (peer-reviewed but not archived). arXiv:2309.10271 [cs.IR].
- 2023. Patterns of Gender-Specializing Query Reformulation. Short paper in Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ’23). DOI 10.1145/3539618.3592034. arXiv:2304.13129. NSF PAR 10423689.
- 2023. Much Ado About Gender: Current Practices and Future Recommendations for Appropriate Gender-Aware Information Access. In Proceedings of the 2023 Conference on Human Information Interaction and Retrieval (CHIIR ’23). DOI 10.1145/3576840.3578316. arXiv:2301.04780. NSF PAR 10423693. Acceptance rate: 39.4%. Cited 3 times.
- 2022. Matching Consumer Fairness Objectives & Strategies for RecSys. Presented at the 5th FAccTRec Workshop on Responsible Recommendation (peer-reviewed but not archived). arXiv:2209.02662 [cs.IR].
- 2022. Fire Dragon and Unicorn Princess: Gender Stereotypes and Children’s Products in Search Engine Responses. In SIGIR eCom ’22. DOI 10.48550/arXiv.2206.13747. arXiv:2206.13747 [cs.IR]. Cited 2 times.
- 2022. Measuring Fairness in Ranked Results: An Analytical and Empirical Comparison. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ’22), pp. 726–736. DOI 10.1145/3477495.3532018. NSF PAR 10329880. Acceptance rate: 20%. Cited 24 times.
- 2022. Fairness in Information Access Systems. Foundations and Trends® in Information Retrieval 16(1–2) (July 11th, 2022), 1–177. DOI 10.1561/1500000079. arXiv:2105.05779 [cs.IR]. NSF PAR 10347630. Impact factor: 8. Cited 90 times.
- 2022. The Multisided Complexity of Fairness in Recommender Systems. AI Magazine 43(2) (June 23rd, 2022), 164–176. DOI 10.1002/aaai.12054. NSF PAR 10334796. Cited 11 times.
- 2022. Fairness in Recommender Systems. In Recommender Systems Handbook (3rd edition), Francesco Ricci, Lior Rokach, and Bracha Shapira, eds. Springer-Verlag. DOI 10.1007/978-1-0716-2197-4_18. ISBN 978-1-0716-2196-7. Cited 13 times.
- 2021. Exploring Author Gender in Book Rating and Recommendation. User Modeling and User-Adapted Interaction 31(3) (February 4th, 2021), 377–420. DOI 10.1007/s11257-020-09284-2. arXiv:1808.07586v2. NSF PAR 10218853. Impact factor: 4.412. Cited 145 times (citations shared with the RecSys ’18 version◊).
- 2021. Estimation of Fair Ranking Metrics with Incomplete Judgments. In Proceedings of The Web Conference 2021 (TheWebConf 2021). ACM. DOI 10.1145/3442381.3450080. arXiv:2108.05152. NSF PAR 10237411. Acceptance rate: 21%. Cited 30 times.
- 2020. Evaluating Stochastic Rankings with Expected Exposure. In Proceedings of the 29th ACM International Conference on Information and Knowledge Management (CIKM ’20). ACM, pp. 275–284. DOI 10.1145/3340531.3411962. arXiv:2004.13157 [cs.IR]. NSF PAR 10199451. Acceptance rate: 20%. Nominated for Best Long Paper. Cited 137 times.
- 2020. Comparing Fair Ranking Metrics. Presented at the 3rd FAccTRec Workshop on Responsible Recommendation (peer-reviewed but not archived). arXiv:2009.01311 [cs.IR]. Cited 25 times.
- 2020. Overview of the TREC 2019 Fair Ranking Track. In The Twenty-Eighth Text REtrieval Conference (TREC 2019) Proceedings. arXiv:2003.11650. Cited 35 times.
- 2019. FACTS-IR: Fairness, Accountability, Confidentiality, Transparency, and Safety in Information Retrieval. SIGIR Forum 53(2) (December 12th, 2019), 20–43. DOI 10.1145/3458553.3458556. Cited 34 times.
- 2018. Exploring Author Gender in Book Rating and Recommendation. In Proceedings of the 12th ACM Conference on Recommender Systems (RecSys ’18). ACM, pp. 242–250. DOI 10.1145/3240323.3240373. arXiv:1808.07586v1 [cs.IR]. Acceptance rate: 17.5%. Citations reported under UMUAI21◊.
- 2018. All The Cool Kids, How Do They Fit In?: Popularity and Demographic Biases in Recommender Evaluation and Effectiveness. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency (FAT* 2018). PMLR, Proceedings of Machine Learning Research 81:172–186. Acceptance rate: 24%. Cited 216 times.
- 2017. The Demographics of Cool: Popularity and Recommender Performance for Different Groups of Users. In RecSys 2017 Poster Proceedings. CEUR Workshop Proceedings 1905. Cited 15 times.
Workshop Summaries
- 2021. FAccTRec 2021: The 4th Workshop on Responsible Recommendation. In Proceedings of the 15th ACM Conference on Recommender Systems (RecSys ’21). ACM. DOI 10.1145/3460231.3470932. Cited 1 time.
- 2020. 3rd FATREC Workshop: Responsible Recommendation. In Proceedings of the 14th ACM Conference on Recommender Systems (RecSys ’20). ACM. DOI 10.1145/3383313.3411538. Cited 5 times.
- 2020. FairUMAP 2020: The 3rd Workshop on Fairness in User Modeling, Adaptation and Personalization. In Proceedings of the 28th ACM Conference on User Modeling, Adaptation and Personalization (UMAP ’20). ACM. DOI 10.1145/3340631.3398671. Cited 2 times.
- 2019. Workshop on Fairness, Accountability, Confidentiality, Transparency, and Safety in Information Retrieval (FACTS-IR). In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ’19). ACM. DOI 10.1145/3331184.3331644. Cited 6 times.
- 2019. FairUMAP 2019 Chairs’ Welcome Overview. In Adjunct Publication of the 27th Conference on User Modeling, Adaptation and Personalization (UMAP ’19). ACM. DOI 10.1145/3314183.3323842.
- 2018. 2nd FATREC Workshop: Responsible Recommendation. In Proceedings of the 12th ACM Conference on Recommender Systems (RecSys ’18). ACM. DOI 10.1145/3240323.3240335. Cited 11 times.
- 2018. UMAP 2018 Fairness in User Modeling, Adaptation and Personalization (FairUMAP 2018) Chairs’ Welcome & Organization. In Adjunct Publication of the 26th Conference on User Modeling, Adaptation, and Personalization (UMAP ’18). ACM. DOI 10.1145/3213586.3226200.
- 2017. The FATREC Workshop on Responsible Recommendation. In Proceedings of the 11th ACM Conference on Recommender Systems (RecSys ’17). ACM. DOI 10.1145/3109859.3109960. Cited 12 times.