Fair Recommender Systems
In this project, we are investigating several questions of fairness and bias in recommender systems:
- What does it mean for a recommender to be fair, unfair, or biased?
- What potentially discriminatory biases are present in the recommender’s input data, algorithmic structure, or output?
- How do these biases change over time through the recommender-user feedback loop?
This is part of our ongoing effort to make recommenders (and other AI systems) work better for the people they affect.
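The feedback-loop question above can be illustrated with a toy simulation (in the spirit of the simulation methods this project studies, though this particular model, its parameters, and its popularity-based policy are purely illustrative assumptions, not the project's actual methodology): a recommender that always shows the currently most-popular items, combined with users who click on what they are shown, amplifies a tiny initial popularity difference into a heavily concentrated outcome.

```python
import random

def simulate_feedback_loop(n_items=50, n_rounds=200, k=5, seed=0):
    """Toy model (hypothetical): each round the recommender shows the k
    most-popular items; the user clicks one of them at random, and that
    click raises the item's popularity. Small initial differences are
    amplified over time -- a minimal sketch of recommender feedback bias."""
    rng = random.Random(seed)
    popularity = [1] * n_items      # near-uniform start
    popularity[0] = 2               # one item with a tiny head start
    for _ in range(n_rounds):
        # popularity-biased policy: recommend the current top-k items
        top_k = sorted(range(n_items), key=lambda i: -popularity[i])[:k]
        clicked = rng.choice(top_k)  # user engages with one recommendation
        popularity[clicked] += 1     # engagement feeds back as training signal
    return popularity

pop = simulate_feedback_loop()
top_share = sum(sorted(pop, reverse=True)[:5]) / sum(pop)
print(f"share of clicks concentrated on the top-5 items: {top_share:.2f}")
```

Under this policy every click lands on one of the initially advantaged items, so the top five end up with the large majority of all engagement even though the other 45 items started essentially tied, which is the amplification dynamic the project's simulation work aims to quantify.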
Funding
- 2018–2023: NSF award 1751278 ($482,081): CAREER: User-Based Simulation Methods for Quantifying Sources of Error and Bias in Recommender Systems
Publications
2022. Matching Consumer Fairness Objectives & Strategies for RecSys. Presented at the 5th FAccTRec Workshop on Responsible Recommendation (peer-reviewed but not archived).
2022. Fire Dragon and Unicorn Princess: Gender Stereotypes and Children's Products in Search Engine Responses. In SIGIR eCom '22.
2022. The Multisided Complexity of Fairness in Recommender Systems. AI Magazine 43 (June 2022), 164–176. Cited 4 times.
2022. Fairness in Recommender Systems. In Recommender Systems Handbook (3rd edition). Francesco Ricci, Lior Rokach, and Bracha Shapira, eds. Springer-Verlag.
2021. Exploring Author Gender in Book Rating and Recommendation. User Modeling and User-Adapted Interaction 31(3) (February 2021), 377–420.
2021. Estimation of Fair Ranking Metrics with Incomplete Judgments. In Proceedings of The Web Conference 2021 (TheWebConf 2021). ACM. Acceptance rate: 21%. Cited 16 times.
2020. Evaluating Stochastic Rankings with Expected Exposure. In Proceedings of the 29th ACM International Conference on Information and Knowledge Management (CIKM '20). ACM, pp. 275–284. Acceptance rate: 20%. Nominated for Best Long Paper. Cited 78 times.
2019. FACTS-IR: Fairness, Accountability, Confidentiality, Transparency, and Safety in Information Retrieval. SIGIR Forum 53(2) (December 2019), 20–43. Cited 4 times.
2018. Exploring Author Gender in Book Rating and Recommendation. In Proceedings of the 12th ACM Conference on Recommender Systems (RecSys '18). ACM, pp. 242–250. Acceptance rate: 17.5%. Cited 102 times.
2018. All The Cool Kids, How Do They Fit In?: Popularity and Demographic Biases in Recommender Evaluation and Effectiveness. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency (FAT* 2018). PMLR, Proceedings of Machine Learning Research 81:172–186. Acceptance rate: 24%. Cited 123 times.
2017. The Demographics of Cool: Popularity and Recommender Performance for Different Groups of Users. In RecSys 2017 Poster Proceedings. CEUR, Workshop Proceedings 1905. Cited 5 times.
Workshops and Meetings
I have been involved with several workshops, sessions, and other meetings related to fair recommendation.
2020. FairUMAP 2020: The 3rd Workshop on Fairness in User Modeling, Adaptation and Personalization. In Proceedings of the 28th ACM Conference on User Modeling, Adaptation and Personalization (UMAP '20). ACM. Cited 2 times.
2019. Workshop on Fairness, Accountability, Confidentiality, Transparency, and Safety in Information Retrieval (FACTS-IR). In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '19). ACM. Cited 6 times.
2019. FairUMAP 2019 Chairs' Welcome Overview. In Adjunct Publication of the 27th Conference on User Modeling, Adaptation and Personalization (UMAP '19). ACM.
2018. UMAP 2018 Fairness in User Modeling, Adaptation and Personalization (FairUMAP 2018) Chairs' Welcome & Organization. In Adjunct Publication of the 26th Conference on User Modeling, Adaptation, and Personalization (UMAP '18). ACM.