Fair Recommender Systems
In this project, we are investigating several questions of fairness and bias in recommender systems:
- What does it mean for a recommender to be fair, unfair, or biased?
- What potentially discriminatory biases are present in the recommender’s input data, algorithmic structure, or output?
- How do these biases change over time through the recommender-user feedback loop?
This is part of our ongoing goal to make recommenders (and other AI systems) better for the people they affect.
Funding
- 2018–2023: NSF award 1751278, $482,081: CAREER: User-Based Simulation Methods for Quantifying Sources of Error and Bias in Recommender Systems
Workshops and Meetings
I have been involved with several workshops and sessions related to fair recommendation:
- Fairness track at TREC 2019
- FATREC2 at RecSys 2018
- Recommenders Session at FAT* 2018
- FATREC at RecSys 2017, the Workshop on Responsible Recommendation (proceedings)
Publications
2018. Exploring Author Gender in Book Rating and Recommendation. In Proceedings of the 12th ACM Conference on Recommender Systems (RecSys '18). ACM. Acceptance rate: 17.5%.
2018. All The Cool Kids, How Do They Fit In?: Popularity and Demographic Biases in Recommender Evaluation and Effectiveness. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency (FAT* 2018). PMLR, Proceedings of Machine Learning Research 81:172–186. Acceptance rate: 24%.
2017. The Demographics of Cool: Popularity and Recommender Performance for Different Groups of Users. In RecSys 2017 Poster Proceedings. CEUR Workshop Proceedings 1905.