Fair Recommender Systems
In this project, we are investigating several questions of fairness and bias in recommender systems:
- What does it mean for a recommender to be fair, unfair, or biased?
- What potentially discriminatory biases are present in the recommender’s input data, algorithmic structure, or output?
- How do these biases change over time through the recommender-user feedback loop?
This is part of our ongoing goal to make recommenders (and other AI systems) better for the people they affect.
Blog Posts and Other Coverage
Funding
- 2018–2023: NSF award 1751278, $482,081: CAREER: User-Based Simulation Methods for Quantifying Sources of Error and Bias in Recommender Systems
Workshops and Meetings
I have been involved with several workshops, sessions, etc. related to fair recommendation.
- Fair Recommendation & Retrieval tutorial
- Fairness track at TREC 2019
- FATREC2 at RecSys 2018
- Recommenders Session at FAT* 2018
- FATREC at RecSys 2017, the Workshop on Responsible Recommendation (proceedings)
Publications
2022. Fairness in Information Access Systems. Foundations and Trends® in Information Retrieval (to appear), 92 pp. arXiv:2105.05779 [cs.IR]. Impact factor: 8.
2022. The Multisided Complexity of Fairness in Recommender Systems. AI Magazine (to appear).
2022. Fairness in Recommender Systems. In Recommender Systems Handbook (3rd edition). Francesco Ricci, Lior Rokach, and Bracha Shapira, eds. Springer-Verlag. DOI 10.1007/978-1-0716-2197-4_18. ISBN 978-1-0716-2196-7.
2021. Exploring Author Gender in Book Rating and Recommendation. User Modeling and User-Adapted Interaction 31(3) (February 2021), 377–420. DOI 10.1007/s11257-020-09284-2. NSF PAR 10218853. Impact factor: 4.412. Cited 3 times.
2021. Estimation of Fair Ranking Metrics with Incomplete Judgments. In Proceedings of The Web Conference 2021 (TheWebConf 2021). ACM. DOI 10.1145/3442381.3450080. arXiv:2108.05152. NSF PAR 10237411. Acceptance rate: 21%. Cited 8 times.
2020. Evaluating Stochastic Rankings with Expected Exposure. In Proceedings of the 29th ACM International Conference on Information and Knowledge Management (CIKM '20). ACM, pp. 275–284. DOI 10.1145/3340531.3411962. arXiv:2004.13157 [cs.IR]. NSF PAR 10199451. Acceptance rate: 20%. Nominated for Best Long Paper. Cited 54 times.
2020. Comparing Fair Ranking Metrics. Presented at the 3rd FAccTrec Workshop on Responsible Recommendation (peer-reviewed but not archived). arXiv:2009.01311 [cs.IR]. Cited 10 times.
2020. Overview of the TREC 2019 Fair Ranking Track. In The Twenty-Eighth Text REtrieval Conference (TREC 2019) Proceedings (TREC 2019). arXiv:2003.11650. Cited 15 times.
2019. FACTS-IR: Fairness, Accountability, Confidentiality, Transparency, and Safety in Information Retrieval. SIGIR Forum 53(2) (December 2019), 20–43. DOI 10.1145/3458553.3458556. Cited 4 times.
2018. Exploring Author Gender in Book Rating and Recommendation. In Proceedings of the 12th ACM Conference on Recommender Systems (RecSys '18). ACM, pp. 242–250. DOI 10.1145/3240323.3240373. arXiv:1808.07586v1 [cs.IR]. Acceptance rate: 17.5%. Cited 78 times.
2018. All The Cool Kids, How Do They Fit In?: Popularity and Demographic Biases in Recommender Evaluation and Effectiveness. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency (FAT* 2018). PMLR, Proceedings of Machine Learning Research 81:172–186. Acceptance rate: 24%. Cited 97 times.
2017. The Demographics of Cool: Popularity and Recommender Performance for Different Groups of Users. In RecSys 2017 Poster Proceedings. CEUR, Workshop Proceedings 1905. Cited 4 times.
Workshop Summaries
2020. 3rd FATREC Workshop: Responsible Recommendation. In Proceedings of the 14th ACM Conference on Recommender Systems (RecSys '20). ACM. DOI 10.1145/3383313.3411538. Cited 3 times.
2020. FairUMAP 2020: The 3rd Workshop on Fairness in User Modeling, Adaptation and Personalization. In Proceedings of the 28th ACM Conference on User Modeling, Adaptation and Personalization (UMAP '20). ACM. DOI 10.1145/3340631.3398671. Cited 2 times.
2019. Workshop on Fairness, Accountability, Confidentiality, Transparency, and Safety in Information Retrieval (FACTS-IR). In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '19). ACM. DOI 10.1145/3331184.3331644. Cited 6 times.
2019. FairUMAP 2019 Chairs' Welcome Overview. In Adjunct Publication of the 27th Conference on User Modeling, Adaptation and Personalization (UMAP '19). ACM. DOI 10.1145/3314183.3323842.
2018. 2nd FATREC Workshop: Responsible Recommendation. In Proceedings of the 12th ACM Conference on Recommender Systems (RecSys '18). ACM. DOI 10.1145/3240323.3240335. Cited 8 times.
2018. UMAP 2018 Fairness in User Modeling, Adaptation and Personalization (FairUMAP 2018) Chairs' Welcome & Organization. In Adjunct Publication of the 26th Conference on User Modeling, Adaptation, and Personalization (UMAP '18). ACM. DOI 10.1145/3213586.3226200.
2017. The FATREC Workshop on Responsible Recommendation. In Proceedings of the 11th ACM Conference on Recommender Systems (RecSys '17). ACM. DOI 10.1145/3109859.3109960. Cited 11 times.