NSF CAREER award on recommenders, humans, and data

In 2018, I was awarded the NSF CAREER award to study how recommender systems and our evaluations of them respond to the human messiness of their input data. We computer scientists have long known the principle of ‘garbage in, garbage out’: with bad data, a system will produce bad outputs. But in practice, computing systems can differ a great deal in precisely how they translate such inputs to outputs.

Our goal in this project is to understand that response — to characterize the ‘garbage response curve’ of common recommendation algorithms and surrounding statistical and experimental techniques. For a given type and quantity of garbage (metric/intent mismatch, discriminatory bias, polarized content), we want to understand its impact on recommendations, subsequent human behavior, and the information experiments provide to operators of recommender systems.

Project Abstract

Systems that recommend products, places, and services are an increasingly common part of everyday life and commerce, making it important to understand how recommendation algorithms affect outcomes for both individual users and larger social groups. To do this, the project team will develop novel methods of simulating users' behavior based on large-scale historical datasets. These methods will be used to better understand vulnerabilities that underlying biases in training datasets pose to commonly-used machine learning-based methods for building and testing recommender systems, as well as characterize the effectiveness of common evaluation metrics such as recommendation accuracy and diversity given different models of how people interact with recommender systems in practice. The team will publicly release its datasets, software, and novel metrics for the benefit of other researchers and developers of recommender systems. The work also will inform the development of course materials about the social impact of data analytics and computing as well as outreach activities for librarians, who are often in the position of helping information seekers understand the way search engines and other recommender systems affect their ability to get what they need.

The work is organized around two main themes. The first will quantify and mitigate the popularity bias and misclassified decoy problems in offline recommender evaluation that tend to lead to popular, known recommendations. To do this, the team will develop simulation-based evaluation models that encode a variety of assumptions about how users select relevant items to buy and rate, and use them to quantify the statistical biases these assumptions induce in recommendation quality metrics. They will calibrate these simulations by comparing them with existing datasets covering books, research papers, music, and movies. These models and datasets will help drive the second main theme: measuring the impact of feature distributions in training data on recommender algorithm accuracy and diversity, while developing bias-resistant algorithms. The team will use data resampling techniques along with the simulation models, extended to model system behavior over time, to evaluate how different algorithms mitigate, propagate, or exacerbate underlying distributional biases through their recommendations, and how those biased recommendations in turn affect future user behavior and experience.
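To make the first theme concrete, here is a minimal simulation sketch in the spirit of that approach. Everything in it is hypothetical and invented for illustration (the user and item counts, the observation model, and the two toy recommenders); it is not the project's calibrated models. It simulates users whose true preferences are independent of item popularity, but whose ratings are observed more often for popular items, and then evaluates two recommenders against the observed data versus the (normally unknowable) true relevance.

```python
import numpy as np

rng = np.random.default_rng(42)
n_users, n_items, n_rel = 500, 200, 10

# Ground truth: each user's relevant items are drawn uniformly at random,
# so true relevance is *independent* of item popularity.
true_rel = np.zeros((n_users, n_items), dtype=bool)
for u in range(n_users):
    true_rel[u, rng.choice(n_items, size=n_rel, replace=False)] = True

# Observation model: a relevant item is rated with probability that decays
# with its popularity rank (item 0 is most popular, item 199 least).
obs_prob = 0.9 * (1 - np.arange(n_items) / n_items) ** 2
observed = true_rel & (rng.random((n_users, n_items)) < obs_prob)

# Two toy recommenders, each producing one global top-10 list:
top_pop = np.argsort(-observed.sum(axis=0))[:10]        # most-rated items
top_rand = rng.choice(n_items, size=10, replace=False)  # random items

def hit_rate(rel, top):
    """Fraction of users with at least one top-list item marked relevant."""
    return rel[:, top].any(axis=1).mean()

hr_obs_pop, hr_true_pop = hit_rate(observed, top_pop), hit_rate(true_rel, top_pop)
hr_obs_rand, hr_true_rand = hit_rate(observed, top_rand), hit_rate(true_rel, top_rand)

print(f"popularity rec: observed HR={hr_obs_pop:.2f}, true HR={hr_true_pop:.2f}")
print(f"random rec:     observed HR={hr_obs_rand:.2f}, true HR={hr_true_rand:.2f}")
```

Because true relevance was drawn independently of popularity, the two lists are equally good in expectation against the ground truth, yet the observed-data hit rate rewards the popularity recommender: a small instance of the popularity-bias problem in offline evaluation that this project aims to quantify.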


Research Outcomes

Published Papers and Outputs

Per NSF policy, all published papers are deposited in the NSF Public Access Repository, searchable by award ID; see the list associated with this grant.


Michael D. Ekstrand, Anubrata Das, Robin Burke, and Fernando Diaz. 2021. Fairness and Discrimination in Information Access Systems. Foundations and Trends® in Information Retrieval (to appear), 92 pp. arXiv:2105.05779 [cs.IR].


Amifa Raj, Ashlee Milton, and Michael D. Ekstrand. 2021. Pink for Princesses, Blue for Superheroes: The Need to Examine Gender Stereotypes in Kids’ Products in Search and Recommendations. In Proceedings of the 5th International and Interdisciplinary Workshop on Children & Recommender Systems (KidRec '21), at IDC 2021. arXiv:2105.09296.


Ömer Kırnap, Fernando Diaz, Asia J. Biega, Michael D. Ekstrand, Ben Carterette, and Emine Yılmaz. 2021. Estimation of Fair Ranking Metrics with Incomplete Judgments. In Proceedings of The Web Conference 2021 (TheWebConf 2021). ACM. DOI 10.1145/3442381.3450080. NSF PAR 10237411. Acceptance rate: 21%.


Michael D. Ekstrand and Daniel Kluver. 2021. Exploring Author Gender in Book Rating and Recommendation. User Modeling and User-Adapted Interaction (February 2021). DOI 10.1007/s11257-020-09284-2. NSF PAR 10218853. Cited 1 time.


Michael D. Ekstrand. 2020. LensKit for Python: Next-Generation Software for Recommender Systems Experiments. In Proceedings of the 29th ACM International Conference on Information and Knowledge Management (CIKM '20, Resource track). ACM, pp. 2999–3006. DOI 10.1145/3340531.3412778. arXiv:1809.03125 [cs.IR]. NSF PAR 10199450. No acceptance rate reported. Cited 5 times.


Fernando Diaz, Bhaskar Mitra, Michael D. Ekstrand, Asia J. Biega, and Ben Carterette. 2020. Evaluating Stochastic Rankings with Expected Exposure. In Proceedings of the 29th ACM International Conference on Information and Knowledge Management (CIKM '20). ACM, pp. 275–284. DOI 10.1145/3340531.3411962. arXiv:2004.13157 [cs.IR]. NSF PAR 10199451. Acceptance rate: 20%. Nominated for Best Long Paper. Cited 26 times.


Amifa Raj, Connor Wood, Ananda Montoly, and Michael D. Ekstrand. 2020. Comparing Fair Ranking Metrics. Presented at the 3rd FAccTrec Workshop on Responsible Recommendation (peer-reviewed but not archived). arXiv:2009.01311 [cs.IR]. Cited 6 times.


Mucun Tian and Michael D. Ekstrand. 2020. Estimating Error and Bias in Offline Evaluation Results. Short paper in Proceedings of the 2020 Conference on Human Information Interaction and Retrieval (CHIIR '20). ACM, 5 pp. DOI 10.1145/3343413.3378004. arXiv:2001.09455 [cs.IR]. NSF PAR 10146883. Acceptance rate: 47%. Cited 1 time.


Ashlee Milton, Michael Green, Adam Keener, Joshua Ames, Michael D. Ekstrand, and Maria Soledad Pera. 2019. StoryTime: Eliciting Preferences from Children for Book Recommendations. Demo paper in Proceedings of the 13th ACM Conference on Recommender Systems (RecSys '19). 2 pp. DOI 10.1145/3298689.3347048. NSF PAR 10133610. Cited 3 times.


Mucun Tian and Michael D. Ekstrand. 2018. Monte Carlo Estimates of Evaluation Metric Error and Bias. Computer Science Faculty Publications and Presentations 148. Boise State University. Presented at the REVEAL 2018 Workshop on Offline Evaluation for Recommender Systems, a workshop at RecSys 2018. DOI 10.18122/cs_facpubs/148/boisestate. NSF PAR 10074452.

Recommendation Tutorial

We have given a tutorial on fairness in information retrieval and recommendation in multiple settings.


Michael D. Ekstrand, Fernando Diaz, and Robin Burke. 2019. Fairness and Discrimination in Recommendation and Retrieval. Tutorial presented at the 13th ACM Conference on Recommender Systems (RecSys '19); abstract in proceedings, 2 pp. DOI 10.1145/3298689.3346964.


Michael D. Ekstrand, Fernando Diaz, and Robin Burke. 2019. Fairness and Discrimination in Retrieval and Recommendation. Tutorial presented at the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '19); abstract in proceedings, 2 pp. DOI 10.1145/3331184.3331380.

TREC Track

I am one of the organizers of the TREC Fair Ranking Track; my participation in this is funded by the grant.

FACTS-IR Workshop

I co-organized the Workshop on Fairness, Accountability, Confidentiality, Transparency, and Safety in Information Retrieval.


Alexandra Olteanu, Jean Garcia-Gathright, Maarten de Rijke, Michael D. Ekstrand, Adam Roegiest, Aldo Lipani, Alex Beutel, Ana Lucic, Ana-Andreea Stoica, Anubrata Das, Asia Biega, Bart Voorn, Claudia Hauff, Damiano Spina, David Lewis, Douglas W Oard, Emine Yilmaz, Faegheh Hasibi, Gabriella Kazai, Graham McDonald, Hinda Haned, Iadh Ounis, Ilse van der Linden, Joris Baan, Kamuela N Lau, Krisztian Balog, Mahmoud Sayed, Maria Panteli, Mark Sanderson, Matthew Lease, Preethi Lahoti, and Toshihiro Kamishima. 2019. FACTS-IR: Fairness, Accountability, Confidentiality, Transparency, and Safety in Information Retrieval. SIGIR Forum 53(2) (December 2019), 20–43. DOI 10.1145/3458553.3458556.


Alexandra Olteanu, Jean Garcia-Gathright, Maarten de Rijke, and Michael D. Ekstrand. 2019. Workshop on Fairness, Accountability, Confidentiality, Transparency, and Safety in Information Retrieval (FACTS-IR). In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '19). ACM. DOI 10.1145/3331184.3331644. Cited 4 times.

Preliminary Work

These papers were written before the project period and establish preliminary results that helped secure the grant.


Michael D. Ekstrand, Mucun Tian, Mohammed R. Imran Kazi, Hoda Mehrpouyan, and Daniel Kluver. 2018. Exploring Author Gender in Book Rating and Recommendation. In Proceedings of the 12th ACM Conference on Recommender Systems (RecSys '18). ACM, pp. 242–250. DOI 10.1145/3240323.3240373. arXiv:1808.07586v1 [cs.IR]. Acceptance rate: 17.5%. Cited 57 times.


Michael D. Ekstrand and Vaibhav Mahant. 2017. Sturgeon and the Cool Kids: Problems with Random Decoys for Top-N Recommender Evaluation. In Proceedings of the 30th International Florida Artificial Intelligence Research Society Conference (Recommender Systems track). AAAI, pp. 639–644. No acceptance rate reported. Cited 4 times.

Educational Outcomes

I have three planned educational activities as part of this project:

  1. Work with Don Winiecki and Boise State CS faculty to incorporate material on ethics and the social impact of technology into graduate artificial intelligence and data science classes.
  2. Collaborate with Eric Lindquist from the Boise State School of Public Service to develop and teach Big Data in Public Life, an interdisciplinary undergraduate course on the interaction of big data, ethics, and policy as data-driven algorithmic systems are increasingly deployed in our society in both public and private sectors.
  3. Develop training materials and teach workshops for librarians across Idaho on recommender systems and related technologies, so they can make better use of these systems in working with their communities and can guide their patrons in engaging with recommenders. The Meridian Library District will work with me to pilot these workshops.

Library Training

See the Library Training page for details and for scheduling a training for your library.

I have given the following presentations:

  • Presentation at the Idaho Library Association 2019 Conference


This material is based upon work supported by the National Science Foundation under Grant No. IIS 17-51278. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.