Recommending for People
This page collects resources referenced in my research seminar talk, Recommending for People, as I have given it in 2017.
I tweak the exact talk a bit for different audiences, but the core ideas are generally the same.
Recommender systems help people find movies to watch, introduce new friends on social networks, increase sales for online retailers by connecting their customers with personally-relevant products, and direct readers to additional articles on news publishers’ partner sites. Users interact with recommenders almost everywhere they turn on the modern Internet. However, there is a great deal we do not yet know about how best to design these systems to support their users’ needs and decision-making processes, and about how the recommender and its sociotechnical context support and affect each other.
In this talk, I will present work on understanding the ways in which different recommender algorithms impact and respond to their users. This research applies several methodologies, including analysis of recommender algorithms on public data sets and studies of both the stated preferences and observable behaviors of the users of a recommender system. Our findings provide evidence, consistent across different experimental settings, that different users are more satisfied by different recommendation algorithms even within the single domain of movie recommendation. I will also discuss our ongoing work on how recommender algorithms interact with biases in their underlying input data and on deep challenges in evaluating recommender effectiveness with respect to actual user needs.
These projects, along with several others, are a part of our broad vision for designing recommenders and other algorithmic information systems that are responsive to the needs, desires, and well-being of the people they will affect.
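As a concrete (and deliberately oversimplified) illustration of the per-user differences the abstract describes, the sketch below compares two trivial rating predictors on a tiny synthetic data set and picks the lower-error predictor separately for each user. The data, the predictor choices, and the holdout split are all hypothetical; the actual experiments behind the talk use real recommender algorithms on public data sets.

```python
# A minimal sketch (not the talk's experiments): two trivial rating
# predictors evaluated per user, illustrating that the best-performing
# algorithm can differ from user to user. All data here is synthetic.

from collections import defaultdict

# (user, item, rating) triples -- hypothetical toy data
ratings = [
    ("u1", "i1", 5), ("u1", "i2", 1), ("u1", "i3", 4),
    ("u2", "i1", 2), ("u2", "i2", 2), ("u2", "i3", 3),
    ("u3", "i1", 4), ("u3", "i2", 5), ("u3", "i3", 2),
]

# Hold out one rating per user; train on the rest.
held = [("u1", "i1", 5), ("u2", "i2", 2), ("u3", "i3", 2)]
train = [t for t in ratings if t not in held]

def global_mean(train):
    """Predict every rating as the overall mean training rating."""
    mu = sum(r for _, _, r in train) / len(train)
    return lambda user, item: mu

def item_mean(train):
    """Predict each rating as the mean training rating of that item."""
    sums, counts = defaultdict(float), defaultdict(int)
    for _, item, r in train:
        sums[item] += r
        counts[item] += 1
    fallback = sum(r for _, _, r in train) / len(train)
    return lambda user, item: (sums[item] / counts[item]
                               if counts[item] else fallback)

algorithms = {"global-mean": global_mean(train),
              "item-mean": item_mean(train)}

def per_user_error(predict):
    """Absolute error on each user's held-out rating."""
    return {u: abs(predict(u, i) - r) for u, i, r in held}

errors = {name: per_user_error(p) for name, p in algorithms.items()}
winners = {u: min(algorithms, key=lambda name: errors[name][u])
           for u, _, _ in held}
print(winners)  # different users can have different winning algorithms
```

Even with predictors this crude, the winner varies by user; the studies listed below make the analogous comparison between real algorithms, using both offline metrics and users' stated preferences.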
Dr. Michael D. Ekstrand is an assistant professor in the Department of Computer Science at Boise State University, where he studies human-computer interaction and recommender systems. He received his Ph.D. in 2014 from the University of Minnesota, where he worked with the GroupLens research group on supporting reproducible research and examining user-relevant differences in recommender algorithms. He co-leads (with Dr. Sole Pera) the People and Information Research Team (PIReT) at Boise State; is the founder and lead developer of LensKit, an open-source software project aimed at supporting reproducible research and education in recommender systems; and co-created (with Dr. Joseph A. Konstan at the University of Minnesota) the Recommender Systems specialization on Coursera. His research interests lie primarily in the ways users and intelligent information systems interact, with the goal of improving these systems’ ability to help their users and produce social benefit, and in the reproducibility of such research.
2011. Rethinking The Recommender Research Ecosystem: Reproducibility, Openness, and LensKit. In Proceedings of the Fifth ACM Conference on Recommender Systems (RecSys '11). ACM, 133–140. DOI:10.1145/2043932.2043958. Acceptance rate: 27% (20% for oral presentation, which this received). Cited 71 times (119 est.).
2017. Sturgeon and the Cool Kids: Problems with Top-N Recommender Evaluation. In Proceedings of the 30th International Florida Artificial Intelligence Research Society Conference.
2012. When Recommenders Fail: Predicting Recommender Failure for Algorithm Selection and Combination. Short paper in Proceedings of the Sixth ACM Conference on Recommender Systems (RecSys '12). ACM, 233–236. DOI:10.1145/2365952.2366002. Acceptance rate: 32%. Cited 19 times.
2014. User Perception of Differences in Recommender Algorithms. In Proceedings of the Eighth ACM Conference on Recommender Systems (RecSys '14). ACM. DOI:10.1145/2645710.2645737. Acceptance rate: 23%. Cited 31 times (61 est.).
2015. Letting Users Choose Recommender Algorithms: An Experimental Study. In Proceedings of the Ninth ACM Conference on Recommender Systems (RecSys '15). ACM. DOI:10.1145/2792838.2800195. Acceptance rate: 21%. Cited 13 times.
2016. Behaviorism is Not Enough: Better Recommendations through Listening to Users. In Proceedings of the Tenth ACM Conference on Recommender Systems (RecSys '16). ACM. DOI:10.1145/2959100.2959179. Acceptance rate: 36% (Past, Present, and Future track). Cited 4 times.
Work in progress on recommender fairness
Balabanović, Marko, and Yoav Shoham. 1997. “Fab: Content-Based, Collaborative Recommendation.” Commun. ACM 40 (3): 66–72. doi:10.1145/245108.245124.
Pera, Maria Soledad, and Yiu-Kai Ng. 2014. “Automating Readers’ Advisory to Make Book Recommendations for K-12 Readers.” In Proceedings of the 8th ACM Conference on Recommender Systems, 9–16. RecSys ’14. New York, NY, USA: ACM. doi:10.1145/2645710.2645721.
Resnick, Paul, Neophytos Iacovou, Mitesh Suchak, Peter Bergstrom, and John Riedl. 1994. “GroupLens: An Open Architecture for Collaborative Filtering of Netnews.” In ACM CSCW ’94, 175–86. ACM. doi:10.1145/192844.192905.
Sarwar, Badrul, George Karypis, Joseph Konstan, and John Riedl. 2001. “Item-Based Collaborative Filtering Recommendation Algorithms.” In ACM WWW ’01, 285–95. ACM. doi:10.1145/371920.372071.
Sarwar, Badrul M, George Karypis, Joseph A Konstan, and John T Riedl. 2000. “Application of Dimensionality Reduction in Recommender System — A Case Study.” In WebKDD 2000. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.29.8381.
Burke, Robin. 2002. “Hybrid Recommender Systems: Survey and Experiments.” User Modeling and User-Adapted Interaction 12 (4): 331–70. doi:10.1023/A:1021240730564.
Rendle, Steffen, Christoph Freudenthaler, Zeno Gantner, and Lars Schmidt-Thieme. 2009. “BPR: Bayesian Personalized Ranking from Implicit Feedback.” In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, 452–461. UAI ’09. http://dl.acm.org/citation.cfm?id=1795114.1795167.
McNee, Sean, John Riedl, and Joseph A. Konstan. 2006. “Making Recommendations Better: An Analytic Model for Human-Recommender Interaction.” In CHI ’06 Extended Abstracts, 1103–8. ACM. doi:10.1145/1125451.1125660.
Franklin, Ursula M. 2004. The Real World of Technology. Revised Edition. Toronto, Ont.; Berkeley, CA: House of Anansi Press.
Mehrotra, Rishabh, Ashton Anderson, Fernando Diaz, Amit Sharma, Hanna Wallach, and Emine Yilmaz. 2017. “Auditing Search Engines for Differential Satisfaction Across Demographics.” In Proceedings of the 26th International Conference on World Wide Web Companion (WWW ’17 Companion).
2016. Dependency Injection with Static Analysis and Context-Aware Policy. Journal of Object Technology 15, 1 (February 2016), pp 1:1–31. DOI:10.5381/jot.2016.15.5.a1.
Speer, Rob. 2017. ConceptNet Numberbatch 17.04: better, less-stereotyped word vectors.
Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai. 2016. Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. In Advances in Neural Information Processing Systems 29 (NIPS 2016).
Hunt, Neil. 2014. 🎞 Quantifying the Value of Better Recommendations.