Recommending for People (21 Nov. 2016)
This page collects resources referenced in my Nov. 21, 2016 talk at the University at Albany, Recommending for People.

Abstract

Recommender systems help people find movies to watch, introduce new friends on social networks, increase sales for online retailers by connecting their customers with personally-relevant products, and direct readers to additional articles on news publishers’ partner sites. Users interact with recommenders almost everywhere they turn on the modern Internet. However, there is a great deal we do not yet know about how best to design these systems to support their users’ needs and decision-making processes, and about how the recommender and its sociotechnical context support and affect each other.

In this talk, I will present work on understanding the ways in which different recommender algorithms may be able to meet the needs of different users. This research applies several methodologies, including analysis of recommender algorithms on public data sets and studies of both the stated preferences and observable behaviors of the users of a recommender system. Our findings provide evidence, consistent across different experimental settings, that different recommendation algorithms meet the needs of different users, and that among currently-competitive recommendation approaches there is no clear winner even within the single domain of movie recommendation. I will situate this work within the broader context of our research agenda – including further work on reproducible research, studying the behavior of the user-recommender feedback loop, and tailoring recommenders for particular users – and our vision for designing recommender systems that are responsive to the needs and desires of the people they will affect.

Research Presented

2011. Rethinking The Recommender Research Ecosystem: Reproducibility, Openness, and LensKit. In Proceedings of the Fifth ACM Conference on Recommender Systems (RecSys ’11). ACM, pp. 133–140. DOI 10.1145/2043932.2043958. Acceptance rate: 27% (20% for oral presentation, which this received). Cited 255 times.

2012. When Recommenders Fail: Predicting Recommender Failure for Algorithm Selection and Combination. Short paper in Proceedings of the Sixth ACM Conference on Recommender Systems (RecSys ’12). ACM, pp. 233–236. DOI 10.1145/2365952.2366002. Acceptance rate: 32%. Cited 88 times.

2016. Behaviorism is Not Enough: Better Recommendations through Listening to Users. In Proceedings of the Tenth ACM Conference on Recommender Systems (RecSys ’16, Past, Present, and Future track). ACM. DOI 10.1145/2959100.2959179. Acceptance rate: 36%. Cited 141 times.

2015. Letting Users Choose Recommender Algorithms: An Experimental Study. In Proceedings of the Ninth ACM Conference on Recommender Systems (RecSys ’15). ACM. DOI 10.1145/2792838.2800195. Acceptance rate: 21%. Cited 138 times.

2016. Dependency Injection with Static Analysis and Context-Aware Policy. Journal of Object Technology 15(1) (February 1st, 2016), 1:1–31. DOI 10.5381/jot.2016.15.1.a1. Cited 16 times.