Reproducible Recommender Systems Research
I have spent considerable effort implementing software and conducting research to promote reproducibility in recommender systems research. This work has most notably resulted in LensKit, an open-source toolkit for recommender systems research. I continue to maintain this software and remain active in promoting reproducibility in the RecSys research community.
- Texas State University Research Enhancement Program: Temporal Evaluation of Recommender Systems ($8,000)
- 2017. Sturgeon and the Cool Kids: Problems with Top-N Recommender Evaluation. In Proceedings of the 30th International Florida Artificial Intelligence Research Society Conference.
- 2016. Dependency Injection with Static Analysis and Context-Aware Policy. Journal of Object Technology 15, 1 (February 2016), pp 1:1–31. DOI:10.5381/jot.2016.15.5.a1.
- Michael D. Ekstrand. 2014. Towards Recommender Engineering: Tools and Experiments in Recommender Differences. Ph.D. thesis, University of Minnesota.
- 2014. Building Open-Source Tools for Reproducible Research and Education. In Sharing, Re-use and Circulation of Resources in Cooperative Scientific Work, a workshop at ACM CSCW 2014.
- 2011. Rethinking The Recommender Research Ecosystem: Reproducibility, Openness, and LensKit. In Proceedings of the Fifth ACM Conference on Recommender Systems (RecSys '11). ACM, 133–140. DOI:10.1145/2043932.2043958. Acceptance rate: 27% (20% for oral presentation, which this received). Cited 70 times (117 est.).
- 2011. RecBench: Benchmarks for Evaluating Performance of Recommender System Architectures. Proceedings of the VLDB Endowment 4, 11 (August 2011), 911–920. Acceptance rate: 18%. Cited 6 times.