Other Recommender Research
In addition to the various recommender systems projects I lead, I have been involved in several side projects with other collaborators.

Surveys and Position Papers

I have written or co-authored a few general recommender systems papers, including our book chapter and survey for Foundations and Trends in HCI:

2018. Rating-Based Collaborative Filtering: Algorithms and Evaluation. In Social Information Access, Peter Brusilovsky and Daqing He, eds. Springer-Verlag, Lecture Notes in Computer Science vol. 10100, pp. 344–390. DOI 10.1007/978-3-319-90092-6_10. ISBN 978-3-319-90091-9. Cited 146 times.

2011. Collaborative Filtering Recommender Systems. Foundations and Trends® in Human-Computer Interaction 4(2) (February 2011), 81–173. DOI 10.1561/1100000009. Cited 1728 times.

And position papers on recommender systems research and development, either generally or applied to particular areas:

2019. Recommender Systems Notation: Proposed Common Notation for Teaching and Research. Computer Science Faculty Publications and Presentations 177, Boise State University. DOI 10.18122/cs_facpubs/177/boisestate. arXiv:1902.01348 [cs.IR]. Cited 12 times.

2017. Challenges in Evaluating Recommendations for Children. In Proceedings of the International Workshop on Children & Recommender Systems (KidRec), at RecSys 2017. Cited 11 times.

2016. First Do No Harm: Considering and Minimizing Harm in Recommender Systems Designed for Engendering Health. In Proceedings of the Workshop on Recommender Systems for Health at RecSys ’16. Cited 16 times.

2016. Behaviorism is Not Enough: Better Recommendations through Listening to Users. In Proceedings of the Tenth ACM Conference on Recommender Systems (RecSys ’16, Past, Present, and Future track). ACM. DOI 10.1145/2959100.2959179. Acceptance rate: 36%. Cited 142 times.

Additional position papers can be found under Reproducible Research.

Rating Interfaces

In this project, led by Tien Nguyen and Daniel Kluver, we examined different interfaces for improving the process of rating movies by giving the user additional information to help guide their rating. We tried several approaches; the results were published at RecSys 2013.

2013. Rating Support Interfaces to Improve User Experience and Recommender Accuracy. In Proceedings of the 7th ACM Conference on Recommender Systems (RecSys ’13). ACM. DOI 10.1145/2507157.2507188. Acceptance rate: 24%. Cited 60 times.

Information Content of Ratings

In this project, led by Daniel Kluver and Tien Nguyen, we attempted to quantify how much information (in the Shannon information theory sense) is contained in a rating of a movie, and used this as the basis for comparing different rating interfaces by their efficiency (bits per second). One of the particularly fun developments in this paper is an experimental protocol for estimating a lower bound on the mutual information between ratings and the preference construct in the user’s brain, allowing us to reason about how much information about preference, not just information in general, a rating carries. Unfortunately, this protocol requires an impractically large number of users to achieve any kind of statistical power, but it is a very nice theoretical development in my opinion.

2012. How Many Bits per Rating?. In Proceedings of the Sixth ACM Conference on Recommender Systems (RecSys ’12). ACM, pp. 99–106. DOI 10.1145/2365952.2365974. Acceptance rate: 20%. Cited 48 times.
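To make the information-theoretic framing concrete, here is a minimal sketch (not the paper’s actual protocol, estimators, or data) of the two quantities involved: the entropy of an empirical rating distribution, which bounds how many bits a single rating can carry, and the mutual information between two passes of ratings over the same items, which can be read as a lower bound on how much of that information reflects stable preference (since both passes are generated from the same underlying preference). The function names and toy rating arrays are my own illustrations.

```python
from collections import Counter
import math

def entropy(ratings):
    """Shannon entropy (bits) of an empirical rating distribution."""
    counts = Counter(ratings)
    n = len(ratings)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def mutual_information(xs, ys):
    """Empirical mutual information (bits) between paired rating lists."""
    n = len(xs)
    joint = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum(
        (c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in joint.items()
    )

# Toy test-retest data: the same items rated on two occasions.
first_pass  = [4, 5, 3, 4, 2, 5, 1, 3, 4, 5]
second_pass = [4, 4, 3, 5, 2, 5, 2, 3, 4, 4]

print(f"Entropy of a rating: {entropy(first_pass):.2f} bits")
# By the data processing inequality, I(first; second) lower-bounds
# the information a rating carries about the stable preference
# that generated both passes.
print(f"MI across passes:    {mutual_information(first_pass, second_pass):.2f} bits")
```

With toy samples this small the plug-in estimates are very noisy, which is the practical difficulty the paragraph above alludes to: tightening the bound takes far more rating pairs than is usually feasible to collect.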
Database-Embedded Recommenders

This project, led by Justin Levandoski, embedded recommender technology into an SQL database engine.

2012. RecStore: An Extensible And Adaptive Framework for Online Recommender Queries Inside the Database Engine. In Proceedings of the 15th International Conference on Extending Database Technology (EDBT ’12). ACM, pp. 86–96. DOI 10.1145/2247596.2247608. Acceptance rate: 23%. Cited 19 times.
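As a rough illustration of the general idea (not RecStore’s actual architecture or query interface), the sketch below keeps a crude item-item co-occurrence model inside SQLite so that both model maintenance and recommendation are ordinary SQL over database tables; the table and column names are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE ratings (user INTEGER, item INTEGER, rating REAL);
    -- Model state lives inside the database, refreshed from ratings.
    CREATE TABLE item_sim (item_a INTEGER, item_b INTEGER, score REAL);
""")

conn.executemany(
    "INSERT INTO ratings VALUES (?, ?, ?)",
    [(1, 10, 5), (1, 20, 4), (2, 10, 4), (2, 30, 5), (3, 20, 5), (3, 30, 4)],
)

# Refresh the model: co-rating counts as a crude item-item similarity.
conn.execute("DELETE FROM item_sim")
conn.execute("""
    INSERT INTO item_sim
    SELECT a.item, b.item, COUNT(*) AS score
    FROM ratings a JOIN ratings b
      ON a.user = b.user AND a.item <> b.item
    GROUP BY a.item, b.item
""")

# Recommend unseen items for user 1 with a single query over the model.
rows = conn.execute("""
    SELECT s.item_b, SUM(s.score * r.rating) AS strength
    FROM ratings r JOIN item_sim s ON r.item = s.item_a
    WHERE r.user = ?
      AND s.item_b NOT IN (SELECT item FROM ratings WHERE user = ?)
    GROUP BY s.item_b
    ORDER BY strength DESC
""", (1, 1)).fetchall()
print(rows)
```

The appeal of putting the model inside the engine, as I read the paper, is that the database can maintain it incrementally as ratings arrive and optimize recommendation queries like any other query; this sketch only conveys the flavor of SQL-resident recommendation.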