User, Agent, Subject, Spy
I gave this talk on November 9, 2018, as part of the Carnegie Mellon University Human-Computer Interaction Institute Seminar Series.
My Research
These papers provide more details on the research I presented. Many of them have accompanying code to reproduce the experiments and results.
2020. LensKit for Python: Next-Generation Software for Recommender Systems Experiments. In Proceedings of the 29th ACM International Conference on Information and Knowledge Management (CIKM ’20, Resource track). ACM, pp. 2999–3006. DOI 10.1145/3340531.3412778. arXiv:1809.03125 [cs.IR]. NSF PAR 10199450. No acceptance rate reported. Cited 100 times.
2011. Rethinking the Recommender Research Ecosystem: Reproducibility, Openness, and LensKit. In Proceedings of the Fifth ACM Conference on Recommender Systems (RecSys ’11). ACM, pp. 133–140. DOI 10.1145/2043932.2043958. Acceptance rate: 27% (20% for oral presentation, which this received). Cited 255 times.
2012. When Recommenders Fail: Predicting Recommender Failure for Algorithm Selection and Combination. Short paper in Proceedings of the Sixth ACM Conference on Recommender Systems (RecSys ’12). ACM, pp. 233–236. DOI 10.1145/2365952.2366002. Acceptance rate: 32%. Cited 88 times.
2014. User Perception of Differences in Recommender Algorithms. In Proceedings of the 8th ACM Conference on Recommender Systems (RecSys ’14). ACM. DOI 10.1145/2645710.2645737. Acceptance rate: 23%. Cited 283 times.
2015. Letting Users Choose Recommender Algorithms: An Experimental Study. In Proceedings of the 9th ACM Conference on Recommender Systems (RecSys ’15). ACM. DOI 10.1145/2792838.2800195. Acceptance rate: 21%. Cited 138 times.
2016. Behaviorism is Not Enough: Better Recommendations through Listening to Users. In Proceedings of the Tenth ACM Conference on Recommender Systems (RecSys ’16, Past, Present, and Future track). ACM. DOI 10.1145/2959100.2959179. Acceptance rate: 36%. Cited 141 times.
2018. Retrieving and Recommending for the Classroom: Stakeholders, Objectives, Resources, and Users. In Proceedings of the ComplexRec 2018 Second Workshop on Recommendation in Complex Scenarios (ComplexRec ’18), at RecSys 2018. Cited 8 times.
2018. Recommending Texts to Children with an Expert in the Loop. In Proceedings of the 2nd International Workshop on Children & Recommender Systems (KidRec ’18), at IDC 2018. DOI 10.18122/cs_facpubs/140/boisestate. Cited 7 times.
2018. Monte Carlo Estimates of Evaluation Metric Error and Bias. Computer Science Faculty Publications and Presentations 148, Boise State University. Presented at the REVEAL 2018 Workshop on Offline Evaluation for Recommender Systems at RecSys 2018. DOI 10.18122/cs_facpubs/148/boisestate. NSF PAR 10074452. Cited 1 time.
2018. All The Cool Kids, How Do They Fit In?: Popularity and Demographic Biases in Recommender Evaluation and Effectiveness. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency (FAT* 2018). PMLR, Proceedings of Machine Learning Research 81:172–186. Acceptance rate: 24%. Cited 287 times.
2018. Exploring Author Gender in Book Rating and Recommendation. In Proceedings of the 12th ACM Conference on Recommender Systems (RecSys ’18). ACM, pp. 242–250. DOI 10.1145/3240323.3240373. arXiv:1808.07586v1 [cs.IR]. Acceptance rate: 17.5%. Citations reported under UMUAI21.
2017. Sturgeon and the Cool Kids: Problems with Random Decoys for Top-N Recommender Evaluation. In Proceedings of the 30th International Florida Artificial Intelligence Research Society Conference (Recommender Systems track). AAAI, pp. 639–644. No acceptance rate reported. Cited 16 times.
2018. Privacy for All: Ensuring Fair and Equitable Privacy Protections. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency (FAT* 2018). PMLR, Proceedings of Machine Learning Research 81:35–47. Acceptance rate: 24%. Cited 104 times.
Projects
Funding
- NSF CAREER award
- Boise State University College of Education Civility Grant
Other Work Cited
- ACM Code of Ethics
- Crawford, K. 2017. The Trouble with Bias. NIPS 2017 Keynote.
- Speer, R. 2017. ConceptNet Numberbatch 17.04: better, less-stereotyped word vectors.
- Dwork, C., Hardt, M., Pitassi, T., Reingold, O., & Zemel, R. 2012. Fairness Through Awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference (pp. 214–226). New York, NY, USA: ACM. DOI 10.1145/2090236.2090255
- Friedler, S. A., Scheidegger, C., & Venkatasubramanian, S. 2016. On the (im)possibility of fairness. arXiv:1609.07236 [cs, stat]. Retrieved from http://arxiv.org/abs/1609.07236
- Chouldechova, A. 2016. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. arXiv [stat.AP]. Retrieved from http://arxiv.org/abs/1610.07524
- Kleinberg, J., Mullainathan, S., & Raghavan, M. 2016. Inherent Trade-Offs in the Fair Determination of Risk Scores. arXiv [cs.LG]. Retrieved from http://arxiv.org/abs/1609.05807
- Lipton, Z. C., Chouldechova, A., & McAuley, J. 2017. Does mitigating ML’s disparate impact require disparate treatment? arXiv [stat.ML]. Retrieved from http://arxiv.org/abs/1711.07076
- Burke, R. 2017. Multisided Fairness for Recommendation. arXiv [cs.CY]. Retrieved from http://arxiv.org/abs/1707.00093
- Hunt, N. 2014. 🎞 Quantifying the Value of Better Recommendations.
- FATREC Workshop Series
- Fairness at TREC 2019
- Cremonesi, P., Koren, Y., & Turrin, R. 2010. Performance of Recommender Algorithms on Top-n Recommendation Tasks. In Proceedings of the Fourth ACM Conference on Recommender Systems (RecSys 2010) (pp. 39–46). New York, NY, USA: ACM.
- Sturgeon, T. 1958. On Hand: A Book. Venture Science Fiction, 2(2), 66. March 1958.