Publications
This page lists my research publications as they appear on my CV. See my research page for a topical view of my research.
Citation counts are drawn from Semantic Scholar and Google Scholar; other services, such as the ACM Digital Library, will report different citation counts.
Journal Articles // 10
Refereed articles published in journals.
2025. Recall, Robustness, and Lexicographic Evaluation. Transactions on Recommender Systems (to appear). arXiv:2302.11370. Cited 2 times.
2024. Building Human Values into Recommender Systems: An Interdisciplinary Synthesis. Transactions on Recommender Systems 2(3) (June 5th, 2024; online November 12th, 2023), 20:1–57. DOI 10.1145/3632297. arXiv:2207.10192 [cs.IR]. Cited 73 times. Cited 48 times.
2024. Distributionally-Informed Recommender System Evaluation. Transactions on Recommender Systems 2(1) (March 7th, 2024; online August 4th, 2023), 6:1–27. DOI 10.1145/3613455. arXiv:2309.05892 [cs.IR]. NSF PAR 10461937. Cited 16 times. Cited 9 times.
2022. Fairness in Information Access Systems. Foundations and Trends® in Information Retrieval 16(1–2) (July 11th, 2022), 1–177. DOI 10.1561/1500000079. arXiv:2105.05779 [cs.IR]. NSF PAR 10347630. Impact factor: 8. Cited 197 times. Cited 85 times.
2021. Exploring Author Gender in Book Rating and Recommendation. User Modeling and User-Adapted Interaction 31(3) (February 4th, 2021), 377–420. DOI 10.1007/s11257-020-09284-2. arXiv:1808.07586v2. NSF PAR 10218853. Impact factor: 4.412. Cited 205 times (shared with RecSys18◊). Cited 110 times (shared with RecSys18◊).
2020. Enhancing Classroom Instruction with Online News. Aslib Journal of Information Management 72(5) (November 17th, 2020; online June 14th, 2020), 725–744. DOI 10.1108/AJIM-11-2019-0309. Impact factor: 1.903. Cited 19 times. Cited 12 times.
2016. Dependency Injection with Static Analysis and Context-Aware Policy. Journal of Object Technology 15(1) (February 1st, 2016), 1:1–31. DOI 10.5381/jot.2016.15.1.a1. Cited 16 times.
2015. Teaching Recommender Systems at Large Scale: Evaluation and Lessons Learned from a Hybrid MOOC. Transactions on Computer-Human Interaction 22(2) (April 1st, 2015), 10:1–23. DOI 10.1145/2728171. Impact factor: 1.293. Cited 119 times (shared with L@S14◊). Cited 30 times.
2011. RecBench: Benchmarks for Evaluating Performance of Recommender System Architectures. Proceedings of the VLDB Endowment 4(11) (August 1st, 2011), 911–920. Acceptance rate: 18%. Cited 22 times. Cited 9 times.
2011. Collaborative Filtering Recommender Systems. Foundations and Trends® in Human-Computer Interaction 4(2) (February 1st, 2011), 81–173. DOI 10.1561/1100000009. Cited 1750 times. Cited 665 times.
Peer-Reviewed Conference Papers // 32
Peer-reviewed full and short papers published in conference proceedings.
2025. The Evolving Landscape of Online Child Safety: Insights from Media Analysis. To appear in Proceedings of the 17th ACM Web Science Conference (WebSci ’25), May 20–24, 2025. Acceptance rate: 39.6%.
2024. It’s Not You, It’s Me: The Impact of Choice Models and Ranking Strategies on Gender Imbalance in Music Recommendation. Short paper in Proceedings of the 18th ACM Conference on Recommender Systems (RecSys ’24), Oct 14, 2024. ACM, pp. 884–889. DOI 10.1145/3640457.3688163. arXiv:2409.03781 [cs.IR]. NSF PAR 10568004. Acceptance rate: 22%.
2024. Multiple Testing for IR and Recommendation System Experiments. Short paper in Proceedings of the 46th European Conference on Information Retrieval (ECIR ’24), Mar 24–28, 2024. Lecture Notes in Computer Science 14610:449–457. DOI 10.1007/978-3-031-56063-7_37. NSF PAR 10497108. Acceptance rate: 24.3%. Cited 3 times.
2024. Not Just Algorithms: Strategically Addressing Consumer Impacts in Information Retrieval. In Proceedings of the 46th European Conference on Information Retrieval (ECIR ’24, IR for Good track), Mar 24–28, 2024. Lecture Notes in Computer Science 14611:314–335. DOI 10.1007/978-3-031-56066-8_25. NSF PAR 10497110. Acceptance rate: 35.9%. Cited 9 times. Cited 3 times.
2024. Towards Optimizing Ranking in Grid-Layout for Provider-side Fairness. In Proceedings of the 46th European Conference on Information Retrieval (ECIR ’24, IR for Good track), Mar 24–28, 2024. Lecture Notes in Computer Science 14612:90–105. DOI 10.1007/978-3-031-56069-9_7. NSF PAR 10497109. Acceptance rate: 35.9%. Cited 1 time. Cited 1 time.
2023. Candidate Set Sampling for Evaluating Top-N Recommendation. In Proceedings of the 22nd IEEE/WIC International Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT ’23), Oct 26–29, 2023. pp. 88–94. DOI 10.1109/WI-IAT59888.2023.00018. arXiv:2309.11723 [cs.IR]. NSF PAR 10487293. Acceptance rate: 28%. Cited 6 times. Cited 1 time.
and .2023. Patterns of Gender-Specializing Query Reformulation. Short paper in Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ’23), Jul 23, 2023. pp. 2241–2245. DOI 10.1145/3539618.3592034. arXiv:2304.13129. NSF PAR 10423689. Acceptance rate: 25.1%. Cited 4 times. Cited 1 time.
, , , and .2023. Inference at Scale: Significance Testing for Large Search and Recommendation Experiments. Short paper in Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ’23), Jul 23, 2023. pp. 2087–2091. DOI 10.1145/3539618.3592004. arXiv:2305.02461. NSF PAR 10423691. Acceptance rate: 25.1%. Cited 3 times.
and .2023. Much Ado About Gender: Current Practices and Future Recommendations for Appropriate Gender-Aware Information Access. In Proceedings of the 2023 Conference on Human Information Interaction and Retrieval (CHIIR ’23), Mar 19, 2023. pp. 269–279. DOI 10.1145/3576840.3578316. arXiv:2301.04780. NSF PAR 10423693. Acceptance rate: 39.4%. Cited 23 times. Cited 12 times.
, , , and .2022. Measuring Fairness in Ranked Results: An Analytical and Empirical Comparison. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ’22), Jul 11, 2022. pp. 726–736. DOI 10.1145/3477495.3532018. NSF PAR 10329880. Acceptance rate: 20%. Cited 67 times. Cited 47 times.
and .2021. Privacy as a Planned Behavior: Effects of Situational Factors on Privacy Perceptions and Plans. In Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization (UMAP ’21), Jul 1, 2021. ACM, pp. 169–178. DOI 10.1145/3450613.3456829. arXiv:2104.11847 [cs.SI]. NSF PAR 10223377. Acceptance rate: 23%. Cited 25 times. Cited 16 times.
, , , and .2021. Estimation of Fair Ranking Metrics with Incomplete Judgments. In Proceedings of The Web Conference 2021 (TheWebConf 2021), Apr 19, 2021. ACM, pp. 1065–1075. DOI 10.1145/3442381.3450080. arXiv:2108.05152. NSF PAR 10237411. Acceptance rate: 21%. Cited 49 times. Cited 36 times.
, , , , , and .2020. LensKit for Python: Next-Generation Software for Recommender Systems Experiments. In Proceedings of the 29th ACM International Conference on Information and Knowledge Management (CIKM ’20, Resource track), Oct 21, 2020. ACM, pp. 2999–3006. DOI 10.1145/3340531.3412778. arXiv:1809.03125 [cs.IR]. NSF PAR 10199450. No acceptance rate reported. Cited 105 times. Cited 73 times.
.2020. Evaluating Stochastic Rankings with Expected Exposure. In Proceedings of the 29th ACM International Conference on Information and Knowledge Management (CIKM ’20), Oct 21, 2020. ACM, pp. 275–284. DOI 10.1145/3340531.3411962. arXiv:2004.13157 [cs.IR]. NSF PAR 10199451. Acceptance rate: 20%. Nominated for Best Long Paper. Cited 200 times. Cited 173 times.
, , , , and .2020. Estimating Error and Bias in Offline Evaluation Results. Short paper in Proceedings of the 2020 Conference on Human Information Interaction and Retrieval (CHIIR ’20), Mar 14, 2020. ACM, 5 pp. DOI 10.1145/3343413.3378004. arXiv:2001.09455 [cs.IR]. NSF PAR 10146883. Acceptance rate: 47%. Cited 13 times. Cited 10 times.
2018. Exploring Author Gender in Book Rating and Recommendation. In Proceedings of the 12th ACM Conference on Recommender Systems (RecSys ’18), Oct 3, 2018. ACM, pp. 242–250. DOI 10.1145/3240323.3240373. arXiv:1808.07586v1 [cs.IR]. Acceptance rate: 17.5%. Citations reported under UMUAI21◊.
2018. Privacy for All: Ensuring Fair and Equitable Privacy Protections. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency (FAT* 2018), Feb 23, 2018. PMLR, Proceedings of Machine Learning Research 81:35–47. Acceptance rate: 24%. Cited 106 times. Cited 78 times.
2018. All The Cool Kids, How Do They Fit In?: Popularity and Demographic Biases in Recommender Evaluation and Effectiveness. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency (FAT* 2018), Feb 23, 2018. PMLR, Proceedings of Machine Learning Research 81:172–186. Acceptance rate: 24%. Cited 302 times. Cited 214 times.
2017. Sturgeon and the Cool Kids: Problems with Random Decoys for Top-N Recommender Evaluation. In Proceedings of the 30th International Florida Artificial Intelligence Research Society Conference (Recommender Systems track), May 29, 2017. AAAI, pp. 639–644. No acceptance rate reported. Cited 17 times. Cited 11 times.
2017. Recommender Response to Diversity and Popularity Bias in User Profiles. Short paper in Proceedings of the 30th International Florida Artificial Intelligence Research Society Conference (Recommender Systems track), May 29, 2017. AAAI, pp. 657–660. No acceptance rate reported. Cited 22 times. Cited 19 times.
2016. Behaviorism is Not Enough: Better Recommendations through Listening to Users. In Proceedings of the Tenth ACM Conference on Recommender Systems (RecSys ’16, Past, Present, and Future track), Sep 17, 2016. ACM, pp. 221–224. DOI 10.1145/2959100.2959179. Acceptance rate: 36%. Cited 144 times. Cited 96 times.
2015. Letting Users Choose Recommender Algorithms: An Experimental Study. In Proceedings of the 9th ACM Conference on Recommender Systems (RecSys ’15), Sep 16, 2015. ACM, pp. 11–18. DOI 10.1145/2792838.2800195. Acceptance rate: 21%. Cited 140 times. Cited 100 times.
2014. User Perception of Differences in Recommender Algorithms. In Proceedings of the 8th ACM Conference on Recommender Systems (RecSys ’14), Oct 6, 2014. ACM, pp. 161–168. DOI 10.1145/2645710.2645737. Acceptance rate: 23%. Cited 287 times. Cited 187 times.
2014. Teaching Recommender Systems at Large Scale: Evaluation and Lessons Learned from a Hybrid MOOC. In Proceedings of the First ACM Conference on Learning @ Scale (L@S ’14), Mar 4, 2014. ACM, pp. 61–70. DOI 10.1145/2556325.2566244. Acceptance rate: 37%. Citations reported under TOCHI15◊. Cited 77 times.
2013. Rating Support Interfaces to Improve User Experience and Recommender Accuracy. In Proceedings of the 7th ACM Conference on Recommender Systems (RecSys ’13), Oct 14, 2013. ACM, pp. 149–156. DOI 10.1145/2507157.2507188. Acceptance rate: 24%. Cited 60 times. Cited 42 times.
2012. When Recommenders Fail: Predicting Recommender Failure for Algorithm Selection and Combination. Short paper in Proceedings of the Sixth ACM Conference on Recommender Systems (RecSys ’12), Sep 10, 2012. ACM, pp. 233–236. DOI 10.1145/2365952.2366002. Acceptance rate: 32%. Cited 88 times. Cited 73 times.
2012. How Many Bits per Rating? In Proceedings of the Sixth ACM Conference on Recommender Systems (RecSys ’12), Sep 10, 2012. ACM, pp. 99–106. DOI 10.1145/2365952.2365974. Acceptance rate: 20%. Cited 48 times. Cited 38 times.
2012. RecStore: An Extensible And Adaptive Framework for Online Recommender Queries Inside the Database Engine. In Proceedings of the 15th International Conference on Extending Database Technology (EDBT ’12), Mar 26, 2012. ACM, pp. 86–96. DOI 10.1145/2247596.2247608. Acceptance rate: 23%. Cited 19 times. Cited 16 times.
2011. Rethinking The Recommender Research Ecosystem: Reproducibility, Openness, and LensKit. In Proceedings of the Fifth ACM Conference on Recommender Systems (RecSys ’11), Oct 24, 2011. ACM, pp. 133–140. DOI 10.1145/2043932.2043958. Acceptance rate: 27% (20% for oral presentation, which this received). Cited 258 times. Cited 196 times.
2011. Searching for Software Learning Resources Using Application Context. In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology (UIST ’11), Oct 17, 2011. ACM, pp. 195–204. DOI 10.1145/2047196.2047220. Acceptance rate: 25%. Cited 56 times. Cited 48 times.
2010. Automatically Building Research Reading Lists. In Proceedings of the 4th ACM Conference on Recommender Systems (RecSys ’10), Sep 27, 2010. ACM, pp. 159–166. DOI 10.1145/1864708.1864740. Acceptance rate: 19%. Cited 124 times. Cited 101 times.
2009. rv you’re dumb: Identifying Discarded Work in Wiki Article History. In Proceedings of the 5th International Symposium on Wikis and Open Collaboration (WikiSym ’09), Oct 25, 2009. ACM, 10 pp. DOI 10.1145/1641309.1641317. Acceptance rate: 36%. Selected as Best Paper. Cited 37 times. Cited 28 times.
Book Chapters // 2
2022. Fairness in Recommender Systems. In Recommender Systems Handbook (3rd edition). Francesco Ricci, Lior Rokach, and Bracha Shapira, eds. Springer-Verlag, pp. 679–707. DOI 10.1007/978-1-0716-2197-4_18. ISBN 978-1-0716-2196-7. Cited 38 times. Cited 21 times.
2018. Rating-Based Collaborative Filtering: Algorithms and Evaluation. In Social Information Access. Peter Brusilovsky and Daqing He, eds. Springer-Verlag, Lecture Notes in Computer Science vol. 10100, pp. 344–390. DOI 10.1007/978-3-319-90092-6_10. ISBN 978-3-319-90091-9. Cited 152 times. Cited 100 times.
Workshops and Posters // 17
Peer-reviewed articles for workshops, poster proceedings, and similar venues.
2024. The Impossibility of Fair LLMs. In HEAL: Human-centered Evaluation and Auditing of Language Models, a non-archival workshop at CHI 2024, May 12, 2024. arXiv:2406.03198 [cs.CL]. Cited 13 times. Cited 5 times.
2023. Towards Measuring Fairness in Grid Layout in Recommender Systems. Presented at the 6th FAccTRec Workshop on Responsible Recommendation at RecSys 2023 (peer-reviewed but not archived). arXiv:2309.10271 [cs.IR]. Cited 1 time.
2022. Matching Consumer Fairness Objectives & Strategies for RecSys. Presented at the 5th FAccTRec Workshop on Responsible Recommendation at RecSys 2022 (peer-reviewed but not archived). arXiv:2209.02662 [cs.IR]. Cited 6 times. Cited 4 times.
2022. Fire Dragon and Unicorn Princess: Gender Stereotypes and Children’s Products in Search Engine Responses. In SIGIR eCom ’22, Jul 15, 2022. 9 pp. DOI 10.48550/arXiv.2206.13747. arXiv:2206.13747 [cs.IR]. Cited 11 times. Cited 5 times.
2021. Baby Shark to Barracuda: Analyzing Children’s Music Listening Behavior. In RecSys 2021 Late-Breaking Results (RecSys ’21), Sep 26, 2021. pp. 639–644. DOI 10.1145/3460231.3478856. NSF PAR 10316668. Cited 10 times. Cited 4 times.
2021. Statistical Inference: The Missing Piece of RecSys Experiment Reliability Discourse. In Proceedings of the Perspectives on the Evaluation of Recommender Systems Workshop 2021 (RecSys ’21), Sep 25, 2021. arXiv:2109.06424 [cs.IR]. Cited 8 times. Cited 6 times.
2021. Pink for Princesses, Blue for Superheroes: The Need to Examine Gender Stereotypes in Kids’ Products in Search and Recommendations. In Proceedings of the 5th International and Interdisciplinary Workshop on Children & Recommender Systems (KidRec ’21), at IDC 2021, Jun 27, 2021. arXiv:2105.09296. NSF PAR 10335669. Cited 9 times. Cited 5 times.
2020. Comparing Fair Ranking Metrics. Presented at the 3rd FAccTRec Workshop on Responsible Recommendation at RecSys 2020 (peer-reviewed but not archived). arXiv:2009.01311 [cs.IR]. Cited 39 times. Cited 29 times.
2018. Retrieving and Recommending for the Classroom: Stakeholders, Objectives, Resources, and Users. In Proceedings of the ComplexRec 2018 Second Workshop on Recommendation in Complex Scenarios (ComplexRec ’18), at RecSys 2018, Oct 7, 2018. Cited 10 times. Cited 3 times.
2018. Monte Carlo Estimates of Evaluation Metric Error and Bias. Computer Science Faculty Publications and Presentations 148, Boise State University. Presented at the REVEAL 2018 Workshop on Offline Evaluation for Recommender Systems at RecSys 2018. DOI 10.18122/cs_facpubs/148/boisestate. NSF PAR 10074452. Cited 1 time. Cited 1 time.
2018. The LKPY Package for Recommender Systems Experiments: Next-Generation Tools and Lessons Learned from the LensKit Project. Computer Science Faculty Publications and Presentations 147, Boise State University. Presented at the REVEAL 2018 Workshop on Offline Evaluation for Recommender Systems at RecSys 2018. DOI 10.18122/cs_facpubs/147/boisestate. arXiv:1809.03125v1 [cs.IR]. Cited 11 times. Cited 19 times.
2018. Recommending Texts to Children with an Expert in the Loop. In Proceedings of the 2nd International Workshop on Children & Recommender Systems (KidRec ’18), at IDC 2018, Jun 19, 2018. DOI 10.18122/cs_facpubs/140/boisestate. Cited 7 times. Cited 6 times.
2018. Do Different Groups Have Comparable Privacy Tradeoffs? In Moving Beyond a ‘One-Size Fits All’ Approach: Exploring Individual Differences in Privacy, a workshop at CHI 2018, Apr 21, 2018. NSF PAR 10222636. Cited 4 times. Cited 4 times.
2017. The Demographics of Cool: Popularity and Recommender Performance for Different Groups of Users. In RecSys 2017 Poster Proceedings, Aug 28, 2017. CEUR, Workshop Proceedings 1905. Cited 17 times. Cited 6 times.
2017. Challenges in Evaluating Recommendations for Children. In Proceedings of the International Workshop on Children & Recommender Systems (KidRec), at RecSys 2017, Aug 27, 2017. Cited 11 times.
2016. First Do No Harm: Considering and Minimizing Harm in Recommender Systems Designed for Engendering Health. In Proceedings of the Workshop on Recommender Systems for Health at RecSys ’16, Sep 15, 2016. Cited 16 times. Cited 11 times.
2014. Building Open-Source Tools for Reproducible Research and Education. At Sharing, Re-use, and Circulation of Resources in Cooperative Scientific Work, a workshop at CSCW 2014, Feb 15, 2014.
Editorially-Reviewed Publications // 4
Articles in magazines, journals, etc. that were editorially reviewed.
2023. Seeking Information with a ‘More Knowledgeable Other’. ACM Interactions 30(1) (January 11th, 2023), 70–73. DOI 10.1145/3573364. Cited 8 times. Cited 3 times.
2022. The Multisided Complexity of Fairness in Recommender Systems. AI Magazine 43(2) (June 23rd, 2022), 164–176. DOI 10.1002/aaai.12054. NSF PAR 10334796. Cited 36 times. Cited 20 times.
2019. FACTS-IR: Fairness, Accountability, Confidentiality, Transparency, and Safety in Information Retrieval. SIGIR Forum 53(2) (December 12th, 2019), 20–43. DOI 10.1145/3458553.3458556. Cited 54 times. Cited 19 times.
2018. The Dagstuhl Perspectives Workshop on Performance Modeling and Prediction. SIGIR Forum 52(1) (June 1st, 2018), 91–101. DOI 10.1145/3274784.3274789. Cited 17 times. Cited 19 times.
Tutorials // 4
2025. Conducting Recommender Systems User Studies Using POPROX. Tutorial to be presented at the 33rd ACM International Conference on User Modeling, Adaptation, and Personalization (UMAP 2025), Jun 16–19, 2025.
2024. Conducting Recommender Systems User Studies Using POPROX. Tutorial presented at the 18th ACM Conference on Recommender Systems (RecSys ’24), Oct 14–18, 2024. pp. 1277–1278. DOI 10.1145/3640457.3687092.
2019. Fairness and Discrimination in Recommendation and Retrieval. Tutorial presented at the 13th ACM Conference on Recommender Systems (RecSys ’19), Sep 19, 2019. pp. 576–577. DOI 10.1145/3298689.3346964. Cited 49 times. Cited 37 times.
2019. Fairness and Discrimination in Retrieval and Recommendation. Tutorial presented at the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ’19), Jul 21, 2019. pp. 1403–1404. DOI 10.1145/3331184.3331380. Cited 56 times. Cited 39 times.
Demos // 3
2023. Introducing LensKit-Auto, an Experimental Automated Recommender System (AutoRecSys) Toolkit. Demo recorded in Proceedings of the 17th ACM Conference on Recommender Systems (RecSys ’23), Sep 18–22, 2023. pp. 1212–1216. DOI 10.1145/3604915.3610656. Cited 13 times. Cited 9 times.
2019. StoryTime: Eliciting Preferences from Children for Book Recommendations. Demo recorded in Proceedings of the 13th ACM Conference on Recommender Systems (RecSys ’19), Sep 19, 2019. 2 pp. DOI 10.1145/3298689.3347048. NSF PAR 10133610. Cited 18 times. Cited 9 times.
2011. LensKit: A Modular Recommender Framework. Demo recorded in Proceedings of the 5th ACM Conference on Recommender Systems (RecSys ’11), Oct 27, 2011. ACM, pp. 349–350. DOI 10.1145/2043932.2044001. Cited 44 times. Cited 2 times.
Preprints and Reports // 5
Unreviewed preprints, technical reports, and similar manuscripts.
2023. Responsible AI Research Needs Impact Statements Too. arXiv:2311.11776 [cs.AI]. Cited 7 times.
2023. Unified Browsing Models for Linear and Grid Layouts. arXiv:2310.12524 [cs.IR]. Cited 1 time. Cited 1 time.
2021. Multiversal Simulacra: Understanding Hypotheticals and Possible Worlds Through Simulation. arXiv:2110.00811 [cs.IR]. Cited 2 times. Cited 2 times.
2019. Recommender Systems Notation: Proposed Common Notation for Teaching and Research. Computer Science Faculty Publications and Presentations 177, Boise State University. DOI 10.18122/cs_facpubs/177/boisestate. arXiv:1902.01348 [cs.IR]. Cited 12 times. Cited 4 times.
2018. From Evaluating to Forecasting Performance: How to Turn Information Retrieval, Natural Language Processing and Recommender Systems into Predictive Sciences (Dagstuhl Perspectives Workshop 17442). Dagstuhl Manifestos 7(1) (November 21st, 2018), 96–139. DOI 10.4230/DagMan.7.1.96. Cited 20 times. Cited 17 times.
Workshop Summaries and Reports // 19
Summaries for workshops and special issues I have co-organized, as well as outcome reports that aren't listed under another category.
2024. FAccTRec 2024: The 7th Workshop on Responsible Recommendation. Meeting summary in Proceedings of the 18th ACM Conference on Recommender Systems (RecSys ’24), Oct 14, 2024. ACM. Cited 1 time.
2024. AltRecSys: A Workshop on Alternative, Unexpected, and Critical Ideas on Recommendation. Meeting summary in Proceedings of the 18th ACM Conference on Recommender Systems (RecSys ’24), Oct 14, 2024. ACM. DOI 10.1145/3640457.3687104.
2023. FAccTRec 2023: The 6th Workshop on Responsible Recommendation. Meeting summary in Proceedings of the 17th ACM Conference on Recommender Systems (RecSys ’23), Sep 24, 2023. ACM. DOI 10.1145/3604915.3608761. Cited 1 time. Cited 1 time.
2023. Overview of the TREC 2022 Fair Ranking Track. Meeting summary in The Thirty-First Text REtrieval Conference (TREC 2022) Proceedings (TREC 2022), Mar 1, 2023. arXiv:2302.05558. Cited 42 times. Cited 13 times.
2022. Overview of the TREC 2021 Fair Ranking Track. Meeting summary in The Thirtieth Text REtrieval Conference (TREC 2021) Proceedings (TREC 2021), Mar 1, 2022. https://trec.nist.gov/pubs/trec30/papers/Overview-F.pdf
2021. FAccTRec 2021: The 4th Workshop on Responsible Recommendation. Meeting summary in Proceedings of the 15th ACM Conference on Recommender Systems (RecSys ’21), Sep 24, 2021. ACM. DOI 10.1145/3460231.3470932. Cited 2 times. Cited 2 times.
2021. SimuRec: Workshop on Synthetic Data and Simulation Methods for Recommender Systems Research. Meeting summary in Proceedings of the 15th ACM Conference on Recommender Systems (RecSys ’21), Sep 13, 2021. ACM. DOI 10.1145/3460231.3470938. Cited 20 times. Cited 15 times.
2021. Preface to the Special Issue on Fair, Accountable, and Transparent Recommender Systems. User Modeling and User-Adapted Interaction 31(3) (July 24th, 2021), 371–375. DOI 10.1007/s11257-021-09297-5. Cited 10 times. Cited 7 times.
2021. Overview of the TREC 2020 Fair Ranking Track. Meeting summary in The Twenty-Ninth Text REtrieval Conference (TREC 2020) Proceedings (TREC 2020), Mar 1, 2021. arXiv:2108.05135. Cited 16 times. Cited 7 times.
2020. 3rd FATREC Workshop: Responsible Recommendation. Meeting summary in Proceedings of the 14th ACM Conference on Recommender Systems (RecSys ’20), Sep 15, 2020. ACM. DOI 10.1145/3383313.3411538. Cited 6 times. Cited 6 times.
2020. UMAP 2020 Fairness in User Modeling, Adaptation and Personalization (FairUMAP 2020) Chairs’ Welcome. Meeting summary in Adjunct Publication of the 28th ACM Conference on User Modeling, Adaptation and Personalization (UMAP ’20), Jul 1, 2020. ACM. DOI 10.1145/3386392.3399565.
2020. FairUMAP 2020: The 3rd Workshop on Fairness in User Modeling, Adaptation and Personalization. Meeting summary in Proceedings of the 28th ACM Conference on User Modeling, Adaptation and Personalization (UMAP ’20), Jul 1, 2020. ACM. DOI 10.1145/3340631.3398671. Cited 5 times. Cited 2 times.
2020. Overview of the TREC 2019 Fair Ranking Track. Meeting summary in The Twenty-Eighth Text REtrieval Conference (TREC 2019) Proceedings (TREC 2019), Mar 25, 2020. arXiv:2003.11650. Cited 49 times. Cited 14 times.
2019. Workshop on Fairness, Accountability, Confidentiality, Transparency, and Safety in Information Retrieval (FACTS-IR). Meeting summary in Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ’19), Jul 21, 2019. ACM. DOI 10.1145/3331184.3331644. Cited 6 times.
2019. FairUMAP 2019 Chairs’ Welcome Overview. Meeting summary in Adjunct Publication of the 27th Conference on User Modeling, Adaptation and Personalization (UMAP ’19), Jun 9, 2019. ACM. DOI 10.1145/3314183.3323842.
2018. 2nd FATREC Workshop: Responsible Recommendation. Meeting summary in Proceedings of the 12th ACM Conference on Recommender Systems (RecSys ’18), Oct 7, 2018. ACM. DOI 10.1145/3240323.3240335. Cited 13 times. Cited 11 times.
2018. UMAP 2018 Fairness in User Modeling, Adaptation and Personalization (FairUMAP 2018) Chairs’ Welcome & Organization. Meeting summary in Adjunct Publication of the 26th Conference on User Modeling, Adaptation, and Personalization (UMAP ’18), Jul 8, 2018. ACM. DOI 10.1145/3213586.3226200.
2017. The FATREC Workshop on Responsible Recommendation. Meeting summary in Proceedings of the 11th ACM Conference on Recommender Systems (RecSys ’17), Aug 28, 2017. ACM. DOI 10.1145/3109859.3109960. Cited 6 times. Cited 14 times.
2011. UCERSTI 2: Second Workshop on User-Centric Evaluation of Recommender Systems and Their Interfaces. Meeting summary in Proceedings of the 5th ACM Conference on Recommender Systems (RecSys ’11), Oct 23, 2011. ACM, pp. 395–396. DOI 10.1145/2043932.2044020. Cited 8 times. Cited 8 times.
Other Publications // 5
Publications and abstract-only presentations that don't fit elsewhere.
2021. Evaluating Recommenders with Distributions. In Proceedings of the RecSys 2021 Workshop on Perspectives on the Evaluation of Recommender Systems (RecSys ’21), Sep 25, 2021. Cited 2 times.
2019. Supplementing Classroom Texts with Online Resources. At 2019 American Educational Research Association Conference, Apr 5, 2019. Cited 17 times.
2018. Supplementing Classroom Texts with Online Resources. At 2018 Annual Meeting of the Northwest Rocky Mountain Educational Research Association, Oct 18, 2018.
2017. Yak Shaving with Michael Ekstrand. CSR Tales no. 4 (December 29th, 2017). PURL https://purl.org/mde/alpaca.
2014. Towards Recommender Engineering: Tools and Experiments in Recommender Differences. Ph.D. thesis, University of Minnesota. HDL 11299/165307. Cited 8 times. Cited 4 times.