Education
- Ph.D (2014)
- Computer Science, University of Minnesota.
- Advisers: John T. Riedl and Joseph A. Konstan
- B.S. (2007)
- Computer Engineering, Iowa State University.
Employment History
- 2016–present
- Assistant Professor, Dept. of Computer Science, Boise State University
- Co-director, People and Information Research Team (PIReT)
- 2014–2016
- Assistant Professor, Dept. of Computer Science, Texas State University
- 2008–2014
- Graduate Research Assistant, GroupLens Research, University of Minnesota
- Su 2012, F 2013
- Instructor, University of Minnesota
- Summer 2010
- Research Intern, Autodesk Research, Toronto, Canada
- 2007–2008, S 2011
- Teaching Assistant, University of Minnesota
- 2005–2007
- Undergrad Research Assistant, Scalable Computing Laboratory, Iowa State University
Teaching
Boise State University
| Term | Course | Title | Credits | Students |
|------|--------|-------|---------|----------|
| F21 | CS 538 | Recommender Systems | 3 | 11 |
| F20 | CS 533 | Intro to Data Science | 3 | 22 |
| S20 | CS 697 | Equity and Discrimination | 3 | 3 |
| S20 | CS 410 | Databases | 3 | 36 |
| F19 | CS 533 | Intro to Data Science | 3 | 28 |
| S19 | CS 538 | Recommender Systems | 3 | 12 |
| F18 | CS 410/510 | Databases | 3 | 40 |
| Su18 | CS 310-HU | Intro to Databases | 1 | 6 |
| S18 | CS 410/510 | Databases | 3 | 22 |
| F17 | CS 533 | Intro to Data Science | 3 | 22 |
| S17 | CS 597 | Recommender Systems | 3 | 13 |
| F16 | CS 410/510 | Databases | 3 | 28 |
Texas State University
- CS 4332 (Intro to Database Systems)
- CS 3320 (Internet Software Development)
- CS 5369Q/4379Q (Recommender Systems)
- CS 4350 (Unix Systems Programming)
Coursera
Joseph A. Konstan and I co-created the Recommender Systems specialization on Coursera, along with its two previous single-course versions.
University of Minnesota
- Instructor for CSCI 5980-1 (Intro to Recommender Systems)
- Summer instructor for CSCI 1902 (Structure of Computer Programming II)
- TA for CSCI 5125 (Collaborative and Social Computing) and CSCI 1902
Teaching Professional Development
- Boise State University Ten for Teaching program.
- Boise State University Center for Teaching and Learning Course Design Institute, a one-week intensive session in Summer 2017.
- CTL workshops on service learning, mastery-based grading, and other topics.
- Texas State University’s Program for Excellence in Teaching and Learning (2014–2015).
- Preparing Future Faculty at the University of Minnesota.
Students
- Amifa Raj (Ph.D, expected 2023)
- Ngozi Ihemelandu (Ph.D, expected 2023)
- Adam Keener (M.S., ongoing)
- Carlos Segura Cerna (M.S. 2020; project: Recommendation Server for LensKit)
- Mucun Tian (M.S. 2019; thesis: Estimating Error and Bias of Offline Recommender System Evaluation Results)
- Vaibhav Mahant (M.S. 2016, Texas State University; thesis: Improving Top-N Evaluation of Recommender Systems)
- Sushma Channamsetty (M.S. 2016, Texas State University; thesis: Recommender Response to User Profile Diversity and Popularity Bias)
- Mohammed Imran R Kazi (M.S. 2016, Texas State University; thesis: Exploring Potentially Discriminatory Biases in Book Recommendation)
- Shuvabrata Saha (M.S. 2016, Texas State University; co-advised with Dr. Apan Qasem; thesis: A Multi-objective Autotuning Framework For The Java Virtual Machine)
Publications
Author formatting key:
- †presenter
- ‡equal first authors.
Citation counts from Semantic Scholar.
Book Chapters
2022. Fairness in Recommender Systems. In Recommender Systems Handbook (3rd edition). Francesco Ricci, Lior Rokach, and Bracha Shapira, eds. Springer-Verlag. DOI 10.1007/978-1-0716-2197-4_18. ISBN 978-1-0716-2196-7.
2018. Rating-Based Collaborative Filtering: Algorithms and Evaluation. In Social Information Access. Peter Brusilovsky and Daqing He, eds. Springer-Verlag, Lecture Notes in Computer Science vol. 10100, pp. 344–390. DOI 10.1007/978-3-319-90092-6_10. ISBN 978-3-319-90091-9. Cited 65 times.
Journal Publications
2022. Fairness in Information Access Systems. Foundations and Trends® in Information Retrieval (to appear), 92 pp. arXiv:2105.05779 [cs.IR]. Impact factor: 8.
2021. Exploring Author Gender in Book Rating and Recommendation. User Modeling and User-Adapted Interaction 31(3) (February 2021), 377–420. DOI 10.1007/s11257-020-09284-2. NSF PAR 10218853. Impact factor: 4.412. Cited 3 times.
2020. Enhancing Classroom Instruction with Online News. Aslib Journal of Information Management 72(5) (June 2020), 725–744. DOI 10.1108/AJIM-11-2019-0309. Impact factor: 1.903. Cited 4 times.
2016. Dependency Injection with Static Analysis and Context-Aware Policy. Journal of Object Technology 15(1) (February 2016), 1:1–31. DOI 10.5381/jot.2016.15.1.a1. Cited 1 time.
2015. Teaching Recommender Systems at Large Scale: Evaluation and Lessons Learned from a Hybrid MOOC. Transactions on Computer-Human Interaction 22(2) (April 2015). DOI 10.1145/2728171. Impact factor: 1.293. Cited 24 times.
2011. RecBench: Benchmarks for Evaluating Performance of Recommender System Architectures. Proceedings of the VLDB Endowment 4(11) (August 2011), 911–920. Acceptance rate: 18%. Cited 11 times.
2011. Collaborative Filtering Recommender Systems. Foundations and Trends® in Human-Computer Interaction 4(2) (February 2011), 81–173. DOI 10.1561/1100000009. Cited 656 times.
Conference Publications
These papers have been published in peer-reviewed conference proceedings.
2022. Measuring Fairness in Ranked Results: An Analytical and Empirical Comparison. To appear in Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ’22). DOI 10.1145/3477495.3532018.
2021. Privacy as a Planned Behavior: Effects of Situational Factors on Privacy Perceptions and Plans. In Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization (UMAP ’21). ACM. DOI 10.1145/3450613.3456829. arXiv:2104.11847 [cs.SI]. NSF PAR 10223377. Acceptance rate: 23%. Cited 3 times.
2021. Estimation of Fair Ranking Metrics with Incomplete Judgments. In Proceedings of The Web Conference 2021 (TheWebConf 2021). ACM. DOI 10.1145/3442381.3450080. arXiv:2108.05152. NSF PAR 10237411. Acceptance rate: 21%. Cited 8 times.
2020. LensKit for Python: Next-Generation Software for Recommender Systems Experiments. In Proceedings of the 29th ACM International Conference on Information and Knowledge Management (CIKM ’20, Resource track). ACM, pp. 2999–3006. DOI 10.1145/3340531.3412778. arXiv:1809.03125 [cs.IR]. NSF PAR 10199450. No acceptance rate reported. Cited 14 times.
2020. Evaluating Stochastic Rankings with Expected Exposure. In Proceedings of the 29th ACM International Conference on Information and Knowledge Management (CIKM ’20). ACM, pp. 275–284. DOI 10.1145/3340531.3411962. arXiv:2004.13157 [cs.IR]. NSF PAR 10199451. Acceptance rate: 20%. Nominated for Best Long Paper. Cited 54 times.
2020. Estimating Error and Bias in Offline Evaluation Results. Short paper in Proceedings of the 2020 Conference on Human Information Interaction and Retrieval (CHIIR ’20). ACM, 5 pp. DOI 10.1145/3343413.3378004. arXiv:2001.09455 [cs.IR]. NSF PAR 10146883. Acceptance rate: 47%. Cited 5 times.
2018. Exploring Author Gender in Book Rating and Recommendation. In Proceedings of the 12th ACM Conference on Recommender Systems (RecSys ’18). ACM, pp. 242–250. DOI 10.1145/3240323.3240373. arXiv:1808.07586v1 [cs.IR]. Acceptance rate: 17.5%. Cited 78 times.
2018. All The Cool Kids, How Do They Fit In?: Popularity and Demographic Biases in Recommender Evaluation and Effectiveness. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency (FAT* 2018). PMLR, Proceedings of Machine Learning Research 81:172–186. Acceptance rate: 24%. Cited 97 times.
2018. Privacy for All: Ensuring Fair and Equitable Privacy Protections. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency (FAT* 2018). PMLR, Proceedings of Machine Learning Research 81:35–47. Acceptance rate: 24%. Cited 48 times.
2017. Sturgeon and the Cool Kids: Problems with Random Decoys for Top-N Recommender Evaluation. In Proceedings of the 30th International Florida Artificial Intelligence Research Society Conference (Recommender Systems track). AAAI, pp. 639–644. No acceptance rate reported. Cited 5 times.
2017. Recommender Response to Diversity and Popularity Bias in User Profiles. Short paper in Proceedings of the 30th International Florida Artificial Intelligence Research Society Conference (Recommender Systems track). AAAI, pp. 657–660. No acceptance rate reported. Cited 11 times.
2015. Letting Users Choose Recommender Algorithms: An Experimental Study. In Proceedings of the 9th ACM Conference on Recommender Systems (RecSys ’15). ACM. DOI 10.1145/2792838.2800195. Acceptance rate: 21%. Cited 81 times.
2014. User Perception of Differences in Recommender Algorithms. In Proceedings of the 8th ACM Conference on Recommender Systems (RecSys ’14). ACM. DOI 10.1145/2645710.2645737. Acceptance rate: 23%. Cited 134 times.
2014. Teaching Recommender Systems at Large Scale: Evaluation and Lessons Learned from a Hybrid MOOC. In Proceedings of the First ACM Conference on Learning @ Scale (L@S ’14). ACM. DOI 10.1145/2556325.2566244. Acceptance rate: 37%. Cited 54 times.
2013. Rating Support Interfaces to Improve User Experience and Recommender Accuracy. In Proceedings of the 7th ACM Conference on Recommender Systems (RecSys ’13). ACM. DOI 10.1145/2507157.2507188. Acceptance rate: 24%. Cited 38 times.
2012. How Many Bits per Rating?. In Proceedings of the Sixth ACM Conference on Recommender Systems (RecSys ’12). ACM, pp. 99–106. DOI 10.1145/2365952.2365974. Acceptance rate: 20%. Cited 39 times.
2012. When Recommenders Fail: Predicting Recommender Failure for Algorithm Selection and Combination. Short paper in Proceedings of the Sixth ACM Conference on Recommender Systems (RecSys ’12). ACM, pp. 233–236. DOI 10.1145/2365952.2366002. Acceptance rate: 32%. Cited 59 times.
2012. RecStore: An Extensible And Adaptive Framework for Online Recommender Queries Inside the Database Engine. In Proceedings of the 15th International Conference on Extending Database Technology (EDBT ’12). ACM, pp. 86–96. DOI 10.1145/2247596.2247608. Acceptance rate: 23%. Cited 15 times.
2011. Rethinking The Recommender Research Ecosystem: Reproducibility, Openness, and LensKit. In Proceedings of the Fifth ACM Conference on Recommender Systems (RecSys ’11). ACM, pp. 133–140. DOI 10.1145/2043932.2043958. Acceptance rate: 27% (20% for oral presentation, which this received). Cited 169 times.
2011. Searching for Software Learning Resources Using Application Context. In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology (UIST ’11). ACM, pp. 195–204. DOI 10.1145/2047196.2047220. Acceptance rate: 25%. Cited 45 times.
2010. Automatically Building Research Reading Lists. In Proceedings of the 4th ACM Conference on Recommender Systems (RecSys ’10). ACM, pp. 159–166. DOI 10.1145/1864708.1864740. Acceptance rate: 19%. Cited 96 times.
2009. rv you’re dumb: Identifying Discarded Work in Wiki Article History. In Proceedings of the 5th International Symposium on Wikis and Open Collaboration (WikiSym ’09). ACM, 10 pp. DOI 10.1145/1641309.1641317. Acceptance rate: 36%. Selected as Best Paper. Cited 32 times.
Workshops, Seminars, Posters, Etc.
These papers have been peer-reviewed for workshops, poster proceedings, and similar venues.
2021. Statistical Inference: The Missing Piece of RecSys Experiment Reliability Discourse. In Proceedings of the Perspectives on the Evaluation of Recommender Systems Workshop 2021 (RecSys ’21). arXiv:2109.06424. Cited 1 time.
2021. Baby Shark to Barracuda: Analyzing Children’s Music Listening Behavior. In RecSys 2021 Late-Breaking Results (RecSys ’21). DOI 10.1145/3460231.3478856. NSF PAR 10316668.
2021. Pink for Princesses, Blue for Superheroes: The Need to Examine Gender Stereotypes in Kids’ Products in Search and Recommendations. In Proceedings of the 5th International and Interdisciplinary Workshop on Children & Recommender Systems (KidRec ’21), at IDC 2021. arXiv:2105.09296. Cited 2 times.
2020. Comparing Fair Ranking Metrics. Presented at the 3rd FAccTrec Workshop on Responsible Recommendation (peer-reviewed but not archived). arXiv:2009.01311 [cs.IR]. Cited 10 times.
2019. StoryTime: Eliciting Preferences from Children for Book Recommendations. Demo recorded in Proceedings of the 13th ACM Conference on Recommender Systems (RecSys ’19). 2 pp. DOI 10.1145/3298689.3347048. NSF PAR 10133610. Cited 1 time.
2018. Retrieving and Recommending for the Classroom: Stakeholders, Objectives, Resources, and Users. In Proceedings of the ComplexRec 2018 Second Workshop on Recommendation in Complex Scenarios (ComplexRec ’18), at RecSys 2018. Cited 4 times.
2018. Monte Carlo Estimates of Evaluation Metric Error and Bias. Computer Science Faculty Publications and Presentations 148. Boise State University. Presented at the REVEAL 2018 Workshop on Offline Evaluation for Recommender Systems, a workshop at RecSys 2018. DOI 10.18122/cs_facpubs/148/boisestate. NSF PAR 10074452. Cited 1 time.
2018. The LKPY Package for Recommender Systems Experiments: Next-Generation Tools and Lessons Learned from the LensKit Project. Computer Science Faculty Publications and Presentations 147. Boise State University. Presented at the REVEAL 2018 Workshop on Offline Evaluation for Recommender Systems, a workshop at RecSys 2018. DOI 10.18122/cs_facpubs/147/boisestate. arXiv:1809.03125v1 [cs.IR]. Cited 19 times.
2018. Recommending Texts to Children with an Expert in the Loop. In Proceedings of the 2nd International Workshop on Children & Recommender Systems (KidRec ’18), at IDC 2018. DOI 10.18122/cs_facpubs/140/boisestate. Cited 7 times.
2018. Do Different Groups Have Comparable Privacy Tradeoffs?. At Moving Beyond a ‘One-Size Fits All’ Approach: Exploring Individual Differences in Privacy, a workshop at CHI 2018. NSF PAR 10222636. Cited 1 time.
2017. The Demographics of Cool: Popularity and Recommender Performance for Different Groups of Users. In RecSys 2017 Poster Proceedings. CEUR, Workshop Proceedings 1905. Cited 4 times.
2017. Challenges in Evaluating Recommendations for Children. In Proceedings of the International Workshop on Children & Recommender Systems (KidRec), at RecSys 2017.
2016. Behaviorism is Not Enough: Better Recommendations through Listening to Users. In Proceedings of the Tenth ACM Conference on Recommender Systems (RecSys ’16, Past, Present, and Future track). ACM. DOI 10.1145/2959100.2959179. Acceptance rate: 36%. Cited 61 times.
2016. First Do No Harm: Considering and Minimizing Harm in Recommender Systems Designed for Engendering Health. In Proceedings of the Workshop on Recommender Systems for Health at RecSys ’16. Cited 10 times.
2014. Building Open-Source Tools for Reproducible Research and Education. At Sharing, Re-use, and Circulation of Resources in Cooperative Scientific Work, a workshop at CSCW 2014.
2011. LensKit: A Modular Recommender Framework. Demo recorded in Proceedings of the 5th ACM Conference on Recommender Systems (RecSys ’11). ACM, pp. 349–350. DOI 10.1145/2043932.2044001. Cited 1 time.
Tutorials
2019. Fairness and Discrimination in Recommendation and Retrieval. Tutorial presented at the 13th ACM Conference on Recommender Systems (RecSys ’19). 2 pp. DOI 10.1145/3298689.3346964.
2019. Fairness and Discrimination in Retrieval and Recommendation. Tutorial presented at the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ’19). 2 pp. DOI 10.1145/3331184.3331380.
Other Publications and Presentations
These publications are unreviewed reports, preprints, abstract-only presentations, etc.
2022. Overview of the TREC 2021 Fair Ranking Track. In The Thirtieth Text REtrieval Conference (TREC 2021) Proceedings (TREC 2021). https://trec.nist.gov/pubs/trec30/papers/Overview-F.pdf.
2022. The Multisided Complexity of Fairness in Recommender Systems. AI Magazine (to appear).
2021. Multiversal Simulacra: Understanding Hypotheticals and Possible Worlds Through Simulation. arXiv:2110.00811 [cs.IR].
2021. Evaluating Recommenders with Distributions. At the RecSys 2021 Workshop on Perspectives on the Evaluation of Recommender Systems (RecSys ’21).
2021. SimuRec: Workshop on Synthetic Data and Simulation Methods for Recommender Systems Research. In Proceedings of the 15th ACM Conference on Recommender Systems (RecSys ’21). ACM. DOI 10.1145/3460231.3470938. Cited 1 time.
2021. FAccTRec 2021: The 4th Workshop on Responsible Recommendation. In Proceedings of the 15th ACM Conference on Recommender Systems (RecSys ’21). ACM. DOI 10.1145/3460231.3470932.
2021. Overview of the TREC 2020 Fair Ranking Track. In The Twenty-Ninth Text REtrieval Conference (TREC 2020) Proceedings (TREC 2020). arXiv:2108.05135. Cited 15 times.
2021. Preface to the Special Issue on Fair, Accountable, and Transparent Recommender Systems. User Modeling and User-Adapted Interaction 31(3) (July 2021), 371–375. DOI 10.1007/s11257-021-09297-5.
2020. 3rd FATREC Workshop: Responsible Recommendation. In Proceedings of the 14th ACM Conference on Recommender Systems (RecSys ’20). ACM. DOI 10.1145/3383313.3411538. Cited 3 times.
2020. FairUMAP 2020: The 3rd Workshop on Fairness in User Modeling, Adaptation and Personalization. In Proceedings of the 28th ACM Conference on User Modeling, Adaptation and Personalization (UMAP ’20). ACM. DOI 10.1145/3340631.3398671. Cited 2 times.
2020. Overview of the TREC 2019 Fair Ranking Track. In The Twenty-Eighth Text REtrieval Conference (TREC 2019) Proceedings (TREC 2019). arXiv:2003.11650. Cited 15 times.
2019. FACTS-IR: Fairness, Accountability, Confidentiality, Transparency, and Safety in Information Retrieval. SIGIR Forum 53(2) (December 2019), 20–43. DOI 10.1145/3458553.3458556. Cited 4 times.
2019. FairUMAP 2019 Chairs’ Welcome Overview. In Adjunct Publication of the 27th Conference on User Modeling, Adaptation and Personalization (UMAP ’19). ACM. DOI 10.1145/3314183.3323842.
2019. Workshop on Fairness, Accountability, Confidentiality, Transparency, and Safety in Information Retrieval (FACTS-IR). In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ’19). ACM. DOI 10.1145/3331184.3331644. Cited 6 times.
2019. Supplementing Classroom Texts with Online Resources. At the 2019 American Educational Research Association Conference.
2019. Recommender Systems Notation: Proposed Common Notation for Teaching and Research. Computer Science Faculty Publications and Presentations 177. Boise State University. DOI 10.18122/cs_facpubs/177/boisestate. arXiv:1902.01348 [cs.IR]. Cited 3 times.
2018. From Evaluating to Forecasting Performance: How to Turn Information Retrieval, Natural Language Processing and Recommender Systems into Predictive Sciences (Dagstuhl Perspectives Workshop 17442). Dagstuhl Manifestos 7(1) (November 2018), 96–139. DOI 10.4230/DagMan.7.1.96. Cited 6 times.
2018. Supplementing Classroom Texts with Online Resources. At the 2018 Annual Meeting of the Northwest Rocky Mountain Educational Research Association.
2018. 2nd FATREC Workshop: Responsible Recommendation. In Proceedings of the 12th ACM Conference on Recommender Systems (RecSys ’18). ACM. DOI 10.1145/3240323.3240335. Cited 8 times.
2018. UMAP 2018 Fairness in User Modeling, Adaptation and Personalization (FairUMAP 2018) Chairs’ Welcome & Organization. In Adjunct Publication of the 26th Conference on User Modeling, Adaptation, and Personalization (UMAP ’18). ACM. DOI 10.1145/3213586.3226200.
2018. The Dagstuhl Perspectives Workshop on Performance Modeling and Prediction. SIGIR Forum 52(1) (June 2018), 91–101. DOI 10.1145/3274784.3274789. Cited 14 times.
2017. Yak Shaving with Michael Ekstrand. CSR Tales no. 4 (December 2017). PURL https://purl.org/mde/alpaca.
2017. The FATREC Workshop on Responsible Recommendation. In Proceedings of the 11th ACM Conference on Recommender Systems (RecSys ’17). ACM. DOI 10.1145/3109859.3109960. Cited 11 times.
2014. Towards Recommender Engineering: Tools and Experiments in Recommender Differences. Ph.D thesis, University of Minnesota. HDL 11299/165307. Cited 5 times.
2011. UCERSTI 2: Second Workshop on User-Centric Evaluation of Recommender Systems and Their Interfaces. In Proceedings of the 5th ACM Conference on Recommender Systems (RecSys ’11). ACM, pp. 395–396. DOI 10.1145/2043932.2044020. Cited 8 times.
Software and Data
I have built several open-source software packages and data sets in the course of my research and other work. Open-source software distribution and open data are key pieces of my research dissemination strategy. My most significant development efforts are:
- LensKit, a toolkit for building, researching, and studying recommender systems (see the illustrative sketch after this list). As of Oct. 25, 2021, the original Java software (in development 2010–2018; paper RecSys11) is known to be used in 64 papers and theses and was used by over 2500 students to complete programming assignments in the Recommender Systems MOOC. The Python software (2018–, papers CIKM20-lk and Reveal18-lk) is used in at least 15 papers, theses, and educational resources, including the PBS show Crash Course AI, and has been downloaded 9,193 times from the Python Package Index in the last 6 months (according to PyPIStats). The current version is 0.13.1, released on June 22, 2021; it is the 20th release of LensKit for Python. https://lenskit.org (current list of known uses: https://lenskit.org/research/)
- Book Data Tools, software tools to integrate multiple public sources of book and book consumption data into a data set for studying social effects in book publication, reading, and recommendation. Used in UMUAI21 and RecSys18. https://bookdata.piret.info
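The sketch below shows what a minimal LensKit for Python experiment looks like. It is an illustrative example only, assuming the 0.13-era lkpy API (lenskit.datasets, lenskit.algorithms, lenskit.batch) and a local copy of the MovieLens ml-latest-small data set; see https://lenskit.org for current, authoritative usage.

```python
# Illustrative sketch (assumes the 0.13-era LensKit for Python API and a
# local MovieLens "ml-latest-small" download; see https://lenskit.org).
from lenskit.datasets import MovieLens
from lenskit.algorithms import Recommender, item_knn
from lenskit import batch

ratings = MovieLens('data/ml-latest-small').ratings  # user/item/rating frame
algo = Recommender.adapt(item_knn.ItemItem(20))      # item-item CF, 20 neighbors
algo.fit(ratings)

users = ratings['user'].unique()[:100]               # a sample of test users
recs = batch.recommend(algo, users, 10)              # top-10 list per user
print(recs.head())
```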
My work has also produced a number of utility packages to support this software and other efforts, including:
- seedbank, a Python package for consistently seeding random number generators (illustrated in the sketch after this list). https://seedbank.lenskit.org
- csr, a Python package for managing sparse matrices in CSR format, compatible with the Numba JIT for scientific Python and with Intel MKL acceleration for several operations. https://csr.lenskit.org
- binpickle, a Python package for saving scientific data structures (such as machine learning models) to disk in either compressed or memory-mappable format. LensKit uses this package to serialize models for both storage and shared-memory parallelism. https://binpickle.lenskit.org
- happylog, a Rust package for easily configuring log output for command-line programs. https://github.com/mdekstrand/happylog
- Grapht, a dependency injection framework for Java with novel configuration and static analysis capabilities (paper JOT16). http://grapht.grouplens.org
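As a hedged illustration of how these utilities combine, the sketch below seeds an experiment with seedbank and round-trips a model artifact with binpickle. It assumes each package's documented top-level API (seedbank.initialize and numpy_rng; binpickle.dump and load); consult the project pages above for current details.

```python
# Illustrative sketch only: assumes seedbank.initialize/numpy_rng and
# binpickle.dump/load as the packages' top-level APIs (see project pages).
import seedbank
import binpickle

seedbank.initialize(20211025)       # seed stdlib, NumPy, etc. from one root seed
rng = seedbank.numpy_rng()          # NumPy Generator derived from that root seed

model = {'weights': rng.standard_normal(8)}

binpickle.dump(model, 'model.bpk')  # write a compressed/memory-mappable file
restored = binpickle.load('model.bpk')
assert (restored['weights'] == model['weights']).all()
```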
Research Funding
External Grants
- 2018–2023: NSF award CHS 17-51278, $482,081: CAREER: User-Based Simulation Methods for Quantifying Sources of Error and Bias in Recommender Systems (PI). Received $16K REU supplements in 2020 and 2021.
Internal Grants
- 2017: Boise State College of Education Civility Grant, $19K: LITERATE: Locating Informational Texts for Engaging Readers And Teaching Equitably (co-PI; with PI Katherine Wright and co-PI Sole Pera)
- 2014: Texas State University Research Enhancement Program (competitive internal research grant), $8K: Temporal Analysis of Recommender Systems (PI)
Invited Talks
- Mar. 2022: ‘You Might Also Think This Is Unfair’ at University of Michigan School of Information (online).
- Nov. 2021: ‘Information Systems for Human Flourishing’ at Vector Institute, Toronto, Canada (online).
- Oct. 2020: Guest lecture on recommender systems and fairness for Carnegie Mellon University Human-AI Interaction course
- Apr. 2020: Guest lecture on recommender systems and fairness for Emory University recommender systems course
- Oct. 2019: ‘Online Recommendation: What? Where? Why? How?’ session at the Idaho Library Association 2019 Conference
- Aug. 2019: ‘User, Agent, Subject, Spy’ seminar at Microsoft Research Montréal
- Jul. 2019: ‘User, Agent, Subject, Spy’ seminar at Criteo AI Labs, Paris, France
- May 2019: ‘Recommendations, Decisions, Feedback Loops, and Maybe Saving the Planet’ at the CRA CCC Visioning Workshop on Economics and Fairness.
- Dec. 2018: ‘User, Agent, Subject, Spy’ seminar at Clemson University
- Nov. 2018: ‘User, Agent, Subject, Spy’ seminar at Carnegie Mellon University Human-Computer Interaction Institute
- Nov. 2018: Guest lecture on recommender systems for Carnegie Mellon University Human-AI Interaction course
- Nov. 2017: ‘Making Information Systems Good for People’ at Whitman College (Walla Walla, WA)
- Jun. 2017: ‘Recommending for People’ seminar at RecSysNL at TU Delft
- Jun. 2017: ‘Recommending for People’ seminar at Jheronimus Academy of Data Science
- Jun. 2017: ‘Recommending for People’ seminar at UCL Mons
- Jun. 2017: ‘Responsible Recommendation’ at the Brussels Big Data and Ethics Meetup, the inaugural event of the DigitYser Big Data community
- Nov. 2016: ‘Recommending for People’ colloquium at the University at Albany Dept. of Computer Science
- Oct. 2016: ‘Introduction to Recommender Systems’ at the Clearwater Developer Conference
- Sep. 2015: ‘Challenges in Scaling Recommender Systems Research’ at the Workshop on Large-Scale Recommender Systems at RecSys ’15 in Vienna, Austria
- Sep. 2015: ‘Levelling Up your Academic Career’ at the Doctoral Symposium at RecSys ’15 in Vienna, Austria
- Sep. 2012: ‘Flexible Recommender Experiments with LensKit’ at the RecSys Challenge Workshop at RecSys ’12 in Dublin, Ireland
- Sep. 2012: ‘The MovieLens Data Set’ (invited talk) at the RecSys Challenge Workshop at RecSys ’12 in Dublin, Ireland
Service
Ongoing Professional Service, Memberships, and Honors
- Senior Member of the Association for Computing Machinery
- Executive committee, ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2020–present
- Co-chair, FAccT Network, 2019–present
- Steering committee, ACM Conference on Recommender Systems (RecSys), 2017–present
- Steering committee, ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2017–present
Program Committee and Editorial Service
- Program Co-chair, 16th ACM Conference on Recommender Systems (RecSys 2022)
- Guest editor, 2021 special issue of User Modeling and User-Adapted Interaction on fairness in user modeling.
- Distinguished Reviewer, ACM Transactions on Interactive Intelligent Systems (TiiS) (2017–present)
- ACM Conference on Recommender Systems (Senior PC 2019–2021, PC 2014–2017)
- ACM Conference on Fairness, Accountability, and Transparency (FAccT) (2018–2021, Area Chair 2018)
- ACM CIKM (Resource Track PC 2020–2021)
- ACM SIGIR (PC 2020–2021 Full and Short Papers; 2021 Perspective and Resource papers)
- NeurIPS Ethical Review panel (2021)
- TheWebConf Track on Behavior Analysis and Personalization (Senior PC 2021, PC 2016–2020)
- Track chair, User Modeling, Adaptation and Personalization (UMAP) 2021
- User Modeling, Adaptation and Personalization (2019–2020)
- Workshop on Fairness, Accountability, and Transparency in Machine Learning (FATML) (2017)
- FLAIRS Special Track on Recommender Systems (2015–2017)
- SAC Recommender Systems track (2013, 2017)
- Ad-hoc conference reviews for CHI, CSCW, IUI, UIST, WikiSym, UMAP, ICWSM.
- Reviewed for Communications of the ACM; ACM journals TDS, TOCHI, TIST, TOIS, TWEB, TKDD, and TIIS; IEEE journals TDSC and TKDE; Interacting with Computers; UMUAI; Information Retrieval Journal; ACM Computing Surveys; Artificial Intelligence Review; and others.
- Grant proposal reviews for NSF (US 2019, 2020), NWO (NL), FWF & WWTF (AT)
Other Professional Service
- Co-organizer, SimuRec Workshop on Simulation and Synthetic Data for Recommender Systems at RecSys 2021
- Sponsorship co-chair, ACM FAccT 2021–2022
- Organized and moderated panel at RecSys 2019 on responsible recommendation
- Co-organizer, TREC Track on Fairness in Information Retrieval (2019–2021)
- PR & Publicity co-chair, 2nd Conference on Fairness, Accountability, and Transparency (ACM FAT* 2019)
- General co-chair, ACM RecSys 2018
- Publications working group, FAccT steering committee (2017)
- Co-organizer, FATREC Workshop on Responsible Recommendation at RecSys 2017, 2018, 2020, 2021
- Co-organizer, Workshop on Fairness, Accountability, Confidentiality, Transparency, and Safety in Information Retrieval (FACTS-IR) at SIGIR 2019
- Co-organizer, FairUMAP workshop at UMAP 2018–2020
- Systems track co-chair, 2018 Conference on Fairness, Accountability, and Transparency
- Participant in Dagstuhl Perspectives Workshop Towards Cross-Domain Performance Modeling and Prediction: IR/RecSys/NLP
- Publicity co-chair, ACM RecSys 2016
- External advisor, CrowdRec (EU Framework Programme collaborative research project, 2014–2016)
- Proceedings co-chair, ACM CHI 2012–2013
- Demos co-chair, ACM RecSys 2012
Department and University Service
- 2020–2021 CS Faculty Search Committee
- COEN SAGE Scholars Program Mentor (2019–2021)
- Boise State College of Engineering Curriculum Committee (2019–present)
- Boise State Ph.D in Computing Steering Committee (2017–present)
- Boise State CS Dept. Curriculum Committee (2017–present)
- Boise State CS Dept. Graduate Recruiting Committee (2017)
- Texas State CS Dept. Undergraduate Committee (2014–2016)
- Texas State CS Dept. Written Comp Exam Grading (2014–2016)
- UMN CS Graduate Student Association secretary (2009–2010)
Community Service
- July 2020 — taught continuing education session for Idaho Council for Libraries
- October 2019 — presented at Idaho Library Association Annual Conference
- February 2019 — addressed Idaho State House Judiciary Committee on H.B. 118, regulating pretrial risk assessment algorithms
- December 2017 — Boise Public Library panel on preparing for a career in computer science
- 2015 — judge, Travis Elementary School Science Fair
Media Mentions
- “Out of the Blue.” (Ravi Shankar, The New Indian Express, May 1, 2022. https://www.newindianexpress.com/opinions/columns/ravi-shankar/2022/may/01/outof-theblue-2447591.html). Quotes from the Washington Post article below.
- “Elon Musk wants Twitter’s algorithm to be public. It’s not that simple.” (Reed Albergotti, The Washington Post, April 16, 2022. https://www.washingtonpost.com/technology/2022/04/16/elon-musk-twitter-algorithm/).
- Quoted at length about how artificial intelligence learns from social signals in “Can AI be horny?” (Chris Stokel-Walker, Input, April 28, 2021; Bustle Digital Group. https://www.inputmag.com/culture/artificial-intelligence-ai-archillect-twitter-horny-sex).
- Quoted in several articles about FAccT suspending Google’s sponsorship for the 2021 conference, in my role as FAccT Sponsor Co-chair and a member of the Executive Committee. These articles include:
- “AI ethics research conference suspends Google sponsorship.” (Khari Johnson, VentureBeat, March 2, 2021. https://venturebeat.com/2021/03/02/ai-ethics-research-conference-suspends-google-sponsorship/)
- “Conference suspends Google sponsorship after ethics experts’ exit.” (D. Matthews, Times Higher Education, March 8, 2021. https://www.timeshighereducation.com/news/conference-suspends-google-sponsorship-after-ethics-experts-exit)
- “Tech transparency conference suspends Google sponsorship over transparency concerns.” (Colleen Flaherty, Inside Higher Ed, March 9, 2021. https://www.insidehighered.com/news/2021/03/09/tech-transparency-conference-suspends-google-sponsorship-over-transparency-concerns)
- “Google offered a professor $60,000, but he turned it down. Here’s why.” (Rachel Metz, CNN Business, March 24, 2021. https://www.cnn.com/2021/03/24/tech/google-ai-ethics-reputation/index.html). I am not the professor who declined funding, but am quoted for context.
- Quoted about voter file data leaks in “D.C. makes it shockingly easy to snoop on your fellow voters.” (Brian Fung, The Switch [a blog by The Washington Post], June 14, 2016. https://www.washingtonpost.com/news/the-switch/wp/2016/06/14/d-c-s-board-of-elections-makes-it-shockingly-easy-to-snoop-on-your-fellow-voters/)
- Quoted about recommender systems principles in “TV seems to know what you want to see; algorithms at work.” (Scott Collins, Los Angeles Times, November 21, 2014. https://www.latimes.com/entertainment/tv/la-et-st-tv-section-algorithm-20141123-story.html)