Information Systems for Human Flourishing
I gave this talk on November 19, 2021, at the Vector Institute.
Abstract
Every day, information access systems mediate our experience of the world beyond our immediate senses. Google and Bing help us find what we seek, Amazon and Netflix recommend things for us to buy and watch, Apple News gives us the day’s events, and BuzzFeed guides us to related articles. These systems deliver immense value, but also have profound influence on how we experience information and the resources and perspectives we see. There are significant challenges, however, in measuring this influence and characterizing the benefits and harms these systems deliver to the various people they affect.
In this talk, I will present our work on the question “What does it mean for an information access system to be good for people?” Through a combination of system-building, experimentation, and data analysis, my collaborators and I are working to provide some answers to this question. I will report on several projects on understanding information needs and quantifying systematic biases in recommender system outputs and evaluation, and I will discuss what it takes to make recommendation, retrieval, and the other algorithmic components of information access work for people.
My Research
These papers provide more details on the research I presented. Many of them have accompanying code to reproduce the experiments and results; a brief sketch of what such an experiment looks like appears after the list.
2022. Fairness in Information Access Systems. Foundations and Trends® in Information Retrieval 16(1–2) (July 11th, 2022), 1–177. DOI 10.1561/1500000079. arXiv:2105.05779 [cs.IR]. NSF PAR 10347630. Impact factor: 8. Cited 184 times.
2018. All The Cool Kids, How Do They Fit In?: Popularity and Demographic Biases in Recommender Evaluation and Effectiveness. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency (FAT* 2018). PMLR, Proceedings of Machine Learning Research 81:172–186. Acceptance rate: 24%. Cited 290 times.
2021. Exploring Author Gender in Book Rating and Recommendation. User Modeling and User-Adapted Interaction 31(3) (February 4th, 2021), 377–420. DOI 10.1007/s11257-020-09284-2. arXiv:1808.07586v2. NSF PAR 10218853. Impact factor: 4.412. Cited 201 times (count shared with the RecSys 2018 version of this work).
2020. Comparing Fair Ranking Metrics. Presented at the 3rd FAccTRec Workshop on Responsible Recommendation at RecSys 2020 (peer-reviewed but not archived). arXiv:2009.01311 [cs.IR]. Cited 37 times.
2020. Evaluating Stochastic Rankings with Expected Exposure. In Proceedings of the 29th ACM International Conference on Information and Knowledge Management (CIKM ’20). ACM, pp. 275–284. DOI 10.1145/3340531.3411962. arXiv:2004.13157 [cs.IR]. NSF PAR 10199451. Acceptance rate: 20%. Nominated for Best Long Paper. Cited 187 times.
2020. LensKit for Python: Next-Generation Software for Recommender Systems Experiments. In Proceedings of the 29th ACM International Conference on Information and Knowledge Management (CIKM ’20, Resource track). ACM, pp. 2999–3006. DOI 10.1145/3340531.3412778. arXiv:1809.03125 [cs.IR]. NSF PAR 10199450. No acceptance rate reported. Cited 101 times.
2021. Estimation of Fair Ranking Metrics with Incomplete Judgments. In Proceedings of The Web Conference 2021 (TheWebConf 2021). ACM. DOI 10.1145/3442381.3450080. arXiv:2108.05152. NSF PAR 10237411. Acceptance rate: 21%. Cited 47 times.
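To give a concrete sense of what these reproducible experiments look like, here is a minimal sketch of an offline evaluation built on LensKit for Python. It assumes the 0.x-era LensKit API (module and function names as in that era’s documentation; newer releases may differ), a local copy of the MovieLens 100K data, and illustrative hyperparameters; it is not the exact experimental code from any of the papers above.

```python
# Minimal LensKit (0.x) sketch: cross-validate an item-item k-NN recommender
# on MovieLens 100K and score top-N recommendations with nDCG.
import pandas as pd

from lenskit import batch, topn, util
from lenskit import crossfold as xf
from lenskit.algorithms import Recommender, item_knn as knn
from lenskit.datasets import ML100K

ratings = ML100K('ml-100k').ratings      # assumes the ml-100k data is unpacked here

algo = knn.ItemItem(20)                  # item-item CF with 20 neighbors (illustrative)

all_recs, all_test = [], []
# 5-fold cross-validation over users, holding out 20% of each test user's ratings
for train, test in xf.partition_users(ratings, 5, xf.SampleFrac(0.2)):
    fittable = Recommender.adapt(util.clone(algo))   # fresh copy, wrapped as a top-N recommender
    fittable.fit(train)
    users = test['user'].unique()
    all_recs.append(batch.recommend(fittable, users, 100))
    all_test.append(test)

recs = pd.concat(all_recs, ignore_index=True)
truth = pd.concat(all_test, ignore_index=True)

rla = topn.RecListAnalysis()             # aligns recommendation lists with held-out truth
rla.add_metric(topn.ndcg)
print(rla.compute(recs, truth)['ndcg'].mean())
```

The bias and fairness analyses in the papers above build on this kind of basic evaluation loop, adding group attributes and exposure-oriented metrics to the analysis.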
Projects
Funding
- NSF CAREER award
- Boise State University College of Education Civility Grant
Other Work Cited
- Ben Green and Salomé Viljoen. 2020. Algorithmic Realism: Expanding the Boundaries of Algorithmic Thought. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* ’20). DOI 10.1145/3351095.3372840.
- ACM Code of Ethics
- Kate Crawford. 2017. The Trouble with Bias. NIPS 2017 keynote.
- Rishabh Mehrotra, Ashton Anderson, Fernando Diaz, Amit Sharma, Hanna Wallach, and Emine Yilmaz. 2017. Auditing Search Engines for Differential Satisfaction Across Demographics. In Proceedings of the 26th International Conference on World Wide Web Companion, 626–633. DOI 10.1145/3041021.3054197.
- Robin Burke. 2017. Multisided Fairness for Recommendation. arXiv:1707.00093 [cs.CY].
- Harald Steck. 2018. Calibrated Recommendations. In Proceedings of the 12th ACM Conference on Recommender Systems (RecSys 2018). DOI 10.1145/3240323.3240372.
- Asia J. Biega, Krishna P. Gummadi, and Gerhard Weikum. 2018. Equity of Attention: Amortizing Individual Fairness in Rankings. In Proceedings of the 41st International ACM SIGIR Conference on Research & Development in Information Retrieval (SIGIR ’18). ACM. DOI 10.1145/3209978.3210063.
- Ashudeep Singh and Thorsten Joachims. 2018. Fairness of Exposure in Rankings. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD ’18), ACM, New York, NY, USA, 2219–2228.
- Alex Beutel, Ed H. Chi, Cristos Goodrow, Jilin Chen, Tulsee Doshi, Hai Qian, Li Wei, Yi Wu, Lukasz Heldt, Zhe Zhao, and Lichan Hong. 2019. Fairness in Recommendation Ranking through Pairwise Comparisons. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD ’19). DOI 10.1145/3292500.3330745.