Students
One of the great parts of my job is working with students on research and software development. This page collects information for and about those students.
If you are interested in doing research with me, particularly as one of my advisees, see my information for prospective students for details and current openings.

Student Publications

2023. Inference at Scale: Significance Testing for Large Search and Recommendation Experiments. Short paper in Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ’23). DOI 10.1145/3539618.3592004. arXiv:2305.02461. NSF PAR 10423691.

2021. Statistical Inference: The Missing Piece of RecSys Experiment Reliability Discourse. In Proceedings of the Perspectives on the Evaluation of Recommender Systems Workshop 2021 (PERSPECTIVES @ RecSys ’21). DOI 10.48550/arXiv.2109.06424. arXiv:2109.06424. Cited 5 times.

2023. Towards Measuring Fairness in Grid Layout in Recommender Systems. Presented at the 6th FAccTrec Workshop on Responsible Recommendation (peer-reviewed but not archived). DOI 10.48550/arXiv.2309.10271. arXiv:2309.10271.

2023. Patterns of Gender-Specializing Query Reformulation. Short paper in Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ’23). DOI 10.1145/3539618.3592034. arXiv:2304.13129. NSF PAR 10423689.

2023. Much Ado About Gender: Current Practices and Future Recommendations for Appropriate Gender-Aware Information Access. In Proceedings of the 2023 Conference on Human Information Interaction and Retrieval (CHIIR ’23). DOI 10.1145/3576840.3578316. arXiv:2301.04780. NSF PAR 10423693. Acceptance rate: 39.4%. Cited 3 times.

2022. Fire Dragon and Unicorn Princess: Gender Stereotypes and Children’s Products in Search Engine Responses. In SIGIR eCom ’22. DOI 10.48550/arXiv.2206.13747. arXiv:2206.13747. Cited 2 times.

2022. Measuring Fairness in Ranked Results: An Analytical and Empirical Comparison. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ’22), pp. 726–736. DOI 10.1145/3477495.3532018. NSF PAR 10329880. Acceptance rate: 20%. Cited 17 times.

2021. Baby Shark to Barracuda: Analyzing Children’s Music Listening Behavior. In RecSys 2021 Late-Breaking Results (RecSys ’21 LBR). DOI 10.1145/3460231.3478856. NSF PAR 10316668. Cited 3 times.

2021. Pink for Princesses, Blue for Superheroes: The Need to Examine Gender Stereotypes in Kids’ Products in Search and Recommendations. In Proceedings of the 5th International and Interdisciplinary Workshop on Children & Recommender Systems (KidRec ’21), at IDC 2021. DOI 10.48550/arXiv.2105.09296. arXiv:2105.09296. NSF PAR 10335669. Cited 4 times.

2020. Comparing Fair Ranking Metrics. Presented at the 3rd FAccTrec Workshop on Responsible Recommendation (peer-reviewed but not archived). DOI 10.48550/arXiv.2009.01311. arXiv:2009.01311. Cited 20 times.

2020. Estimating Error and Bias in Offline Evaluation Results. Short paper in Proceedings of the 2020 Conference on Human Information Interaction and Retrieval (CHIIR ’20). ACM, 5 pp. DOI 10.1145/3343413.3378004. arXiv:2001.09455. NSF PAR 10146883. Acceptance rate: 47%. Cited 8 times.

2018. Monte Carlo Estimates of Evaluation Metric Error and Bias. Computer Science Faculty Publications and Presentations 148, Boise State University. Presented at the REVEAL 2018 Workshop on Offline Evaluation for Recommender Systems, a workshop at RecSys 2018. DOI 10.18122/cs_facpubs/148/boisestate. NSF PAR 10074452. Cited 1 time.

2018. All The Cool Kids, How Do They Fit In?: Popularity and Demographic Biases in Recommender Evaluation and Effectiveness. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency (FAT* 2018). PMLR, Proceedings of Machine Learning Research 81:172–186. Acceptance rate: 24%. Cited 170 times.

2017. Recommender Response to Diversity and Popularity Bias in User Profiles. Short paper in Proceedings of the 30th International Florida Artificial Intelligence Research Society Conference (Recommender Systems track). AAAI, pp. 657–660. No acceptance rate reported. Cited 15 times.

2018. Exploring Author Gender in Book Rating and Recommendation. In Proceedings of the 12th ACM Conference on Recommender Systems (RecSys ’18). ACM, pp. 242–250. DOI 10.1145/3240323.3240373. arXiv:1808.07586v1. Acceptance rate: 17.5%. Citations reported under UMUAI21.

2017. Sturgeon and the Cool Kids: Problems with Random Decoys for Top-N Recommender Evaluation. In Proceedings of the 30th International Florida Artificial Intelligence Research Society Conference (Recommender Systems track). AAAI, pp. 639–644. No acceptance rate reported. Cited 9 times.

Co-advised with Dr. Apan Qasem.

Current Students
Ngozi Ihemelandu (Ph.D.)
Graduated Students
Amifa Raj (Ph.D. 2023)
Srabanti Guha (MS 2023)
Carlos Segura Cerna (MS 2020)
Mucun Tian (MS 2019)
Sushma Channamsetty (MS 2016 @ TXST)
Mohammed Imran R Kazi (MS 2016 @ TXST)
Vaibhav Mahant (MS 2016 @ TXST)
Shuvabrata Saha (MS 2016 @ TXST)