Students
One of the great parts of my job is working with students on research and software development. This page collects information for and about those students.
If you are interested in doing research with me, particularly as one of my advisees, see my information for prospective students. Students interested in doing graduate work with me and the People and Information Research Team (PIReT) should apply to our M.S. in Computer Science or Ph.D. in Computing program; see my Information for Prospective Students for more detail. I am also interested in involving Boise State undergraduate students in our research. I sometimes have funds for paid research opportunities; the university also has several programs that support undergraduate students involved in faculty research, and I am happy to discuss an independent study or other arrangements where you earn course credit for research.
Student Publications
2022. Fire Dragon and Unicorn Princess: Gender Stereotypes and Children’s Products in Search Engine Responses. In SIGIR eCom ’22. DOI 10.48550/arXiv.2206.13747. arXiv:2206.13747 [cs.IR]. Cited 1 time.
2022. Measuring Fairness in Ranked Results: An Analytical and Empirical Comparison. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ’22). pp. 726–736. DOI 10.1145/3477495.3532018. NSF PAR 10329880. Acceptance rate: 20%. Cited 6 times.
2021. Baby Shark to Barracuda: Analyzing Children’s Music Listening Behavior. In RecSys 2021 Late-Breaking Results (RecSys ’21). DOI 10.1145/3460231.3478856. NSF PAR 10316668. Cited 1 time.
2021. Pink for Princesses, Blue for Superheroes: The Need to Examine Gender Stereotypes in Kids’ Products in Search and Recommendations. In Proceedings of the 5th International and Interdisciplinary Workshop on Children & Recommender Systems (KidRec ’21), at IDC 2021. DOI 10.48550/arXiv.2105.09296. arXiv:2105.09296. NSF PAR 10335669. Cited 5 times.
2021. Statistical Inference: The Missing Piece of RecSys Experiment Reliability Discourse. In Proceedings of the Perspectives on the Evaluation of Recommender Systems Workshop 2021 (RecSys ’21). DOI 10.48550/arXiv.2109.06424. arXiv:2109.06424 [cs.IR]. Cited 2 times.
2020. Comparing Fair Ranking Metrics. Presented at the 3rd FAccTRec Workshop on Responsible Recommendation (peer-reviewed but not archived). DOI 10.48550/arXiv.2009.01311. arXiv:2009.01311 [cs.IR]. Cited 16 times.
2020. Estimating Error and Bias in Offline Evaluation Results. Short paper in Proceedings of the 2020 Conference on Human Information Interaction and Retrieval (CHIIR ’20). ACM, 5 pp. DOI 10.1145/3343413.3378004. arXiv:2001.09455 [cs.IR]. NSF PAR 10146883. Acceptance rate: 47%. Cited 7 times.
2018. Monte Carlo Estimates of Evaluation Metric Error and Bias. Computer Science Faculty Publications and Presentations 148. Boise State University. Presented at the REVEAL 2018 Workshop on Offline Evaluation for Recommender Systems, at RecSys 2018. DOI 10.18122/cs_facpubs/148/boisestate. NSF PAR 10074452. Cited 1 time.
2018. All The Cool Kids, How Do They Fit In?: Popularity and Demographic Biases in Recommender Evaluation and Effectiveness. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency (FAT* 2018). PMLR, Proceedings of Machine Learning Research 81:172–186. Acceptance rate: 24%. Cited 154 times.
2018. Exploring Author Gender in Book Rating and Recommendation. In Proceedings of the 12th ACM Conference on Recommender Systems (RecSys ’18). ACM, pp. 242–250. DOI 10.1145/3240323.3240373. arXiv:1808.07586v1 [cs.IR]. Acceptance rate: 17.5%. Cited 112 times. Citations reported under UMUAI21*.
2017. Recommender Response to Diversity and Popularity Bias in User Profiles. Short paper in Proceedings of the 30th International Florida Artificial Intelligence Research Society Conference (Recommender Systems track). AAAI, pp. 657–660. No acceptance rate reported. Cited 12 times.
2017. Sturgeon and the Cool Kids: Problems with Random Decoys for Top-N Recommender Evaluation. In Proceedings of the 30th International Florida Artificial Intelligence Research Society Conference (Recommender Systems track). AAAI, pp. 639–644. No acceptance rate reported. Cited 13 times.
Co-advised with Dr. Apan Qasem.
Current Openings
Current Students
Amifa Raj (Ph.D.)
Ngozi Ihemelandu (Ph.D.)
Srabanti Guha (MS)
Graduated Students
Carlos Segura Cerna (MS 2020)
Mucun Tian (MS 2019)
Sushma Channamsetty (MS 2016 @ TXST)
Mohammed Imran R Kazi (MS 2016 @ TXST)
Vaibhav Mahant (MS 2016 @ TXST)
Shuvabrata Saha (MS 2016 @ TXST)