Students
One of the great parts of my job is working with students on research and software development. This page collects information for and about those students.
If you are interested in doing research with me, particularly as one of my advisees, see my information for prospective students for details and current openings.
My Drexel University students are part of the INERTIA Laboratory (Impact, Novation, Effectiveness, and Responsibility of Technology for Information Access), and you can read more at the lab website.
Current Ph.D. Students
Samira Vaez Barenji
Sushobhan Parajuli
Past Ph.D. Students
Ngozi Ihemelandu (Ph.D. 2024, Boise State)
Amifa Raj (Ph.D. 2023, Boise State)
M.S. Students
Srabanti Guha (MS 2023, Boise State)
Carlos Segura Cerna (MS 2020, Boise State)
Mucun Tian (MS 2019, Boise State)
Sushma Channamsetty (MS 2016, TXST)
Mohammed Imran R Kazi (MS 2016, TXST)
Vaibhav Mahant (MS 2016, TXST)
Shuvabrata Saha (MS 2016, TXST)