Blog Articles 71–75

Search and Recommendation

Search and recommendation are closely related concepts.

What is search, but query-directed recommendation? Or recommendation, but zero-query search?

My students asked me about this relationship in my recommender systems class last spring. Trying to answer this question brought me to a formulation that seemed to help them, and perhaps it will be more broadly illuminating as well. For some of you this may well be old hat.

Organizing and Documenting Work

My second-year evaluation packet is due in a couple of weeks. Most of my materials were already in order, and I took the time to reorganize and backfill my archive, so future review submissions will hopefully be fairly easy. And there will be many of them: reappointment reviews until tenure, annual performance reviews to determine pay raises, the Big One for tenure and promotion itself, and hopefully someday an application for Full Professor.

The two biggest problems in preparing these packets seem to be maintaining a complete log of relevant activities and maintaining supporting documentation for them. Given the importance of these reviews, and the many other activities that require some form of CV or snapshot of accomplishments, it seems worth some up-front investment in tooling and organization.

Here’s how I do it.

Why User Control in Recommender Systems?

The theme of my RecSys 2015 paper, along with the other papers in its session, is giving users control over their recommendation experience.

Why do we want to do this? Isn’t the idea of recommender systems to figure out what the user wants and give it to them, without needing significant intervention?

There are a few reasons I think user control is an important research direction for recommender systems. First, different users have different needs, and different algorithms have different strengths. This is the idea behind McNee’s human-recommender interaction framework, and the thesis and results of several of my experiments. So far, we don’t have good meta-recommenders for identifying which recommender will best meet a particular user’s needs, so giving users control is a way to punt on that problem.

First-and-a-half, if we give users control in the short term, then we can obtain more training data to develop potential meta-recommenders to provide a better user experience.

Letting Users Choose Recommenders

Switching Algorithms

This is a paper about a drop-down menu. I’ll be presenting it in the first session on Monday at RecSys 2015; it is joint work with Daniel Kluver, Max Harper, and Joe Konstan at GroupLens.

In our previous paper, we examined what algorithms users say they prefer and the differences they perceive between those algorithms. This paper asks the follow-up question: when users are allowed to select the algorithm they actually use, what do they do?

RecSys 2015 Agenda

Going to RecSys 2015 this year? I’ll be there, giving several talks:

  • On Wednesday, I’ll present a paper on what happens when you give users a ‘pick recommender’ menu (work with Joe Konstan, Max Harper, and Daniel Kluver).

  • On Saturday, I’ll be speaking at the doctoral symposium on finishing a recommender systems Ph.D. and moving to the next step in your career.

  • On Sunday, I am giving a talk at the Large Scale Recommender Systems workshop on scalability challenges in recommender systems research. It might be a bit of an odd talk: I plan to discuss what happens when the thing you want to scale is experimental capability, possibly over smaller data sets; some of the things we’ve done in LensKit to support that kind of work; and the capabilities and challenges of different kinds of experimental setups for understanding recommender systems and user behavior.

Slides for each of my talks will be published on my talks page.

Mixed in with all this, of course, are a lot of meetings with various friends and colleagues (waves to CrowdRec) and conversations about research. I look forward to seeing you all in Vienna.