Blog Articles 76–80

Why User Control in Recommender Systems?

The theme of my RecSys 2015 paper, along with the other papers in its session, is giving users control over their recommendation experience.

Why do we want to do this? Isn’t the idea of recommender systems to figure out what the user wants and give it to them, without needing significant intervention?

There are a few reasons I think user control is an important research direction for recommender systems. First, different users have different needs, and different algorithms have different strengths. This is the idea behind McNee’s human-recommender interaction framework, and it has been both the premise and a finding of several of my experiments. So far, we don’t have good meta-recommenders for identifying which recommender will best meet a particular user’s needs, so giving users control is a way to punt on that problem.

First-and-a-half, if we give users control in the short term, then we can obtain the training data needed to develop such meta-recommenders and provide a better user experience down the road.

Letting Users Choose Recommenders

[Image: Switching Algorithms]

This is a paper about a drop-down menu. I’ll be presenting it in the first session on Wednesday at RecSys 2015; it is joint work with Daniel Kluver, Max Harper, and Joe Konstan at GroupLens.

In our previous paper, we examined what algorithms users say they prefer and the differences they perceive between those algorithms. This paper asks the follow-up question: when users are allowed to select the algorithm they actually use, what do they do?

RecSys 2015 Agenda

Going to RecSys 2015 this year? I’ll be there, giving several talks:

  • On Wednesday, I’ll present a paper on what happens when you give users a ‘pick recommender’ menu (work with Joe Konstan, Max Harper, and Daniel Kluver).

  • On Saturday, I’ll be speaking at the doctoral symposium on finishing out a recommender systems Ph.D. and moving to the next step in your career.

  • On Sunday, I am giving a talk at the Large Scale Recommender Systems workshop on scalability challenges in recommender systems research. It might be a bit of an odd talk for that venue: I plan to discuss what happens when the thing you want to scale is experimental capability, possibly over smaller data sets; some of the things we’ve done in LensKit to support that kind of work; and the capabilities and challenges of different kinds of experimental setups for understanding recommender systems and user behavior.

Slides for each of my talks will be published on my talks page.

Mixed in with all this, of course, are a lot of meetings with various friends and colleagues (waves to CrowdRec) and a lot of talk about research. I look forward to seeing you all in Vienna.

Serendipity at the Library

[Image: cover of Programming in BASIC]

It’s hard, I think, for a recommender system to beat a good library shelf. The wonderful serendipity of browsing the stacks of books, be it the New Arrivals rack or section Z1002, is a high bar to match; at the very least, library shelves have been very good to me.

I basically owe my career to finding Christopher Lampton’s Programming in BASIC on the shelves of the children’s nonfiction section of the Pocahontas Public Library. My programming skill grew through additional books from its shelves, and those of the libraries in larger neighboring towns.

Also at the Pocahontas library, I discovered The Cuckoo’s Egg, one of my favorite books in high school. A footnote therein presented a recipe for chocolate chocolate chip cookies, dubbed ‘Hacker Cookies’ as they entered my family’s repertoire.

Similarity Functions in Item-Item CF

The core of an item-item collaborative filter is the item similarity function: a function of two items $s(i,j): \mathcal{I} \times \mathcal{I} \to [-1,1]$ that measures how similar those items are. Common choices are vector similarity functions over the vectors of users’ ratings of each item, such as the cosine similarity or Pearson correlation.

Early on, Sarwar et al. tested a few choices:

  • The Pearson correlation from statistics, where $\mu_i$ is item $i$’s mean rating:

    $$\frac{\sum_u{(r_{ui} - \mu_i) (r_{uj} - \mu_j)}}{\sqrt{\sum_u{(r_{ui} - \mu_i)^2}} \sqrt{\sum_u{(r_{uj} - \mu_j)^2}}}$$

  • The cosine similarity between raw vectors:

    $$\frac{\vec{r}_i \cdot \vec{r}_j}{\|\vec{r}_i\|_2 \|\vec{r}_j\|_2} = \frac{\sum_u{r_{ui} r_{uj}}}{\sqrt{\sum_u{r_{ui}^2}} \sqrt{\sum_u{r_{uj}^2}}}$$

  • The adjusted cosine similarity between vectors normalized by subtracting the user’s mean rating:

    $$\frac{\vec{\hat{r}}_i \cdot \vec{\hat{r}}_j}{\|\vec{\hat{r}}_i\|_2 \|\vec{\hat{r}}_j\|_2} = \frac{\sum_u{(r_{ui} - \mu_u) (r_{uj} - \mu_u)}}{\sqrt{\sum_u{(r_{ui} - \mu_u)^2}} \sqrt{\sum_u{(r_{uj} - \mu_u)^2}}}$$

They found adjusted cosine to work better, and so far as I know, it has been the dominant similarity function for rating-based item-item CF systems.
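To make the formulas concrete, here is a minimal NumPy sketch of all three similarity functions over a toy ratings matrix. This is an illustration, not LensKit’s implementation; the matrix `R`, the helper `corated`, and the choice to center the Pearson correlation on an item’s mean over all of its raters (rather than over co-raters only) are my own assumptions here.

```python
# A minimal sketch (not LensKit's implementation) of the three similarity
# functions. Rows of R are users, columns are items; np.nan marks a
# missing rating. The matrix and helper names are illustrative.
import numpy as np

R = np.array([
    [4.0, 5.0, np.nan, 3.0],
    [3.0, np.nan, 4.0, 2.0],
    [5.0, 4.0, 2.0, np.nan],
    [np.nan, 3.0, 5.0, 4.0],
])

def corated(M, i, j):
    """Ratings of items i and j restricted to users who rated both."""
    mask = ~np.isnan(M[:, i]) & ~np.isnan(M[:, j])
    return M[mask, i], M[mask, j]

def cosine(R, i, j):
    """Raw cosine: missing ratings are treated as zeros."""
    ri, rj = np.nan_to_num(R[:, i]), np.nan_to_num(R[:, j])
    return (ri @ rj) / (np.linalg.norm(ri) * np.linalg.norm(rj))

def pearson(R, i, j):
    """Pearson: center co-ratings by each item's mean rating (mu_i, mu_j).
    The mean here is over all of an item's raters; computing it over
    co-raters only is another common variant."""
    ri, rj = corated(R, i, j)
    ri = ri - np.nanmean(R[:, i])
    rj = rj - np.nanmean(R[:, j])
    return (ri @ rj) / (np.linalg.norm(ri) * np.linalg.norm(rj))

def adjusted_cosine(R, i, j):
    """Adjusted cosine: subtract each user's mean rating (mu_u) first."""
    Rhat = R - np.nanmean(R, axis=1)[:, np.newaxis]  # center each user's row
    ri, rj = corated(Rhat, i, j)
    return (ri @ rj) / (np.linalg.norm(ri) * np.linalg.norm(rj))

for name, fn in [("cosine", cosine), ("pearson", pearson),
                 ("adjusted cosine", adjusted_cosine)]:
    print(f"{name}: {fn(R, 0, 1):.3f}")
```

Even on this toy matrix the three measures disagree noticeably, which is the point: the normalization choice changes the neighborhoods an item-item CF builds, and hence the recommendations it produces.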