Blog Articles 76–80

Tips for Personal Computer Security

For the good of yourself and your friends, family, and neighbors, it’s important to keep your computer (and phone) as secure as you practically can. But how do you do this? There is a lot of security advice floating around; a lot of it is confusing, and some of it is inaccurate. I get things wrong, too!

I’m pretty excited for Decent Security; it’s very much a work in progress, but as Taylor Swift continues to fill it out, I expect that it will be a very good resource.

But until then, and perhaps as something of a Cliff's Notes, here are some of my top suggestions: basic things that I'd suggest to any friends and family.

These also aren’t just limited to your desktop or laptop PC; some of them pertain to online accounts and mobile devices. This guide is also more of a ‘what to do’ than ‘how to do’; it assumes you are comfortable with clicking through settings pages, but don’t know what settings to check.

Sautéed Satay Seitan (or something)

Tried a culinary experiment tonight!

Here’s the recipe (serves 2–3):

  • 1 medium onion, chopped
  • ½ medium jalapeño, diced
  • 2 tsp Satay spices, ground
  • Dash of paprika
  • 1 tsp salt (I didn’t measure, this is a guess)
  • 8oz seitan strips
  • Frozen broccoli (1–2C)
  • ½C water or stock
  • 4 large-ish baby bella mushrooms, sliced
  • Oil (I used vegetable)

Why Microsoft?

This is a joint post by Michael and Jennifer.

We each started using Linux more than a decade ago, and for our entire married life, we have been a primarily Linux-based household.

This spring, we decided to finally get smartphones. In the course of making this decision (and selecting our phones), we reevaluated many aspects of our technology use. This has resulted in a number of changes that many may find surprising:

  • We carry Nokia phones running Windows Phone 8.1.
  • E-mail service for elehack.net is now hosted by Microsoft, via their hosted Exchange service as a part of Office 365 business subscriptions.
  • We are running mainly Windows on our personal laptops.
  • We use Outlook for our e-mail, contacts, and calendars.
  • We use OneDrive for Business and SharePoint to ferry data between our devices and coordinate shared data for our household.

Old Papers on Recommender Systems

There’s a lot of research on recommender systems. There’s a lot of other research that, while not directly mentioning recommenders, is very relevant, including research from decades ago.

A few of my favorite old papers that I think recommender systems researchers would do well to read (and perhaps cite):

  • Back to Bentham? Explorations of experienced utility (Kahneman et al., 1997) — how people experience and remember pain and pleasure. Strong implications for what ratings mean and what kind of utility our recommenders should optimize for.

  • User Modeling via Stereotypes (Rich, 1979) — the first computer-based recommender system that I know about.

  • A searching procedure for information retrieval (Goffman, 1964) — this early IR paper has the crucial insight that the relevance of an item in a search result list (or recommendation list) is not independent of the items that appear before or after it. Rather, an item may be less relevant if it is (partially) redundant with a previous item.
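Goffman's insight underlies modern redundancy-aware re-ranking. As a minimal sketch (my own illustration, not Goffman's procedure), here is an MMR-style greedy re-ranker in Python: each candidate's score is its base relevance, discounted by its similarity to items already placed in the list. The function names and the `lam` trade-off parameter are my inventions for this example.

```python
def rerank(candidates, relevance, similarity, k, lam=0.7):
    """Pick k items greedily, trading off relevance against redundancy.

    candidates: list of item ids
    relevance:  dict mapping item id -> base relevance score
    similarity: function (a, b) -> similarity in [0, 1]
    lam:        weight on relevance vs. novelty (1.0 = ignore redundancy)
    """
    selected = []
    pool = list(candidates)
    while pool and len(selected) < k:
        def marginal(item):
            # Redundancy = similarity to the closest already-selected item.
            redundancy = max((similarity(item, s) for s in selected), default=0.0)
            return lam * relevance[item] - (1 - lam) * redundancy
        best = max(pool, key=marginal)
        selected.append(best)
        pool.remove(best)
    return selected

# Toy example: two items on topic "a", one on topic "b".
rel = {"a1": 1.0, "a2": 0.9, "b1": 0.8}
sim = lambda x, y: 1.0 if x[0] == y[0] else 0.0
print(rerank(["a1", "a2", "b1"], rel, sim, 2))  # the redundant "a2" is skipped
```

With pure relevance ranking (`lam=1.0`) the list would be `["a1", "a2"]`; discounting redundancy promotes `"b1"` instead, which is exactly the dependence between list positions that Goffman observed.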

Comparing Recommendation Lists

In my research, I am trying to understand how different recommender algorithms behave in different situations. We’ve known for a while that ‘different recommenders are different’, to paraphrase Sean McNee. However, we lack thorough data on how they are different in a variety of contexts. Our RecSys 2014 paper, User Perception of Differences in Recommender Algorithms (by myself, Max Harper, Martijn Willemsen, and Joseph Konstan), reports on an experiment that we ran to collect some of this data.

I have done some work on this subject in offline contexts already; my When Recommenders Fail paper looked at contexts in which different algorithms make different mistakes. LensKit makes it easy to test many different algorithms in the same experimental setup and context. This experiment brings my research goals back into the user experiment realm: directly measuring the ways in which users experience the output of different algorithms as being different.