Blog Articles 1–5

Using YubiKey as a Windows SSH Smartcard

A key in a hand
Photo by CMDR Shane on Unsplash.

Some time ago, I got a YubiKey 4. I use it to secure access to a number of web services I use, but also to authenticate myself over SSH. Among its features, it can act as an OpenPGP smartcard, which means — with some fiddling — it can be used for SSH authentication, so my SSH private key does not actually live on my physical computers.

This page documents the pieces I need to put together in order to get it working on Windows with all of the different SSH interfaces I use: PuTTY, WinSCP, OpenSSH for Windows, and Git. I do this through the Pageant agent.

Software and Requirements

I use SSH in several places in my workflow:

  • Remote shells via PuTTY, MobaXterm, or Windows OpenSSH.
  • Transfer files with WinSCP.
  • Push and pull from GitHub. Most of my local repositories are pulled over HTTPS, but a couple use SSH, and I use SSH (authenticated with a forwarded SSH agent connection) for all my repositories on servers.
  • Visual Studio Code remote sessions.

The first three can all be done with PuTTY, so as long as I can connect PuTTY to the smartcard, I'm good. VS Code, however, only supports Windows OpenSSH for its remote sessions, so I need it to be able to connect as well.

I occasionally use WSL, which introduces yet a third set of connection requirements. I don't do this very often, though.

History: GPG4Win

Most existing documentation focuses on using the YubiKey with GPG4Win and gpg-agent's OpenSSH and Pageant compatibility layers.

This works, but I found gpg-agent to be less than reliable, particularly when I inserted and removed my key. I commonly needed to restart the agent in order to make the public keys available again. I wrote a script to do that, but it was annoying. It also required custom editing of the configuration file to actually use my YubiKey.

But since I was using GPG4Win when I started, I used it to initialize the YubiKey's keys. I therefore cannot provide instructions for setting up the public and private keys without GPG.

Installing: Scoop

I use Scoop to install a lot of my Windows command line (and some GUI) utilities. Most of the software I use here is available with Scoop:

scoop bucket add extras
scoop install putty wsl-ssh-pageant git

The Heart: Pageant

PuTTY (and compatible programs, such as WinSCP and MobaXterm) uses the Pageant SSH agent (included with PuTTY). This agent lives in your system tray and handles authentication with your SSH private keys. Before using a YubiKey, I used it as my standard SSH agent on Windows with an on-disk private key, and it worked well.

Dr. Peter Koch has made a smartcard-enabled version of Pageant that Just Works, without configuration, and I have never needed to restart it after inserting my YubiKey.

I have Pageant SC start automatically at login, and have no problems connecting any of my PuTTY-compatible programs to it. To start it automatically, create a shortcut in the following directory:

%USERPROFILE%\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup

%USERPROFILE% is the path to your user profile directory, which contains all your user folders. For me, it's usually C:\Users\MichaelEkstrand or something like that.
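One way to create such a startup shortcut is from PowerShell; this is a sketch using the standard WScript.Shell COM interface, and the Pageant SC executable path below is a placeholder you would adjust to wherever you installed it:

```powershell
# Resolve the per-user Startup folder without hard-coding the path.
$startup = [Environment]::GetFolderPath('Startup')

# WScript.Shell is the standard COM interface for creating .lnk shortcuts.
$wsh = New-Object -ComObject WScript.Shell
$lnk = $wsh.CreateShortcut((Join-Path $startup 'Pageant SC.lnk'))
$lnk.TargetPath = "$env:USERPROFILE\Apps\PageantSC\pageant.exe"  # placeholder path
$lnk.Save()
```

Dropping a shortcut in the Startup folder requires no administrator privileges, which keeps this consistent with the rest of the setup.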

Connecting Git

By default, Git uses its own bundled version of OpenSSH (which is distinct from Microsoft's OpenSSH for Windows project). This SSH cannot talk to Pageant.

I fix this by configuring Git to use plink.exe from PuTTY. I set the GIT_SSH environment variable (in the Start menu, search for ‘Edit the environment variables for your account’) to the path of my plink.exe executable. Since PuTTY is installed with Scoop, this path is:

%USERPROFILE%\scoop\apps\putty\current\plink.exe

By using the current path, I avoid needing to change it when I update PuTTY.
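Equivalently, the variable can be set from a terminal; this is a sketch using setx, which writes per-user environment variables that take effect in newly started programs (note that cmd expands %USERPROFILE% before storing the value):

```shell
setx GIT_SSH "%USERPROFILE%\scoop\apps\putty\current\plink.exe"
```

After opening a new terminal, git will invoke plink.exe instead of its bundled ssh, and plink talks to Pageant natively.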

Connecting Windows OpenSSH

The last piece is to connect Windows OpenSSH (and if desired, WSL). The wsl-ssh-pageant program does this quite well, and it is available in Scoop.

I created another shortcut in the startup directory to run the following command:

%USERPROFILE%\scoop\apps\wsl-ssh-pageant\current\wsl-ssh-pageant-gui.exe -systray -winssh ssh-pageant

This exports an OpenSSH-compatible agent connection and proxies it to Pageant, which in turn hands requests off to the YubiKey.

To make OpenSSH use this connection, set the environment variable SSH_AUTH_SOCK to \\.\pipe\ssh-pageant. Running ssh-add -l in PowerShell should show your YubiKey's keys.
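Concretely, that looks something like the following sketch (setx makes the setting persistent for new sessions; the pipe name must match whatever you passed to wsl-ssh-pageant's -winssh flag):

```shell
setx SSH_AUTH_SOCK \\.\pipe\ssh-pageant

:: Then, in a NEW terminal with the YubiKey inserted:
ssh-add -l
```

If everything is wired up, ssh-add lists the key(s) on the card, and ssh, scp, and VS Code remote sessions all authenticate through the YubiKey.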

Next Steps & Final Remarks

The WSL/SSH Pageant bridge supports WSL in addition to Windows OpenSSH. I haven't needed it yet, but once I do and get it working, I'll update these instructions with the details.

All of this software works without administrator privileges, so this setup is usable in e.g. university computing lab environments.


I have spent some time experimenting with doing everything with Windows OpenSSH, with a per-machine private key stored in each machine's SSH agent. There are a few drawbacks to this approach, however:

  1. WinSCP and MobaXterm don't work. File transfer and X11 forwarding are therefore more difficult.
  2. There is no prompting for any kind of PIN or passphrase after enrolling a private key in the Windows OpenSSH agent. It's therefore possible for authentication attempts to happen without my knowledge, and without any way to stop them. While I am already in substantial trouble if an attacker is running code on my machine, the ability to easily bridge from my machine to another without any notice or plug to pull is troubling.
  3. Managing a different public key for each client machine is cumbersome, as I need to install that public key on several different servers and web accounts. There are at least 4 different places I need to store public keys for regular access.

Spoiled Books: Foundation

Cover of the 1966 Avon edition of Foundation

We're going to try something new here. Writing about books. And maybe other creative works. I'd like to put some more content on my blog, and books seem like a good source of that.

This isn't a formal review, or an essay submitted for academic consideration. It's just some of my thoughts about the work, why it's meaningful to me, what I think it says to the world, that sort of thing. It's opinionated and full of spoilers — if you would prefer to avoid them, the close-tab button is up there somewhere.

So with that, let's get started. Isaac Asimov's Foundation trilogy (comprising Foundation, Foundation and Empire, and Second Foundation) was probably, until last year, my favorite trilogy.

Premise: Quantitative Social Science, Perfected

Foundation follows the standard pattern of ‘straight’ sci-fi[1]: posit a scientific development or context and work out its social, environmental, and other implications. I enjoy reading sci-fi that does that and does it well.

When Foundation opens, we aren't left guessing at the premise. The recently-perfected science of ‘psychohistory’ — quantitative history, sociology, political science, etc., developed to be as predictive as physics in terms of the statistical behavior of societies of people — has shown that the Galactic Empire will soon collapse, and that there will be 10,000 years of war and conflict before a similarly stable arrangement is once again achieved. Hari Seldon, the discoverer and principal expert of psychohistory, has found a means of shortening this period to 1,000 years, and to that end has created two foundations at opposite ends of the galaxy. The books are primarily concerned with the activities of the First Foundation, ostensibly founded to curate an Encyclopedia of galactic knowledge and history. Through psychohistory, Seldon predicted that creating these foundations, with particular goals and instructions, would cause a second empire to emerge after only 1,000 years of conflict.

What are the ramifications of such a science? In this scheme of predictable courses of human events, what are the roles of science, commerce, religion, and government? These are the questions with which Foundation is concerned, at least at the outset.

Asimov at his Height

These books are, in my opinion, the height of Asimov's creative work. Short stories were the form in which he was by far the strongest, and Foundation was written and originally serialized as 8 short stories (4 for the first book, and 2 each — approaching novellas in their length — for the second and third).

With the spare strokes of a sketch artist, Asimov tells his story — the story of the first few centuries of the inter-empire conflict — by dropping in to key moments and describing specific events and characters that shape the broader universe. He paints its inflection points, and leaves the reader to interpolate the rest of the curve.

Most of Foundation would make terrible TV or cinema.

Science and its Subjects

The single most fascinating thing to me about the world of Foundation is the social-scientific premise: that we can predict the future course of human events with the same accuracy with which we can chart orbital mechanics.

Two crucial caveats to ‘psychohistory’ make it particularly tenable as a premise. First, it is statistical; it operates at the level of societies, at least as large as a good-sized city (better if it's being used to model an entire planet's population). It cannot predict the behavior of individual people, and it becomes less accurate as the size of the group being modeled decreases. This is how we would expect any such science to work.

Second, the predictions are invalid if the population for which they are computed is aware of them. Members of society can be aware of the existence of psychohistory, but cannot know its particular predictions; as a corollary, if enough members of the society in question knew psychohistory, they could deduce the predictions and thereby invalidate them. This is why there were no psychohistorians in the Encyclopedia Foundation.

I've wondered how we could test whether widespread dissemination of the findings of social and behavioral science affects their future validity. In some cases, could effects fail to replicate because they became sufficiently well-known to inoculate future research participants against them?

The necessity for subjects' ignorance also brings us to a major weakness: psychohistory is only deployable in heavily paternalistic settings. The Seldon Plan is the mother of all Nudges. There is no room for autonomy, for self-determination, except within the degrees of freedom afforded by the intrinsically statistical nature of psychohistory.

Breaking the Premise

The first book and a half are entirely concerned with working out the course of history under psychohistory in a relatively straightforward fashion.

In the second part of Foundation and Empire, the story ‘The Mule’, we take a turn: what happens when events arise that psychohistory cannot account for? In this case, it is the rise of ‘the Mule’, a mutant who is able to telepathically influence significant groups of people. Psychohistory cannot model individuals, and when an individual arises with such an outsized ability to affect the course of events, things break down.

Second Foundation describes the search for the other foundation. Seldon said he founded two, but did not specify the location of the second; it had no visible activity or influence, and some questioned whether it ever actually existed. In concluding the search, however, Asimov takes us to a second level of breaking down the premise: what if psychohistory never really worked? Or, at the very least, what if it was incomplete? The Second Foundation, it turns out, consisted entirely of psychohistorians, working out the remaining details of the Seldon Plan that Seldon himself was unable to complete before his death.

As a reader, I loved the trajectory of the scientific premise. Psychohistory itself was almost a character. What if it works? What happens when it meets an insurmountable obstacle? What if it never really worked as well as we were led to believe?

Staring in STS at the Great Men

But the Foundation is cracked. For all his imagination, Asimov couldn't create a world where the Important Decisions weren't mostly being made by old men of unmarked race literally smoking cigars in private meetings in back or upper or whatever rooms. We have interstellar travel, safe nuclear power that fits in your pocket, an empire that spans a galaxy, and the day-to-day of who is deciding the course of history and how is precisely as it was in 1950s America, cigars and all. We get a small breath of change in the last installment of the trilogy, when young Arkady Darell works around her father's rules and heads off to follow her grandmother's footsteps searching for the Second Foundation, but it is a very standard story of that type; it does not represent any real subversion or re-imagination of the workings of society. Everything is entirely predictable, continuing as it did when Asimov wrote. Could psychohistory account for the rise and consequences of intersectional feminism? Can it conceive of a society that takes seriously the work of building itself upon equitable justice?

It is perhaps this frustration that caused me to resonate so deeply when, on Page 3 of The Fifth Season, N. K. Jemisin said of the government and its trappings:

None of these places or people matter, by the way. I simply point them out for context.

No one book will do everything, but can we have a little imagination on what makes society tick? Please?

Throughout Foundation, Asimov also has a complex relationship with the Great Man view of history. Psychohistory itself, and the enactment of the plan, depend heavily on the Great Man Hari Seldon. He has research assistants, but there is little sign of serious collaborators. When the Second Foundation is revealed, however, psychohistory has taken a significantly more collaborative turn. It's a rite of passage for members of the Second Foundation to contribute something to the Seldon Plan, to work out some theorem of history that fills in one of its many remaining holes.

As the history unfolds, Asimov focuses on the men at the heart of the action for each of the inflection points. Psychohistory's inability to model individuals at first seems like it precludes a Great Man view, and yet, at each turn, it is a Man who brings about the shift that psychohistory predicted. Governance of the foundation's society was destined to shift to the mayors; Mayor Hardin solidified and strengthened the office of the Mayor and made it happen. History was destined to flow through the rise of the Traders and Merchant Princes; Hober Mallow made it happen. We're left with an unclear picture of how socio-environmental factors and individuals relate in the balance of influence on history, but the picture is one that is uncomfortably reliant on great men, a reliance I felt went beyond credibility.

Finally, the social science underlying Foundation is exclusively quantitative. There is little room for qualitative work (or if there is room, it is not well-stated), let alone critical analysis.

Other Books

Many years after publishing the trilogy, Asimov wrote two successor books (Foundation's Edge and Foundation and Earth) and a few prequels.

I can't recommend anything other than the trilogy. Foundation's Edge is a good enough book — it's clunky, but a significant improvement on some of Asimov's earlier attempts at novels. It's an interesting story that explores in much more depth things we learn about the Second Foundation.

But it leaves some questions open, and to answer those questions, one turns to Foundation and Earth.

In my humble opinion, Foundation and Earth is one of those rare books that retroactively makes other books worse. In his later career, Asimov was working to unify his sci-fi worlds (Robot, Empire, and Foundation) into a single, coherent universe. Connecting Foundation and Empire works well enough, but the way Foundation and Earth connects them to the Robot stories I found profoundly unsatisfying. Recasting the origin of psychohistory and the Seldon Plan so that they were really the work of the telepathic robot R. Daneel Olivaw, who has been secretly guiding human history across the galaxy from his secret base on the moon for 20–50K years, instead of a scientific discovery we could roll with as a premise, left a pretty bad taste in my mouth and stripped the wonder I experienced when I first read Foundation. So I prefer to pretend they do not exist, and enjoy the trilogy on its own.

(I haven't read the prequels at all — Asimov wrote them after Foundation and Earth, so I can't see how they wouldn't be predicated on the Robot connection I didn't like.)


I first read Foundation in grad school, at a time when I was beginning to think more about the import of social science on my understanding of the world and my work as a computer scientist. To read sci-fi that grabbed a social science premise head-on and ran with it was thrilling, and helped me sharpen some of my thinking about how the science I was learning interacted with life. It was also a series that John enjoyed, if my memory serves, and the time in which I read it was the time I was really starting to have productive discussions about this science-life interaction with him. Some of my fondness may well be a result of that context and impact, rather than any intrinsic merit of the trilogy. I don't particularly care.

It's unimaginative in problematic ways. It's got holes you can drive a visi-sonor delivery truck through. But I expect I'll read it again a few more times, and dearly love the way in which the story unfolds through little painted windows. I appreciate literature that gives a window on a much larger story, and in that respect, Foundation delivers.

I hope this won't be the last of these I do! I'm going to aim for writing them on Sundays for a while; we'll see if that's regular, or more of an intermittent Sunday activity. Not making any promises. But I hope to write one of them about my new favorite trilogy.

  1. I'm trying to avoid terms here that bring value judgements, like ‘pure’ or ‘hard’. This kind of sci-fi is no better or worse than any other; it's just one kind and purpose.

Fairness in Reviewing

Many computer science research communities are considering various changes to their reviewing processes to reduce bias, reduce barriers to participation, or accomplish other goals.

This week, Suresh Venkatasubramanian wrote about proposed changes in SODA to allow PC members to submit to the conference. There was a bunch of interesting discussion in the comments; this exchange in particular jumped out. Thomas Steinke said:

I completely disagree with the assertion that double-blinding is "a really easy solution" to conflicts of interest. It's particularly ridiculous given that you are active in the FAT* and FATML community, which (to the best of my knowledge) fundamentally rejects the idea that bias can simply be removed by blindness to race/gender/etc.

To which Suresh responded:

Why this works differently compared to "fairness through bliindness" in automated decision making is something i have to ponder.

I have a few thoughts on this. I originally wrote up a version of this as a comment there, but a wrong button push deleted my comment. So I'll write it up in more detail here, where I can include figures and have git to save the results.

First, a brief note on terminology — even though it is not nearly as widely used, I will refer to double-blind reviewing as ‘mutually anonymous’ reviewing and to fairness-through-blindness as fairness-through-unawareness.

Fairness: Imperfect and Contextual

I want to begin with a couple of points about the pursuit of fairness. First, fairness in an unfair world will always be imperfect. As Suresh pointed out elsewhere, mutual anonymity achieves useful but limited outcomes in reducing implicit bias. It is not perfect, even on its own terms (it is often easy for experienced community members to guess authorship, though I expect this is less reliable than many raising this argument against mutually anonymous reviewing believe). However, given the empirical evidence that mutually anonymous reviewing reduces bias in decision outcomes, and the plausible mechanism of operation, it seems like a worthwhile endeavor. Further, given the incompatibility between fairness definitions, in many problem settings we will have arguable unfairness of one kind even if we achieve it perfectly under another definition.

Second, the tradeoffs and possibilities in the pursuit of fairness are contextual. Different problem settings have different causes and costs of unfairness, as well as different affordances for reducing or mitigating bias. The peer review process has significant impact on livelihoods and careers, but it is a different problem than loan decision making or hiring.

So it seems to me that ‘does fairness-through-unawareness work here but not there?’ is not the most productive way to approach the question. Rather: do the limitations and possibilities — or lack thereof — of fairness-through-unawareness represent an acceptable or optimal tradeoff here, but an unacceptable one elsewhere? I don't have the answers, but I think contextualized tradeoffs will be a better way to pursue clarity than bright-line answers.

Peer Review Fairness Goals

graph LR
  Author --> Quality
  Author --> Relevance
  Author --> Secondary
  Quality --> Acceptance
  Relevance --> Acceptance
  Secondary --> Acceptance
  Author --> Acceptance
Structural equation model of peer review.

To think about what we would like to achieve in making peer review more fair, and what possible interventions are available to us, it helps to look at a path model of the reviewing problem and its relevant variables.

One way to frame the problem of debiasing peer review is that we want acceptance to be independent of authorship. That is, \mathrm{Pr}[\mathrm{Accept} \mid \mathrm{Auth}] = \mathrm{Pr}[\mathrm{Accept}], or at least that acceptance is independent of protected characteristics of the author(s), such as community connections or institutional prestige.

We can also reframe so that a paper should be accepted solely on the basis of its quality and relevance. This leads to a conditional independence view of the issue:

\mathrm{Pr}[\mathrm{Accept} \mid \mathrm{Qual}, \mathrm{Rel}, \mathrm{Auth}] = \mathrm{Pr}[\mathrm{Accept} \mid \mathrm{Qual}, \mathrm{Rel}]

Ok, great. But what are the paths through which authorship can affect acceptance? This will help us better analyze possible levers for correcting them. If we accept my path model as sufficiently complete for useful discussion, there are four:

  • Through quality (Author → Quality → Acceptance). We don't want to break the Quality → Acceptance link, since it is largely the point of peer review. We cannot do a lot about the Author → Quality link; authors with more experience are likely to write better papers, or at least papers that are perceived as better (though more on this later).

  • Through relevance (Author → Relevance → Acceptance). This has the same basic problems as quality. The author link is probably more pronounced here, though, as authors who have long experience in a particular community have a better read on what the community thinks is relevant, and how to sell their work as relevant, than newcomers. This is perhaps undesirable, but I also think it is likely unavoidable.

  • Through secondary characteristics (Author → Secondary → Acceptance). This is deliberately vague; it can include secondary characteristics that give away author identities, but also includes other things that aren't quality or relevance but affect reviewer decisions.

  • Directly (Author → Acceptance). This is a clearly problematic effect.
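These four paths can be summarized in one equation. In the path model above, Quality, Relevance, and the secondary characteristics each have only Author as a parent, so they are conditionally independent given the author; a sketch of the resulting decomposition of acceptance's dependence on authorship (symbols abbreviated from the model) is:

```latex
\mathrm{Pr}[\mathrm{Accept} \mid \mathrm{Auth}]
  = \sum_{q, r, s}
    \underbrace{\mathrm{Pr}[\mathrm{Accept} \mid q, r, s, \mathrm{Auth}]}_{\text{direct path}}
    \; \mathrm{Pr}[q \mid \mathrm{Auth}]
    \; \mathrm{Pr}[r \mid \mathrm{Auth}]
    \; \mathrm{Pr}[s \mid \mathrm{Auth}]
```

Mutually anonymous review aims to remove Auth from the first factor (the direct path); the dependence carried through the three mediator distributions is untouched.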

Debiasing Levers

Mutually anonymous peer review deals with the direct influence of authorship on acceptance. That's all it can affect; the indirect paths are all still present. It is imperfect, but available empirical data indicates it is useful.

What would a fairness-through-awareness approach to debiasing peer review look like? In an ideal world, it might look like discounting the effects of secondary characteristics while leaving the influence of quality and relevance untouched. I think it is extremely unlikely that such a targeted intervention is possible — fairness-through-awareness would likely affect quality and/or relevance judgements. Ideally, it would debias our assessment of quality or relevance, not change their influence on acceptance, but I also think that is unlikely in practice.

However, mutually anonymous reviewing processes are not the only mechanism at our disposal. Clear reviewer instructions and — crucially — structured review forms can, I think, help reduce the influence of secondary characteristics. Structured review forms break the review judgement down into individual pieces, encouraging the reviewer to focus on specific aspects of the paper relevant to the decision process. Particularly good ones do this in a way that helps counteract bias, through things such as separating the standard to which a contribution should be held from the assessment of whether it meets that standard (CSCW did this in at least one year).

Quality and relevance are much more difficult, and as I said above, I don't think we want to affect their influence on the accept/reject decision. However, it may still be possible to affect the influence of author characteristics on quality and relevance: I would love to see some good data, but I think revise-and-resubmit processes may be able to help authors whose initial submission doesn't meet quality or relevance expectations get their paper over the bar. This isn't perfect, as experienced authors will need to do less revision for publication and thus will be able to publish more papers with comparable resources, but it may help this influence pathway.


Mutually anonymous peer review is not perfect, but it does block one critical pathway by which author characteristics can affect acceptance decisions. I do not think that fairness-through-awareness offers superior debiasing capabilities in this context. Finally, there are additional changes to the reviewing process that, when combined with mutually anonymous review, can reduce the influence of other undesirable bias pathways.

I remain convinced that mutual anonymity is a better way to structure peer review for computer science conferences, and don't think this represents a fundamental incompatibility with the known limitations of fairness-through-unawareness.

2018 State of the Tools

For the last two years, I've written up an annual post describing my current computing setup. Time for another 🙂.


I continue to work to reduce my technical distance: when practical, I want to be able to recommend much of the software I use to others, even to non-technical users.

I also want tools that just work without a great deal of fussing or lots of installation. I want to be able to move in to a new machine quickly, and to be productive without relying on sophisticated customizations I carry around.

Hardware, Operating System, and Browser

I continue to use Windows 10 as my client OS, using Windows Subsystem for Linux (usually with Debian) and/or Docker when I need local *nix support.

Servers are Red Hat at work, and FreeBSD for our (now little-used) NAS at home. I switched from NixOS to FreeBSD because I wasn't getting a lot out of Nix anymore, and FreeBSD has very good ZFS support.

I am still using a Surface Pro 4 for my personal computer. At work I have switched to the Surface Go for my portable machine, and still use a Dell Precision (now with 2 24" 4K displays) as my workstation. I'm running the Kensington Expert Mouse and the Microsoft Sculpt keyboard to help keep my tendonitis in check.

My mobile device is an iPhone SE, and I was very glad the Apple store in Vancouver still had a few in stock the week after they were discontinued. I very much hope Apple releases an SE2 with an OLED display before my SE goes end-of-life.

At home I am still using Firefox as my primary browser, although a recent bug my profile has developed might send me scurrying. At work I use Chrome because we're a Google campus and it's the only browser supported by Paperpile.

E-mail, Storage, Etc.

Boise State is a Google campus, so everything is on Google: e-mail, calendaring, office suite, etc. I use Google Drive for syncing work files between computers, and for mobile access.

For personal things, we are using Office 365, so my e-mail is in Outlook (or Windows Mail) and files on OneDrive.


I try to write in Word when practical, although I often do first drafts in Google Docs to make collaborative discussion with colleagues easier. Final versions of papers are often in LaTeX with Overleaf, because the new ACM template is very difficult to use in Word.

I use PaperPile for citation management; for Word integration, I export to BibTeX and use BibTeX4Word.

Other writing is generally in Markdown (using a variety of parsers).

Programming Languages

I am doing more and more work in Python now. Since switching LensKit to Python, it makes sense to keep things in a consistent language. While I still personally prefer R for data analysis and statistics, Python is good enough and R's benefits aren't worth requiring my students to learn multiple languages. Invoke is replacing Gradle as my standard task runner; I am not entirely happy with it, but it gets the job done well enough for now. I am doing very little Java these days.

I still enjoy JavaScript a lot, and write it when I can. My web site is lovingly crafted with a medium pile of JavaScript code. I'm not using JS for data processing at work very much any more, especially for research, because it adds additional complexity to the toolchain for students working with the code.

I now have more use for Rust, however; this fact makes me very happy. I recently finished rewriting the data import and linking code for the book gender project from SQL, JavaScript, and R (with a little Fortran) into SQL, Python, and Rust. Rust is used for first-pass data cleaning and munging; it is much faster than JS for some of this (my VIAF importer went from a 20-hour process to about 3 hours changing from Node + expat to Rust + quick-xml). I also suspect Rust will be more approachable than JS for students & collaborators, because the code structures are more straightforward (like Python) instead of the heavy inversion of asynchronous JavaScript code.

That's about all I'm writing, aside from the occasional shell script.

Editing and Developing

VS Code. With Python, JavaScript, and Rust, Code provides me with all the tooling I need and has a significantly lower footprint than JetBrains products.

In the terminal I use GNU Nano.

UNIX Shell

I'm using Bash now; while Fish is nice, the overhead of carrying my own shell around isn't worth it. I've got a modest set of Bash customizations I carry around via Git, and it gets the job done.

I'm using tmux, direnv, and z to make life easier.


I'm no longer rolling my own backups; BackBlaze is taking care of them for me.

Documents and Drawings

I use Grapholite for diagrams, unless they're too complicated and I need to turn to Visio. I use Inkscape for non-diagram vector graphics. is my first call for raster image editing (install it from the Windows Store though, not its web site) and I upgrade to Krita for more advanced needs and Darktable for dealing with RAW files from the camera.

I use PowerPoint for all my presentations. I share them online with a read-only link in OneDrive.

I use Drawboard PDF for marking up PDFs on the Surface, and usually Adobe Reader for my other PDF viewing needs; I also have Acrobat on hand for when I need to do advanced PDF operations.

I have also been doing some typography design; I use Scribus for print layout and either Montax Imposer or Bookbinder for imposition. I have been toying with the idea of writing a simple PDF imposer as an excuse to learn Electron, but haven't started on that at all. I currently use the free version of High-Logic MainType for font management.


2018 in purple neon fireworks letters
Photo by NordWood Themes on Unsplash.

As I've done the last two years, it's time for the annual what-I-did-this-year post! Well, about time; there are a couple more weeks in the year, but I expect their results to be mostly tidying up loose ends of things in this list.

  • Presented two papers at the inaugural Conference on Fairness, Accountability, and Transparency; one with the PIReTs, and another with Hoda Mehrpouyan and Rezvan Joshaghani.

  • Published a CHI workshop paper on fairness in privacy tradeoffs with Bart Knijnenburg, Hoda Mehrpouyan, and Rezvan Joshaghani.

  • Submitted a paper to SIGIR (rejected).

  • Submitted a proposal to NSF CyberLearning (declined).

  • Won an NSF CAREER award.

  • Saw Hamilton (the traveling company in Portland).

  • Book chapter with Daniel Kluver and Joe Konstan went to press.

  • Bought a road bike and began recreational distance riding. I got up to being able to do 30mi rides before winding down for the weather.

  • Co-organized the FairUMAP workshop on fairness in user modeling and personalization with Bamshad Mobasher, Robin Burke, and Bettina Berendt.

  • Oversaw build-out of the LITERATE prototype and carried out user study with fantastic collaborators Sole Pera and Katherine Wright.

  • Ran a very successful RecSys 2018 with Sole Pera and our amazing organizing committee.

  • Published and presented our work on author gender in RecSys 2018.

  • Taught CS 410/510 (Databases) in both fall and spring.

  • Taught CS-HU 310, our one-credit database introduction, in the summer.

  • Substantially improved my response time in grading student work.

  • Published two workshop papers and contributed to a NRMERA conference talk about the LITERATE project.

  • Supervised my M.S. student Mucun Tian to his first first-author paper, a work-in-progress piece for the REVEAL workshop on offline evaluation.

  • Co-organized the second FATREC Workshop on Responsible Recommendation, with more than 50 registered and a full room all day.

  • With Fernando Diaz and Asia Biega, proposed and had accepted a fairness track for TREC 2019.

  • With Michael Veale, organized publicity & outreach for ACM FAT* 2019 as Publicity & PR Co-chair.

  • Rebuilt LensKit in Python (project, paper).

  • Began supervising my first Ph.D student, Amifa Raj.

  • Submitted a proposal to the NSF 2026 IDEA Machine with Sole Pera, Hoda Mehrpouyan, Cathie Olschanowsky, and Elena Sherman.

  • Sat on committees for two successful Ph.D proposals (Ion Madrazo Azpiazu and Kimberley Gardner).

  • Gave invited seminar talks at CMU and Clemson.

  • Reviewed a number of papers, though not as many as last year.

  • Redid my academic visual identity with a website refresh and change of standard font.

I did not submit nearly as many grant proposals this year as last, because I received the CAREER early in the year and needed to focus on getting that research going along with RecSys organization.

Teaching
  • Spring — CS 410/510 (Databases)
  • Summer — CS-HU 310 (Intro to Databases)
  • Fall — CS 410/510 (Databases)

Active Grants

Publications
Nicola Ferro, Norbert Fuhr, Gregory Grefenstette, Joseph A. Konstan, Pablo Castells, Elizabeth M. Daly, Thierry Declerck, Michael D. Ekstrand, Werner Geyer, Julio Gonzalo, Tsvi Kuflik, Krister Lindén, Bernardo Magnini, Jian-Yun Nie, Raffaele Perego, Bracha Shapira, Ian Soboroff, Nava Tintarev, Karin Verspoor, Martijn C. Willemsen, and Justin Zobel. 2018. From Evaluating to Forecasting Performance: How to Turn Information Retrieval, Natural Language Processing and Recommender Systems into Predictive Sciences (Dagstuhl Perspectives Workshop 17442). Dagstuhl Manifestos 7(1) (November 2018), 96–139. DOI: 10.4230/DagMan.7.1.96.

Katherine Landau Wright, Michael D. Ekstrand, and Maria Soledad Pera. 2018. Supplementing Classroom Texts with Online Resources. At 2018 Annual Meeting of the Northwest Rocky Mountain Educational Research Association.

Michael D. Ekstrand, Ion Madrazo Azpiazu, Katherine Landau Wright, and Maria Soledad Pera. 2018. Retrieving and Recommending for the Classroom: Stakeholders, Objectives, Resources, and Users. In Proceedings of the ComplexRec 2018 Second Workshop on Recommendation in Complex Scenarios (ComplexRec '18), at RecSys 2018. Cited 2 times.

Toshihiro Kamishima, Pierre-Nicolas Schwab, and Michael D. Ekstrand. 2018. 2nd FATREC Workshop: Responsible Recommendation. In Proceedings of the 12th ACM Conference on Recommender Systems (RecSys '18). ACM. DOI: 10.1145/3240323.3240335. Cited 3 times.

Michael D. Ekstrand, Mucun Tian, Mohammed R. Imran Kazi, Hoda Mehrpouyan, and Daniel Kluver. 2018. Exploring Author Gender in Book Rating and Recommendation. In Proceedings of the 12th ACM Conference on Recommender Systems (RecSys '18). ACM, pp. 242–250. DOI: 10.1145/3240323.3240373. arXiv:1808.07586v1 [cs.IR]. Acceptance rate: 17.5%. Cited 10 times.

Mucun Tian and Michael D. Ekstrand. 2018. Monte Carlo Estimates of Evaluation Metric Error and Bias. Computer Science Faculty Publications and Presentations 148. Boise State University. Presented at the REVEAL 2018 Workshop on Offline Evaluation for Recommender Systems, a workshop at RecSys 2018. DOI: 10.18122/cs_facpubs/148/boisestate. NSF PAR 10074452.

Michael D. Ekstrand. 2018. The LKPY Package for Recommender Systems Experiments: Next-Generation Tools and Lessons Learned from the LensKit Project. Computer Science Faculty Publications and Presentations 147. Boise State University. Presented at the REVEAL 2018 Workshop on Offline Evaluation for Recommender Systems, a workshop at RecSys 2018. DOI: 10.18122/cs_facpubs/147/boisestate. arXiv:1809.03125 [cs.IR]. Cited 1 time.

Maria Soledad Pera, Katherine Wright, and Michael D. Ekstrand. 2018. Recommending Texts to Children with an Expert in the Loop. In Proceedings of the 2nd International Workshop on Children & Recommender Systems (KidRec '18), at IDC 2018. DOI: 10.18122/cs_facpubs/140/boisestate.

Nicola Ferro, Norbert Fuhr, Gregory Grefenstette, Joseph A. Konstan, Pablo Castells, Elizabeth M. Daly, Thierry Declerck, Michael D. Ekstrand, Werner Geyer, Julio Gonzalo, Tsvi Kuflik, Krister Lindén, Bernardo Magnini, Jian-Yun Nie, Raffaele Perego, Bracha Shapira, Ian Soboroff, Nava Tintarev, Karin Verspoor, Martijn C. Willemsen, and Justin Zobel. 2018. The Dagstuhl Perspectives Workshop on Performance Modeling and Prediction. SIGIR Forum 52(1) (June 2018), 91–101. DOI: 10.1145/3274784.3274789. Cited 6 times.

Bamshad Mobasher, Robin Burke, Michael D. Ekstrand, and Bettina Berendt. 2018. UMAP 2018 Fairness in User Modeling, Adaptation and Personalization (FairUMAP 2018) Chairs' Welcome & Organization. In Adjunct Publication of the 26th Conference on User Modeling, Adaptation, and Personalization (UMAP '18). ACM. DOI: 10.1145/3213586.3226200.

Daniel Kluver, Michael D. Ekstrand, and Joseph A. Konstan. 2018. Rating-Based Collaborative Filtering: Algorithms and Evaluation. In Social Information Access. Peter Brusilovsky and Daqing He, eds. Springer-Verlag, Lecture Notes in Computer Science vol. 10100, pp. 344–390. ISBN 978-3-319-90091-9. DOI: 10.1007/978-3-319-90092-6_10. Cited 21 times.

Rezvan Joshaghani, Michael D. Ekstrand, Bart Knijnenburg, and Hoda Mehrpouyan. 2018. Do Different Groups Have Comparable Privacy Tradeoffs? At Moving Beyond a ‘One-Size Fits All’ Approach: Exploring Individual Differences in Privacy, a workshop at CHI 2018.

Michael D. Ekstrand, Mucun Tian, Ion Madrazo Azpiazu, Jennifer D. Ekstrand, Oghenemaro Anuyah, David McNeill, and Maria Soledad Pera. 2018. All The Cool Kids, How Do They Fit In?: Popularity and Demographic Biases in Recommender Evaluation and Effectiveness. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency (FAT* 2018). PMLR, Proceedings of Machine Learning Research 81:172–186. Acceptance rate: 24%. Cited 10 times.

Michael D. Ekstrand, Rezvan Joshaghani, and Hoda Mehrpouyan. 2018. Privacy for All: Ensuring Fair and Equitable Privacy Protections. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency (FAT* 2018). PMLR, Proceedings of Machine Learning Research 81:35–47. Acceptance rate: 24%. Cited 10 times.