Published on Saturday, October 12, 2019 and tagged with howto and tools. Updated on Saturday, October 12, 2019.
Some time ago, I got a YubiKey 4. I use it to secure access to a number of web services I use, but also to authenticate myself over SSH. Among its features, it can act as an OpenPGP smartcard, which means — with some fiddling — it can be used for SSH authentication, so my SSH private key does not actually live on my physical computers.
This page documents the pieces I need to put together in order to get it working on Windows with all of the different SSH interfaces I use: PuTTY, WinSCP, OpenSSH for Windows, and Git. I do this through the Pageant agent.
Software and Requirements
I use SSH in several places in my workflow:
Remote shells via PuTTY, MobaXterm, or Windows OpenSSH.
File transfers with WinSCP.
Pushing to and pulling from GitHub. Most of my local repositories are pulled over HTTPS, but a couple use SSH, and I use SSH (authenticated with a forwarded SSH agent connection) for all my repositories on servers.
Visual Studio Code remote sessions.
The first three can all be done with PuTTY, so as long as I can connect PuTTY to the smartcard, I'm good. VS Code, however, only supports Windows OpenSSH for its remote sessions, so I need it to be able to connect as well.
I occasionally use WSL, which induces yet a third set of requirements for connection. I don't do this very often, though.
Most existing documentation focuses on using the YubiKey with GPG4Win and gpg-agent's OpenSSH and Pageant compatibility layers.
This works, but I found gpg-agent to be less than reliable, particularly when I inserted and removed my key. I commonly needed to restart the agent in order to make the public keys available again. I wrote a script to do that, but it was annoying. It also required custom editing of the configuration file to actually use my YubiKey.
But since I was using GPG4Win when I started, I used it to initialize the YubiKey's keys. I therefore cannot provide instructions for setting up the public and private keys without GPG.
I use Scoop to install a lot of my Windows command line (and some GUI) utilities. Most of the software I use here is available with Scoop:
PuTTY (and compatible programs, such as WinSCP and MobaXterm) use the Pageant SSH agent (included with PuTTY). This agent lives in your system tray and handles authentication with your SSH private keys. Before using a YubiKey, I used it as my standard SSH agent on Windows with an on-disk private key, and it worked well.
%USERPROFILE% is the path to your user profile directory, which contains all your user folders. For me, it's usually C:\Users\MichaelEkstrand or something like that.
By default, Git uses its own bundled version of OpenSSH (which is distinct from Microsoft's OpenSSH for Windows project). This SSH cannot talk to Pageant.
I fix this by configuring Git to use plink.exe from PuTTY. I set the GIT_SSH environment variable (in the Start menu, search for ‘Edit the environment variables for your account’) to the path to my plink.exe executable. Since PuTTY is installed with Scoop, this path is:
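As a concrete sketch, here is what that setting looks like on my machine (this assumes Scoop's default install root under your user profile; `current` is the version-independent junction Scoop maintains, so adjust the path if your Scoop root differs):

```shell
REM Set GIT_SSH for the current user (run in cmd).
REM The path follows Scoop's default app layout; 'current' always
REM points at the installed version, so updates don't break it.
setx GIT_SSH "%USERPROFILE%\scoop\apps\putty\current\plink.exe"
```

You can equivalently set the same value through the ‘Edit the environment variables for your account’ dialog mentioned above.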
By using the current path, I avoid needing to change it when I update PuTTY.
Connecting Windows OpenSSH
The last piece is to connect Windows OpenSSH (and if desired, WSL). The wsl-ssh-pageant program does this quite well, and it is available in Scoop.
I created another shortcut in the startup directory to run the following command:
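For reference, the command looks something like this (the flag names are from my reading of the wsl-ssh-pageant documentation, so double-check them against your installed version):

```shell
REM Create a Windows named pipe (\\.\pipe\ssh-pageant) backed by Pageant,
REM and keep a systray icon so the bridge is easy to find and stop.
wsl-ssh-pageant.exe --systray --winssh ssh-pageant
```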
This exports an OpenSSH-compatible agent connection and proxies it to Pageant, which in turn hands it off to the YubiKey.
To make OpenSSH use this connection, set the environment variable SSH_AUTH_SOCK to \\.\pipe\ssh-pageant. Running ssh-add -l in PowerShell should show your YubiKey's keys.
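In practice, that is two steps: persist the variable once, then verify in a fresh shell (a sketch in PowerShell, using the pipe name described above):

```powershell
# Persist the agent socket variable for future sessions, then
# (in a new shell) list the keys Pageant exposes from the YubiKey.
[Environment]::SetEnvironmentVariable('SSH_AUTH_SOCK', '\\.\pipe\ssh-pageant', 'User')
ssh-add -l
```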
Next Steps & Final Remarks
The WSL/SSH Pageant bridge supports WSL in addition to Windows OpenSSH. I haven't needed it yet, but once I do and get it working, I'll update these instructions with the details.
All of this software works without administrator privileges, so this setup is usable in e.g. university computing lab environments.
I have spent some time experimenting with doing everything with Windows OpenSSH, with a per-machine private key stored in each machine's SSH agent. There are a few drawbacks to this approach, however:
WinSCP and MobaXterm don't work. File transfer and X11 forwarding are therefore more difficult.
There is no prompting for any kind of PIN or passphrase after enrolling a private key in the Windows OpenSSH agent. It's therefore possible for authentication attempts to happen without my knowledge, and without any way to stop them. While I am already in substantial trouble if an attacker is running code on my machine, the ability to easily bridge from my machine to another without any notice or plug to pull is troubling.
Managing a different public key for each client machine is cumbersome, as I need to install that public key on several different servers and web accounts. There are at least 4 different places I need to store public keys for regular access.
Published on Sunday, July 7, 2019 and tagged with books. Updated on Tuesday, August 13, 2019.
We're going to try something new here. Writing about books. And maybe other creative works. I'd like to put some more content on my blog, and books seem like a good source of that.
This isn't a formal review, or an essay submitted for academic consideration. It's just some of my thoughts about the work, why it's meaningful to me, what I think it says to the world, that sort of thing. It's opinionated and full of spoilers — if you would prefer to avoid them, the close-tab button is up there somewhere.
So with that, let's get started. Isaac Asimov's Foundation trilogy (comprising Foundation, Foundation and Empire, and Second Foundation) was probably, until last year, my favorite trilogy.
Premise: Quantitative Social Science, Perfected
Foundation follows the standard pattern of ‘straight’ sci-fi1: posit a scientific development or context and work out social, environmental, and other implications of it. I enjoy reading sci-fi that does that and does it well.
When Foundation opens, we aren't left guessing at the premise. The recently-perfected science of ‘psychohistory’ — quantitative history, sociology, political science, etc., developed to be as predictive as physics in terms of the statistical behavior of societies of people — has shown that the Galactic Empire will soon collapse, and there will be 10,000 years of war and conflict before a similarly stable arrangement is once again achieved. Hari Seldon, the discoverer and principal expert of psychohistory, has discovered a means of shortening this period to 1000 years, and to that end, created two foundations at opposite ends of the galaxy. The books are primarily concerned with the activities of the First Foundation, ostensibly founded to curate an Encyclopedia of galactic knowledge and history. Through psychohistory, Seldon predicted that creating these foundations, with particular goals and instructions, would cause the emergence of a second empire after only 1000 years of conflict.
What are the ramifications of such a science? In this scheme of predictable courses of human events, what are the roles of science, commerce, religion, and government? These are the questions with which Foundation is concerned, at least at the outset.
Asimov at his Height
These books are, in my opinion, the height of Asimov's creative work. Short stories were the form in which he was by far the strongest, and Foundation was written and originally serialized as 8 short stories (4 for the first book, and 2 each — approaching novellas in their length — for the second and third).
With the spare strokes of a sketch artist, Asimov tells his story — the story of the first few centuries of the inter-empire conflict — by dropping in to key moments and describing specific events and characters that shape the broader universe. He paints its inflection points, and leaves the reader to interpolate the rest of the curve.
Most of Foundation would make terrible TV or cinema.
Science and its Subjects
The single most fascinating thing to me about the world of Foundation is the social-scientific premise: that we can predict the future course of human events with the same accuracy with which we can chart orbital mechanics.
Two crucial caveats to ‘psychohistory’ make it particularly tenable as a premise. First, it is statistical; it operates at the level of societies, at least as large as a good-sized city (better if it's being used to model an entire planet's population). It cannot predict the behavior of individual people, and it becomes less accurate as the size of the group being modeled decreases. This is how we would expect any such science to work.
Second, the predictions are invalid if the population for which they are computed is aware of them. Members of society can be aware of the existence of psychohistory, but cannot know its particular predictions; as a corollary, if enough members of the society in question knew psychohistory itself, they could deduce and thereby invalidate the predictions. This is why there were no psychohistorians in the Encyclopedia Foundation.
I've wondered how we could test whether widespread dissemination of the findings of social and behavioral science affects their future validity. In some cases, could effects fail to replicate because they became sufficiently well-known so as to inoculate future research participants against them?
The necessity for subjects' ignorance also brings us to a major weakness: psychohistory is only deployable in heavily paternalistic settings. The Seldon Plan is the mother of all Nudges. There is no room for autonomy, for self-determination, except within the degrees of freedom afforded by the intrinsically statistical nature of psychohistory.
Breaking the Premise
The first book and a half are entirely concerned with working out the course of history under psychohistory in a relatively straightforward fashion.
In the second part of Foundation and Empire, the story ‘The Mule’, we take a turn: what happens when events arise that psychohistory cannot account for? In this case, it is the rise of ‘the Mule’, a mutant who is able to telepathically influence significant groups of people. Psychohistory cannot model individuals, and when an individual arises with such outsized ability to affect the course of events, things break down.
Second Foundation describes the search for the other foundation. Seldon said he founded two, but did not specify the location of the second; it had no visible activity or influence, and some were questioning whether it ever actually existed. In concluding the search, however, Asimov takes us to a second level of breaking down the premise: what if psychohistory never really worked? Or, at the very least, what if it was incomplete? The Second Foundation, it turns out, was entirely psychohistorians, working out the remaining details of the Seldon Plan that he was unable to complete before his death.
As a reader, I loved the trajectory of the scientific premise. Psychohistory itself was almost a character. What if it works? What happens when it meets an insurmountable obstacle? What if it never really worked as well as we were led to believe?
Staring in STS at the Great Men
But the Foundation is cracked. For all his imagination, Asimov couldn't create a world where the Important Decisions weren't mostly being made by old men of unmarked race literally smoking cigars in private meetings in back or upper or whatever rooms. We have interstellar travel, safe nuclear power that fits in your pocket, an empire that spans a galaxy, and the day-to-day of who is deciding the course of history and how is precisely as it was in 1950s America, cigars and all. We get a small breath of change in the last installment of the trilogy, when young Arkady Darell works around her father's rules and heads off to follow her grandmother's footsteps searching for the Second Foundation, but it is a very standard story of that type; it does not represent any real subversion or re-imagination of the workings of society. Everything is entirely predictable, continuing as it did when Asimov wrote. Could psychohistory account for the rise and consequences of intersectional feminism? Can it conceive of a society that takes seriously the work of building itself upon equitable justice?
It is perhaps this frustration that caused me to resonate so deeply when, on Page 3 of The Fifth Season, N. K. Jemisin said of the government and its trappings:
None of these places or people matter, by the way. I simply point them out for context.
No one book will do everything, but can we have a little imagination on what makes society tick? Please?
Throughout Foundation, Asimov also has a complex relationship with the Great Man view of history. Psychohistory itself, and the enactment of the plan, depend heavily on the Great Man Hari Seldon. He has research assistants, but there is little sign of serious collaborators. When the Second Foundation is revealed, however, psychohistory has taken a significantly more collaborative turn. It's a rite of passage for members of the Second Foundation to contribute something to the Seldon Plan, to work out some theorem of history that fills in one of its many remaining holes.
As the history unfolds, Asimov focuses on the men at the heart of the action for each of the inflection points. Psychohistory's inability to model individuals at first seems to preclude a Great Man view, and yet, at each turn, it is a Man who brings about the shift that psychohistory predicted. Governance of the Foundation's society was destined to shift to the mayors; Mayor Hardin solidified and strengthened the office of the Mayor and made it happen. History was destined to flow through the rise of the Traders and Merchant Princes; Hober Mallow made it happen. We're left with an unclear picture of how socio-environmental factors and individuals relate in the balance of influence on history, but the picture is one that is uncomfortably reliant on great men, a reliance I felt went beyond credibility.
Finally, the social science underlying Foundation is exclusively quantitative. There is little room for qualitative work (or if there is room, it is not well-stated), let alone critical analysis.
Many years after publishing the trilogy, Asimov wrote two successor books (Foundation's Edge and Foundation and Earth) and a few prequels.
I can't recommend anything other than the trilogy. Foundation's Edge is a good enough book — it's clunky, but a significant improvement on some of Asimov's earlier attempts at novels. It's an interesting story that explores in much more depth things we learn about the Second Foundation.
But it leaves some questions open, and to answer those questions, one turns to Foundation and Earth.
In my humble opinion, Foundation and Earth is one of those rare books that retroactively makes other books worse. In his later career, Asimov was working to unify his sci-fi worlds (Robot, Empire, and Foundation) into a single, coherent universe. Connecting Foundation and Empire works well enough, but the way Foundation and Earth connects them to the Robot stories I found profoundly unsatisfying. It recasts the origin of psychohistory and the Seldon Plan: instead of a scientific discovery we could roll with as a premise, they were really the work of the telepathic robot R. Daneel Olivaw, who has been secretly guiding human history across the galaxy from his secret base on the moon for 20–50K years. That left a pretty bad taste in my mouth and stripped the wonder I experienced when I first read Foundation. So I prefer to pretend these books do not exist, and enjoy the trilogy on its own.
(I haven't read the prequels at all — Asimov wrote them after Foundation and Earth, so I can't see how they wouldn't be predicated on the Robot connection I didn't like.)
I first read Foundation in grad school, at a time when I was beginning to think more about the import of social science on my understanding of the world and my work as a computer scientist. To read sci-fi that grabbed a social science premise head-on and ran with it was thrilling, and helped me sharpen some of my thinking about how the science I was learning interacted with life. It was also a series that John enjoyed, if my memory serves, and the time in which I read it was the time I was really starting to have productive discussions about this science-life interaction with him. Some of my fondness may well be a result of that context and impact, rather than any intrinsic merit of the trilogy. I don't particularly care.
It's unimaginative in problematic ways. It's got holes you can drive a visi-sonor delivery truck through. But I expect I'll read it again a few more times, and dearly love the way in which the story unfolds through little painted windows. I appreciate literature that gives a window on a much larger story, and in that respect, Foundation delivers.
I hope this won't be the last of these I do! I'm going to aim for writing them on Sundays for a while; we'll see if that's regular, or more of an intermittent Sunday activity. Not making any promises. But I hope to write one of them about my new favorite trilogy.
I'm trying to avoid terms here that bring value judgements, like ‘pure’ or ‘hard’. This kind of sci-fi is no better or worse than any other; it's just one kind and purpose.↩
I completely disagree with the assertion that double-blinding is "a really easy solution" to conflicts of interest. It's particularly ridiculous given that you are active in the FAT* and FATML community, which (to the best of my knowledge) fundamentally rejects the idea that bias can simply be removed by blindness to race/gender/etc.
Why this works differently compared to "fairness through blindness" in automated decision making is something I have to ponder.
I have a few thoughts on this. I originally wrote up a version of this as a comment there, but a wrong button push deleted my comment. So I'll write it up in more detail here, where I can include figures and have git to save the results.
First, a brief note on terminology — even though it is not nearly as widely used, I will refer to double-blind reviewing as ‘mutually anonymous’ and fairness-through-blindness as fairness-through-unawareness.
Fairness: Imperfect and Contextual
I want to begin with a couple of points about the pursuit of fairness. First, fairness in an unfair world will always be imperfect. As Suresh pointed out elsewhere, mutual anonymity achieves useful but limited outcomes in reducing implicit bias. It is not perfect, even on its own terms (it is often easy for experienced community members to guess authorship, though I expect this is less reliable than many raising this argument against mutually anonymous reviewing believe). However, given the empirical evidence that mutually anonymous reviewing reduces bias in decision outcomes, and the plausible mechanism of operation, it seems like a worthwhile endeavor. Further, given the incompatibility between fairness definitions, in many problem settings we will have arguable unfairness of one kind even if we achieve fairness perfectly under another definition.
Second, the tradeoffs and possibilities in the pursuit of fairness are contextual. Different problem settings have different causes and costs of unfairness, as well as different affordances for reducing or mitigating bias. The peer review process has significant impact on livelihoods and careers, but it is a different problem than loan decision making or hiring.
So it seems to me that ‘does fairness-through-unawareness work here but not there?’ is not the most productive way to approach the question. Rather, do the limitations and possibilities — or lack thereof — of fairness-through-unawareness represent an acceptable or optimal tradeoff here, but unacceptable elsewhere? I don't have the answers, but I think contextualized tradeoffs will be a better way to pursue clarity than bright-line answers.
Peer Review Fairness Goals
To think about what we would like to achieve in making peer review more fair, and what possible interventions are available to us, it helps to look at a path model of the reviewing problem and its relevant variables.
One way to frame the problem of debiasing peer review is that we want acceptance to be independent of authorship. That is, Pr[Accept∣Auth]=Pr[Accept], or at least that acceptance is independent of protected characteristics of the author(s) such as community connections or institutional prestige.
We can also reframe so that a paper should be accepted solely on the basis of its quality and relevance. This leads to a conditional independence view of the issue:
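Written out in the notation of the inline formula above (my rendering of the conditional-independence condition, since the original equation is not reproduced here):

```latex
% Acceptance should depend on authorship only through quality and relevance:
\Pr[\text{Accept} \mid \text{Auth}, \text{Qual}, \text{Rel}]
  = \Pr[\text{Accept} \mid \text{Qual}, \text{Rel}]
```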
Ok, great. But what are the paths through which authorship can affect acceptance? This will help us better analyze possible levers for correcting them. If we accept my path model as sufficiently complete for useful discussion, there are four:
Through quality (Author → Quality → Acceptance). We don't want to break the Quality → Acceptance link, since it is largely the point of peer review. We cannot do a lot about the Author → Quality link; authors with more experience are likely to write better papers, or at least papers that are perceived as better (though more on this later).
Through relevance (Author → Relevance → Acceptance). This has the same basic problems as quality. The author link is probably more pronounced here, though, as authors who have long experience in a particular community have a better read on what the community thinks is relevant, and how to sell their work as relevant, than newcomers. This is perhaps undesirable, but I also think it is likely unavoidable.
Through secondary characteristics (Author → Secondary → Acceptance). This is deliberately vague; it can include secondary characteristics that give away author identities, but also includes other things that aren't quality or relevance but affect reviewer decisions.
Directly (Author → Acceptance). This is a clearly problematic effect.
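To make the four pathways concrete, here is a small sketch of the path model as a directed graph. The node names are my own shorthand for the variables discussed above, not anything formal from the post; enumerating directed paths from Author to Acceptance recovers exactly the four routes listed:

```python
# Sketch of the path model: edges point from cause to effect.
# Node names are informal labels for the variables in the discussion.
EDGES = {
    "Author": ["Quality", "Relevance", "Secondary", "Acceptance"],
    "Quality": ["Acceptance"],
    "Relevance": ["Acceptance"],
    "Secondary": ["Acceptance"],
    "Acceptance": [],
}

def paths(graph, src, dst):
    """Enumerate all directed paths from src to dst."""
    if src == dst:
        return [[dst]]
    return [[src] + tail
            for nxt in graph[src]
            for tail in paths(graph, nxt, dst)]

author_paths = paths(EDGES, "Author", "Acceptance")
for p in author_paths:
    print(" -> ".join(p))
```

On this view, a debiasing intervention corresponds to deleting or weakening an edge: mutually anonymous review removes the direct Author → Acceptance edge and nothing else.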
Mutually anonymous peer review deals with the direct influence of authorship on acceptance. That's all it can affect; the indirect paths are all still present. It is imperfect, but available empirical data indicates it is useful.
What would a fairness-through-awareness approach to debiasing peer review look like? In an ideal world, it might look like discounting the effects of secondary characteristics while leaving the influence of quality and relevance untouched. I think it is extremely unlikely that such a targeted intervention is possible — fairness-through-awareness would likely affect quality and/or relevance judgements. Ideally, it would debias our assessment of quality or relevance, not change their influence on acceptance, but I also think that is unlikely in practice.
However, mutually anonymous reviewing is not the only process change at our disposal. Clear reviewer instructions and — crucially — structured review forms can, I think, help reduce the influence of secondary characteristics. Structured review forms break the review judgement down into individual pieces, encouraging the reviewer to focus on specific aspects of the paper relevant to the decision process. Particularly good ones do this in a way that helps counteract bias, through things such as separating the standard to which a contribution should be held from the assessment of whether it meets that standard (CSCW did this in at least one year).
Quality and relevance are much more difficult, and as I said above, I don't think we want to affect their influence on the accept/reject decision. However, it may still be possible to affect the influence of author characteristics on quality and relevance: I would love to see some good data, but I think revise-and-resubmit processes may be able to help authors whose initial submission doesn't meet quality or relevance expectations get their paper over the bar. This isn't perfect, as experienced authors will need to do less revision for publication and thus will be able to publish more papers with comparable resources, but it may help this influence pathway.
Mutually anonymous peer review is not perfect, but it does block one critical pathway by which author characteristics can affect acceptance decisions. I do not think that fairness-through-awareness offers superior debiasing capabilities in this context. Finally, there are additional changes to the reviewing process that, when combined with mutually anonymous review, can reduce the influence of other undesirable bias pathways.
I remain convinced that mutual anonymity is a better way to structure peer review for computer science conferences, and don't think this represents a fundamental incompatibility with the known limitations of fairness-through-unawareness.
Published on Wednesday, December 26, 2018 and tagged with tools and software. Updated on Friday, December 28, 2018.
For the last two years, I've written up an annual post describing my current computing setup. Time for another 🙂.
I continue to work to reduce my technical distance: when practical, I want to be able to recommend much of the software I use to others, even to non-technical users.
I also want tools that just work without a great deal of fussing or lots of installation. I want to be able to move in to a new machine quickly, and to be productive without relying on sophisticated customizations I carry around.
Hardware, Operating System, and Browser
I continue to use Windows 10 as my client OS, using Windows Subsystem for Linux (usually with Debian) and/or Docker when I need local *nix support.
Servers are Red Hat at work, and FreeBSD for our (now little-used) NAS at home. I switched from NixOS to FreeBSD because I wasn't getting a lot out of Nix anymore, and FreeBSD has very good ZFS support.
I am still using a Surface Pro 4 for my personal computer. At work I have switched to the Surface Go for my portable machine, and still use a Dell Precision (now with 2 24" 4K displays) as my workstation. I'm running the Kensington Expert Mouse and the Microsoft Sculpt keyboard to help keep my tendonitis in check.
My mobile device is an iPhone SE, and I was very glad the Apple store in Vancouver still had a few in stock the week after they were discontinued. I very much hope Apple releases an SE2 with an OLED display before my SE goes end-of-life.
At home I am still using Firefox as my primary browser, although a recent bug my profile has developed might send me scurrying. At work I use Chrome because we're a Google campus and it's the only browser supported by Paperpile.
E-mail, Storage, Etc.
Boise State is a Google campus, so everything is on Google: e-mail, calendaring, office suite, etc. I use Google Drive for syncing work files between computers, and for mobile access.
For personal things, we are using Office 365, so my e-mail is in Outlook (or Windows Mail) and files on OneDrive.
I try to write in Word when practical, although I often do first drafts in Google Docs to make collaborative discussion with colleagues easier. Final versions of papers are often in LaTeX with Overleaf, because the new ACM template is very difficult to use in Word.
I use Paperpile for citation management; for Word integration, I export to BibTeX and use BibTeX4Word.
Other writing is generally in Markdown (using a variety of parsers).
I am doing more and more work in Python now. Since switching LensKit to Python, it makes sense to keep things in a consistent language. While I still personally prefer R for data analysis and statistics, Python is good enough and R's benefits aren't worth requiring my students to learn multiple languages. Invoke is replacing Gradle as my standard task runner; I am not entirely happy with it, but it gets the job done well enough for now. I am doing very little Java these days.
That's about all I'm writing, aside from the occasional shell script.
Editing and Developing
In the terminal I use GNU Nano.
I'm using Bash now; while Fish is nice, the overhead of carrying my own shell around isn't worth it. I've got a modest set of Bash customizations I carry around via Git, and it gets the job done.
I'm using tmux, direnv, and z to make life easier.
I'm no longer rolling my own backups; BackBlaze is taking care of them for me.
Documents and Drawings
I use Grapholite for diagrams, unless they're too complicated and I need to turn to Visio. I use Inkscape for non-diagram vector graphics. Paint.net is my first call for raster image editing (install it from the Windows Store though, not its web site) and I upgrade to Krita for more advanced needs and Darktable for dealing with RAW files from the camera.
I use Powerpoint for all my presentations. I share them online with a read-only link in OneDrive.
I use Drawboard PDF for marking up PDFs on the Surface, and usually Adobe Reader for my other PDF viewing needs; I also have Acrobat on hand for when I need to do advanced PDF operations.
I have also been doing some typography design; I use Scribus for print layout and either Montax Imposer or Bookbinder for imposition. I have been toying with the idea of writing a simple PDF imposer as an excuse to learn Electron, but haven't started on that at all. I currently use the free version of High-Logic MainType for font management.
As I've done the last two years, it's time for the annual what-I-did-this-year post! Well, about time; there are a couple more weeks in the year, but I expect their results to be mostly tidying up loose ends of things in this list.
Presented two papers at the inaugural Conference on Fairness, Accountability, and Transparency; one with the PIReTs, and another with Hoda Mehrpouyan and Rezvan Joshaghani.
Published a CHI workshop paper on fairness in privacy tradeoffs with Bart Knijnenburg, Hoda Mehrpouyan, and Rezvan Joshaghani.
Submitted a paper to SIGIR (rejected).
Submitted a proposal to NSF CyberLearning (declined).