Blog Articles 1–5

Fairness in Reviewing

Many computer science research communities are considering various changes to their reviewing processes to reduce bias, reduce barriers to participation, or accomplish other goals.

This week, Suresh Venkatasubramanian wrote about proposed changes in SODA to allow PC members to submit to the conference. There was a bunch of interesting discussion in the comments; this exchange in particular jumped out. Thomas Steinke said:

I completely disagree with the assertion that double-blinding is "a really easy solution" to conflicts of interest. It's particularly ridiculous given that you are active in the FAT* and FATML community, which (to the best of my knowledge) fundamentally rejects the idea that bias can simply be removed by blindness to race/gender/etc.

To which Suresh responded:

Why this works differently compared to "fairness through blindness" in automated decision making is something I have to ponder.

I have a few thoughts on this. I originally wrote up a version of this as a comment there, but a wrong button push deleted my comment. So I'll write it up in more detail here, where I can include figures and have git to save the results.

First, a brief note on terminology: even though it is not nearly as widely used, I will refer to double-blind reviewing as 'mutually anonymous' reviewing and to fairness-through-blindness as 'fairness-through-unawareness'.

Fairness: Imperfect and Contextual

I want to begin with a couple of points about the pursuit of fairness. First, fairness in an unfair world will always be imperfect. As Suresh pointed out elsewhere, mutual anonymity achieves useful but limited outcomes in reducing implicit bias. It is not perfect, even on its own terms (it is often easy for experienced community members to guess authorship, though I expect such guesses are less reliable than many who raise this argument against mutually anonymous reviewing believe). However, given the empirical evidence that mutually anonymous reviewing reduces bias in decision outcomes, and the plausible mechanism of operation, it seems like a worthwhile endeavor. Further, given the incompatibility between fairness definitions, in many problem settings we will have arguable unfairness under one definition even if we achieve fairness perfectly under another.

Second, the tradeoffs and possibilities in the pursuit of fairness are contextual. Different problem settings have different causes and costs of unfairness, as well as different affordances for reducing or mitigating bias. The peer review process has significant impact on livelihoods and careers, but it is a different problem than loan decision making or hiring.

So it seems to me that 'does fairness-through-unawareness work here but not there?' is not the most productive way to approach the question. Rather: do the limitations and possibilities (or lack thereof) of fairness-through-unawareness represent an acceptable or optimal tradeoff here, but an unacceptable one elsewhere? I don't have the answers, but I think contextualized tradeoffs will be a better way to pursue clarity than bright-line answers.

Peer Review Fairness Goals

graph LR
  Author --> Quality
  Author --> Relevance
  Author --> Secondary
  Quality --> Acceptance
  Relevance --> Acceptance
  Secondary --> Acceptance
  Author --> Acceptance
Structural equation model of peer review.

To think about what we would like to achieve in making peer review more fair, and what possible interventions are available to us, it helps to look at a path model of the reviewing problem and its relevant variables.

One way to frame the problem of debiasing peer review is that we want acceptance to be independent of authorship. That is, \mathrm{Pr}[\mathrm{Accept} \mid \mathrm{Auth}] = \mathrm{Pr}[\mathrm{Accept}], or at least that acceptance is independent of protected characteristics of the author(s) such as community connections or institutional prestige.

We can also reframe the problem: a paper should be accepted solely on the basis of its quality and relevance. This leads to a conditional independence view of the issue:

\mathrm{Pr}[\mathrm{Accept} \mid \mathrm{Qual}, \mathrm{Rel}, \mathrm{Auth}] = \mathrm{Pr}[\mathrm{Accept} \mid \mathrm{Qual}, \mathrm{Rel}]

Ok, great. But what are the paths through which authorship can affect acceptance? This will help us better analyze possible levers for correcting them. If we accept my path model as sufficiently complete for useful discussion, there are four:

  • Through quality (Author → Quality → Acceptance). We don't want to break the Quality → Acceptance link, since it is largely the point of peer review. We cannot do a lot about the Author → Quality link; authors with more experience are likely to write better papers, or at least papers that are perceived as better (though more on this later).

  • Through relevance (Author → Relevance → Acceptance). This has the same basic problems as quality. The author link is probably more pronounced here, though, as authors who have long experience in a particular community have a better read on what the community thinks is relevant, and how to sell their work as relevant, than newcomers. This is perhaps undesirable, but I also think it is likely unavoidable.

  • Through secondary characteristics (Author → Secondary → Acceptance). This is deliberately vague; it can include secondary characteristics that give away author identities, but also includes other things that aren't quality or relevance but affect reviewer decisions.

  • Directly (Author → Acceptance). This is a clearly problematic effect.

Debiasing Levers

Mutually anonymous peer review deals with the direct influence of authorship on acceptance. That's all it can affect; the indirect paths are all still present. It is imperfect, but available empirical data indicates it is useful.
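
To make this concrete, here is a toy simulation of the path model above. This is a sketch, not a model of any real review process: the coefficients are made up, and a single normal "author" variable stands in for whatever author characteristics (seniority, prestige, connections) drive the bias. Removing the direct Author → Acceptance edge, as anonymization aims to do, shrinks the acceptance gap between "senior" and "junior" authors but does not eliminate it, because the indirect paths remain:

import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical author characteristic (e.g. seniority); this is the
# variable we would like acceptance to be independent of.
author = rng.normal(size=n)

# Indirect paths: author characteristics nudge quality, relevance,
# and secondary characteristics. All coefficients are invented.
quality = 0.3 * author + rng.normal(size=n)
relevance = 0.2 * author + rng.normal(size=n)
secondary = 0.5 * author + rng.normal(size=n)

def acceptance_gap(direct):
    # 'direct' is the strength of the Author -> Acceptance edge;
    # mutually anonymous review tries to force it to zero.
    score = quality + relevance + 0.3 * secondary + direct * author
    accepted = score > np.quantile(score, 0.75)  # ~25% acceptance rate
    senior = author > 0
    return accepted[senior].mean() - accepted[~senior].mean()

print("identified:", acceptance_gap(0.4))  # direct + indirect paths
print("anonymous:", acceptance_gap(0.0))   # indirect paths only

The residual gap in the anonymous condition comes entirely from the Author → Quality/Relevance/Secondary links, which is where the rest of this post focuses.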

What would a fairness-through-awareness approach to debiasing peer review look like? In an ideal world, it might look like discounting the effects of secondary characteristics while leaving the influence of quality and relevance untouched. I think it is extremely unlikely that such a targeted intervention is possible; fairness-through-awareness would likely affect quality and/or relevance judgements. Ideally, it would debias our assessment of quality or relevance, not change their influence on acceptance, but I also think that is unlikely in practice.

However, mutually anonymous reviewing processes are not the only mechanism change at our disposal. Clear reviewer instructions and, crucially, structured review forms can, I think, help reduce the influence of secondary characteristics. Structured review forms break the review judgement down into individual pieces, encouraging the reviewer to focus on specific aspects of the paper relevant to the decision process. Particularly good ones do this in a way that helps counteract bias, through things such as separating the standard to which a contribution should be held from the assessment of whether it meets that standard (CSCW did this in at least one year).

Quality and relevance are much more difficult, and as I said above, I don't think we want to affect their influence on the accept/reject decision. However, it may still be possible to affect the influence of author characteristics on quality and relevance: I would love to see some good data, but I think revise-and-resubmit processes may be able to help authors whose initial submission doesn't meet quality or relevance expectations get their paper over the bar. This isn't perfect, as experienced authors will need to do less revision for publication and thus will be able to publish more papers with comparable resources, but it may help mitigate this influence pathway.

Conclusion

Mutually anonymous peer review is not perfect, but it does block one critical pathway by which author characteristics can affect acceptance decisions. I do not think that fairness-through-awareness offers superior debiasing capabilities in this context. Finally, there are additional changes to the reviewing process that, when combined with mutually anonymous review, can reduce the influence of other undesirable bias pathways.

I remain convinced that mutual anonymity is a better way to structure peer review for computer science conferences, and don't think this represents a fundamental incompatibility with the known limitations of fairness-through-unawareness.

2018 State of the Tools

For the last two years, I've written up an annual post describing my current computing setup. Time for another 🙂.


Philosophy

I continue to work to reduce my technical distance: when practical, I want to be able to recommend much of the software I use to others, even to non-technical users.

I also want tools that just work without a great deal of fussing or lots of installation. I want to be able to move into a new machine quickly, and to be productive without relying on sophisticated customizations I carry around.

Hardware, Operating System, and Browser

I continue to use Windows 10 as my client OS, using Windows Subsystem for Linux (usually with Debian) and/or Docker when I need local *nix support.

On servers, it's Red Hat at work and FreeBSD for our (now little-used) NAS at home. I switched from NixOS to FreeBSD because I wasn't getting a lot out of Nix anymore, and FreeBSD has very good ZFS support.

I am still using a Surface Pro 4 for my personal computer. At work I have switched to the Surface Go for my portable machine, and still use a Dell Precision (now with two 24" 4K displays) as my workstation. I'm running the Kensington Expert Mouse and the Microsoft Sculpt keyboard to help keep my tendonitis in check.

My mobile device is an iPhone SE, and I was very glad the Apple store in Vancouver still had a few in stock the week after they were discontinued. I very much hope Apple releases an SE2 with an OLED display before my SE goes end-of-life.

At home I am still using Firefox as my primary browser, although a recent bug my profile has developed might send me scurrying. At work I use Chrome because we're a Google campus and it's the only browser supported by Paperpile.

E-mail, Storage, Etc.

Boise State is a Google campus, so everything is on Google: e-mail, calendaring, office suite, etc. I use Google Drive for syncing work files between computers, and for mobile access.

For personal things, we are using Office 365, so my e-mail is in Outlook (or Windows Mail) and files on OneDrive.

Writing

I try to write in Word when practical, although I often do first drafts in Google Docs to make collaborative discussion with colleagues easier. Final versions of papers are often in LaTeX with Overleaf, because the new ACM template is very difficult to use in Word.

I use Paperpile for citation management; for Word integration, I export to BibTeX and use BibTeX4Word.

Other writing is generally in Markdown (using a variety of parsers).

Programming Languages

I am doing more and more work in Python now. Since I switched LensKit to Python, it makes sense to keep things in a consistent language. While I still personally prefer R for data analysis and statistics, Python is good enough, and R's benefits aren't worth requiring my students to learn multiple languages. Invoke is replacing Gradle as my standard task runner; I am not entirely happy with it, but it gets the job done well enough for now. I am doing very little Java these days.
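
As a concrete sketch of what that looks like (the task names and commands here are hypothetical, not from any of my actual projects), an Invoke tasks.py might be:

from invoke import task

@task
def test(c):
    """Run the test suite."""
    c.run("python -m pytest tests")

@task(pre=[test])
def dist(c):
    """Build source and wheel distributions after the tests pass."""
    c.run("python setup.py sdist bdist_wheel")

Running 'invoke dist' runs the tests first and then builds the packages, much like task dependencies in Gradle.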

I still enjoy JavaScript a lot, and write it when I can. My web site is lovingly crafted with a medium pile of JavaScript code. I'm not using JS for data processing at work very much any more, especially for research, because it adds complexity to the toolchain for students working with the code.

I now have more use for Rust, however; this fact makes me very happy. I recently finished rewriting the data import and linking code for the book gender project from SQL, JavaScript, and R (with a little Fortran) into SQL, Python, and Rust. Rust is used for first-pass data cleaning and munging; it is much faster than JS for some of this (my VIAF importer went from a 20-hour process to about 3 hours when I changed from Node + expat to Rust + quick-xml). I also suspect Rust will be more approachable than JS for students & collaborators, because the code structures are more straightforward (like Python), rather than the heavy inversion of asynchronous JavaScript code.

That's about all I'm writing, aside from the occasional shell script.

Editing and Developing

VS Code. With Python, JavaScript, and Rust, Code provides me with all the tooling I need and has a significantly lower footprint than JetBrains products.

In the terminal I use GNU Nano.

UNIX Shell

I'm using Bash now; while Fish is nice, the overhead of carrying my own shell around isn't worth it. I've got a modest set of Bash customizations I carry around via Git, and it gets the job done.

I'm using tmux, direnv, and z to make life easier.

Backups

I'm no longer rolling my own backups; Backblaze is taking care of them for me.

Documents and Drawings

I use Grapholite for diagrams, unless they're too complicated and I need to turn to Visio. I use Inkscape for non-diagram vector graphics. Paint.net is my first call for raster image editing (install it from the Windows Store though, not its web site) and I upgrade to Krita for more advanced needs and Darktable for dealing with RAW files from the camera.

I use PowerPoint for all my presentations. I share them online with a read-only link in OneDrive.

I use Drawboard PDF for marking up PDFs on the Surface, and usually Adobe Reader for my other PDF viewing needs; I also have Acrobat on hand for when I need to do advanced PDF operations.

I have also been doing some typography design; I use Scribus for print layout and either Montax Imposer or Bookbinder for imposition. I have been toying with the idea of writing a simple PDF imposer as an excuse to learn Electron, but haven't started on that at all. I currently use the free version of High-Logic MainType for font management.

2018

2018 in purple neon fireworks letters
Photo by NordWood Themes on Unsplash.

As I've done the last two years, it's time for the annual what-I-did-this-year post! Well, about time; there are a couple more weeks in the year, but I expect their results to be mostly tidying up loose ends of things in this list.

  • Presented two papers at the inaugural Conference on Fairness, Accountability, and Transparency; one with the PIReTs, and another with Hoda Mehrpouyan and Rezvan Joshaghani.

  • Published a CHI workshop paper on fairness in privacy tradeoffs with Bart Knijnenburg, Hoda Mehrpouyan, and Rezvan Joshaghani.

  • Submitted a paper to SIGIR (rejected).

  • Submitted a proposal to NSF CyberLearning (declined).

  • Won an NSF CAREER award.

  • Saw Hamilton (the traveling company in Portland).

  • Book chapter with Daniel Kluver and Joe Konstan went to press.

  • Bought a road bike and began recreational distance riding. I got up to being able to do 30mi rides before winding down for the weather.

  • Co-organized the FairUMAP workshop on fairness in user modeling and personalization with Bamshad Mobasher, Robin Burke, and Bettina Berendt.

  • Oversaw the build-out of the LITERATE prototype and carried out a user study with fantastic collaborators Sole Pera and Katherine Wright.

  • Ran a very successful RecSys 2018 with Sole Pera and our amazing organizing committee.

  • Published and presented our work on author gender in RecSys 2018.

  • Taught CS 410/510 (Databases) in both fall and spring.

  • Taught CS-HU 310, our one-credit database introduction, in the summer.

  • Substantially improved my response time in grading student work.

  • Published two workshop papers and contributed to an NRMERA conference talk about the LITERATE project.

  • Supervised my M.S. student Mucun Tian to his first first-author paper, a work-in-progress piece for the REVEAL workshop on offline evaluation.

  • Co-organized the second FATREC Workshop on Responsible Recommendation, with more than 50 registered and a full room all day.

  • With Fernando Diaz and Asia Biega, proposed and had accepted a fairness track for TREC 2019.

  • With Michael Veale, organized publicity & outreach for ACM FAT* 2019 as Publicity & PR Co-chair.

  • Rebuilt LensKit in Python (project, paper).

  • Began supervising my first Ph.D student, Amifa Raj.

  • Submitted a proposal to the NSF 2026 IDEA Machine with Sole Pera, Hoda Mehrpouyan, Cathie Olschanowsky, and Elena Sherman.

  • Sat on committees for two successful Ph.D proposals (Ion Madrazo Azpiazu and Kimberley Gardner).

  • Gave invited seminar talks at CMU and Clemson.

  • Reviewed a number of papers, though not as many as last year.

  • Redid my academic visual identity with a website refresh and change of standard font.

I did not submit nearly as many grant proposals this year as last, because I received the CAREER early in the year and needed to focus on getting that research going along with RecSys organization.

Teaching

  • Spring β€” CS 410/510 (Databases)
  • Summer β€” CS-HU 310 (Intro to Databases)
  • Fall β€” CS 410/510 (Databases)

Active Grants

Publications

Nicola Ferro, Norbert Fuhr, Gregory Grefenstette, Joseph A. Konstan, Pablo Castells, Elizabeth M. Daly, Thierry Declerck, Michael D. Ekstrand, Werner Geyer, Julio Gonzalo, Tsvi Kuflik, Krister Lindén, Bernardo Magnini, Jian-Yun Nie, Raffaele Perego, Bracha Shapira, Ian Soboroff, Nava Tintarev, Karin Verspoor, Martijn C. Willemsen, and Justin Zobel. 2018. From Evaluating to Forecasting Performance: How to Turn Information Retrieval, Natural Language Processing and Recommender Systems into Predictive Sciences (Dagstuhl Perspectives Workshop 17442). Dagstuhl Manifestos 7(1) (November 2018), 96–139. DOI: 10.4230/DagMan.7.1.96.

Katherine Landau Wright, Michael D. Ekstrand, and Maria Soledad Pera. 2018. Supplementing Classroom Texts with Online Resources. At 2018 Annual Meeting of the Northwest Rocky Mountain Educational Research Association.

Toshihiro Kamishima, Pierre-Nicolas Schwab, and Michael D. Ekstrand. 2018. 2nd FATREC Workshop: Responsible Recommendation. In Proceedings of the 12th ACM Conference on Recommender Systems (RecSys '18). ACM. DOI: 10.1145/3240323.3240335.

Michael D. Ekstrand, Mucun Tian, Mohammed R. Imran Kazi, Hoda Mehrpouyan, and Daniel Kluver. 2018. Exploring Author Gender in Book Rating and Recommendation. In Proceedings of the 12th ACM Conference on Recommender Systems (RecSys '18). ACM, pp. 242–250. DOI: 10.1145/3240323.3240373. arXiv:1808.07586v1 [cs.IR]. Acceptance rate: 17.5%. Cited 3 times.

Mucun Tian and Michael D. Ekstrand. 2018. Monte Carlo Estimates of Evaluation Metric Error and Bias. Computer Science Faculty Publications and Presentations 148. Boise State University. Presented at the REVEAL 2018 Workshop on Offline Evaluation for Recommender Systems, a workshop at RecSys 2018. DOI: 10.18122/cs_facpubs/148/boisestate. NSF PAR 10074452.

Michael D. Ekstrand. 2018. The LKPY Package for Recommender Systems Experiments: Next-Generation Tools and Lessons Learned from the LensKit Project. Computer Science Faculty Publications and Presentations 147. Boise State University. Presented at the REVEAL 2018 Workshop on Offline Evaluation for Recommender Systems, a workshop at RecSys 2018. DOI: 10.18122/cs_facpubs/147/boisestate. arXiv:1809.03125 [cs.IR]. Cited 1 time.

Michael D. Ekstrand, Ion Madrazo Azpiazu, Katherine Landau Wright, and Maria Soledad Pera. 2018. Retrieving and Recommending for the Classroom: Stakeholders, Objectives, Resources, and Users. In Proceedings of the ComplexRec 2018 Second Workshop on Recommendation in Complex Scenarios (ComplexRec '18), at RecSys 2018.

Maria Soledad Pera, Katherine Wright, and Michael D. Ekstrand. 2018. Recommending Texts to Children with an Expert in the Loop. In Proceedings of the 2nd International Workshop on Children & Recommender Systems (KidRec '18), at IDC 2018. DOI: 10.18122/cs_facpubs/140/boisestate.

Nicola Ferro, Norbert Fuhr, Gregory Grefenstette, Joseph A. Konstan, Pablo Castells, Elizabeth M. Daly, Thierry Declerck, Michael D. Ekstrand, Werner Geyer, Julio Gonzalo, Tsvi Kuflik, Krister Lindén, Bernardo Magnini, Jian-Yun Nie, Raffaele Perego, Bracha Shapira, Ian Soboroff, Nava Tintarev, Karin Verspoor, Martijn C. Willemsen, and Justin Zobel. 2018. The Dagstuhl Perspectives Workshop on Performance Modeling and Prediction. SIGIR Forum 52(1) (June 2018), 91–101. DOI: 10.1145/3274784.3274789. Cited 3 times.

Bamshad Mobasher, Robin Burke, Michael D. Ekstrand, and Bettina Berendt. 2018. UMAP 2018 Fairness in User Modeling, Adaptation and Personalization (FairUMAP 2018) Chairs' Welcome & Organization. In Adjunct Publication of the 26th Conference on User Modeling, Adaptation, and Personalization (UMAP '18). ACM. DOI: 10.1145/3213586.3226200.

Daniel Kluver, Michael D. Ekstrand, and Joseph A. Konstan. 2018. Rating-Based Collaborative Filtering: Algorithms and Evaluation. In Social Information Access. Peter Brusilovsky and Daqing He, eds. Springer-Verlag, Lecture Notes in Computer Science vol. 10100, pp. 344–390. ISBN 978-3-319-90091-9. DOI: 10.1007/978-3-319-90092-6_10. Cited 10 times.

Rezvan Joshaghani, Michael D. Ekstrand, Bart Knijnenburg, and Hoda Mehrpouyan. 2018. Do Different Groups Have Comparable Privacy Tradeoffs? At Moving Beyond a 'One-Size Fits All' Approach: Exploring Individual Differences in Privacy, a workshop at CHI 2018.

Michael D. Ekstrand, Mucun Tian, Ion Madrazo Azpiazu, Jennifer D. Ekstrand, Oghenemaro Anuyah, David McNeill, and Maria Soledad Pera. 2018. All The Cool Kids, How Do They Fit In?: Popularity and Demographic Biases in Recommender Evaluation and Effectiveness. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency (FAT* 2018). PMLR, Proceedings of Machine Learning Research 81:172–186. Acceptance rate: 24%. Cited 4 times.

Michael D. Ekstrand, Rezvan Joshaghani, and Hoda Mehrpouyan. 2018. Privacy for All: Ensuring Fair and Equitable Privacy Protections. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency (FAT* 2018). PMLR, Proceedings of Machine Learning Research 81:35–47. Acceptance rate: 24%. Cited 3 times.

Surface Go First Impressions

Surface Go with burgundy type cover

Microsoft has repeatedly tried to break into an entry-level market with its Surface devices, and so far none of its attempts have stuck. There was the Surface RT, which used an incompatible processor and couldn't run normal Windows software. The Surface 3 used an Atom CPU and didn't last long. And now they have the Surface Go, a 10" Surface sporting a Pentium processor and full Windows 10.

I have been using Surface Pros for a few years now. I love them, but have also had some reliability issues: my work SP4 has been glitchy as long as I have had it (display freezes), and my personal device ceased to boot about a year and a half after I bought it. They are on the large side for a lot of tablet use cases (it's hard to use one as a reading device), but they are fantastic for marking up PDFs and drawing, and I have made significant use of the drawing capabilities in class. The Windows Ink Workspace is very helpful, because I can take a screenshot and start drawing on it to mark up different parts of the query we just ran against the database.

But when the Surface Go came out, and I was increasingly frustrated with the display glitch on my SP4, it seemed like a great potential fit. And so far, so good.

What I Need

I work on a combination of my portable device and my desktop workstation. The primary cases where I need my portable device, however, are teaching, meetings, and travel. For that, I want:

  • Small enough I can use in small environments
  • Light weight (changing from the 3lb Zenbook Prime to the 1.85lb Surface Pro 4 was a noticeable improvement)
  • Solid battery performance
  • Good performance for basic remote work (browsing Google suite, Office, some programming)
  • Ability to read and mark up PDFs, tablet-style, for review, grading, and student collaborations
  • Run software needed for teaching (DataGrip, sometimes IntelliJ)

The SP4 did these quite well, although its battery (especially in the i7 version with the standard university software load) was underwhelming.

But the SP4 is still a little large for an airport tray table, and I get about half a day at a conference before the battery is done. Also, since I am moving my primary software from Java to Python, I no longer need heavy JetBrains IDEs for programming and can instead do almost everything in VS Code.

Surface Go Benefits

Looking at the Surface Go, I saw a number of benefits:

  • Smaller size will work better in airplanes
  • Even less weight (1.15lbs or so)
  • Decent battery (but rated for less life than the 2017 Surface Pro)
  • USB C, including power delivery support, opening up a wider range of secondary batteries
  • Surface connector, so I can continue to leverage my investment in Surface docks and chargers

The processor is significantly less powerful. I don't fully understand the Pentium line, but I believe the Go's CPU is Core-based rather than Atom-based; still, it's no Core i5. However, since my local client processing needs have decreased, that isn't a big deal if it gives me decent battery life.

The USB-C benefit is one of the things that finally sold me. I had looked at battery packs that could charge a Surface Pro, but they were big, heavy, and hard to find. There are quite a few options for USB-C, including several that can provide enough power to charge the Go. The Anker PowerCore+ 26800 has 3x the capacity of the Go's internal battery and produces sufficient wattage to charge it. This opens the door to using my tablet for an entire day of conferencing without needing to find one of the scarce power outlets.
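
As a rough back-of-envelope check (assuming the Go's internal battery is in the neighborhood of 26–27 Wh, and taking the pack's cells at a nominal 3.6 V; both numbers are my assumptions rather than manufacturer specs):

26800\,\mathrm{mAh} \times 3.6\,\mathrm{V} \approx 96\,\mathrm{Wh} \qquad 96\,\mathrm{Wh} / 26\,\mathrm{Wh} \approx 3.7

So the 3x figure is plausible even after allowing for conversion losses.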

Initial Impressions

Now that I have the device (8GB model w/ 128GB SSD), what do I think?

I think it's going to work out pretty well. Battery seems pretty good for what I've done so far; a few hours with general usage. I've been using the Edge browser to help keep battery life up.

My hand on the Surface Go keyboard

The keyboard is small. Uncomfortably so, sometimes, but I am writing this post on it. I think this may be a benefit: it encourages me not to try to do everything while I am traveling or at home, and to use my desktop (with its better ergonomics) when I am in my office.

The CPU is fast enough for most of what I do. Gmail is a little sluggish but usable. General web browsing in Edge is pretty snappy. TweetDeck is slow (typing is surprisingly slow), but it works. Some software installation was very slow (Anaconda and VS Code extensions); the Windows anti-malware scanner was working overtime while they dropped all their various software files on the SSD. Compiling my web site is also pretty slow. But now that things are installed, it works pretty well in general (and there's no noticeable lag editing in VS Code).

The display is small, and not quite as dense (it runs at 1.5x scaling instead of the 2x on a Surface Pro), but it is clear and smooth.

Physical manufacture doesn't feel quite as solid as the Pro (the kickstand hinge feels a little weaker, and the physical buttons aren't as refined). There's still a magnet to hold the pen on the left side of the display, but the pen tip goes almost all the way to the bottom of the screen, so I'm concerned about damaging the tip if I keep it there most of the time.

But overall, I think it's going to be a good device for my needs.

Spending Startup

If you are starting a tenure-track research-oriented position at a US university, you should have a startup package to help you get started. When I began as a faculty member, I did not have a clear idea of how to use it effectively; 4 years in, here are some thoughts about good use of startup funds based on my experience and reflection, as well as things I've read and heard from others along the way.

This is written from the perspective of a computer science tenure-track position at a mid-tier research-oriented US university. Startup levels, existence, and structure vary between universities and disciplines, so keep that in mind.

The Purpose of Startup

The purpose of your startup funds is to enable you to establish your research program with the expectation that you will obtain grant funding to continue it.

That is, your startup fund is there to give you the starting point to get grants. Really, that's it. It isn't enough money to fund a research program that will earn you tenure; it's to land the funding that will fund your research program that will earn you tenure.

Structure and Negotiation

Startup funds vary in their structure and amounts. Some are all-cash, others are a combination of cash and specific resources such as a department-funded research assistant.

There will usually be a time limit for spending them. At both of my universities, the limit was 2 years. In the negotiation process, you can ask for an extension of this. Do so. Giving you 3 years to spend your startup is quite possibly one of the easiest concessions for the university to make and gives you more time to figure things out. Your first year will probably suck, and more runway to respond to the lessons you learn will only help.

Try to find out as much as possible about the resources and their spending power during the interview and negotiation processes.

If your startup is all-cash, how much does a graduate assistant cost? Do you need to pay their benefits as well? Some universities don't bill you for benefits if you are using startup to fund a student.

If your startup includes a GA, are they 20 hours dedicated to your research? 10 hours? Ph.D or M.S.?

Beyond that, I don't have a lot to say about the negotiation process, in large part because I am not very good at it. I didn't even negotiate an extension, but was able to obtain one after my first year.

Existing Resources

Find out what existing resources are available before spending startup. For example, if your university has access to a computing cluster, you may not need to purchase your own statistical or scientific computing hardware.

Other resources may also be available. I talked to our department's sysadmin about my need for a database server, and it turned out we had some donated used hardware no one had put to use yet. $5–10K saved.

Looking Ahead

In order to develop a successful research career (and earn tenure), you will need to develop an independent career. It should naturally draw from your previous work in your Ph.D and, if applicable, postdoc, but starting a faculty position is the time to begin a new line of work that is meaningfully different from that of your mentors.

Startup is to fund that. Anything else is probably a distraction.

If you have the opportunity, getting a head start on a first project in that new line as a side project while you finish writing your Ph.D dissertation is a good idea, but it isn't necessary. I had started thinking about the ideas that are now my main direction in my last year, but didn't have an opportunity to start doing anything with them.

It's common to have some loose ends to tie up from the dissertation; last pieces to get published, or immediate follow-on work. Do that work (it's a great way to keep your publication pipeline going while you wait to get results on the next thing), but try not to spend much money on it.

Targeting Grants

You should probably plan on submitting to NSF CAREER. Do be careful not to put too much hope in it; it is a complicated and difficult grant to write, and many successful researchers do not win the CAREER. However, if you are at a research-intensive or research-growing institution, your chair and dean probably expect you to go for it. In computer science, its funding rate is also higher than many other NSF programs (as of 2017, 20–25% vs. less than 10% for some core programs).

Think about when best to apply. You have 3 shots before tenure. Unless you're coming off of a postdoc or similar experience and already have a clear, strong direction, your first summer is probably not the best time to make your first attempt. It takes time to develop the research direction, education and outreach integrations, and make the connections to make it all credible.

Computer science also has CRII, the CISE Research Initiation Initiative. This is a small ($175K over 2 years) program meant as a starter grant for junior faculty in computer science, and it should be on any new faculty member's radar. You can apply twice in your first three years; receiving any other NSF grant as PI also disqualifies you, so apply early.

These are the two major programs that most junior faculty in CS should definitely target, along with relevant general programs from federal agencies, state and local governments, and companies. But the key point, for the purposes of this article, is that your goal is to get one or more of these to hit by the time your startup runs out. Assuming you negotiate an extension, by the end of year 3 you want funding lined up to keep paying your Ph.D student.

Doing the work needed to secure that is the purpose of your startup fund.

Preliminary Results

So how does startup funding help you get grants?

By funding the research that produces the preliminary results you will use as evidence that your grant proposals are worth funding.

In my successful grant proposal, I had three main pieces of evidence from my prior research. One was my body of work from my Ph.D, demonstrating that I can do the software development and methodological work needed to carry out my research, because I've done it before. The other two were more proper preliminary results: showing existing techniques don't solve one of my research questions, and a set of early results on a first-order approach to another research question. Both of these results came from M.S. students' theses, with follow-on work by additional students I employed.

If you have student lines, or employ students directly out of your startup funds, preliminary results for the next thing should be your priority. Equipment you need to carry out this work is also top on the list.

Building your Network

Another useful purpose for startup is to work on building the network of collaborators you will need to carry out your research, either by maintaining existing collaborations or building new ones.

Will your next line of work engage with a research community that you haven't been part of yet? Go to a relevant conference.

Is there a more senior researcher in your topic you can bring in for a seminar talk? Besides being fun, this is a good opportunity to exchange ideas, introduce your students to someone from your community, and give your department leadership another perspective on the importance and impact of your work.

Get Training

If your NSF directorate has a workshop for CAREER applicants, go to it. It's a very good use of startup funds.

There may be other grant-writing or research cohort-building activities that are worth attending as well.

Your department or college may have a separate pool of professional development funds that can partially support one of these trips, enabling you to stretch your startup funds more.

Start Slow

This wasn't entirely deliberate (I failed to hire a postdoc, for which I am grateful), but I spent my startup slowly at first.

This was a good idea, I think. Especially if you negotiate a spending extension, burning slowly the first year while you get your feet wet, tie up some loose ends, and work on building and maintaining your network frees you up to spend the money after you've spent a year thinking about what you want to do next.

Things Not To Do

You may have loose ends to publish from your dissertation, or immediate follow-up work. This work is good to pursue; the dissertation should be the beginning of your career, not its conclusion, and those papers help you keep your publication pipeline going while you start the next project.

But they do not, directly, establish you as an independent researcher, and so they're good things to pursue on your own or with existing collaborators; I don't think it's wise to spend much startup on them, unless you have surplus after funding work on the Next Thing or they are a clear bridge from your prior work to the Next Thing.

Don't automatically hire the first student that comes your way.

Wrapping Up

To reiterate, your startup is a launchpad for your career as an independent PI with a robust, externally-funded research program.

Focus your spending on that.