Requesting Whiteboard Coding is Doing it Wrong
Recently, Vivek Haldar responded to a tweet of mine in which I disparaged the practice of having programming candidates write code on whiteboards. Vivek’s point was to advise candidates; here I’d like to advise interviewers. Although a concise explanation fits in 140 characters, I think the subject warrants elaboration.
As in Vivek’s example, suppose you have asked a candidate to code a sorting algorithm on a whiteboard.
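For concreteness, here is roughly the kind of toy solution such an exercise expects; this is purely illustrative (an insertion sort in Python), not any particular company’s question:

```python
def insertion_sort(items):
    """Sort a list in place and return it.

    The classic whiteboard-sized algorithm: for each element, shift
    larger elements to the right until its slot opens up.
    """
    for i in range(1, len(items)):
        key = items[i]
        j = i - 1
        # Shift elements greater than key one position right.
        while j >= 0 and items[j] > key:
            items[j + 1] = items[j]
            j -= 1
        items[j + 1] = key
    return items
```

On a computer, with tests and an editor, this takes a few minutes; on a whiteboard, the same dozen lines become a handwriting and memory exercise.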
Suppose instead you had them describe the solution aloud. Is that useful? Absolutely—it’s a test of their ability to communicate in English about programming. This happens all the time on teams: someone will say “I was thinking about doing it this way,” and others will propose alternative solutions. A productive discussion will yield a better solution, on average, than the original, so this is an important skill to evaluate in a candidate.
Of course, if you handle all the interview exercises this way, you will never get to see the candidate solve a problem with actual code. I have personally hired people who turned out to be much better at discussing programming concepts than writing programs, and it went badly for the team. I learned the hard way to carefully evaluate a candidate’s ability to describe programming solutions both in English and in code.
However, whiteboard coding only tells you how good someone is at coding on whiteboards. It does not tell you how good they are at coding on computers. How different are these things? Let’s compare:
1. Web Searches
In real coding, Google is an extension of your brain. You can’t remember the last day when you didn’t use it to solve a programming problem. You’d be extremely skeptical of any coworker who didn’t make regular use of Web searches.
In whiteboard coding, Web searches are banned outright. You must write 2012 code with nothing but 1970 tools.
2. Performance Pressure
In real coding, you rarely (if ever) have several observers breathing down your neck as you code something critical. If you did, it would be understood that this was a bad situation to put you in. You are used to dealing with pressure from deadlines, not from performances.
In whiteboard coding, every interviewer is breathing down your neck. They are scrutinizing your every pause, and you know it. Your performance on this live coding exercise will likely determine whether you get this job. You aren’t used to this because it never comes up in your job.
3. Tools
In real coding, you have a monitor and a keyboard. A familiar text editor or IDE. In short, the right tools for the job.
In whiteboard coding, you have markers. No find-and-replace, automatic indentation, tab completion, or any other modern programming convenience. Poor penmanship can distract the interviewer. You might skip things you’d normally have an editor do because they’re too time-consuming to do with a marker.
Each area where these diverge is an opportunity to disqualify a good candidate, or to qualify a bad candidate.
If someone makes a mistake because they are nervous about being put on the spot—in a way they will never again be in the course of their job—is that a good reason to disqualify them? Of course not. If someone writes sloppy code and you brush it off because you’re thinking “obviously they’d refactor this if they had find/replace” and it turns out they don’t in practice, wouldn’t you rather have known that before hiring them? Of course.
So if whiteboard coding risks hiring bad candidates and passing on good ones, why not have the candidates do computer coding?
The usual reason is time. It takes longer…doesn’t it?
Not for the interviewer—not if you’re doing it right. It takes seconds to email a piece of toy code with the instructions “add these features, fix any bugs you spot, and refactor the result into something you’d be comfortable checking in.” In return you’ll receive a wealth of information about the code the candidate produces given a keyboard, editor, the Web, and all the rest. You can scan this for deal-breakers in the time you’d have spent waiting for the candidate to handwrite a solution on a whiteboard.
Although whiteboard coding can’t accomplish much more than checking for deal-breakers (you learn their proposed solution from what they say out loud, and you rightly wouldn’t consider whiteboard code indicative of their coding style or cleanliness), with real code you can dig deeper.
Having real code in advance gives you time to prepare questions for the in-person interview like “Why did you implement this using Library A instead of Library B?” or “If this one requirement changed, how would you change this?” In an interview you’re on the spot too, and the most useful questions are often the ones that don’t come to you right away. A planned follow-up conversation about code they’ve written teaches you about their discussion abilities while verifying that they are conversant about the code they wrote—so you can be sure a friend didn’t write it.
Some candidates may balk at the idea of spending an hour or two (let alone upwards of 40—whoa!) on an interview exercise, and some may even ask to bill for that time. Naturally, it’s up to you whether to accommodate them based on what you’ve seen so far. Depending on their disposition, this can be an actionable data point: if they consider the exercise an unacceptable inconvenience, you have learned something about their likely reaction to the frustrating red tape that every organization inevitably encounters from time to time.
Likewise, some interviewers might balk at the idea of having to come up with a toy code exercise. In my experience it only takes about an hour or two to develop an appropriate one, and once you have it, you can reuse it for as many different candidates as you want to screen.
All in all, comparing whiteboard coding to the “toy code” approach, we see that it:
- Takes longer for you, the interviewer, to identify deal-breakers
- Is more error-prone, introducing new ways to mistakenly qualify or mistakenly disqualify candidates
- Yields less data on which to dig deeper with follow-up questions
Whiteboard coding is a lose-lose-lose. If you’ve ever done this, or still do this, please reconsider. It’s better on several levels to evaluate candidates based on real code.