The traditional technical interview process is designed to ferret out a candidate's weaknesses, when it should be designed to find their strengths.
No one can possibly master all of the arcana of today's technology landscape, let alone bring that mastery to bear on a problem under pressure and with no tools other than a whiteboard.
Under those circumstances, anyone can be made to look like an idiot.
The fundamental problem with the traditional technical interview process is that it is based on a chain of inference that seems reasonable but is in fact deeply flawed. That chain goes something like this:
- Writing code is hard.
- Because writing code is hard, only smart people can do it.
- Therefore we want to hire smart people.
- The best way to find smart people is to design a process that filters out dumb people.
- The best way to filter out dumb people is to ask them hard questions that we know the answer to and see if they also know (or can figure out) the answer. (Because we, of course, are not dumb.)
Hence phone screens, whiteboard coding, puzzles... And then we are surprised when it turns out that these are terrible predictors of the actual ability to code. Why are they terrible?
Because smartness is not a scalar; it is a very high-dimensional vector, and the basis vector for puzzle-solving ability points in a very different direction from the coding-ability basis vector.
Coding in today's world and puzzle-solving are just two very, very different skills.
No one codes ab initio any more, so coding has more to do with the ability to look things up, read other people's code and documentation, debug, and use tools effectively than with the ability to solve problems closed-book.
But the dumb-person filter explicitly does not test for the ability to look things up, because of course anyone can look things up, even dumb people.
Worse, what the traditional process really selects for is a willingness to put up with bullshit and an ability to game the system. There is also a significant component of pure luck involved. If you get asked a question that you just happen to know the answer to, you're golden. If you get stuck in a situation where you have to work something out on the fly, you can easily get stuck in a mental wedgie that makes you look like a complete moron.
As someone who occasionally hires coders, I care about only one thing: can they write -- and debug -- code?
Debugging is at least as important a skill as writing the code in the first place, maybe more so. But the traditional process is absolutely horrible at predicting debugging ability, and with good reason: debugging real code means running the code and looking to see what it actually does.
That is not possible on a whiteboard. On a whiteboard, you have to mentally compile and "run" the code in your head to see if it works. That is not what coders need to be good at. Running code is what computers are for. We invented them specifically so we wouldn't have to do that sort of work in our heads any more.
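To make the point concrete, here is an illustration (mine, not part of the original argument): the Python function below reads as obviously correct when dry-run on a whiteboard, yet actually running it exposes a shared-state bug in seconds.

```python
# A classic pitfall: a mutable default argument is created once, at
# function definition time, and silently shared across every call.
def append_item(item, items=[]):
    items.append(item)
    return items

print(append_item(1))  # [1]      -- looks right on the whiteboard
print(append_item(2))  # [1, 2]   -- running it reveals the leaked state
```

Two print statements find the bug faster than any amount of compiling code in your head.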
We built Type12 (https://type12.com) to let companies embrace this philosophy: go beyond riddles and brain-teasers when testing candidates, and instead put them in day-to-day scenarios. While the ability to solve a coding puzzle tells you almost nothing about a candidate's ability to handle day-to-day challenges, realistic scenarios and assignments let you simulate the day-one work experience, and those are indeed good predictors.
In slightly more technical detail: for each interview, Type12 creates a new sandbox (a Docker instance under the hood), fully configured with the languages (Python, JS, Ruby, Java, Scala, etc.), frameworks (Django, Flask, Ruby on Rails, Node, Spring, etc.), databases (MongoDB, Redis, MySQL, PostgreSQL, etc.), and other technologies (Amazon S3, Amazon DynamoDB, etc.) of your choice. This lets you test candidates in an environment similar to your company's production environment.
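Conceptually, each interview maps onto a disposable container. As a rough sketch of that idea -- using the Docker SDK for Python, with a hypothetical image name, resource limits, and naming scheme, not Type12's actual code:

```python
# Conceptual sketch of a per-interview sandbox: one throwaway Docker
# container per candidate, pre-built with the stack they will use.
# Requires the `docker` package (pip install docker) and a running daemon.
import uuid

import docker

def start_interview_sandbox(stack_image="type12/django-postgres:latest"):
    """Spin up an isolated, disposable sandbox for a single interview.

    `stack_image` is a hypothetical pre-baked image containing the
    language, framework, and database libraries of your choice.
    """
    client = docker.from_env()
    return client.containers.run(
        stack_image,
        command="sleep infinity",             # keep the container alive for the session
        name=f"interview-{uuid.uuid4().hex[:8]}",
        detach=True,
        remove=True,                          # auto-delete the container once stopped
        mem_limit="512m",                     # cap resources per candidate
    )

# When the interview ends:
#   sandbox = start_interview_sandbox()
#   ... candidate works inside the container ...
#   sandbox.stop()   # with remove=True, Docker cleans it up automatically
```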