Yes. Another one of these posts. I've lost count of how many have crossed this site, and I'm sure you have too! I've also counted several sites built specifically to help job applicants skip all or part of this process. If you read between the lines, there's a compelling argument that something needs to change. Not just for the sake of eager people who'd love to work for a particular company, but also for the managers stuck staring at an empty desk instead of someone making potentially great contributions to the codebase.
Coding assessments, often the first filter for coding jobs of any stripe, usually come in two flavors.
- Build something for us based on an (often loose) set of requirements
- Log into a shared screen and solve a coding puzzle
The most common defense of either approach is roughly the same, with small variations here and there: "How else are we supposed to filter the overwhelming number of applicants?"
And you know what? On its face, that's fair. But in practice it's often just a really bad filter.
Some of the CoderPad exercises I've been asked to complete while on a Google Hangout/Skype session include, but are not limited to:
- Deduce if a string is a palindrome (see the sketch after this list)
- Parse an (overly nested) object and only count the nested objects
- Build a Roman Numeral translator
- Build a new object based on relationship data that lives in two other objects
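To give a sense of scale, here's a minimal sketch of the first exercise in JavaScript (the normalization rules are my own assumption - the actual prompts rarely spelled them out):

```javascript
// Minimal palindrome check - an illustrative sketch, not any company's spec.
// Assumes we ignore case and non-alphanumeric characters.
function isPalindrome(input) {
  const normalized = input.toLowerCase().replace(/[^a-z0-9]/g, '');
  return normalized === [...normalized].reverse().join('');
}

console.log(isPalindrome('A man, a plan, a canal: Panama')); // true
console.log(isPalindrome('CoderPad'));                       // false
```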
Some of the time-limited, take-home tests include:
- Build a lightbox that can scale its size based on scroll position (~3 days given; a rough sketch follows this list)
- Build a working game of Minesweeper (~3 hours given)
- Add functionality to an app built in HackerRank (~90 minutes)
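To make the lightbox one concrete, here's a rough sketch of the scroll-scaling idea, assuming a browser environment (the `#lightbox` id and the numbers are placeholders of mine, not the assignment's actual spec):

```javascript
// Rough sketch: scale an element as the user scrolls.
// The #lightbox id and the 1000px / 1.5x numbers are illustrative placeholders.
const lightbox = document.getElementById('lightbox');

window.addEventListener('scroll', () => {
  // Map the first 1000px of scrolling onto a 1x-to-1.5x scale.
  const progress = Math.min(window.scrollY / 1000, 1);
  lightbox.style.transform = `scale(${1 + 0.5 * progress})`;
});
```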
None of these are terribly difficult. With a couple of exceptions, I completed them all. The problem is the inherent disadvantage that comes with being the interviewee: you don't know the company's expectations around coding standards, and when you ask, the answer is often intentionally vague.
And that's the problem.
If I can vent for a brief moment (you knew this was coming), the worst part of both of these assessment types is that there's never feedback when you don't make it to the next round. The closest I got to "cracking" the code was with a first-round screener whom, with some extra time at the end, I asked quite honestly what the expectations were and how my submission would be graded.
He effectively said that it's almost arbitrary; that it comes down to the person "grading" it and whether they happen to like the style you chose at that moment. "Some came back way, way over-engineered," he said of some hiring he did last month. "Others leaned too much on libraries, and even though we didn't prohibit their use, we passed on that candidate."
What is someone supposed to do? A one-and-done approach can't possibly assess the breadth of someone's talent, and it needlessly punishes a candidate if the particular random question or assignment they get doesn't play to their strengths. It keeps that open seat I mentioned before empty way longer than it should be. To be frank: it doesn't help anyone.
At this point, you're probably asking "So what's the answer?" Especially if you're a hiring manager trying to figure out what to do with a pile of resumes sitting on your desk.
What's worked for me - and I'm speaking from experience on both sides of the interview table - is to put the coding test away and put the whiteboard marker down. Probe the candidate for general knowledge about the language in question. The more knowledge they demonstrate, the deeper you dig, until you hit their "wall." If you must use the whiteboard, break out the trivia and ask about the differences between certain reserved keywords, variable scope, inheritance, etc. Walk through a problem and see how they think it out. Just speaking out loud, how do they architect a first-pass solution?
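To give one example of the kind of trivia I mean, in JavaScript (my lane, so treat the specifics as illustrative):

```javascript
// Classic scope trivia: var is function-scoped, let is block-scoped.
function scopeDemo() {
  if (true) {
    var a = 'function-scoped';
    let b = 'block-scoped';
  }
  console.log(a);        // 'function-scoped' - var escapes the if-block
  console.log(typeof b); // 'undefined' - b was never visible outside its block
}

scopeDemo();
```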
A candidate with a strong foundation in the core rules of a language and who can think logically is probably a really safe bet - I haven't recommended a bad one yet with this approach - and no one's time is wasted building functions that solve problems that don't exist anywhere but in an assessment packet.
Top comments (17)
I think the problem is more with the arbitrary nature of interviewing and reviewing than the review itself (whiteboard and live code pairing exercises on random problems present additional problems).
We use the take-home code project approach - and use a standardized scoring sheet and review process. If the code project passes, we then use a standardized process for interviewing them, centered on the code project itself: can you defend your code, can you add a small feature to it, can you refactor part of it. We allow the candidate to use Google/StackOverflow/whatever during these code pairings, or to 'think out loud' on a whiteboard or notepad or whatever.
We follow this exact process every time. We also provide mentoring and training to staff on what we look for in both the code reviews and the interview. It isn't perfect, but most candidates who've given us reviews on Glassdoor have liked the process, and most of the candidates that we've hired have been great additions to our team.
I like that you allow the candidate to use external resources to find a solution - that's definitely a real-world example.
I would advise caution around take-home coding. You may be excluding people and not even know it.
As a mom, and for a while single mom, this was not an option for me. I couldn't set aside an extra 4 hours in my day to sit down and fully concentrate on a coding project outside of my regular (baby-sitter-available) hours. Just a friendly heads up.
Yeah, it's definitely a weakness that we're aware of.
Hmmmm, seems like a situation where it might be possible to offer two different styles of interview, both evaluating the same basic skills, and give the candidate the choice.
We have a light take-home portion to our interview. It's a questionnaire that can be finished pretty quickly (30 mins). I wonder where the line is drawn where this can become a burden for some candidates.
I'll have to see if we have metrics on candidates who refuse outright to do the code project (for whatever reason). I know that a fair number agree to do it, and then never submit it, which can be interpreted any number of ways.
Guilty as charged on never submitting, on some rare occasions. In my case, it was always one of the following:
1) An offer from the HR/dev interviewing you to answer questions isn't honored
2) Time conflict with other take-home assessments. Both parties should keep in mind that a candidate may have several on their plate, and one may have to be cut.
3) Related to #1: Incomplete or conflicting requirements and questions regarding it are either answered vaguely or not at all.
4) Sometimes things come up in life and you can't just dedicate the time when you otherwise thought you could.
In all cases where I have to stop, I strive to send an email. I'm only human, and sometimes that doesn't happen; when I realize it, I apologize profusely because, well, good manners. :)
I think there's a definite blind spot here, and I'm on board with giving people options for how to demonstrate their skills. Take-home or live coding? Code samples?
I realize there's, well, let's call it "sensitivity" to making sure a candidate isn't misrepresenting themselves, and code samples are a terrific way for that kind of candidate to cheat. But a good way to filter those people out is to have them simply explain their code. In detail. Ask how they'd improve it. Ask them to refactor it in ES6 if it's written a little more classically.
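To illustrate that last point, here's a hypothetical before-and-after (the function itself is made up for this comment):

```javascript
// A "classically" written (ES5-style) helper...
function double(list) {
  var result = [];
  for (var i = 0; i < list.length; i++) {
    result.push(list[i] * 2);
  }
  return result;
}

// ...and the ES6 refactor you might ask the candidate to walk through:
// arrow function, const, and Array.prototype.map instead of a manual loop.
const doubleEs6 = (list) => list.map((n) => n * 2);
```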
There are options, and I think it's only fair to present a few so a candidate can pick the one that really lets them show off. :)
"[H]ave them simply explain their code. In detail. Ask how they'd improve it? Ask them to [...] re-factor[.]"
You've just described our interview process. :-)
We've definitely had candidates ask if they could submit an existing code sample instead of taking the test. In the handful of instances where we've agreed, those candidates have all crashed and burned. Hard. It's always a pet project that they've been working on for a while, it's never as good as the candidate thinks it is (even when it's good!), and they have a hard time looking critically at their baby.
Very similar to us. Key to our process behind the scenes is that everyone fills out evaluation forms independently and we're not allowed to talk to one another about anything until everyone has submitted. We then discuss things based on the various feedback surveys (which are done at each possible stage, so a few times per candidate).
This, we think, provides maximum unadulterated wisdom of the crowds, less group-think, and more containerization of possible biases.
Not being able to look critically at your own work is a huge red flag, just because that's a person who's going to get super defensive in even the most casual code review.
I used to be that person, but I learned to accept that every baby, even from the most seasoned programmers on the team, has warts. And it's even OK to admit that one or two projects in any given gig is Rosemary's Baby from head to toe. It happens. The test is simply to see if someone can admit it to a peer.
Yaaas.
I once had a personality test & coding challenge I had to submit together before I went on to the next round. They passed. Was it my personality? Or was it my coding? We may never know.
Dev interviews are sooo broken. I hope the industry pivots to a healthier interview process for both candidates and managers.
My favorite code-interview was one where I built a sample app, on my own machine, on the projector, in front of a group of people.
It also helped that I got the job - but honestly I felt very comfortable doing this. They told me what I would be building before I got there, there were no surprises. The interviews leading up to the in-person interview were similar to what you talked about... just a general discussion of my personality and questions around technology. No white boards, no homework, no GitHub required.
I really dig the idea of BYOE (bring your own environment). My only concern would be whether there's a projector setup, so people aren't huddled around you for an insane amount of time. :) But details like that can definitely be discussed prior to the on-site.
I like the idea of the interviewee providing their own environment. I may use that in the future (with a me-provided environment ready as a backup). Thanks!
For sure - Just be sure to tell them ahead of time so they can be prepared. :)
It's the best case, really. I have keyboard shortcuts I'm used to that aren't on someone else's machine, so it looks like I'm stumbling when I'm actually not :(
Yeah, I think quite a bit of cooperative planning would have to go into it (which probably has "interview value" in itself) and would have to be custom-tailored on a per-candidate basis, which is fine for me. I could see this being prohibitively difficult for some orgs but this would work well for us. :)
When I interview candidates, I definitely use live coding tests as well as whiteboard tests. However, I probably look for different things than most.
With the live coding test, I put them in front of some tools that they've told me they're very good with. I give them a trivial coding exercise from the perspective of a business user with vague/conflicting requirements. I don't really care about the code they write (face it, it's probably crap, as would be anything I'd write rushed, right in front of them, under that kind of pressure). What I want to take away from it is an answer to these questions:
1) Are they really able to use the tool as they said or do they stumble around in it because they've only kinda used it? (This gives me an indication of whether they're misrepresenting themselves or not, as well as how much I can trust their self-assessment of skills.)
2) When I give them intentionally-conflicting requirements, do they push back and demand more info where it's necessary or do they fall into obvious pitfalls that a simple question could avoid? (There is a right and wrong here for some teams, but other teams can easily manage either of these people with the proper support system.)
3) Are they able to fill in the blanks with common-sense assumptions when fleshing out what business requirements mean? (There isn't a "right" or "wrong" result here; it's more for gauging their ability to quickly analyze things with critical thinking while under pressure. Some people are great at this and some aren't, even though they're great developers. It simply depends on what the role is for and whether they'll be dealing with structured process or putting out live fires.)
For the whiteboard portion, I usually give them a problem that I don't necessarily expect them to whip up a solution to. If they do, that's fine, but it's also fine if it becomes a collaborative process (with me). What I'm looking for are answers to these questions:
1) Can they communicate well enough to explain their thought process? I don't need a wiz who can whip out an answer to all technical problems. I need a teammate who can work through a problem and get to a solution.
2) Can they communicate well enough not just to explain, but to educate? I often ask them to elaborate on the "why," not to challenge them but to educate me. I ask things like, "Pretend I'm a new developer learning ABC. Can you explain to me why XYZ?" For some positions, I like to know whether the person could mentor others on the team, and this is where I get a good indication of that.
3) Alternatively, are they somebody who would prefer to work through the problem themselves without leaning on others? This is okay, too, and I definitely want to know it before hiring. Some positions favor this; others favor collaboration. Both have their strengths and weaknesses (even on the same team)!
The point is, I'm not looking for "the right code" or "the right solution" when I put these tests here. The test isn't the end result. The test is the journey. And that journey tells you more than you could ever deduce from a code analysis of a fabricated problem solved with insufficient input.
BTW: Google, StackOverflow, and even texting friends is allowed during these parts of the interview. I actually encourage them to go ahead and look things up online or find snippets to use if they're struggling with anything that they think they could find online. There's value in seeing how they can use resources to solve already-solved problems rather than hammering through everything.
At my company, we give applicants a week to complete a coding challenge before the final interview. All the interviewers (2-3 at our company) review and score these before the final interview. Yet, as long as the applicant submits code that (a) accomplishes the stated goal and (b) isn't obviously copy/pasted from somewhere else, we will proceed with the final interview.
However, we'll ask for explanations about that code during the final interview. Generally this involves us asking questions about design choices we noted in the review.
We also often have them fix a bug in it (there's always one or two) in front of us. This allows us to see them working on their code, instead of something entirely unfamiliar.
Thanks for the article! Definitely a good read :)
I came across an awesome blog by Aline Lerner about interviews, and I think it's totally relevant. I love her data-driven approach to figuring out interviews.
They're long blog posts, but worth every word. The one that hooked me in is this one: Technical interview performance is kind of arbitrary. Here's the data.