I've been speaking lately to a lot of engineers, and I tend to complain about the current state of technical interviewing without offering a solution. For that reason, I am excited to introduce a concept I've been working on for 15+ years of observing and researching hiring and interviews. It's not a full solution, but it is another tool for your toolbelt.
Introducing the FATE Metrics
An aggregate of metrics for engineers and hiring managers to assess their technical interview process.
The FATE Metrics, which stands for Feedback, Accuracy, Time, and Effort, is a set of metrics designed to benchmark a technical interview process by focusing on the candidate's experience. It was conceived with software engineering in mind but is applicable across all roles and industries. By improving the hiring process, companies can ensure they're attracting and retaining the best talent, ultimately shaping the FATE of their organization and the candidate's career. In this blog post, I will provide an overview of the FATE Metrics and explain how they can benefit both candidates and organizations.
Each metric has a score of 0–10, where 0 is "Very low", 5 is "Medium", and 10 is "Very high".
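As an illustration, the scoring scale can be captured in a few lines of code. The `FATEScore` class and the unweighted-mean aggregation below are my own assumptions for the sketch; the metrics themselves don't prescribe how (or whether) to combine the four scores:

```python
from dataclasses import dataclass, fields


@dataclass(frozen=True)
class FATEScore:
    """Hypothetical container for one interview process, each metric scored 0-10."""
    feedback: int
    accuracy: int
    time: int
    effort: int

    def __post_init__(self):
        # Enforce the 0-10 scale described above for every metric.
        for f in fields(self):
            value = getattr(self, f.name)
            if not 0 <= value <= 10:
                raise ValueError(f"{f.name} must be between 0 and 10, got {value}")

    def overall(self) -> float:
        """Unweighted mean -- one possible way to aggregate the four metrics."""
        return (self.feedback + self.accuracy + self.time + self.effort) / 4


# Example: great feedback, decent accuracy, but a slow, high-effort pipeline.
score = FATEScore(feedback=9, accuracy=7, time=3, effort=4)
print(score.overall())  # 5.75
```

A company benchmarking itself could record one such score per completed interview process and track how the per-metric numbers move over time, rather than fixating on the aggregate.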
Feedback (F)
Feedback measures the usefulness of the feedback given on rejections, both from the company to the candidate and vice versa. Here, however, we'll focus on the candidate's experience.
An ideal feedback score would include a detailed, personalized response from the hiring manager, helping the candidate understand why they did or did not secure the position. To provide effective feedback, it should be objective, constructive, guided, paced, and encouraging, as discussed in various research papers on effective feedback (see references).
Useful Feedback translates to a high score.
Very low score: The software engineering candidate receives a generic rejection email from a no-reply address with no information on their performance during the interview process.
Medium score: The software engineering candidate receives an email, from an address that accepts replies, informing them that the company has decided not to continue with the process. The candidate replies asking for specific feedback on how they can improve; the company responds only that they decided to go with another candidate. The candidate requests more specific feedback, but the company does not reply.
Very high score: A software engineering candidate receives a personalized email from the hiring manager detailing the strengths and areas for improvement observed during the interview process, with specific examples and guidance for future growth. It doesn't matter if the response is automated as long as it's crafted to the candidate's circumstances. The company asks for feedback from the candidate on how they can improve their own process.
Accuracy (A)
Accuracy evaluates how closely the interview process aligns with the job requirements of the role the candidate is applying for.
A high-accuracy interview process involves simulations or real-life problem-solving exercises that closely mirror the candidate's future responsibilities rather than generic or unrelated coding challenges. A low-accuracy interview is based on irrelevant l33t code and abstract technical questions whose answers can easily be looked up online. High accuracy is grounded in real-life work.
High Accuracy translates to a high score.
Very low score: A software engineering candidate is asked to solve abstract algorithmic problems during the interview, which are unrelated to the daily tasks they would perform in the role. In this kind of interview, interviewers pose contrived questions and expect the candidate to ask the right questions to remove ambiguity and get to a solution. It gives an unreasonable advantage to low-skill engineers who have trained for this kind of interview or read the book Cracking the Coding Interview. Examples are "Given a bus, how many baseballs would fit in it?" or "Given points x, y and z in a multi-dimensional dataset, what is the shortest distance between x and z?".
Medium score: A software engineer is shortlisted based on their CV by a human and is invited to participate in a coding session with another engineer and asked to solve a domain problem that is representative of the actual work they will be doing. However, the work does not match their skills or the expectations for the role.
Very high score: A software engineering candidate is filtered through self-assessment of competencies and Big 5 assessment, and that information is compared to the team's requirements using the findings of the relevant scientific research on the topic. Once shortlisted, they participate in a pair programming or solo programming session with a member of the team they're applying for. The candidate is given a real-life domain scenario that is representative of the actual work they will be doing in the role, but without the interviewer knowing the solution for the problem.
Examples are "We work in an internal developer tools team, and we've been asked by other devs to design an API to get information about the weather; where would you start?" or "We work in a customer-facing website for payments, we've been asked to implement a gateway to interact with multiple payment providers, where would you start?"
Time (T)
Time refers to the duration of the interview process until both parties reach an outcome.
An efficient process is short, with minimal rounds and a limited number of people involved. Lengthy, convoluted processes can cause confusion and negatively impact the candidate’s experience.
A quick process translates to a high score.
Very low score: A software engineering candidate goes through multiple rounds of interviews, spanning several weeks or even months, with no clear communication regarding the next steps or the timeline to a conclusion. Asking for an update by email yields no reply.
Medium score: A software engineering candidate goes through several rounds of interviews over the course of a few weeks, with some communication about the next steps and timeline. However, there may be occasional delays or lack of clarity, and the candidate might need to follow up with the company for updates.
Very high score: A software engineering candidate completes a streamlined interview process. After their application, there's one shortlist stage and then one or two more stages to receive a decision within less than a week.
Quicker interviews tend to yield low accuracy when done carelessly. A well-designed interview process, however, is both quick and highly accurate.
Effort (E)
Effort assesses the amount of work required from the candidate during the interview process. A low-effort process might involve answering pre-set questions, with a decision made simply by talking to your peers. On the other hand, a high-effort process could require take-home assignments or extensive open-ended questions and platform exercises like HackerRank.
A low-effort process translates to a high score.
Very low score: A software engineering candidate is asked to complete a lengthy HackerRank and take-home assignment with no compensation for their time and effort. Later, they need to travel in person to the company HQ to have a set of 6 interview rounds with 12 different people. The in-person interview spans multiple days.
Medium score: A software engineering candidate goes through a moderately demanding interview process that includes a combination of online assessments and a few rounds of interviews with different team members. The process may require a reasonable amount of effort but is not overly demanding or time-consuming.
Very high score: A software engineering candidate is shortlisted after a skill self-assessment and may get a response straight away, with the possibility of talking to the hiring manager if there has been a mistake. After shortlisting, the remote interview process balances the candidate's effort against the value of the insights gained, for example through a well-structured technical interview. An interview with another engineer tests a sample of the aspects the candidate self-assessed, cross-verifying some of the responses but not all, followed by a behavioural interview.
The FATE Metrics provide an objective measurement tool for sensing the quality of the interview process in software engineering and beyond, with the ultimate goal of achieving a 10/10 score in each category.
However, these should be used like the instruments of a plane: they help you sense the environment, not evaluate it completely, and achieving 10/10 doesn't mean your interview magically becomes perfect. By embracing the FATE Metrics, companies can identify areas where their hiring process may be falling short and make targeted improvements that will benefit not only their organization but also the candidates and, potentially, the industry as a whole.
As the landscape of software engineering and other industries continues to evolve rapidly, attracting and retaining top talent becomes increasingly crucial. By employing the FATE Metrics in the hiring process, companies can ensure they’re providing the best possible experience for candidates, creating a positive impression and setting the stage for a successful working relationship. In turn, this will help to shape the FATE of the organization, the candidate’s career, and even the industry as a whole, promoting growth and innovation in the long run.
If you want me to come over and help you with your interview process, contact me on Twitter, LinkedIn or Github.
Bonus: I plan to publish an archive containing years of interview processes analysed using the FATE metrics. Also, a list of questions in which organizations can self-assess their interview process.
Stay tuned by subscribing! 📻
Thanks for reading. If you have feedback, contact me on Twitter, LinkedIn or Github.
References
- Poston, K. L. (2013). Feedback That Sticks: The Art of Effectively Communicating Neuropsychological Assessment Results. Oxford University Press.
- Costa, P. T., Jr., & McCrae, R. R. (1984). Personality in adulthood: A six-year longitudinal study of self-reports and spouse ratings on the NEOPI. Journal of Personality and Social Psychology, 48(2), 159–165. www.jstor.org/stable/2095160
- Dreyfus, H. L., & Dreyfus, S. E. (1980). A five-stage model of the mental activities involved in directed skill acquisition. Operations Research Center, University of California, Berkeley.