To improve close rates for technical interviews, give applicants feedback (good or bad)

Aline Lerner

Contributor

Aline Lerner is founder and CEO of interviewing.io, where engineers go to practice technical interviewing.

Technical interviews are a black box — candidates are usually told whether they made it to the next round, but they rarely find out why.

Lack of feedback isn’t just frustrating for candidates; it’s also bad for business. Our research shows that 43% of all candidates consistently underrate their technical interview performance, and 25% of all candidates consistently think they failed when they actually passed.

Why do these numbers matter? Because giving instant feedback to successful candidates can do wonders for increasing your close rate.

Giving feedback not only makes the candidates you want today more likely to join your team, it’s also crucial to hiring the people you might want down the road. Technical interview outcomes are erratic, and according to our data, only about 25% of candidates perform consistently from interview to interview.

This means a candidate you reject today might be someone you want to hire in 6 months.

But won’t we get sued?

I surveyed founders, hiring managers, recruiters and labor lawyers to understand why anyone who’s ever gone through interviewer training has been told in no uncertain terms to not give feedback.

The main reason: Companies are scared of being sued.

As it turns out, literally zero companies (at least in the U.S.) have ever been sued by an engineer who received constructive post-interview feedback.

A lot of cases are settled out of court, which makes that data much harder to get, but given what we know, the odds of being sued after giving useful feedback are extremely low.

What about candidates getting defensive?

For every interviewer on our platform, we track two key metrics: candidate experience and interviewer calibration.

The candidate experience score is a measure of how likely someone is to return after talking to a given interviewer. The interviewer calibration score tells us whether a given interviewer is too strict or too lenient, based on how well their candidates do in subsequent, real interviews. If someone continually gives good scores to candidates who fail real interviews, they’re too lenient, and vice versa.
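
As a rough illustration (and not our exact scoring formula), the calibration check boils down to comparing an interviewer’s pass/fail verdicts against how the same candidates later performed in real interviews:

```python
# Rough sketch for illustration only, not interviewing.io's actual formula.
# verdicts and real_outcomes are parallel lists of booleans (True = pass).
def calibration_score(verdicts, real_outcomes):
    # Passed by this interviewer but failed real interviews: evidence of leniency.
    too_lenient = sum(v and not r for v, r in zip(verdicts, real_outcomes))
    # Failed by this interviewer but passed real interviews: evidence of strictness.
    too_strict = sum(r and not v for v, r in zip(verdicts, real_outcomes))
    # Positive = leaning lenient, negative = leaning strict, ~0 = well calibrated.
    return (too_lenient - too_strict) / max(len(verdicts), 1)

# This interviewer passed two candidates who went on to fail, so the score skews lenient.
print(calibration_score([True, True, False, True], [False, False, False, True]))  # 0.5
```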

When you put these scores together, you can reason about the value of delivering honest feedback. Below is a graph of the average candidate experience score as a function of interviewer accuracy, representing data from over 1,000 distinct interviewers (comprising about 100,000 interviews):

[Chart: The best-calibrated interviewers are also the best rated. Image Credits: Aline Lerner]

The candidate experience score peaks right at the point where interviewers are neither too strict nor too lenient, but are, in Goldilocks terms, “just right.” It drops off pretty dramatically on either side of that point.

Based on our data, we’re confident that if you do it right, candidates won’t get defensive. The benefits of delivering honest feedback greatly outweigh the risks.

The playbook for delivering honest (and sometimes harsh) feedback

The first and most important thing is to not focus on the outcome. Rather, get specific right away — this keeps candidates from getting defensive and sets them up to actually hear and internalize feedback.

Don’t tell a candidate whether they did well or did poorly — just dive into a constructive, detailed performance assessment. Reframing feedback in this way takes some practice, but candidates won’t push you to give them the outcome.

Instead, their attention will be redirected to the details, which will make the pass/fail part much more of an afterthought (and, in some cases, entirely moot). People don’t get defensive because they failed — it’s because they don’t understand why and feel powerless.

Post-interview questions to consider

  • Did they ask enough questions about constraints before getting into coding or before starting to design a system?
  • Go over specific code snippets or portions of their solution — what could they have done better?
  • Could their solution have been more efficient?
  • Did they discuss and reason about trade-offs?
  • Did they make mistakes when discussing time or space complexity? What were those mistakes?
  • Did they make any mistakes when trying to use their programming language of choice idiomatically (e.g., iterating in Python or JavaScript)? See the sketch after this list.
  • For systems design questions, did they jump to suggesting a specific database, load balancer or tool without reasoning why that tool is the right choice for the job?
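
To make the idiomatic-iteration point concrete, here’s a small, hypothetical Python example of the kind of thing worth calling out in feedback:

```python
prices = [3, 7, 2]

# Non-idiomatic: manually indexing into the list.
total = 0
for i in range(len(prices)):
    total += prices[i]

# Idiomatic: iterate over the elements directly (or just use sum()).
total = sum(prices)

# If the index is genuinely needed, enumerate() is the idiomatic choice.
for i, price in enumerate(prices):
    print(i, price)
```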

To answer these questions well and give constructive feedback, it’s critical to take time-stamped notes during the interview. You can always go back to your notes and say, “Hey, you jumped into coding just five minutes into the interview. Typically, you’ll want to spend a few minutes asking questions.”
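
The notes don’t need to be elaborate. A bare-bones sketch (the format here is just one way to do it) could be as simple as a list of time-stamped observations:

```python
from datetime import datetime

notes = []

def note(text):
    # Record the observation with a wall-clock time stamp so feedback can cite it later.
    notes.append((datetime.now().strftime("%H:%M"), text))

note("Jumped into coding ~5 minutes in, before asking about input size")
note("Picked a hash map but didn't explain why it beats a sorted list here")
```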

Specific feedback really does mean being specific. One of the kindest, albeit most labor-intensive, things you can do is walk through their code, point out places where they went astray, and note what they could have done better.

Another useful approach is to share objective benchmarks for a given interview question, both with respect to time and the number of hints given. Skilled interviewers layer complexity: After a candidate successfully solves a question, they’ll change the constraints in real time. If a candidate is blowing through your questions quickly, you may even do this three or four times during the interview.

This means you know exactly how many constraint changes you’ll be able to go through with a low-performing candidate, a mediocre one and someone who’s truly exceptional.

Your candidates don’t know this, though. In fact, people commonly overestimate their performance in interviews because they don’t realize how many layers of complexity a question has.

In this scenario, a candidate may finish the first layer successfully right before time is called and walk away believing they did well, when in reality, the interviewer has benchmarked them against people who completed three layers in the same amount of time.

How do you put all of this information to practical use?

Let your candidates know what the benchmarks are for a top-performing candidate at the end of the interview. You could say something like, “In the 45 minutes we spent working on this problem, the strongest performers usually complete the brute-force solution in about 20 minutes, optimize it until it runs in linear time (which takes another 10 minutes), and then, in the last 15 minutes, successfully complete an enhancement where, instead of an array, your input is a stream of integers.”
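
To make the layering concrete, here’s roughly what the three layers of a question like that could look like in code. The specific problem (maximum subarray sum) is just a stand-in I’m using for illustration, not the question in the quote:

```python
# Layer 1: brute force, O(n^2) -- try every subarray.
def max_subarray_brute(nums):
    best = nums[0]
    for i in range(len(nums)):
        running = 0
        for j in range(i, len(nums)):
            running += nums[j]
            best = max(best, running)
    return best

# Layer 2: linear time (Kadane's algorithm).
def max_subarray_linear(nums):
    best = current = nums[0]
    for x in nums[1:]:
        current = max(x, current + x)
        best = max(best, current)
    return best

# Layer 3: same recurrence, but the input arrives as a stream of integers
# rather than an array held in memory.
def max_subarray_stream(stream):
    it = iter(stream)
    best = current = next(it)
    for x in it:
        current = max(x, current + x)
        best = max(best, current)
    return best
```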

Also, let them know exactly how many hints are par for the course. Just as with how much time should elapse for different parts of the interview, candidates have no idea what “normal” is when it comes to the number of hints and how detailed they are.

For instance, if a candidate needed a hint about which data structure to use, another about which time complexity is associated with that data structure, followed by a hint about a common off-by-one error that comes up, you may want to tell them that the strongest performers usually need a hint about one of those things, but not all three.
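
And when the off-by-one hint does come up, you can show the candidate exactly what it looked like. A typical (hypothetical) case is stopping one element short when walking adjacent pairs:

```python
nums = [1, 3, 2, 5]

# Off by one: range(len(nums) - 2) misses the last adjacent pair (2, 5).
pairs_wrong = [(nums[i], nums[i + 1]) for i in range(len(nums) - 2)]

# Correct: range(len(nums) - 1) visits every adjacent pair exactly once.
pairs_right = [(nums[i], nums[i + 1]) for i in range(len(nums) - 1)]

print(pairs_wrong)  # [(1, 3), (3, 2)]
print(pairs_right)  # [(1, 3), (3, 2), (2, 5)]
```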

The key to communicating benchmarks constructively is, of course, to be as specific as possible with runtimes, space constraints or whatever success metric you’re using.

Some of our interviewers ask candidates to perform detailed self-assessments at the end of the interview before giving them feedback. This is an advanced technique, and if you’re new to giving synchronous feedback, I wouldn’t do it in your first few interviews.

However, once you become comfortable, this approach can be a great way to zero in on the areas where the candidate needs the most help.

If you do end up taking the self-assessment route, it’s good to guide candidates with a few pointed questions. For instance, for algorithmic interviews, you can ask:

  • How well did you solve the problem and arrive at an optimized solution?
  • How clean was your code?
  • Where did you struggle?

While the candidate responds, take notes and then go through their points together, speaking to each point in detail. For instance, if a candidate rates themselves well on code quality but poorly on their ability to solve the problem, you can agree or disagree and give them benchmarks (as discussed above) for both.

In summary

  • Take detailed notes during the interview, ideally with time stamps, that you can refer to later.
  • Don’t lead with whether they passed or failed. Instead, be specific and constructive right away. This will divert the candidate’s attention away from the outcome and put them in the right headspace to receive feedback.
  • As much as possible, give objective benchmarks for performance. For instance, tell candidates that the strongest performers are usually able to finish part 1 within 20 minutes, part 2 within 10 minutes, and part 3 within 15 minutes, with one hint at most.
  • Once you become comfortable with giving feedback, you can try asking candidates to assess their own performance and then use their self-assessment as a rubric that you can go through, point by point.
