
To improve close rates for technical interviews, give applicants feedback (good or bad)


Aline Lerner

Contributor

Aline Lerner is founder and CEO of interviewing.io, where engineers go to practice technical interviewing.


Technical interviews are a black box — candidates are usually told whether they made it to the next round, but they rarely find out why.

Lack of feedback isn’t just frustrating for candidates; it’s also bad for business. Our research shows that 43% of all candidates consistently underrate their technical interview performance, and 25% of all candidates consistently think they failed when they actually passed.

Why do these numbers matter? Because giving instant feedback to successful candidates can do wonders for increasing your close rate.

Giving feedback not only makes candidates you want today more likely to join your team; it's also crucial to hiring the people you might want down the road. Technical interview outcomes are erratic, and according to our data, only about 25% of candidates perform consistently from interview to interview.

This means a candidate you reject today might be someone you want to hire in six months.

But won’t we get sued?

I surveyed founders, hiring managers, recruiters and labor lawyers to understand why anyone who's ever gone through interviewer training has been told, in no uncertain terms, not to give feedback.

The main reason: Companies are scared of being sued.

As it turns out, literally zero companies (at least in the U.S.) have ever been sued by an engineer who received constructive post-interview feedback.

A lot of cases are settled out of court, which makes that data much harder to get, but given what we know, the odds of being sued after giving useful feedback are extremely low.

What about candidates getting defensive?

For every interviewer on our platform, we track two key metrics: candidate experience and interviewer calibration.

The candidate experience score is a measure of how likely someone is to return after talking to a given interviewer. The interviewer calibration score tells us whether a given interviewer is too strict or too lenient, based on how well their candidates do in subsequent, real interviews. If someone continually gives good scores to candidates who fail real interviews, they’re too lenient, and vice versa.
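To make the calibration idea concrete, here's a minimal sketch in Python. It assumes each record pairs an interviewer's verdict with whether that candidate went on to pass a subsequent real interview; the function and data shape are hypothetical, not interviewing.io's actual formula.

```python
# Minimal sketch of an interviewer calibration (leniency) score.
# Each record is (interviewer_said_pass, passed_real_interview).
# This is an illustrative assumption, not interviewing.io's real metric.

def leniency(records: list[tuple[bool, bool]]) -> float:
    """Positive: too lenient (passes candidates who later fail).
    Negative: too strict. Near zero: well calibrated."""
    predicted_rate = sum(said_pass for said_pass, _ in records) / len(records)
    actual_rate = sum(passed for _, passed in records) / len(records)
    return predicted_rate - actual_rate


# Example: this interviewer passed 4 of 5 candidates, but only 2 of 5
# went on to pass real interviews, giving a leniency of +0.4.
history = [(True, True), (True, False), (True, True), (True, False), (False, False)]
print(leniency(history))
```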

When you put these scores together, you can reason about the value of delivering honest feedback. Below is a graph of the average candidate experience score as a function of interviewer accuracy, representing data from over 1,000 distinct interviewers (comprising about 100,000 interviews):

[Graph: The best-calibrated interviewers are also the best rated. Image Credits: Aline Lerner]

The candidate experience score peaks right at the point where interviewers are neither too strict nor too lenient but are, in Goldilocks terms, “just right.” It drops off pretty dramatically on either side of that point.

Based on our data, we’re confident that if you do it right, candidates won’t get defensive. The benefits of delivering honest feedback greatly outweigh the risks.

The playbook for delivering honest (and sometimes harsh) feedback

The first and most important thing is not to focus on the outcome. Rather, get specific right away — this keeps candidates from getting defensive and sets them up to actually hear and internalize the feedback.

Don’t tell a candidate whether they did well or poorly — just dive into a constructive, detailed performance assessment. Reframing feedback this way takes some practice, but candidates won’t press you for the outcome.

Instead, their attention will be redirected to the details, which will make the pass/fail part much more of an afterthought (and, in some cases, entirely moot). People don’t get defensive because they failed; they get defensive because they don’t understand why and feel powerless.

Post-interview questions to consider

  • Did they ask enough questions about constraints before getting into coding or before starting to design a system?
  • Go over specific code snippets or portions of their solution — what could they have done better?
  • Could their solution have been more efficient?
  • Did they discuss and reason about trade-offs?
  • Did they make mistakes when discussing time or space complexity? What were those mistakes?
  • Did they make any mistakes when trying to use their programming language of choice idiomatically (e.g., iterating in Python or JavaScript)? (See the sketch after this list.)
  • For systems design questions, did they jump to suggesting a specific database, load balancer or tool without reasoning why that tool is the right choice for the job?
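On the language-idiom point, here's one hypothetical example of the kind of slip that question is getting at, using Python iteration:

```python
# Both loops work, but the first reads as a C-style index loop rather
# than idiomatic Python. Noting slips like this makes feedback concrete.

items = ["alpha", "beta", "gamma"]

# Non-idiomatic: indexing via range(len(...))
for i in range(len(items)):
    print(i, items[i])

# Idiomatic: enumerate() yields the index and the element together
for i, item in enumerate(items):
    print(i, item)
```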

To answer these questions well and give constructive feedback, it’s critical to take time-stamped notes during the interview. You can always go back to your notes and say, “Hey, you jumped into coding just five minutes into the interview. Typically, you’ll want to spend a few minutes asking questions.”
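If you want something more structured than a text file, a minimal sketch of time-stamped note-taking might look like the following; the class and its shape are assumptions, and any notes tool with timestamps works just as well.

```python
# Minimal sketch: record notes tagged with minutes elapsed since the
# interview started, so feedback can cite specific moments.
import time

class InterviewNotes:
    def __init__(self):
        self.start = time.monotonic()
        self.entries: list[tuple[float, str]] = []

    def note(self, text: str) -> None:
        elapsed_min = (time.monotonic() - self.start) / 60
        self.entries.append((elapsed_min, text))

    def transcript(self) -> str:
        return "\n".join(f"[{m:5.1f} min] {text}" for m, text in self.entries)

notes = InterviewNotes()
notes.note("Jumped into coding without asking about input constraints")
print(notes.transcript())
```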

Specific feedback really does mean being specific. One of the kindest, albeit most labor-intensive, things you can do is walk through their code, point out places where they went astray, and note what they could have done better.

Another useful approach is to share objective benchmarks for a given interview question, with respect to both time and the number of hints given. Skilled interviewers layer complexity: After a candidate successfully solves a question, they’ll change the constraints in real time. If a candidate is blowing through your questions quickly, you may even do this three or four times during the interview.

This means you know exactly how many constraint changes you’ll be able to go through with a low-performing candidate, a mediocre one and someone who’s truly exceptional.
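As a toy illustration of that bookkeeping, you could encode the benchmark as a mapping from performance tier to the number of constraint layers typically completed; the tiers and numbers below are hypothetical.

```python
# Hypothetical benchmark: how many constraint layers each performance
# tier typically completes in the allotted time.
LAYERS_BY_TIER = {
    "low": 1,         # solves the base question, no follow-ups
    "mediocre": 2,    # one constraint change
    "exceptional": 4, # three or four constraint changes
}

def tier_for(layers_completed: int) -> str:
    """Return the highest tier whose benchmark the candidate met."""
    best = "below benchmark"
    for tier, layers in sorted(LAYERS_BY_TIER.items(), key=lambda kv: kv[1]):
        if layers_completed >= layers:
            best = tier
    return best

print(tier_for(2))  # -> "mediocre"
```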

Your candidates don’t know this, though. In fact, people commonly overestimate their performance in interviews because they don’t realize how many layers of complexity a question has.

In this scenario, a candidate may complete the first layer successfully right before time is called and walk away believing they did well, when in reality, the interviewer has benchmarked them against people who completed three layers in the same amount of time.

How do you put all of this information to practical use?

Let your candidates know what the benchmarks are for a top-performing candidate at the end of the interview. You could say something like, “In the 45 minutes we spent working on this problem, the strongest performers usually complete the brute-force solution in about 20 minutes, optimize it until it runs in linear time (which takes another 10 minutes), and then, in the last 15 minutes, successfully complete an enhancement where, instead of an array, your input is a stream of integers.”
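To make that benchmark concrete, here's a hypothetical question that fits the same arc; the article doesn't name a specific problem, so the choice of “do any two numbers in the input sum to a target?” is an assumption.

```python
# Layer 1: brute force. Layer 2: linear time. Layer 3: the input is a
# stream of integers instead of an array. A hypothetical question used
# only to illustrate the benchmark arc described above.
from typing import Iterable

def has_pair_brute_force(nums: list[int], target: int) -> bool:
    # Layer 1: O(n^2) brute force, checking every pair.
    return any(nums[i] + nums[j] == target
               for i in range(len(nums))
               for j in range(i + 1, len(nums)))

def has_pair_linear(nums: Iterable[int], target: int) -> bool:
    # Layer 2: O(n), remembering complements seen so far. Because this
    # is a single pass, it also handles Layer 3, where the input is a
    # stream of integers rather than an array.
    seen: set[int] = set()
    for n in nums:
        if target - n in seen:
            return True
        seen.add(n)
    return False

print(has_pair_brute_force([3, 9, 12, 5], 14))   # True (9 + 5)
print(has_pair_linear(iter([3, 9, 12, 5]), 14))  # works on a stream, too
```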

Also, let them know exactly how many hints are par for the course. Just as with how much time should elapse for each part of the interview, candidates have no idea what “normal” is when it comes to how many hints are typical and how detailed they are.

For instance, if a candidate needed a hint about which data structure to use, another about which time complexity is associated with that data structure, followed by a hint about a common off-by-one error that comes up, you may want to tell them that the strongest performers usually need a hint about one of those things, but not all three.

The key to communicating benchmarks constructively is, of course, to be as specific as possible with runtimes, space constraints or whatever success metric you’re using.

Some of our interviewers ask candidates to perform detailed self-assessments at the end of the interview before giving them feedback. This is an advanced technique, and if you’re new to giving synchronous feedback, I wouldn’t do it in your first few interviews.

However, once you become comfortable, this approach can be a great way to zero in on the areas where the candidate needs the most help.

If you do end up taking the self-assessment route, it’s good to ask candidates some leading questions. For instance, for algorithmic interviews, you can ask:

  • How well did you solve the problem and arrive at an optimized solution?
  • How clean was your code?
  • Where did you struggle?

While the candidate responds, take notes and then go through their points together, speaking to each point in detail. For instance, if a candidate rates themselves well on code quality but poorly on their ability to solve the problem, you can agree or disagree and give them benchmarks (as discussed above) for both.

In summary

  • Take detailed notes during the interview, ideally with time stamps, that you can refer to later.
  • Don’t lead with whether they passed or failed. Instead, be specific and constructive right away. This will divert the candidate’s attention away from the outcome and put them in the right headspace to receive feedback.
  • As much as possible, give objective benchmarks for performance. For instance, tell candidates that the strongest performers are usually able to finish part 1 within 20 minutes, part 2 within 10 minutes, and part 3 within 15 minutes, with one hint at most.
  • Once you become comfortable with giving feedback, you can try asking candidates to assess their own performance and then use it as a rubric that you can go down, point by point.
