Women in AI: Heidy Khlaaf, safety engineering director at Trail of Bits


To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Heidy Khlaaf is an engineering director at the cybersecurity firm Trail of Bits. She specializes in evaluating software and AI implementations within “safety critical” systems, like nuclear power plants and autonomous vehicles.

Khlaaf received her PhD in computer science from University College London and her BS in computer science and philosophy from Florida State University. She’s led safety and security audits, provided consultations and reviews of assurance cases, and contributed to the creation of standards and guidelines for safety- and security-related applications and their development.

Q&A

Briefly, how did you get your start in AI? What attracted you to the field?

I was drawn to robotics at a very young age and started programming at 15, as I was fascinated by the prospect of using robotics and AI (as they’re inextricably linked) to automate workloads where they’re most needed: in manufacturing, in caring for the elderly, and in automating dangerous manual labor in our society. I did, however, receive my PhD in a different subfield of computer science, because I believe that a strong theoretical foundation in computer science allows you to make educated, scientific decisions about where AI may or may not be suitable, and where the pitfalls may lie.

What work are you most proud of in the AI field?

Using my expertise and background in safety engineering and safety-critical systems to provide context and criticism where needed on the new field of AI “safety.” Although the field of AI safety has attempted to adapt and cite well-established safety and security techniques, various terminology has been misconstrued in its use and meaning. There is a lack of consistent or intentional definitions, which compromises the integrity of the safety techniques the AI community is currently using. I’m particularly proud of “Toward Comprehensive Risk Assessments and Assurance of AI-Based Systems” and “A Hazard Analysis Framework for Code Synthesis Large Language Models,” where I deconstruct false narratives about safety and AI evaluations and provide concrete steps toward bridging the safety gap within AI.

How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?

We don’t often discuss how little the status quo has changed, but I believe that acknowledgment is important for me and other technical women: it helps us understand our position within the industry and hold a realistic view of the changes required. Retention rates and the ratio of women holding leadership positions have remained largely the same since I joined the field over a decade ago. And as TechCrunch has aptly pointed out, despite tremendous breakthroughs and contributions by women within AI, we remain sidelined from conversations that we ourselves have defined. Recognizing this lack of progress helped me understand that building a strong personal community is a far more valuable source of support than relying on DEI initiatives, which unfortunately have not moved the needle, given that bias and skepticism toward technical women are still pervasive in tech.

What advice would you give to women seeking to enter the AI field?

Not to appeal to authority, and to find a line of work that you truly believe in, even if it contradicts popular narratives. Given the power AI labs hold politically and economically at the moment, there is an instinct to take anything AI “thought leaders” state as fact, when it is often the case that many AI claims are marketing speak that overstates the abilities of AI to benefit a bottom line. Yet I see significant hesitancy, especially among junior women in the field, to voice skepticism about claims made by their male peers that cannot be substantiated. Imposter syndrome has a strong hold on women within tech and leads many to doubt their own scientific integrity. But it is more important than ever to challenge claims that exaggerate the capabilities of AI, especially those that are not falsifiable under the scientific method.

What are some of the most pressing issues facing AI as it evolves?

Regardless of the advancements we’ll observe in AI, it will never be the singular solution, technologically or socially, to our issues. Currently there is a trend to shoehorn AI into every possible system, regardless of its effectiveness (or lack thereof) across numerous domains. AI should augment human capabilities rather than replace them, yet we are witnessing a complete disregard of AI’s pitfalls and failure modes, which is leading to real, tangible harm. Just recently, the AI system ShotSpotter led to an officer firing at a child.

What are some issues AI users should be aware of?

How truly unreliable AI is. AI algorithms are notoriously flawed, with high error rates observed across applications that require precision, accuracy and safety. The way AI systems are trained embeds human bias and discrimination in their outputs, which then become “de facto” and automated. This is because the nature of AI systems is to provide outcomes based on statistical and probabilistic inferences and correlations from historical data, not on any type of reasoning, factual evidence or “causation.”

What is the best way to responsibly build AI?

To ensure that AI is developed in a way that protects people’s rights and safety, we need to construct verifiable claims and hold AI developers accountable to them. These claims should also be scoped to a regulatory, safety, ethical or technical application, and they must be falsifiable; otherwise, there is a significant lack of scientific integrity with which to appropriately evaluate these systems. Independent regulators should also assess AI systems against these claims, as is currently required for many products and systems in other industries — for example, those evaluated by the FDA. AI systems should not be exempt from the standard auditing processes that are well established to ensure public and consumer protection.

How can investors better push for responsible AI?

Investors should engage with and fund organizations that are seeking to establish and advance auditing practices for AI. Most funding is currently invested in AI labs themselves, on the belief that their safety teams are sufficient for the advancement of AI evaluations. However, independent auditors and regulators are key to public trust: independence allows the public to trust the accuracy and integrity of assessments and of regulatory outcomes.
