Making AI trustworthy: Can we overcome black-box hallucinations?


Mike Capps

Contributor

Dr. Mike Capps is CEO and co-founder of ethical AI startup Diveplane and former president of Epic Games.

Like most engineers, as a kid I could answer elementary school math problems by just filling in the answers.

But when I didn’t “show my work,” my teachers would dock points; the right answer wasn’t worth much without an explanation. Yet, those lofty standards for explainability in long division somehow don’t seem to apply to AI systems, even those making crucial, life-impacting decisions.

The major AI players that fill today’s headlines and feed stock market frenzies — OpenAI, Google, Microsoft — operate their platforms on black-box models. A query goes in one side and an answer spits out the other side, but we have no idea what data or reasoning the AI used to provide that answer.

Most of these black-box AI platforms are built on a decades-old technology framework called a “neural network.” These AI models are abstract representations of the vast amounts of data on which they are trained; they are not directly connected to training data. Thus, black-box AIs infer and extrapolate based on what they believe to be the most likely answer, not actual data.

Sometimes this complex predictive process spirals out of control and the AI “hallucinates.” Black-box AI is inherently untrustworthy because it cannot be held accountable for its actions. If you can’t see why or how the AI makes a prediction, you have no way of knowing whether it relied on false, compromised, or biased information or algorithms to reach that conclusion.

While neural networks are incredibly powerful and here to stay, there is another under-the-radar AI framework gaining prominence: instance-based learning (IBL). And it’s everything neural networks are not. IBL is AI that users can trust, audit, and explain. IBL traces every single decision back to the training data used to reach that conclusion.

IBL can explain every decision because the AI does not generate an abstract model of the data, but instead makes decisions from the data itself. And users can audit AI built on IBL, interrogating it to find out why and how it made decisions, and then intervening to correct mistakes or bias.

This all works because IBL stores training data (“instances”) in memory and, following the principle of “nearest neighbors,” makes predictions about new instances based on their proximity to existing instances in the feature space. IBL is data-centric, so individual data points can be compared directly against one another to gain insight into the dataset and the predictions. In other words, IBL “shows its work.”
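The “shows its work” property falls out of the mechanism: a prediction is just a lookup over stored instances, so the evidence can be returned alongside the answer. Here is a minimal nearest-neighbors sketch in Python with invented toy data (an illustration of the general technique, not Diveplane’s implementation):

```python
import math

# Toy training set: each instance is ((features), label).
# IBL keeps these raw instances in memory; there is no abstract model.
training = [
    ((1.0, 1.0), "approve"),
    ((1.2, 0.9), "approve"),
    ((4.0, 4.2), "deny"),
    ((3.8, 4.0), "deny"),
]

def predict(query, k=3):
    """Return (label, neighbors): the majority label among the k nearest
    training instances, plus those instances themselves as the audit trail."""
    ranked = sorted(training, key=lambda inst: math.dist(query, inst[0]))
    neighbors = ranked[:k]
    labels = [label for _, label in neighbors]
    return max(set(labels), key=labels.count), neighbors

label, evidence = predict((1.1, 1.0))
print(label)     # the decision
print(evidence)  # the exact training rows that produced it
```

A neural network can only return the first value; an instance-based learner can always return the second as well, which is what makes auditing and correction possible.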

The potential for such understandable AI is clear. Companies, governments, and any other regulated entities that want to deploy AI in a trustworthy, explainable, and auditable way could use IBL AI to meet regulatory and compliance standards. IBL AI will also be particularly useful for any applications where bias allegations are rampant — hiring, college admissions, legal cases, and so on.

Companies are using IBL in the wild today. My company has built a commercial IBL framework used by customers such as large financial institutions to detect anomalies across customer data and generate auditable synthetic data that complies with the EU’s General Data Protection Regulation (GDPR).

Of course, IBL is not without challenges. The main limiting factor for IBL is scalability, which was also a challenge that neural networks faced for 30 years until modern computing technology made them feasible. With IBL, each piece of data must be queried, cataloged, and stored in memory, which becomes harder as the dataset grows.

However, researchers are creating fast-query systems based on advances in information theory to significantly speed up this process. These techniques have made IBL computationally competitive with neural networks.
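One classic family of fast-query structures illustrates how nearest-neighbor lookups can avoid scanning every stored instance: the k-d tree, which partitions the space so whole subtrees can be pruned during a query. A minimal sketch of the idea (a generic textbook structure, not any vendor’s system):

```python
import math

# Minimal k-d tree: split alternately on each coordinate so a nearest-
# neighbor query can prune whole subtrees instead of scanning every
# stored instance.

def build(points, depth=0):
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "axis": axis,
        "left": build(points[:mid], depth + 1),
        "right": build(points[mid + 1:], depth + 1),
    }

def nearest(node, query, best=None):
    if node is None:
        return best
    point, axis = node["point"], node["axis"]
    if best is None or math.dist(query, point) < math.dist(query, best):
        best = point
    diff = query[axis] - point[axis]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(near, query, best)
    # Visit the far subtree only if the splitting plane is closer than
    # the best match found so far -- this pruning is the speedup.
    if abs(diff) < math.dist(query, best):
        best = nearest(far, query, best)
    return best

tree = build([(1.0, 1.0), (2.0, 2.0), (9.0, 9.0), (8.5, 9.5)])
print(nearest(tree, (1.2, 1.1)))  # -> (1.0, 1.0)
```

On well-distributed data this turns a linear scan into a roughly logarithmic search, which is the kind of gain that makes large instance stores practical.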

Despite these challenges, the potential for IBL is clear. As more and more companies seek safe, explainable, and auditable AI, black-box neural networks will no longer cut it. So, if you run a company — whether a small startup or a larger enterprise — here are some practical tips to start deploying IBL today:

Adopt an agile and open mindset

With IBL, it works best to explore your data for the insights it can give you, rather than assigning it a particular task, such as “predict the optimal price” of an item. Keep an open mind and let IBL guide your learnings. IBL may tell you that it can’t predict an optimal price very well from a given dataset but can predict the times of day people make the most purchases, or how they contact your company, and what items they are most likely to buy.

IBL is an agile AI framework that requires collaborative communication between decision-makers and data science teams — not the usual “toss a question over the transom, wait for your answer” that we see in many organizations deploying AI today.

Think “less is more” for AI models

In traditional black-box AI, a single model is trained and optimized for a single task, such as classification. In a large enterprise, this might mean there are thousands of AI models to manage, which is both expensive and unwieldy. In contrast, IBL enables versatile, multitask analysis. For example, a single IBL model can be used for supervised learning, anomaly detection, and synthetic data generation, while still providing full explainability.
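To make the multitask point concrete, here is a rough sketch of one instance store serving all three tasks at once. The toy data and helper names are invented for illustration and are not any product’s API:

```python
import math
import random

# One in-memory instance store serving three tasks. Toy data only.
instances = [
    ((1.0, 1.0), "approve"),
    ((1.1, 0.9), "approve"),
    ((4.0, 4.1), "deny"),
    ((3.9, 4.3), "deny"),
]

def nearest(query, k):
    return sorted(instances, key=lambda inst: math.dist(query, inst[0]))[:k]

def classify(query, k=3):
    # Supervised learning: majority label among the k nearest instances.
    labels = [label for _, label in nearest(query, k)]
    return max(set(labels), key=labels.count)

def anomaly_score(query, k=2):
    # Anomaly detection: mean distance to the k nearest instances;
    # far from everything previously seen means anomalous.
    return sum(math.dist(query, p) for p, _ in nearest(query, k)) / k

def synthesize():
    # Synthetic data: interpolate between a stored instance and its
    # closest neighbor to produce a plausible new record.
    p, label = random.choice(instances)
    q, _ = nearest(p, 2)[1]  # nearest instance other than p itself
    t = random.random()
    return tuple(a + t * (b - a) for a, b in zip(p, q)), label

print(classify((1.0, 1.1)))                                   # -> approve
print(anomaly_score((9.0, 9.0)) > anomaly_score((1.0, 1.0)))  # -> True
```

The design choice is the point: because all three functions query the same raw instances, there is one artifact to govern, audit, and update instead of three separately trained models.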

This means IBL users can build and maintain fewer models, enabling a leaner, more adaptable AI toolbox. So if you’re adopting IBL, you need programmers and data scientists, but you don’t need to invest in tons of PhDs with AI experience.

Mix up your AI tool set

Neural networks are great for any applications that don’t need to be explained or audited. But when AI is helping companies make big decisions, such as whether to spend millions of dollars on a new product or complete a strategic acquisition, it must be explainable. And even when AI is used to make smaller decisions, such as whether to hire a candidate or give someone a promotion, explainability is key. No one wants to hear they missed out on a promotion based on an inexplicable, black-box decision.

And companies will soon face litigation over exactly these kinds of decisions. Choose your AI framework based on the application: go with neural nets if you just want fast data ingestion and quick decision-making, and use IBL when you need trustworthy, explainable, and auditable decisions.

Instance-based learning is not a new technology. Over the last two decades, computer scientists have developed IBL in parallel with neural networks, but IBL has received less public attention. Now IBL is gaining new notice amid today’s AI arms race. IBL has proven it can scale while maintaining explainability — a welcome alternative to hallucinating neural nets that spew out false and unverifiable information.

With so many companies blindly adopting neural network–based AI, the next year will undoubtedly see many data leaks and lawsuits over bias and misinformation claims.

Once the mistakes made by black-box AI begin hitting companies’ reputations — and bottom lines! — I expect that slow-and-steady IBL will have its moment in the sun. We all learned the importance of “showing our work” in elementary school, and we can certainly demand that same rigor from AI that decides the paths of our lives.
