Are AI models doomed to always hallucinate?

Large language models (LLMs) like OpenAI’s ChatGPT all suffer from the same problem: they make stuff up.

The mistakes range from strange and innocuous — like claiming that the Golden Gate Bridge was transported across Egypt in 2016 — to highly problematic, even dangerous.

A mayor in Australia recently threatened to sue OpenAI because ChatGPT mistakenly claimed he pleaded guilty in a major bribery scandal. Researchers have found that LLM hallucinations can be exploited to distribute malicious code packages to unsuspecting software developers. And LLMs frequently give bad mental health and medical advice, like that wine consumption can “prevent cancer.”

This tendency to invent “facts” is a phenomenon known as hallucination, and it happens because of the way today’s LLMs — and all generative AI models, for that matter — are developed and trained.

Training models

Generative AI models have no real intelligence — they’re statistical systems that predict words, images, speech, music or other data. Fed an enormous number of examples, usually sourced from the public web, AI models learn how likely data is to occur based on patterns, including the context of any surrounding data.

For example, given a typical email ending in the fragment “Looking forward…”, an LLM might complete it with “… to hearing back” — following the pattern of the countless emails it’s been trained on. It doesn’t mean the LLM is looking forward to anything.

“The current framework of training LLMs involves concealing, or ‘masking,’ previous words for context” and having the model predict which word should follow this context, Sebastian Berns, a Ph.D. researcher at Queen Mary University of London, told TechCrunch in an email interview. “This is conceptually similar to using predictive text in iOS and continually pressing one of the suggested next words.”

This probability-based approach works remarkably well at scale — for the most part. But while sampling from the range of likely words is likely to produce text that makes sense, it’s far from guaranteed.
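The prediction step Berns describes can be boiled down to sampling from a probability distribution over possible next words. Here’s a minimal, toy sketch — the distribution below is invented for illustration and is nothing like the one a real LLM computes:

```python
import random

# Toy next-token distribution, standing in for what a trained LLM
# might estimate after the fragment "Looking forward". These
# probabilities are made up for illustration only.
next_token_probs = {
    "to": 0.80,
    "with": 0.10,
    "back": 0.07,
    "Egypt": 0.03,  # implausible continuations still carry nonzero mass
}

def sample_next_token(probs: dict) -> str:
    """Pick the next token in proportion to its estimated probability."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

# Most samples follow the familiar pattern, but nothing in the
# mechanism rules out a nonsensical one: the model knows only
# likelihoods, not truth.
print(sample_next_token(next_token_probs))
```

The point of the sketch is that there is no truth check anywhere in the loop — only relative likelihood, which is exactly why a low-probability but wrong continuation occasionally surfaces.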

LLMs can generate something that’s grammatically correct but nonsensical, for instance — like the Golden Gate Bridge claim above. Or they can spout mistruths, propagating inaccuracies in their training data. Or they can conflate different sources of information, including fictional sources, even if those sources clearly contradict each other.

It’s not malicious on the LLMs’ part. They don’t have malice, and the concepts of true and false are meaningless to them. They’ve simply learned to associate certain words or phrases with certain concepts, even if those associations aren’t accurate.

“‘Hallucinations’ are connected to the inability of an LLM to estimate the uncertainty of its own prediction,” Berns said. “An LLM is typically trained to always produce an output, even when the input is very different from the training data. A standard LLM does not have any way of knowing if it’s capable of reliably answering a query or making a prediction.”

Solving hallucination

The question is, can hallucination be solved? It depends on what you mean by “solved.”

Vu Ha, an applied researcher and engineer at the Allen Institute for Artificial Intelligence, asserts that LLMs “do and will always hallucinate.” But he also believes there are concrete ways to reduce — albeit not eliminate — hallucinations, depending on how an LLM is trained and deployed. 

“Consider a question answering system,” Ha said via email. “It’s possible to engineer it to have high accuracy by curating a high-quality knowledge base of questions and answers, and connecting this knowledge base with an LLM to provide accurate answers via a retrieval-like process.”
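The retrieval-like process Ha describes can be sketched in a few lines. This is a hedged toy, not his system: the knowledge base, the word-overlap similarity measure, and the threshold are all illustrative assumptions standing in for the vector search and curated corpus a production setup would use.

```python
# Toy knowledge base mapping curated questions to vetted answers.
# Entries are illustrative; a real system would hold many more.
knowledge_base = {
    "authors of the toolformer paper": "the eight co-authors at Meta AI",
    "capital of france": "Paris",
}

def word_overlap(query: str, entry: str) -> float:
    """Crude similarity: fraction of the entry's words found in the query."""
    q, e = set(query.lower().split()), set(entry.lower().split())
    return len(q & e) / max(len(e), 1)

def answer(query: str, threshold: float = 0.6) -> str:
    """Answer from the curated knowledge base when a close match exists."""
    best = max(knowledge_base, key=lambda k: word_overlap(query, k))
    if word_overlap(query, best) >= threshold:
        return knowledge_base[best]
    # Falling back to "I don't know" beats inventing an answer.
    return "I don't know"
```

The design choice worth noting is the fallback: grounding the model in retrieved, curated answers — and refusing when retrieval finds nothing close — is what keeps the system from free-associating a plausible-sounding wrong answer.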

Ha illustrated the difference between an LLM with a “high-quality” knowledge base to draw on versus one with less careful data curation. He ran the question “Who are the authors of the Toolformer paper?” (Toolformer is an AI model trained by Meta) through Microsoft’s LLM-powered Bing Chat and Google’s Bard. Bing Chat correctly listed all eight Meta co-authors, while Bard misattributed the paper to researchers at Google and Hugging Face.

“Any deployed LLM-based system will hallucinate. The real question is if the benefits outweigh the negative outcome caused by hallucination,” Ha said. In other words, if there’s no obvious harm done by a model — the model gets a date or name wrong once in a while, say — but it’s otherwise helpful, then it might be worth the trade-off. “It’s a question of maximizing expected utility of the AI,” he added.

Berns pointed out another technique that has been used with some success to reduce hallucinations in LLMs: reinforcement learning from human feedback (RLHF). Introduced by OpenAI in 2017, RLHF involves training an LLM, then gathering additional information to train a “reward” model and fine-tuning the LLM with the reward model via reinforcement learning.

In RLHF, a set of prompts from a predefined dataset is passed through an LLM to generate new text. Human annotators then rank the outputs from the LLM in terms of their overall “helpfulness” — data that’s used to train the reward model. The reward model, which at this point can take in any text and assign it a score reflecting how favorably humans would perceive it, is then used to fine-tune the LLM’s generated responses.
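The reward-model step in that pipeline can be sketched as follows. A real reward model is a neural network trained on human rankings; this stand-in scores text with a hand-written heuristic purely to show where the reward signal plugs into the loop — the scoring rules here are assumptions for illustration, not how any production reward model works.

```python
# Stand-in for a trained reward model: maps candidate text to a
# scalar "helpfulness" score. The rules below are illustrative only.
def reward_model(text: str) -> float:
    score = 0.0
    if text.strip():
        score += 1.0   # a substantive answer beats an empty one
    if "i don't know" in text.lower():
        score += 0.5   # honest uncertainty is rewarded over bluffing
    return score

def pick_best(candidates: list) -> str:
    """In real RLHF, fine-tuning nudges the LLM toward outputs the reward
    model prefers; here we simply select the highest-scoring candidate."""
    return max(candidates, key=reward_model)
```

In actual RLHF the reward score drives a reinforcement-learning update to the LLM’s weights rather than a simple selection, but the shape of the loop — generate, score, prefer — is the same.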

OpenAI leveraged RLHF to train several of its models, including GPT-4. But even RLHF isn’t perfect, Berns warned.

“I believe the space of possibilities is too large to fully ‘align’ LLMs with RLHF,” Berns said. “Something often done in the RLHF setting is training a model to produce an ‘I don’t know’ answer [to a tricky question], primarily relying on human domain knowledge and hoping the model generalizes it to its own domain knowledge. Often it does, but it can be a bit finicky.”

Alternative philosophies

Assuming hallucination isn’t solvable, at least not with today’s LLMs, is that a bad thing? Berns doesn’t think so, actually. Hallucinating models could fuel creativity by acting as a “co-creative partner,” he posits — giving outputs that might not be wholly factual but that contain some useful threads to tug on nonetheless. Creative uses of hallucination can produce outcomes or combinations of ideas that might not occur to most people.

“‘Hallucinations’ are a problem if generated statements are factually incorrect or violate any general human, social or specific cultural values — in scenarios where a person relies on the LLM to be an expert,” he said. “But in creative or artistic tasks, the ability to come up with unexpected outputs can be valuable. A human recipient might be surprised by a response to a query and therefore be pushed into a certain direction of thoughts which might lead to the novel connection of ideas.”

Ha argued that the LLMs of today are being held to an unreasonable standard — humans “hallucinate” too, after all, when we misremember or otherwise misrepresent the truth. But with LLMs, he believes we experience a cognitive dissonance because the models produce outputs that look good on the surface but contain errors upon further inspection.

“Simply put, LLMs, just like any AI techniques, are imperfect and thus make mistakes,” he said. “Traditionally, we’re OK with AI systems making mistakes since we expect and accept imperfections. But it’s more nuanced when LLMs make mistakes.”

Indeed, the answer may well not lie in how generative AI models work at the technical level. Insofar as there’s a “solution” to hallucination today, treating models’ predictions with a skeptical eye seems to be the best approach.
