OpenAI’s Altman and other AI giants back warning of advanced AI as ‘extinction’ risk

Comment

Make way for yet another headline-grabbing AI policy intervention: Hundreds of AI scientists, academics, tech CEOs and public figures — from OpenAI CEO Sam Altman and DeepMind CEO Demis Hassabis to veteran AI computer scientist Geoffrey Hinton, MIT’s Max Tegmark, Skype co-founder Jaan Tallinn, the musician Grimes and populist podcaster Sam Harris, to name a few — have added their names to a statement urging global attention to existential AI risk.

The statement, which is being hosted on the website of a San Francisco-based, privately funded not-for-profit called the Center for AI Safety (CAIS), seeks to equate AI risk with the existential harms posed by nuclear apocalypse and calls for policymakers to focus their attention on mitigating what its backers claim is ‘doomsday’, extinction-level AI risk.

Here’s their (intentionally brief) statement in full:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

Per a short explainer on CAIS’ website, the statement has been kept “succinct” because those behind it want to avoid their message about “some of advanced AI’s most severe risks” being drowned out by discussion of other “important and urgent risks from AI”, which, they nonetheless imply, are getting in the way of discussion about extinction-level AI risk.

However, we have actually heard these self-same concerns voiced loudly, and multiple times, in recent months, as AI hype has surged off the back of expanded access to generative AI tools like OpenAI’s ChatGPT and DALL-E — leading to a surfeit of headline-grabbing discussion about the risk of “superintelligent” killer AIs. (Such as this one, from earlier this month, in which statement signatory Hinton warned of the “existential threat” of AI taking control. Or this one, from just last week, in which Altman called for regulation to prevent AI destroying humanity.)

There was also the open letter signed by Elon Musk (and scores of others) back in March which called for a six-month pause on development of AI models more powerful than OpenAI’s GPT-4 to allow time for shared safety protocols to be devised and applied to advanced AI — warning over risks posed by “ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control”.

So, in recent months, there has actually been a barrage of heavily publicized warnings over AI risks that don’t exist yet.

This drumbeat of hysterical headlines has arguably distracted attention from deeper scrutiny of existing harms. Such as the tools’ free use of copyrighted data to train AI systems without permission or consent (or payment); or the systematic scraping of online personal data in violation of people’s privacy; or the lack of transparency from AI giants vis-a-vis the data used to train these tools. Or, indeed, baked-in flaws like disinformation (“hallucination”) and risks like bias (automated discrimination). Not to mention AI-driven spam! And the environmental toll of the energy expended to train these AI monsters.

It’s certainly notable that after a meeting last week between the UK prime minister and a number of major AI execs, including Altman and Hassabis, the government appears to be shifting tack on AI regulation — with a sudden keen interest in existential risk, per the Guardian’s reporting.

Talk of existential AI risk also distracts attention from problems related to market structure and dominance, as Jenna Burrell, director of research at Data & Society, pointed out in this recent Columbia Journalism Review article reviewing media coverage of ChatGPT — where she argued we need to move away from focusing on red herrings like AI’s potential “sentience” to covering how AI is further concentrating wealth and power.

So of course there are clear commercial motivations for AI giants to want to route regulatory attention into the far-flung theoretical future, with talk of an AI-driven doomsday — as a tactic to draw lawmakers’ minds away from more fundamental competition and antitrust considerations in the here and now. (And data exploitation as a tool to concentrate market power is nothing new.)

Certainly it speaks volumes about existing AI power structures that tech execs at AI giants including OpenAI, DeepMind, Stability AI and Anthropic are so happy to band together and chatter when it comes to publicly amplifying talk of existential AI risk. And how much more reticent they are to get together to discuss the harms their tools can be seen causing right now.

OpenAI was a notable non-signatory to the aforementioned (Musk-signed) open letter, but a number of its employees are backing the CAIS-hosted statement (while Musk himself apparently is not). So the latest statement appears to offer an (unofficial) commercially self-serving reply by OpenAI (et al.) to Musk’s earlier attempt to hijack the existential AI risk narrative in his own interests (which no longer favor OpenAI leading the AI charge).

Instead of calling for a development pause, which would risk freezing OpenAI’s lead in the generative AI field, the statement lobbies policymakers to focus on risk mitigation — doing so while OpenAI is simultaneously crowdfunding efforts to shape “democratic processes for steering AI”, as Altman put it. So the company is actively positioning itself (and applying its investors’ wealth) to influence the shape of any future mitigation guardrails, alongside ongoing in-person lobbying efforts targeting international regulators. Altman also recently made public threats that OpenAI’s tool could be pulled out of Europe if draft EU AI rules aren’t watered down to exclude its tech.

Elsewhere, some signatories of the earlier letter have simply been happy to double up on another publicity opportunity — inking their names to both (hi Tristan Harris!).

But who is CAIS? There’s limited public information about the organization hosting this message. However, by its own admission, it is certainly involved in lobbying policymakers. Its website says its mission is “to reduce societal-scale risks from AI” and claims it’s dedicated to encouraging research and field-building to this end, including funding research — as well as having a stated policy advocacy role.

An FAQ on the website offers limited information about who is financially backing it (saying it’s funded by private donations), while, in answer to a question asking whether CAIS is an independent organization, it offers a brief claim to be “serving the public interest”:

CAIS is a nonprofit organization entirely supported by private contributions. Our policies and research directions are not determined by individual donors, ensuring that our focus remains on serving the public interest.

We’ve reached out to CAIS with questions.

In a Twitter thread accompanying the launch of the statement, CAIS’ director, Dan Hendrycks, expands on the aforementioned statement explainer — naming “systemic bias, misinformation, malicious use, cyberattacks, and weaponization” as examples of “important and urgent risks from AI… not just the risk of extinction”.

“These are all important risks that need to be addressed,” he also suggests, downplaying concerns that policymakers have limited bandwidth to address AI harms by arguing: “Societies can manage multiple risks at once; it’s not ‘either/or’ but ‘yes/and.’ From a risk management perspective, just as it would be reckless to exclusively prioritize present harms, it would also be reckless to ignore them as well.”

The thread also credits David Krueger, an assistant professor of Computer Science at the University of Cambridge, with coming up with the idea to have a single-sentence statement about AI risk and “jointly” helping with its development.
