
OpenAI’s Altman and other AI giants back warning of advanced AI as ‘extinction’ risk

Comment

Image Credits: TechCrunch

Make way for yet another headline-grabbing AI policy intervention: Hundreds of AI scientists, academics, tech CEOs and public figures — from OpenAI CEO Sam Altman and DeepMind CEO Demis Hassabis to veteran AI computer scientist Geoffrey Hinton, MIT’s Max Tegmark and Skype co-founder Jaan Tallinn, to the musician Grimes and popular podcaster Sam Harris, to name a few — have added their names to a statement urging global attention on existential AI risk.

The statement, which is being hosted on the website of a San Francisco-based, privately-funded not-for-profit called the Center for AI Safety (CAIS), seeks to equate AI risk with the existential harms posed by nuclear apocalypse and calls for policymakers to focus their attention on mitigating what they claim is ‘doomsday’ extinction-level AI risk.

Here’s their (intentionally brief) statement in full:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

Per a short explainer on CAIS’ website, the statement has been kept “succinct” because those behind it are concerned that their message about “some of advanced AI’s most severe risks” could be drowned out by discussion of other “important and urgent risks from AI” — risks which, they nonetheless imply, are getting in the way of discussion about extinction-level AI risk.


However, we have actually heard these self-same concerns voiced loudly, and multiple times, in recent months, as AI hype has surged off the back of expanded access to generative AI tools like OpenAI’s ChatGPT and DALL-E — leading to a surfeit of headline-grabbing discussion about the risk of “superintelligent” killer AIs. (Such as this one, from earlier this month, where statement-signatory Hinton warned of the “existential threat” of AI taking control. Or this one, from just last week, where Altman called for regulation to prevent AI destroying humanity.)

There was also the open letter signed by Elon Musk (and scores of others) back in March which called for a six-month pause on development of AI models more powerful than OpenAI’s GPT-4 to allow time for shared safety protocols to be devised and applied to advanced AI — warning over risks posed by “ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control”.

So, in recent months, there has actually been a barrage of heavily publicized warnings over AI risks that don’t exist yet.

This drumbeat of hysterical headlines has arguably distracted attention from deeper scrutiny of existing harms. Such as the tools’ free use of copyrighted data to train AI systems without permission or consent (or payment); or the systematic scraping of online personal data in violation of people’s privacy; or the lack of transparency from AI giants vis-à-vis the data used to train these tools. Or, indeed, baked-in flaws like disinformation (“hallucination”) and risks like bias (automated discrimination). Not to mention AI-driven spam! And the environmental toll of the energy expended to train these AI monsters.

It’s certainly notable that after a meeting last week between the UK prime minister and a number of major AI execs, including Altman and Hassabis, the government appears to be shifting tack on AI regulation — with a sudden keen interest in existential risk, per the Guardian’s reporting.

Talk of existential AI risk also distracts attention from problems related to market structure and dominance, as Jenna Burrell, director of research at Data & Society, pointed out in this recent Columbia Journalism Review article reviewing media coverage of ChatGPT — where she argued we need to move away from focusing on red herrings like AI’s potential “sentience” to covering how AI is further concentrating wealth and power.

So of course there are clear commercial motivations for AI giants to want to route regulatory attention into the far-flung theoretical future, with talk of an AI-driven doomsday — as a tactic to draw lawmakers’ minds away from more fundamental competition and antitrust considerations in the here and now. (And data exploitation as a tool to concentrate market power is nothing new.)

Certainly it speaks volumes about existing AI power structures that tech execs at AI giants including OpenAI, DeepMind, Stability AI and Anthropic are so happy to band together and publicly amplify talk of existential AI risk — and how much more reticent they are to get together to discuss the harms their tools can be seen causing right now.

OpenAI was a notable non-signatory to the aforementioned (Musk-signed) open letter, but a number of its employees are backing the CAIS-hosted statement (while Musk apparently is not). So the latest statement appears to offer an (unofficial) commercially self-serving reply by OpenAI (et al.) to Musk’s earlier attempt to hijack the existential AI risk narrative in his own interests (which no longer favor OpenAI leading the AI charge).

Instead of the statement calling for a development pause, which would risk freezing OpenAI’s lead in the generative AI field, it lobbies policymakers to focus on risk mitigation — doing so while OpenAI is simultaneously crowdfunding efforts to shape “democratic processes for steering AI”, as Altman put it. So the company is actively positioning itself (and applying its investors’ wealth) to influence the shape of any future mitigation guardrails, alongside ongoing in-person lobbying efforts targeting international regulators. Altman also recently made public threats that OpenAI’s tool could be pulled out of Europe if draft EU AI rules aren’t watered down to exclude its tech.

Elsewhere, some signatories of the earlier letter have simply been happy to double up on another publicity opportunity — inking their name to both (hi Tristan Harris!).

But who is CAIS? There’s limited public information about the organization hosting this message. However it is certainly involved in lobbying policymakers, by its own admission. Its website says its mission is “to reduce societal-scale risks from AI” and claims it’s dedicated to encouraging research and field-building to this end, including funding research — as well as having a stated policy advocacy role.

An FAQ on the website offers limited information about who is financially backing it (saying it’s funded by private donations). In answer to the question “is CAIS an independent organization”, it offers a brief claim to be “serving the public interest”:

CAIS is a nonprofit organization entirely supported by private contributions. Our policies and research directions are not determined by individual donors, ensuring that our focus remains on serving the public interest.

We’ve reached out to CAIS with questions.

In a Twitter thread accompanying the launch of the statement, CAIS’ director, Dan Hendrycks, expands on the aforementioned statement explainer — naming “systemic bias, misinformation, malicious use, cyberattacks, and weaponization” as examples of “important and urgent risks from AI… not just the risk of extinction”.

“These are all important risks that need to be addressed,” he also suggests, downplaying concerns that policymakers have limited bandwidth to address AI harms by arguing: “Societies can manage multiple risks at once; it’s not ‘either/or’ but ‘yes/and.’ From a risk management perspective, just as it would be reckless to exclusively prioritize present harms, it would also be reckless to ignore them as well.”

The thread also credits David Krueger, an assistant professor of Computer Science at the University of Cambridge, with coming up with the idea to have a single-sentence statement about AI risk and “jointly” helping with its development.

