
Is ChatGPT a ‘virus that has been released into the wild’?


[Image: illustration of a human brain. Image Credits: Just_Super]

More than three years ago, this editor sat down with Sam Altman for a small event in San Francisco soon after he’d left his role as the president of Y Combinator to become CEO of the AI company he co-founded in 2015 with Elon Musk and others, OpenAI.

At the time, Altman described OpenAI’s potential in language that sounded outlandish to some. Altman said, for example, that the opportunity with artificial general intelligence — machine intelligence that can solve problems as well as a human — is so great that if OpenAI managed to crack it, the outfit could “maybe capture the light cone of all future value in the universe.” He said that the company was “going to have to not release research” because it was so powerful. Asked if OpenAI was guilty of fear-mongering — Musk has repeatedly called for all organizations developing AI to be regulated — Altman talked about the dangers of not thinking about “societal consequences” when “you’re building something on an exponential curve.”

The audience laughed at various points of the conversation, not certain how seriously to take Altman. No one is laughing now, however. While machines are not yet as intelligent as people, the tech that OpenAI has since released is taking many aback (including Musk), with some critics fearful that it could be our undoing, especially with more sophisticated tech reportedly coming soon.

Indeed, though heavy users insist it’s not so smart, the ChatGPT model that OpenAI made available to the general public last week is so capable of answering questions like a person that professionals across a range of industries are trying to process the implications. Educators, for example, wonder how they’ll be able to distinguish original writing from the algorithmically generated essays they are bound to receive — and that can evade anti-plagiarism software.

Paul Kedrosky isn’t an educator per se. He’s an economist, venture capitalist and MIT fellow who calls himself a “frustrated normal with a penchant for thinking about risks and unintended consequences in complex systems.” But he is among those who are suddenly worried about our collective future, tweeting yesterday: “[S]hame on OpenAI for launching this pocket nuclear bomb without restrictions into an unprepared society.” Wrote Kedrosky, “I obviously feel ChatGPT (and its ilk) should be withdrawn immediately. And, if ever re-introduced, only with tight restrictions.”

We talked with him yesterday about some of his concerns, and why he thinks OpenAI is driving what he believes is the “most disruptive change the U.S. economy has seen in 100 years” — and not in a good way.

Our chat has been edited for length and clarity.

TC: ChatGPT came out last Wednesday. What triggered your reaction on Twitter?

PK: I’ve played with these conversational user interfaces and AI services in the past and this obviously is a huge leap beyond. And what troubled me here in particular is the casual brutality of it, with massive consequences for a host of different activities. It’s not just the obvious ones, like high school essay writing, but across pretty much any domain where there’s a grammar — [meaning] an organized way of expressing yourself. That could be software engineering, high school essays, legal documents. All of them are easily eaten by this voracious beast and spit back out again without compensation to whatever was used for training it.

I heard from a colleague at UCLA who told me they have no idea what to do with essays at the end of the current term, where they’re getting hundreds per course and thousands per department, because they have no idea anymore what’s fake and what’s not. So to do this so casually — as someone said to me earlier today — is reminiscent of the so-called [ethical] white hat hacker who finds a bug in a widely used product, then informs the developer before the broader public knows so the developer can patch their product and we don’t have mass devastation and power grids going down. This is the opposite, where a virus has been released into the wild with no concern for the consequences.

It does feel like it could eat up the world.

Some might say, ‘Well, did you feel the same way when automation arrived in auto plants and auto workers were put out of work? Because this is a kind of broader phenomenon.’ But this is very different. These specific learning technologies are self-catalyzing; they’re learning from the requests. So robots in a manufacturing plant, while disruptive and creating incredible economic consequences for the people working there, didn’t then turn around and start absorbing everything going on inside the factory, moving across sector by sector, whereas that’s exactly what we not only can expect but should expect.

Musk left OpenAI partly over disagreements about the company’s development, he said in 2019, and he has been talking about AI as an existential threat for a long time. But people carped that he didn’t know what he was talking about. Now we’re confronting this powerful tech and it’s not clear who steps in to address it.

I think it’s going to start out in a bunch of places at once, most of which will look really clumsy, and people will [then] sneer because that’s what technologists do. But too bad, because we’ve walked ourselves into this by creating something with such consequentiality. So in the same way that the FTC demanded that people running blogs years ago [make clear they] have affiliate links and make money from them, I think at a trivial level, people are going to be forced to make disclosures that ‘We wrote none of this. This is all machine generated.’ [Editor’s note: OpenAI says it’s working on a way to “watermark” AI-generated content, along with other “provenance techniques.”]
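To make the watermarking idea concrete: OpenAI has not disclosed how its “provenance techniques” would work, but published proposals for statistical watermarks bias a language model toward a pseudo-random “green list” of tokens keyed to a secret seed, so a detector holding that seed can later test whether a text is suspiciously green-heavy. The sketch below is a toy illustration of the detection side only; the vocabulary, seed and scoring are invented for the example and are not OpenAI’s method.

```python
import hashlib

# Toy "green list" watermark detector, in the spirit of published statistical
# watermarking schemes. Everything here (seed, vocabulary, scoring) is
# hypothetical and for illustration only -- it is NOT OpenAI's system.

SECRET_SEED = "demo-seed"  # hypothetical secret shared by generator and detector

def is_green(prev_token: str, token: str) -> bool:
    """Pseudo-randomly assign roughly half of all (prev_token, token) pairs
    to the 'green list', keyed on the secret seed."""
    digest = hashlib.sha256(f"{SECRET_SEED}:{prev_token}:{token}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(tokens: list[str]) -> float:
    """Fraction of tokens that fall on the green list. Text generated with a
    green-list bias would score well above the ~0.5 expected by chance."""
    if len(tokens) < 2:
        return 0.0
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)

if __name__ == "__main__":
    sample = "the dog runs quickly while the cat sleeps slowly".split()
    print(f"green fraction: {green_fraction(sample):.2f}")  # ~0.5 for unwatermarked text
```

In a real deployment, generation itself would be modified to up-weight green-list tokens, and detection would use a proper statistical test (for example, a z-score on the green fraction over a long passage) rather than eyeballing a single number.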

I also think we’re going to see new energy for the ongoing lawsuit against Microsoft and OpenAI over copyright infringement in the context of these in-training machine learning algorithms. I think there’s going to be a broader DMCA issue here with respect to this service.

And I think there’s the potential for a [massive] lawsuit and settlement eventually with respect to the consequences of the services, which, you know, will probably take too long and not help enough people, but I don’t see how we don’t end up in [this place] with respect to these technologies.

What’s the thinking at MIT?

Andy McAfee and his group over there are more sanguine and have a more orthodox view: that anytime we see disruption, other opportunities get created, people are mobile, they move from place to place and from occupation to occupation, and we shouldn’t be so hidebound that we think this particular evolution of technology is the one around which we can’t mutate and migrate. And I think that’s broadly true.

But the lesson of the last five years in particular has been these changes can take a long time. Free trade, for example, is one of those incredibly disruptive, economy-wide experiences, and we all told ourselves as economists looking at this that the economy will adapt, and people in general will benefit from lower prices. What no one anticipated was that someone would organize all the angry people and elect Donald Trump. So there’s this idea that we can anticipate and predict what the consequences will be, but [we can’t].

You talked about high school and college essay writing. One of our kids has already asked — theoretically! — if it would be plagiarism to use ChatGPT to author a paper.

The purpose of writing an essay is to prove that you can think, so this short circuits the process and defeats the purpose. Again, in terms of consequences and externalities, if we can’t let people have homework assignments because we no longer know whether they’re cheating or not, that means that everything has to happen in the classroom and must be supervised. There can’t be anything we take home. More stuff must be done orally, and what does that mean? It means school just became much more expensive, much more artisanal, much smaller and at the exact time that we’re trying to do the opposite. The consequences for higher education are devastating in terms of actually delivering a service anymore.

What do you think of the idea of universal basic income, or enabling everyone to participate in the gains from AI?

I’m much less strong a proponent than I was pre-COVID. The reason is that COVID, in a sense, was an experiment with a universal basic income. We paid people to stay home, and they came up with QAnon. So I’m really nervous about what happens whenever people don’t have to hop in a car, drive somewhere, do a job they hate and come home again, because the devil finds work for idle hands, and there’ll be a lot of idle hands and a lot of deviltry.
