Not Just Another Discussion About Whether AI Is Going To Destroy Us

An AI roundtable discussion is a staple of the tech journalism circus — usually framed with a preamble about dystopian threats to human existence from the inexorable rise of ‘superintelligent machines’. Just add a movie still from The Terminator.

What typically results from such a set-up is a tangled back and forth of viewpoints and anecdotes, where a coherent definition of AI fails to be an emergent property of the assembled learned minds. Nor is there clear consensus about what AI might mean for the future of humanity. After all, how can even the most well-intentioned groupthink predict the outcome of an unknown unknown?

None of this is surprising, given we humans don’t even know what human intelligence is. Thinking ourselves inside the metallic shell of ‘machine consciousness’ — whatever that might mean — is about as fruitful as trying to imagine what our thoughts might be if our own intelligence were embodied inside the flesh of a pear, rather than the fleshy forms we do inhabit. Or if our consciousness existed fleetingly in liquid paint during the moment of animation by an artist’s intention. Philosophers can philosophize about the implications of AI, sure (and of course they do). But only an idiot would claim to know.

The panel discussion I attended this week at London’s hyper-trendy startup co-working hub Second Home trod plenty of this familiar ground. So I won’t rehash the usual arguments. Rather — acting, as some might argue, more like a machine, in the sense of an algorithm trained to surface novelty from a mixed data dump — I’ve compiled a list (below) of some of the more interesting points that did emerge as panelists were asked to consider whether AI is “a force for good” (or not).

I’ve also listed some promising avenues for (narrow) AI mentioned by participants: areas where they see potential for learning algorithms to solve problems humans might otherwise find tricky to crack, and where those use cases can be broadly considered socially beneficial — in an effort to steer the AI narrative away from bloodthirsty robots.

The last list summarizes more grounded perceived threats/risks — not the stereotypical doomsday scenario of future ‘superintelligent machines’ judging humans a waste of planetary space, but risks associated with the kind of narrow yet proliferating (in applications and usage) ‘AI’ we do already have.

One more point before switching to bullets and soundbites: the most concise description of (narrow) AI that emerged during the hour-long discussion came from Tractable founder Alexandre Dalyac, who summed it up thus: “Algorithms compared to humans can usually tend to solve scale, speed or accuracy issues.”

So there you have it: AI, it’s all about scale, speed and accuracy. Not turning humans into liquid soap. But if you do want to concern yourself with where machine intelligence is headed, then how algorithmic scale, speed and accuracy — applied over more and more aspects of human lives — will impact and shape the societies we live in is certainly a question worth pondering.

Panelists

  • Calum Chace, author of ‘Surviving AI’
  • Dan Crow, CTO, Songkick
  • Alexandre Dalyac, founder, Tractable
  • Dr Yasemin J Erden, Lecturer and Programme Director, Philosophy, St Mary’s University
  • Martina King, CEO, Featurespace
  • Ben Medlock, founder, SwiftKey
  • Martin Mignot, Principal, Index Ventures
  • Jun Wang, Reader, Computer Science, UCL & Co-founder, CTO, MediaGamma

Discussion points of above-average interest:

  • Should AI research be open source by default? How can we be expected to control and regulate the social impact of increasingly clever computing when the largest entities involved in AI fields like deep learning are commercial companies such as Google that do not divulge their proprietary algorithms?

“If the future of humanity is at stake should they be forced to open source it? Or how can we control what’s happening there?” asked Mignot. “I don’t think anyone knows what Google is doing. That’s one of the issues, that’s one of the worries we should have.”

A movement to open source machine learning-related research could also be a way to lessen public fears about the future impact of AI technologies, added Wang.

  • Will it be the case that the more generalist our machines become, the less capable and/or reliable they become at any particular task — and arguably, therefore, the less safe overall? Is that perhaps the trade-off when you try to make machines think outside a (narrow) box?

“One of the interesting philosophical questions is whether your ability to do a particular task with absolute focus — and reduce the false positives, increase the safety — actually requires a narrow form of intelligence. And at the point where our machines start to become more general, and sort of inherently more human-like, whether necessarily that introduces a reduction in safety,” posited Medlock.

“I can imagine that the kind of flexibility of the human brain, the plasticity to respond to so many different scenarios requires a reduction in specific abilities to do particular tasks. I think that’s going to be one of the interesting things that will emerge as we start to develop AGI [artificial general intelligence] — whether actually it becomes useful for a very different set of reasons to narrow AI.”

“I don’t think artificial intelligence in itself is what I would be concerned about, it’s more artificial stupidity. It’s the stupidity that comes with either a narrow focus, or a misunderstanding of the broader issues,” added Erden. “The difficulty in trying to establish all the little details that make up the context in which individual specific tasks happen.

“Once you try to ask individual programs to do very big things, and they need therefore to take into account lots of issues, then it becomes much more difficult.”

  • Should core questions of safety or wider ethical worries about machine-powered decision-making usurping human judgment be society’s biggest concern as learning algorithms proliferate? Can you even separate safety from ethics at that fuzzy juncture?

“The guys who built the Web put it up and out there and didn’t really think about the ethics at all. Didn’t think about putting those tools into the hands of people who would use those tools negatively, instead of positively. And I think we can take those lessons and apply them to new technologies,” argued King.

“A good example for the Web would be people believing that the laws of California were appropriate to everywhere around the world. And they aren’t, and they weren’t, and actually it took those Web companies a huge amount of time — and it was peer group pressure, lobby groups and so on — in order to get those organizations to behave actually appropriately for the laws of those individual countries they were operating in.”

“I’m a bit puzzled that people talk about AI ethics,” added Chace. “Machines may well be moral beings at some point but at the moment it’s not about ethics, it’s about safety. It’s about making sure that as AIs get more and more powerful they are safe for humans. They don’t care about us, they don’t care about anything. They don’t know they exist. But they can do us damage, or they can provide benefits, and we need to be thinking about how to make them safe.”

  • Will society benefit from the increased efficiency of learning algorithms or will wealth be increasingly concentrated in the hands of (increasingly) few individuals?

“I’d suggest… whenever AI comes in, even potentially to replace labour, it’s genuinely because it’s an efficiency gain — so creating more. But then perhaps the way to think about it is how this efficiency gain is distributed. So if it’s concentrated in the hands of the owners perhaps that tends to be not of good value to society. But if the benefits accrue to society at large that’s potentially better,” said Dalyac.

“For example something that we’re working on is automating a task in the visual assessment of insurance claims. And the benefit of that would be to lower insurance premiums for car insurance… so this would be a case where the people who are usually employed to do this would find themselves out of work, so that might involve maybe 400 people in this country. But as a result you have 50 million people that benefit.”

  • Should something akin to the ‘philosophy of AI’ be taught in schools? Given we’re encouraging kids to learn coding, what about contextualizing that knowledge by also teaching them to think about the social impacts of increasingly clever and powerful decision-making machines?

“Should it be a discipline at school where students would learn about AI?” asked Mignot. “Could it be interesting to have classes that go one step further? Once you know how to code a computer in a binary language, what does it mean to create an intelligent device?

“I think that would help a lot with the discussion because today coders don’t really understand the limitations and the potential of technology. What does it mean to be a machine that can learn by itself and make decisions? It’s so abstract as a concept that I think for people who are not working in the field it’s either too opaque to even consider, or really scary.”

  • Is the umbrella term ‘artificial intelligence’ actually an impediment to public awareness and understanding of myriad developments and (potential) benefits associated with algorithms that can adapt based on data input?

“We’re asking people to understand something that we’ve not really understood ourselves, or classified at least. So, when we’re talking about smartphones we’re not really talking about AI, we’re talking about some clever computing. We’re talking about some very interesting programming and the possibility that this programming can learn and adapt but in very, very simple ways,” said Erden.

“When you describe it like that to people I don’t think they’re either scared by it or fail to understand it. But if you describe this under the umbrella term of AI you promise too much, you disappoint a lot and you also confuse people… What’s wrong with saying ‘clever computing’? What’s wrong with saying ‘clever programming’? What’s wrong with saying ‘computational intelligence’?”

  • Is IBM’s ‘cognitive computing’ tech, Watson — purportedly branching out from playing Jeopardy to applying its algorithmic chops to very different fields, such as predictive medicine — more a case of clever marketing than an example of an increasingly broad AI?

“I would say that if you take a look at the papers you’ll realize that Watson might just be pure branding. All it is is a very large team of researchers that have done really well on a single task, and have said ‘hey let’s call it Watson’, and let’s make it this ‘super intelligent being’, so the next time they ask us to do something intelligent we’ll get the same researchers, or similar researchers to work on something else,” argued Dalyac.

“We’re looking at automating the assessment of damage on cars, and there’s a paper by IBM Watson in 2012 which, to be honest, uses very, very old school AI — and AI that I can say for sure has nothing to do with winning at Jeopardy,” he added.

Promising applications for learning algorithms cited during the roundtable:

  • Helping websites weed out algorithmically generated ad clicks (the irony!)
  • Analyzing gamblers’ patterns of play to identify problematic tipping points
  • Monitoring skin lesions more effectively by using change point detection (see the sketch after this list)
  • Creating social AIs that can interact with autistic kids to reduce feelings of isolation
  • Tackling the complexity of language translation by using statistical approaches to improve machine translation
  • Putting sensors on surgical tools to model (and replicate) the perfect operation
  • Using data from motion sensors to predict when a frail elderly person might be at risk of falling by analyzing behavioral patterns
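
None of the panelists went into implementation detail, so the sketch below is a rough illustration only of the ‘change point detection’ idea mentioned for lesion monitoring: flagging when a noisy series of measurements shifts away from its baseline. It is a minimal Python example using a textbook CUSUM detector on simulated data; the readings, threshold values and names are illustrative assumptions, not anything described on the panel.

    import numpy as np

    def cusum_change_point(series, threshold, drift=0.0):
        """Return the first index where cumulative deviation from the
        running mean exceeds `threshold` (a basic CUSUM detector)."""
        mean = series[0]
        pos = neg = 0.0
        for i, x in enumerate(series[1:], start=1):
            mean += (x - mean) / (i + 1)              # online mean estimate
            pos = max(0.0, pos + (x - mean) - drift)  # accumulate upward shifts
            neg = max(0.0, neg - (x - mean) - drift)  # accumulate downward shifts
            if pos > threshold or neg > threshold:
                return i
        return None

    # Simulated weekly lesion-diameter readings (mm): stable, then growing.
    rng = np.random.default_rng(0)
    readings = np.concatenate([
        rng.normal(6.0, 0.2, 40),   # ~6 mm, measurement noise only
        rng.normal(7.5, 0.2, 20),   # upward shift after week 40
    ])

    print(cusum_change_point(readings, threshold=3.0, drift=0.1))
    # Prints an index shortly after 40, flagging the lesion for review.

A real system would estimate the baseline from a known-stable window and tune the threshold against labelled cases; off-the-shelf libraries (the Python ruptures package, for example) provide more robust detectors than this toy version.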

Some near-term concerns about the proliferation of machine learning plus big data:  

  • How to regulate and control increasingly powerful and sophisticated data processing across borders where different laws might apply?
  • How to protect user privacy from predictive algorithms and ensure informed consent of data processing?

“Over the last decade or so the use of data has largely been something that happens below the surface. And users’ data gets passed around and fed to targeting networks and I think, and to some degree I hope, there will be a change over the next ten years or so where partly people become aware that the data that is collected, that characterizes the things they do, their likes and interests, that that’s an asset that actually is theirs to own and control,” argued Medlock.

“Moving towards consumers thinking about data a little bit like a currency in the same way that they use and own their own money, and that they’re able to make decisions about where they share that data… Moving the processing, manipulation and storage of data from the murky depths, to something that people are at least aware of and can make decisions about intentionally.”

  • How to respond to the accumulation of massive amounts of data — and the predictive insights that data can yield — in the hands of an increasingly powerful handful of technology companies?

“That will continue to be a challenge, for governments, for industry, for academia. We’re not going to solve that one quickly but there are a lot of people thinking hard about that,” said Crow. “If you look at some of the regulatory stuff that’s happening, certainly in the EU and starting to happen in the US as well, I think you are seeing people at least understanding there’s a concern there now.

“And that this is an area where government needs to play an effective role. I don’t think we know exactly what that looks like yet — I don’t think we’ve finished that discussion. But at least a discussion is happening now and I think that’s really important.”

  • How to avoid algorithmic efficiencies destroying jobs and concentrating more and more wealth in the hands of fewer and fewer individuals?

A survey of U.K. users conducted by SwiftKey ahead of the panel discussion found that a majority (52 per cent) of respondents were concerned about jobs being made redundant by advances in AI, while just over a third (36 per cent) said they want to see AI having a bigger role in society — implying that nearly two-thirds would prefer checks and balances on the proliferation of machine learning technologies.

Bottom line: if increasing algorithmic efficiency is destroying more jobs than it’s creating, then massive social restructuring is inevitable. So humans asking who benefits from such accelerated change, and what kind of society people want to live in, is surely just prudent due diligence — not to mention the very definition of (biological) intelligence.
