Not Just Another Discussion About Whether AI Is Going To Destroy Us

An AI roundtable discussion is a staple of the tech journalism circus — usually framed with a preamble about dystopian threats to human existence from the inexorable rise of ‘superintelligent machines’. Just add a movie still from The Terminator.

What typically results from such a set-up is a tangled back and forth of viewpoints and anecdotes, where a coherent definition of AI fails to be an emergent property of the assembled learned minds. Nor is there clear consensus about what AI might mean for the future of humanity. After all, how can even the most well-intentioned groupthink predict the outcome of an unknown unknown?

None of this is surprising, given we humans don’t even know what human intelligence is. Thinking ourselves inside the metallic shell of ‘machine consciousness’ — whatever that might mean — is about as fruitful as trying to imagine what our thoughts might be if our own intelligence were embodied inside the flesh of a pear, rather than the fleshy forms we do inhabit. Or if our consciousness existed fleetingly in liquid paint during the moment of animation by an artist’s intention. Philosophers can philosophize about the implications of AI, sure (and of course they do). But only an idiot would claim to know.

The panel discussion I attended this week at London’s hyper-trendy startup co-working hub Second Home trod plenty of this familiar ground. So I won’t rehash the usual arguments. Rather, and as some might argue acting more like a machine — in the sense of an algorithm trained to surface novelty from a mixed data dump — I’ve compiled a list (below) of some of the more interesting points that did emerge as panelists were asked to consider whether AI is “a force for good” (or not).

I’ve also listed some promising avenues for (narrow) AI mentioned by participants: areas where they see potential for learning algorithms to solve problems humans might otherwise find tricky to crack, and where those use-cases can be broadly considered socially beneficial, in an effort to steer the AI narrative away from bloodthirsty robots.

The last list is a summary of more grounded perceived threats/risks, i.e. those that don’t focus on the stereotypical doomsday scenario of future ‘superintelligent machines’ judging humans a waste of planetary space, but instead on the risks associated with the kind of narrow but proliferating — in terms of applications and usage — ‘AI’ we do already have.

One more point before switching to bullets and soundbites: the most concise description of (narrow) AI that emerged during the hour-long discussion came from Tractable founder Alexandre Dalyac, who summed it up thus: “Algorithms compared to humans can usually tend to solve scale, speed or accuracy issues.”

So there you have it: AI, it’s all about scale, speed and accuracy. Not turning humans into liquid soap. But if you do want to concern yourself with where machine intelligence is headed, then thinking about how algorithmic scale, speed and accuracy — applied over more and more aspects of human lives — will impact and shape the societies we live in is certainly a question worth pondering.

Panelists

  • Calum Chace, author of ‘Surviving AI’
  • Dan Crow, CTO, Songkick
  • Alexandre Dalyac, founder, Tractable
  • Dr Yasemin J Erden, Lecturer/Programme Director Philosophy, St Mary’s University
  • Martina King, CEO, Featurespace
  • Ben Medlock, founder, SwiftKey
  • Martin Mignot, Principal, Index Ventures
  • Jun Wang, Reader, Computer Science, UCL & Co-founder, CTO, MediaGamma

Discussion points of above-average interest:

  • Should AI research be open source by default? How can we be expected to control and regulate the social impact of increasingly clever computing when the largest entities involved in AI fields like deep learning are commercial companies such as Google that do not divulge their proprietary algorithms?

“If the future of humanity is at stake should they be forced to open source it? Or how can we control what’s happening there?” asked Mignot. “I don’t think anyone knows what Google is doing. That’s one of the issues, that’s one of the worries we should have.”

A movement to open source machine learning-related research could also be a way to lessen public fears about the future impact of AI technologies, added Wang.

  • Will it be the case that the more generalist our machines become, the less capable and/or reliable for a particular task — and arguably, therefore, the less safe overall? Is that perhaps the trade-off when you try to make machines think outside a (narrow) box?

“One of the interesting philosophical questions is whether your ability to do a particular task with absolute focus — and reduce the false positives, increase the safety — actually requires a narrow form of intelligence. And at the point where our machines start to become more general, and sort of inherently more human-like, whether necessarily that introduces a reduction in safety,” posited Medlock.

“I can imagine that the kind of flexibility of the human brain, the plasticity to respond to so many different scenarios requires a reduction in specific abilities to do particular tasks. I think that’s going to be one of the interesting things that will emerge as we start to develop AGI [artificial general intelligence] — whether actually it becomes useful for a very different set of reasons to narrow AI.”

“I don’t think artificial intelligence in itself is what I would be concerned about, it’s more artificial stupidity. It’s the stupidity that comes with either a narrow focus, or a misunderstanding of the broader issues,” added Erden. “The difficulty in trying to establish all the little details that make up the context in which individual specific tasks happen.

“Once you try to ask individual programs to do very big things, and they need therefore to take into account lots of issues, then it becomes much more difficult.”

  • Should core questions of safety or wider ethical worries about machine-powered decision-making usurping human judgment be society’s biggest concern as learning algorithms proliferate? Can you even separate safety from ethics at that fuzzy juncture?

“The guys who built the Web put it up and out there and didn’t really think about the ethics at all. Didn’t think about putting those tools into the hands of people who would use those tools negatively, instead of positively. And I think we can take those lessons and apply them to new technologies,” argued King.

“A good example for the Web would be people believing that the laws of California were appropriate to everywhere around the world. And they aren’t, and they weren’t, and actually it took those Web companies a huge amount of time — and it was peer group pressure, lobby groups and so on — in order to get those organizations to behave actually appropriately for the laws of those individual countries they were operating in.”

“I’m a bit puzzled that people talk about AI ethics,” added Chace. “Machines may well be moral beings at some point but at the moment it’s not about ethics, it’s about safety. It’s about making sure that as AIs get more and more powerful that they are safe for humans. They don’t care about us, they don’t care about anything. They don’t know they exist. But they can do us damage, or they can provide benefits, and we need to think about how to make them safe.”

  • Will society benefit from the increased efficiency of learning algorithms or will wealth be increasingly concentrated in the hands of (increasingly) few individuals?

“I’d suggest… whenever AI comes in, even potentially to replace labour, it’s genuinely because it’s an efficiency gain — so creating more. But then perhaps the way to think about it is how this efficiency gain is distributed. So if it’s concentrated in the hands of the owners perhaps that tends to be not of good value to society. But if the benefits accrue to society at large that’s potentially better,” said Dalyac.

“For example something that we’re working on is automating a task in the visual assessment of insurance claims. And the benefit of that would be to lower insurance premiums for car insurance… so this would be a case where the people who are usually employed to do this would find themselves out of work, so that might involve maybe 400 people in this country. But as a result you have 50 million people that benefit.”

  • Should something akin to the ‘philosophy of AI’ be taught in schools? Given we’re encouraging kids to learn coding, what about contextualizing that knowledge by also teaching them to think about the social impacts of increasingly clever and powerful decision-making machines?

“Should it be a discipline at school where students would learn about AI?” asked Mignot. “Could it be interesting to have classes around one step further? Once you know how to code a computer in a binary language, what does it mean to create an intelligent device?

“I think that would help a lot with the discussion because today coders don’t really understand the limitations and the potential of technology. What does it mean to be a machine that can learn by itself and make decisions? It’s so abstract as a concept that I think for people who are not working in the field it’s either too opaque to even consider, or really scary.”

  • Is the umbrella term ‘artificial intelligence’ actually an impediment to public awareness and understanding of myriad developments and (potential) benefits associated with algorithms that can adapt based on data input?

“We’re asking people to understand something that we’ve not really understood ourselves, or classified at least. So, when we’re talking about smartphones we’re not really talking about AI, we’re talking about some clever computing. We’re talking about some very interesting programming and the possibility that this programming can learn and adapt but in very, very simple ways,” said Erden.

“When you describe it like that to people I don’t think they’re either scared by it or fail to understand it. But if you describe this under the umbrella term of AI you promise too much, you disappoint a lot and you also confuse people… What’s wrong with saying ‘clever computing’? What’s wrong with saying ‘clever programming’? What’s wrong with saying ‘computational intelligence’?”

  • Is IBM’s ‘cognitive computing’ tech, Watson — purportedly branching out from playing Jeopardy to applying its algorithmic chops to very different fields, such as predictive medicine — more a case of clever marketing than an example of an increasingly broad AI?

“I would say that if you take a look at the papers you’ll realize that Watson might just be pure branding. All it is is a very large team of researchers that have done really well on a single task, and have said ‘hey let’s call it Watson’, and let’s make it this ‘super intelligent being’, so the next time they ask us to do something intelligent we’ll get the same researchers, or similar researchers to work on something else,” argued Dalyac.

“We’re looking at automating the assessment of damage on cars, and there’s a paper by IBM Watson in 2012 which, to be honest, uses very, very old school AI — and AI that I can say for sure has nothing to do with winning at Jeopardy,” he added.

Promising applications for learning algorithms cited during the roundtable:

  • Helping websites weed out algorithmically generated ad clicks (the irony!)
  • Analyzing gamblers’ patterns of play to identify problematic tipping points
  • Monitoring skin lesions more effectively by using change point detection (a minimal sketch of the idea follows this list)
  • Creating social AIs that can interact with autistic kids to reduce feelings of isolation
  • Tackling the complexity of language translation by using statistical approaches to improve machine translation
  • Putting sensors on surgical tools to model (and replicate) the perfect operation
  • Using data from motion sensors to predict when a frail elderly person might be at the risk of falling by analyzing behavioral patterns
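
On the change point detection idea mentioned above: the underlying technique is simple enough to sketch in a few lines. The snippet below is a minimal, purely illustrative CUSUM-style detector in Python, a hypothetical example that assumes a one-dimensional series of repeated measurements (say, periodic lesion-size readings); it is not a description of any system the panelists actually build or use.

```python
# Purely illustrative CUSUM-style change point detector (hypothetical example;
# not the approach any panelist described in detail). Assumes a 1-D series of
# repeated measurements, e.g. periodic lesion-size readings.

def detect_change(series, threshold, drift=0.0):
    """Return the index at which a sustained upward shift is first flagged,
    or None if the accumulated evidence never exceeds the threshold."""
    if not series:
        return None
    baseline = series[0]  # treat the first reading as the reference level
    cusum = 0.0
    for i, value in enumerate(series):
        # Accumulate deviations above the baseline (minus an optional drift
        # allowance), clamping at zero so ordinary noise doesn't build up.
        cusum = max(0.0, cusum + (value - baseline - drift))
        if cusum > threshold:
            return i
    return None

if __name__ == "__main__":
    # Stable readings, then a persistent upward shift from index 6 onwards.
    readings = [2.0, 2.1, 1.9, 2.0, 2.2, 2.1, 3.5, 3.6, 3.8, 4.0]
    print(detect_change(readings, threshold=3.0))  # prints 7
```

The threshold trades off sensitivity against false alarms, which echoes the scale, speed and accuracy framing Dalyac described; a real deployment would estimate the baseline and noise statistically rather than from a single reading.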

Some near-term concerns about the proliferation of machine learning plus big data:  

  • How to regulate and control increasingly powerful and sophisticated data processing across borders where different laws might apply?
  • How to protect user privacy from predictive algorithms and ensure informed consent of data processing?

“Over the last decade or so the use of data has largely been something that happens below the surface. And users’ data gets passed around and fed to targeting networks and I think, and to some degree I hope, there will be a change over the next ten years or so where partly people become aware that the data that is collected, that characterizes the things they do, their likes and interests, that that’s an asset that actually is theirs to own and control,” argued Medlock.

“Moving towards consumers thinking about data a little bit like a currency in the same way that they use and own their own money, and that they’re able to make decisions about where they share that data… Moving the processing, manipulation and storage of data from the murky depths, to something that people are at least aware of and can make decisions about intentionally.”

  • How to respond to the accumulation of massive amounts of data — and the predictive insights that data can yield — in the hands of an increasingly powerful handful of technology companies?

“That will continue to be a challenge, for governments, for industry, for academia. We’re not going to solve that one quickly but there are a lot of people thinking hard about that,” said Crow. “If you look at some of the regulatory stuff that’s happening, certainly in the EU and starting to happen in the US as well, I think you are seeing people at least understanding there’s a concern there now.

“And that this is an area where government needs to play an effective role. I don’t think we know exactly what that looks like yet — I don’t think we’ve finished that discussion. But at least a discussion is happening now and I think that’s really important.”

  • How to avoid algorithmic efficiencies destroying jobs and concentrating more and more wealth in the hands of fewer and fewer individuals?

A survey of U.K. users conducted by SwiftKey ahead of the panel discussion found that fear of jobs being made redundant by advances in AI was of concern to a slim majority (52 per cent) of respondents. Meanwhile just over a third (36 per cent) said they want to see AI having a bigger role in society — suggesting the remaining nearly two-thirds would prefer checks and balances on the proliferation of machine learning technologies.

Bottom line: if increasing algorithmic efficiency is destroying more jobs than it’s creating, then massive social restructuring is inevitable. So for human brains to ask questions about who benefits from such accelerated change, and what kind of society people want to live in, is surely just prudent due diligence — not to mention the very definition of (biological) intelligence.
