Only humans, not computers, can learn or predict

Joab Rosenberg, Contributor

Joab Rosenberg is the former deputy head analyst for the Israeli government and CEO of Epistema.

Nature magazine announced in late January that a computer designed by Google’s DeepMind had defeated a human master at the ancient Chinese board game Go. This impressive achievement once again raised expectations of a future in which computers possess artificial intelligence, with major media outlets worldwide touting that prospect.

One of the major questions raised by DeepMind’s achievement is: what are the outer limits, if any, of intelligent machines? In November of last year, Dr. Kira Radinsky, a computer scientist and machine-learning expert, argued in the Israeli newspaper Ha’aretz that computers will be able to accurately predict the outcome of the Israeli-Palestinian conflict. Feed the computer enough data on a number of “parallel universes,” she wrote, and it will be able to observe the implications of each of those universes and find patterns in them, allowing it to predict the future of the conflict.

While this argument sounds plausible in theory, computers are not “creative,” do not “learn” and cannot “predict.” Computers can only be tasked with making inductive predictions based on past experience. They can then search databases for complex correlations and present them as “actionable insights.”

No matter which side of the debate one falls on, DeepMind’s achievement requires us to reexamine more precisely what learning and prediction actually mean.

There are two main obstacles that prevent machines from learning and predicting the way humans do. First, as mentioned above, because computers can only be tasked with making inductive predictions based on past experience, the future they predict will always be a continuation of the past behavior of the actors they are examining.

This means the predictive power of computers works nicely in cases where reality does not change dramatically. It will fail, however, wherever the future holds dramatic, unpredictable change.
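To make this concrete, here is a minimal sketch in Python, using hypothetical, synthetic data, of a purely inductive model: fitted on a stable past trend, it keeps extrapolating that trend and misses a sudden reversal entirely.

```python
# A sketch with hypothetical data: a model fitted only on the past
# extrapolates that past and misses a sudden, human-driven change of course.
import numpy as np

rng = np.random.default_rng(0)

# Periods 0-19: a stable upward trend (the only "experience" the model has).
past_t = np.arange(20)
past_y = 2.0 * past_t + rng.normal(0, 1.0, size=20)

# Fit a simple linear model on the past only.
slope, intercept = np.polyfit(past_t, past_y, deg=1)

# Periods 20-24: reality reverses direction abruptly (a structural break).
future_t = np.arange(20, 25)
future_y = past_y[-1] - 5.0 * (future_t - 19)

forecast = slope * future_t + intercept  # the model keeps extrapolating the old trend
print("forecast:", np.round(forecast, 1))
print("actual:  ", np.round(future_y, 1))
print("mean absolute error after the break:",
      np.round(np.abs(forecast - future_y).mean(), 1))
```

The specific numbers are illustrative only; the point is that nothing in the training data signals the break, so no amount of fitting on the past can anticipate it.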

Second, it is well known that correlation does not equal causation. While computers may be very good at finding correlations with high levels of statistical confidence, they cannot judge whether those correlations are meaningful or ridiculous.

For example, the website Spurious Correlations collects such findings, including a correlation (at a very high level of confidence) between U.S. government spending on science and the number of suicides by hanging. The more data computers collect, the more spurious correlations they can find. Only human agents, because they have the ability to understand and grasp meaning, can distinguish meaningful correlations from meaningless ones.
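The mechanics are easy to reproduce. The sketch below uses hypothetical, synthetic series (stand-ins for the two real datasets) to show that any two unrelated quantities that merely drift in the same direction over time will score a near-perfect correlation coefficient:

```python
# A sketch with synthetic data: two causally unrelated series that both happen
# to drift upward over the same years show a very high correlation.
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1999, 2010)

# Hypothetical stand-ins: the series share only a common upward drift.
spending = 18 + 0.9 * (years - 1999) + rng.normal(0, 0.3, size=years.size)
suicides = 5400 + 120 * (years - 1999) + rng.normal(0, 40, size=years.size)

r = np.corrcoef(spending, suicides)[0, 1]
print(f"Pearson correlation: {r:.3f}")  # prints a value close to 1, despite no causal link
```

A human reader immediately recognizes that the two series have nothing to do with each other; the statistic alone cannot.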

Additionally, humans, unlike computers, have a unique capacity not only to learn from the past but also to invent a new future: the ability to imagine a future that does not yet exist. Technical inventions demonstrate humanity’s capacity to create a future that is intrinsically different from historical experience. Only humans could have dreamt up the complex technologies we now take for granted. Computers, on the other hand, do not possess any capacity to imagine a different future.

Because humans have this inherent capacity to imagine and create, future changes in markets or geopolitical conditions (which are mostly driven by human actions) cannot be predicted simply from past events.

When factored into something as complex as the Israeli-Palestinian conflict, the war against ISIS, the futures markets or the financial industry, the human element can swing the outcome significantly, and a computer’s predictions will fail to identify the new situation. If one wants to predict future human behavior, human analysts must be deployed to study the data and draw the right conclusions. Computers alone will not suffice.

For example, Abu Mazen could give up his demand for the right of return for the refugees. Such a move would run contrary to public opinion and represent a complete betrayal of all his previous statements and beliefs, and hence would not be predictable from something like Facebook sentiment analysis. Yet because he has the free will to do so, he can reverse direction, effectively changing the course of the entire discussion.

This brings to mind Ariel Sharon’s reversal of his longstanding insistence that he would not withdraw Jewish settlers from the Gaza Strip, something he ended up doing in the summer of 2005. Machines do not have the capacity to predict such radical deviations from what is expected to occur, whereas human analysts can lay out different scenarios and the arguments for and against various outcomes.

The machine-versus-human debate has divided big data analytics experts into two camps. The first is led by machine learning and predictive analytics experts who argue for a future in which computers will possess real artificial intelligence; the second argues that only human analysts can reliably draw conclusions from the vast amounts of data collected and stored by humanity.

The most prominent company promoting the latter view is Palantir, a $25 billion company founded by PayPal alumni. Palantir develops big data analytics software whose main purpose is to assist human analysts in studying big data. Similarly, in his book Zero to One, venture capitalist Peter Thiel writes that “while computers can find patterns that elude humans, they don’t know how to compare sources or how to interpret complex behaviors. Actionable insights can only come from a human analyst.”

The author of this article stands firmly within the latter camp, arguing that human capabilities far transcend anything computers can achieve.
