
Beyond Siri: The AI revolution coming from the web


Eric Poindessault

Contributor

Eric Poindessault is founder and CEO at Biggerpan.

From HAL in 2001: A Space Odyssey to Samantha in Spike Jonze’s Her, we have been obsessed for decades with the idea that computers powered by artificial intelligence will one day be able to interact with people, follow spoken instructions and make decisions independently, as a human would.

Since Siri hit our screens on the iPhone 4S, Google, Facebook, Amazon, Microsoft and Baidu have entered the playing field, too. But while each new generation brings its share of interesting features and use cases, these assistants are still a far cry from the AI we see in movies. It’s hard to imagine anyone engaging in a romantic relationship with Siri, or NASA putting Alexa in control of a spacecraft, just yet.

Movies have set the bar pretty high, and we are still waiting for such ubiquitous voice-controlled AI assistants to enter the real world. However, the era of an intelligent assistant that can really help us in our day-to-day lives is certainly closer than we may think.

Passive assistants that rely on user input

When talking about intelligent personal assistants, people tend to think of Siri, Cortana or Amazon Echo. More tech-savvy users might also have heard of Viv, from the founders of Siri; Facebook’s recent contribution, M; and other messaging-based AI tools like Operator or Magic. But while these new tools have been getting a lot of hype lately, when it comes to their use cases, most of them are still stuck in the realm of glorified Q&As.

Over the last couple of years there have been a number of signals that tech forerunners are serious about taking AI to the next level. Google acquired the London-based deep learning research group DeepMind for half a billion dollars in 2014, IBM acquired AlchemyAPI last year and Apple recently made two AI acquisitions in just four days.

These acquisitions came after recent breakthroughs in GPU-accelerated deep learning made it possible to achieve exceptional improvements in pattern recognition, with major applications in speech recognition and computer vision. According to Tim Tuttle, founder and CEO of Expect Labs, within the next two years machines should be able to follow spoken instructions even better than humans do.

While the ability to communicate through our senses is arguably at the heart of human intelligence, it is only one ingredient in the recipe. Professor Stuart Russell of U.C. Berkeley identifies six major capabilities a computer needs to pass the Total Turing test: natural language processing, knowledge representation, automated reasoning, machine learning, computer vision and robotics.

In simpler terms, these are the building blocks of artificial general intelligence, and speech recognition is just one of them. Speaking can be a convenient alternative to tapping on a keyboard when your hands are busy, but voice is just a medium, and it doesn’t always shine as an input method. How many times have you started asking Siri something and ended up typing the query into Google yourself?

Context is the hard part

The team behind Viv believes it can build a better personal assistant by using advanced deep-learning techniques to make machines teach themselves how to solve problems. While they’re understandably keeping their secret sauce under wraps, the information disclosed so far suggests that building use cases still requires some human guidance: in the same way a person can learn to solve a problem from clues given by someone who already knows the answer, the team guides the AI toward finding its own methods of solving problems.

However, the comparison with humans stops here because, unlike machines, we have the ability to autonomously build on our knowledge by contextualizing problems and finding original solutions to them. We naturally “connect the dots” to find answers and make decisions, while current AI implementations often fail to associate problems with something essential and very hard to formalize: the context that surrounds them.

Context is what gives AI the ability to make more intelligent decisions rather than rely solely on well-defined input instructions. As such, it links the past, present and future to solve sophisticated problems. Professor Patrick Brézillon of the University of Paris argues: “In Artificial Intelligence, the lack of explicit representation of context is one of the reasons of the failures of many Knowledge-Based Systems.” There is indeed a lot to grasp.
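
As a toy illustration (entirely hypothetical, and not drawn from any of the systems mentioned above), here is how a single explicit context signal can change an assistant’s answer to the very same query:

```python
# Toy sketch: the same query resolves differently depending on an explicit
# context signal. The word senses and category labels are invented for
# illustration; no real assistant works from a hand-written table like this.
SENSES = {
    "jaguar": {
        "automotive": "Jaguar, the British carmaker",
        "wildlife": "the jaguar, a big cat of the Americas",
    },
    "python": {
        "programming": "Python, the programming language",
        "wildlife": "the python, a family of snakes",
    },
}

def answer(query, context):
    """Resolve an ambiguous query using the current context label."""
    senses = SENSES.get(query.lower(), {})
    # Fall back to an arbitrary known sense when the context gives no hint.
    return senses.get(context, next(iter(senses.values()), "unknown"))

print(answer("jaguar", "automotive"))  # Jaguar, the British carmaker
print(answer("jaguar", "wildlife"))    # the jaguar, a big cat of the Americas
```

Of course, a real system would have to infer that context signal from past behavior rather than receive it as a ready-made label, which is exactly where the difficulty lies.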

Teaching computers about context in human behavior is a colossal task; people are not always predictable, and the variety of situations is essentially endless. At a personal level, using machine-learning techniques to understand how someone handles social interactions and decision making would involve countless hours of user input. It could perhaps be done by observing you 24 hours a day but, since mind reading doesn’t exist yet, you would also have to voice your reasoning out loud so the machine could learn to think like you.

Harnessing the power of the Internet

Machine learning requires a lot of data. For natural language processing, the data is generally collected in a corpus, a large structured set of texts used to train the AI. To give you an idea of just how large these data sets can be, when Watson beat the human champions of Jeopardy!, it had previously ingested the entire Wikipedia database.

What is interesting about the IBM Watson story is that the corpus it was fed required no prior structuring, which means Watson was able to use the data without human supervision. Now what if M received similar training, with the further goal of being able to converse and perform elaborate tasks? What would the model be, and where could we find the appropriate data?

The Internet contains millions of hours of talks, videos, books and data: everything a neural network would need to build intelligence. You want to teach a machine about love? Feed it Romeo and Juliet and a stack of romance novels. About business? Plug it into The Wall Street Journal’s news feed. DeepMind recently gave us a taste of what can be done, teaching language to its AI with a database of more than 300,000 articles from CNN and the Daily Mail.
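
To make the mechanics concrete, here is a minimal sketch of what “feeding” text to a statistical model can look like, in this case a simple bigram model built from plain-text files. The corpus directory and file layout are hypothetical placeholders, and modern systems use neural networks rather than raw counts:

```python
# A minimal sketch of "training on a corpus": build a bigram model from
# plain-text files. The corpus/ directory is a hypothetical placeholder,
# e.g. article dumps saved as .txt files.
from collections import Counter, defaultdict
from pathlib import Path

def tokenize(text):
    """Crude whitespace tokenizer; real pipelines do far more here."""
    return text.lower().split()

bigram_counts = defaultdict(Counter)
for path in Path("corpus").glob("*.txt"):
    tokens = tokenize(path.read_text(encoding="utf-8"))
    for prev_word, word in zip(tokens, tokens[1:]):
        bigram_counts[prev_word][word] += 1

def most_likely_next(word):
    """Return the most frequent follower of `word` in the corpus."""
    followers = bigram_counts.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

print(most_likely_next("artificial"))  # likely "intelligence" on an AI corpus
```

A bigram counter is obviously a toy next to a deep network, but the workflow is the same at any scale: gather raw text, tokenize it and let the model accumulate statistics.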

The data is right there, and for now we only seem to be scratching the surface. But another wave of progress in machine learning will soon allow us to make even more sense of the exabytes of information the Web contains, and that advance will mark a huge step in the evolution toward an artificial superintelligence.

Aside from mining the oceans of unstructured data available online, a much nearer opportunity to make our lives easier is already at hand. Every day humans make hundreds of decisions online, and every time we click on a link, that click is recorded by advertising and analytics companies across multiple websites. Imagine if that information were instead used by an AI dedicated to understanding your browsing preferences: one that collects the relevant signals as you navigate, combines them with those of millions of other users and mines the data for patterns.

Not only would it be able to offer you a more personalized and contextualized experience of the Web, it would also understand your intent better and anticipate your needs before you even express them. And voilà: an AI personal assistant that could really lighten the load, using technology that already exists.
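
As a rough sketch of the idea (the category labels and sample click history below are invented for illustration), a first cut at such pattern finding could model a user’s click stream as a simple Markov chain over page categories and use it to anticipate where they are headed next:

```python
# Hedged sketch: treat a user's click stream as a first-order Markov chain
# over page categories, then predict the likely next category. The sample
# history and category names are hypothetical.
from collections import Counter, defaultdict

history = ["news", "finance", "finance", "shopping", "news", "finance"]

transitions = defaultdict(Counter)
for current, following in zip(history, history[1:]):
    transitions[current][following] += 1

def anticipate(current_category):
    """Return the category this user most often visits next."""
    followers = transitions.get(current_category)
    return followers.most_common(1)[0][0] if followers else None

print(anticipate("news"))  # "finance" for this sample history
```

A real system would work with millions of users, far richer features and a proper learning model, but the principle is the same: the trail of clicks we already generate contains enough signal to anticipate intent.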

Researchers predict we’ll have to wait at least another decade before we can truly experience ubiquitous human-like intelligence. Meanwhile, the Internet is arriving at a stage where it is gathering all the necessary ingredients for a huge leap in AI, and it already hosts plenty of bots, scrapers, analytics tools and APIs harvesting our online data that we could take advantage of.

So let’s set aside the images from the movies, look at the real progress happening before our eyes and realize that our greatest chance of creating an AI that truly makes our lives easier lies in harnessing something we use every day.
