The next stop on the road to revolution is ambient intelligence
Gary Grossman

Contributor

Gary Grossman is a futurist and public relations and communications marketing executive with Edelman.

It’s easy to see a rainbow when it’s in the distance, but more difficult to discern when you are in its midst. Though it’s still early days, we’re now in the midst of the fourth industrial revolution. Klaus Schwab, the founder of the World Economic Forum, says the impending “transformation will be unlike anything humankind has experienced before.”

Digital technologies now surround us, with many people having multiple devices for business and personal use. When combined with the Internet of Things and its assortment of embedded sensors and connected devices in the home, the enterprise and the world at large, we will have created a digital intelligence network that transcends all that has gone before. Some have referred to this as a “third wave” of computing, where technology gains the ability to sense, predict and respond to our needs and is being integrated into our natural behaviors.

Regardless of definition, we are witnessing an explosion of digital technologies and intelligence. Digital progress is advancing across multiple technologies and seemingly speeding up at an exponential rate. The next stop on the road to the fourth revolution is ambient computing or ambient intelligence, where we continuously interface with the always-on, interconnected world of things. The Internet, then, becomes an Internet of experience, a place where we will dialog with ambient intelligence, or digital intelligence everywhere.

Ambient is generally defined as “surrounding on all sides.” Ambient intelligence is born of digital interconnectedness to produce information and services that enhance our lives. This is enabled by the dynamic combination of mobile computing platforms, the cloud and big data, neural networks and deep learning using graphics processing units (GPUs) to produce artificial intelligence (AI).

An example travel scenario outlined in InformationWeek, set 10-15 years in the future, describes arriving in San Francisco. Upon exiting the plane, a traveler will get a message that says, “Welcome to San Francisco. Please go to the curb after picking up your bag.” At the curb, a self-driving car will meet them and, once they are inside, confirm that the destination is the Marriott hotel.

A recent story notes that computing is on its way to becoming a sea of background data processing that bears little or no relation to the familiar world of PCs and servers. “We will talk, and the world will answer.” We have more than a hint of this with current implementations of Siri, Cortana and Echo. Using natural language processing and AI, these devices understand what we are asking and supply us with useful information.

In the case of Amazon’s Echo, it can do a lot more than answer a question: it can keep track of a shopping list and place orders on Amazon.com, book an Uber ride, control a thermostat and other household appliances, tell you transit schedules, start a seven-minute workout routine, read recipes and do math. Most recently, it can even call a plumber and share medical advice. How long before we see homes and businesses with an Echo-like device in every room?

Futurist and WIRED founding executive editor Kevin Kelly believes that one day in the not too distant future, digital intelligence will flow like electricity and be seen as a utility, or “IQ as a service.” Enterprising people will be able to buy AI much as we do electricity, using only what we need. Google, Amazon, IBM and Microsoft have started providing this capability, making it easy to access portions of their AI software.

In Kelly’s view, a winning formula for the next wave of startups is to take something that already exists and add AI to turn it into something more. Self-driving cars are likely the best example of this to date. In citing ambient computing as one of the top technology trends, Deloitte says that products now often embed intelligence as a competitive necessity.

Much of AI is built upon the voluminous amount of data — so-called big data — being collected through search, apps and the Internet of Things. These data provide the opportunity for neural networks to learn what people do, how they respond and their interests, providing the basis for deep learning-derived personalized information and services based on increasingly educated guesses within any given context.
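The loop described above — behavioral data feeding a model that makes increasingly educated guesses about a user's interests — can be sketched in miniature. In this hypothetical example, a simple topic-frequency count stands in for the deep-learning preference model; the function and data names are illustrative, not any real product's API.

```python
from collections import Counter

def recommend(interaction_log, catalog, top_n=3):
    """Rank catalog items by how often the user engaged with their topic.

    A real system would use a trained neural network over far richer
    signals; here a plain topic-frequency count stands in for the
    learned preference model.
    """
    topic_weights = Counter(event["topic"] for event in interaction_log)
    return sorted(
        catalog,
        key=lambda item: topic_weights[item["topic"]],
        reverse=True,
    )[:top_n]

# Hypothetical interaction history and content catalog.
log = [
    {"topic": "travel"}, {"topic": "travel"}, {"topic": "cooking"},
]
catalog = [
    {"title": "SF hotel deals", "topic": "travel"},
    {"title": "Pasta basics", "topic": "cooking"},
    {"title": "Tax tips", "topic": "finance"},
]
print(recommend(log, catalog, top_n=2))
```

The more a user interacts, the sharper the ranking becomes — the same dynamic, at toy scale, that lets deep learning personalize information and services within a given context.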

In Shots of Awe, philosopher and technologist Jason Silva says AI is simply the outsourcing of cognition to machines, amplifying the most powerful force in the universe, which is intelligence. He adds that there’s no reason to fear this; it’s just evolution.

Another emerging ambient intelligence application is bots, including those recently announced by Facebook. An example is a new personalized news bot created by TechCrunch that uses machine learning to serve up recommended stories from the site. Another article gives the example of a food-ordering bot that will take an order, acknowledge it and pass the order on to an e-commerce system, along with a user’s credentials to approve payment.

A basic implementation would behave and operate much like interactive voice response services over the phone. The article notes that “more complex bots will take advantage of the explosion of machine learning-powered AI systems” to help refine understanding of user context. Accurately parsing the language and appearing to understand the context a person is in will make a bot seem more natural, more like interacting with an actual person, and help it become an intelligent, ambient part of day-to-day life.
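The food-ordering flow described above — take an order, acknowledge it, and pass it to an e-commerce system along with the user's credentials — can be sketched as follows. Everything here is a hypothetical stand-in: `FakeStore`, `handle_message`, and the keyword matching (which substitutes for machine-learning language parsing) are illustrative, not any real platform's API.

```python
class FakeStore:
    """Stand-in e-commerce backend that records the orders it receives."""
    def __init__(self):
        self.orders = []

    def place_order(self, item, credentials):
        # A real backend would verify credentials and approve payment here.
        self.orders.append((item, credentials))

def handle_message(message, user, store):
    """Bot flow from the article: take the order, acknowledge it, and
    pass it to the e-commerce system with the user's credentials."""
    # Simple keyword matching stands in for ML-powered language parsing.
    menu = {"pizza", "salad", "soup"}
    item = next((word for word in message.lower().split() if word in menu), None)
    if item is None:
        return "Sorry, I didn't catch what you'd like to order."
    store.place_order(item=item, credentials=user["credentials"])
    return f"Got it: one {item} is on its way."

store = FakeStore()
reply = handle_message("I'd like a pizza please", {"credentials": "token-123"}, store)
print(reply)         # Got it: one pizza is on its way.
print(store.orders)  # [('pizza', 'token-123')]
```

Swapping the keyword matcher for a trained language model is exactly where the “explosion of machine learning-powered AI systems” the article cites would plug in.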

Google is combining voice search and Google Now, its predictive service that shows users information before they actually go searching for it, in the hope of creating an omniscient assistant, ready to step in and fulfill any request, including those you haven’t yet thought about. This is being positioned as a universal digital assistant. A recent story describes a Google Bluetooth-enabled lapel pin prototype equipped with a microphone and activated through a simple tap, similar to the communicator on Star Trek.

What is clear is that our AI-powered assistants will increasingly manage our digital activities and handle ever more complex questions and situations. We don’t know what devices are coming, whether lapel pins, augmented reality visors or something else, but we know they’re coming. We are fully within the rainbow of digitally driven change. Will these make life better or somehow easier? We will definitely be more guided by the technology, relegating mundane tasks to ambient intelligence.

Connecting the technologies and crossing the boundaries necessary to provide seamless, transparent and persistent experiences in context will take time to realize. This is all a part of the ambient intelligence future where technology fades into the fabric of daily life, becoming both more pervasive and less overt, present wherever you are and always accessible. It’s still early days, but we’re already living in it, and the speed of advance appears to be accelerating. The revolution won’t be far behind.
