
The Information Age is over; welcome to the Experience Age

Mike Wadhera, Contributor

Mike Wadhera is the founder of Teleport.

Twenty-five years after the introduction of the World Wide Web, the Information Age is coming to an end. Thanks to mobile screens and Internet everywhere, we’re now entering what I call the “Experience Age.”

When was the last time you updated your Facebook status? Maybe you no longer do? It’s been reported that original status updates by Facebook’s 1.6 billion users are down 21 percent.

The status box is an icon of the Information Age, a period dominated by desktop computers and Google’s mission to organize the world’s information. The icons of the Experience Age look very different; they are born from micro-computers, mobile sensors and high-speed connectivity.

The death of the status box is a small part of a larger shift away from information and toward experience. What’s driving this shift? In short, the changing context of our online interactions, shaped by our connected devices.

You are not your profile

To illustrate how this is playing out, think of Facebook and Snapchat.

Facebook is an Information Age native. Along with other social networks of its generation, Facebook was built on a principle of the desktop era: accumulation.

Accumulation manifests in a digital profile where my identity is the sum of all the information I’ve saved: text, photos, videos, web pages. (Evan Spiegel first explored this idea in a 2015 YouTube video titled “What is Snapchat?”) In the Information Age we represented ourselves with this digital profile.

But mobile has changed how we view digital identity. With a connected camera televising our lives in the moment, accumulated information takes a back seat to continual self-expression. The “virtual self” is becoming less relevant. I may be the result of everything I’ve done, but I’m not the accumulation of it. Snapchat is native to this new reality.

Many people think Snapchat is all about secrecy, but the real innovation of Snapchat’s ephemeral messages isn’t that they self-destruct. It’s that they force us to break the accumulation habit we brought over from desktop computing. The result is that the profile is no longer the center of the social universe. In the Experience Age you are not a profile. You are simply you.

Show, don’t tell

The central idea of the Experience Age is this: I’ll show you my point of view, you give me your attention. I hear you yelling, “That’s always been the story of social!” And it has. But what’s changed is that the stories we tell each other now begin and end visually, making the narrative more literal than ever.

In the Information Age, communication started with information. On Facebook you type into a status box, add metadata such as your location and select from a hierarchy of emotions to describe how you feel. This information-first approach is also visible in Facebook’s feedback mechanisms: six pre-selected reactions and threaded commenting.

By contrast, Snapchat always starts with the camera. Feedback is sent passively: swiping up on your story reveals which friends watched your snaps. In the Experience Age, the primary input is visual and the dominant feedback is attention.

Today the feedback loop connecting sharing and attention starts and ends on mobile; in the future, it could start with contact lenses and end in VR, for example.

The experience stack

This reality frames Facebook’s recent investments, which bring live video, 360-degree cameras and VR together into a single product portfolio. But Facebook isn’t the only tech giant looking ahead and seeing how all these technologies might line up. By now you’ve likely heard of Magic Leap, the super-stealth AR startup with a $4.5 billion valuation, funded by the likes of Google and Alibaba.

A global arms race is underway, and is beginning to create a layering of technologies I like to call the experience stack.

[Image: The experience stack.]

At the bottom is Layer 0, the real world. The full stack is in service of capturing and communicating real-world moments. Reality is its foundation.

As you move up, the layers transition from physical to logical. At the top is the application layer, made up of products like Snapchat Live and Periscope. Tomorrow’s products will be even more immersive. Take, for example, the relaunch of Sean Parker’s Airtime and Magic Leap’s “A New Morning.”

The experience stack will drive new products to market faster, as each layer can grow independently while benefiting from advancements in the layers below. An example of this phenomenon is high-speed 3G enabling Apple’s App Store; together, they advanced mobile as a whole. The best products of the Experience Age will be timely new applications that leverage step-change advancements in the bottom layers. Given that some layers are still nascent, tremendous opportunity lies ahead.

Our online and offline identities are converging, the stories we tell each other now start and end visually, and investments at every layer of a new stack are accelerating the development of experience-driven products. Taken together, these trends have cracked open the door for a new golden age of technology.

It’s an exciting time to be building.
