
Snap upgrades its AR Shopping features with real-time pricing, more product details


A picture taken on October 1, 2019 in Lille shows the logo of mobile app Snapchat displayed on a tablet.
Image Credits: Denis Charlet/AFP / Getty Images

Snapchat is upgrading its AR shopping experience today with updates both to the Shopping Lenses inside its social app and to the analytics shared with Snap’s brand and retail partners. For consumers, the AR shopping features become more practical to use: new Lens Product Cards, which appear as users virtually try on products, now display key product information updated in real time, such as current pricing and color details, alongside product descriptions and unique links to make purchases.

For brands, the updates are focused on providing more data about how their AR shopping features are performing.

These analytics are now provided in real time because the AR Shopping Lenses are linked directly to a company’s product catalog, Snap explains. This allows brands and marketers to gain more immediate insights that can inform their R&D plans and show which types of products resonate best with Snapchat’s younger, Gen Z and millennial audience. Brands can then incorporate that data when making decisions about ad targeting and future product development. Snap notes the data can also help brands optimize the delivery of their Shopping Lenses to the users most likely to make a purchase.

Before today’s public launch, Snap beta-tested the new AR Lenses with a handful of brands, including Ulta Beauty and MAC Cosmetics. Using these “catalog-powered” Shopping Lenses, as they’re called, Ulta reported $6 million in incremental purchases on Snapchat and over 30 million product try-ons within a two-week period. MAC, meanwhile, saw 1.3 million try-ons at a cost of 0.31 cents per product trial and reported a 17x lift in purchases among women, a 2.4x lift in brand awareness and a 9x lift in purchase intent.

When shopping with AR, users can swipe and use gestures to move through different product options to see what makeup, clothing, jewelry and accessories would look like on them. The experience is meant to offer a tech-powered alternative to trying on items in a retailer’s store, serving both the growing number of online-only brands without a brick-and-mortar footprint and traditional retailers looking to reach a younger audience that frequently shops online.


Snap is also making the AR Shopping Lenses easier to create. Although the company launched a free web creation tool for brands building AR experiences last year, that tool didn’t include easy-to-use AR shopping templates. Now, Snap promises an AR Shopping Lens can be created in just two minutes with a few clicks. This is made possible through an update to the Lens Web Builder, though initially only for beauty brands; Snap will extend similar updates to other shopping categories in the months ahead. In the meantime, any brand can continue to use Snap’s Lens Studio to build its AR Lenses.

The company has been heavily investing in its AR shopping business in recent months, with the goal of making shopping a deeper part of the overall Snapchat experience. Last year, it upgraded its computer vision-powered Scan product, which lets users “scan” an item, like an article of clothing, with their phone’s camera to learn more about it. The feature can also scan ingredients and connect users to recipes that use them. Snap also spent $124 million to acquire Fit Analytics, a startup that helps consumers find properly fitting clothes and shoes from online retailers. And last October, the company announced the launch of a creative studio, Arcadia, which helps commercialize its AR tech further by helping brands develop AR experiences that can be used across platforms, including the web and apps outside of Snapchat itself.


AR try-on seems to resonate with Snap’s younger demographic, according to data Snap has shared. In a beta test with 30+ brands, Snapchat users virtually tried on products over 250 million times. AR try-on has also led to 2.4x higher purchase intent and a +14% sales lift over video spend, the data indicates. Including non-shopping AR features, over 200 million Snapchat users engage with AR on the app daily.

“Augmented reality is changing the way we shop, play, and learn, and transforming how businesses tell their stories and sell their products,” said Snap’s chief business officer, Jeremi Gorman, in a statement. “Starting today, our revamped AR Shopping Lenses will mean a more engaging experience for our Snapchat community, and enable a faster, easier way to build Lenses for businesses — directly linking Lenses to existing product catalogs for real-time analytics and R&D about which products resonate with Gen Z and Millennial audiences.”

The AR improvements could also give Snap a way to better compete with larger social media companies, which have focused less heavily on AR-enabled shopping in favor of other experiences, like live video shopping, influencer marketing powered by brand deals, and in-app shopping experiences that let users transact from within the social app they’re using, like Shop on Instagram. Snap also hopes its Lens targeting features will help boost its ad revenues, which have been impacted by Apple’s new privacy features and caused the company to miss Wall Street’s revenue expectations last quarter. Snap brought in $1.07 billion in revenue in Q3 2021, versus the $1.10 billion forecast, sending the stock down 22% after earnings were reported. The company will report its Q4 and full-year 2021 earnings on February 3, 2022.

