Meta to expand labelling of AI-generated imagery in election-packed year


Meta is expanding the labelling of AI-generated imagery on its social media platforms, Facebook, Instagram and Threads, to cover some synthetic imagery that’s been created using rivals’ generative AI tools — at least where rivals are using what it couches as “industry standard indicators” that the content is AI-generated and which Meta is able to detect.

The development means the social media giant expects to be labelling more AI-generated imagery circulating on its platforms going forward. But it’s also not putting figures on any of this stuff — i.e. how much synthetic vs authentic content is routinely being pushed at users — so how significant a move this might be in the fight against AI-fuelled dis- and misinformation (in a massive year for elections, globally) is unclear.

Meta says it already detects and labels “photorealistic images” that have been created with its own “Imagine with Meta” generative AI tool, which launched last December. But, up to now, it hasn’t been labelling synthetic imagery created using other companies’ tools. So this is the (baby) step it’s announcing today.

“[W]e’ve been working with industry partners to align on common technical standards that signal when a piece of content has been created using AI,” wrote Meta president, Nick Clegg, in a blog post announcing the expansion of labelling. “Being able to detect these signals will make it possible for us to label AI-generated images that users post to Facebook, Instagram and Threads.”

Per Clegg, Meta will be rolling out expanded labelling “in the coming months”; and applying labels in “all languages supported by each app”.

A spokesman for Meta could not provide a more specific timeline, nor any details on which markets will get the extra labels first, when we asked for more. But Clegg’s post suggests the rollout will be gradual — “through the next year” — and could see Meta focusing on election calendars around the world to inform decisions about when and where to launch the expanded labelling in different markets.

“We’re taking this approach through the next year, during which a number of important elections are taking place around the world,” he wrote. “During this time, we expect to learn much more about how people are creating and sharing AI content, what sort of transparency people find most valuable, and how these technologies evolve. What we learn will inform industry best practices and our own approach going forward.”

Meta’s approach to labelling AI-generated imagery relies upon detection powered by both visible marks that are applied to synthetic images by its generative AI tech and “invisible watermarks” and metadata the tool also embeds in image files. It’s these same sorts of signals, embedded by rivals’ AI image-generating tools, that Meta’s detection tech will be looking for, per Clegg — who notes it’s been working with other AI companies, via forums like the Partnership on AI, with the aim of developing common standards and best practices for identifying AI-generated content.
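For a sense of what metadata-based detection can look like in practice, here is a minimal, hypothetical sketch. It assumes the generating tool has embedded IPTC’s `DigitalSourceType` vocabulary term for synthetic media (`trainedAlgorithmicMedia`) in the image’s XMP metadata — one of the “industry standard indicators” in circulation — and simply checks for its presence. Real detectors, including whatever Meta has built, also look for invisible watermarks, which this sketch does not attempt.

```python
# Minimal sketch: flag an image as AI-generated when its embedded XMP
# metadata carries IPTC's DigitalSourceType value for synthetic media.
# Note: this only proves the *presence* of a label; absence proves
# nothing, since metadata is easily stripped in transit.

# The IPTC vocabulary term some generators use to mark fully
# AI-generated media.
AI_SOURCE_MARKER = b"trainedAlgorithmicMedia"

def looks_ai_generated(image_bytes: bytes) -> bool:
    """Return True if the file's metadata declares it AI-generated."""
    return AI_SOURCE_MARKER in image_bytes

# Usage: a toy JPEG whose XMP packet declares the IPTC source type,
# versus one with no such declaration.
labelled_jpeg = (
    b"\xff\xd8\xff\xe1"  # JPEG SOI + APP1 marker
    b"<x:xmpmeta><Iptc4xmpExt:DigitalSourceType>"
    b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
    b"</Iptc4xmpExt:DigitalSourceType></x:xmpmeta>"
)
plain_jpeg = b"\xff\xd8\xff\xe0ordinary photo bytes"

print(looks_ai_generated(labelled_jpeg))  # True
print(looks_ai_generated(plain_jpeg))     # False
```

The caveat in the code comment is the crux of Clegg’s point below: cooperative labelling only catches content from tools that opt in, and only while the label survives.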

His blog post doesn’t spell out the extent of others’ efforts towards this end. But Clegg implies Meta will — in the coming 12 months — be able to detect AI-generated imagery from tools made by Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock, as well as its own AI image tools.

What about AI-generated video and audio?

When it comes to AI-generated videos and audio, Clegg suggests it’s generally still too challenging to detect these kinds of fakes — because marking and watermarking have yet to be adopted at enough scale for detection tools to do a good job. Additionally, such signals can be stripped out through editing and further media manipulation.

“[I]t’s not yet possible to identify all AI-generated content, and there are ways that people can strip out invisible markers. So we’re pursuing a range of options,” he wrote. “We’re working hard to develop classifiers that can help us to automatically detect AI-generated content, even if the content lacks invisible markers. At the same time, we’re looking for ways to make it more difficult to remove or alter invisible watermarks.

“For example, Meta’s AI Research lab FAIR recently shared research on an invisible watermarking technology we’re developing called Stable Signature. This integrates the watermarking mechanism directly into the image generation process for some types of image generators, which could be valuable for open source models so the watermarking can’t be disabled.”

Given the gap between what’s technically possible on the AI generation versus detection side, Meta is changing its policy to require users who post “photorealistic” AI-generated video or “realistic-sounding” audio to inform it that the content is synthetic — and Clegg says it’s reserving the right to label the content if it deems there is a “particularly high risk of materially deceiving the public on a matter of importance”.

If the user fails to make this manual disclosure they could face penalties — under Meta’s existing Community Standards. (So account suspensions, bans etc.)

“Our Community Standards apply to everyone, all around the world and to all types of content, including AI-generated content,” Meta’s spokesman told us when asked what type of sanctions users who fail to make a disclosure could face.

While Meta is keenly heaping attention on the risks around AI-generated fakes, it’s worth remembering that manipulation of digital media is nothing new and misleading people at scale doesn’t require fancy generative AI tools. Access to a social media account and basic media editing skills can be all it takes to make a fake that goes viral.

On this front, a recent decision by the Oversight Board, a Meta-established content review body, urged the tech giant to rewrite what it described as “incoherent” policies on faked videos. The case concerned Meta’s decision not to remove an edited video of President Biden with his granddaughter that had been manipulated to falsely suggest inappropriate touching. The Board specifically called out Meta’s focus on AI-generated content in this context.

“As it stands, the policy makes little sense,” wrote Oversight Board co-chair Michael McConnell. “It bans altered videos that show people saying things they do not say, but does not prohibit posts depicting an individual doing something they did not do. It only applies to video created through AI, but lets other fake content off the hook.”

Asked whether, in light of the Board’s review, Meta is looking at expanding its policies to ensure non-AI-related content manipulation risks are not being ignored, its spokesman declined to answer, saying only: “Our response to this decision will be shared on our transparency centre within the 60-day window.”

LLMs as a content moderation tool

Clegg’s blog post also discusses the (so far “limited”) use of generative AI by Meta as a tool for helping it enforce its own policies — and the potential for GenAI to take up more of the slack here, with the Meta president suggesting it may turn to large language models (LLMs) to support its enforcement efforts during moments of “heightened risk”, such as elections.

“While we use AI technology to help enforce our policies, our use of generative AI tools for this purpose has been limited. But we’re optimistic that generative AI could help us take down harmful content faster and more accurately. It could also be useful in enforcing our policies during moments of heightened risk, like elections,” he wrote.

“We’ve started testing Large Language Models (LLMs) by training them on our Community Standards to help determine whether a piece of content violates our policies. These initial tests suggest the LLMs can perform better than existing machine learning models. We’re also using LLMs to remove content from review queues in certain circumstances when we’re highly confident it doesn’t violate our policies. This frees up capacity for our reviewers to focus on content that’s more likely to break our rules.”
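The queue-clearing step Clegg describes can be sketched in a few lines. This is a hypothetical illustration, not Meta’s actual pipeline: it assumes an LLM has already scored each queued item for policy violation (the score here is a stand-in stub), and auto-clears only items the model is highly confident are benign, leaving everything else for human reviewers.

```python
# Hypothetical sketch of LLM-assisted review-queue triage: items the
# model is highly confident don't violate policy skip human review;
# everything else stays in the queue. The violation_score field stands
# in for an LLM classifier's output, which is not modelled here.

from dataclasses import dataclass

@dataclass
class QueueItem:
    content: str
    violation_score: float  # stand-in LLM policy-violation score, 0..1

# Auto-clear only at very high confidence of being benign (assumed
# threshold, chosen for illustration).
AUTO_CLEAR_THRESHOLD = 0.05

def triage(queue: list[QueueItem]) -> tuple[list[QueueItem], list[QueueItem]]:
    """Split the queue into (auto-cleared, needs human review)."""
    cleared = [i for i in queue if i.violation_score < AUTO_CLEAR_THRESHOLD]
    for_humans = [i for i in queue if i.violation_score >= AUTO_CLEAR_THRESHOLD]
    return cleared, for_humans

queue = [
    QueueItem("holiday photo caption", 0.01),
    QueueItem("borderline insult", 0.40),
    QueueItem("clear policy breach", 0.97),
]
cleared, for_humans = triage(queue)
print(len(cleared), len(for_humans))  # 1 2
```

The design choice worth noting is the asymmetry: a low threshold for auto-clearing means false negatives (harmful content slipping past) are rare, at the cost of humans still seeing most borderline material — which matches Clegg’s framing of freeing up reviewer capacity rather than replacing review.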

So we now have Meta experimenting with generative AI as a supplement to its standard AI-powered content moderation efforts in a bid to reduce the volume of toxic content that gets pumped into the eyeballs and brains of overworked human content reviewers, with all the trauma risks that entails.

AI alone couldn’t fix Meta’s content moderation problem — whether AI plus GenAI can do it seems doubtful. But it might help the tech giant extract greater efficiencies at a time when the tactic of outsourcing toxic content moderation to low paid humans is facing legal challenges across multiple markets.

Clegg’s post also notes that AI-generated content on Meta’s platforms is “eligible to be fact-checked by our independent fact-checking partners” — and may, therefore, also be labelled as debunked (i.e. in addition to being labelled as AI-generated; or “Imagined by AI”, as Meta’s current GenAI image labels have it). Which, frankly, sounds increasingly confusing for users trying to navigate the credibility of stuff they see on its social media platforms — where a piece of content may get multiple signposts applied to it, just one label, or none at all.

Clegg also avoids any discussion of the chronic asymmetry between fact-checkers and fakers. Human fact-checking is a resource typically provided by nonprofit entities with limited time and money to debunk essentially limitless digital fakes, while all sorts of malicious actors with access to social media platforms — fuelled by myriad incentives and funders — can weaponize increasingly powerful and widely available AI tools (including those Meta itself is building and providing to fuel its content-dependent business) to massively scale disinformation threats.

Without solid data on the prevalence of synthetic vs authentic content on Meta’s platforms, and without data on how effective its AI fake detection systems actually are, there’s little we can conclude — beyond the obvious: Meta is feeling under pressure to be seen to be doing something in a year when election-related fakes will, undoubtedly, command a lot of publicity.
