Meta to expand labelling of AI-generated imagery in election-packed year

Image: Facebook and Meta logos. Image Credits: Chesnot / Getty Images

Meta is expanding the labelling of AI-generated imagery on its social media platforms, Facebook, Instagram and Threads, to cover some synthetic imagery that’s been created using rivals’ generative AI tools — at least where rivals are using what it couches as “industry standard indicators” that the content is AI-generated and which Meta is able to detect.

The development means the social media giant expects to be labelling more AI-generated imagery circulating on its platforms going forward. But it’s also not putting figures on any of this stuff — i.e. how much synthetic vs authentic content is routinely being pushed at users — so how significant a move this might be in the fight against AI-fuelled dis- and misinformation (in a massive year for elections, globally) is unclear.

Meta says it already detects and labels “photorealistic images” that have been created with its own “Imagine with Meta” generative AI tool, which launched last December. But, up to now, it hasn’t been labelling synthetic imagery created using other companies’ tools. So this is the (baby) step it’s announcing today.

“[W]e’ve been working with industry partners to align on common technical standards that signal when a piece of content has been created using AI,” wrote Meta president, Nick Clegg, in a blog post announcing the expansion of labelling. “Being able to detect these signals will make it possible for us to label AI-generated images that users post to Facebook, Instagram and Threads.”

Per Clegg, Meta will be rolling out expanded labelling “in the coming months”; and applying labels in “all languages supported by each app”.

A spokesman for Meta could not provide a more specific timeline, nor any detail on the order in which markets will get the extra labels, when we asked for more. But Clegg’s post suggests the rollout will be gradual — “through the next year” — and could see Meta using election calendars around the world to inform decisions about when and where to launch the expanded labelling in different markets.

“We’re taking this approach through the next year, during which a number of important elections are taking place around the world,” he wrote. “During this time, we expect to learn much more about how people are creating and sharing AI content, what sort of transparency people find most valuable, and how these technologies evolve. What we learn will inform industry best practices and our own approach going forward.”

Meta’s approach to labelling AI-generated imagery relies on detection powered by both visible marks that its generative AI tech applies to synthetic images and “invisible watermarks” and metadata the tool also embeds in image files. It’s these same sorts of signals, embedded by rivals’ AI image-generating tools, that Meta’s detection tech will be looking for, per Clegg, who notes Meta has been working with other AI companies, via forums like the Partnership on AI, with the aim of developing common standards and best practices for identifying generative AI content.
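To illustrate the kind of metadata signal such detection could key on (a sketch, not Meta’s actual pipeline): the IPTC standard defines a DigitalSourceType value, “trainedAlgorithmicMedia”, that generators can embed in an image’s XMP metadata, and a platform could scan a file’s bytes for that marker. The fake payload below is hypothetical, for illustration only.

```python
# Sketch: scan an image file's embedded XMP metadata for the IPTC
# "trainedAlgorithmicMedia" URI that some generators use to flag AI-created
# content. Illustrative only -- real pipelines parse XMP properly and also
# check for invisible watermarks, which this naive byte scan cannot see.

AI_MARKER = b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

def looks_ai_generated(image_bytes: bytes) -> bool:
    """Return True if the raw file bytes contain the IPTC AI-provenance URI."""
    return AI_MARKER in image_bytes

# A minimal fake JPEG-like payload carrying the marker in an XMP packet:
fake_xmp = (
    b"\xff\xd8\xff\xe1"  # JPEG SOI + APP1 marker bytes
    b"<x:xmpmeta><rdf:Description DigitalSourceType="
    b"'http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia'"
    b"/></x:xmpmeta>"
    b"\xff\xd9"
)
```

The obvious weakness, which Clegg acknowledges later in the post, is that metadata like this can simply be stripped from the file.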

His blog post doesn’t spell out the extent of others’ efforts towards this end. But Clegg implies Meta will — in the coming 12 months — be able to detect AI-generated imagery from tools made by Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock, as well as its own AI image tools.

What about AI-generated video and audio?

When it comes to AI-generated video and audio, Clegg suggests it’s generally still too challenging to detect these kinds of fakes — because marking and watermarking have yet to be adopted at enough scale for detection tools to do a good job. Additionally, such signals can be stripped out through editing and further media manipulation.

“[I]t’s not yet possible to identify all AI-generated content, and there are ways that people can strip out invisible markers. So we’re pursuing a range of options,” he wrote. “We’re working hard to develop classifiers that can help us to automatically detect AI-generated content, even if the content lacks invisible markers. At the same time, we’re looking for ways to make it more difficult to remove or alter invisible watermarks.

“For example, Meta’s AI Research lab FAIR recently shared research on an invisible watermarking technology we’re developing called Stable Signature. This integrates the watermarking mechanism directly into the image generation process for some types of image generators, which could be valuable for open source models so the watermarking can’t be disabled.”
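Stable Signature bakes the watermark into the generation process itself, but the basic idea of an invisible watermark — hiding a bit pattern in pixel data that viewers don’t notice — can be shown with a toy least-significant-bit scheme. This is purely illustrative, not Meta’s method, and is trivially removable, which is exactly the weakness Clegg describes:

```python
# Toy invisible watermark: hide a bit string in the least significant bits of
# pixel values. Illustrative only -- schemes like Stable Signature are far more
# robust and are embedded during image generation, not applied afterwards.

def embed(pixels: list[int], bits: str) -> list[int]:
    """Overwrite the LSB of the first len(bits) pixels with the watermark bits."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | int(b)
    return out

def extract(pixels: list[int], n_bits: int) -> str:
    """Read the watermark back out of the first n_bits pixels."""
    return "".join(str(p & 1) for p in pixels[:n_bits])

watermark = "1011"
image = [200, 201, 202, 203, 204]  # stand-in for real pixel data
marked = embed(image, watermark)   # pixel values change by at most 1
```

Because each pixel shifts by at most one intensity level, the mark is invisible to the eye — but re-encoding or editing the image can destroy it, which is why more robust approaches are being researched.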

Given the gap between what’s technically possible on the AI generation versus detection side, Meta is changing its policy to require users who post “photorealistic” AI-generated video or “realistic-sounding” audio to inform it that the content is synthetic — and Clegg says it’s reserving the right to label the content if it deems it “particularly high risk of materially deceiving the public on a matter of importance”.

If users fail to make this manual disclosure, they could face penalties under Meta’s existing Community Standards. (So account suspensions, bans, etc.)

“Our Community Standards apply to everyone, all around the world and to all types of content, including AI-generated content,” Meta’s spokesman told us when asked what type of sanctions users who fail to make a disclosure could face.

While Meta is keenly heaping attention on the risks around AI-generated fakes, it’s worth remembering that manipulation of digital media is nothing new and that misleading people at scale doesn’t require fancy generative AI tools. Access to a social media account and basic media editing skills can be all it takes to make a fake that goes viral.

On this front, a recent decision by the Oversight Board, a Meta-established content review body, urged the tech giant to rewrite what it described as “incoherent” policies on faked videos. The case reviewed Meta’s decision not to remove an edited video of President Biden with his granddaughter that had been manipulated to falsely suggest inappropriate touching. The Board specifically called out Meta’s focus on AI-generated content in this context.

“As it stands, the policy makes little sense,” wrote Oversight Board co-chair Michael McConnell. “It bans altered videos that show people saying things they do not say, but does not prohibit posts depicting an individual doing something they did not do. It only applies to video created through AI, but lets other fake content off the hook.”

Asked whether, in light of the Board’s review, Meta is looking at expanding its policies to ensure non-AI-related content manipulation risks are not being ignored, its spokesman declined to answer, saying only: “Our response to this decision will be shared on our transparency centre within the 60 day window.”

LLMs as a content moderation tool

Clegg’s blog post also discusses the (so far “limited”) use of generative AI by Meta as a tool for helping it enforce its own policies — and the potential for GenAI to take up more of the slack here, with the Meta president suggesting it may turn to large language models (LLMs) to support its enforcement efforts during moments of “heightened risk”, such as elections.

“While we use AI technology to help enforce our policies, our use of generative AI tools for this purpose has been limited. But we’re optimistic that generative AI could help us take down harmful content faster and more accurately. It could also be useful in enforcing our policies during moments of heightened risk, like elections,” he wrote.

“We’ve started testing Large Language Models (LLMs) by training them on our Community Standards to help determine whether a piece of content violates our policies. These initial tests suggest the LLMs can perform better than existing machine learning models. We’re also using LLMs to remove content from review queues in certain circumstances when we’re highly confident it doesn’t violate our policies. This frees up capacity for our reviewers to focus on content that’s more likely to break our rules.”
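The queue-triage step Clegg describes — auto-clearing items the system is highly confident are benign so human reviewers see only likelier violations — could look something like this in outline. This is a sketch with a stubbed-in classifier; Meta hasn’t published its actual implementation, and in its description an LLM trained on the Community Standards would supply the violation score.

```python
# Sketch of confidence-thresholded review-queue triage. The classifier below is
# a hypothetical stand-in for the LLM policy classifier Clegg describes.

CLEAR_THRESHOLD = 0.05  # auto-clear only when violation probability is tiny

def stub_classifier(text: str) -> float:
    """Hypothetical stand-in: returns an estimated probability of a policy violation."""
    return 0.9 if "buy illegal goods" in text else 0.01

def triage(queue: list[str], classify=stub_classifier) -> tuple[list[str], list[str]]:
    """Split a review queue into auto-cleared items and items kept for human review."""
    cleared, for_humans = [], []
    for item in queue:
        (cleared if classify(item) < CLEAR_THRESHOLD else for_humans).append(item)
    return cleared, for_humans

queue = ["nice sunset photo", "buy illegal goods here", "my cat video"]
cleared, for_humans = triage(queue)
```

The design choice doing the work here is the threshold: set it low enough and the model only removes items it is near-certain are fine, which is how “frees up capacity for reviewers” can coexist with not auto-deciding hard cases.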

So we now have Meta experimenting with generative AI as a supplement to its standard AI-powered content moderation efforts in a bid to reduce the volume of toxic content that gets pumped into the eyeballs and brains of overworked human content reviewers, with all the trauma risks that entails.

AI alone couldn’t fix Meta’s content moderation problem — whether AI plus GenAI can do it seems doubtful. But it might help the tech giant extract greater efficiencies at a time when the tactic of outsourcing toxic content moderation to low-paid humans is facing legal challenges across multiple markets.

Clegg’s post also notes that AI-generated content on Meta’s platforms is “eligible to be fact-checked by our independent fact-checking partners” — and may, therefore, also be labelled as debunked (i.e. in addition to being labelled as AI-generated; or “Imagined by AI”, as Meta’s current GenAI image labels have it). Which, frankly, sounds increasingly confusing for users trying to navigate the credibility of stuff they see on its social media platforms — where a piece of content may get multiple signposts applied to it, just one label, or none at all.

Clegg also avoids any discussion of the chronic asymmetry between the availability of human fact-checkers, a resource that’s typically provided by nonprofit entities which have limited time and money to debunk essentially limitless digital fakes; and all sorts of malicious actors with access to social media platforms, fuelled by myriad incentives and funders, who are able to weaponize increasingly widely available and powerful AI tools (including those Meta itself is building and providing to fuel its content-dependent business) to massively scale disinformation threats.

Without solid data on the prevalence of synthetic vs authentic content on Meta’s platforms, and without data on how effective its AI fake detection systems actually are, there’s little we can conclude — beyond the obvious: Meta is feeling under pressure to be seen to be doing something in a year when election-related fakes will, undoubtedly, command a lot of publicity.
