
Facebook again under fire for spreading illegal content


An investigation by a British newspaper into child sexual abuse content and terrorist propaganda being shared on Facebook has once again drawn critical attention to how the company handles complaints about offensive and extremist content being shared on its platform.

And, indeed, how Facebook’s algorithmically driven, user-generated content-sharing platform apparently encourages the spread of what can also be illegal material.

In a report published today, The Times newspaper accuses Facebook of publishing child pornography after one of its reporters created a fake profile and was quickly able to find offensive and potentially illegal content on the site — including pedophilic cartoons; a video that apparently shows a child being violently abused; and various types of terrorist propaganda including a beheading video made by an ISIS supporter, and comments celebrating a recent attack against Christians in Egypt.

The Times says it reported the content to Facebook but in most instances was apparently told the imagery and videos did not violate the site’s community standards. (Although, when it subsequently contacted the platform identifying itself as The Times newspaper, it says some of the pedophilic cartoons that moderators had left up were then removed.)

Facebook says it has since removed all the content reported by the newspaper.

A draft law in Germany is proposing to tackle exactly this issue — using the threat of large fines for social media platforms that fail to quickly take down illegal content after a complaint. Ministers in the German cabinet backed the proposed law earlier this month, which could be adopted in the current legislative period.

And where one European government is heading, others in the region might well be moved to follow. The UK government, for example, has once again been talking tougher on social platforms and terrorism, following a terror attack in London last month — with the Home Secretary putting pressure on companies including Facebook to build tools to automate the flagging up and taking down of terrorist propaganda.

The Times says its reporter created a Facebook profile posing as an IT professional in his thirties and befriending more than 100 supporters of ISIS while also joining groups promoting “lewd or pornographic” images of children. “It did not take long to come across dozens of objectionable images posted by a mix of jihadists and those with a sexual interest in children,” it writes.

The Times showed the material it found to a UK QC, Julian Knowles, who told it that in his view many of the images and videos are likely to be illegal — potentially breaching UK indecency laws, and the Terrorism Act 2006 which outlaws speech and publications that directly or indirectly encourage terrorism.

“If someone reports an illegal image to Facebook and a senior moderator signs off on keeping it up, Facebook is at risk of committing a criminal offense because the company might be regarded as assisting or encouraging its publication and distribution,” Knowles told the newspaper.

Last month Facebook faced similar accusations over its content moderation system, after a BBC investigation looked at how the site responded to reports of child exploitation imagery, and also found the site failed to remove the vast majority of reported imagery. Last year the news organization also found that closed Facebook groups were being used by pedophiles to share images of child exploitation.

Facebook declined to provide a spokesperson to be interviewed about The Times report, but in an emailed statement Justin Osofsky, VP global operations, told us: “We are grateful to The Times for bringing this content to our attention. We have removed all of these images, which violate our policies and have no place on Facebook. We are sorry that this occurred. It is clear that we can do better, and we’ll continue to work hard to live up to the high standards people rightly expect of Facebook.”

Facebook says it employs “thousands” of human moderators, distributed in offices around the world (such as Dublin for European content) to ensure 24/7 availability. However given the platform has close to 2 billion monthly active users (1.86BN MAUs at the end of 2016, to be exact) this is very obviously just the tiniest drop in the ocean of content being uploaded to the site every second of every day.

Human moderation clearly cannot scale to review so much content without there being far more human moderators employed by Facebook — a move it clearly wants to resist, given the costs involved (Facebook’s entire company headcount only totals just over 17,000 staff).

Facebook has implemented Microsoft’s PhotoDNA technology, which scans all uploads for known images of child abuse. However tackling all types of potentially problematic content is a very hard problem to fix with engineering; one that is not easily automated, given it requires individual judgement calls based on context as well as the specific content, while also potentially factoring in differences in legal regimes across regions, and differing cultural attitudes.
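At its core, this kind of scanning is a lookup: each upload is reduced to a fingerprint, which is checked against a database of fingerprints of known illegal images. The sketch below is purely illustrative — it substitutes an exact cryptographic hash for PhotoDNA’s proprietary perceptual hash (which is designed to survive resizing and re-encoding), and the function names are invented for the example.

```python
import hashlib

def file_hash(image_bytes: bytes) -> str:
    # Exact-match digest of the file's bytes. PhotoDNA itself computes a
    # perceptual hash robust to resizing and re-encoding; a cryptographic
    # hash is used here only to keep the sketch simple.
    return hashlib.sha256(image_bytes).hexdigest()

def should_block(image_bytes: bytes, known_hashes: set) -> bool:
    # Flag an upload whose digest appears in the shared database of
    # fingerprints of known abuse imagery.
    return file_hash(image_bytes) in known_hashes
```

The crucial limitation — and why human moderators remain in the loop — is that hash matching only catches content that has already been identified and catalogued; novel images, and anything requiring a contextual judgement call, fall outside it.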

CEO Mark Zuckerberg recently publicly discussed the issue — writing that “one of our greatest opportunities to keep people safe” is “building artificial intelligence to understand more quickly and accurately what is happening across our community”.

But he also conceded that Facebook needs to “do more”, and cautioned that an AI fix for content moderation is “years” out.

“Right now, we’re starting to explore ways to use AI to tell the difference between news stories about terrorism and actual terrorist propaganda so we can quickly remove anyone trying to use our services to recruit for a terrorist organization. This is technically difficult as it requires building AI that can read and understand news, but we need to work on this to help fight terrorism worldwide,” he wrote in February, before going on to emphasize that “protecting individual security and liberty” is also a core plank of Facebook’s community philosophy — which underscores the tricky ‘free speech vs offensive speech’ balancing act the social media giant continues to try to pull off.

In the end, illegal speech may be the driving force that catalyzes a substantial change to Facebook’s moderating processes — by providing harder red lines where it feels forced to act (even if defining what constitutes illegal speech in a particular region vs what is merely abusive and/or offensive entails another judgement challenge).

One factor is inescapable: Facebook has ultimately agreed that all of the problem content identified via various high profile media investigations does indeed violate its community standards, and does not belong on its platform. Which rather raises the question of why it was not taken down when it was first reported. Either that’s a systemic failure of its moderation system, or rank hypocrisy at the corporate level.

The Times says it has reported its findings to the UK’s Metropolitan Police and the National Crime Agency. It’s unclear whether Facebook will face criminal prosecution in the UK for refusing to remove potentially illegal terrorist and child exploitation content.

The newspaper also calls out Facebook for algorithmically promoting some of the offensive material — by suggesting that users join particular groups or befriend profiles that had published it.

On that front, features on Facebook such as ‘People You May Know’ automatically suggest additional content a user might be interested in, based on factors such as mutual friends, work and education information, networks you’re part of and contacts that have been imported, as well as many other undisclosed factors and signals.

And just as Facebook’s News Feed machine-learning algorithms have been accused of favoring and promoting fake news clickbait, the underlying workings of its algorithmic processes for linking people and interests are increasingly being pulled into the firing line over how they might inadvertently aid and abet criminal acts.
