Media & Entertainment

Facebook again under fire for spreading illegal content

Comment

An investigation by a British newspaper into child sexual abuse content and terrorist propaganda being shared on Facebook has once again drawn critical attention to how the company handles complaints about offensive and extremist content being shared on its platform.

And, indeed, to how Facebook’s algorithmically driven, user-generated content platform can apparently encourage the spread of material that may also be illegal.

In a report published today, The Times newspaper accuses Facebook of publishing child pornography after one of its reporters created a fake profile and was quickly able to find offensive and potentially illegal content on the site — including pedophilic cartoons; a video that apparently shows a child being violently abused; and various types of terrorist propaganda including a beheading video made by an ISIS supporter, and comments celebrating a recent attack against Christians in Egypt.

The Times says it reported the content to Facebook but in most instances was apparently told the imagery and videos did not violate the site’s community standards. (Although, when it subsequently contacted the platform identifying itself as The Times newspaper, it says some of the pedophilic cartoons that moderators had kept up were then removed.)

Facebook says it has since removed all the content reported by the newspaper.

A draft law in Germany is proposing to tackle exactly this issue — using the threat of large fines for social media platforms that fail to quickly take down illegal content after a complaint. Ministers in the German cabinet backed the proposed law earlier this month, which could be adopted in the current legislative period.

And where one European government is heading, others in the region might well be moved to follow. The UK government, for example, has once again been talking tougher on social platforms and terrorism, following a terror attack in London last month — with the Home Secretary putting pressure on companies including Facebook to build tools to automate the flagging up and taking down of terrorist propaganda.

The Times says its reporter created a Facebook profile posing as an IT professional in his thirties, befriended more than 100 supporters of ISIS and joined groups promoting “lewd or pornographic” images of children. “It did not take long to come across dozens of objectionable images posted by a mix of jihadists and those with a sexual interest in children,” it writes.

The Times showed the material it found to a UK QC, Julian Knowles, who told it that in his view many of the images and videos are likely to be illegal — potentially breaching UK indecency laws, and the Terrorism Act 2006 which outlaws speech and publications that directly or indirectly encourage terrorism.

“If someone reports an illegal image to Facebook and a senior moderator signs off on keeping it up, Facebook is at risk of committing a criminal offense because the company might be regarded as assisting or encouraging its publication and distribution,” Knowles told the newspaper.

Last month Facebook faced similar accusations over its content moderation system, after a BBC investigation looked at how the site responded to reports of child exploitation imagery, and also found the site failed to remove the vast majority of reported imagery. Last year the news organization also found that closed Facebook groups were being used by pedophiles to share images of child exploitation.

Facebook declined to provide a spokesperson to be interviewed about The Times report, but in an emailed statement Justin Osofsky, VP global operations, told us: “We are grateful to The Times for bringing this content to our attention. We have removed all of these images, which violate our policies and have no place on Facebook. We are sorry that this occurred. It is clear that we can do better, and we’ll continue to work hard to live up to the high standards people rightly expect of Facebook.”

Facebook says it employs “thousands” of human moderators, distributed in offices around the world (such as Dublin for European content) to ensure 24/7 availability. However, given the platform has close to 2 billion monthly active users (1.86BN MAUs at the end of 2016, to be exact), this is a tiny drop in the ocean of content being uploaded to the site every second of every day.

Human moderation clearly cannot scale to review so much content unless Facebook employs far more moderators — a move it clearly wants to resist, given the costs involved (its entire company headcount totals just over 17,000 staff).

Facebook has implemented Microsoft’s PhotoDNA technology, which scans all uploads for known images of child abuse. However, tackling every type of potentially problematic content is a very hard problem to fix with engineering alone. It is not easily automated, given it requires individual judgement calls based on context as well as the specific content, while also potentially factoring in differences in legal regimes and cultural attitudes across regions.
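The core idea behind this kind of scanning is simple to sketch: fingerprint each upload and check the fingerprint against a database of hashes of known abuse imagery. PhotoDNA itself is proprietary and uses a robust perceptual hash that survives resizing and re-encoding; the minimal sketch below substitutes an ordinary cryptographic hash (which only matches byte-identical files) purely to illustrate the matching step, and the hash database is invented for the example.

```python
import hashlib

# Hypothetical database of fingerprints of known abuse imagery.
# (This entry is just the SHA-256 of the bytes b"test", for illustration.)
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(image_bytes: bytes) -> str:
    """Return a hex digest used as the image's fingerprint.

    A real system like PhotoDNA uses a perceptual hash so that slightly
    altered copies of an image still match; SHA-256 here is a stand-in.
    """
    return hashlib.sha256(image_bytes).hexdigest()

def is_known_image(image_bytes: bytes) -> bool:
    """Flag an upload whose fingerprint appears in the known-hash database."""
    return fingerprint(image_bytes) in KNOWN_HASHES
```

Note what this approach cannot do: it only catches copies of material already in the database, which is why novel content — like a newly filmed propaganda video — still depends on human reports and review.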

CEO Mark Zuckerberg recently publicly discussed the issue — writing that “one of our greatest opportunities to keep people safe” is “building artificial intelligence to understand more quickly and accurately what is happening across our community”.

But he also conceded that Facebook needs to “do more”, and cautioned that an AI fix for content moderation is “years” out.

“Right now, we’re starting to explore ways to use AI to tell the difference between news stories about terrorism and actual terrorist propaganda so we can quickly remove anyone trying to use our services to recruit for a terrorist organization. This is technically difficult as it requires building AI that can read and understand news, but we need to work on this to help fight terrorism worldwide,” he wrote in February, before going on to emphasize that “protecting individual security and liberty” is also a core plank of Facebook’s community philosophy — which underscores the tricky ‘free speech vs offensive speech’ balancing act the social media giant continues to try to pull off.

In the end, illegal speech may be the driving force that catalyzes a substantial change to Facebook’s moderating processes — by providing harder red lines where it feels forced to act (even if defining what constitutes illegal speech in a particular region vs what is merely abusive and/or offensive entails another judgement challenge).

One factor is inescapable: Facebook has ultimately agreed that all of the problem content identified via these various high-profile media investigations does indeed violate its community standards and does not belong on its platform. Which raises the question: why was it not taken down when it was first reported? Either that’s a systemic failure of its moderating system — or rank hypocrisy at the corporate level.

The Times says it has reported its findings to the UK’s Metropolitan Police and the National Crime Agency. It’s unclear whether Facebook will face criminal prosecution in the UK for refusing to remove potentially illegal terrorist and child exploitation content.

The newspaper also calls out Facebook for algorithmically promoting some of the offensive material — by suggesting that users join particular groups or befriend profiles that had published it.

On that front, features such as ‘Pages You Might Know’ automatically suggest additional content a user might be interested in, based on factors such as mutual friends, work and education information, networks you’re part of and contacts you’ve imported — but also many other undisclosed factors and signals.
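Facebook’s actual ranking signals are undisclosed, but the mechanism being criticized can be sketched as a weighted score over overlapping signals: the more a candidate profile or group shares with you, the higher it ranks in your suggestions. The signal names and weights below are purely illustrative assumptions, not Facebook’s real factors — the point is that such a scorer has no notion of whether the shared interest is benign or criminal.

```python
def suggestion_score(candidate: dict, user: dict) -> float:
    """Toy ranking score for a suggested profile or group.

    Weights are invented for illustration; a real recommender would
    learn them from engagement data and use many more signals.
    """
    score = 0.0
    # Mutual friends: typically the strongest signal.
    score += 2.0 * len(set(candidate.get("friends", [])) & set(user.get("friends", [])))
    # Shared workplace and education.
    if candidate.get("employer") and candidate.get("employer") == user.get("employer"):
        score += 1.5
    if candidate.get("school") and candidate.get("school") == user.get("school"):
        score += 1.0
    # Shared networks (regions, groups, etc.).
    score += 0.5 * len(set(candidate.get("networks", [])) & set(user.get("networks", [])))
    return score
```

The Times’ criticism, in these terms, is that nothing in such a pipeline distinguishes a cluster of hobbyists from a cluster of extremists: shared connections around objectionable content raise suggestion scores just like any other affinity.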

And just as Facebook’s News Feed machine learning algorithms have been accused of favoring and promoting fake news clickbait, the underlying workings of its algorithmic processes for linking people and interests are increasingly being pulled into the firing line over how they might be accidentally aiding and abetting criminal acts.
