
X users are still complaining about arbitrary shadowbanning


Users of Elon Musk-owned X (formerly Twitter) continue to complain that the platform is engaging in shadowbanning — that is, applying a “temporary” label to accounts that can restrict the reach and visibility of their content — without providing clarity over why it has imposed the sanctions.

Running a search on X for the phrase “temporary label” surfaces multiple users complaining they’ve been flagged by the platform and told, via an automated notification, that the reach of their content “may” be affected. Many express confusion over why they’re being penalized — apparently having been given no meaningful explanation for the restrictions.

Complaints that surface in a search for the phrase “temporary label” show users appear to have received only generic notifications about the reasons for the restrictions — including a vague notice in which X states their accounts “may contain spam or be engaging in other types of platform manipulation.”

The notices X provides contain no more specific reasons, no information on when (or whether) the limit will be lifted, and no route for affected users to appeal the degraded visibility of their account and its content.

“Yikes. I just received a ‘temporary label’ on my account. Does anyone know what this means? I have no idea what I did wrong besides my tweets blowing up lately,” wrote X user, Jesabel (@JesabelRaay), who appears to mostly post about movies, in a complaint Monday voicing confusion over the sanction. “Apparently, people are saying they’ve been receiving this too & it’s a glitch. This place needs to get fixed, man.”

“There’s a temporary label restriction on my account for weeks now,” wrote another X user, Oma (@YouCanCallMeOma), in a public post on March 17. “I have tried appealing it but haven’t been successful. What else do I have to do?”

“So, it seems X has placed a temporary label on my account which may impact my reach. ( I’m not sure how. I don’t have much reach.),” wrote X user, Tidi Grey (@bgarmani) — whose account suggests they’ve been on the platform since 2010 — last week, on March 14. “Not sure why. I post everything I post by hand. I don’t sell anything spam anyone or post questionable content. Wonder what I did.”

The fact these complaints can be surfaced in search results means the accounts’ content still has some visibility. But shadowbanning can encompass a spectrum of actions — with different levels of post downranking and/or hiding potentially being applied. So the term itself is something of a fuzzy label — reflecting the operational opacity it references.

Musk, meanwhile, likes to claim de facto ownership of the baton of freedom of speech. But since he took over Twitter/X, the shadowbanning issue has remained a thorn in the billionaire’s side, taking the sheen off claims that he’s laser-focused on championing free expression. Public posts expressing confusion about account flagging suggest he has failed to resolve long-standing gripes about arbitrary reach-sanctions. And without transparency around these content decisions there can be no accountability.

Bottom line: You can’t credibly claim to be a free speech champion while presiding over a platform where arbitrary censorship continues to be baked in.

Last August, Musk claimed he would “soon” address the lack of transparency around shadowbanning on X. He blamed the problem being hard to tackle on the existence of “so many layers of ‘trust & safety’ software that it often takes the company hours to figure out who, how and why an account was suspended or shadowbanned” — and said a ground-up code rewrite was underway to simplify this codebase.

But more than half a year later complaints about opaque and arbitrary shadowbanning on X continue to roll in.

Lilian Edwards, an internet law academic at Newcastle University, is another X user who has recently been hit by unexplained restrictions on her account. In her case the shadowbanning appears particularly draconian: the platform hides her replies to threads even from users who follow her directly (in place of her content they see a “this post is unavailable” notice). She, too, cannot understand why she has been targeted for shadowbanning.

On Friday, as we were discussing the visibility issues she’s experiencing on X, her DM history appeared to have been briefly ‘memoryholed’ by the platform, too — with our full history of private message exchanges invisible for at least several hours. The platform also did not appear to be sending the standard notification when she sent DMs, meaning the recipient of her private messages would need to manually check the conversation for new content, rather than being proactively notified she had sent a new DM.

She also told us her ability to RT (i.e., repost) others’ content seems to be affected by the flag on her account, which she said was applied last month.

Edwards, who has been on X/Twitter since 2007, posts a lot of original content on the platform — including lots of interesting legal analysis of tech policy issues — and is very obviously not a spammer. She’s also baffled by X’s notice about potential platform manipulation. Indeed, she said she was actually posting less than usual when she got the notification about the flag on her account as she was on holiday at the time.

“I’m really appalled at this because those are my private communications. Do they have a right to down-rank my private communications?!” she told us, saying she’s “furious” about the restrictions.

Another X user — a self-professed “EU digital policy nerd,” per his platform bio, who goes by the handle @gateklons — has also recently been notified of a temporary flag and doesn’t understand why.

Discussing the impact of this, @gateklons told us: “The consequences of this deranking are: Replies hidden under ‘more replies’ (and often don’t show up even after pressing that button), replies hidden altogether (but still sometimes showing up in the reply count) unless you have a direct link to the tweet (e.g. from the profile or somewhere else), mentions/replies hidden from the notification tab and push notifications for such mentions/replies not being delivered (sometimes even if the quality filter is turned off and sometimes even if the two people follow each other), tweets appearing as if they are unavailable even when they are, randomly logging you out on desktop.”

@gateklons posits that the recent wave of X users complaining about being shadowbanned could be related to X applying some new “very erroneous” spammer detection rules. (And, in Edwards’ case, she told us she had logged into her X account from her vacation in Morocco when the flag was applied — so it’s possible the platform is using IP address location as a (crude) signal to factor into detection assessments, although @gateklons said they had not been travelling when their account got flagged.)

We reached out to X with questions about how it applies this sort of content restriction but at the time of writing we’d only received its press email’s standard automated response — which reads: “Busy now, please check back later.”

Judging by search results for “temporary label”, complaints about X’s shadowbanning look to be coming from users all over the world (who are from various points on the political spectrum). But for X users located in the European Union there’s now a decent chance Musk will be forced to unpick this Gordian Knot — as the platform’s content moderation policies are under scrutiny by Commission enforcers overseeing compliance with the bloc’s Digital Services Act (DSA).

X was designated as a very large online platform (VLOP) under the DSA, the EU’s content moderation and online governance rulebook, last April. Compliance for VLOPs, which the Commission oversees, was required by late August. The EU went on to open a formal investigation of X in December — citing content moderation issues and transparency as among a longer list of suspected shortcomings.

That investigation remains ongoing but a spokesperson for the Commission confirmed “content moderation per se is part of the proceedings”, while declining to comment on the specifics of an ongoing investigation.

“As you know, we have sent Requests for Information [to X] and, on December 18, 2023, opened formal proceedings into X concerning, among other things, the platform’s content moderation and platform manipulation policies,” the Commission spokesperson also told us, adding: “The current investigation covers Articles 34(1), 34(2) and 35(1), 16(5) and 16(6), 25(1), 39 and 40(12) of the DSA.”

Article 16 sets out “notice and action mechanism” rules for platforms — although this particular section is geared towards making sure platforms provide users with adequate means to report illegal content. Whereas the content moderation issue users are complaining about in respect to shadowbanning relates to arbitrary account restrictions being imposed without clarity or a route to seek redress.

Edwards points out that Article 17 of the pan-EU law requires X to provide a “clear and specific statement of reasons to any affected recipients for any restriction of the visibility of specific items of information” — with the law broadly drafted to cover “any restrictions” on the visibility of the user’s content; any removal of their content; and any disabling of access to, or demotion of, content.

The DSA also stipulates that a statement of reasons must — at the least — include specifics about the type of shadowbanning applied; the “facts and circumstances” related to the decision; whether any automated decision-making was involved in flagging the account; details of the alleged T&Cs breach/contractual grounds for taking the action and an explanation of it; and “clear and user-friendly information” about how the user can seek to appeal.

In the public complaints we’ve reviewed, it’s clear X is not providing affected users with that level of detail. Yet for users in the EU, where the DSA applies, it is required to be that specific. (NB: Confirmed breaches of the pan-EU law can lead to fines of up to 6% of global annual turnover.)

The regulation does include one exception to Article 17 — exempting a platform from providing the statement of reasons if the information triggering the sanction is “deceptive high-volume commercial content”. But, as Edwards points out, that boils down to pure spam — and literally to spamming the same spammy content repeatedly. (“I think any interpretation would say high volume doesn’t just mean lots of stuff, it means lots of more or less the same stuff — deluging people to try to get them to buy spammy stuff,” she argues.) Which doesn’t appear to apply here.

(Or, well, unless all these accounts making public complaints have manually deleted loads of spammy posts before posting about the account restrictions — which seems unlikely for a range of reasons, such as the volume of complaints; the variety of accounts reporting themselves affected; and how similarly confused users’ complaints sound.)

It’s also notable that even X’s own boilerplate notification doesn’t explicitly accuse restricted users of being spammers; it just says there “may” be spam on their accounts or some (unspecified) form of platform manipulation going on. The latter claim moves even further from the Article 17 exemption — unless the alleged manipulation is also related to “deceptive high-volume commercial content,” which would surely fall under the spam reason anyway, so why mention platform manipulation at all?

X’s use of a generic claim of spam and/or platform manipulation slapped atop what seem to be automated flags could be a crude attempt to circumvent the EU law’s requirement to provide users with both a comprehensive statement of reasons about why their account has been restricted and a way for them to appeal the decision.

Or it could just be that X still hasn’t figured out how to untangle legacy issues in its trust and safety reporting systems — which apparently rely on “free-text notes” that aren’t easily machine readable, per an explainer last year by Twitter’s former head of trust and safety, Yoel Roth — and replace that confusing mess of manual reports with a shiny new codebase able to programmatically parse enforcement attribution data and generate comprehensive reports. Those legacy systems are now looking like a growing DSA compliance headache for X.

As has previously been suggested, the headcount cuts Musk enacted when he took over Twitter may be taking a toll on what it’s able to achieve and/or how quickly it can undo knotty problems.

X is also under pressure from DSA enforcers to purge illegal content off its platform — which is an area of specific focus for the Commission probe — so perhaps, and we’re speculating here, it’s doing the equivalent of flicking a bunch of content visibility levers in a bid to shrink other types of content risks — but leaving itself open to charges of failing its DSA transparency obligations in the process.

Either way, the DSA and its enforcers are tasked with ensuring this kind of arbitrary and opaque content moderation doesn’t happen. So Musk & co are absolutely on watch in the region. Assuming the EU follows through with vigorous and effective DSA enforcement X could be forced to clean house sooner rather than later, even if only for a subset of users located in European countries where the law applies.

Asked during a press briefing last Thursday for an update on its DSA investigation into X, a Commission official pointed back to a meeting last month between the bloc’s internal market commissioner, Thierry Breton, and X CEO Linda Yaccarino, saying Yaccarino had reiterated during that video call X’s claim that it wants to comply with the regulation. In a post on X offering a brief digest of what the meeting had focused on, Breton wrote that he “emphasised that arbitrarily suspending accounts — voluntarily or not — is not acceptable”, adding: “The EU stands for freedom of expression and online safety.”

Balancing freedom and safety may prove to be the real Gordian Knot. For Musk. And for the EU.

