Zuckerberg reveals plans to address misinformation on Facebook


Facebook’s fake news problem persists, CEO Mark Zuckerberg acknowledged last night.

He’d been dismissive about the reach of misinformation on Facebook, saying that false news accounted for less than one percent of all the posts on the social media network. But a slew of media reports this week have demonstrated that, although fake posts may not make up the bulk of the content on Facebook, they spread like wildfire — and Facebook has a responsibility to address it.

“We’ve made significant progress, but there is more work to be done,” Zuckerberg wrote, outlining several ways to address what he called a technically and philosophically complicated problem. He proposed stronger machine learning to detect misinformation, easier user reporting and content warnings for fake stories, while noting that Facebook has already taken action to eliminate fake news sites from its ad program.

The firestorm over misinformation on Facebook began with a particularly outrageous headline: “FBI Agent Suspected in Hillary Email Leaks Found Dead.”

The false story led to accusations that Facebook had tipped the election in Donald Trump’s favor by turning a blind eye to the flood of fake stories trending on its platform. The story, which ran just days before the election on a site for a made-up publication called the Denver Guardian, suggested that Clinton plotted the murders of an imaginary agent and his imaginary wife, then tried to cover it up as an act of domestic violence. It was shared more than 568,000 times.

The Denver Guardian story caused a crisis at Facebook, and it hasn’t gone away. Last night, the story appeared yet again in a friend’s newsfeed. “BREAKING,” the post blared. “FBI AGENT & HIS WIFE FOUND DEAD After Being ACCUSED of LEAKING HILLARY’s EMAILS.” This time, the story was hosted by a site called Viral Liberty. Beneath the headline is a button encouraging Facebook users to share the story, and according to Facebook’s own data, it’s been shared 127,680 times.

Facebook isn’t alone. Google and Twitter grapple with similar problems and have mistakenly allowed fake stories to rise to prominence as well. And although stories about the rise of fake news online have focused primarily on pro-Trump propaganda, the sharing-without-reading epidemic exists in liberal circles too — several of my Facebook friends recently shared an article by the New Yorker’s satirist Andy Borowitz titled “Trump Confirms That He Just Googled Obamacare” as if it were fact, celebrating in their posts that Trump may not dismantle the Affordable Care Act after all his campaign promises to the contrary.

But, as the hub where 44 percent of Americans read their news, Facebook bears a unique responsibility to address the problem. According to former Facebook employees and contractors, the company struggles with fake news because its culture prioritizes engineering over everything else and because it failed to build its news apparatus to recognize and prioritize reliable sources.

Facebook’s media troubles began this spring, when a contractor on its Trending Topics team told Gizmodo that the site was biased against conservative media outlets. To escape allegations of bias, Facebook fired the team of journalists who vetted and wrote Trending Topics blurbs and turned the feature over to an algorithm, which quickly began promoting fake stories from sites designed to churn out incendiary election stories and convert them into quick cash.

It’s not a surprise that Trending Topics went so wrong, so quickly — according to Adam Schrader, a former writer for Trending Topics, the tool pulled its hashtagged titles from Wikipedia, a source with its own struggles with the truth.


“The topics would pop up into the review tool by name, with no description. It was generated from a Wikipedia topic ID, essentially. If a Wikipedia topic was frequently discussed in the news or Facebook, it would pop up into the review tool,” Schrader explained.

From there, he and the other Trending Topics writers would scan through news stories and Facebook posts to determine why the topic was trending. Part of the job was to determine whether the story was true — in Facebook’s jargon, to determine whether a “real world event” had occurred. If the story was real, the writer would then draft a short description and choose an article to feature. If the topic didn’t have a Wikipedia page yet, the writers had the ability to override the tool and write their own title for the post.

Human intervention was necessary at several steps of the process — and it’s easy to see how Trending Topics broke down when humans were removed from the system. Without a journalist to determine whether a “real world event” had occurred and to choose a reputable news story to feature in the Topic, Facebook’s algorithm is barely more than a Wikipedia-scraping bot, susceptible to exploitation by fake news sites.
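The workflow Schrader describes can be sketched in Python. This is a hypothetical illustration, not Facebook’s actual code: the function and field names are invented, but the steps — a candidate surfaced from a Wikipedia topic ID, a human check for a “real world event,” a choice of featured article, and an optional title override — are the ones he outlines above.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Candidate:
    wikipedia_topic_id: str  # topics surfaced by name only, derived from a Wikipedia topic ID
    name: str

@dataclass
class TrendingTopic:
    title: str
    description: str
    featured_article_url: str

def review(candidate: Candidate,
           is_real_world_event: Callable[[Candidate], bool],   # human judgment: did this happen?
           pick_reputable_article: Callable[[Candidate], str],  # human judgment: what to feature
           override_title: Optional[str] = None) -> Optional[TrendingTopic]:
    """One pass of the human review step Schrader describes.

    Returns None when no 'real world event' is behind the trend --
    the check that disappeared when the writers were removed."""
    if not is_real_world_event(candidate):
        return None
    article = pick_reputable_article(candidate)
    title = override_title or candidate.name  # writers could override the Wikipedia-derived title
    return TrendingTopic(title=title,
                         description=f"Trending: {title}",
                         featured_article_url=article)
```

Removing the two human-judgment callbacks and auto-approving every candidate is, in effect, what the post-layoff algorithm did — which is why fake stories sailed through.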

But the idea of using editorial judgement made Facebook executives uncomfortable, and ultimately Schrader and his co-workers lost their jobs.

“[Facebook] and Google and everyone else have been hiding behind mathematics. They’re allergic to becoming a media company. They don’t want to deal with it,” former Facebook product manager and author of Chaos Monkeys Antonio Garcia-Martinez told TechCrunch. “An engineering-first culture is completely antithetical to a media company.”

Of course, Facebook doesn’t want to be a media company. Facebook would say it’s a technology company, with no editorial voice. Now that the Trending editors are gone, the only content Facebook produces is code.

But Facebook is a media company, Garcia-Martinez and Schrader argue.

“Facebook, whether it says it is or it isn’t, is a media company. They have an obligation to provide legit information,” Schrader told me. “They should take actions that make their product cleaner and better for people who use Facebook as a news consumption tool.”

Garcia-Martinez agreed. “The New York Times has a front page editor, who arranges the front page. That’s what New York Times readers read every day — what the front page editor chooses for them. Now Mark Zuckerberg is the front page editor of every newspaper in the world. He has the job but he doesn’t want it,” he said.

Zuckerberg is resistant to this role, writing last night that he preferred to leave complex decisions about the accuracy of Facebook content in the hands of his users. “We do not want to be arbiters of truth ourselves, but instead rely on our community and trusted third parties,” he wrote. “We have relied on our community to help us understand what is fake and what is not. Anyone on Facebook can report any link as false, and we use signals from those reports along with a number of others — like people sharing links to myth-busting sites such as Snopes — to understand which stories we can confidently classify as misinformation.”
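Zuckerberg’s description — user reports plus signals like shares of myth-busting links — reads like inputs to a scoring function. A toy sketch of how such signals might combine follows; the weights, threshold, and function names are entirely invented for illustration, not Facebook’s actual system.

```python
def misinformation_score(total_shares: int,
                         false_reports: int,
                         debunk_link_shares: int) -> float:
    """Combine the two signals Zuckerberg names: users reporting a link
    as false, and users sharing links to myth-busting sites like Snopes
    in response. Weights (0.7 / 0.3) are made up for this sketch."""
    if total_shares == 0:
        return 0.0
    report_rate = false_reports / total_shares
    debunk_rate = debunk_link_shares / total_shares
    return min(1.0, 0.7 * report_rate + 0.3 * debunk_rate)

def classify(score: float, threshold: float = 0.5) -> str:
    # Only stories above a confidence threshold get 'confidently
    # classified as misinformation'; the rest stay untouched.
    return "misinformation" if score >= threshold else "unreviewed"
```

The threshold is where the hard editorial question hides: set it low and legitimate stories get flagged, set it high and hoaxes slip through.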

However, Facebook’s reliance on crowd-sourced truth from its users and from sites like Wikipedia will only take the company halfway to the truth. Zuckerberg also acknowledges that Facebook can and should do more.

Change the algorithm

“There’s definitely things Facebook could do to, if not solve the problem, at least mitigate it,” Garcia-Martinez said, highlighting his former work on ad quality and the massive moderation system Facebook uses to remove images and posts that violate its community guidelines.

To cut back on misinformation, he explains, “You could effectively change distribution at the algorithmic level so they don’t get the engagement that they do.”

This kind of technical solution is most likely to get traction in Facebook’s engineering-first culture, and Zuckerberg says the work is already underway. “The most important thing we can do is improve our ability to classify misinformation. This means better technical systems to detect what people will flag as false before they do it themselves,” he wrote.

This kind of algorithmic tweaking is already popular at Google and other major companies as a way to moderate content. But, in pursuing a strictly technical response, Facebook risks becoming an opaque censor. Legitimate content can vanish into the void, and when users protest, the only response they’re likely to get is, “Oops, there was some kind of error in the algorithm.”

Zuckerberg is rightfully wary of this. “We need to be careful not to discourage sharing of opinions or to mistakenly restrict accurate content,” he said.

Improve the user interface

Mike Caulfield, the director of blended and networked learning at Washington State University Vancouver, has critiqued Facebook’s approach to misinformation. He writes that sharing fake news on Facebook isn’t a passive act — rather, it trains us to believe the things we share are true.

“Early Facebook trained you to remember birthdays and share photos, and to some extent this trained you to be a better person, or in any case the sort of person you desired to be,” Caulfield said, adding:

The process that Facebook currently encourages, on the other hand, of looking at these short cards of news stories and forcing you to immediately decide whether to support or not support them trains people to be extremists. It takes a moment of ambivalence or nuance, and by design pushes the reader to go deeper into their support for whatever theory or argument they are staring at. When you consider that people are being trained in this way by Facebook for hours each day, that should scare the living daylights out of you.

When users look at articles in their News Feed today, Caulfield notes, they see prompts encouraging them to Like, Share, Comment — but nothing suggesting that they Read.

Caulfield suggests that Facebook place more emphasis on the domain name of the news source, rather than solely focusing on the name of the friend who shares the story. Facebook could also improve by driving readers to actually engage with the stories rather than simply reacting to them without reading, but as Caulfield notes, Facebook’s business model is all about keeping you locked into News Feed and not exiting to other sites.

Caulfield’s suggestions for an overhaul of the way articles appear in News Feed are powerful, but Facebook is more likely to make small tweaks than major changes. A compromise might be to label or flag fake news as such when it appears in the News Feed, and Zuckerberg says this is a strategy Facebook is considering.

“We are exploring labeling stories that have been flagged as false by third parties or our community, and showing warnings when people read or share them,” he said.

It’s a strategy that sources tell me is being considered not just at Facebook but at other social networks, but risk-averse tech giants are hesitant to slap a “FAKE” label on a news story. What if they get it wrong? And what about stories like Borowitz’s satire — should the story be called out as false, or merely a joke? And what if a news story from a legitimate publisher turns out to contain inaccuracies? Facebook, Google, Twitter, and others will be painted into a corner, forced to decide what percentage of the information in a story can be false before it’s blocked, downgraded, or marked with a warning label.

Fact-checking Instant Articles

Like the fight against spam, clickbait, and other undesirable content, the war against misinformation on platforms like Google and Facebook is a game of whack-a-mole. But both companies have built their own interfaces for news — Accelerated Mobile Pages and Instant Articles — and they could proactively counter fake stories in those spaces.

AMP and Instant Articles are open platforms, so fake news publishers are welcome to join and distribute their content. But the companies’ control over these spaces gives them an opportunity to detect fake news early.

Google and Facebook both have a unique opportunity to fact-check within AMP and Instant Articles — they could place annotations over certain parts of a news story in the style of News Genius to point out inaccuracies, or include links to other articles offering counterpoints and fact-checks.

Zuckerberg wasn’t clear about what third-party verification of the news on Facebook would look like, saying only, “There are many respected fact checking organizations and, while we have reached out to some, we plan to learn from many more.”

Bringing third-party vetting back into the picture means a return to the kind of human oversight Facebook had in its Trending Topics team. Although Facebook has made clear it wants to leave complex decisions up to its algorithms, the plummeting quality of Trending Topics makes it clear that the algorithm isn’t ready yet.

“I don’t think Trending ever had a problem with fake news or biases necessarily, before the Gizmodo article or after. All the problems were after the team was let go,” Schrader said, noting that Facebook intended to incorporate machine learning into Trending Topics but needed human input to guide and train the algorithm.

Engineers working on machine learning have told me they estimate it would take a dedicated team more than a year to train an algorithm to properly do the work Facebook is attempting with Trending Topics.

Appoint a public editor

Zuckerberg did acknowledge that perhaps Facebook can learn something from journalists like Schrader after all. “We will continue to work with journalists and others in the news industry to get their input, in particular, to better understand their fact checking systems and learn from them,” he said.

But the media certainly isn’t perfect. Sometimes we get our facts wrong, and the results can range from comical to disastrous. In 2004, the New York Times issued a statement questioning its own reporting on several factually inaccurate stories that spurred the war in Iraq. Just as journalists sometimes make mistakes, so will Facebook. And when that happens, Facebook should address the errors.

“In a small back door sort of way, it will adopt some of the protocols of a media company,” Garcia-Martinez says of Facebook. One suggestion: “Get a public editor like the New York Times.”

The public editor serves as a liaison between a paper and its readers, and provides answers about the reporting and what could have been done better.

In his late-night Facebook posts, Zuckerberg has already somewhat assumed this role. But an individual with more independence could help Facebook learn and grow.

“They are going to get a lot better about this business of editorship,” Garcia-Martinez predicts. “When the stakes are American democracy, saying, ‘We’re not a media company,’ is not good enough.”
