
Facebook will never take responsibility for fake news


Mark Zuckerberg is displeased. It’s been more than four months since election night, but Facebook still finds itself in the hot seat over the spread of fake news on the platform, and what role (if any) it played in the election.

As part of a nationwide tour, Zuck insisted that Facebook doesn’t want fake news on the platform.

Oh good!

“There have been some accusations that say that we actually want this kind of content on our service because it’s more content and people click on it, but that’s crap,” said Zuckerberg during a town hall style meeting in North Carolina last week. “No one in our community wants fake information. We are also victims of this and we do not want it on our service.

“It’s not always clear what is fake and what isn’t,” he continued. “A lot of what people are calling fake news are just opinions that people disagree with.”

This is just the latest in a long string of recent responses to the issue that shirk responsibility for the content that users consume and share on Facebook.

I believe that Facebook does not want fake news on the platform, especially given the backlash. I also believe that Facebook isn’t the only party responsible: the media hackers and political groups that originally post fake news, and the users who share it, bear responsibility in the matter as well.

But it’s hard to argue that Facebook doesn’t want all that content. Facebook was built to foster engagement, and sensationalism (just one of the many forms of fake news) drives engagement.

Facebook made itself the middleman of media, but it has yet to take responsibility for that role and its influence.

Though the response after the election seems sprawling and elaborate, it comes only after a public outcry — Facebook itself admits that fake news has existed on the platform for years.

And given these latest comments from Zuckerberg, which add Facebook’s victimhood to the list of excuses and distractions that have comprised Facebook’s post-election response, it seems increasingly clear that Facebook will never take any real responsibility for the veracity of the content pouring through the site.

 

                <img src="https://techcrunch.com/wp-content/uploads/2017/03/fb-news-eyeballs-crop.jpg?w=1022" width="100%" class="wow" />



        <p></p><p class="first">I</p>

n 2015, Facebook’s interest in the world of news became more overt with the launch of “Instant Articles,” keeping users on the Facebook platform instead of sending them to the outside web when they clicked on a link to an article. But the quest to conquer content distribution and consumption on the internet started well before then.

Remember, Facebook used to have “a wall” and now it has a “News Feed.”

Facebook has gradually built out its platform to serve publishers, whether it’s through analytics dashboards or Instant Articles or rich-media link previews, through Follow buttons or Verified pages or the Trending section of the News Feed. And, through no fault of its own, Facebook also happens to have a lot of eyeballs.

Obviously, publishers became interested, and then hooked — with little power to negotiate against Facebook.

Until December, the company took no responsibility for the way Instant Articles blurs media brands, making news content from every outlet look the same. Facebook threw this site or that site up in the Trending section with little-to-no regard for the most accurate and trusted sources. And yet former Facebook News Feed editors admitted to suppressing conservative stories and boosting left-wing content, which shows some awareness that Facebook could influence its users via the news that flows through the social network’s veins.

Before the election, Facebook stuck with the party line: “We are a tech company, not a media company.”

It wasn’t until after the election, after BuzzFeed exposed how influential Facebook’s fake news problem could have been, that Zuck started to change his tune.

No president has ever represented the interests of 2 billion people, nor has anyone ever attempted to regulate the flow of information for a group so large and diverse and take responsibility for their experience using the service. For that, Facebook will willingly play a role. But given the breadth of users and the depth of the problem, I highly doubt that Facebook will “move fast, break things.”

Instead, Facebook is pushing half-hearted solutions and offloading blame onto other parties, prepping to weather this storm on an already-laid foundation of media dependence and habit.

November 10, 2016

Zuckerberg’s initial reaction, two days after the election, was to diminish fake news on Facebook as an issue:

“Personally, I think the idea that fake news on Facebook — of which it’s a small amount of content — influenced the election in any way is a pretty crazy idea,” said Zuckerberg at the Techonomy conference. “I do think there is a profound lack of empathy in asserting that the only reason someone could’ve voted the way they did is fake news.”

That fails to account for the fact that more than half of U.S. adults get their news from social media, that most students have trouble distinguishing fake news from real news, and that 20 percent of social media users say they have modified their stance on a social or political issue because of content seen on social media.

November 12, 2016

Zuckerberg then recalibrated his language with a post on Facebook, sticking with the idea that it’s highly unlikely fake news affected the outcome of the election:

This is an area where I believe we must proceed very carefully though. Identifying the “truth” is complicated. While some hoaxes can be completely debunked, a greater amount of content, including from mainstream sources, often gets the basic idea right but some details wrong or omitted. An even greater volume of stories express an opinion that many will disagree with and flag as incorrect even when factual. I am confident we can find ways for our community to tell us what content is most meaningful, but I believe we must be extremely cautious about becoming arbiters of truth ourselves.

There are a few glaring problems with this. The first was explained well by my colleague Lora Kolodny in this post: “Zuckerberg’s comment draws a false equivalency between ‘mainstream sources’ of news (including TechCrunch) and political groups masquerading as news brands.”

The second issue is that Zuckerberg is holding to the idea that Facebook can take responsibility only for the pipes, not for the content that flows through them.

November 18, 2016

Zuckerberg’s third reaction (also posted to his Facebook page) added specifics about how the company plans to combat fake news, revealing under-construction features like stronger detection, easier reporting, third-party verification, warning labels, elevating the quality of “related articles,” disrupting the economics of fake news and “listening.”

In the post, he lamented that the issue of fake news is both technically and philosophically complicated. On that, he’s right. Determining the truth is more complicated than an algorithm can handle.

                <img src="https://techcrunch.com/wp-content/uploads/2017/03/gettyimages-624343000.jpg?w=1024" width="100%" class="wow" />




                        <p>Rodrigo Buendia/AFP/Getty Images</p>





        <p><strong>Early December</strong></p>

On December 5, anti-fake news features started to trickle out, including a survey that asked users to rate articles for their use of “misleading language.” The company didn’t clarify the survey’s purpose when asked for comment, but it appears to have been a tool to gauge how users themselves distinguish real news from clickbait and straight-up fakery.

TC writer Devin Coldewey put it well:

Furthermore, because users are the ones propagating the fake news to begin with, it’s a curious decision to entrust them with its classification. The inmates are being invited to run the asylum, it seems, or at least there’s going to be a bit of A/B testing.

The company then announced new partnerships with fact-checkers from trusted organizations like Snopes, FactCheck.org, Politifact, ABC News and the AP, which is admittedly a step in the right direction.

“We’re not looking to get into the gray area of opinion,” Facebook’s VP of News Feed Adam Mosseri told TechCrunch at the time. “What we are focusing on with this work is specifically the worst of the worst — clear hoaxes that were shared intentionally, usually by spammers, for financial gain.”

In one fell swoop, this crosses a few of Zuck’s promises off the list: making fake news less visible (stronger detection), appending warnings, easier hoax reporting and disrupting the spam news economy. But it still neglects to take on any editorial responsibility beyond the bare minimum, which in this case means placing the responsibility on third-party organizations. To be fair, they’re most certainly better equipped to do this job. But without admitting that Facebook is the front page and Zuck the editor, there is no way for Facebook to take a proactive (rather than reactive) approach to the fake news problem.

December 21, 2016

But then, Zuck budged. After six weeks of touting Tools and Other People as the solution, Zuckerberg tweaked the message.

“Facebook is a new kind of platform. It’s not a traditional technology company,” said Zuck. “It’s not a traditional media company. You know, we build technology and we feel responsible for how it’s used.”

                <img src="https://techcrunch.com/wp-content/uploads/2017/03/gettyimages-542774578.jpg?w=1024" width="100%" class="wow" />




                        <p>Photo: Justin Sullivan/Getty Images</p>





        <p>Saying that Facebook is not a <em>traditional</em> media company implies that it is <em>some kind</em> of media company. It’s not necessarily “taking responsibility,” but it’s progress. When originally reporting on Zuck’s subtle change of heart, Josh Constine <a href="https://techcrunch.com/2016/12/21/fbonc/">got into the nitty-gritty</a> on why this matters:</p>

Pure technology platforms receive greater immunity regarding the content they serve, both legally and in the public eye. This stems from the 1996 Communications Decency Act’s Section 230(c), or the Good Samaritan act, that states “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Media companies are considered more directly responsible for their content.

January 6, 2017

A new year, a new approach. Facebook rang in 2017 with a new hire — former television anchor and charter school advocate Campbell Brown joined the company to lead Facebook’s News Partnerships team.

Notice that Brown joined as head of News Partnerships. Notice how a high-profile broadcast anchor is leading a team that works primarily on the digital distribution of news. No disrespect to Brown, but her appointment feels like an appeasement to those calling for a public editor on Facebook, who instead are getting (at best) a liaison between the opaque Facebook News Feed and actual editors in actual newsrooms. At worst, critics are getting front-row tickets to see a prop in the theater of Facebook’s PR machine.

The company clarified that Brown would not be taking on an editorial role or setting content policy for the company.

No dip.

January 11, 2017

This is where Facebook really starts to add insult to injury.

The company unveiled “The Journalism Project,” which is a combination of a few great initiatives to combat fake news and good old-fashioned bribery.

Some pieces of the Journalism Project — plans to work more closely with local news; hackathons where Facebook’s engineers and news site developers can work together; promotion of news literacy; and helping with eyewitness media — are steps in the right direction, vague as they may be.

But the good is packaged in with business proposals, like digests of Instant Articles, free trials for subscription outlets, training for newsrooms on Facebook’s journalism tools and free analytics tools for journalists to see which news stories are trending.

Let’s remember that Instant Articles are a big part of the problem, blurring the line between the content and the source of the content. Tools that expand Instant Articles or help paid news outlets offer free trials through Facebook only expand the practice of Instant Articles and grow the media’s dependency on Facebook for views.

Tools like Signal, which help journalists see which stories are trending, sound attractive. But the popularity of a news story doesn’t necessarily correlate with whether it deserves to be covered. Writing for clicks is very different from writing to inform the public, and conflating the two shows how deeply Facebook misunderstands the problem.

Mid-January to mid-February 2017

In the month following the announcement of the Journalism Project, Facebook took action. The company rolled out anti-fake news features in Germany and partnered with Google to help combat fake news in the French election.

Facebook executives also went before the public and discussed the situation. At a panel at UC Berkeley, VP of News Feed Adam Mosseri outlined ways to mitigate the spread of fake news, including retroactively notifying a user when they’ve consumed or shared something that has been flagged. Head of partnerships Dan Rose appeared onstage at Code Media in early February and touted the new party line: Facebook is a “new type of platform… where people discover a lot of media content.”

He also said that “at the end of the day, if people want to share stories that have been flagged with their friends, that’s ultimately their prerogative.”

Essentially, more of the same “it’s not our fault” but “we’re definitely working on it.”

February 16, 2017

Just over three months after the election, Mark Zuckerberg took to Facebook to post a 5,700-word manifesto that was full of warm fuzzies, buzzwords (but not the unflattering ones) and very little substance or coherence.

Zuck’s diary didn’t go into much detail regarding the fake news problems on Facebook, but it did make an effort to defend the social network. Our own Taylor Hatmaker did a great job breaking it down:

Zuckerberg suggests that providing a “range of perspectives” will combat fake news and sensational content over time, but this appears to naively assume users have some inherent drive to seek the truth. Most social science suggests that people pursue information that confirms their existing biases, time and time again. Facebook does not have a solution for how to incentivize users to behave otherwise.
Even if we’re friends with people like ourselves, Zuckerberg thinks that Facebook feeds display “more diverse content” than newspapers or broadcast news. That’s a big claim, one that seems to embrace Facebook’s identity as a media company, and it’s not backed up by anything at all.
Facebook explains that its approach “will focus less on banning misinformation, and more on surfacing additional perspectives and information.” For fear of backlash, Facebook will sit this one out, pretty much.
Zuckerberg thinks the real problem is polarization across not only social media but also “groups and communities, including companies, classrooms and juries,” which he clumsily dismisses as “usually unrelated to politics.” Basically, Facebook will reflect the systemic inequities found elsewhere in society and it shouldn’t really be expected to do otherwise.
Zuck “[wants] to emphasize that the vast majority of conversations on Facebook are social, not ideological.” By design, so are the vast majority of conversations Facebook has about Facebook. The company continues to be terrified of appearing politically or ideologically aligned.

                <img src="https://techcrunch.com/wp-content/uploads/2017/03/fb-news-eyeballs.jpg?w=1024" width="100%" class="wow" />



        <p></p><p class="first">I</p>

n his comments this week, Zuckerberg seemed to take a measured approach to address his accusers: “We need to make sure that we don’t get to a place where we’re not showing content or banning things from the service just because it hurts someone’s feelings or because someone doesn’t agree with it – I think that would actually hurt a lot of progress.”

Being the “arbiter of truth” is not a role that should fall on a single organization, nor is it simple to balance that arbitration across 2 billion users spanning a multitude of cultures and beliefs, all using a product that is fundamentally geared toward fostering a feedback loop — read this, like this, share this, see more of this.

And it’s certainly not easy to do any of that while trying to run an advertising business.

One of the accusers Zuck may be referencing is former product designer Bobby Goodlatte, who said: “Sadly, News Feed optimizes for engagement. As we’ve learned in this election, bullshit is highly engaging.”

Fair point.

Complicating matters more is the fact that everyone experiences Facebook differently, thanks to that all-knowing algorithm. The New York Times has the same front page for every reader, every single day. There is only one New York Times, and you can take it or leave it. A Facebook user, on the other hand, may see a front page, drawn from a variety of sources and a range of topics, that is completely different from another user’s.

Add to that a president who has declared the media the enemy and it’s hard to blame Zuck for not wanting to get his hands dirty.

Fake news is not easy to remedy.

But before Facebook, there was no website that was home to 2 billion users. There was no “connected world” the way we know it today. And it falls to Facebook to be accountable for how its technology has changed the way we consume information.

Many smart people have suggested a public editor, but it would take an army to wade through the troves of content on Facebook with the same meticulous eye as a real news editor. Fortune’s Matthew Ingram put it well:

Depending on whom you believe, the problem of fake news on Facebook is either one of the most important issues facing mankind, or an over-blown controversy pumped up by the mainstream media. And in a way, that dichotomy itself points out the problem with defining — let alone actually getting rid of — “fake news.”
When someone uses that term, they could be referring to one of a number of different things: It might be a story about how Bill and Hillary Clinton murdered several top-level Washington insiders, or it might be one about how Donald Trump’s chief adviser is a neo-Nazi, or it might be one about how the most important election issue was Clinton’s emails.
The first of these is relatively easy to disprove just by using facts. The second is somewhat more difficult to rebut, since a lot of it is based on innuendo or implication. And the third is almost impossible to respond to because it is pure opinion.

Ingram suggested that, instead of a public editor, Facebook may implement some sort of Wikipedia-style crowdsourced truth filter. But suggestions aside, the first step toward solving the problem is Facebook taking responsibility for it, even if only in part. And thus far, the company has yet to admit anything other than that it’s not the arbiter of truth and that it isn’t responsible for what its users post; despite that lack of accountability, its steps toward a remedy mostly grow the media’s dependence on Facebook as a platform.

Both as a business decision and as a matter of moral responsibility, the company is in a difficult position.

This is why I think we’ll never see Facebook admit that it’s a media company, acknowledge that it has a responsibility for the content shared on its service, or make any legitimate moves that would suggest it is accountable for what you see on Facebook.

Based on the company’s response over the last four months, it’s starting to seem like we’ll get more of the PR runaround than an actionable, achievable solution.

Facebook and media publishers have grown their audiences together, but this particular issue has left them at an impasse. Publishers are calling for further action to stop the spread of fake news, which stands to discredit the whole lot of us, without taking a stand about sharing content through Facebook. And even if Facebook made moves toward filtering content, the media would undoubtedly hold it accountable for any mistake. “Censorship!” they’ll cry.

Meanwhile, Facebook will continue to insist that it’s the publishers’ problem, the users’ problem or the fact-checkers’ problem before it actually does something meaningful to solve the problem. After all, the first step is admitting that you have one.

“Move fast, break things” may have gotten us to this place, but that’s in the past now. “Be still, maybe it won’t see us” is the future.

Illustrations by Bryce Durbin
                <img src="https://techcrunch.com/wp-content/uploads/2017/03/fb-fake-news.jpg?w=1024" width="100%" class="wow" />
