the big thing
This week hosted a lovely little microcosm of the debate around internet platform responsibility.
As is so often the case, the case study centered on bad things Facebook did. The Wall Street Journal has published a series of well-packaged reports lambasting the social media company for essentially acting with disregard for problems its own research indicates it creates for teens. The big figure: 1 in 3 teen girls said the platform made their body image issues worse. Clearly not a great soundbite for Facebook.
The conversation that swelled was predictable enough. Facebook PR did its best to rebut the backlash, especially since this time it was hitting the company's crown jewel, Instagram. But Facebook, the corporation, doesn't seem to have many fans these days who aren't shareholders or getting biweekly checks from the company.
Facebook and Instagram aren’t being criticized unfairly, but it sometimes feels odd just how much more attention is paid to the house that Zuckerberg built when, just down the road, YouTube is facing many of the same challenges and failing in many of the same ways, with a fraction of the public scrutiny and user ire. Not that I’m particularly tired of seeing Facebook executives hauled to congressional hearings, but I wouldn’t mind seeing YouTube CEO Susan Wojcicki forced to address more of the radicalization issues happening on her platform.
YouTube was also in the headlines this week for platform safety, but it received positive coverage for seemingly being proactive rather than reactive.
The Alphabet-owned video platform announced that it is now removing previously allowed anti-vaccine content from its site. It already had policies in place to remove anti-vax content related specifically to Covid-19, and said it had already taken down 130,000 videos that violated that policy. That’s a big number, but enforcement has been far from flawless: it takes about 30 seconds of searching to find Covid disinformation on YouTube, with endless comments underneath doubling down on it, often alongside a few algorithmically suggested videos to take you deeper down the rabbit hole.
The platform clearly could have been doing worse, but it’s a little odd that we’re supposed to see the banning of anti-vaccine content as a proactive move after two years of Covid-19 and 16 years of YouTube.
Here’s the company’s note on what types of content they’re removing as of this week:
Specifically, content that falsely alleges that approved vaccines are dangerous and cause chronic health effects, claims that vaccines do not reduce transmission or contraction of disease, or contains misinformation on the substances contained in vaccines will be removed. This would include content that falsely says that approved vaccines cause autism, cancer or infertility, or that substances in vaccines can track those who receive them. Our policies not only cover specific routine immunizations like for measles or Hepatitis B, but also apply to general statements about vaccines.
YouTube is a very large media platform that has made plenty of choices designed solely around pushing engagement, and, like Facebook, that hasn’t been much of a problem for the vast majority of topics. But, also like Facebook, it’s apparent that the company has over-indexed on driving views in areas where it should be keeping an eye out for vulnerable users on the path toward radicalization. The company’s suggested-videos sidebar has grown more powerful over the years, and YouTube should probably be killing algorithmic suggestions on potentially dangerous topics long before it takes the plunge to ban those topics entirely. While Facebook has been releasing engagement data for its top posts, sharing some ugly truths at times, that data isn’t nearly as accessible for YouTube, leaving many important questions unanswered.
YouTube often seems to be less scrutinized because its effects on individuals in each of our orbits are harder to track. People can see a live feed of their friends’ or family members’ radicalization on Facebook, tracking misinformation they’ve shared over the years or seeing their comments on videos from problematic pages they follow. YouTube is a much less social experience, and generally the only way to get a full picture of the rabbit hole someone is heading down is to talk to them directly or see their web history. Meanwhile, users’ actions are tied to usernames rather than real names, so the whole platform feels like an anonymized pool of “internet users” rather than personas tied to very real people in the world we live in.
YouTube is struggling with very modern problems and has, at times, taken appropriate action for the sake of user safety, doing so without quite as many public letters from government officials, user backlashes, or editorial columns. But it also feels like the company hasn’t been held accountable for its share of the disinformation and online radicalization scramble. Perhaps it’s been given a bit too much breathing room to handle these issues at its own leisure, and perhaps that shouldn’t be the case going forward.