YouTube Toughens its Rules Around Dangerous Conspiracy Theories, with a Focus on QAnon Content

YouTube has announced an update to its hate speech and harassment rules, with a focus on reducing the distribution of conspiracy theories like QAnon that have been used to justify real-world violence.

As explained by YouTube:

“Today we’re further expanding both our hate and harassment policies to prohibit content that targets an individual or group with conspiracy theories that have been used to justify real-world violence. One example would be content that threatens or harasses someone by suggesting they are complicit in one of these harmful conspiracies, such as QAnon or Pizzagate.”

This comes just days after Facebook strengthened its stance against QAnon-related content, recognizing the evolving danger of the group and its activity – though Facebook’s action goes further, removing all Facebook Pages, Groups and Instagram accounts that represent QAnon.

YouTube has left some room for exceptions in its updated approach:

“As always, context matters, so news coverage on these issues or content discussing them without targeting individuals or protected groups may stay up. We will begin enforcing this updated policy today, and will ramp up in the weeks to come.”

It’s the latest acknowledgment from the major online platforms that facilitating such discussion can lead to real-world harm. And while YouTube has stopped short of an outright ban on all QAnon-related content, the new measures will further restrict the group and limit its influence.

And YouTube says that it has already minimized much of the QAnon discussion – two years ago, it reduced the reach of harmful misinformation via its ‘Up Next’ recommendations, a change it says has resulted in a 70% drop in views of that content coming from its search and discovery systems.

“In fact, when we looked at QAnon content, we saw the number of views that come from non-subscribed recommendations to prominent Q-related channels dropped by over 80% since January 2019. Additionally, we’ve removed tens of thousands of QAnon videos and terminated hundreds of channels under our existing policies, particularly those that explicitly threaten violence or deny the existence of major violent events.”

So, there are still known ‘Q-related’ channels on YouTube, which will not be removed under this update.

Which seems odd. I understand YouTube’s stance in this respect, in that it will only look to remove content that targets an individual or group. But part of the problem with QAnon, and movements like it, is that they’ve been allowed to start out as relatively harmless chatter and expand from there into significant, concerning forces.

You could argue, early on, that nobody knew that QAnon would evolve into what it has. But we do now. So why let any of it remain?

The QAnon case also highlights the need for social platforms to heed official warnings earlier in the process, in order to halt the momentum of such groups before they gain real traction. Experts have been warning the major platforms about the threat posed by QAnon for years, yet only now are they looking to significantly restrict the discussion. 

Why has it taken this long? And if we’re now acknowledging the threat posed by such groups, will that lead to increased action against other forms of misinformation, before they too can become more damaging, and pose a real danger?

The platforms are working to tackle COVID-19 conspiracy theories, and anti-vaxxers are now facing tougher restrictions. What about climate change denial, and other counter-science movements? Are they not also a significant risk? Could ‘flat earthers’ eventually expand into more dangerous territory? Is there a risk in allowing any anti-science content to proliferate?

It seems that, for the most part, the companies are still operating retrospectively, waiting for such movements to become a problem before taking action.

In some respects, they do need to wait, on an ‘innocent until proven guilty’ basis. But again, analysts have been highlighting concerns around QAnon since a follower of the conspiracy theory walked into a Washington, D.C. pizza restaurant in 2016, armed with a semi-automatic rifle, seeking to investigate for himself what was happening inside.

How you define risk, in this sense, is difficult, but it seems clear that more could be done, and more proactive action could be taken. Will that limit free speech? Will that restrict users from what they can share? Should they be restricted?

There are no easy answers, but the consequences can be severe. Maybe, with the increasing shift to restrict such movements, we’ll see a change in approach to similar warnings. 

