Blocked Hashtags, Violence Metrics, Bans – How Social Media Has Dealt with the US Election

For the last four years, the 2020 US Election has been framed as the ultimate test for social media networks and their capacity to respond to allegations of mass manipulation, foreign interference, fake news and more.

Facebook and Twitter, in particular, have worked to add a range of new measures and tools to better protect the integrity of the critical vote, including new ad transparency measures, improved detection systems to stamp out ‘coordinated inauthentic behavior’, and updated response processes.

So how have they done? Amid the ongoing confusion around the results, as they continue to trickle in, how have the major social networks performed in combating mass manipulation and misinformation spread through their apps?

Here’s a look at some of the key points of focus, and action, over the past week. 

Accurate Updates

Facebook has been working to keep users updated with the latest accurate voting information by adding prominent banners to feeds, on both Facebook and Instagram, which remind users that the votes are still being counted.

Yesterday, Facebook announced that it will soon share a new banner, once a winner of the election is projected by official sources, providing more context on the process.

These official updates serve as a counter to the speculation online, and do seem to be having some impact in quelling angst around the result. 

Speculation about voter fraud, provoked by the US President, is also having an impact, however – one which Facebook is reportedly measuring, in real time, via its own internal systems.

Predicting Violence

According to BuzzFeed News, Facebook has an internal measurement tool which it uses to predict the likelihood of potential real-world violence, based on rising discussion trends.

[Image: Facebook violence measure]

As explained by BuzzFeed:

“In a post to a group on Facebook’s internal message board, one employee alerted their colleagues to a nearly 45% increase in the metric, which assesses the potential for danger based on hashtags and search terms, over the last five days. The post notes that trends were previously rising “slowly,” but that recently “some conspiracy theory and general unhappiness posts/hashtags” were gaining popularity.”
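BuzzFeed’s description doesn’t reveal how the metric is calculated, but the basic shape – tracking the volume of flagged hashtags and search terms, and measuring how quickly that volume is rising – is easy to picture. Purely as an illustration (the term list, function names and window size below are assumptions, not Facebook’s actual system), a trend metric of that kind might be sketched like this:

```python
from collections import Counter
from typing import Iterable, List

# Hypothetical watchlist - Facebook's actual term list is not public.
WATCHED_TERMS = {"#stopthesteal", "#riggedelection", "stop the count"}

def daily_term_counts(posts: Iterable[str]) -> Counter:
    """Count how often watched terms appear in one day's worth of post text."""
    counts = Counter()
    for text in posts:
        lowered = text.lower()
        for term in WATCHED_TERMS:
            if term in lowered:
                counts[term] += 1
    return counts

def trend_increase(history: List[Counter], window: int = 5) -> float:
    """Fractional change in total watched-term volume over the last `window` days
    versus the preceding `window` days. A return value of ~0.45 would correspond
    to the 'nearly 45% increase' described in the BuzzFeed report."""
    recent = sum(sum(day.values()) for day in history[-window:])
    previous = sum(sum(day.values()) for day in history[-2 * window:-window])
    if previous == 0:
        return float("inf") if recent > 0 else 0.0
    return (recent - previous) / previous
```

Facebook’s real tool would presumably weight terms, factor in reach and geography, and feed into human review; the point of the sketch is only that a simple before-and-after comparison over a rolling window is enough to surface a ‘45% increase’ style signal.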

The existence of such a metric is interesting, for several reasons.

For one, it shows that Facebook is aware of the influence it can have on real-world action, and that it understands that allowing certain trends to spread and grow can be dangerous. That means Facebook is constantly measuring how far things can be pushed, and when it needs to slow trends down in order to stop them spilling over.

So for all its downplaying of its potential to influence major incidents, Facebook knows that it does have that influence, and it allows certain controversial discussions to continue, uninhibited, until it deems that it needs to act.

That’s concerning: Facebook is putting itself in the position of managing the balance between maximizing engagement and facilitating potential unrest.

The very existence of the metric shows that Facebook could likely do more to stop such movements before they gain traction.

It also shows that Facebook can’t play dumb on other topics, like QAnon or anti-vaxxers, as based on this data, it would have been well aware of their potential for harm well before they became problematic.

Many experts had called on Facebook to do more about QAnon for years, before Facebook finally took action. This metric, if it does exist, suggests that Facebook knew the risk all along, and only chose to act when it felt it was absolutely necessary. Which could be too late for any victims caught in the early crossfire. So who decides when the risk is too high, and should Facebook be in charge of making that call?

This will no doubt come under further investigation in the wake of the election.

Blocked Hashtags

Based on these insights, Facebook also took the additional step this week of blocking certain hashtags linked to the rising criticism of the vote-counting process.

Following US President Donald Trump’s unsubstantiated claims that the vote tallying process was ‘illegal’ and ‘rigged’, groups of Trump supporters gathered outside several vote counting centers across the US and began calling for poll workers inside to ‘stop the count’. 

Again, with the threat of real-world violence rising, Facebook took action, blocking the hashtags #sharpiegate (relating to claims that ballots marked with Sharpie pens were invalidated), #stopthesteal and #riggedelection. TikTok also blocked these hashtags, while Twitter has continued to add warnings to any posts it detects which may contain election misinformation.

Indeed, incoming US Representative Marjorie Taylor Greene has seen a raft of her tweets hidden due to violations of Twitter’s election misinformation policy.

[Image: Tweets by Marjorie Taylor Greene]

Facebook has also removed several groups created on the back of questions around the election results, due to concerns that they could be used to organize violent protests in response.
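Mechanically, this kind of hashtag block is simple to picture. None of the platforms have published their implementations, so the following is only a rough sketch (the blocklist contents and function names are assumptions) of how posts might be filtered out of hashtag search and trend surfaces:

```python
import re

# Hypothetical blocklist mirroring the hashtags reported as blocked this week.
BLOCKED_HASHTAGS = {"#sharpiegate", "#stopthesteal", "#riggedelection"}

HASHTAG_PATTERN = re.compile(r"#\w+")

def extract_hashtags(text: str) -> set:
    """Pull lower-cased hashtags out of a post's text."""
    return {tag.lower() for tag in HASHTAG_PATTERN.findall(text)}

def should_hide_from_search(post_text: str) -> bool:
    """True if the post uses any blocked hashtag and so should be excluded
    from hashtag search results and trending surfaces."""
    return bool(extract_hashtags(post_text) & BLOCKED_HASHTAGS)

# Example: should_hide_from_search("They need to #StopTheSteal now") -> True
```

In practice the platforms hide search results and trend listings rather than deleting the posts themselves, which is roughly what a filter like this models.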

Slowing Momentum

Facebook is also reportedly looking to add more friction to the sharing of political posts, in order to slow the momentum of conspiracy-fueling content.

As per The New York Times:

“Facebook plans to add more “friction” – such as an additional click or two – before people can share posts and other content. The company will also demote content on the News Feed if it contains election-related misinformation, making it less visible, and limit the distribution of election-related Facebook Live streams.”

Again, potentially based on its violence predictor, Facebook is looking to add more measures to reduce the spread of harmful misinformation, and material that could incite further tension within the community.

But a lot of content is still getting through – it’s not difficult to find various videos and posts on the platform which raise questions about the voting process. 

It’s likely, of course, that Facebook can’t stop all of it, which is why adding more friction could be key to at least slowing its spread.
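The Times’ description points to two levers: an extra confirmation step before sharing, and a ranking penalty on posts flagged as election misinformation. Neither mechanism has been detailed publicly, so the following is just an illustrative sketch (the flag, demotion factor and function names are invented for the example):

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    base_rank_score: float
    flagged_election_misinfo: bool = False  # set upstream by classifiers / fact-checkers

# Hypothetical demotion factor - any real value Facebook uses is not public.
MISINFO_DEMOTION_FACTOR = 0.5

def ranked_score(post: Post) -> float:
    """Demote flagged posts in feed ranking so they surface less often."""
    if post.flagged_election_misinfo:
        return post.base_rank_score * MISINFO_DEMOTION_FACTOR
    return post.base_rank_score

def share_action(post: Post, user_confirmed: bool) -> str:
    """Model the extra 'friction': flagged posts require an explicit extra
    confirmation click before they can be reshared."""
    if post.flagged_election_misinfo and not user_confirmed:
        return "show_interstitial"  # ask the user to confirm before sharing
    return "share"
```

The interesting design question is the same one raised by the violence metric: who sets the demotion factor, and at what point the friction kicks in.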

Removing Bannon

In another significant move, former Trump advisor Steve Bannon has been permanently banned from Twitter after calling for the beheadings of two top American public servants, in order to send a warning to others.

Bannon made the suggestion in a video, which has since been removed from Facebook and YouTube as well.

Bannon was looking to equate modern-day politics to medieval times, saying:

“Second term kicks off with firing [FBI Director Christopher] Wray, firing [Dr. Anthony] Fauci. Now I actually want to go a step farther but I realize the President is a kind-hearted man and a good man. I’d actually like to go back to the old times of Tudor England. I’d put the heads on pikes, right, I’d put them at the two corners of the White House as a warning to federal bureaucrats – ‘you either get with the program or you’re gone – time to stop playing games.’”

Bannon’s point was rhetorical rather than literal, but the concern here is that it’s entirely possible that not all of his listeners and/or supporters would take it that way.

Bannon’s statement is another example of the importance of curbing violent rhetoric in public address, regardless of intention, as it can have significant consequences. Bannon now faces bans on several platforms, and could find it much harder to gain amplification for his future messaging as a result.

The Wash-Up

While the results of the election are not yet known, and may not be for some time, it does seem, at this stage, that the additional efforts and measures implemented by the major social platforms have been mostly effective in limiting the spread of misinformation, and quelling at least some of the angst around the results.

But we won’t know for a while yet. At present, there seems to be little discussion about foreign manipulation or similar, but it’s possible that it just hasn’t been detected. And while Facebook and Twitter are working quickly to add warning labels and limit distribution now, once the final results are announced, that could prove to be another key test.

There’ll still be a lot of assessment to come, and division in US society is still significant. But there are positive signs that the platforms themselves have done all they can, by most assessments.  
