Facebook Has Taken a Harder Stance Against COVID-19 Misinformation – So Why Can't It Do That for Everything?

Facebook has this week provided an overview of its expanded efforts to combat COVID-19 misinformation across its platforms, in order to ensure that the 2.89 billion people who log into its apps each month are getting timely, accurate information about the evolving pandemic.

As explained by Facebook:

“Ever since the World Health Organization (WHO) declared COVID-19 a global public health emergency, we’ve been working to connect people to accurate information, and taking aggressive steps to stop misinformation and harmful content from spreading.”

Indeed, Facebook has been taking more aggressive action on COVID-19 misinformation. Among its various measures, Facebook has:

  • Sought to remove all posts across its platforms which present false claims about cures, treatments, the availability of essential services, and/or the location and severity of the outbreak
  • Banned all ads and commerce listings which seek to capitalize on fears, including all listings of face masks, hand sanitizer, disinfectant wipes and COVID-19 testing kits
  • Invested more funding in fact-checking resources, giving them greater capacity to detect and flag potentially misleading posts
  • Started removing all non-official COVID-19 accounts from recommendations listings on Instagram, as well as any AR effects related to coronavirus
  • Added labels to show people when they’ve received a forwarded or chain message on WhatsApp
  • Set a limit on the number of times messages can be forwarded on WhatsApp to reduce the spread of viral messages (which is also now being tested on Messenger)
  • Improved its machine learning tools in order to better identify and ban accounts engaged in mass messaging

Those are some impressive additions – and given the importance of ensuring that people are correctly informed about the virus, and how to limit its spread, it makes sense that Facebook has made this a key focus.

But the raft of updates has also led to another key query – if Facebook is able to ramp up its misinformation-combating tactics so significantly right now, why hasn’t it done so in the past to limit false claims from politicians and politically motivated groups, which have been used to sway voters, and potentially shift the outcomes of elections?

Facebook, you may recall, was heavily criticized late last year after announcing that it would not be subjecting ads from political groups to fact-checks, preferring instead to let candidates say what they like, and to let the people decide who’s being truthful and who’s not.

As explained by Facebook CEO Mark Zuckerberg:

“We don’t fact-check political ads. We don’t do this to help politicians, but because we think people should be able to see for themselves what politicians are saying. And if content is newsworthy, we also won’t take it down even if it would otherwise conflict with many of our standards.”

The announcement was seen by many as Facebook giving a green light to outright lies in political campaigns – if political parties are not held to account for their claims in the same way as any other advertiser, that makes Facebook an even more powerful campaigning weapon, with its targeted advertising tools enabling specific messaging designed to influence certain voter groups into taking (or in some cases not taking) action.

Facebook defended its decision by noting that all political ads are now available in its Ad Archive, providing transparency, but essentially, the platform said that it didn’t want to be the referee in political speech. And either way, defining what’s true and untrue in political discourse is difficult – but that being the case, how is Facebook able to be so black and white on COVID-19 content, yet non-committal on other types?

Zuckerberg recently responded to this exact question in a press briefing:

“When you’re dealing with a pandemic, […] it’s easier to set policies that are a little more black and white and take a much harder line.”

That makes sense – relative truth in this sense is based on scientific fact, as provided by medical authorities. I can tell you, for example, based on advice from medical professionals, that drinking bleach won’t cure COVID-19, nor will walking out in the sun, drinking cow urine (really) or drowning it with whiskey.

This makes it easier for Facebook to say ‘this is true’ and ‘this is not’, helping it to avoid potential harms. But if you extend that argument, how does the same logic apply to claims around, say, climate change?

The basics of climate science – that climate warming trends over the last century are “extremely likely due to human activities” – are agreed upon by upwards of 97% of experts in the field. And yet, many skeptics publish claims to the contrary, which often gain significant traction on Facebook.

If the science is agreed upon, should Facebook also remove these claims? Facebook needs to take action on false COVID-19 claims because of their potential for harm, but you could argue the same, in theory, for climate change. Right?

Of course, there’s more nuance and debate in the finer details of climate change arguments, but it does raise some interesting questions about Facebook’s variable approach to what it lets through, and where it draws the line on misinformation, based on potential harm. 

Basically, Facebook’s increased enforcement actions around COVID-19 show that it can do more to combat misinformation, so long as it agrees with the underlying rationale for doing so. In this case, Facebook rightfully sees COVID-19 misinformation as likely to cause societal damage. How it views other forms of half-truths and misleading posts is more variable – though they may well be just as damaging, depending on your perspective.

The immediate concern is obviously in ensuring accurate information is being distributed, and that people are collectively working to limit the spread of the virus, so that we can eventually get back to our normal way of life. But when we do, it’ll be interesting to see how Facebook’s increased action is viewed, and whether it sets a stronger precedent to put more pressure on Zuck and Co. to take further action on misinformation, in all its forms. 
