Facebook Adds New Measures to Limit the Spread of COVID-19 Misinformation

It’s hard not to be somewhat skeptical about Facebook’s enhanced efforts to combat the spread of COVID-19 misinformation across its platforms.

Not from a practical standpoint, of course. False information about the virus and its spread can have significant consequences, and there's clearly a need to address it. The skepticism relates more to how proactive Facebook has been in working to stamp out COVID-19 misinformation, when it hasn't taken the same approach with other misleading and untrue reports in the past.

In the latest on this front, Facebook is introducing two new measures to choke the circulation of coronavirus fake news.

First, Facebook will start showing users who've shared reports that were later found to be untrue a link to official information in their News Feeds, in the hope of clarifying any potential misunderstanding. Second, Facebook is adding a new section to its COVID-19 Information Center called "Get the Facts", which will highlight fact-checked articles from partners that debunk false information about the coronavirus.

These are good, logical measures, which further expand on Facebook's increasingly aggressive efforts to stop the spread of potentially harmful COVID-19 lies and conspiracy theories.

But it does raise the question: if Facebook can take such a strong stance on COVID-19 fake news, why can't it do the same for all types of false reports and lies, based on fact-checks?

The answer, of course, is that it can, but there are various reasons why this is not entirely feasible for every untrue claim and report.

For one, COVID-19 misinformation can be measured against established science, so there's a clear delineation as to what's true and untrue in this instance.

As Facebook CEO Mark Zuckerberg recently noted in a press briefing:

"When you're dealing with a pandemic, […] it's easier to set policies that are a little more black and white and take a much harder line."

Of course, the counter to this would be something like climate change. The basic finding of climate science, that warming trends over the last century are "extremely likely due to human activities", is agreed upon by upwards of 97% of experts in the field. And yet, many skeptics publish claims to the contrary, and those claims often gain significant traction on Facebook.

Why can’t Facebook also apply the same rulings to climate change misinformation? 

Another consideration is workload: fact-checking everything that gets shared across The Social Network would require a massive amount of manpower.

Facebook somewhat touches on the work required in its announcement post:

"For example, during the month of March, we displayed warnings on about 40 million posts related to COVID-19 on Facebook, based on around 4,000 articles by our independent fact-checking partners. When people saw those warning labels, 95% of the time they did not go on to view the original content."

That's clearly effective, but it also indicates the human effort required. Reviewing 4,000 pieces of content, in various languages, takes time, and that's just one element of what's shared on the platform. With 1.66 billion daily active users, the sheer scale of content being shared is hard to fathom, and filtering it in any effective way is incredibly complex. Facebook could still do more, and maybe, with the additional fact-checking resources it's added as a result of COVID-19, it will. But that still leaves one critical element.

Facebook has also been heavily criticized over its decision to exempt political ads from fact-checks, preferring instead to let the candidates say what they like, then let the voters decide who’s being truthful and who’s not. 

As Zuckerberg explained:

"We don't fact-check political ads. We don't do this to help politicians, but because we think people should be able to see for themselves what politicians are saying. And if content is newsworthy, we also won't take it down even if it would otherwise conflict with many of our standards."

Given its strong action on COVID-19 misinformation, its hands-off approach to political ads makes even less sense. Clearly, based on its coronavirus response, Facebook has the capacity to do more, but it comes down to a question of will, and taking a stand based on what it considers to be the key source of truth.

Facebook agrees with the science in this instance, so it’s stamping out contrary reports. Does that then suggest that Facebook doesn’t agree with the science on other issues, or that it doesn’t agree with its fact-checking partners on misleading and/or false political claims?

The point here is that Facebook can act; it can take a harder line against misinformation. And it's right to do so on COVID-19, given the public health crisis at hand and the impact untrue reports can have. But Facebook now holds massive influence over news consumption, with more people getting news content from Facebook than from newspapers. Given that power, it's important that Facebook considers how it uses it in an informational sense, and the role it plays, whether it likes it or not, in facilitating broader understanding.

That’s not necessarily to say that Facebook’s political ad stance is wrong, but it does seem as though it’s playing down its influence in this respect.  
