Investigation Finds Facebook Has Removed Fact-Check Markers on Some Climate Change Denial Posts

While Facebook continues to step up its efforts to remove COVID-19 misinformation, various reports in recent weeks have suggested that it's doing far less to halt the spread of other types of misinformation across its network, raising questions about the role Facebook plays in that spread, and what, exactly, its motivations are for taking a more 'hands-off' approach in certain contexts.

The latest report comes from Popular Information, which has today published an account of one specific piece of climate change denial content which, despite being rated 'partly false' by Facebook's fact-checking partners, and labeled as such in line with Facebook's policies, eventually had that tag removed, seemingly after intervention by Facebook management.

As per Popular Information:

“The article, authored by Michael Shellenberger and published on The Daily Wire, uses 12 “facts” to argue that concern about climate change is overblown. […] But then, without explanation, the fact-check was removed. If a Facebook user attempts to share the article today, there is no warning and no link to the fact-check. Shellenberger’s piece, on The Daily Wire and elsewhere, has now been shared over 65,000 times on Facebook.”

In their investigation, the PI team found that top Facebook executives – including Nick Clegg, its VP of Global Affairs and Communications, Campbell Brown, Facebook's VP of Global News Partnerships, and Joel Kaplan, The Social Network's VP of Global Public Policy – were specifically consulted on the fact-check ruling, which eventually led to the removal of the marker. That's notable because Facebook's own research has shown such markers to be an effective tool in limiting the spread of misinformation online.

The report additionally notes that Facebook “was asked by the office of Congressman Mike Johnson (R-LA), a powerful member of Republican leadership, to reverse the fact-check”.

It's not entirely clear what happened in this case, but it does seem that Facebook, at the behest of a political representative, may have chosen to remove a fact-check marker from a climate denial post, despite that post being rated 'partly false' by professional fact-checking partners.

This is the second significant incident of this type, around this exact topic, for Facebook in recent months. Last month, reports emerged that Facebook was allowing some climate change denial content to remain on its platform by letting its staffers deem such discussion "opinion", rendering it ineligible for fact-checking.

Indeed, various climate change denial posts, groups and Pages are active, and see high engagement across The Social Network – which, given that social media platforms now outpace print newspapers as a news source for Americans, seems like a significant concern.

Of course, there is still some debate over the severity of climate change and its impacts, hence the "opinion" loophole. But when Facebook's own fact-checkers flag a piece of content, that seems like exactly the time for Facebook to uphold their ruling.

Again, Facebook is working very hard to stamp out COVID-19 misinformation, so why not also act on climate change falsehoods, which are equally contradicted by the science? And beyond that, why not fact-check ads from politicians, the even bigger elephant in the room?

There are various theories as to why Facebook may not want to push as hard on certain issues, one being that Facebook, quite obviously, benefits from such discussion.

As noted by Bill McKibben in The New Yorker recently:

“Why is it so hard to get Facebook to do anything about the hate and deception that fill its pages, even when it’s clear that they are helping to destroy democracy? And why, of all things, did the company recently decide to exempt a climate-denial post from its fact-checking process? The answer is clear: Facebook’s core business is to get as many people as possible to spend as many hours as possible on its site, so that it can sell those people’s attention to advertisers.”

Many have shared the same view: that Facebook ultimately benefits from such engagement, with divisive, argumentative content like this prompting emotional responses. Emotional reaction is key to viral sharing dynamics, so in many respects, it's actually in Facebook's interest to allow such content to live on its platform.

That, potentially, could be one of the reasons why Facebook has been so keen to push the usage of groups in recent years. If people are sharing such content to their public feeds, that prompts scrutiny, but sharing the same in private groups gives Facebook all the engagement benefits, without the associated criticism. 

Facebook’s algorithm is, in fact, built around prompting engagement, whatever that engagement may be. So again, Facebook’s system is designed to amplify content that sparks debate, and keeps users commenting – and as such, it’s clearly in Facebook’s interest, on some level, to allow such debate to happen, and be hosted on its sites.

You could also argue that this same process has changed the way such issues are reported more generally – because Facebook's system incentivizes debate, publishers are in turn incentivized to produce more partisan, biased headlines in order to maximize their reach across the network. That, on balance, could be one of the biggest factors in amplifying division within modern society. The advent of sharing algorithms that predicate amplification on comments and shares has altered the motivations of online publishers, pushing them to steer their readers toward one side of a given debate through increasingly partisan reporting.

Either way you look at it, Facebook does benefit from division. So what can be done? What should regulators and officials do to limit the impact of platform algorithms and reduce bias – if, indeed, anything can be done to curb such dynamics online?

The question is increasingly challenging, as Facebook continues to grow (now closing in on 3 billion users), and more users come to rely on its apps to stay informed. The recent closures of smaller, regional publications as a result of the COVID-19 lockdowns will only amplify this, and if Facebook can additionally be persuaded, by whatever means, to remove measures like fact-checks, it seems this is a reality we'll have to deal with.

But such decisions are significant – if Facebook can simply pick and choose when it applies labels like fact-checks, then arguably it shouldn't be allowed to hold such influence in the first place.

That’s more a question for regulatory bodies to address, but if you were looking to establish why we feel more divided, and why anti-science movements are gaining more traction than ever, this may be where to start. 
