Facebook Updates Report Moderation Systems to Ensure Worst Case Content is Dealt With First

Facebook has updated its content moderation queue system, a change that should significantly improve how quickly the worst-case reports are addressed, and slow the spread of harmful content.

The new approach uses improved machine learning to categorize and rank reported content, as explained by The Verge:

“In the past, [Facebook’s] moderators reviewed posts more or less chronologically, dealing with them in the order they were reported. Now, Facebook says it wants to make sure the most important posts are seen first, and is using machine learning to help. In the future, an amalgam of various machine learning algorithms will be used to sort this queue, prioritizing posts based on three criteria: their virality, their severity, and the likelihood they’re breaking the rules.”

The process will ensure that Facebook’s team of human moderators is guided to the worst-case reports first, based on automated detection, optimizing their workload and limiting the spread of the most harmful content.
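
To illustrate the idea in rough terms, here's a minimal sketch in Python. It isn't Facebook's actual system; the scoring inputs and weights are assumptions, made up purely to show how reports could be ranked by virality, severity and violation likelihood rather than by the order in which they arrive.

```python
# Illustrative sketch only - not Facebook's published ranking model.
# Assumes each report arrives with three normalized (0..1) signals matching
# the criteria The Verge describes: virality, severity, violation likelihood.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class QueuedReport:
    priority: float                       # lower value = reviewed sooner (negated score)
    post_id: str = field(compare=False)   # excluded from ordering comparisons

def score_report(virality: float, severity: float, violation_likelihood: float) -> float:
    """Combine the three signals into one priority score.
    The weights here are invented for illustration."""
    return 0.3 * virality + 0.5 * severity + 0.2 * violation_likelihood

def build_queue(reports):
    """reports: iterable of (post_id, virality, severity, violation_likelihood)."""
    queue = []
    for post_id, v, s, p in reports:
        # heapq is a min-heap, so negate the score to pop the worst cases first
        heapq.heappush(queue, QueuedReport(-score_report(v, s, p), post_id))
    return queue

# A highly viral, severe, likely-violating post jumps ahead of a milder one,
# regardless of which was reported first.
queue = build_queue([
    ("mild_post", 0.1, 0.2, 0.3),
    ("severe_viral_post", 0.9, 0.8, 0.95),
])
print(heapq.heappop(queue).post_id)  # -> "severe_viral_post"
```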

That’s obviously not going to be perfect. No automated system will rank reports in the right order with 100% accuracy, which could leave some of the more concerning cases active for longer than others. But that wouldn’t be much worse than the current situation, and with Facebook factoring in ‘virality’, which you would assume considers a post’s potential reach based on the posting user’s following, history, and so on, the change could lead to significant improvements.

Facebook has come under significant pressure, in various instances, over its slow response time in addressing potentially harmful content.

Back in May, a ‘Plandemic’ conspiracy-theory video racked up almost 2 million views on Facebook before the company removed it, while in July, Facebook admitted that it “took longer than it should have” to remove another conspiracy-laden video related to COVID-19, which reached 20 million views before Facebook took action.

With these new measures in place, Facebook would likely have given the removal of such content more priority, given the potential for widespread exposure via high-reach Pages and people. Prioritizing by ‘severity’ could also have significant benefits in addressing the worst kinds of violations posted to the network.

Facebook’s automated systems have certainly been improving in this respect. In its most recent Community Standards Enforcement Report, Facebook says that 99.5% of its actions on violent and graphic content were taken before the content was reported by users.

[Chart: Facebook content violations, via Facebook’s Community Standards Enforcement Report]

Now, those same detection systems will be used to categorize all moderation reports, and as Facebook’s systems continue to improve, that could significantly reduce the impact of concerning material in the app.

In some ways, it seems like Facebook should have always had some form of prioritization like this in place, but it’s possible that its systems simply weren’t capable of filtering reports to this level until now. Either way, it can now improve its processes, and that could have major benefits for user safety.
