France Announces Tough New Content Removal Requirements for Social Platforms

Social platforms could be facing a significant challenge in France, with French legislators today approving a new law that will require rapid responses from moderation teams to highly offensive content posted in their apps.

As reported by Reuters:

“Social networks and other online content providers will have to remove paedophile and terrorism-related content from their platforms within the hour or face a fine of up to 4% of their global revenue under a French law voted in on Wednesday.”

Such regulation has been under discussion for the last few years, with authorities keen to pressure social networks into taking more action on these fronts.

But it’s a tough ask. As a measure of scale, Facebook has some 30 million users in France, which would likely equate to billions of posts and uploads every day. Even if the platforms were able to action ‘most’ content violations within an hour, establishing a definitive response threshold, and subsequent penalties, raises the stakes considerably, and could even force some smaller platforms, with less moderation capacity, to re-think their operations in the European nation entirely.

The new law is the result of a year-long investigation by French regulators into Facebook’s various moderation processes, through which Facebook provided unprecedented access to its internal systems and functions. Given the various social media-borne controversies around the world, French President Emmanuel Macron has been seeking a new way to regulate tech platforms, as a departure from current models. That effort eventually led to the introduction of this new law.

But as noted, it may not be workable.

While Facebook’s machine learning systems for content identification have evolved, and are now much more capable of detecting and blocking certain types of posts before anyone even sees them, they’re not perfect. And with its human moderation teams only able to cover so much, there’s almost no doubt that some material will still slip past that one-hour threshold.

The shortcomings of Facebook’s machine learning systems in this respect were underlined in an investigation by Vice in 2018, which noted that:

“While Facebook’s AI has been very successful at identifying spam and nudity, the nuance of human language, cultural contexts, and widespread disagreements about what constitute “hate speech” make it a far more challenging problem. Facebook’s AI detects just 38 percent of the hate speech-related posts it ultimately removes, and at the moment it doesn’t have enough training data for the AI to be very effective outside of English and Portuguese.”

Facebook’s systems have continued to improve since then, but providing a concrete guarantee that all posts which fall under this category will be taken down within an hour seems implausible, given these limitations.

At the same time, social platforms have significantly improved their response efforts on such content.

Back in 2016, Microsoft, Twitter, Facebook and YouTube all signed a code of conduct with the European Union in which they committed to reviewing “most” complaints within a 24-hour time frame. EU officials were pleased with the progress made on this front, but the key difference here, aside from the much longer response window, is that there was no penalty, as such, for companies failing to meet those commitments. The risk, if they failed, was that they would face tougher regulation.

Now, it seems, they’re going to face tougher rules anyway. 

Of course, the prevailing view is likely that they can afford it: social networks are generating billions of dollars in ad revenue, so they can hire more moderators, or invest in system improvements, to meet such requirements. But as noted, even with endless resources, it still might not be possible to catch every incident within such a short response time.

It adds to the mounting list of regulatory challenges faced by tech platforms. Last month, the Australian Government announced plans for new laws that will force Facebook and Google to share revenue with publishers, an initiative that will likely not deliver the outcome regulators are seeking to achieve.

The new French law will also likely fall short of its intended goal. While the impetus for such a ruling makes sense, in practical terms, if it is enforced, it may only push tech platforms toward less beneficial workarounds in order to avoid financial losses.
