Facebook has shared its latest Community Standards Enforcement Report, which outlines its policy enforcement actions in response to platform rule violations over the final three months of 2020.
In addition, Facebook has published a new overview of its advancing AI detection efforts, and of how its systems are getting better at detecting offending content before anyone even sees it.
First off, on the report – Facebook says that its proactive efforts to address hate speech and harassment have led to significant improvements in enforcement, with the prevalence of hate speech on the platform dropping to 7 to 8 views of hate speech for every 10,000 views of content (0.07% to 0.08%).
Which is a good result – but the problem in Facebook’s case is scale. 7 to 8 views for every 10,000 views of content is an excellent stat, but with 2.8 billion users, each of whom is viewing, say, 100 posts per day, the scope of exposure to hate speech is still significant. Still, Facebook’s systems are improving, which is a positive sign for its proactive efforts.
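To put that scale in rough numbers, here’s a back-of-the-envelope calculation. The 2.8 billion users and the 7 to 8 per 10,000 prevalence come from the report; the 100 content views per user per day is an assumption for illustration, not a reported figure.

```python
# Back-of-the-envelope scale check. Users and prevalence are from the
# report; the daily views-per-user figure is an assumption.

users = 2.8e9                  # monthly active users
views_per_user_per_day = 100   # assumed, not a reported figure
prevalence = 7.5 / 10_000      # midpoint of 7-8 views per 10,000

daily_hate_speech_views = users * views_per_user_per_day * prevalence
print(f"{daily_hate_speech_views:,.0f} hate speech views per day")
# -> roughly 210,000,000
```

Even at a prevalence below one-tenth of a percent, that works out to hundreds of millions of potential exposures per day.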
Facebook has also taken additional steps to ban dangerous groups, like QAnon, and it stepped up its enforcement efforts against hate speech in the wake of the Capitol riot last month.
Overall, Facebook says that the prevalence of all violating content has declined to 0.04%.
Facebook also says that its automated systems are getting much better at detecting incidents of bullying and harassment.
“In the final three months of 2020, we did better than ever before to proactively detect hate speech and bullying and harassment content – 97% of hate speech taken down from Facebook was spotted by our automated systems before any human flagged it, up from 94% in the previous quarter and 80.5% in late 2019.”
How, exactly, that’s measured is an important consideration – a violation that’s never detected at all, by either its systems or its users, can’t be included in the stats. But the point Facebook’s making is that it’s removing more potentially offensive content by evolving its systems based on improved training models.
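For reference, the metric Facebook quotes here is its “proactive rate”. A simplified sketch of the calculation, with illustrative numbers rather than Facebook’s raw counts:

```python
# Simplified sketch of a "proactive rate" like the 97% figure quoted
# above: the share of actioned content that automated systems found
# before any user reported it. Note the denominator is content that
# was actioned - undetected violations never enter the calculation.

def proactive_rate(found_by_systems: int, total_actioned: int) -> float:
    """Share of actioned content detected before any user report."""
    return found_by_systems / total_actioned

# Illustrative numbers only: 97 of every 100 removals flagged first
# by automation gives the quoted 97% proactive rate.
print(f"{proactive_rate(97, 100):.0%}")  # 97%
```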
Facebook took action on 6.3 million pieces of bullying and harassment content in Q4 last year.
The equivalent chart for bullying and harassment enforcement on Instagram follows a similar upward trajectory.
As noted, in order to advance its automated detection systems, Facebook has had to evolve the way in which it trains its AI models, enabling them to better account for variances in language use and the surrounding context.
“One example of this is the way our systems are now detecting violating content in the comments of posts. This has historically been a challenge for AI, because determining whether a comment violates our policies often depends on the context of the post it is replying to. “This is great news” can mean entirely different things when it’s left beneath posts announcing the birth of a child and the death of a loved one.”
Facebook says that these advancements have focused on establishing the surrounding context of each comment, by ensuring its systems can analyze not just the comment text itself, but also the images, language context, and other details contained within the post it’s responding to.
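As a rough illustration of the idea – a toy sketch, not Facebook’s actual system, which relies on trained multimodal models – a context-aware classifier scores the comment together with the post it replies to, so identical comment text can be judged differently:

```python
# Toy sketch of context-aware comment classification. The stand-in
# rule below substitutes for a learned model; the point it shows is
# that the comment is evaluated alongside its parent post, so the
# same words can land differently depending on context.

def classify_comment(post_text: str, comment_text: str) -> str:
    """Pair a comment with its parent post before scoring it."""
    # A celebratory remark under a post about a death reads very
    # differently than the same remark under a birth announcement.
    if "passed away" in post_text.lower() and "great news" in comment_text.lower():
        return "flag_for_review"
    return "allow"

print(classify_comment("Our baby girl arrived today!", "This is great news"))
# -> allow
print(classify_comment("Grandpa passed away last night.", "This is great news"))
# -> flag_for_review
```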
“The results of these efforts are apparent in the numbers released today – in the first three months of 2020, our systems spotted just 16% of the bullying and harassment content that we took action on before anyone reported it. By the end of the year, that number had increased to almost 49%, meaning millions of additional pieces of content were detected and removed for violating our policies before anyone reported it.”
These are huge advancements in data modeling, which could lead to major improvements in user protection. And what’s more, these systems are now also transferable across languages, enabling Facebook to accelerate the same detection efforts across all regions.
On other fronts, Instagram saw increased enforcement on posts containing firearms, suicide and self-injury content (a key area of focus for the platform), and violent and graphic content.
Again, these are significant advancements for Facebook, which is increasingly looking to take on more responsibility for the content that it hosts, and for how it facilitates the distribution of that content throughout its network. In addition, Facebook is also now experimenting with a reduction in political content in user feeds, which could further change its broader societal influence.
At the end of the day, despite being the largest network of connected people in history, Facebook is still learning how to best manage that, and ensure it minimizes harm. There’s much to debate about the impact of the platform in this respect, but these notes show that the platform is evolving its approach, and is seeing results from those efforts.
You can read Facebook’s full Q4 Community Standards Enforcement Report here.