Facebook has published the latest version of its Community Standards Enforcement Report, which details the policy and rule violations that Facebook has taken action on, across both Facebook and Instagram, over the preceding three months.
The report now covers 12 policy areas on Facebook, and 10 on Instagram, with this latest version providing more insight into Instagram content specifically.
As explained by Facebook:
“The report introduces Instagram data in four issue areas: Hate Speech, Adult Nudity and Sexual Activity, Violent and Graphic Content, and Bullying and Harassment. For the first time, we’re also sharing data on the number of appeals people make on content we’ve taken action against on Instagram, and the number of decisions we overturn either based on those appeals or when we identify the issue ourselves.”
I mean, those two graphs should probably be on the same chart to better compare the number of appeals against the number of restorations (it looks like Instagram restored around a quarter of the content it originally removed after appeal), but still, the insight is valuable, and helps to provide some context as to the level of activity that Facebook’s moderators are dealing with.
In terms of rising trends, on Facebook, there appears to have been an uptick in its enforcement of drug-related posts in recent months:
That partly relates to improved detection, but also to increased user activity.
The data also indicates that hate speech removals have increased – though Facebook definitively attributes this to improved detection technology, which, among other things, has also enabled it to detect hate speech in more languages.
Also worth noting – Facebook says that fake accounts still make up around 5% of its worldwide monthly active users, despite Facebook improving its detection and removal processes in recent months. That means that there are around 130 million fake profiles active on the platform.
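As a rough sanity check on that figure (note: the 2.6 billion monthly active user count is an assumption based on Facebook's publicly reported numbers from the period, not a figure stated in this article):

```python
# Back-of-the-envelope check: 5% of monthly active users as fake profiles.
# The MAU figure below is an assumption (Facebook reported roughly
# 2.6 billion MAU around Q1 2020); the 5% share comes from the report.
monthly_active_users = 2_600_000_000
fake_account_share = 0.05

fake_accounts = monthly_active_users * fake_account_share
print(f"~{fake_accounts / 1_000_000:.0f} million fake accounts")  # ~130 million
```

Which lines up with the roughly 130 million fake profiles cited above.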
Over on Instagram, Facebook has reported an increase in action against nudity and child exploitation related content, while it’s also seen a significant jump in the removal of terror-related posts.
Again, it is important to note that these charts represent actions taken, which can be a result of improved processes as much as an increase in relative activity. But still, that seems to be a big rise – could Instagram be becoming a new target for this type of material?
Instagram’s also improved its detection of suicide and self-injury content, which has seen it increase removals on this front, while bullying, a key area of focus for the platform, has remained steady from the previous quarter.
In addition to policy enforcement, Facebook has also reported that government requests for user data increased by 9.5% in the last six months of 2019, consistent with ongoing trends of governments seeking data insights from Facebook.
Government agencies and their subsidiaries are taking Facebook more seriously, with an evolving understanding of the value of Facebook’s data, for varying purposes. The lower trendline on the above chart indicates the proportion of these requests in response to which Facebook provided some level of data, which shows that Facebook has remained consistent in its approach. Though, inevitably, that does mean that more Facebook data is being provided, overall, in this respect.
The US continues to submit the largest number of requests, followed by India, the UK, Germany and France.
Overall, the trends likely suggest that Facebook’s systems are improving, and with those improved detection measures, it’s difficult to pinpoint which elements are seeing significant increases in activity, as opposed to Facebook simply getting better at finding them. Some of the upward trends are a concern, but ideally, Facebook’s removing more of this type of content – and the noted increase in self-harm content removals on Instagram, for example, is a positive sign.
You can read Facebook’s full Community Standards Report here, and the Government Requests Report here.