Facebook Outlines New System for Detecting Fake Accounts and Misinformation Based on Interactions

Facebook has published an outline of a new system it's developed to improve its detection of fake accounts and misinformation, adding to its existing approaches and refining its overall effort to eliminate bad actors from its platforms.

The documentation is fairly technical, so in this post I'll provide a simplified overview of Facebook's new Temporal Interaction EmbeddingS (TIES) research, which it says will deliver incremental improvements to its detection process, and could have a significant impact over time.

Traditionally, Facebook says, approaches to detecting fake accounts and other malicious activity have relied on what it calls ‘static’ behavior, which is not necessarily reflective of how people actually interact:

“Entities on the platform, such as accounts, posts, pages, and groups, are not static. They interact with one another over time, which can reveal a lot about their nature. For instance, fake accounts and misinformation posts elicit different types of reactions from other accounts than do normal/benign accounts and posts.”

Could that be true? Is there a distinct difference in the way that people interact with fake accounts or misinformation, which could help to identify them, even without user reports?

That’s the basis of Facebook’s TIES system – by monitoring all of the various activities and interactions with each post, TIES is able to highlight common behaviors that are linked to inauthentic entities. 

“Entities on social media (accounts, posts, stories, Groups, Pages, etc.) generate numerous interactions from other entities over time. For instance, posts get likes, shares, comments, etc. by users, or accounts send or reject friend requests, send or block messages, etc. from other accounts.”

At scale, Facebook’s new system is better able to match those behaviors against clear signals, which can then be used to flag entities for further examination.
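
To make the idea a little more concrete, here's a minimal, hypothetical sketch of how a temporal sequence of interactions on a post might be embedded and scored by a sequence model. This is my own illustration in PyTorch, not Facebook's actual code, and every name in it (the interaction vocabulary, InteractionSequenceClassifier, etc.) is an assumption for demonstration purposes only.

```python
# Illustrative sketch only - not Facebook's implementation. It shows the general
# idea behind temporal interaction embeddings: each interaction an entity
# receives (like, share, comment, report, block, etc.) is mapped to a learned
# vector, the sequence is fed through a recurrent encoder, and the final state
# is scored as "benign" vs. "suspicious". All names here are hypothetical.
import torch
import torch.nn as nn

# A toy vocabulary of interaction types that a post or account might receive.
INTERACTIONS = {"like": 0, "share": 1, "comment": 2, "report": 3, "block": 4}

class InteractionSequenceClassifier(nn.Module):
    def __init__(self, num_interaction_types: int, embed_dim: int = 16, hidden_dim: int = 32):
        super().__init__()
        self.embed = nn.Embedding(num_interaction_types, embed_dim)      # interaction -> vector
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)   # temporal model
        self.classifier = nn.Linear(hidden_dim, 1)                       # hidden state -> suspicion score

    def forward(self, interaction_ids: torch.Tensor) -> torch.Tensor:
        # interaction_ids: (batch, sequence_length) of interaction-type indices
        embedded = self.embed(interaction_ids)
        _, last_hidden = self.encoder(embedded)
        return torch.sigmoid(self.classifier(last_hidden[-1]))  # probability of being suspicious

# Example: score one post that received a burst of shares, reports and blocks.
model = InteractionSequenceClassifier(num_interaction_types=len(INTERACTIONS))
sequence = torch.tensor([[INTERACTIONS[i] for i in ["share", "share", "report", "block", "report"]]])
print(model(sequence))  # untrained, so the score is meaningless until the model is fit
```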

In this image of overall interactions, for example, Facebook says that the yellow dots are indicative of malicious activity. As you can see, the interactions with these posts are significantly different from the regular (purple) user actions.

“The manner in which fake accounts behave is different from normal accounts. Hateful posts generate different type of engagements compared to regular posts. Not only the type but also the target of these engagements can be informative. For instance, an account with history of spreading hate or misinformation sharing or engaging positively with a post can be indicative of a piece of questionable content.”

Again, the documentation is very complex, and I may be oversimplifying the system, but the basis is this: Facebook's TIES model, trained on 2.5M accounts (with an 80/20 real/fake split) and 130K posts (roughly 10% of which are labeled as misinformation), has been able to accurately identify bad actors within Facebook's graph by measuring the full range of interactions and fluid engagements with each post. On a small scale, you wouldn't notice any real difference in how users engage with these entities, but with a larger scope, definitive patterns emerge. And that will enable Facebook to improve its detection and enforcement efforts.
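
For a rough sense of what training such a model could look like with the kind of class balance the paper describes (around 80/20 real/fake for accounts), here's another purely illustrative sketch, building on the toy model above. It uses random stand-in data and is not the TIES training pipeline.

```python
# Illustrative training loop for the sketch model above - not the TIES pipeline.
# Labels are 1 for a fake/misinformation entity and 0 for a benign one, with a
# skewed class balance similar to the one described (~20% positive for accounts).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Toy data: 1,000 random interaction sequences of length 20, ~20% labeled fake.
sequences = torch.randint(0, len(INTERACTIONS), (1000, 20))
labels = (torch.rand(1000) < 0.2).float()
loader = DataLoader(TensorDataset(sequences, labels), batch_size=64, shuffle=True)

model = InteractionSequenceClassifier(num_interaction_types=len(INTERACTIONS))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()  # in practice, the class imbalance would call for reweighting or resampling

for epoch in range(3):
    for batch_sequences, batch_labels in loader:
        optimizer.zero_grad()
        scores = model(batch_sequences).squeeze(1)  # (batch,) probabilities
        loss = loss_fn(scores, batch_labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```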

Which is obviously a big deal. Facebook has come under significant pressure in recent months to do more to tackle misinformation, with the platform hosting a range of COVID-19 conspiracy theory groups, as well as political extremists driven by unfounded theories about the societal establishment.

Facebook has taken action on these, and continues to work to address their spread, but still, with 2.7 billion users, the sheer scale of the task makes it almost impossible to stamp them out completely.

And it’s only going to get worse – amid the COVID-19 lockdowns, various local publishers have been forced to shut down due to a lack of revenue options. That will likely see even more people turning to Facebook for news updates – where its algorithm prioritizes engagement, which often sees salacious, divisive posts gain more traction than those grounded in less-enticing facts.

In terms of fake accounts, back in April, Facebook reported that while it had significantly improved its fake account detection efforts, around 5% of its user base is still made up of fake profiles. Which doesn’t sound so bad – but at Facebook’s scale, 5% equates to more than 135 million active fake profiles on the platform at present.

Facebook notes this in its TIES documentation, explaining that even though TIES may only deliver fractional improvements, those gains matter at scale:

“It should also be noted that, at the scale of Facebook, even a couple of percentage points improvement in recall for the same precision translates into significant number of additional fake accounts being caught.”
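
To put some purely hypothetical numbers on that statement (the recall figures below are my own assumptions, not Facebook's):

```python
# Hypothetical back-of-the-envelope figures only - not from Facebook's paper.
fake_accounts = 135_000_000               # roughly 5% of 2.7 billion users
recall_before, recall_after = 0.90, 0.92  # an assumed two-point recall improvement

additional_caught = fake_accounts * (recall_after - recall_before)
print(f"{additional_caught:,.0f} additional fake accounts caught")  # ~2,700,000
```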

It’s hard to get your head around the full scale of Facebook, and of changes like this, within this context, but essentially, the TIES system will lead to incremental improvements on several fronts, which will help Facebook further advance its detection systems.

And that will lead to further improvements that could have much greater impacts. Facebook still has more work to do, and it’s going to come under increasing pressure over its facilitation of hate speech and related groups which are contributing to societal division.

As such, any improvement can only be a positive, and the TIES system may be another stepping stone for it to build upon in this respect. 

You can read the full TIES documentation here.

