Back in June, Twitter added a new pop-up alert that appears when users attempt to retweet an article without first opening the link and reading the post.
Now, a full three months into the test, Twitter has shared some new insight into the effectiveness of the prompt, and how user behavior changes among those shown the alert.
According to Twitter:
- People open articles 40% more often after seeing the prompt
- People opening articles before retweeting increased by 33%
- Some people didn’t end up retweeting after opening the article – “which is fine – some Tweets are best left in drafts”
Those are some impressive numbers, underlining the value of simple prompts like this in getting users to think twice about what they're sharing through their social media activity.
Adding any level of share friction seems to have some effect. Back in 2016, Facebook added similar pop-ups on posts that had been disputed by third-party fact-checkers, prompting users to reconsider before hitting 'Share'.
Analysis conducted by MIT found that these labels reduced people's propensity to share misinformation by around 13%. Facebook has since also added prompts when users attempt to share a link that's more than 90 days old, reducing the spread of outdated content.
It seems that simple nudges like this can have a real impact. And while free speech advocates have criticized such labels as overly intrusive, if the net effect is less blind sharing and more reading and research into topics, then that's surely a good thing for online discourse.
Given the success of the new prompts, Twitter is now working to bring them to all users globally (they're currently only available on Android), and it's also looking to make the alerts smaller after their initial display to each user.
And clearly, the impacts could be significant. While the above figures may not hold in a broader rollout, the numbers do show that the prompts are at least somewhat effective, helping to reduce ill-informed sharing and the distribution of misinformation.