Twitter’s Working on a New ‘Safety Mode’ to Limit the Impact of On-Platform Abuse

Amongst the various announcements in its Analyst Day presentation today, including subscription tools and on-platform communities, Twitter also outlined its work on a new anti-troll feature, which it’s calling ‘Safety Mode’.

As Twitter has previewed, the new process would alert users when their tweets are getting negative attention. Tap through on that notification and you’ll be taken to the ‘Safety Mode’ control panel, where you can choose to activate ‘auto-block and mute’, which, as it sounds, will automatically stop any accounts sending abusive or rude replies from engaging with you for one week.

But you won’t have to activate the auto-block function. Below the auto-block toggle, users will also be able to review the accounts and replies Twitter’s system has identified as potentially harmful, then block them as they see fit. So if your on-platform connections have a habit of mocking your comments, and Twitter’s system incorrectly flags that as abuse, you won’t have to block them, unless you choose to leave auto-block switched on.
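To make that flow a little more concrete, here’s a minimal, purely illustrative sketch of how an auto-block/mute decision like this could work. It does not reflect Twitter’s actual implementation; the toxicity scorer, the 0.8 threshold and the seven-day window are all assumptions made for the example.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Hypothetical values - not Twitter's actual threshold or block duration.
TOXICITY_THRESHOLD = 0.8
BLOCK_DURATION = timedelta(days=7)


@dataclass
class Reply:
    author: str
    text: str


@dataclass
class SafetyMode:
    auto_block_enabled: bool = True
    blocked_until: dict = field(default_factory=dict)   # author -> datetime when block expires
    review_queue: list = field(default_factory=list)    # replies flagged for manual review

    def score_toxicity(self, text: str) -> float:
        """Stand-in for a real language model; flags a few obviously abusive words."""
        abusive_words = {"idiot", "stupid", "trash"}
        words = text.lower().split()
        hits = sum(1 for w in words if w in abusive_words)
        return min(1.0, hits / max(len(words), 1) * 5)

    def handle_reply(self, reply: Reply, now: datetime) -> str:
        score = self.score_toxicity(reply.text)
        if score < TOXICITY_THRESHOLD:
            return "allowed"
        if self.auto_block_enabled:
            # Temporarily stop the author from engaging for one week.
            self.blocked_until[reply.author] = now + BLOCK_DURATION
            return "auto-blocked"
        # Otherwise, surface the reply for the user to review and block manually.
        self.review_queue.append(reply)
        return "queued for review"


if __name__ == "__main__":
    mode = SafetyMode(auto_block_enabled=True)
    now = datetime.now()
    print(mode.handle_reply(Reply("@friend", "Great thread, thanks!"), now))   # allowed
    print(mode.handle_reply(Reply("@troll", "you absolute idiot"), now))       # auto-blocked
    print(mode.blocked_until)  # @troll blocked until roughly a week from now
```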

It could be a good option, though a lot depends on how good Twitter’s automated detection process is. 

Twitter would be looking to utilize the same system it’s testing for its new prompts (on iOS) that alert users to potentially offensive language within their tweets.

[Image: Twitter’s offensive tweet warning prompt]

Twitter’s been testing that option for almost a year, and the language modeling it’s developed for that process would give it a solid base to build on for this new Safety Mode system.

If Twitter can reliably detect abuse, and stop people from ever having to see it, that could be a good thing, and it could also disincentivize trolls who make such remarks in order to provoke a response. If the risk is that their clever replies could get automatically blocked, and, as Twitter notes, be seen by fewer people as a result, that could make people more cautious about what they say. Some will see that as an intrusion on free speech, or a violation of some amendment or other, but it’s really not.

If it helps people who are experiencing trolls and abuse, there’s definitely merit to the test.

Twitter hasn’t provided any specific detail on the feature, or on where it sits in the development cycle. But it looks likely to get a live test soon, and it’ll be interesting to see what sort of response Twitter sees once the option is made available to users.
