Twitter Has Updated its Warning Prompts on Potentially Harmful Tweet Replies

After testing them out over the last few months, Twitter has today announced an update to its warning prompts on tweet replies that it detects may contain offensive language.

As explained by Twitter:

“If someone in the experiment Tweets a reply, our technology scans the text for language we’ve determined may be harmful and may consider how the accounts have interacted previously.”

After its initial trials, Twitter has now improved its detection methods for potentially problematic replies, and added more detail to its explanations, which could help users better understand the language they’re using, and maybe reduce instances of unintended offense.

Of course, some people see this as overstepping the mark – that Twitter is trying to control what you say and how you say it, infringing on free speech. But it’s really not – the prompts, triggered by language flagged in previously reported replies, simply aim to reduce misinterpretation and offense by asking users to reconsider before posting.

If you’re happy with your tweet, you can still reply as normal. Instagram uses a similar system for its comments.

As Twitter notes, the new process is being tested with selected users on Android, iOS, and the Twitter website.
