Breaking News
Twitter to Warn Users if They Tweet Reply with “Offensive or Hurtful Language”
Social networking platform Twitter will test prompting users when their tweet replies contain “offensive or hurtful language.”
When users hit “send” on a reply, they will be notified if its wording is similar to language in posts that have been reported, and asked whether they want to revise it before posting.
“When things get heated, you may say things you don't mean. To let you rethink a reply, we’re running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it’s published if it uses language that could be harmful,” Twitter Support (@TwitterSupport) said in a tweet on May 5, 2020.
“We're trying to encourage people to rethink their behavior and rethink their language before posting because they often are in the heat of the moment and they might say something they regret,” said Sunita Saligram, Twitter's global head of site policy for trust and safety.
The Verge reported that it is not clear how Twitter determines which language counts as harmful. However, the company does publish “hate speech policies and a broader Twitter Rules document,” which “outlines its stances on everything from threats of violence and terrorism-related content to abuse and harassment.”
The platform’s policies do not permit users to target others with slurs, racist or sexist tropes, or degrading content.
According to the company, the experiment will start on Tuesday and last at least a few weeks.