Twitter expands its hateful conduct policy based on religion
Last year, Twitter asked for feedback to ensure that it considers a wide range of perspectives and received more than 8,000 responses from people located in more than 30 countries.
Social media company Twitter said it is expanding its rules against hateful conduct to cover language that dehumanises others on the basis of religion.
In a blog post, the company wrote that tweets of this kind will be taken down from the site when they are reported to the company.
Twitter also clarified that tweets posted before today that break this new rule will need to be deleted, but will not directly result in any account suspensions because they were posted before the rule was set.
The social media giant has been following up on this by setting rules to keep people safe on the platform, and those rules have continuously evolved to reflect the realities of the world it operates in.
"Our primary focus is on addressing the risks of offline harm, and research shows that dehumanising language increases that risk. As a result, after months of conversations and feedback from the public, external experts and our own teams, we’re expanding our rules against hateful conduct to include language that dehumanises others on the basis of religion," the blog post added.
Why start with religious groups?
Twitter says that it asked for feedback last year to ensure that it considers a wide range of perspectives and hears directly from users across different communities and cultures around the globe.
"In two weeks, we received more than 8,000 responses from people located in more than 30 countries," said Twitter.
Some of the most consistent feedback it received included:
Clearer language — Across languages, people believed the proposed change could be improved by providing more details, examples of violations, and explanations for when and how context is considered.
"We incorporated this feedback when refining this rule, and also made sure that we provided additional detail and clarity across all our rules," said the company.
Narrow down what’s considered — Respondents said that “identifiable groups” was too broad, and they should be allowed to engage with political groups, hate groups, and other non-marginalised groups with this type of language. Many people wanted to “call out hate groups in any way, any time, without fear.” In other instances, people wanted to be able to refer to fans, friends and followers in endearing terms, such as “kittens” and “monsters”.
Consistent enforcement — Many people raised concerns about the company's ability to enforce its rules fairly and consistently, so Twitter says it has "developed a longer, more in-depth training process with our teams to make sure they were better informed when reviewing reports".
Further, the company said that it will look to expand the scope of this change, and will share updates on how it addresses this within its rules.
Twitter has been updating its rules for some time now to keep the platform free from hate and misinformation, among other things.
In a recent conversation at TED2019, Jack Dorsey, Founder and CEO of Twitter, spoke about making Twitter free of abuse and spam. Stating that the health of the conversation is what worries him most about the platform, he said,
"Our purpose is to serve the public conversation, and we have seen a number of attacks on it. We’ve seen abuse, we’ve seen harassment, we’ve seen manipulation, automatic and human coordination, misinformation… What worries me most is our ability to address it in a systemic way that is scalable.”
In its Q4 update and letter to shareholders, Twitter said,
“Our focus on improving the health of the public conversation on Twitter delivered promising results in 2018, with a 16 percent year-over-year decrease in abuse reports from people who had an interaction with their alleged abuser on Twitter, and enforcement on reported content that was 3X more effective.”
In addition to this, Twitter also revealed the following in its current blog post:
- 38 percent of the abusive content that’s enforced on Twitter at present is surfaced proactively to its teams for review instead of relying on reports from people on Twitter.
- 100,000 accounts were suspended for creating new accounts after a suspension during January-March 2019. This is a 45 percent increase from the same time last year.
- 60 percent faster response to appeals requests with its new in-app appeal process.
- Three times more abusive accounts suspended within 24 hours after a report compared to the same time last year.
- 2.5 times more private information removed with a new, easier reporting process.
Previously, Twitter only reviewed potentially abusive tweets if they were reported to the platform. Then, earlier this year, the company made it a priority to take a proactive approach to abuse in addition to relying on people’s reports.
(Edited by Saheli Sen Gupta)