How Twitter plans to fight abuse and hate on its platform

Twitter India MD, Manish Maheswari, says the company is pulling out all stops to make the micro-blogging platform a safe space, free of hate and abuse.

Friday, December 11, 2020

4 min read

Micro-blogging site Twitter, for all its resourcefulness, can become a pretty toxic place from time to time; such is the nature of the beast it rides on: the internet. The company is aware of this problem and wants to clean up its house by weeding out abusive and hateful content with a vengeance.

According to Twitter India Managing Director (MD), Manish Maheswari, the company has been proactively taking action against abusive behaviour in a ‘predictive way’ rather than just a ‘reactive way’. 

Over the years, Twitter has typically relied on its users to flag abusive or hateful content before taking action; the obvious problem with that is that, with 187 million monetisable daily active users, things are bound to slip through the cracks. To be sure, Twitter has often publicly acknowledged the issue of abuse, highlighting that it is working towards making the platform a safe space, free of hate and abuse.

Manish tells YourStory Media Founder and CEO Shradha Sharma that in this endeavour, the company has been investing in machine learning (ML) and other predictive tools to call out and block abusive behaviour on its platform.

“I'm happy to report that now 50 per cent of what is getting actioned particularly, in the sense that even before getting reported by a person, the system identifies it and acts on it [sic]. So that’s a pretty large number, and this was very low earlier,” he says.

In April last year, the company had said that it was able to weed out 38 per cent of abusive content “proactively”, without relying on users to report it. The year before, the number stood at 20 per cent. So the company has indeed been making steady progress in using technology to flag tweets that fall in the category of “abusive behaviour, hateful conduct, encouraging self-harm, and threats, including those that may be violent.”

Ensuring transparency and quick reaction 

As it wages war on abusive and inappropriate behaviour, the San Francisco-headquartered startup led by Jack Dorsey says that it remains committed to being a fully neutral and transparent platform.

“If you go on the ‘Twitter Transparency Center’, it gives you all the statistics about what action we have taken, against whom, and why [sic]. But it's an ever-evolving field, so some action we have to take reactively as well,” says Manish.
Twitter India MD Manish Maheswari

The micro-blogging site is committed to being a fully neutral and transparent platform, says the Twitter India MD.

To ensure that the reaction time to users flagging content is cut to a minimum, Twitter is also investing in significantly increasing the number of ‘agents’ who react to reports.

“So if something gets reported, we now have a larger capacity of agents which are globally distributed, you also have them in India,” Manish shares.

Last year, social media giant Facebook had announced that it was open-sourcing two algorithms it uses to identify abusive or harmful content. The algorithms were released on GitHub, a Microsoft subsidiary that provides hosting for software development and version control using Git (a free and open source distributed version control system designed to handle everything from small to very large projects). Facebook had said that it hoped the decision to add the algorithms to a shared database would help developers and other companies curb online violence.

Big country, big problems

According to market and consumer data platform Statista, India has the third-largest audience reach for Twitter, at 18.9 million users, after the US (68.7 million) and Japan (51.9 million). Other than its sheer size, India is also unique because of the vast number of languages spoken there. Unsurprisingly, non-English tweets nearly equal the number of English-language tweets in India.

Thus, Twitter is also upping its “language capacity” to ensure that non-English abusive tweets do not slip through its technology net, meant to filter out hateful content.

“I'm also more confident that things are getting better because we are doing a multi-intervention, not just on the policy side, but also improving our product to ensure that the right behaviour is encouraged and sort of the bad actors are discouraged,” says Manish.

Watch the full conversation here:

To separate the wheat from the chaff, the company actively works towards highlighting authoritative sources of content.

“We want to ensure that everyone has the freedom to express themselves. At the same time, we know that some sources are more credible and authoritative, so if you are from a credible source, your information is sort of prominently highlighted, so that people get to gather information,” Manish explains.