Facebook releases its Community Standards Enforcement Report
A shift is visible in how Facebook talks about data and information in the aftermath of the Cambridge Analytica data scandal and Mark Zuckerberg’s subsequent hearings before the US Congress. European regulation, particularly the imminent General Data Protection Regulation (GDPR), has also made a big impact.
Facebook is now on its toes, and with its recent executive reorganisation, more changes are expected from the social media giant.
Facebook published the internal guidelines on how it enforces its Community Standards, which essentially list what is and is not acceptable on the platform. On May 15, 2018, in a first, the company released a Community Standards Enforcement Report covering the period from October 2017 through March 2018, showing the scale of spam, fake accounts, adult nudity and sexual activity, graphic violence, hate speech, and terrorist propaganda on the platform.
Facebook took down 837 million pieces of spam in Q1 2018, saying it found and flagged nearly 100 percent of that content before anyone reported it. The company also took down 21 million pieces of adult nudity and sexual activity in Q1 2018, claiming that 96 percent of it was found and flagged by its technology. For graphic violence, 86 percent of the content Facebook acted on was identified by its systems before being reported; the company took down or applied warning labels to about 3.5 million pieces of violent content. The report concedes that this technology still does not work well for hate speech, which needs review by human teams: although the company removed 2.5 million pieces of hate speech in Q1 2018, only 38 percent of it was flagged by its technology.
The report also aims to answer questions Facebook frequently receives about the enforcement of each Community Standard: how prevalent are violations on Facebook, how much content does the company take action on, how much violating content does it find before users report it, and how quickly does it act on violations? The report addresses each of these questions individually.
Guy Rosen, VP of Product Management, explains in a Facebook blog post why the company is making this data available: “We believe that increased transparency tends to lead to increased accountability and responsibility over time, and publishing this information will push us to improve more quickly too. This is the same data we use to measure our progress internally – and you can now see it to judge our progress for yourselves. We look forward to your feedback.”
While Facebook builds out AI and other technology to fight spam, fake news, and other abuse more effectively, making this report and its numbers public allows the company to demonstrate the responsibility and accountability it had been accused of lacking until the Cambridge Analytica scandal.