Still reeling in the aftermath of the Cambridge Analytica data harvesting revelations, Facebook is clearly acting fast to shore up dwindling user trust by spelling out its policies loud and clear. After clarifying how it collects user data through third-party apps even when one is not logged into Facebook, it has now updated its community standards guidelines to further elaborate on what kind of posts users are not allowed to publish on the site.
While the broad rules and categories on the page remain the same, the company has fleshed them out further to help its content moderators weed out objectionable material, adding anecdotal and situational explanations in certain cases to spell out as explicitly as possible what kind of posts it will take down, and on what grounds.
"These standards will continue to evolve as our community continues to grow. Now everybody out there can see how we're instructing these reviewers," said Monika Bickert, Vice President of Product Policy and Counterterrorism, at a press briefing last week at Facebook's Menlo Park, California headquarters, according to an article on CNET.
“The consequences of breaching our Community Standards vary depending on the severity of the breach and a person's history on Facebook. For instance, we may warn someone for a first breach, but if they continue to breach our policies, we may restrict their ability to post on Facebook or disable their profile. We may also notify law enforcement when we believe that there is a genuine risk of physical harm or a direct threat to public safety,” reads an excerpt from their introduction.
Under its six categories, namely 'Violence and Criminal Behavior', 'Safety', 'Objectionable Content', 'Integrity and Authenticity', 'Respecting Intellectual Property', and 'Content-Related Requests', one can now see expansions that weren't previously available to the public, addressing, among other things, "real-world harm that may be related to content on Facebook" and "content that constitutes a credible threat to public or personal safety."
The 'Respecting Intellectual Property' section prohibits the circulation of content that infringes someone else's intellectual property rights, including copyright and trademark.
The 'Content-Related Requests' section will entertain user requests for removal of their own account, requests for removal of a deceased user's account from a verified immediate family member or executor, and requests for removal of an incapacitated user's account from an authorised representative.
In fact, in an earlier Vox interview, founder and CEO Mark Zuckerberg even revealed that the company was toying with the idea of a Facebook "Supreme Court" comprising individuals from outside the company, to make the "final judgement call" on allegedly objectionable content. Another initiative, tentatively set to launch in May, is 'Facebook Open Dialogue', intended to gather feedback on its policies through events in Paris, Berlin, the UK, the US, India and Singapore.
Facebook is also expanding its rules around appeals. Where previously you could request an appeal only if your Facebook profile, Page or Group was taken down, you can now challenge the social network over the removal of an individual piece of content. Users can also appeal Facebook's decision to preserve content they had reported as a violation of the company's rules. In November, Facebook said it would double its 10,000-strong content moderation team to 20,000.
To read the complete account of its guidelines, visit its official page.