Still reeling from the Cambridge Analytica data harvesting revelations, Facebook is moving fast to shore up dwindling user trust by spelling out its policies loud and clear. After clarifying how it collects user data through third-party apps even when one is not logged into Facebook, it has now updated its community standards guidelines to elaborate on what kinds of posts users are not allowed to publish on the site.
While the broad rules and categories on the page remain the same, the company has fleshed them out to help its content moderators weed out objectionable material, adding anecdotal and situational explanations in certain cases to spell out as explicitly as possible what kinds of posts it will take down, and on what grounds.
"These standards will continue to evolve as our community continues to grow. Now everybody out there can see how we're instructing these reviewers," Monika Bickert, Vice President of Product Policy and Counterterrorism, said last week at a press briefing at Facebook's Menlo Park, California headquarters, according to an article on CNET.
“The consequences of breaching our Community Standards vary depending on the severity of the breach and a person's history on Facebook. For instance, we may warn someone for a first breach, but if they continue to breach our policies, we may restrict their ability to post on Facebook or disable their profile. We may also notify law enforcement when we believe that there is a genuine risk of physical harm or a direct threat to public safety,” reads an excerpt from their introduction.
Under its six categories, namely 'Violence and Criminal Behavior', 'Safety', 'Objectionable Content', 'Integrity and Authenticity', 'Respecting Intellectual Property', and 'Content-Related Requests', one can now see expansions that weren’t previously available to the public.
'Violence and Criminal Behavior' covers "real-world harm that may be related to content on Facebook" and "content that constitutes a credible threat to public or personal safety." Under this category:
- Facebook warns users against posting credible statements of intent to commit violence against any person, group of people or place (city or smaller), statements seeking a bounty or arms, and instructions on how to use weapons where the goal of targeting and harming someone is evident.
- It disallows individuals, groups or organisations involved in terrorist activity, organised hate, mass or serial murder, human trafficking, or organised violence or criminal activity "to have a presence on Facebook," and will also flag content supporting such entities.
- It prohibits people from promoting or publicising violent crime, theft and/or fraud.
- It restricts promoting, purchasing or selling "regulated goods" such as marijuana, non-medical and pharmaceutical drugs, and firearms.
In the 'Safety' section, Facebook states that it will:
- Remove content that encourages suicide or self-injury, including real-time depictions that might lead others to engage in similar behaviour.
- Not allow content that sexually exploits or endangers children.
- Remove content that depicts, broadcasts, threatens or promotes sexual violence, sexual assault or sexual exploitation, while also allowing space for victims to share their experiences.
- Not tolerate bullying on Facebook, and will therefore remove content that purposefully targets private individuals with the intention of degrading or shaming them. This policy does not apply to public figures, as Facebook wants to leave room for discourse that may involve critique. However, "hate speech or credible threats" against public figures will still be taken off the site, it added.
- Prevent unwanted or malicious contact on the platform.
- Not allow posts containing personal or confidential information about others shared without their consent.
Its 'Objectionable Content' section encompasses:
- Hate speech against race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity and serious disability or disease.
- Content that glorifies violence or celebrates the suffering or humiliation of others because it may create an environment that discourages participation.
- Adult nudity and sexual activity, that is, the display of nudity or sexual activity. However, pictures of art (paintings and sculptures that depict nudity) are permitted.
- Content that targets victims of serious physical or emotional harm, classified as "cruel and insensitive" in its guidelines.
'Integrity and Authenticity' section includes:
- Spam, defined as “false advertising, fraud and security breaches, all of which detract from people's ability to share and connect.”
- Misrepresentation – which entails being a “real, verifiable identity on Facebook.”
- False news, which Facebook is not cracking down on completely, given that the line between satire, opinion and fake news is thin. Instead, it will "significantly reduce its distribution by showing it lower in News Feed."
'Respecting Intellectual Property' prevents the circulation of content that breaches someone else's intellectual property rights, including copyright and trademark.
'Content-Related Requests' section will entertain user requests for removal of their own account, removal of a deceased user's account from a verified immediate family member or executor and requests for removal of an incapacitated user's account from an authorised representative.
In fact, in an earlier Vox interview, founder and CEO Mark Zuckerberg revealed that the company was toying with the idea of a Facebook "Supreme Court" comprising individuals from outside the company to make the "final judgement call" on allegedly objectionable content. Another initiative, tentatively set to launch in May, is 'Facebook Open Dialogue', a series of events in Paris, Berlin, the UK, the US, India and Singapore to gather feedback on its policies.
Facebook is also expanding its rules around appeals. Where previously you could request an appeal only if your Facebook profile, Page or Group was taken down, you can now challenge the social network over the removal of an individual piece of content. Users can also appeal Facebook's decision to preserve content they had reported as a violation of the company's rules. In November, Facebook said it would double its 10,000-strong content moderator team to 20,000.
To read the complete account of its guidelines, visit its official page.