YouTube releases its first Community Guidelines Enforcement Report

Tuesday, April 24, 2018, 3 min read

Transparency is the new motto on the block. After the Cambridge Analytica data scandal, companies have become more circumspect about their user policies and guidelines. With hate speech and objectionable videos doing the rounds last year, YouTube received flak for not monitoring content posted on the platform.

Making good on the promise it made in December last year, the world’s leading video streaming website has released its first quarterly Community Guidelines Enforcement Report. The company has also launched a Reporting Dashboard that lets users track the status of videos they have flagged for review.

The report covers data for the last quarter of 2017. YouTube has revealed that it is using machines to flag content for review at scale. In a statement, the company said, “And our investment in machine learning to help speed up removals is paying off across high-risk, low-volume areas (like violent extremism) and in high-volume areas (like spam).” YouTube introduced machine learning for flagging content in June 2017.

According to the report, YouTube removed over eight million videos in the last quarter of 2017, not counting videos from channels that were removed in their entirety for violating YouTube guidelines. Machines flagged 6.7 million of these videos, and YouTube claims that 76 percent of them were removed before they received a single view.

Most of these videos fall under spam or adult content, and represent a “fraction of a percent of YouTube’s total views.”

In its Policy and Safety guidelines, YouTube clearly states the kinds of content that are not suitable for the platform: nudity or sexual content, hateful content, harmful or dangerous content, violent or graphic content, threats, harassment and cyberbullying, spam, misleading or false videos, and, of course, copyrighted content. Content that shares people’s personal information without their knowledge or permission, or that involves child exploitation, is also not permitted on the platform.

The company stated, “Last year, we committed to bringing the total number of people working to address violative content to 10,000 across Google by the end of 2018. At YouTube, we’ve staffed the majority of additional roles needed to reach our contribution to meeting that goal. We’ve also hired full-time specialists with expertise in violent extremism, counterterrorism, and human rights, and we’ve expanded regional expert teams.”

Human flags can come from an individual user or a member of YouTube’s Trusted Flagger programme, which includes individuals, NGOs, and government agencies that are particularly effective at notifying YouTube of content that violates Community Guidelines. In the last quarter of 2017, India topped the list of countries from which YouTube received the most human flags by total volume.

This is only the first step. Whether YouTube will be able to keep up with monitoring and policing unsuitable content on a massive scale with the help of machine learning remains to be seen. Meanwhile, the company believes that the quarterly report is an important step and that regular updates will help everyone see the progress the company is making in removing violative content from the platform.

“By the end of the year, we plan to refine our reporting systems and add additional data, including data on comments, the speed of removal, and policy removal reasons,” shared YouTube.