YouTube expands its workforce to reduce abusive content

04 Dec 2017  |  Scarlett O'Donoghue 
YouTube announced this week that it will expand its workforce in 2018 to help moderate content that violates its policies, following a spate of controversies.

CEO Susan Wojcicki said thousands more people will be employed to flag harmful videos and comments, bringing the total number of content moderators to around 10,000.

The announcement comes after a barrage of negative press criticising YouTube's role in streaming violent or extremist videos, as well as content that is harmful to children.

Wojcicki also said that the firm's investment in new machine learning technology has seen positive results and that it has begun using this system across other areas.

The technology has enabled YouTube to remove nearly five times more videos than before, with 98% of the videos it removes flagged by machine learning algorithms. In addition, the system allows 70% of the content violating the company's policies to be taken down within seven hours of it being uploaded.

The company is also working to increase transparency around how it tackles problematic content. From 2018 it will publish a regular report providing more data about the actions it takes to remove inappropriate videos and comments.

Wojcicki added that YouTube will take a new approach to advertising, protecting brands from videos that violate the company's guidelines and ensuring their campaigns run alongside relevant content that reflects each brand's values. The company also plans to apply stricter criteria when deciding which channels and videos are eligible for advertising, and to ensure ads run only where they should.

"In the last year, we took actions to protect our community against violent or extremist content, testing new systems to combat emerging and evolving threats," Wojcicki wrote in the blog post. "Now, we are applying the lessons we’ve learned from our work fighting violent extremism content over the last year in order to tackle other problematic content."

Commenting on the news, Justin Taylor, UK MD of Teads, said: "While more moderators is a step in the right direction, with thousands of hours of content uploaded every minute, it’s still almost impossible to guarantee brand-safe environments around user-generated content.

"As an industry, we must work together to ensure that we have the best destinations for brands to advertise on, supporting quality and premium publishing and championing formats that respect the user."
