YouTube expands its workforce to reduce abusive content

04 Dec 2017  |  Scarlett O'Donoghue 
YouTube announced this week that by 2018 it will expand its workforce to help moderate content that violates its policies, following a spate of high-profile failures.

CEO Susan Wojcicki said thousands more people will be employed to flag harmful videos and comments, taking the total workforce of moderators to around 10,000 people.

The announcement comes after a barrage of negative press criticising YouTube's role in streaming violent or extremist videos, as well as content that is harmful to children.

Wojcicki also said that the firm's investment in new machine-learning technology has produced positive results, and that the system is now being deployed across other areas.

The technology has enabled YouTube to remove nearly five times more videos than before, with 98% of the videos it removes flagged by machine learning algorithms. In addition, the system allows 70% of the content violating the company's policies to be taken down within seven hours of it being uploaded.

The company is also working to increase transparency around how it tackles problematic content. From 2018 it will publish a regular report providing more data on the actions it takes to remove inappropriate videos and comments.

Wojcicki added that YouTube will take a new approach to advertising to protect brands from videos that violate the company's guidelines, and ensure that their campaigns run alongside relevant content that reflects the brand's values. It is also planning to apply stricter criteria when considering which channels and videos are eligible for advertising, and to ensure ads are only running where they should.

"In the last year, we took actions to protect our community against violent or extremist content, testing new systems to combat emerging and evolving threats," Wojcicki wrote in the blog post. "Now, we are applying the lessons we’ve learned from our work fighting violent extremism content over the last year in order to tackle other problematic content."

Commenting on the news, Justin Taylor, UK MD of Teads, said: "While more moderators is a step in the right direction, with thousands of hours of content uploaded every minute, it’s still almost impossible to guarantee brand-safe environments around user-generated content.

"As an industry, we must work together to ensure that we have the best destinations for brands to advertise on, supporting quality and premium publishing and championing formats that respect the user."
