Scale of Facebook's abuse unveiled

16 May 2018  |  Michaela Jefferson 
Facebook removed 837 million pieces of spam in the first quarter of 2018, according to a new report released by the social media platform as it continues to appease demands for increased transparency.

Published this week, the Community Standards Enforcement Report covers Facebook's enforcement efforts from October 2017 to March 2018 in six key areas: spam, fake accounts, hate speech, adult nudity, graphic violence and terrorist propaganda.

"We believe that increased transparency tends to lead to increased accountability and responsibility over time," said Guy Rosen, VP of product management, Facebook.

"This is the same data we use to measure our progress internally - and you can now see it to judge our progress for yourselves."

Facebook's largest enforcement concern is spam and the fake accounts that circulate it, with an estimated 3-4% of active accounts fake. However, the platform states that in Q1 alone it disabled 583 million fake accounts, most within minutes of registration.

A further 21 million pieces of adult nudity and sexual activity were removed during the first three months of the year, as well as 3.5 million pieces of violent content.

However, the report also admits the current limitations of AI in the enforcement process. Although the technology can accurately detect and remove spam and adult nudity nearly 100% of the time, it does not yet work well for tackling hate speech: just 38% of the 2.5 million pieces of hate speech removed in Q1 were detected by AI.

The data reflects comments made by CEO Mark Zuckerberg during his appearance before the US Congress last month, where he stated that it could take up to 10 years to develop technology sophisticated enough to properly combat Facebook's content problems.

According to Rosen, Facebook's AI is up against "sophisticated adversaries" - but the business is "investing heavily in more people and better technology to make Facebook safer for everyone".

Some 7,500 people currently moderate Facebook content globally.

The release of the report is yet another addition to the list of announcements Facebook has made in response to the bad press it has suffered since the Cambridge Analytica scandal earlier this year.

The month so far has already seen Facebook announce plans to develop a new Clear History tool, as well as an update on its investigation into apps potentially misusing user data.
