
Ella Sagar 

What action has social media taken after CAN's demands over online racist abuse?

The Conscious Advertising Network (CAN), a voluntary coalition of 70 companies that aims to improve ethics in the media industry, wrote an open letter to the CEOs of Facebook, Instagram, Twitter and Snapchat after England footballers Marcus Rashford, Jadon Sancho and Bukayo Saka were racially abused online following the Euro 2020 final.

The letter was signed by more than 400 brands, agencies, civil society groups and individuals, and made four demands of the social media companies to prevent similar racist abuse on their platforms.

CAN gave the platforms a 30-day deadline to enact these changes before the start of the Premier League season – over seven weeks ago now.

Since then, there has been further racist abuse of footballers online, with several cases going to court and one landmark ruling resulting in a football fan being jailed for what he posted online about striker Romaine Sawyers.

Ex-Manchester United and England defender Rio Ferdinand also gave evidence in Parliament about the issue during the drafting of the Online Safety Bill. He particularly stressed the impact on the friends and family of those abused, as well as the burden placed on victims to block, report or switch off from abusive messages, when it should fall on the perpetrator and their networks.

Facebook in particular has been in the spotlight this week after an ex-employee turned whistleblower claimed it puts "astronomical profits before people" and harms children and destabilises democracy in the process.

So, what changes have the platforms made in light of this public pressure, ongoing issues and resulting dialogue with CAN? And what more does CAN think social media platforms should do?

Mediatel News has asked the social media companies about each of CAN's recommendations and what they are doing or are promising to do in response. Our report today reveals:

  • Apart from some exceptions, most of CAN's recommendations have not been followed, mostly because the social media companies believe their existing policies are already sufficient
  • These exceptions include AI-driven pop-up warnings on Facebook, Instagram and Twitter, and comment filters designed to stop bullying
  • CAN's leadership is "disappointed" that the social media platforms appear unwilling to come together urgently for a joint marketing campaign to stamp out racist abuse online
  • CAN now believes it was "wrong" to ask the platforms to report all possible racist abuse to the police.

One: 'Publish updated hate speech policies, that include the use of emojis, to support your zero tolerance approach'

In response to the first demand in CAN's open letter, Facebook and Instagram pointed to their hate-speech guidelines, which specify that attacks on "protected characteristics" are forbidden "in written or visual form".

On Facebook's Transparency Centre, it specifies a list under "Do not post" that includes dehumanising speech or imagery, violent speech in written or visual form, and mocking generalisations or comparisons between "Black people and apes or ape-like creatures", "Jewish people and rats", and "Muslim people and pigs", among others.

Twitter has a section on "Hateful Imagery" under its Hateful Conduct page. It prohibits the use of hateful imagery or symbols in profile images, headers, usernames, display names or bios that may constitute harassment or abuse towards a person, group or protected category.

The guidelines go on to specify: "We consider hateful imagery to be logos, symbols, or images whose purpose is to promote hostility and malice against others based on their race, religion, disability, sexual orientation, gender identity or ethnicity/national origin", and give examples such as the Nazi swastika.

While these policies do not mention emojis specifically, "imagery" and "visual form" are treated as catch-all terms that include emojis.

Snap responded to CAN and said it will update its community guidelines to mention emojis explicitly, making clear that it will not tolerate hate speech in any content on its platform.

Two: 'Advertise your zero-tolerance approach directly to users'

Facebook partnered with Kick It Out on an anti-racism initiative called Take A Stand, which it says "empowers fans to call out discrimination and racism wherever they see it and move the conversation from awareness to action" via a Messenger app as fans return to stadiums during the football season. Reporting and education were the main focuses of this action. Facebook is also working with partners BT Sport, Arsenal and UEFA, as well as ISBA, on this issue.

Twitter said it has continued to take part in Kick It Out’s Football Online Hate Working Group and is sharing regular updates on data and actions taken with the football community.

While community guidelines and hate speech policies are accessible on each platform's site, CAN makes clear the platforms have not come together to present a high-profile, united advertising campaign focused on zero tolerance for posting or sharing hate speech in their online communities.

Each platform has published a blog about its response to racism after the Euro final, but conversations about further partnerships and campaigns are still ongoing.

Facebook and Instagram sent their blog post, published at the same time as the release of the CAN letter, which outlines their latest actions to combat online hate.

Twitter responded to a request for comment with a link to its blog detailing how it is combatting online abuse in response to the Euro final.

Snap replied that, upon receiving the open letter, it responded to CAN and met the group virtually several times to discuss racist abuse on its platform. It also sent its blog post from 16 July, written by Katy Minshall, its head of policy for the UK.

Three: 'Enforce your policies and report racist abuse to the police, employers and relevant football clubs as a crime'

In its section on law enforcement, Instagram said: "We’re also committed to cooperation with UK law enforcement authorities on hate speech and will respond to valid legal requests for information in these cases. As we do with all requests from law enforcement, we’ll push back if they’re too broad, inconsistent with human rights, or not legally valid."

In a similar vein, Instagram's parent company Facebook said: "We’re committed to cooperating with law enforcement authorities and to responding to valid legal requests for information. We are in ongoing discussions with the National Police Chiefs Council, the UK Home Office Football Policing Unit and relevant local police forces to understand how we can continue to best support active investigations and ensure valid data requests can be submitted and actioned in accordance with applicable laws and our terms of service."

Snap released its Transparency Report for the second half of 2020, which gives detailed information, organised by country, on the number of violations on its platform, what was done to enforce its policies and how it responded to requests from police and governments.

Twitter does not specifically refer to the police, employers or football clubs in its statement. CAN acknowledges the platforms' progress, the arrests made by police following the Euro final, the help the Online Harms Bill will give law enforcement in fighting online hate once in effect, and the social media tools that help users report abuse. However, it recognised that reporting every incident of abuse was not practicable and that more emphasis should be put on the perpetrators of abuse.

In CAN's blog on the response to its open letter, it says: "Having consulted with platforms and the industry, we believe that we were wrong to ask for all possible racist abuse to be reported to the police by the platforms. We always recognised that it is not for the platforms to decide what is and what isn’t a crime. That is for the police.

"However, we do understand that the police currently may not have the resources to deal with the volume if posts that contain racist content or a possible hate crime were automatically passed to them. Our intention was not to suggest overwhelming police capacity, but it was to call for accountability if a hate crime has been committed."

However, Dino Myers-Lamptey, founder of The Barber Shop and co-chair of the CAN GSD Board, told Mediatel News that he believed CAN was right to demand that social media companies report crimes to the police, given what it knew at the time.

He explained: “Until then, we weren't getting full access to the facts and work of the organisations with the police, however now there is more openness about that process.”

Four: 'Add an interstitial to disrupt potentially racist remarks, and ensure human checking on all posts flagged in this way.'

In their blogs on tackling online abuse, Facebook, Instagram and Twitter mention the automated tools they use to take down content that violates their guidelines.

On preventing people from posting abuse in the first place, and on human checking, Facebook wrote that, along with new anti-bullying tools such as filters and controls over which comments you see, it has expanded its comment warnings, which pop up when people try to post potentially offensive comments.

Instagram noted that it has seen a decrease in offensive content after developing its AI system to "warn people when they’re about to post something that might be hurtful".

Twitter said: "We have further improved our proactive tools to identify racist abuse, such that we have been able to swiftly identify more Tweets than ever before targeting the UK football community."

Twitter also has a new comment feature that "autoblocks accounts using harmful language, such that they're stopped from being able to interact with your account", and it is continuing to extend its "replies prompts", which encourage people to edit their replies to Tweets if the language is detected as potentially harmful.

Twitter found that, through this replies prompt, 34% of people revised their initial reply or decided not to send it at all, and that, after being prompted once, people composed on average 11% fewer potentially harmful replies in future.

The platforms note that they take down the vast majority of violating content before it is reported, which reduces how many users see it, but only Facebook commented specifically on human review, which matters because AI does not always understand context. It promised: "We will continue to work on this so we can remove violating emojis from our platform quicker."

Violating content in numbers

Facebook took down more than 25 million pieces of hate speech content between January and March 2021; nearly 97% of this was removed before someone reported it.

On Instagram, 3 million pieces of content were actioned in the same timeframe, 93% of it before someone reported it.

Twitter removed nearly 13,000 Tweets between publishing its blog on online abuse on 19 February and 1 June, of which 95% were identified using AI technology.

Snap said in its transparency report for 1 July – 31 December 2020 that it enforced against 5,543,281 pieces of content globally that violated its guidelines.

However, it added: "Snapchat does not offer an open news feed where unvetted publishers or individuals have an opportunity to broadcast hate or abusive content. Our Discover platform for news and entertainment, and our Spotlight platform for the community's best Snaps, are curated and moderated environments. This means that content in Discover or Spotlight is provided either by our professional media partners, who agree to abide by strict Content Guidelines, or is user-generated content that is pre-moderated using human review, prior to being surfaced to large groups of Snapchatters. And Snapchat does not enable public comments which can facilitate abuse."

Calls for united anti-racism campaign unanswered

When asked about the outcome of the dialogue with the social media platforms, Myers-Lamptey added: "Following the CAN open letter to the largest social networks, we engaged in a number of conversations with each of them, along with the Police and other concerned organisations, such as the PFA and Fifpro. From those conversations, we became better informed on each of their strategies in tackling the issues of hate speech and abuse online."

“There were no complete solutions, and while progress was being made, all were still lagging behind reasonable expectations. We seemed to align with all on the need to collaborate and campaign together with urgency to the public, explaining the zero-tolerance stance and the policies and punishments for breaches.

“However, we are disappointed by the lack of urgency around this communication and are currently waiting to hear back from each marketing department on their plans and willingness to collaborate on a united campaign.

“It isn't difficult to run a campaign that tells the public of their policies and zero-tolerance stance. This campaign must be done in an impactful and collaborative way, with the social media companies standing side by side. It doesn't need a huge budget, a creative agency strategy or an idea development plan. It just needs to be done, and to be done about six weeks ago, as requested by CAN, before the Premier League started. Sky even has a £30m anti-racism fund to support these kinds of campaigns, so what are they waiting for?"

Users need 'seismic' change

Lydia Amoah, founder of the Black Pound Report and CEO of Backlight, an agency that helps companies in the creative industries become more inclusive, agreed that social media companies could do more to protect Black, Asian and Multi-Ethnic users from racist abuse while using their platforms.

"Unfortunately, this type of reaction is one that people of colour in the public eye have come to expect while online, but it is not one that they should be expected to accept," Amoah said. "The cultural change that is needed to make these platforms safe spaces for all users is seismic, and this has to start with accountability."

"It is reasonable to expect the platforms to make users aware of their policies and for there to be penalties for violations of the code of conduct that we all agree to when we participate in these spaces.

"In my opinion, the clearest way for platforms to demonstrate their commitment to this would be for those who subject other users to racist abuse to face the same real world consequences as they would if they made these remarks, and often threats, offline."
