
Facebook: fines, flaws and the battle against fake news

 As Facebook records its fastest revenue growth since 2016, Raymond Snoddy looks at the responses to misinformation and data breaches

It has been an intriguing period for Facebook, although you might counter that there is rarely a time when the $1 trillion tech company is not, at the very least, interesting.

The information commissioner, Elizabeth Denham, announced that the social media giant is to be fined £500,000 for two “very serious” breaches of the Data Protection Act 1998 for its part in the Cambridge Analytica scandal.

Denham is perfectly aware that the imposition of such a fine on a corporate behemoth such as Facebook amounts to a “pyramid of piffle” as Boris Johnson might have said. At the time in question, the first quarter of 2018, Facebook was taking in around £500,000 in revenues every five minutes or so.

A frustrated information commissioner could only lament that the paltry sum was the maximum she could impose under the legislation in force at the time.

Denham sounded as if she would have positively relished getting to grips with the powers she now has under the European General Data Protection Regulation to fine companies either a maximum of £17 million or 4% of global turnover, whichever is the higher.

A fine of around £1.4 billion based on 4% of Facebook’s turnover might just have been enough to get the company’s attention.
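
As a rough illustration of how a figure of that order is reached: the GDPR-style penalty is the higher of a fixed cap and 4% of global annual turnover. The sketch below assumes a turnover of roughly £35 billion, a figure implied by the £1.4 billion estimate above and used purely for illustration.

```python
# Illustrative GDPR-style fine calculation: the higher of a fixed cap
# and 4% of global annual turnover. The ~£35bn turnover is an assumption
# implied by the £1.4bn figure above, used here only for illustration.

fixed_cap_gbp = 17e6            # ~£17 million cap cited above
turnover_share = 0.04           # 4% of global annual turnover
assumed_turnover_gbp = 35e9     # assumed annual turnover (illustrative)

fine_gbp = max(fixed_cap_gbp, turnover_share * assumed_turnover_gbp)
print(f"£{fine_gbp / 1e9:.1f} billion")  # -> £1.4 billion
```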

Yet as Denham emphasised, while fines and prosecutions are necessary to punish “the bad actors”, her real goal was to effect change and thus help to restore trust and confidence in the democratic system.

The trouble with regulation is that it is usually two or three steps behind the latest instant manoeuvres of the tech giants of California, quite apart from being hamstrung, until recently, by a 1998 fines scale.

At least Facebook had plenty of time to think about the coming judgement of the UK’s information commissioner, and her desire to encourage change, since the initial prosecutions were launched.

It is therefore more than a little unfortunate that, in the same month as the Denham action, Facebook, after a long standoff, decided to suspend the accounts not of anti-vaxxers or 6 January conspiracy theorists, but of members of the Cybersecurity for Democracy project at New York University.

The project has, according to Vice News, revealed major flaws in Facebook’s political ad transparency tools and “highlighted how Facebook’s algorithms were amplifying misinformation.”

Recently the researchers helped to track vaccine disinformation on Facebook.

The account suspensions mean that NYU researchers are no longer able to get access to their raw material in Facebook’s Ad Library data.

One of the researchers, Damon McCoy, argued that disinformation on Facebook around Covid-19 and vaccines was literally costing lives.


“It is disgraceful that Facebook is attempting to quash legitimate research that is informing the public about disinformation on its platform,” McCoy said.

The issue, like so many such issues, is complicated.

The researchers use a browser extension called Ad Observer, which users download voluntarily and which allows the researchers to collect anonymised data about the ads those users are shown.

As a result, journalists and researchers can see how and where politicians are focusing their ad spend.

Unless Facebook – which, ironically in the circumstances, says it has acted to protect the privacy of its users – changes its stance, the investigations of the Cybersecurity for Democracy group are essentially over for now.

Critics have pointed out that allowing Facebook to dictate who can investigate what is occurring on its platform is not in the public interest.

Has Facebook done enough to change or respond effectively to critics?

On Tuesday this week, Facebook told us it had removed a network of accounts from Russia that it linked to a marketing firm, which aimed to enlist influencers to push anti-vaccine content about the Covid-19 jabs.

Of course the biggest thing of all never changes – there is absolutely no let-up in the never-ending stream of revenue and profit, which keeps on rising whatever critics say about misinformation or regulators do about data breaches.

The company has just recorded its fastest growth since 2016, with revenues in the second quarter hitting a higher-than-expected record of $29 billion and profits doubling from the same period last year to $10.39 billion.

The rate of growth is expected to slow as the “post-Covid” economic bounce weakens, yet the heart of the business – its audience – remains strong, with monthly active users rising from 2.7 billion to 2.9 billion.

Any sensible person is left scratching their head, wondering whether anything can be done to stop the spread of misinformation when there remains so much money in it, however great the efforts companies such as Facebook say they make to stamp it out.

NewsGuard, the US journalism and technology company, which rates websites of all kinds for the reliability of the information they provide, has just tried to quantify the scale of the overall problem.

In a collaboration with Comscore, NewsGuard has produced a report warning that the misinformation industry is booming worldwide.

It estimates that around $2.6 billion a year in advertising revenue, some of it supporting top brands, is being sent inadvertently to publishers of misinformation and disinformation by programmatic advertisers.

This includes hundreds of millions of dollars used to support false health claims, anti-vaccine myths and election misinformation.

The co-operation involved cross-referencing two sets of information. One was NewsGuard data on more than 6,500 news and information websites rated for reliability or the lack of it. This was combined with Comscore information on traffic and advertising rates in a sample of 7,500 sites.

It estimated – and you have to emphasise estimated – that 1.68% of display ad spending across the sample of 7,500 sites had gone to misinformation sites, which were not individually identified.

Applied to the $155 billion global programmatic advertising industry, this would equate to $2.6 billion a year.
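
For readers who want to check that arithmetic, the short sketch below simply applies the 1.68% share from the sample to the $155 billion global market; it is a back-of-envelope reproduction of the reported figure, not part of NewsGuard’s own methodology.

```python
# Back-of-envelope check of the NewsGuard/Comscore estimate quoted above:
# apply the 1.68% misinformation share to the $155bn programmatic market.

misinformation_share = 0.0168           # 1.68% of display ad spend in the sample
global_programmatic_spend_usd = 155e9   # $155 billion global programmatic market

estimated_misinformation_revenue = misinformation_share * global_programmatic_spend_usd
print(f"${estimated_misinformation_revenue / 1e9:.1f} billion a year")  # -> $2.6 billion a year
```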

The problem is as obvious as it is difficult to deal with.

Producing false information is vastly cheaper than creating legitimate, checked and edited news, yet the phoney articles still generate serious engagement and, with it, revenues and workable business models.

As NewsGuard puts it: “Because misinformation does not cost much to produce, each ad dollar spent on misinformation goes further towards producing fake news than each ad dollar spent on legitimate media outlets goes towards producing credible journalism.”

Clearly such a comment goes to the intractable heart of the matter, leaving food for thought for Elizabeth Denham and her successor, the high-tech companies, and advertisers who are sometimes a little casual about where they allow programmatic agencies to take them.
