
The Great Hack is a Great Con


The Netflix documentary offers no real evidence that either the Trump campaign in the US or the EU Referendum campaign in the UK was influenced by the micro-targeting of advertising messages, writes Tracey Follows

It had been billed as a ‘must-see’ and ‘the film everyone is going to be talking about’ by almost everyone on Twitter, so I dived into the two-hour Netflix extravaganza.

I’d like to say I wasn’t disappointed. But I was.

The Great Hack is actually two documentaries in one. It tells two stories that not only overlap but are very often conflated.

The first is David Carroll’s story. This is the interesting part of the movie. David is a professor of media design at Parsons in New York who was so worried about the amount and type of personal data Cambridge Analytica might hold on him that he filed a request to get his data back. By the end of the film we discover that he never gets access to his data, and that Cambridge Analytica went bankrupt without ever opening up its algorithms to the people whose data it probably held.

David’s story is interesting because it’s one man’s quest to follow the data crumbs to try to find out what happened to his data, where it is now and how much of it there might be. Anyone following his personal journey is left in no doubt about the number of data points through which we are observed, the amount of personality information collected on us, and the power of the social networking platforms to become the canvas on which all of this is sketched out. David is looking for evidence. And there is a legitimate public-interest story here about the way our privacy is being compromised and our personal data collected, stored and used in ways the average citizen can simply never know about.


Pictured: David Carroll. Source: Netflix/YouTube

The other story is the more specific one about the demise of Cambridge Analytica in light of the unethically scraped data on millions of individuals who happened to be Facebook friends of people participating in a personality test. We all know the story by now, but it is worth repeating what Jamie Bartlett wrote about this in his book, The People Vs Tech:

“By cross referencing people’s survey answers against their Facebook likes…[Dr Michal Kosinski]… was able to work out the correlation between the two. From that he created an algorithm that could determine, from likes alone, intimate details of millions of other users who hadn’t taken the survey. In 2013 he published the results, showing that easily accessible digital records of behaviour can be used to quickly and accurately predict sexual orientation, ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age and gender.”

Cambridge Analytica went one step further, claiming, as we saw in the film, that it could predict the personality type of every single person in the US. Much of the film is spent looking back at Alexander Nix and Cambridge Analytica’s involvement in election campaigns in India and Africa, suggesting that they were often in the business of suppressing or motivating voter turnout to influence the results of elections.

The film then stitches these two subplots together to make it seem that, because CA had access to lots of social media users’ likes and other personal data, and because it appeared to be in the business of influencing voter turnout, the EU Referendum and the US election were somehow ‘hacked’ and their results are illegitimate. Observer journalist Carole Cadwalladr is there to join all the dots. But then, given that her Twitter activity over just the last week has included all sorts of now-retracted claims, she has a tendency to join dots that don’t even exist.

The film in fact offers no evidence whatsoever that either the Trump campaign in the US or the EU Referendum campaign in the UK was influenced by the ‘micro-targeting’ of advertising messages: matching people whose data identified them as ‘persuadable’ with messages that made them vote differently from how they otherwise would have.


Source: Netflix/YouTube

Now, it might be the case that variations of pro-Brexit messages were targeted at people who were undecided about which way to vote, and that those messages were placed in people’s social media feeds. But we don’t know the effect of those messages, or whether they had any effect at all. We certainly cannot say, as the film implies, that such messages helped to change people’s minds. But unlike David Carroll, Cadwalladr isn’t interested in looking for evidence.

If she were, she might have turned to the testimony that Eitan Hersh gave to a United States Senate hearing in 2018. Hersh is a professor of political science at Tufts University, and in 2015 he published an interesting book called Hacking The Electorate: How Campaigns Perceive Voters. Much of the data in that book draws on the 2012 Obama re-election campaign, which gave him a solid background from which to comment on voter targeting and whether it had lately shifted from persuasion to manipulation.

In his testimony he talks in detail about the gaps in knowledge around the effects of social media targeting – gaps which are interesting in themselves, and which some research body should undertake to fill. But on the election issue he makes many points, including two main ones:

– On persuadables: a persuadable voter is merely one whose opinions will change on hearing or reading new information. He says, “Cambridge Analytica’s strategy of contacting likely voters who are not surely supportive of one candidate over the other but who support gun rights and who are predicted to bear a particular personality trait, is likely to give them very little traction in moving voters’ opinions. And indeed, I have seen no evidence presented by the firm or by anyone suggesting the firm’s strategies were effective at doing this.”

– Then on psychological profiling: “The new component of targeting…is psychological profiling…Facebook ‘likes’ might be correlated with traits like openness and neuroticism but the correlation is likely to be weak.” The weak correlation means that the prediction will have lots of false positives – namely, people who Cambridge Analytica predicts will have a trait but who don’t actually have it. He goes on to suggest that targeting models such as these are wrong about 25-30% of the time, and to state that he is skeptical that CA manipulated voters in a way that affected the election.

In a nutshell, there is little evidence that psychographics is particularly effective, and predicting people’s personality (which was the claim CA made) is not the same as persuading people into action, for which there seems to be little or no evidence.


Source: Netflix/YouTube

Yet the film wants to suggest that in both the US and UK elections, the winning sides’ victories were marginal. Characters in the film are aghast at how slim the margins were in the very few states that gave Trump his victory. Everyone shakes their head in unison, in disgust, in utter disbelief. But wasn’t it only recently that psychologists and business leaders were telling us it was all about marginal gains? I remember many a presentation deck wheeling out (sorry) Dave Brailsford to educate the industry on how tiny incremental gains could add up to the marginal difference between a loss and a win. Apparently, we no longer think that is legitimate!

Yet still the film keeps going, trying to plant seeds of doubt in the viewer’s mind about whether recent election results were legitimate. It goes on and on about how we now each receive our own private advertising messages that no one else sees, implying that this is sinister and that people are being directly and personally ‘brainwashed’ into thinking something they would not otherwise think.

The fact is that the very media they seem dead set against is called ‘social’ media. It is social. It is not the case that someone receives a message that they and only they see. If someone sees a message in their feed they may like it or comment on it or even share it. Others may comment on it. A discussion may ensue.

In my recent research into the future of media, it is abundantly clear that people don’t just receive messages in a vacuum. They aren’t sitting at their laptops waiting to be programmed into thinking something untrue is true, or into believing something that until that moment they did not believe at all. In fact, whether it is advertising or editorial content, people tend to share it with others to solicit their opinion, intentionally searching for confirmatory signs from others that it is valid, true, real. If anything, it is the community around a person that authenticates stories and messages for them, not the sender or the content of the message itself.

Not since Vance Packard’s The Hidden Persuaders have people fallen victim to such an un-evidenced, spurious theory about the power of communications to control our minds. First published in 1957, the book emphasised the dangers of consumer analysis: using psychological research to play on people’s hidden fears and anxieties to drive their buying impulses. Post-war TV broadcasting was becoming popular in the United States and Britain, and television sets were becoming commonplace in homes, businesses and institutions. During the 1950s, television was the primary medium for influencing public opinion, and hence it was also the target for demonisation in this way.

Today, twenty years into internet communications, social and online media are seen as the primary medium for influencing public opinion, and are once again the target of demonisation in a similar way.


Source: Netflix/YouTube

I realise that the audience for this column is not necessarily made up of persuadables: many people in media believe that the industry in which they work is responsible for micro-targeting misleading messages at people who voted differently to them. But for others, it will not have passed them by that this documentary was screened on Netflix – a media channel that was deified for its smart use of data-driven targeting when it launched House of Cards. Millions of column inches and many platform presentations were dedicated to the sophisticated ways that content had been tagged, with people seeing different versions of the trailer for the new programme depending on the algorithm.

I certainly remember the director of global communications, Joris Evers, saying: “There are 33 million different versions of Netflix”. What a triumph. What an innovation. Not a single media outlet or conspiracy theorist at the time suggested that that kind of micro-targeting could feed fear and become a very dangerous idea.

So it is very possible that it is not the social media platforms, data scientists and psychological profilers who are using fear and anger to gain traction with their online targeting; we may come to find that it is the very people attacking and criticising them who are the actual fear-mongers; that it is they who are the hidden persuaders after all.

Tracey Follows is the founder of futures consultancy Futuremade

@traceyfutures

NigelGwilliam, Director of Media Affairs, IPA, on 06 Aug 2019
“To conflate Netflix recommendations with political microtargeting is asinine.

As we stated in the IPA submission to the Centre for Data Ethics and Innovation’s inquiry into online targeting, microtargeting is not the problem, political microtargeting is.
An excerpt: “…online enclaves and niches relating to products, services and cultural groups (imagine classic car owners, airline executive club members, fans of a particular football club or pop band) pose little threat to individuals, organisations or to society. Within the bounds of the law, online targeting in these spaces is largely innocuous. The contrast with regard to political online targeting and the potential for its abuse could not be starker.
True democracies rely on the public sphere – where open, honest and collective debate can flourish. We strongly believe that micro-targeted political ads circumvent this open debate because very small numbers of voters can be targeted with specific messages that not only exist online briefly, but which can be tailored specifically to that individual’s particular biases and beliefs for maximum effect.
And critically, political advertising, unlike every other category, is not covered by the CAP Code under the self-regulatory system, overseen by the Advertising Standards Authority (ASA). There is no governance. We therefore believe that the absence of governance allows this growing, opaque and unaccountable form of political communication to be open to abuse.”

Ironically, the author’s point that “it is the community around a person that authenticates stories and messages for them” is precisely part of the problem.

Political microtargeting is a key component of a larger challenge to democracy presented by social media driving group polarisation. Let me quote Cass R. Sunstein writing FOR the Facebook newsroom:

“Are automobiles good for transportation? Absolutely, but in the United States alone, over 35,000 people died in crashes in 2016… Social media platforms are terrific for democracy in many ways, but pretty bad in others… For social media and democracy, the equivalents of car crashes include false reports (“fake news”) and the proliferation of information cocoons — and as a result, an increase in fragmentation, polarization and extremism. If you live in an information cocoon, you will believe many things that are false, and you will fail to learn countless things that are true. That’s awful for democracy. And as we have seen, those with specific interests — including politicians and nations, such as Russia, seeking to disrupt democratic processes — can use social media to promote those interests.

This problem is linked to the phenomenon of group polarization — which takes hold when like-minded people talk to one another and end up thinking a more extreme version of what they thought before they started to talk. In fact that’s a common outcome. At best, it’s a problem. At worst, it’s dangerous.”
Source: https://newsroom.fb.com/news/2018/01/sunstein-democracy/

I am considerably more influenced by Professor Sunstein's concerns than I am by the politically invested author of this weak piece.”
GeorgeMorgan, Strategic Planner, News UK, on 05 Aug 2019
“The most important line in the documentary came from Christopher Wylie, who reasonably pointed out that if you're caught doping in athletics, no one asks "but by how much" or "was the doping actually that effective"; we simply accept it's wrong and that it invalidates the result.

Turning a debate about how we conduct politics, into one about the mechanisms of ad tech, does seem to rather miss this.”
NigelJacklin, MD, Think.me.UK, on 02 Aug 2019
“I agree with Nick. Basically CA were mostly guilty of hype...saying their approach could do more than it could. Clearly people have not yet come to terms with the fact that Farage/Leave and Trump were chosen by people...and obviously 'it was fixed' is more entertaining than 'it didn't work.' The other social media success was Momentum and Corbyn...where the people spoke out in favour of Labour.”
NickDrew, CEO, Fuse Insights, on 02 Aug 2019
“A very thorough, insightful analysis Tracey. It's incredibly notable (and actually important) that when it comes to big headlines there's very rarely any thoughtful or robust debunking of them - whether it's analysis of "OMG Cambridge Analytica has changed elections!!!!!", or futurists’ claims that we’ll all have 5G phones next year.
At the time that CA first became a news story, The Economist ran an article delving into some of Cambridge Analytica’s claims; the most memorable point from the article was that despite the sales pitch, various campaign managers had actually found CA’s data to produce no better results than their own more traditional approaches based on geodemographics and the like. And stepping back from the hype and excitement, one could see that there's no reason why it should be - what people present on Facebook is not the same as who they are, after all.
Unfortunately, such reasoned thinking doesn’t sell, and “Cambridge Analytica’s data tactics were unethical but legal; and what they did actually had no impact on any election” is not what people want to hear right now. For some reason there’s a real victim mentality in the public at large – “my voice isn’t being heard” and “everything’s a con to stop ‘real’ people’s opinions from counting” – and so themes like the Great Hack resonate with people’s views.
So it’s refreshing to see this thoughtful analysis, (momentarily) restoring one’s faith that not *everyone* is a blithering idiot...”
TomFilmore, Prof, University of Chicago, on 30 Jul 2019
“the evidence is not in an obvious form of 1-1 manipulation - it is more in the correlation between various ideological traits. there were people sharing militia messages that have no affiliation with militias. this is not traditional media targeting. we can't measure it that way. there is a lot of insight into the intersection of Trump supporters and Sanders supporters. taking a more sociological approach rather than a media one gets you to what really happened here.”
BrianJacobs, Founder, BJ&A, on 30 Jul 2019
“I watched the film and enjoyed it. And yes Tracey is right that it didn't include any analytical evidence that the ads targeted in the way described had any effect.
But there's a great deal of anecdotal evidence that the ads did indeed do the job they were intended to do, including examples in which people when asked why they voted the way they did quoted reasons that were untrue but which were featured in targeted social media communications (like 70m Turks about to arrive on our doorstep as a result of Turkey joining the EU).
There are other pieces of the jigsaw Tracey misses out too. Such as the spreading of misinformation, and the fact that nobody gave permission for their data to be used in the way that it was.
Her pop at Carole Cadwalladr is unnecessary and beneath her. There is no investigative journalist I can think of who's done more to unearth various wrongs. And when she makes mistakes (as happened in the last few days as Tracey well knows) she holds her hand up and admits to them.
The Great Hack is I'm sure not faultless but if it brings to the fore the ways in which some bad players misuse data, and trample all over individuals' rights (as Professor Carroll has it) then it's done us all a favour.”
traceyfollows, ceo, futuremade, on 30 Jul 2019
“Thanks for stopping by but you have missed the point. Try reading it again. And this time pay particular attention to the claims made by both CA and the film, that FB preferences lead to personality traits that then lead to behavior change. ps. you seem very angry, perhaps someone has been micro-targeting you...”
MikeFollett, Managing Director, Lumen Research, on 30 Jul 2019
“The fact that a film about micro-targeting appeared on a micro-targeted platform like Netflix doesn't discount micro-targeting, it's just another example of the rise of micro-targeting. It's not a gotcha moment, it's grist to the mill.

This article presents quite a challenge to our industry. Does advertising work or not? Tracey comes close to saying that it doesn't, so it doesn't matter if Facebook was used to disseminate fake ads.

But there's quite a big industry, and a series of regulators dishing out fairly big fines, that says that it does. In fact, Tracey makes a living on the assumption that it does.

So which is it? If advertising 'works' then it could 'work' for good or for ill (in this case, for ill). If advertising doesn't 'work', then none of this matters. But if that's the case, why are we wasting our time reading what Tracey thinks?”
