The Great Hack is a Great Con
The Netflix documentary offers no real evidence that either the Trump campaign in the US or the EU Referendum campaign in the UK was influenced by the micro-targeting of advertising messages, writes Tracey Follows
It had been billed as a ‘must-see’ and ‘the film everyone is going to be talking about’ by almost everyone on Twitter, so I dived into the two-hour Netflix extravaganza.
I’d like to say I wasn’t disappointed. But I was.
The Great Hack is actually two documentaries in one. There are two stories being told which not only overlap, but are very often conflated.
The first is David Carroll’s story. This is the interesting part of the movie. David is a professor of media design at Parsons in New York who was so worried about the amount and type of personal data that Cambridge Analytica might hold on him that he filed a request to get his data back. By the end of the film we discover that he never gets access to it, and that Cambridge Analytica went bankrupt without ever opening up its algorithms to the people whose data it probably held.
David’s story is interesting because it’s one man’s quest to follow the data crumbs to try to find out what happened to his data, where it is now and how much of it there might be. Anyone following his personal journey is left in no doubt about the number of data points through which we are observed, the amount of personality information that is collected on us, and the power of the social networking platforms to become the canvas on which all of this is sketched out. David is looking for evidence. And there is a legitimate public interest story here about the way in which our privacy is being compromised and our personal data collected, stored and used in ways in which the average citizen can just never know.
Pictured: David Carroll. Source: Netflix/YouTube
The other story is the more specific one about the demise of Cambridge Analytica in light of the unethically scraped data on millions of individuals who happened to be Facebook friends of people participating in a personality test. We all know the story by now, but it is worth repeating what Jamie Bartlett wrote about this in his book, The People Vs Tech:
“By cross referencing people’s survey answers against their Facebook likes…[Dr Michael Kosinski]… was able to work out the correlation between the two. From that he created an algorithm that could determine, from likes alone, intimate details of millions of other users who hadn’t taken the survey. In 2013 he published the results, showing that easily accessible digital records of behaviour can be used to quickly and accurately predict sexual orientation, ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age and gender.”
Cambridge Analytica went one further, claiming, as we saw in the film, that it could predict the personality type of every single person in the US. Much of the film is spent looking back at Alexander Nix and Cambridge Analytica’s involvement in election campaigns in India and Africa, suggesting that they were often in the business of suppressing or motivating voter turnout to help influence the results of elections.
The film then puts these two subplots together to make it seem that, because CA had access to lots of social media users’ likes and other personal data, and because it seemed to be in the business of influencing voter turnout, the EU Referendum and the US election were somehow ‘hacked’ and their results are illegitimate. Observer journalist Carole Cadwalladr is there to join all the dots. But then, given that her Twitter activity over the last week alone has included all sorts of now-retracted claims, she has a tendency to join dots that don’t even exist.
The film in fact offers no evidence whatsoever that either the Trump campaign in the US or the EU Referendum campaign in the UK was influenced by the ‘micro-targeting’ of advertising messages: matching people whose data identified them as ‘persuadable’ with messages that made them vote differently from how they otherwise would have.
Now, it might be the case that variations of pro-Brexit messages were targeted at people who were undecided about which way to vote, and that those messages were placed in people’s social media feeds. But we don’t know the effect of those messages, or whether they had any effect at all. We certainly cannot say, as the film implies, that such messages helped to change people’s minds. But unlike David Carroll, Cadwalladr isn’t interested in looking for evidence.
If she were, she might have turned to the testimony that Eitan Hersh gave to a United States Senate hearing in 2018. Hersh is a professor of political science at Tufts University, and in 2015 he published an interesting book called Hacking The Electorate: How Campaigns Perceive Voters. Much of the data used in that book draws on the 2012 Obama re-election campaign, and this gave him a solid background from which to comment on voter targeting and whether it had shifted of late from persuasion to manipulation.
In his testimony he talks in detail about the gaps in knowledge around the effects of social-media targeting - gaps that future research should fill. On the election issue he makes many points, but two stand out:
- On persuadables: a persuadable voter is merely one whose opinions will change on hearing or reading new information. He says, “Cambridge Analytica’s strategy of contacting likely voters who are not surely supportive of one candidate over the other but who support gun rights and who are predicted to bear a particular personality trait, is likely to give them very little traction in moving voters’ opinions. And indeed, I have seen no evidence presented by the firm or by anyone suggesting the firm’s strategies were effective at doing this.”
- Then on psychological profiling: “The new component of targeting…is psychological profiling…Facebook ‘likes’ might be correlated with traits like openness and neuroticism but the correlation is likely to be weak.” The weak correlation means that the prediction will have lots of false positives - namely, people who Cambridge Analytica predicts will have a trait but who don’t actually have it. He goes on to suggest that targeting models such as this are wrong about 25-30% of the time, and then states that he is skeptical that CA manipulated voters in a way that affected the election.
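Hersh’s point about weak correlations can be made concrete with a little base-rate arithmetic. The figures below are a hypothetical sketch (not data from the film or the testimony): a classifier with roughly the 25-30% error rate he cites, applied to a trait held by a minority of the population, ends up with flagged voters who mostly do not have the trait at all.

```python
# Hypothetical sketch of Hersh's false-positive argument.
# Assumptions (not from the source): a population of 1m voters,
# a targeted trait held by 20% of them, and a classifier that is
# correct 72% of the time (i.e. a ~28% error rate).
population = 1_000_000
base_rate = 0.2
accuracy = 0.72

have_trait = population * base_rate          # 200,000 actually have the trait
lack_trait = population - have_trait         # 800,000 do not

true_positives = have_trait * accuracy           # correctly flagged
false_positives = lack_trait * (1 - accuracy)    # wrongly flagged

flagged = true_positives + false_positives
precision = true_positives / flagged

print(f"Flagged: {flagged:,.0f}")
print(f"Share of flagged voters who actually have the trait: {precision:.0%}")
```

On these assumed numbers, more people are wrongly flagged (224,000) than correctly flagged (144,000): fewer than four in ten of the ‘targeted’ voters actually have the trait the campaign thinks it is speaking to.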
In a nutshell, there is little evidence that psychographics is particularly effective, and predicting people’s personality (which was the claim CA made) is not the same as persuading people into action, for which there seems to be little or no evidence.
Yet the film wants to suggest that in both election cases, in the US and the UK, the winning sides’ victories were marginal. Characters in the film are aghast at how slim the margins were in the very few states that gave Trump his victory. Everyone shakes their heads in unison, in disgust, in utter disbelief. But wasn’t it only recently that we were being told by psychologists and business leaders that it was all about marginal gains? I remember many a presentation deck wheeling out (sorry) Dave Brailsford to educate the industry on how tiny incremental gains could add up to the marginal difference between a loss and a win. Apparently, we no longer think that is legitimate!
Yet still the film keeps going, trying to plant seeds of doubt in the viewer’s mind about whether recent election results were legitimate. It goes on and on about how we now have our own private advertising messages that no one else sees. It seems to imply this is sinister, and that people are being directly and personally ‘brainwashed’ into thinking something they would not otherwise think.
The fact is that the very media they seem dead set against is called ‘social’ media. It is social. It is not the case that someone receives a message that they and only they see. If someone sees a message in their feed they may like it or comment on it or even share it. Others may comment on it. A discussion may ensue.
My recent research into the future of media makes it abundantly clear that people don’t just receive messages in a vacuum. They aren’t sitting at their laptops waiting to be programmed into thinking something untrue is true, or into believing something that until that moment they did not believe at all. In fact, whether it is advertising or editorial content, people tend to share it with others to solicit their opinion, intentionally searching for confirmatory signs from others that it is valid, true, real. If anything, it is the community around a person that authenticates stories and messages for them, not the sender or the content of the message itself.
Not since Vance Packard’s The Hidden Persuaders have people fallen victim to such an un-evidenced, spurious theory about the power of communications to control our minds. First published in 1957, the book emphasised the dangers of consumer analysis: using psychological research to play on people’s hidden fears and anxieties to drive their buying impulses. Post-war TV broadcasting was becoming popular in the United States and Britain, and television sets were becoming commonplace in homes, businesses and institutions. During the 1950s, television was the primary medium for influencing public opinion, and hence it became the target of demonisation in this way.
Today, twenty years into internet communications, social media is seen as the primary medium for influencing public opinion, and it is once again the target of demonisation in a similar way.
I realise that the audience for this column is not necessarily made up of persuadables: many people in media believe that the area in which they work is responsible for micro-targeting misleading messages at people who voted differently to them. But for others, it will not have passed them by that this documentary was screened on Netflix - a media channel that was deified for its smart use of data-driven targeting when it launched with House of Cards. Millions of column inches and many platforms were dedicated to presentations about the sophisticated ways the content had been tagged, and about how people were seeing different versions of the trailer for the new programme depending on the algorithm.
I certainly remember the director of global communications, Joris Evers, saying: “There are 33 million different versions of Netflix”. What a triumph. What an innovation. Not a single media outlet or conspiracy theorist at the time suggested that that kind of micro-targeting could feed fear and become a very dangerous idea.
So it is very possible that it is not the social media platforms and data-scientists and psychological profilers who are using fear and anger to gain traction with their online targeting; we may come to find that it is the very people who are attacking and criticising them that are the actual fear-mongers; that it is they who are the hidden persuaders after all.
Tracey Follows is the founder of futures consultancy Futuremade