
All this talk about big data makes me feel uncomfortable


Dominic Mills

I wouldn’t go as far as the participant at MediaTel’s The Year Ahead in dismissing big data (it seems obligatory to cap up the B and the D – but I refuse) as ‘what we used to call research’. But this endless talk about it makes me feel uncomfortable.

Let’s start with the name, which is wrong-headed. Data, by nature, are small and granular. So when people talk about big data, what they really mean is lots of individual bits of data. It’s a volume thing. Now, thanks to technology and new devices, tons of it is available.

As a term, big data is now a jargonistic cliché, lazy shorthand. It’s over-used, mis-used and abused. It’s become the prerogative of people who peddle snake oil in the form of hugely expensive consultancy and software, luring in the suckers with promises of unrivalled customer insight and the ability to create organic growth. A bit like this IBM infographic.

Often these people don’t really understand it themselves, but they talk the talk, repeat the clichés and know just enough to fool the rest of us. If I read another piece about how big data is going to change this or that industry, I’m going to scream.

If people want a cliché, I prefer “data is (strictly speaking ‘are’) the new oil”. I like this because it seems to me to best reflect the use of data. Not as an end in itself, and nor as black gold, but as a lubricant to smooth and improve interactions between marketers and consumers. Think engine oil.

The route to this new era of data is not hard to trace: rapid falls in server costs and the advent of the cloud make storage easier and cheaper. This has been matched by more touchpoints with consumers and more collecting points. Add on the ability to access and process data faster through better connectivity and new analytical software packages (some of them free, like HaLoop and MongoDB), and off you go.

But as former dunnhumby boss Martin Hayward points out, many organisations still lack the ability to join up their data, let alone do anything meaningful with it.

The trick is not about finding the right answers, but asking the right questions.

And that’s if you accept that the models they use – algorithms to predict our behaviour – will actually work. As John Lahr of the New York Times notes, many of these models are used by hedge funds – and they singularly failed to predict the crash of 2007-8.

So, while the promised land may beckon, marketers still have at least three major hurdles to overcome. Number one is privacy and ownership, and the other side of that coin: trust.

Take Kraft’s intelligent vending machine, developed with Intel. It uses sensors as customers approach to determine their sex and age, and then recommends what it thinks is the most appropriate product. How intrusive and unnecessary, you might think. But what if it went a stage further and used face-recognition software – drawing on pictures of me publicly available online – to offer a personalised welcome or promotions based on my previous purchases?

Is that their data? Or mine? Or is it a third party’s? And is it a breach of privacy? Complicated, huh? And still unresolved. At the very least, misuse or abuse that knowledge about me, and any trust is gone. As a recent JWT survey shows, this is an area where marketers have to tread very carefully.

The second hurdle is the so-called averaging of data. This produces the phenomenon which you can call “I don’t know you, but I know your type”. It’s what cookies do. The end result is a sort of spurious intimacy, correct up to a point but ultimately wrong in that it fails to capture your real essence or soul – an essence that may never be captured because there is no data for that.
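To make that "I know your type" mechanism concrete, here is a minimal, purely illustrative Python sketch of cookie-style segmentation. Every category, rule and label in it is invented for the example; no ad network works exactly this way. The point is that the output is a plausible type rather than the actual person behind the cookie.

```python
from collections import Counter

# Toy mapping from browsing categories to audience segments.
# Entirely invented: not any vendor's real taxonomy or rules.
SEGMENT_RULES = {
    "football": "sports fan, male 25-44",
    "golf": "sports fan, male 45+",
    "recipes": "home cook, 25-54",
    "finance": "affluent professional",
}

def infer_segment(visited_categories):
    """Reduce a browsing history to its most frequent category's stereotype.

    This is the 'averaging' problem: the result is a type that fits
    many people reasonably well and no one person exactly.
    """
    if not visited_categories:
        return "unknown"
    top_category, _ = Counter(visited_categories).most_common(1)[0]
    return SEGMENT_RULES.get(top_category, "unclassified")

# A football fan who once researched a golf weekend can come out a golfer.
history = ["golf", "golf", "golf", "football", "football"]
print(infer_segment(history))  # -> sports fan, male 45+
```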

I checked who Google thinks I am: it’s correct – but only up to a point because it’s put me in the wrong age bracket (I’m older than it thinks) and, bizarrely given the amount of time I spend on the Fulham website, lists golf as my major sporting interest. Four out of ten, I’d say.

The last is the cultural gap between corporate technology departments and their marketing counterparts. To call it a gap might be an understatement. But unless it’s bridged effectively, opportunities will be wasted. I mean, have you ever talked to a data scientist?

The men from McKinsey suggest the creation of a new role – the Chief Marketing Technology Officer – an individual who can speak the language of both fluently.

Hmm. That may be optimistic. I don’t think such people exist yet. I suspect someone who can translate effectively may be the best they can hope for. And even that may take some time, because there appears to be an acute shortage of people with sufficient data skills, let alone ones who combine those with a knowledge of marketing.

Until then, when you meet someone who claims to be a big data guru, treat them with great suspicion and keep your chequebook in your pocket. (Good news: Big Data Guru on Twitter has just the one follower).

And finally, I’m away travelling (Indo-China, if you’re interested) for a couple of weeks. But I’ll be back on Newsline in the week commencing February 25th.

Monday, 11 February 2013

As I pointed out at the Future of Media Research seminar, it was dunnhumby who were probably the first organisation to throw what has become known as ‘big data’ at the market and media research industry – and they did it in quite an aggressive way.

Their publicly stated aim was to use the Tesco loyalty card information to make Superpanel redundant and, at the very least, to make the likes of BARB and the NRS significantly less influential in terms of the allocation of media expenditure.

Their argument was simply that big numbers must be more accurate and informative than much smaller sample-based surveys, even if, on the face of it, the latter were more obviously representative of the total universe. Richard Marks’s renaming of ‘big data’ as ‘deep data’ is very helpful in understanding the difference and, perhaps, how they work best together rather than as alternatives to each other.

Deep data gives great depth of understanding of behaviour (frequency of purchase, combinations of purchase, even sequence of purchase), but it does so at the expense of insight as to who is doing that buying and what else they do when they aren’t operating within the sphere being measured. Market, or indeed media, research may only have some of that granularity, but it usually paints a much broader picture of the shopper/viewer/reader, etc.
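To illustrate the distinction, here is a hedged Python sketch of what ‘deep data’ from a loyalty-card transaction log can answer (frequency and combinations of purchase) and what it cannot (who the shopper is, or what they do elsewhere). The log and field names are invented for the example.

```python
from collections import Counter
from itertools import combinations

# Invented loyalty-card transaction log: (card_id, basket of products).
transactions = [
    ("card_001", ["milk", "bread", "beer"]),
    ("card_001", ["milk", "nappies"]),
    ("card_002", ["beer", "crisps"]),
    ("card_001", ["milk", "bread"]),
]

# Frequency of purchase: deep data answers this directly.
visits = Counter(card for card, _ in transactions)
print(visits["card_001"])  # -> 3 visits

# Combinations of purchase: which products co-occur in a basket.
pairs = Counter()
for _, basket in transactions:
    pairs.update(combinations(sorted(basket), 2))
print(pairs.most_common(1))  # -> [(('bread', 'milk'), 2)]

# But nothing in the log says who card_001 is, what they read or watch,
# or what they buy elsewhere - the broader picture that sample-based
# research still provides.
```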

In classic marketing terms, if I want to get my existing customers (regardless of who they are) to buy more from me, then ‘deep data’ can be very useful. But if I’m trying to tempt someone else’s customers into my shop (or newspaper, or TV channel), it can tell me pretty well nothing. Traditional research has the potential to be much more enlightening when winning new customers is key.

In the end you pays your money and you takes your choice but my gut feeling is that the fashion for big/deep data has followed the fashion for more and more short term decision making and research budget cutting – it may have even helped them firmly along their way. It’s a downward path and a dangerous one to take.

Richard Bedwell
Consultant
