If TV data's future is hybrid, how transparent will it be?
After a trip to the asi European Television Symposium earlier this month, Research the Media's Richard Marks believes, in principle, that TV audience research really does have a hybrid future - but how will it work in practice?
Earlier this month I had the pleasure of chairing the TV measurement sessions at this year's asi European Television Symposium, taking place in a tranquil Venice. Nearly 300 broadcasters, planners and researchers gathered to debate the future of TV audience measurement, a topic close to this column's heart.
The asi seminar has been an annual fixture for over two decades and whilst other events have come and gone, the asi continues to grow and thrive as the highlight of the research calendar for those designing and using TV currencies. So what did I make of it all, both up close on the stage as I introduced and questioned speakers, and from the audience during other sessions?
Well, as I commented at the event itself, the best analogy I can give for the current state of TV audience measurement is Schroedinger's Cat. This is the thought experiment that sits at the heart of thinking on quantum physics and the theory of parallel worlds. A cat is placed in a sealed box with a device that may or may not kill it. So how do we know whether the cat is alive or dead, given that we cannot empirically observe its current state inside the box?
The only solution is for scientists to accept that the cat is simultaneously alive and dead: two possible outcomes, two different universes, co-existing at once. I won't delve into the details too much; Sheldon Cooper from The Big Bang Theory gives an explanation that is more at my intellectual level.
Suffice it to say that audience measurement also seems to be simultaneously in rude health, innovating to measure across platforms, and at death's door, a King Canute about to be drowned by a tidal wave of big data.
Which outcome ensues will depend very much on the work going on at the moment. Significantly, three of the TV Joint Industry Surveys represented had tenders under way, all involving the use of data across TV platforms: BARB's Dovetail fusion initiative in the UK, a similar initiative in the Netherlands and a full tender process in Sweden.
Across the sessions, if one theme emerged it was the need for hybrid approaches, combining industry data derived from research samples with server and machine-generated data. Big Data has a lot to offer but the trading currencies represent people, not machines and devices, and the conference reflected a growing consensus that hybrid initiatives involving fusion are the way forward to achieve both granularity and cross-platform, cross-device measurement. If Schroedinger's Cat lives it will be as a cross-breed.
However, aspiring to do something and actually doing it are two different things, and to be frank what wasn't yet evident was the real detail on how this would be done in practice. This could be due to reticence over Intellectual Property considerations, or more likely because the thinking is not yet fully developed. The true state of the audience measurement cat's health will depend on how these lofty aspirations towards hybrid measurement are actually acted upon.
Compared to past asi conferences, where the relative merits of fixed versus portable meters and of meter panels versus Return Path Data have seen heated debate, an industry consensus is emerging about where television currency research wants to be.
However, getting there is very much a work in progress. Some important steps forward were in evidence. Throwing caution to the wind, BARB did a successful live demo of the approach they will be using for tablet measurement, whilst ESPN's Blueprint project in the US is successfully combining PPM, Set Top Box and online panel data. The German currency is being fused with online streaming data and Facebook unveiled a fusion of their social media data with Touchpoints to show campaign reach across TV and Facebook combined.
However, two of the best-received papers across the conference derived from qualitative ethnographic studies of TV viewing, hugely ironic for what is primarily a quantitative conference.
The BBC showed how people take decisions about what to watch, revealing a clear hierarchy from live channels in the EPG through to VOD as a last resort. It's a study well worth tracking down.
Meanwhile, Best Paper was awarded to ThinkBox. Updates on their Screenlife initiative have appeared on many platforms over the last year or so, but here the focus was on what happens when people are denied live television. Respondents (presumably with some financial incentive) tried to survive without live TV for a few days and kept video blogs about it.
Denied the live experience, rather than diving gleefully into a sea of on-demand content available whenever they wanted it, most wandered around like junkies going cold turkey, lamenting the conversations they could not have with friends; in one extreme case a respondent sat staring at a blank television, describing what they were missing.
As Neil Mortensen put it, on demand is a box of chocolates, but live TV is our daily food.
So why did these papers do so well? Mainly, I suspect, because currency measurement does have something of an 'under construction' sign over it at the moment, with most of the emphasis, understandably, on methodologies rather than outcomes.
Meanwhile, qualitative research is much more engaging to present, particularly using video and a talking heads format. However 'qual' only gets you so far. It helps you to understand what the TV currency is saying, but it is quantitative methods that give that currency a voice in the first place.
Representatives of the three main global TV Audience Measurement suppliers, Nielsen, Kantar Media and GfK, took part in a panel debating their future, faced with new entrants like comScore and Rentrak, and the possibility of disintermediation as clients access Big Data directly.
What struck me from the resulting discussion is how hard the agencies have to work to create some clear blue water between themselves in a world in which research thinking is converging.
GfK managed to create some positive vibes within the room by claiming that they are proud to be 'researchers' as opposed to 'Data Investment Managers'. However, that retro vibe may play better with us researchers in the room at the asi than in client boardrooms and on Wall Street.
Looking at the future of the media research agencies is a project I am currently engaged with, speaking to both clients and agency researchers ahead of a paper I will be delivering at the Media Research Group Conference at BAFTA in November. I hope to summarise it in next month's column.
As for the asi Symposium, there was a lot of agreement on a general plan for keeping Schroedinger's Cat alive, but less concrete detail on the actual steps needed to achieve it. The signs are good, but the cat maintains its duality of existence as we contemplate a likely future of Black Box solutions.
Richard Marks is the Director of Research the Media.