Just how effective are effectiveness studies?

The RAB should feel proud to have made a strong case for radio’s role in media effectiveness, says David Brennan. However, as an industry, we are going to have to rethink our approach to measuring effectiveness – and fast.

Another week, another effectiveness study. Two weeks ago it was radio’s turn to tell us it is one of the most effective pound-for-pound media channels in terms of delivering ROI.

The study – ‘The ROI Multiplier’ – shows strong, consistent returns across a large number of campaigns for most media channels – but especially radio – and provides further insight via Radio Gauge to identify ways advertisers can optimise their returns (the main one seems to be ‘invest 20% of your budgets in radio’).

Don’t get me wrong; I think the RAB has commissioned an impressive piece of work, based on plenty of cases, millions of data points, the support of the major media agency groups and analysis conducted by the reputable Holmes & Cook.

Not only that, but it emerges with the dream headline that radio provides a higher ROI than all other media channels apart from TV, delivering returns of almost eight times the media investment.

Of course, that is part of the problem; the results don’t tally with the multitude of other effectiveness studies out there, and we are left with that residual sense of doubt (can it be true?) and uncertainty (what does it all mean?).

Am I right to feel so cynical? After all, I’m guilty myself, having commissioned a number of effectiveness studies when I was at Thinkbox, and I still feel their findings are valid and the insights they produced are genuinely helpful to media planners.

They were popular with advertisers and agencies seeking to justify their media investments and, in my humble opinion, produced a fair amount of ROI themselves. No wonder there have been so many rival studies hitting the headlines ever since.

The emergence of large-scale effectiveness studies like these is down to the rapid advances made in statistical analysis and the availability of data over the past couple of decades. This is smart data at its finest: focusing on the most important metrics (sales, profit, investments, competitive activity, and so on) and applying rigorous statistical analysis to explore the relationships between them.

It is now not uncommon to be presented with effectiveness studies covering hundreds, if not thousands, of campaigns across a range of market sectors. Plus, of course, they fit with the zeitgeist; anything that can measure and optimise ‘value’ has to be valued itself in these procurement-driven times.

So, the RAB should feel proud to have made a strong case for radio’s role in the effectiveness mix. Like all of the ‘traditional’ media, so easily written off just a few years ago, radio appears to punch above its weight when the powerful forces of multiple linear regression analysis evaluate its performance in the cold light of data.
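
For the uninitiated, it is worth seeing how simple the core machinery can be. The following is a minimal sketch – invented data, invented coefficients, none of it from the RAB study or Holmes & Cook’s models – of a regression that attributes weekly sales to spend by channel and reads a revenue ROI off each fitted coefficient:

```python
import numpy as np

rng = np.random.default_rng(42)

# --- Hypothetical weekly data for one brand (two years) ---
weeks = 104
tv_spend = rng.uniform(0, 200, weeks)      # £k per week
radio_spend = rng.uniform(0, 50, weeks)
press_spend = rng.uniform(0, 80, weeks)

# Simulated "true" market response: each £1 of spend returns some
# incremental revenue, on top of base sales plus noise. In a real
# study these effects are exactly what the modellers must estimate.
sales = (500
         + 4.0 * tv_spend
         + 7.0 * radio_spend
         + 2.0 * press_spend
         + rng.normal(0, 40, weeks))

# Fit: sales = b0 + b1*TV + b2*radio + b3*press, by least squares.
X = np.column_stack([np.ones(weeks), tv_spend, radio_spend, press_spend])
coeffs, *_ = np.linalg.lstsq(X, sales, rcond=None)

# Each slope is the incremental revenue per £1 of spend on that
# channel – in other words, a crude revenue ROI.
for name, b in zip(["TV", "radio", "press"], coeffs[1:]):
    print(f"{name}: estimated revenue ROI ~ {b:.1f}")
```

Real econometric models are far richer than this – adstock carry-over, diminishing returns, seasonality, price and competitor activity all feature – and the choices made there are precisely why different studies can reach such different conclusions.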

But…

As an industry, I think we are going to have to rethink our approach to measuring media effectiveness soon; as these studies become more common – and, arguably, more diverse in their findings – we get further away from a ‘definitive’ view of how effectiveness actually works.

For example, the RAB’s ‘ROI Multiplier’ data is biased towards campaigns featuring radio, which appeared in 464 of the media campaigns analysed; the equivalent figures for TV (122), press (122), outdoor (41) and online (12!) were much lower. Already, issues of selectivity and representativeness are raised, although, to be fair to the RAB, all effectiveness studies are, by definition, based on self-selecting samples.

The sales ROI figures quoted in the RAB study are generally much higher than most equivalent studies report – even if we set aside the strong skew towards retail campaigns, which tend to produce higher than average advertising ROI figures. They are much higher on average than those quoted in the IPA databank of effectiveness awards entries, for example.
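
Part of the explanation, it turns out (see Louise Cook’s and Mark Barber’s comments below), is the definition of ROI: the RAB figures are revenue-based, while the IPA Databank figures are profit-based. A quick worked example – the numbers, including the 30% margin, are my own invention – shows how far apart the two definitions sit for an identical campaign:

```python
# Hypothetical campaign: £1m of media spend driving £5m of incremental
# revenue, with a 30% gross margin. All three numbers are invented.
spend, revenue, margin = 1.0, 5.0, 0.30    # £m / proportion

revenue_roi = revenue / spend              # revenue returned per £1 spent
profit_roi = (revenue * margin) / spend    # profit returned per £1 spent

print(revenue_roi)   # 5.0 – the kind of figure a revenue-based study reports
print(profit_roi)    # 1.5 – the same campaign measured on profit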

But perhaps most significantly, the results show a very different pattern to most other recent studies, which immediately raises the question of ‘who to believe?’ I reckon the common answer amongst marketers and media agencies will be ‘whoever validates the decision I was going to take in the first place!’ Not that such an outcome is a bad thing – I believe most media insight is based on post-rationalisation – but in terms of changing hearts and minds, such analyses have a natural limit.

There is another issue with effectiveness studies: to those of us without a degree in statistical analysis, they are just a little bit too ‘black box’. Sure, they produce numbers which are reassuringly precise and consistent, and more often than not tend to reflect how the market works.

They are based on increasingly high quality data sets (no more of those 8-brand online effectiveness studies these days, thank goodness) and are overseen by experts who can produce highly credible findings. But we don’t really understand how they are produced and, more importantly, they rarely tell us why effectiveness is just so damn elusive.

The more effectiveness studies that are published by individual media channels, the less traction they will create within the industry overall. That prediction is based on the following equation: different metrics + different conclusions = apathy (confusion).

Wouldn’t it be great if all the interested parties could get together and create a combined effectiveness study that aimed to not only offer definitive (and independent) proof of different media channels’ contribution to payback, but also provided valuable insights into how that payback could be optimised?

Of course! We’ve already been there, in a sense. In 2007, Les Binet & Peter Field provided us with their impressive analysis of almost 30 years’ worth of IPA Advertising Effectiveness Award entries, published as ‘Marketing in the Era of Accountability’, possibly the king of effectiveness studies.

It provided a feast of knowledge and more than a few challenges to traditional marketing practice. But the IPA database is based on the ‘best of the best’; effectiveness studies such as RAB’s ‘ROI Multiplier’ have the advantage of looking at a range of brands and campaigns, with a more consistent dataset.

The meta-meta-analyst in me wants to take these thousands upon thousands of campaigns that have been analysed in this way, reflecting all media channels equally and fairly, and see what happens when the data shakes out. Two things I predict:

1. Most of the media channels written off over the last decade or so will show positive returns on investment – indeed, such a study would boost advertiser confidence and investment in general.

2. It’ll probably never happen – I doubt a single media channel could fund something so ambitious, and there are few signs of media owners working together for the good of the industry in general.

In the meantime, let’s think about what this plethora of effectiveness studies tells us about the big issues that face the way media works and how advertising payback can be optimised. Then, when we realise they can only take us so far, let’s think of a new way of doing them, so that the industry benefits from an equitable, comprehensive, long-term and wide-ranging analysis into what really drives advertising effectiveness.

Louise Cook, Managing Director, Holmes & Cook, on 08 Nov 2013
“I totally agree with Dave that, as an industry, we seem to be getting further away from a definitive view of how communications work. Hardly surprising when the goalposts keep moving so fast! But looking back over my 25 years in the industry, we have made staggering progress. So many fascinating insights have been unearthed about how consumers receive and act upon different types of advertising. When we finally produce a general theory of communication, it will arise from the synthesis of all this work. I believe that studies like the recent RAB ROI Multiplier study, because they are based on multi-agency and multi-media data, will have an important part to play in this.

When the RAB commissioned this study, none of us had any idea what the results would look like. It was actually incredibly brave of them to fund such a large-scale project, when the result could have been “spend less on radio”. As researchers, I can tell you, our hearts were in our mouths all the way through the data collection and processing stage. No-one was more relieved than us to see radio more than justifying its position on the media plan. We were also extremely conscious that the results of this study were likely to be picked apart as soon as they hit the street, and that the reputation of a “reputable agency”, as Dave so kindly refers to us (thank you Dave – we love you), would be trashed overnight if they weren’t found to be robust.

Meta-analysis is a very powerful technique. It is designed to investigate where the truth might be, i.e. to deal with a range of types of bias. So it did allow us to adjust for the things which have been raised as issues – for example, the number of retail cases. It also allows us to adjust for the different modelling methodologies used by the participating agencies. Thirty observations is considered an adequate sample in meta-analytic terms; we had more than seven times this for one part of the RAB analysis, meaning the results are drawn from what is genuinely a very substantial study. Our ROIs are higher than the IPA cases because they are revenue-based, not profit-based. The relativity between sectors is, though, very similar.

Dave flags up another critical issue for effectiveness measurement – it is difficult for those without a degree in statistical analysis to see its findings as anything other than black-box output. As more and more opportunities arise to conduct studies of this complexity, it does demand that we develop good working partnerships between the expert technician and those on the ground – relationships where we trust one another. With the RAB we really did achieve this. We both made sure that what Holmes & Cook had done was fully understood at every stage of the project. And, proof of concept: it was the RAB, not us, who took the technical results and wrote up the findings in the context of existing knowledge. We of course checked there were no over-claims. Oops – that doesn’t sound very trusting; I meant typos!

We always put all our models through the appropriate statistical hoops to check their validity, but for me the final and most important ratification of any results is whether they accord with real-world experience. That so many companies are already dedicating 20% of their media budgets to radio shows that 20% is far more than a fanciful extrapolation from a bit of modelling. So, whilst this work has brought something apparently surprising to everyone’s attention, it has actually only shown that something which was already a reality for some companies really works.”
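
For those of us without the statistics degree, the pooling step Louise Cook describes above is worth a concrete illustration. The sketch below uses textbook fixed-effect, inverse-variance weighting – a standard meta-analytic method, not necessarily the exact scheme Holmes & Cook applied – with invented ROI estimates standing in for the agency-supplied data:

```python
import numpy as np

# Invented revenue-ROI estimates for one channel, as if supplied by
# five econometrics agencies, each with a different standard error
# (different methodologies yield different precision).
rois = np.array([6.5, 8.2, 7.1, 9.0, 7.6])
ses = np.array([1.2, 0.8, 1.5, 2.0, 1.0])

# Fixed-effect meta-analysis: weight each estimate by the inverse of
# its variance, so the more precise estimates count for more.
weights = 1.0 / ses**2
pooled = np.sum(weights * rois) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"Pooled ROI: {pooled:.1f} (standard error {pooled_se:.2f})")
```

Adjusting for skews such as the over-representation of retail cases would typically be done one level up, via meta-regression on study-level covariates – which is broadly the kind of adjustment Louise describes.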

Simon Redican, Managing Director, Radio Advertising Bureau, on 08 Nov 2013
“We certainly agree on the central importance of proving to clients what return they are getting on their marketing investment. At the RAB we applaud Thinkbox's work in this area and have been sharing the findings over the last couple of years.

I'm just a little troubled by the implication in your first paragraph that such surveys are now almost weekly occurrences. I would contend that there are more sectors than not where there is a complete absence of any ROI evidence.

A more accurate context would be that such reports are few and far between. I agree that an overall ROI approach would be desirable and would be happy to discuss this with the other industry bodies. I am more confident than ever that radio would assert its efficiency in any such study.

However, in the absence of such an ambitious, overarching study, I believe it is the responsibility of all media sectors to demonstrate the likely return they can deliver for their customers. Such studies stimulate long overdue debate in this area and help focus the conversation onto true effectiveness measures rather than the debate about audience metrics and relative scale of media which too often distract our industry. That's why the radio industry invested significant sums and worked with the main econometrics agencies and sought the advice of the IPA to produce the definitive picture for our medium.

The fact is, there are relatively few such studies, and the amount invested in them is dwarfed by the sums all sectors invest in audience measurement research. Interestingly, the commonalities in the findings often mirror those of other reports and, I believe, help validate the studies involved. ‘Radio: The ROI Multiplier’ echoes Thinkbox's Payback 3 research in showing TV delivering the highest ROI, with radio second – a headline which Thinkbox were keen to share with their Twitter audience.

Let's hope publication of more studies in this area gets us all talking about what really matters and that is helping clients get the best return from their marketing investment.”

Mark Barber, Planning Director, RAB, on 08 Nov 2013
“It's always interesting to read David's perspective on media research studies, especially since – as he reminds us – he was responsible for commissioning many himself.

I don't disagree with the opinion that a plethora of different studies with different conclusions risks creating stasis IF they are all attributed the same value. However, I would counter that some research studies are 'more equal' than others and that a higher value should be placed on their findings.

Now you won't be surprised to hear that I believe the RAB's ROI Multiplier study to be one of the 'more equal' ones that demand to be given greater credence. Here are a few elements that differentiate it from the rest (and that address the concerns raised about it here):

1. The results are based on media ROI data supplied by nine econometrics agencies, making it the biggest and broadest-ever database of radio ROI results in the world. The additional benefit is that, because the data is sourced from a range of different companies, all of whom use slightly different modelling techniques, our analysis strips out any particular skew that may exist within a given analytical approach.

2. The headline finding of the RAB study closely mirrors that of Thinkbox's Payback 3 study – in that TV delivers the highest ROI and radio the second highest. The RAB study features Revenue ROI data whereas the Thinkbox study and IPA Databank feature Profit ROI – hence the difference.

3. Our study is about the ROI impact of radio within the media mix and therefore focuses on campaigns that feature radio. Yes, we have more individual cases for radio than other media but media spend analysis demonstrates that our data set is clearly representative of the wider market, overall and by sector. The way agencies report on different media also varies, meaning that some TV campaigns were designated to have lasted a year, across which time they were supported by a number of individual radio campaigns.

4. Unlike many comparative studies, the RAB has been completely transparent in publishing the number of cases that make up each ROI figure at every stage of the analysis – allowing people to draw their own conclusions about the robustness of the findings.

5. Like all RAB research studies, an important element of the ROI Multiplier is the practical outputs, exploring what the data can tell us about best practice media planning and creative execution to optimise both radio ROI and overall campaign ROI for advertisers.

Don't get me wrong: I can understand why the initial reaction of some towards the '20% share for radio' finding is incredulity. But this is symptomatic of just how deeply perceptions of radio's limitations are ingrained, based on its current share of ad spend rather than the proven effectiveness of the medium within the mix. In this regard, the findings of the RAB report are surprising and challenging but, based on the unique breadth and approach of this study, advertisers can be confident that they offer a credible and robust perspective on the value of radio within the mix.

We commissioned this study to address our customers' questions about ROI from radio. Ultimately, they will be the ones who decide whether the approach we have taken makes the findings 'more equal' than others...”
