
Connected TV Experience: One screen or two?

The Media Native

A new series of blogs about the broadcast industry, narrated by David Brennan

The morning session at this year’s MediaTel Group Connected TV Experience conference was in danger of breaking out into violent agreement at times, but was all the more informative for it, as a new consensus is emerging regarding what ‘connected TV’ actually means and where the opportunities lie.

Expectations have been adjusted to the new reality of a challenging advertising climate, entrenched TV brands and demand for VOD content following a very different curve from the one predicted.

That said, there was a guarded optimism regarding how connected TVs (or preferably, smart TVs, but on no account internet TVs, I was informed during the coffee break) can expand and enhance the delivery of content and services to the consumer.

For me, the most interesting debate was around the question of what happens on which screen. To begin with, there was a sense that more and more of the action would be happening via the newly-connected TV sets, although the pace of change might be slow. Only one in eight households claim to have a connected TV, but I hesitate to call it a smart TV, as only one in three of them has it permanently connected to the internet.

While that is happening, other screens might be taking an increasing share of the living room action. It was pointed out more than once that the TV is a shared screen (most of the time), with second and third screens being more personal, so activities such as social networking or shopping are more suited to them. As screens proliferate and orchestrated media content becomes more accessible, it is possible that our smartphones, tablets and laptops may be doing much of the work that smart TVs would have been expected to do.

There was also excitement about those devices becoming smart remote controls and navigation aids. That sounds like a really positive development, although three separate remotes vying for control of the one set would definitely not work in my living room!

About a year ago, when connected TVs were first becoming a reality, I was asked to predict whether the consumer would eventually go for one screen or two; would everything migrate to the connected TV screen or would we be using second screens for most activities beyond viewing?

As a man who feels that having my cake and eating it should be a minimum negotiating position, I said both! My feelings then were that the big screen is primarily about immersive entertainment and anything that gets in the way of that wouldn’t work; so connectivity would mainly be about immediate access to on-demand TV content and simple, non-intrusive interactivity (voting, liking, saving etc.).

I also predicted some social networking, with communities based around the programme content, might work on the big screen, but that would be it. Second screens would be for most other social activities and deeper or more open-ended response interactions, as well as stuff completely unrelated to what was on the telly. Most of what I heard yesterday reinforced those views.

One of the panellists in the afternoon session also emphasised the growth of second screen activity, increasingly aligned to what was being viewed on TV, describing it as “massively disruptive”. I must admit, that got me bridling for the first time all day. It was partly because I’d already heard ‘disruptive’ at least twenty times that day, but also because it is totally inappropriate in the context of how second screens (note the name!) are being used. All of the evidence suggests that they are creating new opportunities for broadcasters and advertisers to generate more engagement, loyalty, conversation and, ultimately, response. Why should we consider that to be ‘disruptive’?

The dictionary has two definitions of ‘disrupt’. The first – “to interrupt the usual course of a process or activity” – doesn’t really do it for me, because I think laptops and smartphones and tablets enhance the flow of what was happening in the first place, often mimicking the ways people already play along with TV content. But I suspect it is the second definition – “to destroy order, or the orderly progression of something” – that the term ‘disruptive’ is generally implying.

It’s a great term for consultants or solution providers to use within their pitch documents but less helpful when applied to a thriving eco-system where, for viewer and advertiser alike, the whole definitely becomes greater than the sum of the parts.

My fear is that a ‘disruptive’ mindset might cause us to ignore the core activity that drives that eco-system and focus on the disorder; after all, it’s happened before!

It was noted several times during the conference that the TV mindset is very different to the technologist mindset, often in the context of explaining the failure of some TV solution or another, and I think ‘disruptive’ is very much part of the technologist mindset.

Most consumers hate disruption and seek to contain it as much as they possibly can. Most innovation is absorbed into existing need states and behaviour patterns, and nowhere is that more true than with television. This became the theme of the penultimate session, which I shall refer to as Tess Alps vs The Technologists, and which gained the biggest laugh of the day when Tess wondered aloud whether the obsession with personalisation (when 70% of TV viewing is shared) was because most technologists might live alone.

The recent Deloitte Study I referred to in a previous blog pointed out that in a third of UK households (I think), there are regularly more screens than scatter cushions in the living room.

Whatever is happening on each of those screens will be significant in itself, and when they can be synced up into an orchestrated media moment, it could be almost magical. Yeah, magical. A much better word than disruptive, and more about what our media is meant to provide.
