
Attention Revolution: guidelines for building a new measurement category

As we build an attention-based audience measurement category, guiding rules around data quality must be followed.

In a recent blog, audience-measurement expert Brian Jacobs talks about emerging measurement in the context of building houses using better bricks.

He suggests that not only does our industry need (and want) a better house, but that better bricks make for better houses, on the sensible premise that if your bricks are sub-standard, your house will fall.

This resonated with me. Our industry is calling for a better house, and attention metrics have been identified as a solution. But, as I predicted, with the attention economy in hyperdrive, plenty of brick companies are popping up. Some are good; many are not.

Why does this matter? More players are good for an emerging category, but only if their solutions are robust and do their job. This is data, not detergent. If a detergent doesn’t do its job, you try a different brand. If attention data is dirty, it infects the systems it feeds and the outcomes are meaningless. Bad metrics will send us backwards, and misinformation will breed confusion and hesitancy about change.

We need to be fussy about proxy models dressed up as attention. We need to be critical of unbelievable claims. We need guidelines on how to compare the capabilities of data models that fuel the attention solutions on offer.

Start by asking your vendor these questions

If a vendor won’t answer them, they might be selling sub-standard bricks.

1. Is the data that powers your model Human?
Only footage of humans, collected via an outward-facing device camera, can tell us whether a human is paying attention. JavaScript executed inside a browser cannot: advanced viewability is still device data, and human data tells a human story that device data simply can’t. Of the views we collect directly from human panels, more than 20% of those with viewability markers ticked have zero attention paid to the ad.
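To make that gap concrete, here is a minimal sketch, in Python, of how a viewable-but-unattended share might be computed from a panel export. The column names and figures are entirely hypothetical, not Amplified Intelligence’s schema or data:

    import pandas as pd

    # Hypothetical panel export: one row per ad view, with a standard
    # viewability flag and gaze-verified attention time in seconds.
    views = pd.DataFrame({
        "view_id": [1, 2, 3, 4, 5],
        "viewable": [True, True, True, False, True],      # viewability marker ticked
        "attention_seconds": [2.4, 0.0, 1.1, 0.0, 0.0],   # human, gaze-verified
    })

    # Restrict to views the device called viewable, then ask how many
    # of them a human actually looked at.
    viewable = views[views["viewable"]]
    unattended_share = (viewable["attention_seconds"] == 0).mean()
    print(f"Viewable but zero attention: {unattended_share:.0%}")

The point of the exercise: the device-side viewability flag and the human attention signal can disagree, and only the human signal can say which views were actually seen.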

2. Is the data collected in a privacy-safe manner?
Trust is paramount when dealing with human data. Demand evidence of meaningful opt-in and opt-out for collection participants, plus data security and retention policies. What is essentially malware, dressed up as gaze-based collection, is already out there. We are moving audience measurement quite literally to the eye of the beholder, and we have an ethical responsibility to recruit a fully informed panel whose data is protected and cherished.

3. Is it collected in a natural environment?
Humans need a familiar environment to exhibit natural behaviour; in unfamiliar environments they behave unnaturally (and concentrate harder). Measurement is more precise when a viewer is exposed to advertising in a completely natural setting, so ask whether the attention data is:

  • collected on real (not just realistic) platforms,
  • collected via passive cameras (not gaze-tracking goggles),
  • calibration-free (not interrupted by models that need calibration check-ins for accuracy), and
  • collected in the wild (not in a lab).

4. Is the gaze estimation Accurate?
Gaze estimation reveals where a person is looking, and an accurate model needs extensive training data. Young models, or models without ongoing access to facial footage, will be less accurate. Demand transparency, granularity and rigour; without them, how will you know whether attention is being paid to the ad or not?

Vendors are unlikely to discuss the specifics of their millimetre error bounds (and they might not be able to if the attention data is third party), but ask specific questions around:

  • Granularity – can the model detect rolling eyes-on-ad, not just eyes-on-screen (see the sketch after this list)?
  • Continual improvement – is the model continually improved from a stream of varied data, particularly as media platforms regularly evolve their product offerings?
  • Individual-level data – is the model trained on individual-level views or on aggregated data?
  • Collection conditions – was the data collected in a natural setting, and to what extent are edge cases included?
  • Validation – is the model subject to validation, and how are inherent biases in the model mitigated?
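On the granularity point, here is a minimal sketch, with hypothetical screen and ad coordinates, of the difference between eyes-on-screen and eyes-on-ad. The idea is that an estimated gaze point must be tested against the ad’s bounding box, not just the screen, and that the box must be recomputed every frame as the ad scrolls:

    from dataclasses import dataclass

    @dataclass
    class Box:
        """Axis-aligned rectangle in screen pixel coordinates."""
        left: float
        top: float
        right: float
        bottom: float

        def contains(self, x: float, y: float) -> bool:
            return self.left <= x <= self.right and self.top <= y <= self.bottom

    def classify_gaze(x: float, y: float, screen: Box, ad: Box) -> str:
        """Map one estimated gaze point to an attention state."""
        if not screen.contains(x, y):
            return "off-screen"
        return "eyes-on-ad" if ad.contains(x, y) else "eyes-on-screen"

    screen = Box(0, 0, 1170, 2532)  # e.g. a phone display
    ad = Box(85, 1400, 1085, 1900)  # ad slot, re-derived per frame as the feed scrolls
    print(classify_gaze(600, 1650, screen, ad))  # -> eyes-on-ad
    print(classify_gaze(600, 300, screen, ad))   # -> eyes-on-screen

A model that can only report eyes-on-screen collapses these two states into one, which is exactly the granularity question above.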

5. Has the model been proved to be consistent across boundary conditions?
An attention product is not meaningful if its accuracy is not consistent across conditions. Only when its baseline holds over a range of conditions can it be used predictively and at scale. For example, ask whether the model is consistent across different countries, platforms, formats, devices, panel types and time periods; a simple slice-by-slice check is sketched below.
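One way such a consistency check might look, with hypothetical accuracy figures throughout: score the model on each held-out slice of the validation data and flag any slice that falls well below the overall baseline.

    import pandas as pd

    # Hypothetical validation results: model accuracy per held-out slice.
    results = pd.DataFrame({
        "country":  ["AU", "AU", "UK", "UK", "US", "US"],
        "platform": ["feed", "video", "feed", "video", "feed", "video"],
        "accuracy": [0.91, 0.89, 0.90, 0.88, 0.92, 0.71],
    })

    baseline = results["accuracy"].mean()
    by_slice = results.groupby(["country", "platform"])["accuracy"].mean()

    # A model that is only accurate in some conditions cannot be used
    # predictively at scale, so flag slices well below the baseline.
    outliers = by_slice[by_slice < baseline - 0.10]
    print(outliers)

In this illustration the US/video slice would be flagged: the model’s baseline does not hold there, so results in that condition should be questioned rather than trusted.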

Question vast differences in results. When results are vastly different from what you expected, it usually means they are not right.

Back in the first century BC, Marcus Vitruvius Pollio, a Roman civil engineer, wrote that the practice of architecture should be based on guiding rules and principles, both ideological and practical. He formulated the rules of Order, Arrangement, Eurythmy, Symmetry, Propriety and Economy, which would prevent structures from ‘falling to decay’. Vitruvius’ advice has been followed for centuries, with his work still included in foundation architecture courses at universities around the world.

As we build an attention-based audience measurement category, guiding rules around data quality must be followed.

As an architect in the attention economy, I believe that if we adhere to the rules of Human, Privacy Safe, Natural, Accurate and Consistent, the attention economy will endure.

And we might just have a long-standing Colosseum on our hands.

Professor Karen Nelson-Field is a media science researcher and founder of Amplified Intelligence. Attention Revolution is a monthly column for Mediatel News in which she explores how brands can activate attention to measure online advertising as well as build a better digital ecosystem.

Email Karen: If you’d like to respond to this article or ask Prof Nelson-Field a question, please email karen.nelson-field@mediatel.co.uk. Please note this email is monitored by Mediatel News.

Karen Nelson-Field, CEO, Amplified Intelligence, on 10 Dec 2021
“Couldn’t agree more. This is why we only use iOS, not Android, where the cameras (in 90% of phones) are substandard. Glasses, while good cameras, are not scalable nor natural.”
Markku Mäntymaa, CEO, Founder, Viomba Attention Martech, on 04 Nov 2021
“Thank you Karen for these great guidelines on attention-related matters. These points are critical to check whenever any self-respecting marketer or brand is looking for visual-attention analytics. One thing I would clarify further: there are fundamental differences between using any type of camera for visual tracking and using an actual eye-tracking device. Such a device is a high-performance sensor, not made merely to detect whether the eyes are open and appear to gaze somewhere. Cameras can’t physically deliver the required tracking precision and robustness (e.g. ethnicity, glasses, different eye conditions, makeup, lights; you name it). Collecting accurate enough eye-tracking data from panelists requires non-disturbing eye-tracking devices, calibrated to the user’s eyes and equipped with custom projectors of near-infrared light that penetrates deep into our eyes, special image sensors and optics, as well as complex data processing with custom algorithms. Smartphones etc. don’t have this technology integrated, since it would make them too expensive; software-based solutions alone cannot change that fact and deliver reliable enough attention tracking. We know it will someday happen, but it will still take several more years to get there. Here’s a good overview from the world’s leading eye-tracking tech provider on what is actually involved in reliably tracking visual attention: https://www.tobii.com/group/about/this-is-eye-tracking/”
