Technology: Uniting an industry against ad fraud

Rich Astley, managing director UK at Videology, looks at how brands can make sure they deliver viewable ads in safe environments – and to real people.

Today’s evolving media landscape opens up a wealth of new opportunities for brands to understand, engage and reach audiences. But it’s also given rise to complex, fast-changing breeds of fraud which negatively impact the entire advertising ecosystem.

Brand advertisers are naturally anxious to protect their video investments, and for agencies and media partners, ensuring the safety of their clients’ brands has fast become the number-one priority. After all, only guarantees of premium-quality inventory and relevant environments will give brands the confidence to commit more of their ad budgets to digital.

So what does a brand advertiser need to do to ensure viewable ad delivery in safe environments, to real people?

A universal battle

Advertising fraud exists in many forms, but principally falls into three categories: unsuitable environments (e.g. ads served before or alongside illegal or adult content); non-viewable ad placements (below-the-fold or 1×1 pixel advertisements); and auto-generated impressions (or ‘bot traffic’).

As a result, brand safety, viewability and fraudulent traffic are top-of-mind for any brand looking to make an impact with video.

Each media player has a role in fostering transparency and trust in order to tackle fraud in a collaborative way. The industry is only as strong as its weakest link, but more than ever brands need assurances that they have the tools, processes and, most importantly, the technology to make safer media decisions.

The stronger the safety measures put in place by the technology platforms they depend on, the freer rein brands have to focus on maturing and innovating with their online video strategies.

Tech providers must offer a three-pronged approach to delivering safe ad campaigns, spanning publisher contracts, technology (a combination of proprietary tools and third-party integrations) and human vetting. Here’s how that works:

Contractual protection

Legal terms are the foundation for a framework of protection between buyer and seller. They provide the recourse for compensation and act as a deterrent for contravention of terms. Creating a clear set of terms with no room for interpretation on responsibilities is part one.

Part two is a strict procurement strategy that ensures you are working with suppliers who guarantee quality, in-stream content, with guaranteed player size, page position and viewability, and no illegal, adult or irrelevant UGC. Furthermore, traffic must be human.

Bot traffic is another major problem. Research from PwC predicts UK digital video ad spend will reach £717m by 2018. With some conservative estimates putting bot traffic at 10 percent, that’s almost £72 million wiped off the value of the industry.
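As a quick illustration of where that figure comes from, here is a minimal sketch using only the two numbers quoted above (the variable names are illustrative):

```python
# Illustrative arithmetic only, using the figures cited above.
forecast_spend_gbp = 717_000_000   # PwC forecast: UK digital video ad spend by 2018
bot_traffic_rate = 0.10            # conservative estimate of bot traffic

wasted_spend_gbp = forecast_spend_gbp * bot_traffic_rate
print(f"Estimated spend lost to bot traffic: £{wasted_spend_gbp:,.0f}")  # ≈ £71,700,000
```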

Understandably, advertisers are demanding they get what they pay for. It’s therefore vital that publishers can show the traffic on their sites is legitimate. But it works both ways: ad tech platforms need to be able to contractually show their clients they have the tools in place to detect and prevent illegal traffic sources.

Technology

It’s rare that any one provider or platform can offer a solution for every aspect of brand safety, which means it’s important that a platform is invested in a combination of both proprietary and third-party tools. At a minimum, a video platform should detect and make decisions based on the referring URL, and should require its suppliers to pass that URL in the ad request.

Solid blacklisting and whitelisting tools are the technology enforcement layer here, and you should expect the flexibility to adjust these lists on the fly (most bots and illegal content sites emerge quickly and have a limited shelf life).
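To make that concrete, here is a minimal sketch of the kind of check described above, assuming an ad request that carries a referring URL which is screened against a blacklist and, optionally, a whitelist that can be swapped out at any time. The names (ReferrerScreen, allow, the example domains) are purely illustrative and do not describe any particular platform’s API.

```python
from urllib.parse import urlparse

class ReferrerScreen:
    """Illustrative referrer check: block known-bad domains and, optionally,
    restrict delivery to an approved whitelist. Lists can be replaced at any
    time, since fraudulent sites tend to appear and disappear quickly."""

    def __init__(self, blacklist=None, whitelist=None):
        self.blacklist = set(blacklist or [])
        self.whitelist = set(whitelist or [])

    def update(self, blacklist=None, whitelist=None):
        # Adjust lists on the fly as new bad actors emerge.
        if blacklist is not None:
            self.blacklist = set(blacklist)
        if whitelist is not None:
            self.whitelist = set(whitelist)

    def allow(self, ad_request: dict) -> bool:
        referrer = ad_request.get("referring_url")
        if not referrer:
            return False  # supplier did not pass a referring URL: reject
        domain = urlparse(referrer).netloc.lower()
        if domain in self.blacklist:
            return False
        if self.whitelist and domain not in self.whitelist:
            return False
        return True

# Example usage with hypothetical domains:
screen = ReferrerScreen(blacklist={"pirated-streams.example"})
print(screen.allow({"referring_url": "https://news-site.example/video"}))    # True
print(screen.allow({"referring_url": "https://pirated-streams.example/x"}))  # False
```

In practice a platform would also normalise subdomains and combine this with third-party verification data, but the principle is the same: decide per request against lists that can be updated immediately.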

Baseline blacklists already exist – for example, the City of London Police runs the Police Intellectual Property Crime Unit (PIPCU), part of whose remit is to maintain and distribute a list of inappropriate sites to advertisers and platforms, which can then incorporate it into their vetting processes.

Independent auditors of quality and brand safety – such as TrustMetrics, which reviews URLs against a series of categories to make sure the platform is continuously focusing on the right content – are an important third-party check on any platform and provide advertisers with a seal of approval.

Accreditation for video viewability and impression counting is another requirement to look out for – bodies like the Media Rating Council have stringent criteria for their audits, and it’s a major investment for any ad platform to commit to their annual review.

Human vetting

There’s no denying that technology is vital to combating ad fraud, but the importance of stringent human safety checks cannot be overstated. Manual review of URLs, player position and quality, and content suitability against a quality matrix increases the chance of building a robust, high-performing supply ecosystem.

By augmenting proprietary and third-party technology with the human eye and judgement, you’re adding a layer of safety that can catch issues some technologies will simply never see.

United front

While it’s important that brand advertisers ask the right questions of the ad tech platforms they work with, it’s not their sole responsibility. To ensure brand safety and quality placements, and to deal with fraud, the whole industry – from brands to publishers – must be united, proving to each other that everyone is taking the issues seriously and is prepared to act.

Whether it is a kitemark-like independent body or a working group formed from within the industry itself, perhaps it’s time for an industry-wide endorsement to hold each member of the ecosystem accountable and bring an end to some of the issues that have created such negative connotations around our industry in recent months.
