DOOH Advertiser Accountability – Audience Measurement Comparison Checklist

This article is the sixth in a series on DOOH Audience Impressions, which explores the factors that can accelerate DOOH towards the “holy grail” of cross-platform media compatibility. Here we illustrate a way of comparing audience measurement providers and methods across platforms, both online and offline. The resulting checklist will later be used to compare Quividi to other alternatives.

The standards place much emphasis on the qualifications of measurement systems, which are not all equal with regards to transparency and accountability. For the historical reasons mentioned earlier, legacy out-of-home measurement has some catching up to do to reach the level of data fidelity delivered digitally.

For example, for much of OOH today, only audience modeling is used and there is no verification of actual audience delivery: it is assumed that if the media played, the audience from the plan was delivered as-is. This has not been acceptable for online ads since viewability standards were introduced in 2014.

So how can we rate measurement frameworks in a cross-platform way?

Enter the Accountability Checklist!

Accountability Checklist

We can compare OOH measurement methods against an ideal set of requirements for delivering advertiser accountability, which we have grouped into three dimensions of assessment. In total, the checklist is 10 points long. The following sections look at each point in detail.

Provider Accountability

Independence

An independent measurement provider is clear of biases from either the buyer or the seller of the media it is measuring.

Auditability

An auditable measurement provider keeps a paper trail of the underlying data used to estimate audience impressions for verification purposes.

Transparency

The model and methodology used by the provider is disclosed for scrutiny by the stakeholders in the transaction, or by a 3rd party auditor.

Responsibility

The provider has undertaken significant steps and investments to ensure all data throughout the chain is privacy-safe and protects consumer rights around the world.

Mechanism Accountability

Precision

The mechanism measures the audience of an ad while it is playing, with sub-second accuracy.

Integration

The measurement mechanism works in lock-step with the playback mechanism using a robust protocol (see the sketch below).

Verification

The playback mechanism verifies the display is working and stops playback reporting if the display is not.
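
To make these three mechanism requirements concrete, here is a minimal sketch in Python. It is illustrative only: the names (AdPlay, WatcherPresence) and the in-memory protocol are hypothetical simplifications for this article, not any vendor’s actual API.

```python
from dataclasses import dataclass

@dataclass
class AdPlay:
    ad_id: str
    start: float  # epoch seconds, sub-second resolution (Precision)
    end: float

@dataclass
class WatcherPresence:
    watcher_id: str
    first_seen: float  # epoch seconds
    last_seen: float

def attribute_impressions(play: AdPlay, watchers: list[WatcherPresence],
                          display_ok: bool) -> list[str]:
    """Return the watchers whose presence overlaps the play window.

    Integration: the play window comes from the playback mechanism,
    in lock-step with the measurement mechanism's watcher log.
    Verification: if the display is not confirmed working, reporting
    stops and no impressions are attributed at all.
    """
    if not display_ok:
        return []
    return [w.watcher_id for w in watchers
            if w.last_seen > play.start and w.first_seen < play.end]

# Hypothetical example: a 10-second spot starting at t=100.0.
play = AdPlay("spot-42", start=100.0, end=110.0)
watchers = [
    WatcherPresence("w1", 95.2, 103.7),   # overlaps the play -> counted
    WatcherPresence("w2", 111.4, 118.0),  # arrives after it ends -> not counted
]
print(attribute_impressions(play, watchers, display_ok=True))  # ['w1']
```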

Model Accountability

Recency

The model uses data that was recently measured or otherwise has a reasonable degree of recency.

Density

The model has sufficient detection data volume, relative to the final audience impression volume, for any adjustments or extrapolations to be statistically significant.

Fidelity

The model is based on cross-platform, human-valid audience impressions instead of a raw reporting of the detection metric. Data fidelity is explained in more detail below.


More On Audience Modeling: Recency, Density and Fidelity

For audience impression measurement, audience modeling is not very common online – unless you count fraud detection and filtration as a type of audience modeling. On the other hand, audience modeling is a standard foundation in an out-of-home audience measurement system.

What is Data Recency?

An important consideration for comparing audience measurement methods is the data’s recency. When was the data measured? Is it based on live data measured locally, in parallel with the ad being played? Or is it based on historical studies and averages? At one end of the spectrum sits data collected server-side, in the past; at the other end, data collected locally, in the present.


What is Data Density?

When we use the metaphor of density, we mean the ratio of “hard” detection data to “soft” audience data. Hard data comes from the detection model. Soft data is extrapolated or otherwise engineered from the hard data.
Maximum Density: the detection model is perfect and can account for 100% of the audience.

Minimum Density: the minimum statistically representative sample size needed to build a predictive audience model.

Medium Density: somewhere on a spectrum between Maximum and Minimum Density. This is where all out-of-home measurement sits. Quividi’s model approaches maximum density.
Data density has three component parts, which are factored together to get overall density: coverage, granularity and capture rate. Each is measured as a percentage between full (100%) and none (0%). Overall density is the product of the three coefficients:

overall density = coverage × granularity × capture rate
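
As an illustration, here is a minimal sketch of that calculation, using hypothetical coefficient values chosen for this example:

```python
def overall_density(coverage: float, granularity: float, capture_rate: float) -> float:
    """Overall data density as the product of the three coefficients,
    each expressed as a fraction between 0.0 (none) and 1.0 (full)."""
    return coverage * granularity * capture_rate

# Hypothetical example: 80% coverage, 90% granularity, 75% capture rate.
print(overall_density(0.80, 0.90, 0.75))  # 0.54, i.e. 54% overall density
```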

What is Data Fidelity?

Is the audience impression metric being reported actually audience impressions (true to the definition), or is a detection metric (mobile devices, circulation) being reported disguised as an audience metric?
 
For example, out-of-home was historically traded on circulation numbers (and in some geographies, still is). The ESOMAR guidelines established that circulation is not sufficient: the audience has to be qualified to approach the actual audience (the individuals who saw the ad) as closely as possible.

For mobile data, are device access logs being used as audience or is a reasonable effort being made to transform them into a qualified audience? For example, is geofencing being used to ensure the detected mobile devices are within the viewability zone of the display?
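
As an illustration of such a qualification step, here is a minimal sketch that filters raw device pings down to those inside a hypothetical circular viewability zone around the display. A real viewability zone would also account for the display’s facing direction and obstructions; this sketch only checks distance.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

def in_viewability_zone(device_lat: float, device_lon: float,
                        display_lat: float, display_lon: float,
                        radius_m: float = 30.0) -> bool:
    """Keep only device pings within radius_m of the display."""
    return haversine_m(device_lat, device_lon, display_lat, display_lon) <= radius_m

# Hypothetical example: qualify raw device logs into an audience.
display = (40.7580, -73.9855)  # illustrative display location
pings = [(40.7581, -73.9856), (40.7700, -73.9800)]
qualified = [p for p in pings if in_viewability_zone(*p, *display)]
print(len(qualified))  # 1 -- only the nearby ping is kept
```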

How close is the measured metric to the human-valid cross-platform audience impression?


Conclusion

In this article we presented a 10-point checklist for comparing audience measurement providers both online and offline, and we defined the key concepts needed to perform this comparison: recency, density and fidelity.
