Experimental Design: Measuring Media Effectiveness
February 4th, 2016 | Grant Le Riche, Managing Director, Canada, TubeMogul
Advertisers are obsessed with measurement – and for good reason: you can’t improve what you can’t count.
For a long time, media measurement was relatively simple: how many people did my ad reach, and how many ads did those people see? This became the basis of the gross rating point (GRP), which became the currency on which most TV ads were bought and sold. Newspapers had their circulation count and outdoor ads used traffic figures and population. Life was good.
Then digital came along and upended everything. All of a sudden there were click rates, conversion rates and the cost-per metrics derived from them, which put everything on a per-unit basis. More recently, viewability, audience on-target percentage and even upper-funnel measures like awareness and favourability have become quantifiable. We can measure both economic and campaign performance more accurately and more quickly today than at any point in history.
But despite all that progress, we still have a harder time than we should answering the most important question for marketers: did my advertising drive sales? Perhaps the most common way advertisers try to answer it is by calculating cost-per-acquisition (CPA). The metric is a staple of direct-response marketing and the standard yardstick for campaigns designed to get consumers to take action (like clicking “buy now”).
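For reference, the calculation itself is simple. A minimal sketch in Python, with hypothetical figures:

```python
# Cost-per-acquisition: total spend divided by attributed acquisitions.
# Numbers are hypothetical, for illustration only.
spend = 10_000          # campaign spend in dollars
acquisitions = 80       # conversions attributed to the campaign

cpa = spend / acquisitions
print(f"CPA: ${cpa:,.2f}")  # CPA: $125.00
```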
But even in the digital age, CPA-based attribution models don’t account for all the variables needed to accurately understand how marketing spend impacts the bottom line. There are three common mistakes – or biases – that occur naturally within CPA-based models.
- In-market bias. Did your ad drive a sale, or did you measure someone who was already going to buy? How do you know for sure it was your ad that incited action? Correlation does not equal causation, and CPA-based models have no way to tell whether someone already intended to purchase. This is an important distinction, because certain audiences – brand loyalists, say, or people who recently visited your webpage – will naturally cost less to reach.
In-market bias is compounded by the fact that people who are already going to buy a certain product act in ways that are easier to measure – visiting a webpage, signing up for emails or searching for a specific product. This is why search does so well from a cost perspective.
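A toy simulation (every number here is invented) makes the bias concrete: an ad that changes no one’s mind still posts an impressive-looking CPA when it is aimed at an audience that was going to buy anyway.

```python
import random

random.seed(0)

# Hypothetical in-market audience: 5% buy regardless of the ad.
# By construction, the ad drives ZERO incremental sales.
audience_size = 100_000
baseline_buy_rate = 0.05
spend = 20_000

buyers = sum(random.random() < baseline_buy_rate for _ in range(audience_size))

# A CPA model credits the ad with every one of those purchases.
naive_cpa = spend / buyers
print(f"Naive CPA: ${naive_cpa:,.2f}")   # roughly $4 -- looks terrific
print("True incremental sales: 0")       # the ad actually did nothing
```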
- Low-rate bias. Most marketers have probably heard of “cookie bombing”: blasting out as many digital ads to as many sites and as many people as possible at the lowest possible cost. The more ads served, the more people are cookied – and the more of those people’s purchases get credited to the campaign. When success is a formula that divides total cost by attributed buyers, the larger your denominator, the lower your final number.
Look at the scenario below:
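A minimal sketch of the arithmetic, assuming two campaigns with the same hypothetical budget:

```python
# All figures hypothetical. "new buyers" means people who would NOT
# have purchased without seeing the ad.
scenarios = [
    # (name, spend, attributed buyers, new buyers)
    ("Scenario 1: cookie bombing", 10_000, 100, 0),
    ("Scenario 2: targeted buy",   10_000,  40, 2),
]

for name, spend, buyers, new_buyers in scenarios:
    print(f"{name}: cost-per-buyer ${spend / buyers:,.2f}, "
          f"new buyers: {new_buyers}")

# Scenario 1: cookie bombing: cost-per-buyer $100.00, new buyers: 0
# Scenario 2: targeted buy: cost-per-buyer $250.00, new buyers: 2
```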
Although the first scenario enjoys a lower cost-per-buyer, the second scenario drives the results that most marketers actually care about: two new buyers.
- Digital signal bias. While digital channels will represent about 36% of Canadian advertising spend in 2016, according to eMarketer, most sales and transactions still happen offline. Yet most advertisers count only online sales in their attribution models, because it’s fairly straightforward to correlate online ad spend with online conversions. It can seem complex, but advertisers need to start thinking about ways to factor both online and offline sales figures into the “acquisition” portion of their attribution measurements.
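As a rough sketch (figures hypothetical, and assuming offline conversions can be matched back to the campaign – through loyalty-card or store-sales data, say), including offline sales in the denominator can change the picture dramatically:

```python
# Hypothetical campaign totals.
spend = 50_000
online_conversions = 400
offline_conversions = 600  # assumes an offline sales match is available

online_only_cpa = spend / online_conversions
blended_cpa = spend / (online_conversions + offline_conversions)

print(f"Online-only CPA: ${online_only_cpa:,.2f}")  # $125.00
print(f"Blended CPA:     ${blended_cpa:,.2f}")      # $50.00
```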
Admittedly, these biases can be daunting for even the most sophisticated marketer. But there’s light at the end of the tunnel. By employing a principle from academic research called “experimental design” – which holds that for a measurement to be valid, all the relevant variables must be accounted for before the experiment begins – marketers can start to get a truer picture of their return on ad spend (ROAS).
So what should marketers account for before a campaign starts? All the people who were going to buy anyway. These people are your control group – and with a control group, marketers can stop guessing whether their ad actually drove a sale and measure actual lift rather than just relative lift. In other words, instead of measuring how many total sales occurred, they measure how many additional sales occurred.
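A minimal sketch of the comparison, assuming equal-sized exposed and control groups and hypothetical conversion rates:

```python
# Exposed group saw the ad; the control (hold-out) group did not.
exposed_users, exposed_buyers = 100_000, 1_200
control_users, control_buyers = 100_000, 1_000

exposed_rate = exposed_buyers / exposed_users    # 1.2%
baseline_rate = control_buyers / control_users   # 1.0% -- would buy anyway

# Additional sales the ad actually drove, beyond the baseline:
incremental_buyers = (exposed_rate - baseline_rate) * exposed_users
relative_lift = exposed_rate / baseline_rate - 1

print(f"Incremental buyers: {incremental_buyers:.0f}")  # 200
print(f"Relative lift: {relative_lift:.0%}")            # 20%
```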
There are a few ways to establish these control groups, but the best is to use placebos: ads unrelated to the brand (a public-service announcement, for instance) that shouldn’t affect how an individual feels about it. Advertisers can also try combinations of different creatives, the same creative across different geographic markets, or different ad formats in the same market – but keep in mind that these will only give you part of the story.
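One common way to carve out such a group – sketched here with an illustrative function and made-up user IDs, not any particular vendor’s implementation – is to hash each user ID into a stable bucket at serve time, so the same user always lands in the same arm for the life of the campaign:

```python
import hashlib

def assign_arm(user_id: str, placebo_share: float = 0.1) -> str:
    """Deterministically assign a user to the placebo or exposed arm."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return "placebo" if bucket < placebo_share else "exposed"

# At serve time: the placebo arm gets the unrelated creative (a PSA,
# say); the exposed arm gets the brand ad.
for uid in ("user-123", "user-456", "user-789"):
    print(uid, "->", assign_arm(uid))
```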
Many marketers have legacy systems and partners in place that make experimental design tough to implement, and many don’t have the budget to test different solutions. But the marketers who are willing to look for a better way to measure success will surely find more of it than they can count.