Experimentation is the key to ad effectiveness – 10 must-knows of digital ad testing

IAB Australia has released a new guide for measuring digital ad impact. Research director Natalie Stanbury walks us through the digital ad testing checklist.

Success means different things to different companies and brands. For some it’s all about ‘fail fast and learn’ while others may be aligned with a more conservative approach. But regardless of the preferred approach, the reality is that with digital now accounting for more than 50% of all advertising spend in Australia, it has never been more important for marketers to develop their understanding of digital’s impact on their campaigns and its overall role within the media mix.

Yet there is still a lot of confusion and conflict over which metrics and methods to use, and what they really mean. That’s hardly surprising, and it’s certainly worth taking time to understand the strengths and weaknesses of each measurement approach, and taking care to select the metric that best shows whether you have achieved your objectives. But more than that, marketers need to step up, look beyond the headline results and dig under the hood to understand which parts of a campaign worked, which didn’t – and why.

It’s all about understanding the real impact of your digital ad. And it’s about going far beyond something like click-through rate (CTR) to measure interactions with an ad. CTR provides a very narrow view of success and can be misleading or, worse, actively damaging when used to measure brand campaigns. Instead, marketers need to choose more meaningful metrics matched to the campaign’s objectives and KPIs.

One of the most effective tools for digital ad testing is the controlled experiment. Sounds scientific, geeky and kind of fun, right? It is, and controlled experiments offer the perfect opportunity to apply best-practice scientific method to add confidence and precision to marketing investment decisions. They are flexible by nature and can be particularly helpful in testing the interplay between digital and offline activity. Their power lies in isolating the true business impact of your advertising: what happened that wouldn’t have happened otherwise.
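The precision a controlled experiment can deliver depends directly on how many people sit in each cell. As a rough back-of-envelope illustration (this sketch and its baseline and lift figures are hypothetical, not from the IAB guide), a standard two-proportion power calculation shows how big each cell needs to be to detect a given uplift:

```python
from statistics import NormalDist

def cells_needed(baseline, lift, alpha=0.05, power=0.8):
    """Approximate per-cell sample size for detecting an absolute
    'lift' in a rate (e.g. brand recall) between exposed and control,
    using a two-sided two-proportion z-test."""
    p1, p2 = baseline, baseline + lift
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for significance
    z_b = NormalDist().inv_cdf(power)           # critical value for power
    p_bar = (p1 + p2) / 2
    n = ((z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
          + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2) / lift ** 2
    return int(n) + 1
```

Plugging in a 20% baseline recall rate, the calculation shows that detecting a 3-point lift needs roughly four times as many people per cell as detecting a 6-point lift, which is why small, subtle effects demand much larger tests.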

So where to begin? IAB Australia’s latest ‘Guide to Designing Digital Impact Studies’ is a best-practice guide to digital ad testing and is freely available. It’s a useful framework to help you measure varied creative assets, shake up your digital marketing mix, validate your existing marketing activities and fill gaps in your knowledge.

Here’s the IAB’s top 10 list of must-knows when it comes to digital ad testing:

  1. Set clear objectives and KPIs upfront that relate to your business challenge — avoid retrofitting your objectives after the test. Success means different things to different companies and brands so make sure your objectives are aligned with your goals.
  2. Create a clear hypothesis for your test that will help you address the business problem. This is a testable prediction about what you expect to happen in your study.
  3. Choose appropriate metrics aligned to your objectives – don’t just rely on what is most accessible (CTR, for example).  Include long-term as well as short-term metrics.
  4. Understand the strengths and weaknesses of the measurement method that you use. Create a measurement strategy — a clear, consistent plan for which tools you are selecting and why.
  5. Measure for incrementality not correlation — so that you can understand what your advertising activity did that would not otherwise have happened.
  6. Design experiment cells to ensure best possible comparability of exposed and control. Ensure that you have: a) adequately sized groups for the precision needed in your test, b) random assignment of control and test groups, c) cells that match — for example, demographically, attitudinally, in propensity to use your product, and prior exposure – and finally, d) ensure your subjects are either in the control or exposed group, not both.
  7. Test and learn: adjust your campaign to drive better results, apply what you learn and keep testing, including building benchmarks against which to compare your results and set targets.
  8. Devote time to preparing the presentation of your results to stakeholders. Whether presenting to the executive management team or a board, answer the business question succinctly and understand your audience and their level of appetite for detail and technical language.
  9. Understand the measurement activities already happening in your organisation and external companies — don’t operate in a silo.
  10. Be brave. It’s an ever-changing world of the new and the unknown, but if you don’t push the boundaries and experiment then you run the risk of falling behind. If you try and fail, simply start afresh. And importantly, don’t be afraid to share your learnings: the more we dare to bare, the better educated the whole market can become.
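Points 5 and 6 above (incrementality, and mutually exclusive randomised cells) can be sketched in a few lines of Python. This is an illustrative example only, not part of the IAB guide, and the conversion counts are hypothetical:

```python
import random
from statistics import NormalDist

def incremental_lift(exposed_conv, exposed_n, control_conv, control_n):
    """Incremental lift of exposed over control (absolute difference
    in conversion rate), with a two-sided p-value from a pooled
    two-proportion z-test."""
    p_e = exposed_conv / exposed_n
    p_c = control_conv / control_n
    pooled = (exposed_conv + control_conv) / (exposed_n + control_n)
    se = (pooled * (1 - pooled) * (1 / exposed_n + 1 / control_n)) ** 0.5
    z = (p_e - p_c) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_e - p_c, p_value

# Random assignment: shuffle, then split, so every subject lands in
# exactly one cell (point 6d) and assignment is random (point 6b).
random.seed(42)
subjects = list(range(10_000))
random.shuffle(subjects)
exposed_ids, control_ids = subjects[:5_000], subjects[5_000:]

# Hypothetical results: 260 conversions among exposed, 200 among control.
lift, p = incremental_lift(260, 5_000, 200, 5_000)
```

The lift here is the advertising’s incremental effect (point 5): the conversions that would not have happened without exposure. The p-value then tells you whether the cells were big enough for that difference to be trusted rather than written off as noise.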
Natalie Stanbury is research director at IAB Australia.
Image credit Andrey Kiselev © 123RF