One benefit of ticking over a new year (beyond the chance to make a heap of resolutions which invariably are abandoned before February!) is a chance to reflect on the previous year – what happened, what we learned, what points of view different people had on particular hot topics and so on. With the continuing momentum around digital marketing, 2009 was in many ways a year for genuinely ‘new’ news.
However, in many ways 2009 was also a bit of a Groundhog Day, in particular when it came to criticism of market research and the accused ‘overuse’ of it. Every few years the same old points of view get wheeled out… ‘it kills good ideas’, ‘it takes too long’, ‘it’s too expensive’, ‘consumers can’t evaluate creative’, ‘we should rely on our gut more’ and so on.
Typically these views are offered by ad agencies or sometimes clients – maybe the research isn’t telling them what they want to hear, an idea they were keen on hasn’t resonated with respondents, or maybe they’ve been exposed to poor-quality research or the inappropriate use of particular approaches.
Sure, I’m a researcher at heart so I’m going to be a little biased, but I wanted to use my blog to respond to a couple of the points raised.
Firstly, let’s start with the fundamental role research plays in the marketing cycle. People have different views on what this is, but I always see it as ‘reducing risk’ – marketers can be responsible for multi-million dollar decisions, and a researcher’s role is to minimise the risk in those decisions through evidence-based advice.
Which brings me to the first criticism of research: ‘it’s too expensive’. Consider what a typical TV campaign may cost – say $1m would be a reasonable average for a typical major brand or product campaign. Launching such a campaign without researching it properly beforehand would be like playing roulette with shareholder money. A robust pre-test picks up any issues with a campaign prior to launch and can offer improvements to both the execution and the media plan to ensure the campaign is as effective as possible. Sometimes it might mean going back to the drawing board, but it would’ve saved the company a million bucks and the potential fallout from a bad campaign. Not bad considering a pre-test would typically cost less than 5% of the media bill. Research should be considered an investment in optimising the campaign rather than an expensive gatekeeping process.
A second argument is that research ‘kills good ideas’. A good idea, a good campaign, will survive any well-structured research process. The issue is that research sometimes kills the ideas that the marketing team likes – the ideas that people have slaved away on for weeks, months or even years. You can understand the hurt and rejection this might create – believe me, we sympathise – but ultimately the idea needs to resonate with the target audience for it to be deemed successful. I’ve seen ads win awards from the advertising industry, yet when you look at the tracking data, they do absolutely nothing for the brand. Which is more important?
A third argument is that research is ‘not predictive’ of what happens in the real world. In some cases this might be true – but that’s because the wrong approach or technique is being used. A couple of quick focus groups is not a reliable prediction of how an entire population will react or behave, but that is not what qualitative research should be used for. Researching new product launches, pre-testing campaigns and political polling are just a few examples of how predictive research can and should be. As an example, we’re able to predict the sales volumes and profitability of new product launches from a few simple questions before they enter the market, to within an accuracy of +/- 5% in most cases. We’re even able to model the effect of word of mouth from social media campaigns on sales volumes. If this research is not predictive of what happens in the real world, show me a better approach!
Research shouldn’t just be considered a stop-go process, but rather a process of improvement and refinement. Advertisers and marketers can learn a lot from the insights derived from testing and build these learnings into their knowledge bank.
One thought I would subscribe to, though, is that research should not be used as the sole source for any decision. It should be considered a piece of evidence to help inform a decision; a piece of the puzzle that a skilled marketer uses to inform campaign development. Rejecting research is rejecting the voice of the consumer – do so at your own risk.