In relative terms, Net Promoter Score (NPS) is still a shiny new toy in the research cot. While there are plenty of other people developing and spruiking new measures and theories in the field, very few take hold as quickly and universally as NPS.

So why is the NPS still somewhat shrouded in mystery for so many marketing people? Yes, the majority of researchers know about it, but that’s a bubble. In the grand scheme of things, I’m still mystified by a few recurring situations:

  • In nine out of 10 presentations to marketing teams, I have to explain the concept of NPS (which is fine – keeps me in a job).
  • There are “NPS specialists” floating around the industry. I’m not sure if the demand for these people is created through their own means or the genuine demand of clients, but it baffles me either way. NPS is one of the simplest scores to calculate in history and anyone with a mere sliver of customer satisfaction experience can run an NPS study. Maybe they exist because of my next issue…
  • Google “NPS benchmarks” or something similar. Tumbleweeds will float (or tumble as the case may be) past your monitor before you get a useful source of information. Though truth be told, when I did this I came across this blog post from Affinitive in the US which is decidedly similar in theme and concept to this one (that’s a hat tip).
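To back up the claim that NPS is one of the simplest scores around to calculate, here's a minimal sketch in Python (the ratings are made-up; the standard 0–10 recommendation scale and scoring bands are assumed):

```python
# A minimal sketch of the NPS calculation, using hypothetical ratings.
# Respondents rate likelihood to recommend on a 0-10 scale:
#   9-10 = Advocates (Promoters), 7-8 = Neutrals (Passives), 0-6 = Detractors.
# NPS = % Advocates minus % Detractors.

def nps(ratings):
    """Return the Net Promoter Score for a list of 0-10 ratings."""
    total = len(ratings)
    advocates = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (advocates - detractors) / total

# Example: 5 Advocates, 3 Neutrals, 2 Detractors out of 10 respondents
sample = [10, 9, 9, 10, 9, 8, 7, 8, 5, 3]
print(nps(sample))  # 30.0, i.e. an NPS of +30
```

Note that the Neutrals drop out of the score entirely; only the two extremes move the needle.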

How did this happen? Cynics might suggest an excellent marketing campaign coupled with a barrage of lawyers by NPS’s creators. They’d have a point, but that still ignores the fact that NPS is an interesting and useful metric. I’ve worked with NPS for about five years now and two key aspects stand out:

1. NPS most certainly has a positive relationship with brand sentiment, and

2. It is highly variable across aspects such as product category, personal experience and even culture.

This high variability brings to a head the point of this article. Is NPS inconsistent (a bad thing), or simply sensitive (a good thing)?

There are plenty of ways to look at this. For now I’m going to look at the simplest: whether NPS actually has a relationship with what it’s purported to measure, people talking about your brand.

In other words, are people who say they’ll tell others about your brand actually doing this in real life, and does NPS accurately reflect this action?

Luckily, Soup has a mountain of data to answer this question. For every word-of-mouth campaign we’ve run (and we’ve run over 100) we ask people involved in the campaign how likely they are to purchase a brand (brand sentiment), how many people they’ve spoken to about a brand (conversations), and how likely they are to recommend that brand ongoing (NPS).

Looking at the results of these three questions across campaigns, we can answer the big question:

  • What is the relationship between NPS and brand-related conversations?

Along with the smaller question that I think needs answering beforehand – if people are talking about your brand, you want to be sure it’s in a positive light:

  • What is the relationship between NPS and brand sentiment?

If you’ve made it this far and are still waiting for me to explain the basics of NPS and how it’s calculated, unfortunately, it’s not going to happen here – but you can find out everything you need to know through this website then come back and rightfully feel in the know.

Does NPS relate to positive sentiment about a brand?

How I’ll look at this:

  • Examine NPS’s relationship with purchase likelihood
  • Are Advocates more likely to purchase than Neutrals and Detractors?
  • If they are, how strong is the difference?

My prediction before looking seriously at the data was that there would be a clear relationship. The logical train of thought is that if you like something, you buy it and tell other people about it. Human nature 101. There might be outliers (a statistician’s term for people that don’t make sense) that don’t follow this logic, but overall, you’d expect these to be the exception to the rule.

Previous research by the Keller Fay Group supports this notion by finding that people are much more likely to talk about brands when they have an experience with them – for example, drink a Coke, watch Two and a Half Men (why??), etc. It goes without saying that if you like something, you tend to go out of your way to experience it and hence are more likely to talk about it – and vice versa: dislike something, you avoid it and talk less about it.

What the data says:

[Chart: NPS vs purchase likelihood – http://www.marketingmag.com.au/web_images/NPS1.jpg]

A quick explanation of what this chart is showing. Each grey line represents answers to a Soup WOM campaign from somewhere between 300 and 1000 respondents. The campaigns vary across categories but lean towards FMCG products. The red line represents the average across the data collected.

As a researcher, I (and most other researchers for that matter) tend to question before accepting. Some call it cynicism, others realism. Whatever it is, I’ll address the voice of dissent in my head to clear the air before moving on to the implications of these findings.

Why are these purchase rates so high, especially among Detractors and even Neutrals? (No brand in their right mind would pass up a 25% market share overall, let alone among their detractors.)

It’s simply a consequence of who these respondents are and what they have experienced. Everyone included in this research had been selected specifically for being most likely to be positive about the product in question (that’s one of the key aspects of a word-of-mouth campaign). As such, there are very few ratings at the extreme lower end of the NPS scale (i.e. 0 through 4). So Detractors probably skew a little more positive than if the question were asked of the broader population.

On top of this, all respondents have had a direct experience with the product in question. Drawing on my own experience with NPS, it’s when people don’t experience a brand that their NPS ratings drop into the very low ranges.

There’s also a distinct pragmatism about Detractors who have experienced a brand and will still purchase it. They may be willing to purchase a product, but they don’t see it as something that their friends are into (a revelation that often occurs after having shared it with them).

This is why the Detractors are more than likely higher purchasers in this research than if you were to question a general population sample.

So basically, this research IS NOT representative of the population as a whole, and I make no claims of it being so… but I still believe any overarching relationships and trends discovered in this research are valid and relevant. They would probably just be a little less pronounced in the “real world”.

So, with that necessary digression out of the way, back to the chart…

Taking into account the red line, the relationship between NPS and purchase likelihood is most certainly, as expected, a positive one. Whilst Detractors, as mentioned, start the race quite high with a 26% purchase likelihood, Advocates blow them out of the water at 96%. Neutrals initially seem to muddy the waters of this relationship a little with a 79% purchase likelihood, but looking at that average closely you can see how diverse the scores it is based on actually are, ranging from 58% to 92%. This is particularly striking when you compare it to the tight-knit grouping that makes up the Advocates’ average.

All up, this gives a surprise double dose of kudos to NPS as a score. The NPS calculation is validated by sensibly ignoring the more variable group of Neutrals, and it’s shown to have a clear positive relationship with brand preference.

A victory to logic and the easy question (at least that’s what it was initially) is answered.

On to the difficult question…

Does advocacy relate to the very thing it’s trying to measure, conversations about a product or service?

How I’ll look at this:

  • Look at NPS’s relationship with the number of people an individual has spoken to about a brand (for convenience’s sake, I’ve labelled this “Conversations”)
  • Position conversations by Detractors as a baseline, and look at the proportion of conversations above or below this amongst Neutrals and Advocates. Some brands are simply more talkable than others, so doing this makes it a little easier to compare across campaigns without losing the basic premise of what we’re measuring.
  • If there are differences evident, how strong are they?
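The baseline approach above is easy to sketch in code. The figures here are hypothetical, chosen so the relative percentages land near the article’s averages; the real per-campaign data isn’t reproduced here:

```python
# A sketch of the Detractor-baseline comparison, using made-up numbers.
# For one campaign, take average conversations per Detractor as the baseline,
# then express each group's conversations relative to that baseline.

def relative_conversations(campaign):
    """Return each group's conversations as % above/below the Detractor baseline."""
    baseline = campaign["Detractors"]
    return {
        group: round(100 * (avg - baseline) / baseline, 1)
        for group, avg in campaign.items()
    }

# Hypothetical campaign: average number of people spoken to per respondent
campaign = {"Detractors": 4.0, "Neutrals": 4.2, "Advocates": 5.4}
print(relative_conversations(campaign))
# Detractors sit at 0% by construction; here Neutrals come out at +5%
# and Advocates at +35%, close to the averages reported below
```

Normalising against Detractors this way means a naturally chatty category (say, entertainment) and a quiet one (say, laundry powder) can sit on the same chart.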

My prediction again didn’t stray too much from logic and gave society the benefit of the doubt: if people say they’re going to do something in future, it’s because that’s what they’ve been doing in the past (a rare instance of optimism from a researcher there).

What the data says:

[Chart: NPS vs conversations – http://www.marketingmag.com.au/web_images/NPS2.jpg]

Again, with the exception of a couple of campaigns where Neutrals talk less than Detractors, all is as you’d expect. On average, Neutrals speak about a brand 5% more than Detractors, whilst Advocates jump 34% over Detractors. In other words, Advocates speak to many more people about a brand than Neutrals, who in turn speak to more people than Detractors.

Interestingly, combining the results of brand sentiment and conversations shows the old customer service adage of “give people a good experience and they’ll tell five people; give them a bad experience and they’ll tell 20” just doesn’t stand up to the facts. Simple as that. As two campaigns point out, there are exceptions to the rule, but on the whole people are much more likely to talk about things they like.

To sum it all up, in looking across the results of a series of WOM campaigns we’ve found the NPS calculation works a treat on two levels:

1. There is a clearly defined distinction between Detractors and Advocates, and the sometimes grey area in between (Neutrals) is conveniently and logically ignored. In other words, the NPS calculation makes sense.

2. The metric NPS claims to measure – conversations and people advocating a brand – plays out in real life. The higher the NPS score, the more people are talking about your brand.

I must admit, I didn’t go into this research hoping to lay another brick in the defensive wall of NPS. I tend to go for the underdog, and NPS is about as far from underdog status as you can get. But it seems I may have done exactly that. In the end, NPS is a consistent score (in relative terms) that accurately reflects what it purports to measure. You can’t ask for much more than that.

Does this mean NPS is the “only score you need to measure business success” as is often claimed? I and many other researchers wouldn’t say so, and the few campaigns that provide results a little off average support this. But to say that it should be disregarded and is inaccurate is flat out wrong. Understand what it reflects and validate it by further metrics and you’re well on the way to understanding your brand and customer.