NPS: A sword when you need a scalpel.
“What gets measured gets done.” I have heard this line many times in my career: from clients, from business leaders, and of course from market research gurus on the Internet who, let’s admit it, have a vested interest. It is a truism. After all, if you establish a baseline metric and then repeatedly measure the same thing over time, temporal precedence lets you tell whether the business or marketing decisions you took were successful. The logic is sound.
And the star of this measurement mega show is the Net Promoter Score, the magic number that everyone understands to be the gold standard for measuring brand performance. In recent years, nothing has captured the imagination of the C-Suite more than NPS. Good NPS scores are touted as a measure of how well the business is doing and how satisfied customers are. In many companies, executive pay and performance incentives are even tied to good NPS scores or to improvements in NPS.
Measuring NPS is simplicity itself. It is the answer to one simple question: “On a scale of 0 to 10, with 0 being extremely unlikely and 10 being extremely likely, how likely are you to recommend this brand to a friend?” Scores from 0–6 are called Detractor scores, scores of 9–10 are Promoter scores, and scores of 7–8 are called Passives.
There seems to be some flexibility about this. I have seen NPS calculated where 1–4 are Detractors, 8–10 are Promoters, and the rest are Passives. In any case, the Net Promoter Score is the percentage of Promoters minus the percentage of Detractors; the middle band, the Passives, is ignored. A higher NPS therefore indicates that there are more Promoters than Detractors and is, therefore, a good thing. A brand like Apple may have an NPS of 65, while a leading brand in a less self-expressive category may have an NPS of 40. Both are considered good NPS scores.
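The arithmetic described above can be sketched in a few lines of code. This is a minimal illustration, not any official implementation; it assumes the standard 0–10 scale with Promoters at 9–10 and Detractors at 0–6, and the band cut-offs are parameters precisely because, as noted, practitioners sometimes move them.

```python
def nps(ratings, promoter_min=9, detractor_max=6):
    """Net Promoter Score: % Promoters minus % Detractors.

    ratings: iterable of 0-10 survey answers.
    Passives (between detractor_max and promoter_min) are ignored,
    so the result ranges from -100 to +100.
    """
    ratings = list(ratings)
    if not ratings:
        raise ValueError("need at least one rating")
    promoters = sum(1 for r in ratings if r >= promoter_min)
    detractors = sum(1 for r in ratings if r <= detractor_max)
    return 100.0 * (promoters - detractors) / len(ratings)

# 5 promoters, 3 passives, 2 detractors out of 10 responses:
print(nps([10, 9, 9, 10, 9, 7, 8, 7, 4, 6]))  # 30.0
```

Note that because Passives vanish from the numerator but not the denominator, very different response distributions can yield the same score, which is one reason the single number hides so much.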
From our experience of interacting with brands, what NPS tells you and what customers actually feel can be very different things. One brand we worked with provided decor services in people’s homes. The service team would visit customers’ homes and manage the project, including providing proper estimates, managing workmen, ensuring work was delivered on time, and ensuring the quality of the output. The NPS scores were very good. However, the client still felt that something wasn’t right. And when we met customers, we found a lot left to be desired in how each of these aspects of the service was delivered.
Then why did customers give a high score? Why did NPS look very good when the reality in the homes was quite different? To understand this, we need to look at how human beings are. Once the project was complete and the house was decorated, the work done usually meant that the home looked better than before. Add to that, the customer was finally rid of the workmen in the house. They had their home back. In this context, no customer really wants to complain about the poor service experience. Indians want to say, “Let bygones be bygones. Why hold a grudge? Let the poor chap keep his job.” We saw this even with middle-class customers in financial services who had been taken for a ride by their agents. Even then, they were reluctant to give shoddy ratings because they did not want to make another human being suffer. If the customer knows the service provider, really knows them, then this “human factor” comes into play and corrupts the data. And so the NPS remains high while the experience remains poor.
Recently there have been articles on why ratings of homes on Airbnb can’t be trusted, as guests tend to give higher ratings than their experience warrants. This is because the host and the guest are known to each other, and to publicly insult the host by giving a poor rating is a social no-no. So most decent folks don’t give an accurate rating. Once again, the measurement successfully camouflages true customer feelings.
Not only that, NPS provides no guidance as to why the score is what it is. At best, if the NPS is terrible, the client can open a new investigation into why the world is the way it is. I once asked a client who showed me their NPS and brand attribute scores, “What did you do now that you know this?” There was no answer.
The reality was apparent to all of us. A good NPS is a pat on the marketer’s back for a job well done. A bad one is a stick. However, the way NPS is conducted, and the way human beings respond, means too many carrots are doled out. And too few sticks.