Ever had a customer service experience that ended with the representative pleading with you to fill out the “short survey” and only give scores of 9 or 10? Ever wonder what that was about? Today’s rant will explain the origins of that experience – the Net Promoter Score.
I recently worked for a company that was using a customer satisfaction tool known as the Net Promoter Score. The NPS is a pretty big deal in many companies, and can be integrated into CRM tools such as Salesforce.
Based on my personal experience, and the reported experience of many other people I know who also work for companies that are drunk on NPS, I have to say that the Net Promoter Score, as it is currently implemented in many companies, is seriously broken. Or, to be more accurate, the NPS is being misused.
NPS was “invented” in 2003 by Fred Reichheld of Bain & Company, and for reasons that defy human understanding, has become the gold standard in customer satisfaction metrics. In its purest and simplest form, it seeks the answer to one question: “How likely is it that you would recommend our company/product/service to a friend or colleague?” Survey respondents are offered a rating scale from zero (not at all likely) to 10 (highly likely).
- Anyone offering a rating of 9 or 10 is a Promoter.
- Anyone scoring a 7 or 8 is a Passive.
- Anyone scoring between zero and 6 is a Detractor.
To get your NPS, first count the number of responses (not the sum of the scores!) in each group, and convert each count to a percentage of total responses. Then throw out the Passives percentage, and subtract the Detractors percentage from the Promoters percentage. The result is your Net Promoter Score. You can calculate NPS numbers at the NPS Score website.
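The calculation above is simple enough to sketch in a few lines of Python (my own illustration of the standard formula, not part of any official NPS tooling):

```python
def nps(scores):
    """Compute the Net Promoter Score from a list of 0-10 ratings.

    NPS counts responses per bucket (it is not a weighted average of
    the scores), then returns %Promoters - %Detractors.
    """
    if not scores:
        raise ValueError("need at least one response")
    promoters = sum(1 for s in scores if s >= 9)   # 9 or 10
    detractors = sum(1 for s in scores if s <= 6)  # 0 through 6
    # Passives (7 or 8) are not counted, but they stay in the total.
    total = len(scores)
    return round(100 * (promoters - detractors) / total, 1)

# 5 Promoters, 3 Passives, 2 Detractors out of 10 responses:
print(nps([10, 9, 9, 10, 9, 8, 7, 7, 6, 3]))  # 50.0% - 20.0% = 30.0
```

Note that the score can range from -100 (everyone a Detractor) to +100 (everyone a Promoter), which matters for the compensation-plan discussion below.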
If you are looking for a method to determine customer satisfaction, I suppose this method would be as good as any, as long as you were happy with the relative strictness of the process. You survey your customers and let the chips fall where they may. You use this information to improve your customer-facing systems and processes, and to train your customer service staff. As long as you resist the urge to game the system to improve your results, this can be a reasonably accurate way to gather customer satisfaction information.
But let’s think back to another scoring system we are all familiar with – the grading system used in most schools. If the NPS method were used, any score of six or less would be a fail, scores of 7 and 8 would be the equivalent of a C, and any score of 9 or 10 would be an A. How many of us would have graduated under that sort of grading system?
Unfortunately, your customers don’t understand your playbook, and so NPS numbers can be very low. Left to their own devices, many customers answer surveys in the mid-range of 5, 6, or 7. Two of those scores detract, and one has neither a positive nor a negative effect on the score.
Another statistical problem with the NPS system is that a rating of 6 has EXACTLY the same weight as a rating of zero. When calculating your Detractors, you add up all the responses below 7 for a total number of responses in the Detractor category. This is not a weighted average based on summing the scores; you are just counting the number of respondents below a 7. Ten customers giving you a zero and ten customers giving you a six produce the same result in the NPS system.
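To make that equivalence concrete, here is the Detractor count in a couple of lines (illustrative only; the bucket cutoff is the standard NPS one):

```python
def detractor_pct(scores):
    """Percentage of responses that count as Detractors (0-6)."""
    # NPS counts heads, not points: any score of 0 through 6
    # contributes exactly one Detractor, regardless of its value.
    return 100 * sum(1 for s in scores if s <= 6) / len(scores)

print(detractor_pct([0] * 10))  # 100.0
print(detractor_pct([6] * 10))  # 100.0 -- identical, despite very different scores
```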
Of course, most customers are not going to fill out the customer satisfaction survey at all. Think back to your own experiences. The complainers – the Detractors – almost always fill out the survey. Complainers love to complain, and they have learned that there may be a reward for complaining: a well-placed complaint is often rewarded with some sort of apology, and perhaps a special discount on a future purchase.
The Promoters also often answer the survey, but not at the higher rates that complainers do. All of these sampling flaws tend to increase the percentage of Detractors and lower the NPS.
The Passives do have an effect on the NPS even though they are not counted directly: their responses are included in the total used to calculate the percentages, so a large number of Passives dilutes both the Promoter and Detractor percentages and pulls the score toward zero.
The flaw here is this – in the real world, if I give you a zero, it means I am returning the product, seeking a full refund, and never coming back. A six actually means you did OK: nothing to write home about, adequately average, and I will return for more. A seven or above means you were exceptional and the experience was special in some way. I rarely give out a 9 or 10, because you have to be PERFECT to earn that score, and nobody is perfect.
This is the point where most NPS programs go off the rails. NPS scores come in lower than desired, so in an effort to improve customer satisfaction, customer-facing teams are given some form of incentive compensation tied to the company’s NPS results. Often the low end of the compensation target begins at a score of 60, with rewards increasing as the score climbs. But for all the reasons mentioned above, the NPS scores stay doggedly in the 40-to-59 range, and customer service representatives and their managers find it impossible to earn the incentive compensation.
The problem is those pesky customers don’t understand the monetary importance of the survey, or how to properly answer the questions. Ever had a customer service experience that ended with the representative pleading with you to fill out the “short survey” and only give scores of 9 or 10? Ever wonder what that was about? Well, now you know.
And if your customer service team is doing anything like what I just described, the NPS you are “earning” is fictitious, or seriously altered, and no longer an accurate reflection of real customer satisfaction. It is a reflection of the skills your customer team has developed in coaching customer responses to ensure they earn the bonus.
In my own professional experience with my previous employer, part of my compensation was tied to the NPS scores from the students in my classes. The bonus was quarterly, and achieving an average NPS of 65 was required to earn any part of the bonus. Teaching the same class, often in back-to-back weeks, would generate wildly disparate results. One week I would get a 75, but the next only 35. Even when I started “coaching” the answers, at the suggestion of my manager, the results were still unreliably random. For instance, if I got 20 Passives, two Detractors, and two Promoters, the NPS score would be zero.
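Running those classroom numbers through the standard bucket-counting formula shows why (a quick sanity check using my own figures from above):

```python
# 2 Promoters, 20 Passives, 2 Detractors: 24 responses in total.
promoters, passives, detractors = 2, 20, 2
total = promoters + passives + detractors

score = 100 * (promoters - detractors) / total
print(round(score, 1))  # 8.3% Promoters - 8.3% Detractors = 0.0
```

Twenty mostly satisfied students who scored a 7 or 8 contribute nothing toward the 65 target; two unhappy ones cancel out the two delighted ones entirely.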
So my impression of the NPS system is that it is mostly being administered incorrectly, and the resulting NPS ratings are highly suspect. Having spent half of my career in sales and sales management, I know that good salespeople and customer service representatives will figure out how to manipulate any compensation plan to maximize their personal income. So will their managers, and anyone else whose compensation is tied to the NPS score. I don’t believe the NPS is an accurate reflection of real customer experience.
About the Author:
I am a cybersecurity and IT instructor, cybersecurity analyst, pen-tester, trainer, and speaker. I am an owner of the WyzCo Group Inc. In addition to consulting on security products and services, I also conduct security audits, compliance audits, vulnerability assessments, and penetration tests. I also teach Cybersecurity Awareness Training classes. I work as an information technology and cybersecurity instructor for several training and certification organizations. I have worked in corporate, military, government, and workforce development training environments. I am a frequent speaker at professional conferences such as the Minnesota Bloggers Conference, the Secure360 Security Conference (2016–2019), the (ISC)2 World Congress 2016, and the ISSA International Conference 2017, and at many local community organizations, including Chambers of Commerce, SCORE, and several school districts. I have been blogging on cybersecurity since 2006 at http://wyzguyscybersecurity.com