NPS, or Net Promoter Score, is a tool companies use to measure customer loyalty. It’s based on the question, “Would you recommend (company X) to a friend or colleague?” and is traditionally asked on a 0-10 scale (shorter scales are becoming more popular, however). Though respondents are only shown the extremes of the scale, their answers are broken into three categories: detractors (0-6), passives (7-8), and promoters (9-10). The percentage of detractors is subtracted from the percentage of promoters to yield the Net Promoter Score.
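In code, that calculation is a one-liner. A minimal sketch, assuming the conventional 0-10 cutoffs described above (detractors 0-6, passives 7-8, promoters 9-10):

```python
def nps(ratings):
    """Net Promoter Score: % promoters minus % detractors.

    Assumes the conventional 0-10 cutoffs:
    0-6 detractors, 7-8 passives, 9-10 promoters.
    """
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Ten hypothetical ratings: 3 promoters, 5 passives, 2 detractors
print(nps([10, 9, 9, 8, 8, 7, 7, 7, 5, 2]))  # → 10.0
```

Note that the five passives contribute nothing to the score; only the counts at the two ends matter.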
I love the question, but have reservations about how it’s used and downright loathe how NPS is calculated—7-ish.
The question “Would you recommend us to a friend or colleague?” is powerful because it captures, in one stroke, both satisfaction and loyalty. Among the 24+ million ratings we’ve received, it’s the single question that correlates most strongly with spend. In retail, customers who rate the question highly spend on average 26% more than others (we get all the transaction details associated with each rating, so we can actually link satisfaction to spend—no voodoo math here).
Of course, in isolation, the question is limited. It doesn’t tell you why, and it isn’t very actionable—you can’t go into stores and exhort associates to “make customers recommend us more!” Imagining that conversation just makes me sad.
But it is a monster for internal benchmarking, both over time and at the store level. Assuming adequate sample, the ebbs and flows of your customers’ ratings should mirror how effectively you’re driving their loyalty. So, for example, if a dip corresponds with an adjustment to your labor model or a category reset, that’s quite instructive. And, again, assuming adequate sample, stores with your highest scores are where your best practices and top performers are living. What are they doing to drive loyalty that others should be doing?
I like the question less for external benchmarking. Companies will generate their own scores by asking their customers directly and then rely on survey panels to gauge their competitors. In mixing methodologies, you introduce enough variables—timing, survey design, selection bias—to make the comparisons moot.
Compounding the issue is the fact that respondents tend to interpret the question very literally. I was recently talking to the head of CX at an exceedingly well-managed children’s clothing retailer who bemoaned that her score lagged retail apparel NPS benchmarks, despite incredible brand equity and outsized financial performance. We realized that many of her customers are in fact less likely to recommend her company simply because fewer people are in a position to act on a recommendation for a kids’ clothing company. We all wear clothes, but only some of us have kids. For internal benchmarking, variance in interpretation should impact your scores evenly. For external benchmarking, it will impact each company in your industry differently and introduce far too much noise into the data.
Honestly, we spend more time talking to our merchants about how an NPS score is calculated than we do discussing what we can learn from it. The math isn’t hard—it’s just different from the way we calculate everything else. What can be learned from a net score that can’t be learned from an average? From our analysis, nothing. Across all of our outlets, there’s a 93% correlation between their net and average scores (see below). We even found that net calculations tended to dampen trends, obscuring crucial insights. So if you want to hide a decline in your customers’ loyalty, go for it. If you want to catch the decline ASAP, we recommend an average.
That dampening effect makes sense when you think about it. A 0 has the exact same effect on your NPS as a 5 does—both count as detractors. So if your new labor model prompts someone who’d previously rated you a 5 to then rate you a 4, and then, as you make further tweaks, a 3, you’ll never see that you’re degrading that customer’s experience until they’re gone, and it shows up in your bottom line. Averages, however, catch that change and are thus far more instructive for internal benchmarking.
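You can see that blind spot with a toy example (hypothetical ratings, assuming the standard 0-6 detractor band): slide one customer from a 5 to a 4 to a 3 and the net score never budges, while the average registers every step.

```python
def nps(ratings):
    # % promoters (9-10) minus % detractors (0-6)
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

def avg(ratings):
    return sum(ratings) / len(ratings)

base = [9, 9, 8, 7]            # a small, fixed pool of other customers
for slipping in (5, 4, 3):     # one customer's rating eroding over time
    ratings = base + [slipping]
    print(f"rating {slipping}: NPS = {nps(ratings):+.0f}, average = {avg(ratings):.1f}")
```

The NPS stays at +20 for all three rounds because the slipping customer is a detractor from the start; the average falls from 7.6 to 7.2, flagging the decline while there’s still a customer to save.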
So how likely am I to recommend NPS to a friend or colleague? I love the question as a barometer for how you’re doing today versus how you were doing yesterday. But it’s too noisy for external benchmarking and the net aspect diminishes its value. So I’d give it a 7—“passive” per the net reading of the scale, which is a bizarre label for someone who’s just written 782 words on it.