Back in 2020 I wrote a post titled “Successfully measuring / measuring success” that talked about a problem without meaningfully describing it, because at the time I didn’t believe the problem itself was particularly relevant. I was more interested in the pattern, which I thought was “measuring the wrong thing.”
In recent weeks I’ve found myself sharing this post more frequently, and have come to believe that the underlying pattern is actually something else. Let’s look at my beautiful 2010-era chart one more time.
In this chart, both series represent NSAT scores – basically they’re saying “this is how happy our customers are with a thing.” This sounds pretty simple, but the tricky part is in the details. Who are “our customers” and what is the “thing” they’re happy or unhappy with?
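If it helps to make this concrete, here’s a minimal sketch of how an NSAT-style score is often computed, assuming the commonly cited definition of NSAT = %VSAT − %VDSAT + 100 on a four-point survey scale. The scale, the weighting, and the sample numbers below are all illustrative assumptions, not the actual survey behind the chart.

```python
from collections import Counter

def nsat(responses):
    """Compute an NSAT-style score from responses on a four-point scale:
    'VSAT' (very satisfied), 'SAT', 'DSAT', 'VDSAT' (very dissatisfied)."""
    counts = Counter(responses)
    total = sum(counts.values())
    pct_vsat = 100.0 * counts["VSAT"] / total
    pct_vdsat = 100.0 * counts["VDSAT"] / total
    # Commonly cited definition: 100 + %VSAT - %VDSAT (so scores range 0..200)
    return 100 + pct_vsat - pct_vdsat

# Illustrative (made-up) survey results for the same product, measured
# against two different questions/populations:
shipped_to_partners = ["VSAT"] * 20 + ["SAT"] * 50 + ["DSAT"] * 20 + ["VDSAT"] * 10
received_by_customers = ["VSAT"] * 55 + ["SAT"] * 35 + ["DSAT"] * 8 + ["VDSAT"] * 2

print(nsat(shipped_to_partners))    # 110.0 -- satisfaction with what we shipped
print(nsat(received_by_customers))  # 153.0 -- satisfaction with what customers got
```

The point isn’t the formula – it’s that the same metric computed over two different populations can tell two very different stories, which is exactly what the two series in the chart are doing.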
In the context of the chart above, the yellow series was “customer satisfaction with what they got.” The blue series was “customer satisfaction with what we shipped.” Implied in these descriptions, and in the difference between the two series, is that something happened between the time we shipped something and the time the customers got something – and that what happened was key to the customers’ satisfaction.
Without going into too many details, we were shipping a product that was used by partners, and those partners used our product to deliver an experience to their customers. In the FY06 timeframe we started to change the product that we shipped. The partners still used the updated product to deliver an experience that met their customers’ needs – they just had to do a little more work to keep their customers happy. As the product changes continued, the partner load increased: they had to do more and more to fix what we gave them and to keep their customers happy. You can see the story play out in the blue series in the chart above.
We were looking at the yellow series, and falsely conflating “customer satisfaction” with “customer satisfaction with what we had shipped to partners.” We didn’t truly appreciate the importance of the party in the middle and their impact, and that ended up causing no end of problems. We were measuring customer satisfaction, but failing to appreciate what it was that customers were satisfied with – and how little that satisfaction related to what we had created.
And this is the pattern I’ve been seeing more often lately.
In my recent post on the role of the Power BI CAT team and how that role has parallels with Power BI centers of excellence, I described a success pattern where a central team serves as a value-add bridge between people with problems and people with solutions.
This pattern is one that I’ve seen provide significant value in multiple contexts… but it also introduces the risk of measuring the wrong things and overlooking real problems. This is a side effect of the value that the central team provides: customers are interacting with the work of the central team in addition to the work of the solution team, and it may be difficult for them to understand which parts of their experience depend on which factors.
In this situation there is a risk of the solution team overlooking or failing to appreciate and prioritize the problems their customers are experiencing. This is a problem that the “curate” function in the diagram above is designed to mitigate, but the risk is real, and the mitigation takes ongoing effort.
When a member of a solution team works with customers directly, it’s hard to overlook their challenges and pain. When that team member hears about customer challenges from an intermediary, the immediacy and impact can be lost. This effect is human nature, and the ongoing effort of the central team to curate customer feedback is vital to counteract it.
As mentioned at the top of the article, I’ve been seeing this pattern more often recently: a solution team fails to recognize problems or opportunities because it’s looking at the wrong thing. The team focuses on what its end customer is saying, instead of looking at the bigger picture and the downstream value chain that includes them and their customers, but isn’t limited to those two parties. It’s easy to mistake “customer being happy” for “customer being happy with what we produce” if you’re not keeping an eye on the big picture.
If you’re in a situation like this – whether you’re a member of a solution team, a central team, or a customer/consumer team – you’ll do well to keep an eye out for this problem playing out.
 You can read this article if you’re curious about NSAT as a metric and what the numbers mean, and are too lazy to read the 2020 blog post I linked to above.
 I’m judging you, but not too harshly.
 I’m being deliberately vague here, trying to find a balance between establishing enough context to make a point and not sharing any decade-plus-old confidential information.
 Or in some circumstances, instead of.
 Please keep in mind that in most circumstances the central team is introduced when direct ongoing engagement between the solution team and customers can’t effectively scale. If you’re wondering why you’d want a central team in the first place, it may be because your current scenario doesn’t need it. If this is the case, please keep reading so you’ll be better prepared when your scenario gets larger or more complex and you need to start thinking about different solutions.