In 2008 I was hired to solve a problem.
Now, almost 12 years later, the problem itself is no longer relevant[1]. But while digging around on an unrelated task today I found this chart, which still is. You should look at it now.

The scope of the problem is measured by the blue series on this chart. You should look at it again. Just look at it!
Both the blue series and the yellow series are net satisfaction (NSAT) scores. There’s a lot of context behind the numbers[2], but for the purposes of this post let’s say that on this scale anything over 150 is “time for a team party and a big round of bonuses” and anything under 100 is “you probably won’t include this job on your resume, and you’re thinking about this a lot because you’ve been sending your resume out a lot this week.”
There are two stories that leap out from this chart.
The first story is pretty obvious: something changed in FY06. That change had a dramatically negative impact on the blue series, and a small (and probably acceptable) negative impact on the yellow series.
The second story may not be as obvious, but it’s vitally important: the yellow series was being used to track the impact of the change. Something changed in FY06, and the people who made the change were measuring its impact.
They were tracking the wrong thing.
Until I joined the team, no one had a chart like this. It wasn’t that the blue series wasn’t being tracked – it was. It just wasn’t recognized as the true success metric until things were well into resume-polishing territory.[3]
The lesson here isn’t that someone made a bad decision and didn’t realize it. The lesson is that sometimes the metric you’re tracking doesn’t mean what you think it means.
As in my own story, the problem is usually quite obvious in retrospect, but it’s also usually quite opaque in the moment. Most large companies have a culture of measurement; it’s much rarer to see a culture that consistently questions those measurements. Although it may not work for everyone, I recommend using this three-year-old approach to defining your most important metrics.
I don’t mean that the approach is three years old. I mean that you should approach the problem like a three-year-old would: by repeatedly asking “why?”
When someone[4] suggests measuring something using a given metric, ask why. “Why do you think this is the right way to measure this thing?” When you get an answer, ask why again. “Why do you believe that?” Keep asking why – the more important the metric, the more times you should ask why and expect to get a well-considered answer[5]. And if the answers aren’t forthcoming or aren’t credible… that’s an important thing to recognize before you’ve invested too much in a project or solution, isn’t it?
[1] Which is why I’m not going to talk about the problem or the solution here, except in the most general, hand-wavey terms.
[2] You can read this article if you’re curious.
[3] I should also point out that I wasn’t the person who figured out that we’d been measuring the wrong thing. The person who hired me had figured it out, which was why I was hired. Credit where credit is due.
[4] This someone may or may not be you. But definitely question yourself in the same way, because it’s always hardest to see your own biases.
[5] The person who introduced me to this idea called it “five whys,” but I wouldn’t read too much into that specific number. He also never explained what he meant by this, and for months I thought he was referring to some five-word phrase where each word started with the letter Y. True story.
(A reader comment, responding to footnote 5)
The short answer on “five whys” is that it’s an iterative questioning technique commonly practiced in TQM and Six Sigma, and it’s a very effective strategy for digging down to root causes. The technique was popularized by some dude named Toyoda.
The number five is used because that was the typical number of iterations needed to produce a good answer. There are similar concepts: 5Ms (manufacturing), 8Ps (products), 4Ss (services). Again, the numbers aren’t fixed; what matters is the enumerations, which are almost always the best first cut at selecting dimensions for the data model, as well as related candidate KPIs.