You can’t avoid problems you can’t see

The last post was about the dangers inherent in measuring the wrong thing – choosing a metric that doesn’t truly represent the business outcome[1] you think it does. This post is about different problems – the problems that come up when you don’t truly know the ins and outs of the data itself… but you think you do.

This is another “inspired by Twitter” post – it is specifically inspired by this tweet (and corresponding blog post) from Caitlin Hudon[2]. It’s worth reading her blog post before continuing with this one – you go do that now, and I’ll wait.

Caitlin’s ghost story reminded me of a scary story of my own, back from the days before I specialized in data and BI. Back in the days when I was a werewolf hunter. True story.

Around 15 years ago I was a consultant, working on a project with a company that made point-of-sale hardware and software for the food service industry. I was helping them build a hosted solution for above-store reporting, so customers who had 20 Burger Hut or 100 McTaco restaurants[3] could get insights and analytics from all of them, all in one place. This sounds pretty simple in 2020, but in 2005 it was an exciting first-to-market offering, and a lot of the underlying platform technologies that we can take for granted today simply didn’t exist. In the end, we built a data movement service that took files produced by the in-store back-of-house system and uploaded them over a shared dial-up connection[4] from each restaurant to the data center where they could get processed and warehoused.

The analytics system supported a range of different POS systems, each of which produced files in different formats. This was a fun technical challenge for the team, but it was a challenge we expected. What we didn’t expect was the undocumented failure behavior of one of these systems. Without going into too much detail, this POS system would occasionally produce output files that were incomplete, but which did not indicate failure or violate any documented success criteria.
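
(As an illustrative aside: the real file formats are long gone and were never public, so the layout assumed below is invented. But “wolf-level” validation for a feed like this conceptually looks something like the following sketch – check the structure, check the trailer, check the counts. The werewolf was that “Steve” could produce files that passed every documented check of this kind and were still missing data.)

```python
# Hypothetical sketch only: the real POS export formats aren't public, so this
# invented layout (a header line, detail rows, and a trailer carrying a row
# count) just illustrates ordinary "wolf-level" file validation.

def looks_complete(path: str) -> bool:
    """Return True if an export file passes basic structural checks."""
    with open(path, encoding="utf-8") as f:
        lines = [line.rstrip("\n") for line in f if line.strip()]

    if len(lines) < 2 or not lines[0].startswith("HDR"):
        return False                          # missing or malformed header
    if not lines[-1].startswith("TRL"):
        return False                          # missing trailer: truncated file

    expected = int(lines[-1].split("|")[1])   # trailer claims a detail row count
    return len(lines) - 2 == expected         # detail rows between HDR and TRL

# The werewolf: a file can pass every check like this and still be incomplete,
# because the upstream system wrote a structurally "valid" file from partial data.
```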

To make a long story short[5], because we learned about the complexities of this system very late in the game, we had some very unhappy customers and some very long nights. During a retrospective we engaged with one of the project sponsors for the analytics solution, because he had – years earlier – worked with the development group that built this POS system. (For the purposes of this story I will call the system “Steve” because I need a proper noun for his quote.)

The project sponsor reviewed all we’d done from a reliability perspective – all the validation, all the error handling, all the logging. He looked at this, then he looked at the project team and he said:

You guys planned for wolves. ‘Steve’ is werewolves.

Even after all these years, I still remember the deadpan delivery of this line. And it was so true.

We’d gone in thinking we were prepared for all of the usual problems – and we were. But we weren’t prepared for the horrifying reality of the data problems that were lying in wait. We weren’t prepared for werewolves.

Digging through my email from those days, I found a document I’d sent to this project sponsor, planning for some follow-up efforts, and was reminded that for the rest of the projects I did for this client, “werewolves” became part of the team vocabulary.


What’s the moral of this story? Back in 2008 I thought the moral was to test early and often. Although this is still true, I now believe that what Past Matthew really needed was a data catalog or data dictionary with information that clearly said DANGER: WEREWOLVES in big red letters.
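
What might a catalog entry like that actually contain? Here’s a minimal, purely hypothetical sketch – the names and fields are invented, and a wiki page or a spreadsheet row would serve just as well – to show how little it would have taken to write down the knowledge that eventually saved us:

```python
# A minimal, hypothetical data dictionary entry. The format doesn't matter;
# what matters is that the undocumented behavior is written down somewhere
# the next team will actually find it.

steve_pos_export = {
    "source":       "'Steve' POS nightly export",
    "owner":        "POS engineering team (hypothetical contact)",
    "refresh":      "nightly, per store, over a shared dial-up line",
    "format":       "delimited text: header, detail rows, trailer",
    "known_issues": [
        "DANGER: WEREWOLVES - files can be structurally valid but incomplete",
        "No upstream error is raised; completeness must be verified downstream",
    ],
}
```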

This line from Caitlin’s blog post could not be more wise, or more true:

The best defense I’ve found against relying on an oral history is creating a written one.

The thing that ended up saving us back in 2005 was knowing someone who knew something – we happened to have a project stakeholder who had insider knowledge about a key data source and its undocumented behavior. What could have been better? Some actual <<expletive>> documentation.

Even in 2020, and even in mature enterprise organizations, having a reliable data catalog or data dictionary that is available to the people who could get value from it is still the exception, not the rule. Business-critical data sources and processes rely on tribal knowledge, time after time and team after team.

I won’t try to supplement or repeat the best practices in Caitlin’s post – they’re all important and they’re all good and I could not agree more with her guidance. (If you skipped reading her post earlier, this is the perfect time for you to go read it.) I will, however, supplement her wisdom with one of my favorite posts from the Informatica blog, from back in 2017.

I’m sharing this second link because some people will read Caitlin’s story and dismiss it because she talks about using Google Sheets. Some people will say “that’s not an enterprise data catalog.” Don’t be those people.

Regardless of the tools you’re using, and regardless of the scope of the data you’re documenting, some things remain universally true:

  • Tribal knowledge can’t be relied upon at any meaningful scale or across any meaningful timeline
  • Not all data is created equal – catalog and document the important things first, and don’t try to boil the ocean
  • The catalog needs to be known by and accessible to the people who need to use the data it describes
  • Someone needs to own the catalog and keep it current – if its content is outdated or inaccurate, people won’t trust it, and if they don’t trust it they won’t use it
  • Sooner or later you’ll run into werewolves of your own, and unless you’re prepared in advance the werewolves will eat you

When I started to share this story I figured I would find a place to fit in a “unless you’re careful, your data will turn into a house when the moon is full” joke without forcing it too much, but sadly this was not the case. Still – who doesn’t love a good data werehouse joke?[6]

Maybe next time…


[1] Or whatever it is you’re tracking. You do you.

[2] Apparently I started this post last Halloween. Have I mentioned that the past months have been busy?

[3] Or Pizza Bell… you get the idea.

[4] Each restaurant typically had a single “data” phone line that used the same modem for processing credit card transactions. I swear I’m not making this up.

[5] Or at least short-ish. Brevity is not my forte.

[6] Or this data werehouse joke, for that matter?

Viral adoption: Self-service BI and COVID-19

I live 2.6 miles (4.2 km) from the epicenter of the coronavirus outbreak in Washington state. You know, the nursing home that’s been in the news, where over 10 people have died, and dozens more are infected.[1]

As you can imagine, this has started me thinking about self-service BI.

Where can I find information I can trust?[2]
When the news started to come out covering the US outbreak, there was something I immediately noticed: authoritative information was very difficult to find. Here’s a quote from that last link.

This escalation “raises our level of concern about the immediate threat of COVID-19 for certain communities,” Dr. Nancy Messonnier, director of the CDC’s National Center for Immunization and Respiratory Diseases, said in the briefing. Still, the risk to the general public not in these areas is considered to be low, she said.

That’s great, but what about the general public in these areas?

What about me and my family?

When most of what I saw on Twitter was people making jokes about Jira tickets[3], I was trying to figure out what was going on, and what I needed to do. What actions should I take to stay safe? What actions were unnecessary or unhelpful?

Before I could answer these questions, I needed to find sources of information. This was surprisingly difficult.

Specifically, I needed to find sources of information that I could trust. There was already a surge in misinformation, some of it presumably well-intentioned, and some from deliberately malicious actors. I needed to explore, validate, confirm, cross-check, act, and repeat. And I was doing this while everyone around me seemed to be treating the emerging pandemic as a joke or a curiosity.

I did this work and made my decisions because I was a highly-motivated stakeholder, while others in otherwise similar positions were farther away from the problem, and were naturally less motivated at the time.[4]

And this is what got me thinking about self-service BI.

In many organizations, self-service BI tools like Power BI will spread virally. A highly-motivated business user will find a tool, find some data, explore, iterate, refine, and repeat. They will work with untrusted – and sometimes untrustworthy – data sources to find the information they need to use, and to make the decisions they need to make. And they do it before people in similar positions are motivated enough to act.

But before long, scraping together whatever data is available isn’t enough anymore. As the number of users relying on the insights being produced increases – even if the insights are being produced by a self-service BI solution – the need for trusted data increases as well.

Where an individual might successfully use disparate unmanaged sources, a population needs a trusted source of truth.

At some point a central authority needs to step up, to make available the data that can serve as that single source of truth. This is easier said than done[5], but it must be done. And this isn’t even the hard part.

The hard part is getting everyone to stop using the unofficial and untrusted sources that they’ve been using to make decisions, and to use the trusted source instead. This is difficult because these users are invested in their current sources, and believe that they are good enough. They may not be ideal, but they work, right? They got me this far, so why should I have to stop using them just because someone says so?

This brings me back to those malicious actors mentioned earlier. Why would someone deliberately share false information about public health issues when lies could potentially cost people their lives? They would do it when the lies would help forward an agenda they value more than they value other people’s lives.

In most business situations, lives aren’t at stake, but people still have their own agendas. I’ve often seen situations where the lack of a single source of truth allows stakeholders to present their own numbers, skewed to make their efforts look more successful than they actually are. Some people don’t want to have to rebuild their reports – but some people want to use falsified numbers so they can get a promotion, or a bonus, or a raise.

Regardless of the reason for using untrusted sources, their use is damaging and should be reduced and eliminated. This is true of business data and analytics, and it is true of the current global health crisis. In both arenas, let’s all be part of the solution, not part of the problem.

Let us be a part of the cure, never part of the plague – we’ll only be remembered for what we create.[6]


[1] Before you ask, yes, my family and I are healthy and well. I’ve been working from home for over a week now, which is a nice silver lining; I have a small but comfortable home office, and can avoid the obnoxious Seattle-area commute.

[2] This article is the best single source I know of. It’s not an authoritative source for the subject, but it is aggregating and citing authoritative sources and presenting their information in a form closer to the solution domain than to the problem domain.

[3] This is why I’ve been practicing social media distancing.

[4] This is where the “personal pandemic parable” part of the blog post ends. From here on it’s all about SSBI. If you’re actually curious, I erred on the side of caution and started working from home and avoiding crowds before it was recommended or mandated. I still don’t know if all of the actions I’ve taken were necessary, but I’m glad I took them and I hope you all stay safe as well.

[5] As anyone who has ever implemented a single source of truth for any non-trivial data domain can attest.

[6] You can enjoy the lyrics even if Kreator’s awesome music isn’t to your taste.

Data culture and the centerline

I’m running behind on my own YouTube publishing duties[1], but that doesn’t keep me from watching[2] the occasional data culture YouTube video produced by others.

Like this one:

Ok… you may be confused. You may believe this video is not actually about data culture. This is an easy mistake to make, and you can be forgiven for making it, but the content of the video makes its true subject very clear:

A new technology is introduced that changes the way people work and live. This new technology replaces existing and established technologies; it lets people do what they used to do in a new way – easier, faster, and further. It also lets people do things they couldn’t do before, and opens up new horizons of possibility.

The technology also brings risk and challenge. Some of this is because of the new capabilities, and some is because of the collision[3] between the new way and the old way of doing things. The old way and the new way aren’t completely compatible, but they use shared resources and sometimes things go wrong.

At the root of these challenges is the fact that users are moving faster than the relevant authorities. Increasing numbers of people are seeing the value of the new technology, assuming the inherent risk[4], and embracing its capabilities while hoping for the best.

Different groups see the rising costs and devise solutions for these challenges. Some solutions are tactical, some are strategic. And eventually some champions emerge to push for the creation of standard solutions. Or standards plural, because there always seems to be more than one of those darned things.

Not everyone buys into the standards at first, but over time the standards are refined and… actually standardized.

This process doesn’t slow down the technology adoption. The process and the standards instead provide the necessary shape and structure for adoption to take place as safely as possible.

With the passage of time, users take the safety standards for granted just as much as they take the technology’s capabilities for granted… and can’t imagine using one without the other.

For the life of me I can’t imagine why they kept doubling down on the “lane markings” analogy, but I’m actually happy they did. This approach may get more people paying attention – I can’t find any other data culture videos on YouTube with 488K views…



[1] Part of this is because my wife has been out of town, and my increased parental responsibilities have reduced the free time I would normally spend filming and editing… but it’s mainly because I’m finding that talking coherently about data culture is harder for me than writing about data culture. I’ll get better, I assume. I hope.

[2] In this case, I watched while I was folding laundry. As one does.

[3] Yes, pun intended. No, I’m not sorry.

[4] Either through knowledge or through ignorance.

The Power BI Adoption Framework – it’s Power BI AF

You may have seen things that make you say “that’s Power BI AF” but none of them have come close to this. It’s literally the Power BI AF[1].

That’s right – this week Microsoft published the Power BI Adoption Framework on GitHub and YouTube. If you’re impatient, here’s the first video – you can jump right in. It serves as an introduction to the framework, its content, and its goals.

Without attempting to summarize the entire framework, this content provides a set of guidance, practices, and resources to help organizations build a data culture, establish a Power BI center of excellence, and manage Power BI at any scale.

Even though I blog a lot about Power BI dataflows, most of my job involves working with enterprise Power BI customers – global organizations with thousands of users across the business who are building, deploying, and consuming BI solutions built using Power BI.

Each of these large customers takes their own approach to adopting Power BI, at least when it comes to the details. But with very few exceptions[2], each successful customer will align with the patterns and practices presented in the Power BI Adoption Framework – and when I work with a customer that is struggling with their global Power BI rollout, their challenges are often rooted in a failure to adopt these practices.

There’s no single “right way” to be successful with Power BI, so don’t expect a silver bullet. Instead, the Power BI Adoption Framework presents a set of roles, responsibilities, and behaviors that have been developed after working with customers in real-world Power BI deployments.

If you look on GitHub today, you’ll find a set of PowerPoint decks broken down into five topics, plus a few templates.


These slide decks are still a little rough. They were originally built for use by partners who could customize and deliver them as training content for their customers[3], rather than for direct use by the general public, and as of today they’re still a work in progress. But if you can get past the rough edges, there’s definitely gold to be found. This is the same content I used when I put together my “Is self-service business intelligence a two-edged sword?” presentation earlier this year, and for the most part I just tweaked the slide template and added a bunch of sword pictures.

And if the slides aren’t quite ready for you today, you can head over to the official Power BI YouTube channel where this growing playlist contains bite-size training content to supplement the slides. As of today there are two videos published – expect much more to come in the days and weeks ahead.

The real heroes of this story[4] are Manu Kanwarpal and Paul Henwood. They’re both cloud solution architects working for Microsoft in the UK. They’ve put the Power BI AF together, delivered its content to partners around the world, and are now working to make it available to everyone.

What do you think?

To me, this is one of the biggest announcements of the year, but I really want to hear from you after you’ve checked out the Power BI AF. What questions are still unanswered? What does the AF not do today that you want or need it to do tomorrow?

Please let me know in the comments below – this is just a starting point, and there’s a lot that we can do with it from here…


[1] If you had any idea how long I’ve been waiting to make this joke…

[2] I can’t think of a single exception at the moment, but I’m sure there must be one or two. Maybe.

[3] Partners can still do this, of course.

[4] Other than you, of course. You’re always a hero too – never stop doing what you do.

BI is dead. Long live BI!

As I was riding the bus home from jury duty the other day[1] I saw this tweet come in from Eric Vogelpohl.

 

There’s a lot to unpack here, and I don’t expect to do it all justice in this post, but Eric’s thought-provoking tweet made me want to reply, and I knew it wouldn’t fit into 280 characters… but I can tackle some of the more important and interesting elements.

First and foremost, Eric tags me before he tags Marco, Chris, or Curbal. I am officially number one, and I will never let Marco or Chris forget it[2].

With that massive ego boost out of the way, let’s get to the BI, which is definitely dead. And also definitely not dead.

Eric’s post starts off with a bold and simple assertion: If you have the reactive/historical insights you need today, you have enough business intelligence and should focus on other things instead. I’m paraphrasing, but I believe this effectively captures the essence of his claim. Let me pick apart some of the assumptions I believe underlie this assertion.

First, this claim seems to assume that all organizations are “good w/ BI.” Although this may be true of an increasing number of mature companies, in my experience it is definitely not something that can be taken for granted. The alignment of business and technology, and the cultural changes required to initiate and maintain this alignment, are not yet ubiquitous.

Should they be? Should we be able to take for granted that in 2019 companies have all the BI they need? [3]

The second major assumption behind Eric’s first point seems to be that “good w/ BI” today translates to “good w/ BI” tomorrow… as if BI capabilities are a blanket solution rather than something scoped and constrained to a specific set of business and data domains. In reality[4], BI capabilities are developed and deployed incrementally based on priorities and constraints, and are then maintained and extended as the priorities and constraints evolve over time.

My job gives me the opportunity to work with large enterprise companies to help them succeed in their efforts related to data, business intelligence, and analytics. Many of these companies have built successful BI architectures and are reaping the benefits of their work. These companies may well be characterized as being “good w/ BI” but none of them are resting on their laurels – they are instead looking for ways to extend the scope of their BI investments, and to optimize what they have.

I don’t believe BI is going anywhere in the near future. Not only are most companies not “good w/ BI” today, the concept of being “good w/ BI” simply doesn’t make sense in the context in which BI exists. So long as business requirements and environments change over time, and so long as businesses need to understand and react, there will be a continuing need for BI. Being “good w/ BI” isn’t a meaningful concept beyond a specific point in time… and time never slows down.

If your refrigerator is stocked with what your family likes to eat, are you “good w/ food”? This may be the case today, but what about when your children become teenagers and eat more? What about when someone in the family develops food allergies? What about when one of your children goes vegan? What about when the kids go off to college? Although this analogy won’t hold up to close inspection[5] it hopefully shows how difficult it is to be “good” over the long term, even for a well-understood problem domain, when faced with easily foreseeable changes over time.

Does any of this mean that BI represents the full set of capabilities that successful organizations need? Definitely not. More and more, BI is becoming “table stakes” for businesses. Without BI it’s becoming more difficult for companies to simply survive, and BI is no longer a true differentiator that assures a competitive advantage. For that advantage, companies need to look at other ways to get value from their data, including predictive and prescriptive analytics, and the development of a data culture that empowers and encourages more people to do more things with more data in the execution of their duties.

And of course, this may well have been Eric’s point from the beginning…

 


[1] I’ve been serving on the jury for a moderately complex civil trial for most of August, and because the trial is in downtown Seattle during business hours I have been working early mornings and evenings in the office, and taking the bus to the courthouse to avoid the traffic and parking woes that plague Seattle. I am very, very tired.

[2] Please remind me to add “thought leader” to my LinkedIn profile. Also maybe something about blockchain.

[3] I’ll leave this as an exercise for the reader.

[4] At least in my reality. Your mileage may vary.

[5] Did this analogy hold up to even distant observation?

Are you building a BI house of cards?

Every few weeks I see someone asking about using Analysis Services as a data source for Power BI dataflows. Every time I hear this, I cringe, and then include advice like this[1] in my response.

Using Analysis Services as a data source is an anti-pattern – a worst practice. It is not recommended, and any solution built using this pattern is likely to produce dissatisfied customers. Please strongly consider using other data sources, likely the data sources on which the AS model is built.

 

There are multiple reasons for this advice.

 

Some reasons are technical. Extraction of large volumes of data is not what an Analysis Services model is designed for. Performance for the ETL process is likely to be poor, and you’re likely to end up with memory/caching issues on the Analysis Services server. Beyond this, AS models typically don’t include the IDs/surrogate keys that you need for data warehousing, so joining the AS data to other data sources will be problematic.[2]

 

For some specific examples and technical deep dives into how and why this is a bad idea, check out this excellent blog post from Shabnam Watson. The focus of the post is on SSAS memory settings, but it’s very applicable to the current discussion.

 

Some reasons for this advice are less technical, but no less important. Using analytics models as data sources for ETL processing is a strong code smell[3] (“any characteristic in the source code of a program that possibly indicates a deeper problem”) for business intelligence solutions.

 

Let’s look at a simple and familiar diagram:

 

[Diagram 1: good]

 

There’s a reason this left-to-right flow is the standard representation of BI applications: it’s what works. Each component has specific roles and responsibilities that complement each other, and which are aligned with the technology used to implement the component. This diagram includes a set of logical “tiers” or “layers” that are common in analytics systems, and which mutually support each other to achieve the systems’ goals.

Although there are many successful variations on this theme, they all tend to have this general flow and these general layers. Consider this one, for example:

 

[Diagram 2: ok]

This example has more complexity, but also has the same end-to-end flow as the simple one. This is pretty typical for scenarios where a single data warehouse and analytics model won’t fulfill all requirements, so the individual data warehouses, data marts, and analytics models each contain a portion – often an overlapping portion – of the analytics data.

Let’s look at one more:

[Diagram 3: trending badly]

This design is starting to smell. The increased complexity and blurring of responsibilities will produce difficulties in data freshness and maintenance. The additional dependencies, and the redundant and overlapping nature of the dependencies means that any future changes will require additional investigation and care to ensure that there are no unintended side effects to the existing functionality.

As an aside, my decades of working in data and analytics suggest that this care will rarely actually be taken. Instead, this architecture will be fragile and prone to problems, and the teams that built it will not be the teams who solve those problems.

And then we have this one[4]:

[Diagram 4: hard no]

This is what you get when you use Analysis Services as the data source for ETL processing, whether that ETL and downstream storage is implemented in Power BI dataflows or different technologies. And this is probably the best case you’re likely to get when you go down this path. Even with just two data warehouses and two analytics models in the diagram, the complex and unnatural dependencies are obvious, and are painful to consider.
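
To make the “house of cards” dependencies concrete, here’s a toy sketch – the component names are invented, and this is an illustration of the reasoning rather than a real lineage tool – that walks a dependency graph like the one above and flags any ETL step sitting downstream of an analytics model:

```python
# Toy illustration: model BI components and what they read from, then flag any
# ETL step that depends (directly or transitively) on an analytics model.
# All names are hypothetical; this just makes the "code smell" concrete.

from collections import deque

dependencies = {                        # component -> the components it reads from
    "Sales DW":           ["ERP system", "POS system"],
    "Sales model (AS)":   ["Sales DW"],
    "Dataflow (ETL)":     ["Sales model (AS)"],   # <-- the anti-pattern
    "Self-service model": ["Dataflow (ETL)"],
    "Report":             ["Self-service model"],
}

def upstream(component: str) -> set:
    """Return everything reachable upstream of the given component."""
    seen, queue = set(), deque(dependencies.get(component, []))
    while queue:
        node = queue.popleft()
        if node not in seen:
            seen.add(node)
            queue.extend(dependencies.get(node, []))
    return seen

for component in dependencies:
    if "(ETL)" in component:
        models = [u for u in upstream(component) if "(AS)" in u]
        if models:
            print(f"Code smell: {component} reads from analytics model(s): {models}")
```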

What would be better here?[5] As mentioned at the top of the post, the logical alternative is to avoid using the analytics model and to instead use the same sources that the analytics model already uses. This may require some refactoring to ensure that the duplication of logic is minimized. It may require some political or cross-team effort to get buy-in from the owners of the upstream systems. It may not be simple, or easy. But it is almost always the right thing to do.

Don’t take shortcuts to save days or weeks today that will cause you or your successors months or years to undo and repair. Don’t build a house of cards, because with each new card you add, the house is more and more likely to fall.

Update: The post above focused mainly on technical aspects of the anti-pattern, and suggests alternative recommended patterns to follow instead. It does not focus on the reasons why so many projects are pushed into the anti-pattern in the first place. Those reasons are almost always based on human – not technical – factors.

You should read this post next: http://workingwithdevs.com/its-always-a-people-problem/. It presents a delightful and succinct approach to deal with the root causes, and will put the post you just read in a different context.


[1] Something a lot like this. I copied this from a response I sent a few days ago.

[2] Many thanks to Chris Webb for some of the information I’ve paraphrased here. If you want to hear more from Chris on this subject, check out this session recording from PASS Summit 2017. The whole session is excellent; the information most relevant to this subject begins around the 26 minute mark in the recording. Chris also gets credit for pointing me to Shabnam Watson’s blog.

[3] I learned about code smells last year when I attended a session by Felienne Hermans at Craft Conference in Budapest. You can watch the session here. And you really should, because it’s really good.

[4] My eyes are itching just looking at it. It took an effort of will to create this diagram, much less share it.

[5] Yes, just about anything would be better.