You can’t avoid problems you can’t see

The last post was about the dangers inherent in measuring the wrong thing – choosing a metric that doesn’t truly represent the business outcome[1] you think it does. This post is about different problems – the problems that come up when you don’t truly know the ins and outs of the data itself… but you think you do.

This is another “inspired by Twitter” post – it is specifically inspired by this tweet (and corresponding blog post) from Caitlin Hudon[2]. It’s worth reading her blog post before continuing with this one – you go do that now, and I’ll wait.

Caitlin’s ghost story reminded me of a scary story of my own, back from the days before I specialized in data and BI. Back in the days when I was a werewolf hunter. True story.

Around 15 years ago I was a consultant, working on a project with a company that made point-of-sale hardware and software for the food service industry. I was helping them build a hosted solution for above-store reporting, so customers who had 20 Burger Hut or 100 McTaco restaurants[3] could get insights and analytics from all of them, all in one place. This sounds pretty simple in 2020, but in 2005 it was an exciting first-to-market offering, and a lot of the underlying platform technologies that we can take for granted today simply didn’t exist. In the end, we built a data movement service that took files produced by the in-store back-of-house system and uploaded them over a shared dial-up connection[4] from each restaurant to the data center where they could get processed and warehoused.

The analytics system supported a range of different POS systems, each of which produced files in different formats. This was a fun technical challenge for the team, but it was a challenge we expected. What we didn’t expect was the undocumented failure behavior of one of these systems. Without going into too much detail, this POS system would occasionally produce output files that were incomplete, but which did not indicate failure or violate any documented success criteria.

To make a long story short[5], because we learned about the complexities of this system very late in the game, we had some very unhappy customers and some very long nights. During a retrospective we engaged with one of the project sponsors for the analytics solution, because he had – years earlier – worked with the development group that built this POS system. (For the purposes of this story I will call the system “Steve” because I need a proper noun for his quote.)

The project sponsor reviewed all we’d done from a reliability perspective – all the validation, all the error handling, all the logging. He looked at this, then he looked at the project team and he said:

You guys planned for wolves. ‘Steve’ is werewolves.

Even after all these years, I still remember the deadpan delivery of this line. And it was so true.

We’d gone in thinking we were prepared for all of the usual problems – and we were. But we weren’t prepared for the horrifying reality of the data problems that were lying in wait. We weren’t prepared for werewolves.

Digging through my email from those days, I found a document I’d sent to this project sponsor, planning for some follow-up efforts, and was reminded that for the rest of the projects I did for this client, “werewolves” became part of the team vocabulary.


What’s the moral of this story? Back in 2008 I thought the moral was to test early and often. Although this is still true, I now believe that what Past Matthew really needed was a data catalog or data dictionary with information that clearly said DANGER: WEREWOLVES in big red letters.

This line from Caitlin’s blog post could not be more wise, or more true:

The best defense I’ve found against relying on an oral history is creating a written one.

The thing that ended up saving us back in 2005 was knowing someone who knew something – we happened to have a project stakeholder who had insider knowledge about a key data source and its undocumented behavior. What could have been better? Some actual <<expletive>> documentation.

Even in 2020, and even in mature enterprise organizations, having a reliable data catalog or data dictionary that is available to the people who could get value from it is still the exception, not the rule. Business-critical data sources and processes rely on tribal knowledge, time after time and team after team.

I won’t try to supplement or repeat the best practices in Caitlin’s post – they’re all important and they’re all good and I could not agree more with her guidance. (If you skipped reading her post earlier, this is the perfect time for you to go read it.) I will, however, supplement her wisdom with one of my favorite posts from the Informatica blog, from back in 2017.

I’m sharing this second link because some people will read Caitlin’s story and dismiss it because she talks about using Google Sheets. Some people will say “that’s not an enterprise data catalog.” Don’t be those people.

Regardless of the tools you’re using, and regardless of the scope of the data you’re documenting, some things remain universally true:

  • Tribal knowledge can’t be relied upon at any meaningful scale or across any meaningful timeline
  • Not all data is created equal – catalog and document the important things first, and don’t try to boil the ocean (there’s a small sketch of what a catalog entry might look like after this list)
  • The catalog needs to be known by and accessible to the people who need to use the data it describes
  • Someone needs to own the catalog and keep it current – if its content is outdated or inaccurate, people won’t trust it, and if they don’t trust it they won’t use it
  • Sooner or later you’ll run into werewolves of your own, and unless you’re prepared in advance the werewolves will eat you
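
Caitlin’s advice about written history also got me thinking about what a “DANGER: WEREWOLVES” entry might actually look like. Here’s a minimal sketch – the fields, names, and checks are all hypothetical assumptions on my part, and your catalog might just as easily live in a Google Sheet or an enterprise tool – but even something this small would have told Past Matthew where the monsters were hiding:

```python
# A minimal, hypothetical data catalog entry - the field names and values are
# illustrative assumptions, not a real schema or a real system.
catalog_entry = {
    "source_name": "Steve POS daily export",             # hypothetical source name
    "owner": "pos-integration-team@example.com",          # who to ask when things get weird
    "refresh": "nightly upload from each store over dial-up",
    "documented_success_criteria": "file present, header and trailer records valid",
    "known_issues": [
        "DANGER: WEREWOLVES - files can be silently incomplete even when all "
        "documented success criteria are met; cross-check record counts against "
        "in-store totals before trusting the data."
    ],
    "last_reviewed": "2008-10-31",
}

def print_warnings(entry):
    """Surface the known issues before anyone builds on top of this source."""
    for issue in entry.get("known_issues", []):
        print(f"[{entry['source_name']}] {issue}")

print_warnings(catalog_entry)
```

The specific format matters far less than the fact that the warning is written down somewhere the next person can actually find it.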

When I started to share this story I figured I would find a place to fit in a “unless you’re careful, your data will turn into a house when the moon is full” joke without forcing it too much, but sadly this was not the case. Still – who doesn’t love a good data werehouse joke?[6]

Maybe next time…


[1] Or whatever it is you’re tracking. You do you.

[2] Apparently I started this post last Halloween. Have I mentioned that the past months have been busy?

[3] Or Pizza Bell… you get the idea.

[4] Each restaurant typically had a single “data” phone line that used the same modem for processing credit card transactions. I swear I’m not making this up.

[5] Or at least short-ish. Brevity is not my forte.

[6] Or this data werehouse joke, for that matter?

Viral adoption: Self-service BI and COVID-19

I live 2.6 miles (4.2 km) from the epicenter of the coronavirus outbreak in Washington state. You know, the nursing home that’s been in the news, where over 10 people have died, and dozens more are infected.[1]

As you can imagine, this has started me thinking about self-service BI.

Where can I find information I can trust?[2]
When the news started to come out covering the US outbreak, there was something I immediately noticed: authoritative information was very difficult to find. Here’s a quote from that last link.

This escalation “raises our level of concern about the immediate threat of COVID-19 for certain communities,” Dr. Nancy Messonnier, director of the CDC’s National Center for Immunization and Respiratory Diseases, said in the briefing. Still, the risk to the general public not in these areas is considered to be low, she said.

That’s great, but what about the general public in these areas?

What about me and my family?

When most of what I saw on Twitter was people making jokes about Jira tickets[3], I was trying to figure out what was going on, and what I needed to do. What actions should I take to stay safe? What actions were unnecessary or unhelpful?

Before I could answer these questions, I needed to find sources of information. This was surprisingly difficult.

Specifically, I needed to find sources of information that I could trust. There was already a surge in misinformation, some of it presumably well-intentioned, and some from deliberately malicious actors. I needed to explore, validate, confirm, cross-check, act, and repeat. And I was doing this while everyone around me seemed to be treating the emerging pandemic as a joke or a curiosity.

I did this work and made my decisions because I was a highly-motivated stakeholder, while others in otherwise similar positions were farther away from the problem, and were naturally less motivated at the time.[4]

And this is what got me thinking about self-service BI.

In many organizations, self-service BI tools like Power BI will spread virally. A highly-motivated business user will find a tool, find some data, explore, iterate, refine, and repeat. They will work with untrusted – and sometimes untrustworthy – data sources to find the information they need to use, and to make the decisions they need to make. And they do it before people in similar positions are motivated enough to act.

But before long, scraping together whatever data is available isn’t enough anymore. As the number of users relying on the insights being produced increases – even if the insights are being produced by a self-service BI solution – the need for trusted data increases as well.

Where an individual might successfully use disparate unmanaged sources, a population needs a trusted source of truth.

At some point a central authority needs to step up, to make available the data that can serve as that single source of truth. This is easier said than done[5], but it must be done. And this isn’t even the hard part.

The hard part is getting everyone to stop using the unofficial and untrusted sources that they’ve been using to make decisions, and to use the trusted source instead. This is difficult because these users are invested in their current sources, and believe that they are good enough. They may not be ideal, but they work, right? They got me this far, so why should I have to stop using them just because someone says so?

This brings me back to those malicious actors mentioned earlier. Why would someone deliberately share false information about public health issues when lies could potentially cost people their lives? They would do it when the lies help advance an agenda they value more than they value other people’s lives.

In most business situations, lives aren’t at stake, but people still have their own agendas. I’ve often seen situations where the lack of a single source of truth allows stakeholders to present their own numbers, skewed to make their efforts look more successful than they actually are. Some people don’t want to have to rebuild their reports – but some people want to use falsified numbers so they can get a promotion, or a bonus, or a raise.

Regardless of the reason for using untrusted sources, their use is damaging and should be reduced and eliminated. This is true of business data and analytics, and it is true of the current global health crisis. In both arenas, let’s all be part of the solution, not part of the problem.

Let us be a part of the cure, never part of the plague – we’ll only be remembered for what we create.[6]


[1] Before you ask, yes, my family and I are healthy and well. I’ve been working from home for over a week now, which is a nice silver lining; I have a small but comfortable home office, and can avoid the obnoxious Seattle-area commute.

[2] This article is the best single source I know of. It’s not an authoritative source for the subject, but it is aggregating and citing authoritative sources and presenting their information in a form closer to the solution domain than to the problem domain.

[3] This is why I’ve been practicing social media distancing.

[4] This is where the “personal pandemic parable” part of the blog post ends. From here on it’s all about SSBI. If you’re actually curious, I erred on the side of caution and started working from home and avoiding crowds before it was recommended or mandated. I still don’t know if all of the actions I’ve taken were necessary, but I’m glad I took them and I hope you all stay safe as well.

[5] As anyone who has ever implemented a single source of truth for any non-trivial data domain can attest.

[6] You can enjoy the lyrics even if Kreator’s awesome music isn’t to your taste.

Real customers, real stories

This is my personal blog – I try to be consistently explicit in reminding all y’all about this when I post about topics that are related to my day job as a program manager on the Power BI CAT team. This is one of those posts.

If I had to oversimplify what I do at work, I’d say that I represent the voice of enterprise Power BI customers. I work with key stakeholders from some of the largest companies in the world, and ensure that their needs are well-represented in the Power BI planning and prioritization process, and that we deliver the capabilities that these enterprise customers need[1].

Looking behind this somewhat grandiose summary[2], a lot of what I do is tell stories. Not my own stories, mind you – I tell the customers’ stories.

Image by Daria Głodowska from Pixabay
It was the best of clouds, it was the worst of clouds.

On an ongoing basis, I ask customers to tell me their stories, and I help them along by asking these questions:

  • What goals are you working to achieve?
  • How are you using Power BI to achieve these goals?
  • Where does Power BI make it hard for you to do what you need to do?

When they’re done, I have a pretty good idea of what’s going on, and I do a bunch of work[3] to make sure that all of these stories are heard by the folks responsible for shipping the features that will make these customers more successful.

Most of the time these stories are never shared outside the Power BI team, but on occasion there are customers who want to share their stories more broadly. My amazing teammate Lauren has been doing the heavy lifting[4] in getting them ready to publish for the world to see, and yesterday the fourth story from her efforts was published.

You should check them out:

  1. Metro Bank: Metro Bank quickly delivers business efficiency gains without requiring involvement from IT
  2. Cummins: Cummins uses self-service BI to increase productivity and reduce unnecessary costs
  3. Veolia: Environmental services company builds sustainable, data-driven solutions with Power BI and Azure
  4. Avanade: Microsoft platform–focused IT consulting company innovates with Power BI and Azure to improve employee retention
  5. Cerner: Global healthcare solutions provider moves to the cloud for a single source of truth in asset and configuration management

Update: Apparently the Cerner story was getting published while I was writing this post. Added to the list above.

I know that some people will look at these stories and discount them as marketing – there’s not a lot I can do to change that – but these are real stories that showcase how real customers are overcoming real challenges using Power BI and Azure. Being able to share these stories with the world is very exciting for me, because it’s an insight into the amazing work that these customers are doing, and how they’re using Power BI and Azure services to improve their businesses and to make people’s lives better. They’re demonstrating the art of the possible in a way that is concrete and real.

And for each public story, there are scores of stories that you’ll probably never hear. But the Power BI team is listening, and as long as they keep listening, I’ll keep helping the customers tell their stories…


[1] This makes me sound much more important than I actually am. I should ask for a raise.

[2] Seriously, if I do this, shouldn’t I be a VP or Partner or something?

[3] Mainly boring work that is not otherwise mentioned here.

[4] This is just one more reason why having a diverse team is so important – this is work that would be brutally difficult for me, and she makes it look so easy!


One diagram to rule them all

A few weeks back MVP Paul Turley blogged on Power Query performance and diagnostics. It was a good, useful post, but I wasn’t really the target audience and I probably would have forgotten about it if it weren’t for one thing.

This diagram.

It really says it all, doesn’t it?

Look at it.

Look at it again, and pause to thoughtfully consider its elegance and beauty.

In the time since Paul shared this post, I’ve been involved in any number of conversations[1] where customer stakeholders had questions about Power BI application performance. This type of conversation isn’t particularly new, but now I’ve started using this diagram[2] as a point of reference.

The results have been very positive. Although nothing in the diagram is new or particularly interesting on its own, having this simple visual reference for the components that make up the canonical end-to-end flow in a Power BI application has made my conversations more useful and productive. Less time is required to get all stakeholders to a point of shared understanding – more time can be devoted to identifying and solving the problem.

I don’t know if Paul truly appreciates the beauty of what he’s created. But I do. And you should too.


[1] In case you’ve been wondering why my blog and YouTube output has dried up this month, it’s because real life has been kicking my ass. I think I can finally see the light at the end of the tunnel, so hopefully we’ll be back with regular content before too long. Hopefully.

[2] This beautiful, simple, elegant diagram.

Have you looked at the Power BI roadmap lately?

In case you missed it, Microsoft has published the “2020 release wave 1” release plan for the Power Platform, including Power BI.

You can find the goodness here: Power Platform: 2020 release wave 1 plan.

I have the map, and the road… where are dataflows on this thing?

Even though you won’t see the term “roadmap” anywhere in the release plan[1] docs, this is how I think of them – because they’re the best, most current, and most complete public view of what Microsoft is planning for Power BI and the rest of the Power Platform.

Check it out today, and check back in from time to time – the release plan is updated periodically[2] as the teams have more clarity and detail to share.


[1] Yes, these were called “release notes” not too long ago. No, I don’t know why picking a name and sticking with it is so hard. Yes, I will do my best to call these “roadmap” even though this isn’t their official name. Hashtag power rebel.

[2] I think the docs team publishes updates every week, but not every article gets modified in each update. I’m also not 100% sure about the weekly publishing schedule, which is why I buried this in a footnote that no one will actually read.

Power BI and ACLs in ADLSg2

In addition to using Azure Data Lake Storage Gen2 as the location for Power BI dataflows data, Power BI can also use ADLSg2 as a data source. As organizations choose ADLSg2 as the storage location for more and more data, this capability is key to enabling analysts and self-service BI users to get value from the data in the lake.

Oh buoy, that is one big data lake!

But how do you do this in as secure a manner as possible, so that the right users have the minimum necessary permissions on the right data?

The short answer is that you let the data source handle secure access to the data it manages. ADLSg2 has a robust security model, which supports both Azure role-based access control (RBAC) and POSIX-like access control lists (ACLs)[1].

The longer answer is that this robust security model may make it more difficult to know how to set up permissions in the data lake to meet your analytics and security requirements.

Earlier this week I received a question from a customer on how to get Power BI to work with data in ADLSg2 that is secured using ACLs. I didn’t know the answer, but I knew who would know, and I looped in Ben Sack from the dataflows team. Ben answered the customer’s questions and unblocked their efforts, and he said that I could turn them into a blog post. Thank you, Ben![2]

Here’s what you should know:

1 – If you’re using ACLs, you must specify at least a filesystem name in the URL you load in the connector (the same applies if you access ADLS Gen2 via the API or any other client).

i.e. Path in Power BI Connector must at least be: https://storageaccountname.dfs.core.windows.net/FileSystemName/

2 – To read the contents of a file, all parent folders and the filesystem must have the “x” (execute) ACL, and the file itself must have the “r” (read) ACL.

i.e. if you want to access the file: https://StorageAccountName.dfs.core.windows.net/FileSystemName/SubFolder1/File1.csv

3 – To list the files in a folder, all parent folders and the filesystem must have the “x” ACL, and the immediate parent folder must also have the “r” ACL.

i.e. if you want to view and access the files in this subfolder: https://StorageAccountName.dfs.core.windows.net/FileSystemName/SubFolder1/
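
To make points 2 and 3 (and the groups advice in point 5 below) a bit more concrete, here’s a minimal sketch using the azure-storage-file-datalake Python SDK. The account, filesystem, folder, and file names are the placeholders from the URLs above, and the group object ID is made up – treat this as an illustration of which paths need which ACL entries, not as a production script:

```python
# Minimal sketch - assumes the azure-storage-file-datalake and azure-identity
# Python packages; all names and the group object ID are placeholders.
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://storageaccountname.dfs.core.windows.net",
    credential=DefaultAzureCredential(),
)
fs = service.get_file_system_client("FileSystemName")

GROUP_ID = "00000000-0000-0000-0000-000000000000"  # placeholder AAD group object ID

def add_acl_entry(path_client, entry):
    """Append one ACL entry to a path's existing ACL (sketch only - no deduplication)."""
    current_acl = path_client.get_access_control()["acl"]
    path_client.set_access_control(acl=current_acl + "," + entry)

# Point 2: the filesystem root and every parent folder need "x" so the group can
# traverse the path, and the file itself needs "r" so its contents can be read.
# Point 3: giving the immediate parent "r-x" (instead of just "--x") also allows
# listing the files it contains.
add_acl_entry(fs.get_directory_client("SubFolder1"), f"group:{GROUP_ID}:r-x")
add_acl_entry(fs.get_file_client("SubFolder1/File1.csv"), f"group:{GROUP_ID}:r--")
```

Using a group entry here rather than an individual user is exactly what point 5 below recommends – once the ACLs are in place, you manage access by changing group membership instead of touching the lake again.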

4 – Default ACLs are a great way to have ACLs propagate to child items, but they have to be set before creating subfolders and files; otherwise you need to explicitly set ACLs on each item.[3]

5 – If permission management is going to be dynamic, use groups as much as possible rather than assigning permissions to individual users[4]. First, ACL the groups on the folders/files, and then manage access via membership in the group.

6 – If you have an error accessing a path that is deep in the filesystem, work your way down from the filesystem level, fixing ACL settings at each step.

i.e. if you are having trouble accessing https://StorageAccountName.dfs.core.windows.net/FileSystemName/SubFolder1/SubFolder2/File(s)

First try: https://StorageAccountName.dfs.core.windows.net/FileSystemName

Then: https://StorageAccountName.dfs.core.windows.net/FileSystemName/SubFolder1

And so on.
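
If you’d rather script that top-down walk than click through it, a rough sketch (reusing the assumed SDK client and placeholder names from the snippet above) might look like this:

```python
# Rough sketch of the top-down walk in point 6. Check the filesystem root's ACL
# first (Azure Storage Explorer works well for that), then walk each folder level
# and look for the first one missing the expected "x" (or "r") entry.
path = ""
for segment in ["SubFolder1", "SubFolder2"]:
    path = f"{path}/{segment}".lstrip("/")
    acl = fs.get_directory_client(path).get_access_control()["acl"]
    print(f"{path}: {acl}")
```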

Update: James Baker, a Program Manager on the Azure Storage team, has published on GitHub a PowerShell script to recursively set ACLs. Thanks to Simon for commenting on this post to make me aware of it, to Josh from the Azure support team for pointing me to the GitHub repo, and of course to James for writing the actual script!


[1] This description is copied directly from the ADLSg2 documentation, which you should also read before acting on the information in this post.

[2] Disclaimer: This post is basically me using my blog as a way to get Ben’s assistance online so more people can get insights from it. If the information is helpful, all credit goes to Ben. If anything doesn’t work, it’s my fault. Ok, it may also be your fault, but it’s probably mine.

[3] This one is very important to know before you begin, even though it may be #3 on the list.

[4] This is a best practice pretty much everywhere, not just here.