One report for multiple audiences

Have you ever needed to build a Power BI application that makes insights and data available to multiple groups of users, where those user groups have some overlapping needs, but also some exclusive needs?

Maybe requirements like these?[1]

  • A dataset with sales data that needs to be used by store managers and district managers
  • A sales report that contains both store-level and district-level pages
  • Each district manager should see only data from stores in their districts
  • Each store manager should see only data from their store
  • District managers should see all pages
  • Store managers should see only store-level pages

Until you got to the last two requirements, you were probably thinking about row-level security, but then you may have started to have doubts. Hold that thought.

Image by Tayeb MEZAHDIA from Pixabay

First and foremost, protecting data at the source is the most reliable way to prevent unauthorized users from accessing what they are not entitled to see. Hiding elements in the UI layer is not a security mechanism and should never be treated as one.

Let’s make that stand out a little more with some fancy formatting.

Protecting data at the source is the most reliable way to prevent unauthorized users from accessing what they are not entitled to see. Hiding elements in the UI layer is not a security mechanism and should never be treated as one.

Sometimes security isn’t everything you need – sometimes you need a targeted or adaptive user experience as well. This just got a little easier with Power BI and some recently released capabilities for conditional formatting.

Here’s the short version:

Using some of the same techniques you use for row-level security, you define roles in your data model that determine which pages a given user can see. You hide the pages in your report, and you implement navigation controls using buttons on the report page surface. When you publish the report, you assign users and groups to those roles in the dataset, and role membership in turn drives the conditional formatting that controls report navigation.
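
To make that a bit more concrete, here’s a rough sketch of the kind of DAX involved. The UserPermissions table, its columns, and the measure name are illustrative assumptions on my part, not details taken from Gilbert’s or Marc’s posts – their write-ups (linked below) walk through the real implementation step by step.

  -- A minimal sketch, assuming a hypothetical UserPermissions table with
  -- [UserEmail] and [CanSeeDistrictPages] columns.
  --
  -- RLS filter expression on UserPermissions (defined under Manage roles):
  --   [UserEmail] = USERPRINCIPALNAME()
  --
  -- Measure used for the conditional formatting of a navigation button,
  -- for example to control the fill color or transparency of the button:
  Show District Nav =
  IF (
      COUNTROWS (
          FILTER (
              UserPermissions,
              UserPermissions[UserEmail] = USERPRINCIPALNAME ()
                  && UserPermissions[CanSeeDistrictPages] = TRUE ()
          )
      ) > 0,
      1,
      0
  )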

Here’s the long version: Read these blog posts from Gilbert Quevauvilliers and Marc Lelijveld.

Marc and Gilbert both go through step-by-step detail and each one explains the approach slightly differently, so I recommend reading each of their posts. They’ve saved me the trouble of writing up something detailed myself.

This approach gives you another tool in your Power BI tool belt, and may be useful if you’re trying to reduce the number of reports you manage and want to customize a single common report for multiple audiences.


[1] This is a generic example with generic data and roles, but the actual scenario details seem to be common across industries and data domains. I’ve heard variations on this theme from enough customers to believe that this is a common use case.

Foster Kittens and Managed Self-Service BI

My family is a foster family for cats and kittens from the cat shelter where we’ve adopted some of our own cats. Usually a litter will stay with us for a month or two, but it depends on the kittens themselves, and on external factors.

While the kittens are with us, it’s our responsibility to help them grow, both physically and socially. The experts at the shelter are always available if we need help, but for the most part we have the knowledge and tools we need to be successful. In many cases we’re their first real exposure to humans, and we can prepare them to be loving and playful members of a family. Just not our family.

Once the kittens are ready to be adopted, we take them to the shelter, where they will be carefully matched with their forever family. This last part is important – it’s hard enough to let go, and knowing that each of them will find a good home is what makes it possible.

It’s really the best of both worlds – kind of like managed self-service BI with Power BI.[1]

 

Not unlike fostering kittens, managed self-service BI can be the best of both worlds. As an analyst working in Power BI, you can often pick up projects when the scope is still small and manageable, and when you can have fun playing around with the data and seeing what it’s likely to become.

I’m emphasizing the “managed” in managed self-service BI, because it’s best to not be completely on your own. Having someone backing you up, someone with the expertise and resources to get you through challenging spots with a helping hand, is just as important with BI as it is with kittens. An author on his own may make avoidable mistakes with long-term consequences, but a center of excellence or community of practice can provide training up front, and assistance along the way so the finished self-service solution is ready to grow up – and growing up is an important goal.

Just as my family includes our adult cats, that analyst working in Power BI has a day job. If we kept each litter of kittens we foster, things would soon become messy and unmanageable. If an analyst retained ownership of every Power BI solution he developed, he would struggle to stay on top of his core priorities.

Being able to hand off a self-service solution to a central BI team is what gives this story a happy ending. The BI team can give the analyst’s work the long-term home it deserves, and the analyst can get back to his job… while also keeping an eye open for the next self-service BI challenge to come along and steal his heart.

Of all the kittens I have loved, I miss Tiny the most.

Your head. I will bite it now.[2]

If you’re interested in learning more about the shelter where we volunteer, please visit the Meow Cat Rescue web site. Please also consider donating while you’re there – the global pandemic is making it harder for their awesome staff and volunteers to do what they do, and kitten season is upon us. If you appreciate the BI Polar blog and its companion YouTube channel, there’s no better way to say thank you than to donate to Meow. Even a small donation will help.


[1] I hope you saw that one coming.

[2] This footage of Tiny attacking my head didn’t fit into the Power Kittens video, but I shared it on Twitter because it was just too cute to not share.

You can’t avoid problems you can’t see

The last post was about the dangers inherent in measuring the wrong thing – choosing a metric that doesn’t truly represent the business outcome[1] you think it does. This post is about different problems – the problems that come up when you don’t truly know the ins and outs of the data itself… but you think you do.

This is another “inspired by Twitter” post – it is specifically inspired by this tweet (and corresponding blog post) from Caitlin Hudon[2]. It’s worth reading her blog post before continuing with this one – you go do that now, and I’ll wait.

Caitlin’s ghost story reminded me of a scary story of my own, back from the days before I specialized in data and BI. Back in the days when I was a werewolf hunter. True story.

Around 15 years ago I was a consultant, working on a project with a company that made point-of-sale hardware and software for the food service industry. I was helping them build a hosted solution for above-store reporting, so customers who had 20 Burger Hut or 100 McTaco restaurants[3] could get insights and analytics from all of them, all in one place. This sounds pretty simple in 2020, but in 2005 it was an exciting first-to-market offering, and a lot of the underlying platform technologies that we can take for granted today simply didn’t exist. In the end, we built a data movement service that took files produced by the in-store back-of-house system and uploaded them over a shared dial-up connection[4] from each restaurant to the data center where they could get processed and warehoused.

The analytics system supported a range of different POS systems, each of which produced files in different formats. This was a fun technical challenge for the team, but it was a challenge we expected. What we didn’t expect was the undocumented failure behavior of one of these systems. Without going into too much detail, this POS system would occasionally produce output files that were incomplete, but which did not indicate failure or violate any documented success criteria.

To make a long story short[5], because we learned about the complexities of this system very late in the game, we had some very unhappy customers and some very long nights. During a retrospective we engaged with one of the project sponsors for the analytics solution, because he had – years earlier – worked with the development group that built this POS system. (For the purposes of this story I will call the system “Steve” because I need a proper noun for the quote that follows.)

The project sponsor reviewed all we’d done from a reliability perspective – all the validation, all the error handling, all the logging. He looked at this, then he looked at the project team and he said:

You guys planned for wolves. ‘Steve’ is werewolves.

Even after all these years, I still remember the deadpan delivery of this line. And it was so true.

We’d gone in thinking we were prepared for all of the usual problems – and we were. But we weren’t prepared for the horrifying reality of the data problems that were lying in wait. We weren’t prepared for werewolves.

Digging through my email from those days, I found a document I’d sent to this project sponsor, planning for some follow-up efforts, and was reminded that for the rest of the projects I did for this client, “werewolves” became part of the team vocabulary.


What’s the moral of this story? Back in 2008 I thought the moral was to test early and often. Although this is still true, I now believe that what Past Matthew really needed was a data catalog or data dictionary with information that clearly said DANGER: WEREWOLVES in big red letters.

This line from Caitlin’s blog post could not be more wise, or more true:

The best defense I’ve found against relying on an oral history is creating a written one.

The thing that ended up saving us back in 2005 was knowing someone who knew something – we happened to have a project stakeholder who had insider knowledge about a key data source and its undocumented behavior. What could have been better? Some actual <<expletive>> documentation.

Even in 2020, and even in mature enterprise organizations, having a reliable data catalog or data dictionary that is available to the people who could get value from it is still the exception, not the rule. Business-critical data sources and processes rely on tribal knowledge, time after time and team after team.

I won’t try to supplement or repeat the best practices in Caitlin’s post – they’re all important and they’re all good and I could not agree more with her guidance. (If you skipped reading her post earlier, this is the perfect time for you to go read it.) I will, however, supplement her wisdom with one of my favorite posts from the Informatica blog, from back in 2017.

I’m sharing this second link because some people will read Caitlin’s story and dismiss it because she talks about using Google Sheets. Some people will say “that’s not an enterprise data catalog.” Don’t be those people.

Regardless of the tools you’re using, and regardless of the scope of the data you’re documenting, some things remain universally true:

  • Tribal knowledge can’t be relied upon at any meaningful scale or across any meaningful timeline
  • Not all data is created equal – catalog and document the important things first, and don’t try to boil the ocean
  • The catalog needs to be known by and accessible to the people who need to use the data it describes
  • Someone needs to own the catalog and keep it current – if its content is outdated or inaccurate, people won’t trust it, and if they don’t trust it they won’t use it
  • Sooner or later you’ll run into werewolves of your own, and unless you’re prepared in advance the werewolves will eat you

When I started to share this story I figured I would find a place to fit in a “unless you’re careful, your data will turn into a house when the moon is full” joke without forcing it too much, but sadly this was not the case. Still – who doesn’t love a good data werehouse joke?[6]

Maybe next time…


[1] Or whatever it is you’re tracking. You do you.

[2] Apparently I started this post last Halloween. Have I mentioned that the past months have been busy?

[3] Or Pizza Bell… you get the idea.

[4] Each restaurant typically had a single “data” phone line that used the same modem for processing credit card transactions. I swear I’m not making this up.

[5] Or at least short-ish. Brevity is not my forte.

[6] Or this data werehouse joke, for that matter?

Viral adoption: Self-service BI and COVID-19

I live 2.6 miles (4.2 km) from the epicenter of the coronavirus outbreak in Washington state. You know, the nursing home that’s been in the news, where over 10 people have died, and dozens more are infected.[1]

As you can imagine, this has started me thinking about self-service BI.

Where can I find information I can trust?[2]

When the news started to come out covering the US outbreak, there was something I immediately noticed: authoritative information was very difficult to find. Here’s a quote from that last link.

This escalation “raises our level of concern about the immediate threat of COVID-19 for certain communities,” Dr. Nancy Messonnier, director of the CDC’s National Center for Immunization and Respiratory Diseases, said in the briefing. Still, the risk to the general public not in these areas is considered to be low, she said.

That’s great, but what about the general public in these areas?

What about me and my family?

When most of what I saw on Twitter was people making jokes about Jira tickets[3], I was trying to figure out what was going on, and what I needed to do. What actions should I take to stay safe? What actions were unnecessary or unhelpful?

Before I could answer these questions, I needed to find sources of information. This was surprisingly difficult.

Specifically, I needed to find sources of information that I could trust. There was already a surge in misinformation, some of it presumably well-intentioned, and some from deliberately malicious actors. I needed to explore, validate, confirm, cross-check, act, and repeat. And I was doing this while everyone around me seemed to be treating the emerging pandemic as a joke or a curiosity.

I did this work and made my decisions because I was a highly-motivated stakeholder, while others in otherwise similar positions were farther away from the problem, and were naturally less motivated at the time.[4]

And this is what got me thinking about self-service BI.

In many organizations, self-service BI tools like Power BI will spread virally. A highly-motivated business user will find a tool, find some data, explore, iterate, refine, and repeat. They will work with untrusted – and sometimes untrustworthy – data sources to find the information they need to use, and to make the decisions they need to make. And they do it before people in similar positions are motivated enough to act.

But before long, scraping together whatever data is available isn’t enough anymore. As the number of users relying on the insights being produced increases – even if the insights are being produced by a self-service BI solution – the need for trusted data increases as well.

Where an individual might successfully use disparate unmanaged sources, a population needs a trusted source of truth.

At some point a central authority needs to step up, to make available the data that can serve as that single source of truth. This is easier said than done[5], but it must be done. And this isn’t even the hard part.

The hard part is getting everyone to stop using the unofficial and untrusted sources that they’ve been using to make decisions, and to use the trusted source instead. This is difficult because these users are invested in their current sources, and believe that they are good enough. They may not be ideal, but they work, right? They got me this far, so why should I have to stop using them just because someone says so?

This brings me back to those malicious actors mentioned earlier. Why would someone deliberately share false information about public health issues when lies could potentially cost people their lives? They would do it when the lies would help forward an agenda they value more than they value other people’s lives.

In most business situations, lives aren’t at stake, but people still have their own agendas. I’ve often seen situations where the lack of a single source of truth allows stakeholders to present their own numbers, skewed to make their efforts look more successful than they actually are. Some people don’t want to have to rebuild their reports – but some people want to use falsified numbers so they can get a promotion, or a bonus, or a raise.

Regardless of the reason for using untrusted sources, their use is damaging and should be reduced and eliminated. This is true of business data and analytics, and it is true of the current global health crisis. In both arenas, let’s all be part of the solution, not part of the problem.

Let us be a part of the cure, never part of the plague – we’ll only be remembered for what we create.[6]


[1] Before you ask, yes, my family and I are healthy and well. I’ve been working from home for over a week now, which is a nice silver lining; I have a small but comfortable home office, and can avoid the obnoxious Seattle-area commute.

[2] This article is the best single source I know of. It’s not an authoritative source for the subject, but it is aggregating and citing authoritative sources and presenting their information in a form closer to the solution domain than to the problem domain.

[3] This is why I’ve been practicing social media distancing.

[4] This is where the “personal pandemic parable” part of the blog post ends. From here on it’s all about SSBI. If you’re actually curious, I erred on the side of caution and started working from home and avoiding crowds before it was recommended or mandated. I still don’t know if all of the actions I’ve taken were necessary, but I’m glad I took them and I hope you all stay safe as well.

[5] As anyone who has ever implemented a single source of truth for any non-trivial data domain can attest.

[6] You can enjoy the lyrics even if Kreator’s awesome music isn’t to your taste.

Real customers, real stories

This is my personal blog – I try to be consistently explicit in reminding all y’all about this when I post about topics that are related to my day job as a program manager on the Power BI CAT team. This is one of those posts.

If I had to oversimplify what I do at work, I’d say that I represent the voice of enterprise Power BI customers. I work with key stakeholders from some of the largest companies in the world, and ensure that their needs are well-represented in the Power BI planning and prioritization process, and that we deliver the capabilities that these enterprise customers need[1].

Looking behind this somewhat grandiose summary[2], a lot of what I do is tell stories. Not my own stories, mind you – I tell the customers’ stories.

Image by Daria Głodowska from Pixabay

It was the best of clouds, it was the worst of clouds.

On an ongoing basis, I ask customers to tell me their stories, and I help them along by asking these questions:

  • What goals are you working to achieve?
  • How are you using Power BI to achieve these goals?
  • Where does Power BI make it hard for you to do what you need to do?

When they’re done, I have a pretty good idea what’s going on, and do a bunch of work[3] to make sure that all of these stories are heard by the folks responsible for shipping the features that will make these customers more successful.

Most of the time these stories are never shared outside the Power BI team, but on occasion there are customers who want to share their stories more broadly. My amazing teammate Lauren has been doing the heavy lifting[4] in getting them ready to publish for the world to see, and yesterday the fourth story from her efforts was published.

You should check them out:

  1. Metro Bank: Metro Bank quickly delivers business efficiency gains without requiring involvement from IT
  2. Cummins: Cummins uses self-service BI to increase productivity and reduce unnecessary costs
  3. Veolia: Environmental services company builds sustainable, data-driven solutions with Power BI and Azure
  4. Avanade: Microsoft platform–focused IT consulting company innovates with Power BI and Azure to improve employee retention
  5. Cerner: Global healthcare solutions provider moves to the cloud for a single source of truth in asset and configuration management

Update: Apparently the Cerner story was getting published while I was writing this post. Added to the list above.

I know that some people will look at these stories and discount them as marketing – there’s not a lot I can do to change that – but these are real stories that showcase how real customers are overcoming real challenges using Power BI and Azure. Being able to share these stories with the world is very exciting for me, because it’s an insight into the amazing work that these customers are doing, and how they’re using Power BI and Azure services to improve their businesses and to make people’s lives better. They’re demonstrating the art of the possible in a way that is concrete and real.

And for each public story, there are scores of stories that you’ll probably never hear. But the Power BI team is listening, and as long as they keep listening, I’ll keep helping the customers tell their stories…


[1] This makes me sound much more important than I actually am. I should ask for a raise.

[2] Seriously, if I do this, shouldn’t I be a VP or Partner or something?

[3] Mainly boring work that is not otherwise mentioned here.

[4] This is just one more reason why having a diverse team is so important – this is work that would be brutally difficult for me, and she makes it look so easy!

 

One diagram to rule them all

A few weeks back MVP Paul Turley blogged on Power Query performance and diagnostics. It was a good, useful post, but I wasn’t really the target audience and I probably would have forgotten about it if it weren’t for one thing.

This diagram.

It really says it all, doesn’t it?

Look at it.

Look at it again, and pause to thoughtfully consider its elegance and beauty.

In the time since Paul shared this post, I’ve been involved in any number of conversations[1] where customer stakeholders had questions about Power BI application performance. This type of conversation isn’t particularly new, but now I’ve started using this diagram[2] as a point of reference.

The results have been very positive. Although nothing in the diagram is new or particularly interesting on its own, having this simple visual reference for the components that make up the canonical end-to-end flow in a Power BI application has made my conversations more useful and productive. Less time is required to get all stakeholders to a point of shared understanding – more time can be devoted to identifying and solving the problem.

I don’t know if Paul truly appreciates the beauty of what he’s created. But I do. And you should too.


[1] In case you’ve been wondering why my blog and YouTube output has dried up this month, it’s because real life has been kicking my ass. I think I can finally see the light at the end of the tunnel, so hopefully we’ll be back with regular content before too long. Hopefully.

[2] This beautiful, simple, elegant diagram.