Being a PM at Microsoft – Part 2

Last year I published a blog post and video on Being a Program Manager at Microsoft, in which I shared some of my personal experience and advice based on my decade-plus at Microsoft. I’ve received an incredible amount of feedback[1] on that 2020 post…

…and now it has an unexpected follow-up:

My teammate Kasper de Jonge has started a YouTube channel, and in addition to some great technical deep dive content, he has a series of interviews with members of the Power BI product team and the broader data platform community.

This week he’s published an interview with Power BI Group Program Manager Kim Manis, where they spend 45 minutes talking about being a PM in the Power BI team… and it’s fascinating.

In the interview Kim provides an insider’s perspective on topics ranging from how we decide which features to build and how we build and ship them, to what to look for in a PM candidate when you’re hiring.

I’ve been a PM at Microsoft for well over a decade, and have worked on Power BI for almost three years now… and I learned a lot from their conversation. If you’re reading this post I am very confident you’ll learn something too, and will enjoy the whole delightful conversation.


[1] I’ve come to believe that this video may have been the best thing I did in 2020. I’ve had dozens of conversations with people around the world who want to be program managers, which feels like a chance to make a difference in the world when the world is doing its best to make me feel powerless.

Will it fold? Look in Power Query Online!

Query folding is a key capability in Power Query. In short, query folding is the process by which the Power Query mashup engine takes the steps defined in the Power Query editor and translates them into processing to be performed by the data source from which the query is extracting data.

The use of query folding can produce significant performance improvements. Consider a query that includes a grouping step. Without query folding, all of the detailed data required must be extracted from the data source, and the aggregation performed by the mashup engine. With query folding, the mashup engine can add a GROUP BY clause[1] to the data source query it generates and let the data source perform the costly operation – only the aggregated data leaves the data source.
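To make this concrete, here’s a minimal Power Query M sketch of a query where the grouping step folds. The server, database, table, and column names are all hypothetical, and the exact SQL the engine generates will vary by data source:

    let
        // Hypothetical SQL Server source – substitute your own server and database
        Source = Sql.Database("sql.example.com", "SalesDb"),
        Orders = Source{[Schema = "dbo", Item = "Orders"]}[Data],
        // Because this grouping step can fold, the mashup engine can generate SQL like:
        //   SELECT Customer, SUM(Amount) AS TotalAmount FROM dbo.Orders GROUP BY Customer
        // so only the aggregated rows ever leave the data source.
        Grouped = Table.Group(
            Orders,
            {"Customer"},
            {{"TotalAmount", each List.Sum([Amount]), type number}}
        )
    in
        Grouped

If a non-foldable transformation were inserted before the grouping step, the engine would instead pull every row from the table and perform the aggregation itself.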

Because of the performance benefit that query folding provides, experienced query authors are typically very careful to ensure that their queries take advantage of the capabilities of their data sources, and that they fold as many operations as possible. But for less experienced query authors, telling which steps will fold and which will not hasn’t always been simple…

Until now.

This week the Power Query team announced the introduction of query folding indicators in Power Query Online. If you’re authoring a dataflow in Power BI or Power Apps, you will now see visual indicators to let you know which steps will fold, and which will not.

Each step will include an icon[2] that shows how that step will be handled. The announcement blog post goes into more detail (because connectivity and data access are complicated topics, and because of the huge diversity in the data sources to which Power Query can connect), but the short version of the story is that life just got simpler for people who build dataflows.

After the last 12 months anything that makes my life simpler is a good thing.


[1] Or whatever the syntax is for the data source being used.

[2] Or is it a logo?

Old videos, timeless advice

It’s 2021, which means my tech career turns 25[1] this year.

It me

Back in 1995 an awesome book was published: Dynamics of Software Development by Jim McCarthy. Jim was a director on the Visual C++ team at Microsoft back when that was a big deal[2]. The book is one of the first and best books I read as a manager of software development teams, and it has stood the test of time – I still have it on the bookshelf in my office[3].

Of course, I wasn’t managing a software development team in 1995. I discovered Jim’s wisdom in 1996 or 1997 when I was preparing to teach courses on the Microsoft Solutions Framework[4]. When I opened the box that the trainer kit came in, there was a CD-ROM with videos[5] of Jim McCarthy presenting to an internal Microsoft audience on his “rules of thumb” for reliably shipping great software. When I first watched them, it was eye-opening… almost life-changing. Although I did have a mentor at the time, I did not have a mentor like Jim.

The second time I watched the videos, it was with my whole team. Jim’s rules from these videos and his book became part of the team culture[6], and I’ve referred back to them many times over the decades.

Now you can too – the videos are available on YouTube:

Some of the rules may feel a bit dated if you’re currently shipping commercial software or cloud services[7], but even the dated ones are based on fundamental and timeless truths about humans and teams of humans.

I’m going to take some time this week to re-watch these videos, and to start off the new year with this voice from years gone by. If you build software and work with humans, maybe you should too.


[1] Damn, that is weird to type, since I’m sure I can’t even be 25 years old yet.

[2] It may also be a big deal today, but to me at least it doesn’t feel like quite as big a deal.

[3] Not that I’ve seen my office or my bookshelf in forever, so this assertion should be taken with a grain of 2020.

[4] Fun fact: if you remember MSF, you’re probably old too.

[5] Because in those days “video” and “internet” weren’t things that went together. This may or may not have been because the bits had to walk uphill both ways in the snow.

[6] To the extent the team could be said to have a shared culture. We were all very young, and very inexperienced, and were figuring things out as we went along.

[7] Back in 1995 CI/CD wasn’t part of the industry vocabulary, and I can count on zero hands the clients I worked with before joining Microsoft who had anything resembling a daily build.

Automatically refresh dataset when dataflow refresh completes – now with 100% less code!

Back in October I blogged about using PowerShell and/or the Power BI dataflows REST APIs to trigger a dataset refresh on dataflow refresh completion. This was a new capability and was well received… but lots of people wanted to do it without needing to write any code.
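For context, the code-based approach looked roughly like the following PowerShell sketch, which uses the MicrosoftPowerBIMgmt module and the REST APIs described in that October post. This is a rough sketch rather than the exact script from that post: the workspace, dataflow, and dataset IDs are placeholders, and the “Success” status value and newest-first transaction ordering are assumptions about the API payload.

    # A minimal sketch – requires: Install-Module MicrosoftPowerBIMgmt
    Connect-PowerBIServiceAccount

    # Placeholder IDs – substitute the GUIDs for your own workspace and artifacts
    $workspaceId = "00000000-0000-0000-0000-000000000000"
    $dataflowId  = "11111111-1111-1111-1111-111111111111"
    $datasetId   = "22222222-2222-2222-2222-222222222222"

    # Get the refresh transactions for the dataflow
    $transactions = Invoke-PowerBIRestMethod -Method Get -Url "groups/$workspaceId/dataflows/$dataflowId/transactions" |
        ConvertFrom-Json

    # If the latest dataflow refresh succeeded (assuming the most recent transaction
    # is first in the list), trigger a refresh of the downstream dataset
    if ($transactions.value[0].status -eq "Success") {
        Invoke-PowerBIRestMethod -Method Post -Url "groups/$workspaceId/datasets/$datasetId/refreshes"
    }

With the new connector, everything this script does can now be configured in Power Automate without writing a line of it.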

For those people, Christmas came early.

This week the Data Integration team at Microsoft announced the public preview of the dataflows connector for Power Automate.

This new Power Automate connector[1] will enable any authorized user to take action based on dataflow refresh completion, including:

  • Send a notification – so you can keep additional users and groups informed
  • Trigger a dataflow or dataset refresh – so you don’t need to maintain separate refresh schedules or live with unnecessary processing delays
  • Trigger a Power Automate flow – so you can do pretty much whatever you want, because Power Automate has a ton of capabilities this blog post won’t even attempt to mention

You can also use the connector to refresh dataflows and monitor dataflow refresh status.

To make it even easier, the preview includes samples for these scenarios and more. You can read all about it and get started today in this post on the official Power BI Blog. What are you waiting for?


[1] Which uses the APIs mentioned in the earlier blog post.

Data Culture Presentation Resources

On Thursday, December 10th I joined the Glasgow Data User Group for their December festivities. Please don’t tell anyone[1] but I’ll be bringing the gift of data culture!

The session recording is now available on YouTube:

If you’re interested, you can also download my session slides here: Glasgow Data UG – Building a Data Culture with Power BI.


[1] I want it to be a surprise! Also this footnote makes even less sense now that the session is in the past…

A hat full of dataflows knowledge

Life and work have been getting the best of me this month, and I haven’t found the time[1] to keep up with blogging and video now that my series on building a data culture has wrapped up. I’ve been watching the dataflows and Power Query teams release all sorts of exciting new capabilities, and realizing that I’m not going to be writing about them in a timely manner.

Thankfully Laura Graham-Brown is picking up the slack – and then some.

Laura is a Microsoft MVP whose “Hat Full of Data” blog has become one of my favorite morning reads, and whose YouTube channel seems to include all of the videos I’ve been thinking about making but haven’t actually found the time to make.

Like this one on the new Power Query Online diagram view that is now available in public preview for dataflows in Power BI:

If you’ve been waiting for new dataflows content, you should definitely head over to Laura’s blog today to check out the awesome work she’s been doing.

I hope to be writing more regularly in December after my work-related “crunch mode” has passed, but if 2020 has taught me anything[2] it’s that I have no idea what’s waiting around the corner. In the meantime, you should follow Laura, because she’s doing awesome work.


[1] Or the spare creative mental energies, which seem to be in sparser supply than spare minutes and hours.

[2] If 2020 has taught me anything, it’s that 2020 has taught me nothing.

“Why dataflows?” webcast recording now online

A lot of the questions I get about dataflows in Power BI boil down to the simplest[1] question: “Why dataflows?”

On Saturday November 7 I joined MVP Reid Havens for a YouTube live stream where we looked at this question and a bunch of other awesome dataflow questions from the 60+ folks who joined us.

The stream recording is now available for on-demand viewing. You should check it out!


[1] And therefore most difficult to answer concisely. That’s just how it goes.

Data Culture: Strategy is more important than tactics

Even though he lived 2,000 years ago, you’ve probably heard of the Chinese military strategist and general Sun Tzu. He’s known for a lot of things, but these days he’s best known for his work The Art of War[1], which captures military wisdom that is still studied and applied today.

Even though Sun Tzu didn’t write about building a data culture[2], there’s still a lot we can learn from his writings. Perhaps the most relevant lesson is the one in this post’s title: strategy is more important than tactics.

Building a data culture is hard. Keeping it going, and thriving, as the world and the organization change around you is harder. Perhaps the single most important thing[3] you can do to ensure long-term success is to define the strategic goals for your efforts.

Before diving into all the other important and valuable tactical things, pause and think about why you’re doing them, and where you want to be once they’re done. This strategic reflection will prove invaluable, as it will help you prioritize, scope, and tune those tactical efforts.

Having a shared strategic vision makes everything else easier. At every step of the journey, any contributor can evaluate their actions against that strategic vision. When conflicts arise – as they inevitably will – your pre-defined strategic north star can help resolve them and keep your efforts on track.


[1] Or possibly for the Sabaton album of the same name, which has a catchier bass line. And since Sabaton is a metal band led by history geeks, they also have this video that was released just a few weeks ago that looks at some of the history behind the album and song.

[2] Any more than Fiore wrote about business intelligence.

[3] I say “perhaps” because having an engaged executive sponsor is the other side of the strategy coin. Your executive sponsor will play a major role in defining your strategy, and in getting all necessary stakeholders on board with the strategy. Although I didn’t plan it this way, I’m quite pleased with the parallelism of having executive sponsorship be the first non-introductory video in the series, and having this one be the last non-summary video. It feels neat, and right, and satisfying.

Data Culture: Measuring Success

Building a data culture is hard. It involves technology and people, each of which is complicated enough on its own. When you combine them[1], things get even harder. Building a data culture takes time, effort, and money – and because it takes time, you don’t always know if the effort and money you’re investing will get you to where you need to go.

Measuring the success of your efforts can be as hard as the efforts themselves.

Very often the work involved in building a data culture doesn’t start neatly and cleanly with a clearly defined end state. It often starts messily, with organic bottom-up efforts combining with top-down efforts over a period of change that’s driven as much by external forces as by any single decision to begin. This means that measuring success – and defining what “success” means – happens while the work is being done.

Measuring usage is the easiest approach, but it’s not really measuring success. Does having more reports or more users actually produce more value?

For many organizations[2], success is measured in the bottom line – is the investment in building a data culture delivering the expected return from a financial perspective?

Having a data culture can make financial sense in two ways: it can reduce costs, and it can increase revenue.

Organizations often reduce costs by simplifying their data estate. This could involve standardizing on a single BI tool, or at least minimizing the number of tools used, and migrating away from older tools before they’re retired and decommissioned. This reduces costs directly by eliminating licensing expenses, and indirectly by reducing the effort required for training, support, and related tasks. Measuring cost reduction can be straightforward – odds are someone is already tracking the IT budget – and measuring the reduction in the utilization of legacy tools can also take advantage of existing usage reporting.

Organizations can increase revenue by building more efficient, data-driven business processes. This is harder to measure. Typically this involves instrumenting the business processes in question, and proactively building the processes to correlate data culture efforts to business process outcomes.

In the video I mention the work of several enterprise Power BI customers who have built Power BI apps for their information workers and salespeople. These apps provide up-to-date data and insights for employees who would otherwise need to rely on days- or weeks-old batch data delivered via email or printout. By tracking which employees are using what aspects of the Power BI apps, the organizations can correlate this usage with the business outcomes of the employees’ work[3]. If a person or team’s efficiency increases as data usage increases, it’s hard to argue with that sort of success.

But… this post and video assume that you have actually set explicit goals. Have you? If you haven’t defined that strategy, you definitely want to check out next week’s video…


[1] Especially since you usually have organizational politics thrown into the mix for good measure, and that never makes things any simpler.

[2] I originally typed “most organizations” but I don’t have the data to support that assertion. This is true of most of the mature enterprise organizations that I’ve worked with, but I suspect that for a broader population, most organizations don’t actually measure – they just cross their fingers and do what they can.

[3] Odds are someone is already tracking things like sales, so the “business outcomes” part of this approach might be simpler than you might otherwise assume. Getting access to the data and incorporating it in a reporting solution may not be straightforward, but it’s likely the data itself already exists for key business processes.

Data Culture: Experts and Expertise

Power BI lets business users solve more and more problems without requiring deep BI and data expertise. This is what self-service business intelligence is all about, as we saw when we looked at a brief history of business intelligence.

At other points in this series we also looked at how each app needs to be treated like the unique snowflake that it is, that successful data cultures have well-defined roles and responsibilities, and that sometimes you need to pick your battles and realize that some apps and data don’t need the management and care that others do.

But some apps do.

Some BI solutions are too important to let grow organically through self-service development. Sometimes you need true BI experts who can design, implement, and support applications that will scale to significant data volumes and numbers of concurrent users.

In this video we look at a specific approach taken by the BI team at Microsoft that developed the analytic platform used by Microsoft finance[1].

This is one specific approach, but it demonstrates a few fundamental facts that can be overlooked too easily:

  • Building an enterprise BI solution is building enterprise software, and it requires the rigor and discipline that building enterprise software demands
  • The delivery team includes dedicated teams of experts, each responsible for their part of the whole
  • Each business group with data and BI functionality included in the solution pays for what they get, with both money and personnel

Organizations that choose to ignore the need for experts tend to build sub-optimal solutions that fail to deliver on stakeholder expectations. These solutions are often replaced much sooner than planned, and the people responsible for their implementation are often replaced at the same time[2].

This isn’t the right place to go into the details of what sort of expertise you’ll need, because there’s too much to cover, and because the details will depend on your goals and your starting point. In my opinion the best place to go for more information is the Power BI whitepaper on Planning a Power BI Enterprise Deployment. This free resource delivers 250+ pages of wisdom from authors Melissa Coates and Chris Webb. You probably don’t need to read all of it, but odds are you’ll want to once you get started…


After this video and post were completed but before they were published, this story hit the global news wire: Botched Excel import may have caused loss of 15,841 UK COVID-19 cases | Ars Technica

Wow. Although I am generally a big fan of Ars Technica’s journalism, I need to object to the sub-headline: “Lost data was reportedly the result of Excel’s limit of 1,048,576 rows.”

Yeah, no. The lost data was not the result of a limitation that has been well-known and documented for over a decade. The lost data was the result of using non-experts to do a job that experts should have done.

Choosing the wrong tool for a given job is often a symptom of not including experts and their hard-earned knowledge at the phases of a project where that expertise could have set everything up for success. This is just one example of many. Don’t let this happen to you.


[1] If you’re interested in a closer look at the Microsoft Finance COE approach, please check out this article in the Power BI guidance documentation.

[2] If you’ve been a consultant for very long, you’ve probably seen this pattern more than once. A client calls you in to replace or repair a system that never really worked, and all of the people who built it are no longer around.