Thoughts on community

This post is mainly a story about how community has affected my career, and how it might affect yours as well. The post has been sitting in my drafts for a few months waiting for inspiration to strike. That inspiration came from this recent[0] data culture video, where I talked about communities of practice inside enterprise organizations and the ways that positive behaviors can be promoted and encouraged in these internal communities of practice – because the same patterns often apply to public communities as well.

So let’s talk about community.

No, not that Community. Technical community. Community like this:


I started off my tech career as a Microsoft Certified Trainer (MCT) in 1996. I spent much of the next decade as a trainer, mentor, and consultant, but even though I was learning and growing year after year, the pace of that growth felt slow. I quickly became a medium-size fish in a tiny pond[1] and that was pretty much that. My career growth was limited in part by the company I chose to keep.

To be fair, there was an online[3] MCT community where I would often lurk and occasionally post, but I never took real advantage of it. I didn’t feel confident enough to post often, and for some reason traveling to a conference or other in-person MCT event just didn’t feel like something that was available to me – this was something that other people did, not something I could do.[4]

In late 2004 I started thinking about getting back to more training after a few years focusing on software development and consulting. I decided that the upcoming SQL Server 2005 release would be a great opportunity for making this change. Microsoft had made available to MCTs the preview versions of two “What’s new in SQL Server 2005” courses – one for developers and one for admins – and I was going to get certified to teach these courses as soon as they were available to offer. I’d been working with SQL Server since version 6.5, and much of my consulting work included developing solutions on SQL Server 2000, so this was a logical direction that matched my experience, skills, and interest.

To make a long story slightly less long – and because I’ve shared some of the details while this post was sitting in my drafts – that focus on SQL Server 2005 was my jumping-off point to become an active member of the MCT and SQL Server communities. I was still doing many of the same things I’d been doing all along, but now I was doing them where more people could benefit, and where more people could see me doing what I did. And people noticed.

People noticed that I knew what I was talking about on technical topics, but mainly people noticed that I engaged in a constructive and helpful manner. That I gave more than I took.[5]

And this brings us to the concept of visibility from that earlier blog post, and the opportunity that comes from being seen kicking ass – being seen doing the great work you do every day. For me, being actively engaged in the MCT community led to my first job at Microsoft. People who knew me because of my community involvement let me know that there was a job I should apply for, and they put a referral into the internal HR systems to make it more likely I would get called for an interview[6], and the rest is history.

For you, being actively engaged in your community – whether it’s the Power BI community or something else – will likely mean something very similar. The details will be different because the context is different, but adding your voice to the communities you frequent is how you let the community know who you are and what you do.

When you answer questions – and when you ask them too – other members of the community will begin to understand what you do, and how you work. When you write a blog post or record a video, more people will begin to understand how you think, and how you communicate. They’ll learn from you, and they’ll learn about you.

This is good.

As more people learn about who you are and what you do, and as more people begin to see and value your contributions, they’ll start thinking about how awesome it would be if you could contribute more directly to their projects and products. For me this meant being asked more frequently to teach SQL Server and SSIS classes, and an increased volume of inquiries about consulting availability.[7] I had more opportunities for more interesting and better paying work because of my engagement with the community, and it changed my life for the better in many ways.

For you it will look different because the context is different, but you can follow the same pattern. Your contributions to the community organically become your resume[8], your marketing web site, your demo reel. Potential employers and potential clients[9] will have a more complete picture of how you work than a job interview can provide, and that can make all the difference.

Implied in this pattern is the positive nature of your community contributions. If you’re a bully or a gatekeeper or engage in other types of abusive behavior… people notice that too. This can become its own opportunity for improvement: the more mindful you are about presenting the side of you that you want people to see, the more likely you are to become that better version of yourself.[10]

Engaging with the community changed my life. It can change yours too.


Interesting side note on timing: I joined Twitter ten years ago this month.

At that time I had just accepted a position on the SQL Server Integration Services (SSIS) product team, and had asked one of my new colleagues where the SSIS community was centered, and he pointed me there. The rest sort of took care of itself…


[0] Referring to this data culture video as “recent” gives you some insight into how long this post languished in draft form after the introduction was written.

[1] This is kind of like saying “big fish in a small pond” but self-aware enough to realize I’ve never been a large fish on any scale.[2]

[2] I am so sorry, and yet not sorry at all.

[3] Two words: Usenet newsgroups.

[4] I’ve thought about this a lot in the intervening years, and believe that the root cause was probably a combination of my rural, poor, opportunity-sparse upbringing, and my mental health challenges, but I doubt I’ll ever know for sure.

[5] I’m generalizing here, and making some big assumptions based on the available evidence. This is what I think people noticed based on the way that they reacted, but only a handful said so. Of course, they typically said so when offering me work…

[6] This is something I have done a few times over the years myself, and there’s no better feeling than knowing I played some small part in helping Microsoft attract new talent, and in helping a member of the community achieve their career goals.

[7] For me it also meant that I roughly doubled my billing rate over the course of 18 months. I had so many inquiries that I was turning down almost all of them because I didn’t have the bandwidth to say yes, and decided to take advantage of the laws of supply and demand.

[8] This 2016-era post from MVP Brent Ozar presents a much more prescriptive approach to this theme.

[9] And potential employees as well, in the fullness of time.

[10] And let’s be honest – if you’re an insufferable asshole who refuses to change, participating in a community is a great way to let people know, so they can know that you’re someone they should avoid hiring, interviewing, or contracting. This is also good.

Free Power BI Intro Training From Microsoft

As anyone who is familiar with my series on building a data culture knows, training is an absolutely vital success factor for a community of practice. This isn’t limited to developers, administrators, and content creators – you need to empower every user for your data culture to truly thrive.

Although Microsoft has long offered a variety of training resources such as the “in a day” courses[1], until now these offerings have focused on more technical personas and topics, not on the business users who consume reports and dashboards.

Guess what this blog post is about.

That’s right! Starting this month any Power BI user can sign up for free online introductory training delivered by Microsoft instructors.

These are synchronous online classes with expert instructors who guide Power BI consumers through some of the most common and most important parts of the Power BI experience. Classes are scheduled for multiple days and multiple times of the day[2], and I expect more to be scheduled on an ongoing basis. Click the link above for the current schedule.

But it gets even better!

If you’re working for an organization with lots of Power BI users[3], Microsoft offers “closed classes” on your schedule, for your users. I don’t have a link to share for this one, but your Microsoft account team wants to help get you started. Talk to your customer success account manager (CSAM) or another member of your account team, and ask about the Microsoft Store Events training for Power BI.

Why wait? Sign up for a course or talk to your account team today!


[1] Take a look here for a list of courses including Dashboard in a Day, Administrator in a Day, Developer in a Day, and other training courses that sadly do not include the phrase “in a day” in their names.

[2] There’s even one course being delivered as I type this post!

[3] We’ve been making this training available for enterprise customers for the past few months, and have already reached tens of thousands of learners while receiving great feedback along the way.

Never underestimate the voice of the customer

Most of what I do in my day job involves listening.

A sculpture of men with ears pressed to a brick wall - Image by Couleur from Pixabay
It’s better to put your ear up to the wall than it is to bang your head against it.

Specifically, I spend a lot of time listening to people who use Power BI[1] talk about why they use it, how they use it, and where they struggle along the way. I ask a few questions to get them started, and then I ask a few follow-up questions to help them along as they tell their story – but mainly I listen.

I listen, and then I use what I hear to help make Power BI a better tool for these people and their organizations’ needs.

I listen for patterns and trends, and for a signal to emerge from the noise of hundreds of conversations. When it does, I make sure that the right decision-makers and leaders hear it and understand it, so they can take it into account as they decide where to invest and how to prioritize those investments.[2]

I listen with an informed ear. I spent almost 15 years building software and data solutions, so I understand a lot of the technical and organizational challenges these people face. I spent the next decade or so building data products and services as a program manager at Microsoft, so I understand a lot of the challenges involved in prioritizing and building product features. And perhaps most importantly, I am genuinely interested in listening to and learning from the people with whom I talk. I truly want to hear their stories.

Of course there’s only one of me, and I can’t spend my whole workday listening to people tell their stories… so I’ve helped build a team to listen[3], and it’s making a difference. The next time you see an announcement about new capabilities in Power BI that will improve the lives of large enterprise customers, odds are this team had a hand in making those capabilities a reality.

Why am I telling you all this? This is my personal blog, and I’m not looking for a job, so who cares, right?

You care.

You care because this type of listening is a thing that you can do to make better software. You can do this whether you’re building commercial software or you’re part of a BI team that needs to build solutions for larger user populations.[4]

You care because letting the people who use your software – or the people you want to use your software – talk about what’s important to them… will let you know what’s important to them. And if you’re really listening, you can use what you learn to make better decisions and deliver better products and features.

If you’re genuinely interested in the people who use the software you build, you should consider giving this approach a try.


[1] These people tend to be the business and IT owners of Power BI in large enterprise organizations, but they can be just about anyone involved in implementing, adopting, using, or supporting Power BI.

[2] If you’re interested in learning more about the Power BI product team’s planning process, I cannot recommend highly enough this 2019 presentation from Will Thompson.

[3] As part of building that team we all read and/or re-read Lean Customer Development by Cindy Alvarez. If you have not yet read this awesome book, there’s no time like the present.

[4] I honestly don’t know if this type of rigor is appropriate at a smaller scale, because I haven’t done it and haven’t personally seen it done. I suspect that it would be very valuable, but also suspect that there may be lighter-weight options that would provide a better return on investment.

Roche’s Maxim of Data Transformation

According to the internet, a maxim is a succinct formulation of a fundamental principle, general truth, or rule of conduct.[1] Maxims tend to relate to common situations and topics that are understandable by a broad range of people.

Topics like data transformation.

Update June 16 2021: The video from my June 11 DataMinutes presentation is now available, so if you prefer visual content, the video might be a good place to start.

Roche’s Maxim of Data Transformation[2] states:

Data should be transformed as far upstream as possible, and as far downstream as necessary.

In this context “upstream” means closer to where the data is originally produced, and “downstream” means closer to where the data is consumed.

Transforming data closer to its ultimate source reduces costs and allows the value added through that transformation to be applied to a greater range of uses. The farther downstream a given transformation is applied, the more expensive it tends to be – often because operations are performed more frequently – and the smaller the scope of potential value through reuse.

I’ve been using this guideline for many years, but I only started recognizing it as a maxim in the past year or so. The more I work with enterprise Power BI customers the more I realize how true and how important it is – and how many common problems could be avoided if more people thought about it when building data solutions.

Please note that this maxim is generalizable to data solutions implemented using any tools or technologies. The examples below focus on Power BI because that’s where I spend my days, but these principles apply to every data platform I have used or seen used.

In day-to-day Power BI conversations, perhaps the most common question to which Roche’s Maxim applies is about where to implement a given unit of logic: “Should I do this in DAX or in Power Query?”

Short answer: Do it in Power Query.

If you’re ever faced with this question, always default to Power Query if Power Query is capable of doing what you need – Power Query is farther upstream. Performing data transformation in Power Query ensures that when your dataset is refreshed, the data is loaded into the data model in the shape it needs to be in. Your report logic will be simplified and thus easier to maintain, and will likely perform better[3] because the VertiPaq engine will need to do less work as users interact with the report.[4]

But what if you need data transformation logic that depends on the context of the current user interacting with the report – things like slicers and cross-filtering? This is the perfect job for a DAX measure, because Power Query doesn’t have access to the report context. Implementing this logic farther downstream in DAX makes sense because it’s necessary.
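To make the contrast concrete, here is a minimal Power Query (M) sketch of the kind of logic that belongs upstream – a simple classification column, computed once at refresh time rather than every time a user interacts with a report. The Sales query and OrderAmount column here are hypothetical placeholders, not part of any real solution:

```m
// A minimal sketch, assuming a hypothetical "Sales" query
// that includes a numeric OrderAmount column.
// The classification is computed once, during refresh,
// before the data ever reaches the data model.
let
    Source = Sales,
    AddOrderSize = Table.AddColumn(
        Source,
        "Order Size",
        each if [OrderAmount] >= 1000 then "Large" else "Small",
        type text
    )
in
    AddOrderSize
```

Because the column arrives in the model as ordinary imported data, the report-side logic stays simple; only calculations that depend on user context, like the slicer-driven scenario described above, need to remain downstream in a DAX measure.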

Another common question to which Roche’s Maxim applies is also about where to implement a given unit of logic: “Should I do this in Power BI or in the data warehouse?”

Short answer: Do it in the data warehouse.

If you’re ever faced with this question, always default to transforming the data into its desired shape when loading it into the data warehouse – the data warehouse is farther upstream. Performing data transformation when loading the data warehouse ensures that any analytics solution that uses the data has ready access to what it needs – and that every solution downstream of the warehouse is using a consistent version of the data.

From a performance perspective, it is always better to perform a given data transformation as few times as possible, and it is best to not need to transform data at all.[5] Data transformation is a costly operation – transforming data once when loading it into a common location like a data warehouse, data mart, or data lake is inherently less costly than transforming it once for every report, app, or solution that uses that common location.

A much less common question to which Roche’s Maxim applies might be “What about that whole ‘not transforming at all’ pattern you mentioned a few paragraphs back – how exactly does that dark magic work?”

Short answer: Have the data already available in the format you need it to be in.

That short answer isn’t particularly useful, so here are two brief stories to illustrate what I mean.

Many years ago I was working with an oil & gas company in an engagement related to master data management. This company had a fundamental data problem: the equipment on their drilling platforms around the world was not standardized, and different meters reported the same production data differently. These differences in measurement meant that all downstream reporting and data processing could only take place using the least common denominator across their global set of meters… and this was no longer good enough. To solve the problem, they were standardizing on new meters everywhere, and updating their data estate to take advantage of the new hardware. My jaw dropped when I learned that the cost of upgrading was upwards of one hundred million dollars… which was a lot of money at the time.

Much more recently I was working with a retail company with over 5,000 locations across North America. They had similar challenges with similar root causes: their stores did not have consistent point of sale (POS) hardware[6], which meant that different stores produced different data and produced some common data at different grain, and analytics could only take place using the least common denominator data from all stores. Their solution was also similar: they upgraded all POS systems in all stores. I don’t have a dollar amount to put with this investment, but it was certainly significant – especially in an industry where margins are traditionally small and budgets traditionally very conservative.

Both of these stories illustrate organizations taking Roche’s Maxim to the extreme: they transformed their key data literally as far upstream as possible, by making the necessary changes to produce the data in its desired form.[7]

Each of these stories included both technical and non-technical factors. The technical factors revolve around data. The non-technical factors revolve around money. Each company looked at the cost and the benefit and decided that the benefit was greater. They implemented an upstream change that will benefit every downstream system and application, which will simplify their overall data estate, and which corrects a fundamental structural problem in their data supply chain that could only be mitigated, not corrected, by any downstream change.

There’s one additional part of Roche’s Maxim that’s worth elaborating on – what does “necessary” mean? This post has looked at multiple scenarios that emphasize the “as far upstream as possible” part of the maxim – what about the “as far downstream as necessary” part?

Some factors for pushing transformations downstream are technical, like the DAX context example above. Other technical factors might be the availability of data – you can’t produce a given output unless you have all necessary inputs. Others may be organizational – if data is produced by a 3rd party, your ability to apply transformations before a given point may be constrained more by a contract than by technology.

Still other factors may be situational and pragmatic – if team priorities and available resources prevent you from implementing a unit of data transformation logic in the data warehouse, it may be necessary to implement it in your Power BI solution in order to meet project deadlines and commitments.

These are probably the most frustrating types of “necessary” factors, but they’re also some of the most common. Sometimes you need to deliver a less-than-ideal solution and incur technical debt that you would prefer to avoid. The next time you find yourself in such a situation, keep this maxim in mind, and remember that even though it may be necessary to move that data transformation logic downstream today, tomorrow is another day, with different constraints and new opportunities.

Callout: This may be my maxim, but this isn’t the first blog post on the topic. Stuart Box from the UK data consultancy BurningSuit blogged back in March, getting in first with this excellent article.


[1] Or a men’s magazine. I really really wanted to use this more pop-culture meaning to make a “DQ” joke playing on the men’s magazine “GQ” but after watching this post languish in my drafts for many months and this joke not even beginning to cohere, I decided I should probably just let it go and move on.

But I did not let it go. Not really.

[2] If you think that sounds pretentious when you read it, imagine how it feels typing it in.

[3] The performance benefit here is not always obvious when working with smaller data volumes, but will become increasingly obvious as the data volume increases. And since the last thing you want to do in this situation is to retrofit your growing Power BI solution because you made poor decisions early on, why not refer to that maxim the next time you’re thinking about adding a calculated column?

[4] This post only spent a week or so in draft form, but during this week I watched an interesting work email conversation unfold. A Power BI customer was experiencing unexpected performance issues related to incremental refresh of a large dataset, and a DAX calculated column on a table with hundreds of millions of records was part of the scenario. The email thread was between members of the engineering and CAT teams, and a few points jumped out at me, including one CAT member observing “in my experience, calculated columns on large tables [can] increase processing times and also can greatly increase the time of doing a process recalc… it also depends on the complexity of the calculated column.”

I don’t have enough knowledge of the VertiPaq engine’s inner workings to jump into the conversation myself, but I did sip my coffee and smile to myself before moving on with my morning. I checked back in on the conversation later on, and saw that a Power BI group engineering manager (GEM) had shared this guidance, presented here with his approval:

“From a pure perf standpoint, it’s true that we can say:

  • The most efficient approach for a large fact table is to have all the columns be present in the source table (materialized views also might work), so that no extra processing is necessary during the import operation (either in Mashup or in DAX)
  • The next most efficient approach for a large fact table is usually going to be to have the computation be part of the M expression, so that it only needs to be evaluated for the rows in the partitions being processed
  • DAX calculated columns are a great option for flexibility and are particularly useful for dimension tables, but will be the least efficient compared to the above two options for large fact tables”

That sounds pretty familiar, doesn’t it? The GEM effectively summarized Roche’s Maxim, including specific guidance for the specific customer scenario. The details will differ from context to context, but I have never found a scenario to which the maxim did not apply.

Yes, this is a challenge for you to tell me where and how I’m wrong.

[5] Just as Sun Tzu said “To fight and conquer in all our battles is not supreme excellence; supreme excellence consists in breaking the enemy’s resistance without fighting,” supreme excellence in data transformation is not needing to transform the data at all.

[6] That’s “cash registers” to the less retail inclined readers.

[7] If you feel inclined to point out that in each of these stories there is additional data transformation taking place farther downstream, I won’t argue. You are almost certainly correct… but the Maxim still holds, as the key common transformations have been offloaded into the most upstream possible component in the data supply chain. Like a boss.[8]

[8] Like a supremely excellent[5] boss.

Data Governance and Self-Service Business Intelligence

When you hear someone say that governance and self-service BI don’t go together, or some variation on the tired old “Power BI doesn’t do data governance” trope, you should immediately be skeptical.

During a recent Guy in a Cube live stream there was a great discussion about self-service BI and data governance, and about how in most larger organizations Power BI is used for self-service and non-self-service BI workloads. The discussion starts around the 27:46 mark in the recording if you’re interested.

As is often the case, this discussion sparked my writing muse, and I decided to follow up with a brief Twitter thread to share a few thoughts that didn’t fit into the stream chat. That brief thread turned out to be much larger and quite different from what I expected… big enough to warrant its own blog post. This post.

Please consider this well-known quote: “No plan survives contact with the enemy.”

Please also feel encouraged to read this fascinating history of the quote and its attributions on the Quote Investigator web site.

In his 1871 essay, Helmuth von Moltke called out an obvious truth: battle is inherently unpredictable, and once enemy contact is made a successful commander must respond to actual conditions on the ground – not follow a plan that grows more outdated with every passing minute.

At the same time, that commander must have and must adhere to strategic goals for the engagement. Without these goals, how could they react and respond and plan as the reality of the conflict changes constantly and unpredictably?

Implementing managed self-service business intelligence – self-service BI hand-in-hand with data governance – exhibits many of the same characteristics.

Consider a battlefield, where one force has overwhelming superiority: More soldiers, more artillery, more tanks, and a commanding position of the terrain. The commander of that force knows that any enemy who faces him on this field will fail. The enemy knows this too.

And because the enemy knows this, they will not enter the field to face that superior force. They will fade away, withdraw from direct conflict, and strike unexpectedly, seeking out weaknesses and vulnerabilities. This is the nature of asymmetric warfare.

The commander of the more powerful force probably knows this too, and will act accordingly. The smart commander will present opportunities that their enemies will perceive as easily exploitable weaknesses, to draw them in and thus to bring that overwhelming force to bear.

And this brings us naturally back to the topic of data governance, self-service business intelligence, and dead Prussian field marshals.

Seriously.

In many large organizations, the goal of the data governance group is to ensure that data is never used improperly, and to mitigate (often proactively and aggressively mitigate) the risk of improper use.

In many large organizations, the data governance group has an overwhelming battlefield advantage. They make the rules. They define the processes. They grant or deny access to the data. No one gets in without their say-so, and woe unto any business user who enters that field of battle, and tries to get access to data that is under the protection of this superior force.

Of course, the business users know this. They’re outgunned and outmanned, and they know the dire fate that awaits them if they try to run the gauntlet that the data governance team has established. Everyone they know who has ever tried has failed.

So they go around it. They rely on the tried and true asymmetric tactics of self-service BI. The CSV export. The snapshot. The Excel files and SharePoint lists with manually-entered data.

Rather than facing the data governance group and their overwhelming advantages, they build a shadow BI solution.

These veteran business users choose not to join a battle they’re doomed to lose.

They instead seek and find the weak spots. They achieve their goals despite all of the advantages and resources that the data governance group has at their disposal.

Every time. Business users always find a way.

This is where a savvy data governance leader can learn from the battlefield. Just as a military commander can draw in their opponents and then bring their superior forces to bear, the data governance group can present an attractive and irresistible target to draw in business users seeking data.

This is the path to managed self-service business intelligence… and where the whole military analogy starts to break down. Even though data governance and self-service BI have different priorities and goals, these groups should not and must not be enemies. They need to be partners for either to succeed.

Managed self-service BI succeeds when it is easier for business users to get access to the data they need by working within the processes and systems established by the data governance group, rather than circumventing them.[1]

Managed self-service BI succeeds when the data governance group enables processes and systems to give business users the access they need to the data they need, while still maintaining the oversight and control required for effective governance.

Managed self-service BI succeeds when the data governance group stops saying “no” by default, and instead says “yes, and” by default.

  • Yes you can get access to this data, and these are the prerequisites you must meet.
  • Yes you can get access to this data, and these are the allowed scenarios for proper use.
  • Yes you can get access to this data, and these are the resources to make it easy for you to succeed.

What business user would choose to build their own shadow BI solution that requires manual processes and maintenance just to have an incomplete and outdated copy when they could instead have access to the real data they need – the complete, trusted, authoritative, current data they need – just by following a few simple rules?[2]

Managed self-service BI succeeds when the data governance group provides business users with the access they need to the data they need to do their jobs, while retaining the oversight and control the data governance group needs to keep their jobs.

This is a difficult balancing act, but there are well-known patterns to help organizations of any size succeed.

At this point you may be asking yourself what this has to do with plans not surviving contact with the enemy. Everything. It has everything to do with this pithy quote.

The successful data governance group will have a plan, and that plan will be informed by well-understood strategic goals. The plan is the plan, but the plan is made to change as the battle ebbs and flows. The strategy does not change moment to moment or day to day.

So as more business users engage, and as the initial governance plan shows its gaps and inadequacies, the data governance group changes the plan, keeping it aligned with the strategy and informed by the reality of the business.

This is a difficult balancing act, but it is being successfully performed by scores of enterprise organizations around the world using Power BI. Each organization finds the approach and the balance that best achieves their goals.

Although this post has used a martial metaphor to help engage the reader, this is not the best mental model to take away. Data governance and self-service business intelligence are not at war, even though they are often in a state of conflict or friction.

The right mental model is of a lasting peace, with shared goals and ongoing tradeoffs and compromises as each side gives and takes, and contributes to those shared goals.

This is what a successful data culture looks like: a lasting peace.

Multiple people replied to the original Twitter thread citing various challenges to succeeding with managed self-service business intelligence, balancing SSBI with effective data governance. Each of those challenges highlights the importance of an effective partnership between parties, and the alignment of business and IT priorities into shared strategic goals and principles that allow everyone to succeed together.

If you want to explore these concepts further and go beyond the highlights in this post, please feel encouraged to check out the full “Building a Data Culture with Power BI” series of posts and videos. Acknowledging the fact that data governance and self-service BI go beautifully together is just the beginning.


[1] This is really important

[2] Yes, yes, we all know that guy. Sometimes the data governance team needs the old stick for people who don’t find the new carrot attractive enough, but those people tend to be in the minority if you use the right carrot.


Webcast: Unleashing Your Personal Superpower

Last week I delivered a presentation for the Data Platform Women In Tech‘s Mental Health and Wellness Day event.

The recording for my “Unleashing Your Personal Superpower” session is now online:

I hope you’ll watch the recording[1], but here’s a summary just in case:

  • Growth often results from challenge
  • Mental health issues like anxiety and depression present real challenges that can produce “superpowers” – skills that most people don’t have, and which can grow from the day-to-day experience of living with constant challenge
  • Recognizing and using these “superpowers” isn’t always easy – you need to be honest with yourself and the people around you, which in turn depends on being in a place of trust and safety to do so

In the presentation I mainly use an X-Men metaphor, and suggest that my personal superpowers are:

  1. Fear: Most social interactions[2] are deeply stressful for me, so to compensate I over-prepare and take effective notes for things I need to remember or actions I need to take
  2. Confusion: I don’t really understand how other people’s brains work, or the relationship between my actions and their reactions – to compensate I have developed techniques for effective written and verbal communication to eliminate ambiguity and drive clarity
  3. Chaos: My mind is made of chaos[3], which causes all sorts of challenges – to compensate I have developed a “process reflex” to understand complex problems and implement processes to address or mitigate them

I wrap up the session with a quick mention of the little-known years before Superman joined the Justice League, which he spent as a Kryptonite delivery guy, absolutely hating his life. Once he found a team where he could use his strengths and not need to always fight to overcome his weaknesses, he was much happier and more effective.

In related news, if I could only get these Swedes to return my calls, I’m thinking of forming a new superhero team…


[1] And the rest of the session recordings, because it was a great event.

[2] Think “work meetings” for starters and “work social events” for an absolute horror show.

[3] I have a draft blog post from two years ago that tries to express this, but I doubt I will ever actually finish and publish it…

T-Minus One Week Until MBAS 2021!

The 2021 Microsoft Business Applications Summit (MBAS) starts next week on Tuesday May 4th.[1] This year MBAS is a free online event, so if you’re not already registered please register right now – this blog post can wait.

Ok, now that you’re registered, let me tell you why I’m so excited about MBAS this year. The main reason for my excitement is these featured sessions:

Power BI (Peek into the future Part 1): Vision & Roadmap

Power BI (Peek into the future Part 2): Analytics for everyone

Power BI Announcement

The first two “peek into the future” sessions are all about the Power BI roadmap – how Power BI has grown, and how it will continue to grow in the months ahead.

The final “announcement” session… we’re not talking about yet. The session page says “We have a new feature launching that we can’t wait to tell you about” but we are going to have to wait until we get approval to publicly discuss this important new capability. I shouldn’t say any more, but this feels like A Big Deal™.

The next reason I’m so excited is because of the 12 or so “Real-world stories with Power BI” sessions you’ll find in the full Power BI session list. These sessions are led by my amazing teammate Lauren, who is working with Power BI customer organizations around the world to help showcase their successes, and the awesome work their teams have done using Power BI, Azure, and Microsoft 365.

Even though the “big news from Microsoft” sessions get most of the excitement at Microsoft conferences, these “what real customers are doing in the real world” sessions are the hidden gems of MBAS.

These sessions are an amazing opportunity to look behind the scenes of other organizations to see how they’re solving problems – what tools they use, how they use them, how they structure their teams, how they scope and deliver and evolve their solutions. This type of “strategic story” can be incredibly valuable for decision makers, architects, and other senior technical stakeholders, and represents the type of insights that are often difficult to obtain unless you have personal connections with peers inside those other organizations.

No matter what excites you the most, please register for MBAS today, and please spread the word!


[1] Please pause for a moment to appreciate the effort I’ve taken to keep this post free of Star Wars references. You’re welcome.

Problems, not solutions

Imagine walking into a restaurant.

No, not that one. Imagine walking into a nicer restaurant than the one you thought of at first. A lot nicer.

restaurant-2697945_640
Even nicer than this.

Imagine walking into a 3-star Michelin-rated best-in-the-world restaurant, the kind of place where you plan international travel around reservations, the kind of place where the chef’s name is whispered in a kind of hushed awe by other chefs around the world.

Now imagine being seated and then insisting that the chef cook a specific dish in a specific way, because that’s what you’re used to eating, because you know what you like and what you want.

knife-1088529_640
I’ll just leave this here for no particular reason.

In this situation, one of three things is likely to happen:

  1. The chef will give you what you ask for, and your dining experience will be diminished because your request was granted.
  2. The chef will ask you to leave.
  3. The chef will instruct someone else to ask you to leave.[1]

Let’s step back from the culinary context of this imaginary scenario, and put it into the context of software development and BI.

Imagine a user emailing a developer or software team[2] and insisting that they need a feature developed that works in some specific way. “Just make it do this!” or maybe “It should be exactly like <legacy software feature> but <implemented in new software>!!”

I can’t really imagine the restaurant scene playing out – who would spend all that money on a meal just to get what they could get anywhere? But I don’t need to imagine the software scene playing out, because I’ve seen it day after day, month after month for decades, despite the fact that even trivial software customization can be more expensive than a world-class meal. I’ve also been on both sides of the conversation – and I probably will be again.

When you have a problem, you are the expert on the problem. You know it inside and out, because it’s your problem. You’ve probably tried to solve it – maybe you’ve tried multiple solutions before you asked for help. And while you were trying those ineffective solution approaches, you probably thought of what a “great” solution might look like.

So when you ask for help, you ask for the solution you thought of.

This is bad. Really bad.

“Give me this solution” or “give me this feature” is the worst thing to ask for. Because while you may be the expert on your problem, you’re not an expert on the solution. If you were, you wouldn’t be asking for help in the first place.

And to make matters worse, most of the people on the receiving end aren’t the IT equivalents of 3-star Michelin-rated chefs. They’re line cooks, and they give you what you asked for because they don’t know any better. And because the customer is always right, right?

Yeah, nah.

As a software professional, it’s your job to solve your customers’ problems, and to do so within constraints your customers probably know nothing about, and within an often-complex context your customers do not understand[3]. If you simply deliver what the customer asks for, you’ve missed the point, and missed an opportunity to truly solve the fundamental problem that needs to be solved.

If you’re a BI professional, every project and every feature request brings with it an opportunity. It’s the opportunity to ask questions.

Why do you need this?

When will you use it?

What are you doing today without the thing you’re asking for?

When will this be useful?

Who else will use it?[4]

As a software or BI professional, you’re the expert on the solution, just as your customer is the expert on the problem. You know where logic can be implemented, and the pros and cons of each option. You know where the right data will come from, and how it will need to be transformed. You know what’s a quick fix and what will require a lot of work – and might introduce undesirable side-effects or regressions in other parts of the solution.

With this expertise, you’re in the perfect position to ask the right questions to help you understand the problem that needs to be solved. You’re in the perfect position to take the answers to your questions and to turn them into what your customer really needs… which is often very different from what they’re asking for.

You don’t need to ask these questions every time. You may not even need to ask questions of your customers most of the time[5]. But if you’re asking these questions of yourself each time you’re beginning new work – and asking questions of your customers as necessary – the solutions you deliver will be better for it.

And when you find yourself on the requesting side (for example, when you find yourself typing into ideas.powerbi.com) you’re in the perfect position to provide information about the problem you need solved – not just the solution you think you need. Why not give it a try?

This is a complex topic. I started writing this post almost 100 years ago, way back in February 2020[6]. I have a lot more that I want to say, but instead of waiting another hundred years I’ll wrap up now and save more thoughts for another post or two.

If you’ve made it this far and you’re interested in more actual best practices, please read Lean Customer Development by Cindy Alvarez. This book is very accessible, and although it is targeted more at startups and commercial software teams it contains guidance and practices that can be invaluable for anyone who needs to deliver solutions to someone else’s problems.



[1] This seems like the most likely outcome to me.

[2] This could be a commercial software team or “the report guy” in your IT department. Imagine what works for you.

[3] If you’re interested in a fun and accessible look at how the Power BI team decides what features to build, check out this 2019 presentation from Power BI PM Will Thompson. It’s only indirectly related to this post, but it’s a candid look at some of the “often-complex context” in which Power BI is developed.

[4] Please don’t focus too much on these specific questions. They might be a good starting point, but they’re just what leaped to mind as I was typing, not a well-researched list of best practice questions or anything of the sort.

[5] If you’re a BI developer maintaining a Power BI application for your organization, you may have already realized that asking a ton of questions all the time may not be appreciated by the people paying your salary, so please use your own best judgment here.

[6] This probably explains why I so casually mentioned the idea of walking into a restaurant. I literally can’t remember the last time I was in a restaurant. Do restaurants actually exist? Did they ever?

Upcoming webcast: Unleashing your personal superpower

This damned pandemic[1] has been getting the best of me, but Talking About Mental Health is Important, and it’s more important now than ever. So when I learned that the Data Platform Women in Tech user group was hosting a free day-long online event focused on mental health and wellness, I knew I wanted to participate.

Today I am excited to announce that I will be presenting on “Unleashing your personal superpower” on Friday May 7th:

Building a successful career in tech is hard. Every day is a battle, and sometimes the barriers placed in your way seem insurmountable. Wouldn’t it all be easier if you had a superpower?

Maybe you do.

Greta Thunberg famously described her Asperger’s syndrome diagnosis as being a superpower: “I’m sometimes a bit different from the norm. And – given the right circumstances – being different is a superpower.” We can all learn something from Greta.

In this informal session, Microsoft program manager Matthew Roche will share his personal story – including the hard, painful parts he doesn’t usually talk about. Matthew will share his struggles with mental health, how he found his own superpower, how he tries to use it to make the world a better place… and how you might be able to do the same.

Please join the session, and join the conversation, because talking about mental health is important, and because the first step to finding your superpower is knowing where to look.

You can learn more about the event here, and you can sign up here.

I hope you’ll join me!


[1] Of course it’s more than just the damned pandemic, but when I first typed “this year” I realized this year has been going on for decades now, and I figured I would just roll with it because finding the perfect phrase wasn’t really important to the webcast announcement and anyway I expect things will probably suck indefinitely and isn’t this what run-on sentences in footnotes are for, anyway?

Deep thoughts on dataflows

As you may have noticed, life is complicated and keeps getting in the way of my plans to be a more active blogger and YouTuber[1]. I haven’t released a new dataflows video of my own in far too long[2], but my teammate Kasper is helping out by doing the hard work for me:

Last week I had an awesome conversation on “everything dataflows” with Kasper and the video is now available on his excellent Kasper On BI YouTube channel. In this hour-long video we talk about a lot of the “big picture” topics related to dataflows, including how dataflows fit into a larger data estate – and when you should use them, or avoid using them.

Please check it out, and let me know what you think!


[1] If this isn’t the name of a social media platform for potatoes, it should be.

[2] To add insult to the injury that is life in a global pandemic, my video editing PC died a few weeks ago. I now have it back up and running, but I lost all of my project templates and works in progress, which is likely to introduce more delays. FFS.