Quick Tip: Implementing “IsNumeric” in Power Query

I’ve had many people tell me that they keep a blog so they will have a place to find the information they need, and the answers to their own future questions. This isn’t typically the case for me… but this post is an exception.

Every few months I find myself searching for an IsNumeric function[1] in Power Query. Each time I end up spending a few frustrating hours before I find the solution I’m looking for, because the search terms I use don’t match the terms used by the people who have already solved and shared the problem.

Last night, when I was once again failing to find the solution I needed, I reached out to Twitter for help, and Imke Feldman delivered.

If you’re not sure how you’d use this solution, consider the problem I was working on last night. My source data includes a “release-date” column that can include a wide range of values, and because I do not have any influence over the source system, I need to work with what it contains. Specifically, I need to find records that contain a year, and I need to extract that year into a new numeric column.

[Image 01: sample data]

There are a few more pattern variations in the data including two-digit years, but this sample shows the basic problem I need to solve. And in order to solve it, I need to be able to determine if the two characters at the end of the text represent a number.

Using Imke’s approach, this is what I ended up doing:

  1. Extract the last two characters from the source column
  2. When the extracted characters contain a number, convert them to numeric
  3. Standardize the resulting value to represent the correct year

This last step is specific to my particular “date as year encoded in different text formats” problem, but I’m including it here in case someone else may find it useful.

It looks like this:

[Image 02: the solution]
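In M, a minimal sketch of the same three steps might look something like this (the sample values and the two-digit-year rule are simplified assumptions for illustration, not the exact data or logic from my query):

let
    // Simplified sample data with a "release-date" column
    Source = #table(
        {"release-date"},
        {{"Released in 98"}, {"Summer of 05"}, {"TBD"}, {"2001"}}
    ),
    // 1. Extract the last two characters from the source column
    AddSuffix = Table.AddColumn(Source, "Suffix",
        each Text.End([#"release-date"], 2), type text),
    // 2. "IsNumeric" check: attempt the conversion, and return null when the text is not a number
    AddYear = Table.AddColumn(AddSuffix, "Year",
        each try Number.FromText([Suffix]) otherwise null, type nullable number),
    // 3. Standardize the value (assumption: two-digit years below 30 mean 20xx)
    FixYear = Table.TransformColumns(AddYear,
        {{"Year", each if _ = null then null
            else if _ < 30 then 2000 + _
            else if _ < 100 then 1900 + _
            else _, type nullable number}})
in
    FixYear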

Thanks to Imke for her quick response. Hopefully there are people out there other than me who will find this useful… but I know that I will.


[1] This is how I think of a function that accepts a text value and returns true if that value contains a number, and false if it does not. This is probably because of my decades-long love affair with Transact-SQL, since T-SQL has an ISNUMERIC function built in.
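For reference, a minimal sketch of what such a function might look like in M (my own illustration, not Imke’s exact code and not T-SQL’s ISNUMERIC):

// Returns true when the text can be converted to a number, and false otherwise
(value as text) as logical =>
    not (try Number.FromText(value))[HasError]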

Are you building a BI house of cards?

Every few weeks I see someone asking about using Analysis Services as a data source for Power BI dataflows. Every time I hear this, I cringe, and then include advice something like this[1] in my response:

Using Analysis Services as a data source is an anti-pattern – a worst practice. It is not recommended, and any solution built using this pattern is likely to produce dissatisfied customers. Please strongly consider using other data sources, likely the data sources on which the AS model is built.


There are multiple reasons for this advice.


Some reasons are technical. Extracting large volumes of data is not what an Analysis Services model is designed for. Performance for the ETL process is likely to be poor, and you’re likely to end up with memory/caching issues on the Analysis Services server. Beyond this, AS models typically don’t include the IDs/surrogate keys that you need for data warehousing, so joining the AS data to other data sources will be problematic.[2]


For some specific examples and technical deep dives into how and why this is a bad idea, check out this excellent blog post from Shabnam Watson. The focus of the post is on SSAS memory settings, but it’s very applicable to the current discussion.


Some reasons for this advice are less technical, but no less important. Using analytics models as data sources for ETL processing is a strong code smell[3] (“any characteristic in the source code of a program that possibly indicates a deeper problem”) for business intelligence solutions.


Let’s look at a simple and familiar diagram:


[Diagram 01: good]


There’s a reason this left-to-right flow is the standard representation of BI applications: it’s what works. Each component has specific roles and responsibilities that complement each other, and which are aligned with the technology used to implement the component. This diagram includes a set of logical “tiers” or “layers” that are common in analytics systems, and which mutually support each other to achieve the systems’ goals.
Although there are many successful variations on this theme, they all tend to have this general flow and these general layers. Consider this one, for example:


[Diagram 02: ok]

This example has more complexity, but also has the same end-to-end flow as the simple one. This is pretty typical for scenarios where a single data warehouse and analytics model won’t fulfill all requirements, so the individual data warehouses, data marts, and analytics models each contain a portion – often an overlapping portion – of the analytics data.

Let’s look at one more:

[Diagram 03: trending badly]

This design is starting to smell. The increased complexity and blurring of responsibilities will produce difficulties in data freshness and maintenance. The additional dependencies, and their redundant and overlapping nature, mean that any future changes will require additional investigation and care to ensure that there are no unintended side effects on the existing functionality.

As an aside, my decades of working in data and analytics suggest that this care will rarely actually be taken. Instead, this architecture will be fragile and prone to problems, and the teams that built it will not be the teams who solve those problems.

And then we have this one[4]:

[Diagram 04: hard no]

This is what you get when you use Analysis Services as the data source for ETL processing, whether that ETL and downstream storage are implemented in Power BI dataflows or in other technologies. And this is probably the best case you’re likely to get when you go down this path. Even with just two data warehouses and two analytics models in the diagram, the complex and unnatural dependencies are obvious, and are painful to consider.

What would be better here?[5] As mentioned at the top of the post, the logical alternative is to avoid using the analytics model and to instead use the same sources that the analytics model already uses. This may require some refactoring to ensure that the duplication of logic is minimized. It may require some political or cross-team effort to get buy-in from the owners of the upstream systems. It may not be simple, or easy. But it is almost always the right thing to do.

Don’t take shortcuts to save days or weeks today that will cause you or your successors months or years to undo and repair. Don’t build a house of cards, because with each new card you add, the house is more and more likely to fall.


[1] Something a lot like this. I copied this from a response I sent a few days ago.

[2] Many thanks to Chris Webb for some of the information I’ve paraphrased here. If you want to hear more from Chris on this subject, check out this session recording from PASS Summit 2017. The whole session is excellent; the information most relevant to this subject begins around the 26 minute mark in the recording. Chris also gets credit for pointing me to Shabnam Watson’s blog.

[3] I learned about code smells last year when I attended a session by Felienne Hermans at Craft Conference in Budapest. You can watch the session here. And you really should, because it’s really good.

[4] My eyes are itching just looking at it. It took an effort of will to create this diagram, much less share it.

[5] Yes, just about anything would be better.

Quick Tip: Power Query column by example

It seems like everyone knows about how Power Query in Power BI Desktop lets you use web pages as data sources, and lets you build queries by entering values from the web page. This is probably because so many people use the “from web page by example” feature for demos[1].

It also seems like no one knows about the “column from examples” feature in Power Query, even though this is just as exciting, and much more useful. They’re both built on the same underlying intelligence, but this one lets you work with data from any source.

Here’s the quick overview:

  1. In the Power Query editor in Power BI Desktop, choose “Column from Examples” from the “Add Column” tab.
  2. Enter the values that the new column should have for rows that are already in your data set.
  3. Review the values that Power Query is suggesting for the other rows, and when they are all correct, choose OK, and then say “Ooooooohhhhh” when Power Query does all the work for you.

Like this[2]:

[Image: column by example]

In this example, I want to extract an ID value from a URL column where the URLs all follow the same pattern. I’ve copied the ID value from the first row, and when I paste it into the new column, Power Query immediately generates the M code to extract the ID and adds a new step to the query.

= Table.AddColumn(#"Renamed Columns", "Text After Delimiter", each Text.AfterDelimiter([Band Page URL], "/", 4), type text)

This code isn’t too complex. You could have written this code. I could have written it, but odds are it would have taken me a few minutes to look up the syntax for the Text.AfterDelimiter function, and a few iterations to get it right. Using this awesome feature I get the same code in seconds. For more complex scenarios you may need to enter more than one desired value before Power Query has enough information to get the function just right.
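If you’re curious about what that generated step is doing, here’s a tiny standalone sketch (the URL is made up, but it follows the same “ID at the end” pattern as my Band Page URL column):

let
    // Hypothetical URL, used only to illustrate the generated step
    SampleUrl = "https://www.example.com/bands/Some_Band/12345",
    // The third argument is the zero-based index of the delimiter occurrence,
    // so 4 returns everything after the fifth "/"
    Id = Text.AfterDelimiter(SampleUrl, "/", 4)   // "12345"
in
    Id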

The best part is that the step that is added is just another query step. You can delete it, you can edit it, you can add more steps after it. This is just a fast and simple way to add the step without needing to write any code or even know what functions to use.

There have been a few occasions where Power Query couldn’t quite figure out what I needed it to do, but it hasn’t happened often, or lately. This may be because I’ve been getting better, but it’s probably because the Power Query team has continued to improve the feature since it was introduced in April 2017. That’s right – this feature has been available for over two years, and you’re just learning about it now.

Or are you[3]?

Either way, check it out the next time you’re working in Power Query, and let me know what you think.


[1] To be fair, it does make for an awesome demo.

[2] There also used to be a GIF here, but apparently my GIF-making skills need some refinement before I start using GIFs in blog posts.

[3] When I demo Power Query in Power BI Desktop to customers, maybe 10% have already seen this feature. I suspect that the high quality demographic that this blog is certain to attract will be more aware and more engaged than the general population, so I’m genuinely curious to know if you already knew about this gem.

Power BI dataflows enhanced compute engine

This month the Power BI dataflows team announced an exciting new capability available in preview for dataflows in workspaces backed by Power BI Premium dedicated capacity[1]. This compute engine builds on the Excel-like capabilities delivered through linked and computed entities to dramatically increase refresh performance for dataflow entities that get their data from other dataflow entities.

This post won’t go deep into the technical details – I’ll defer to the folks who built the feature to share that – but it will share important highlights to answer the most common questions I’ve seen since the preview was announced.

Let’s begin with a quick review. Consider this typical scenario:

[Diagram: linked dataflows scenario]

  1. Data is loaded into entities in the “Ingest from Dynamics Sales” and “Product Telemetry in Azure” dataflows from data sources external to Power BI.
  2. The “Clean and enrich sales data” and “Add Telemetry Customer Attributes” dataflows reference these dataflows as their sources, and the “Final Business View” dataflow in turn references them.
  3. The first two dataflows are configured for scheduled refresh. The other three dataflows will automatically refresh[2] to keep the data in their entities up to date.

But what is actually happening in this scenario?

The boxes and arrows in this type of diagram could represent any number of technologies, but for Power BI dataflows the key technology is the Mashup Engine. The Mashup Engine is the runtime engine behind Power Query – it takes the Power Query “M” script and performs the extract, transform, and load (ETL) processes that the script defines. The Mashup Engine runs in Excel, in Power BI Desktop, in the Power BI on-premises data gateway, and in the Power BI service, and it is a cornerstone for data connectivity in Power BI.

A key characteristic of Power Query and the Mashup Engine is query folding[3]. Query folding is a process by which the Mashup Engine offloads the heavy lifting of the data transformation to the data source, as enabled by the capabilities of the data source. For example, consider a query that extracts three columns from one table in a SQL Server database, joins them with four columns from another table in the same database, and then groups the join results by one of the columns. The Mashup Engine will automatically send the appropriate JOIN and GROUP BY statements to the server, and the server will return the results. If the same query is executed against two folders full of CSV files[4], there is no “server” to perform the required join and grouping operations, so the Mashup Engine performs them in memory.
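To make that concrete, here’s a minimal sketch of a query that can fold (the server, database, table, and column names are all made up): against SQL Server, the join and grouping below can fold into a single statement sent to the server, while the same steps against CSV files would run entirely in the Mashup Engine.

let
    // Hypothetical server, database, and tables
    Source    = Sql.Database("sqlserver.example.com", "SalesDb"),
    Orders    = Source{[Schema = "dbo", Item = "Orders"]}[Data],
    Customers = Source{[Schema = "dbo", Item = "Customers"]}[Data],
    // Join the two tables on CustomerID...
    Joined    = Table.NestedJoin(Orders, {"CustomerID"}, Customers, {"CustomerID"}, "Customer", JoinKind.Inner),
    Expanded  = Table.ExpandTableColumn(Joined, "Customer", {"Country"}),
    // ...then group the joined results; both operations can fold back to the server as JOIN and GROUP BY
    Grouped   = Table.Group(Expanded, {"Country"}, {{"TotalAmount", each List.Sum([Amount]), type number}})
in
    Grouped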

To summarize this example, the query is largely the same, and the Mashup Engine uses query folding to choose the best location[5] for the required compute operations to be performed.

This brings us back to dataflows, CDM folders, and why the new advanced compute engine is exciting. When a Power BI dataflow is refreshed, the data produced by the Mashup Engine’s execution of the dataflow entities’ queries is stored in Azure in the CDM folder format, which uses CSV files in folders for the storage. This means that when one entity references another entity, it is accessing CSV data and is using the Mashup Engine for its processing[6]. This can result in slow refresh performance for some linked entity scenarios when the data volume or transformations can’t be optimized by the Mashup Engine.

The new advanced compute engine addresses this problem by loading the CDM folder data into an Azure SQL DB-based cache, and using query folding against this cache. This can produce significant performance improvements for the downstream refresh of computed entities. If we consider again the scenario shown above, the new advanced compute engine will change the underlying processing without requiring any change to the dataflows or entities themselves.

  1. When data is loaded into entities in the dataflows, the data is also loaded into the SQL-based cache.
  2. This may increase the refresh times for the “Ingest from Dynamics Sales” and “Product Telemetry in Azure” dataflows, because the caching is an additional operation not otherwise required.
  3. When the “Clean and enrich sales data” and “Add Telemetry Customer Attributes” and “Final Business View” dataflows refresh, the Mashup Engine can use query folding against the SQL-based cache. This can result in dramatic improvements in performance for the refresh of computed entities.

If you want to know more, you should watch this session from the Microsoft Business Applications Summit and read this blog post after you watch the session. Both of these excellent resources are from the architect responsible for much of the technical underpinnings of Power BI, and both go into more technical detail than I’ve gone into here.

As mentioned above, this post is designed to address some questions I’ve heard since this new preview capability was announced. Please let me know if this post is helpful, or if you have any questions of your own that aren’t answered here.


[1] Yes – this is a Premium-only feature.

[2] In case you didn’t click through already, this is why we call this behavior “Excel-like.” The Power BI service maintains the dependencies between dataflow entities like Excel maintains dependencies between cells and ranges.

[3] If you want to know more about query folding, check out this awesome post from MVP Matt Allington.

[4] This is one of the aspects of Power Query that are most exciting for me, as only the “source” function in the query would need to be changed. This data source agnostic nature of Power Query is enabled by query folding.

[5] Or locations, because a single query can combine data from multiple data sources.

[6] This is also why dataflows are an import-only data source in Power BI Desktop.

 

Ignorance as a finite resource (or “Data visualization resources”)

I’m not a data visualization professional, nor do I play one on TV. Even though I’ve been working with data professionally for over 20 years[1], I’ve generally been on the back end of systems, working on data platforms and not data visualization. I’ve been known to say on occasion[2] that I’ll make your data sing, but you probably don’t want me responsible for making it pretty. If I’m your report designer, building them will take five times as long, and the reports will suck ten times as much.

Before I joined Microsoft, I worked for many years as a consultant. When working with potential clients, I would often present my ignorance as an asset. This may sound strange, but think about the truth in this simple pitch: “You and your team know too much. There are some questions you will never think to ask, because they’re just too obvious. One of the strengths I will bring to this engagement is my ignorance – I will ask those questions and together we will find answers and build solutions.”

Sometimes ignorance doesn’t feel like an asset – sometimes it just gets in the way. In the past few months this has been the case as I’ve been trying to build Power BI reports for some of the data I work with day to day. To be more specific, I’ve been trying to build reports that don’t suck. I already have functional reports, but I believe that there are insights waiting in the data that could be discovered and shared more easily if I had stronger data visualization skills. I read documentation, followed examples, and reached out to multiple friends and colleagues for advice and… and often found that not only did I not have the knowledge to effectively visualize my data, I often struggled to effectively communicate my goals. I was both mega-ignorant and meta-ignorant.

Which brings me to the point of this post: I’m hoping to use this monumental ignorance as an asset.

Earlier this week I reached out on Twitter asking for help. I wanted to find resources that could help me build my data visualization skills, and to build my vocabulary of terms and concepts so I could ask better questions. I wanted to address both the mega-ignorance and the meta-ignorance. So I asked both Alberto Cairo[3] and Alyssa Fowers[4] by name, and asked for help in general, and boy did I get it.

I learned one immediate lesson from the replies I received: I did not do an effective job of communicating my goals. Previously I had struggled when asking for tactical data visualization help; here I was struggling when asking for strategic help, and most of the responses weren’t even close to what I was asking for[5]. This was frustrating, but it validated my understanding of the problem, and reinforced my belief that I need a deeper understanding of the concepts and nomenclature of data visualization if I’m going to improve.

Of the many resources that were shared with me, this one looks the most promising: Andy Kirk’s “Data Visualization: A Handbook for Data Driven Design.” Based on the thoughtful description[6] and glowing reviews, I think this is where I will begin. I’ve pre-ordered the second edition and when it arrives next month I’ll begin my journey…

[Image: book cover]

…which is sort of where I was going with this whole post[7]. Ignorance as a finite resource, remember?

I’ve long observed that when one person is struggling with something, he’s typically not alone. When one person asks a question, a dozen other people breathe a silent sigh of relief, because they had the same question but didn’t dare ask it. Working under the assumption that I’m not alone in this starting point, I’m hoping to use my current state of ignorance, and the upcoming process of destroying that ignorance, to start a “data visualization as a second language” series of posts[8].

If you’re looking for excellent tools that you can use today to build on your existing foundation of data visualization knowledge and skills, click on the Twitter link above because there are over a dozen web sites, books, and other resources – and I’m too lazy to copy them all here. But if you’re interested in joining me on my learning journey, stay tuned until August.

Do you think this type of content has an audience? Are you that audience? I’d love to hear from you.


[1] How young is too young to play the “old man” card so consistently? Asking for a friend.

[2] Typically these are occasions when someone is looking at one of my ugly reports, and I’m feeling self-conscious.

[3] You may know Mr. Cairo as a leading expert and author on data visualization, as the Knight Chair at the University of Miami, or as the guy who is too busy to replace the lorem ipsum in his bio page, but I will always think of him as the other person who regularly presents on the intersection of data and heavy metal. \m/

[4] If you don’t follow Ms. Fowers’s “Data and Dragons” blog you definitely should, because she already has two parts in her “Sword Graphs” series.

[5] Yes, I know this is how Twitter works. I’m trying to be generous here – work with me.

[6] Click on the “Who is this book for” link and tell me that you don’t wish every technical book had this information so readily available.

[7] Brevity will never be my forte.

[8] I was originally thinking of “Data Viz 101” but that sounded too advanced, and if I went with the second idea of  “Data Viz 001”  I know I would forever be making “James Bond’s analyst colleague” jokes. DVSL wasn’t my first choice, but I think I like it.

Status Check: Power BI dataflows and ADLSg2

In the last few weeks I’ve seen a spike in questions related to the integration of Power BI dataflows and Azure Data Lake Storage Gen2. Here’s a quick “status check” on the current state of this feature to get the answers out for as many people as possible.

  • Power BI dataflows are generally available (GA) for capabilities that use the built-in Power BI-managed storage.
  • Power BI dataflows integration with Azure is currently in preview – this includes the “Bring your own storage account” capabilities where you can configure Power BI to use your ADLSg2 storage account for dataflows storage, instead of using the built-in Power BI-managed storage.
  • During preview, there are several known limitations[1]:
    • Only a single ADLSg2 storage account can be configured for a Power BI tenant.
    • The storage account, once configured, cannot be changed.
    • The setup process to connect Power BI with ADLSg2 is somewhat lengthy and step-intensive.
    • To grant users other than the owner of the dataflow access to the dataflow in Power BI, you must grant them permissions to access the workspace in Power BI and grant them access to the CDM folder in ADLSg2.

These limitations will be addressed when this capability hits GA, but you should definitely be aware of them in the meantime. (You may also want to take a look at this MBAS session for an overview of the roadmap for the rest of this calendar year.)

I’ve seen customers take different approaches:

  1. Some customers delay their integration of Power BI and ADLSg2, and are waiting for these limitations to be removed before they move forward.
  2. Some customers adopt within the constraints of the preview, and choose a workload or set of workloads where the current limitations are acceptable.
  3. Some customers set up a demo tenant of Power BI and use it to test and validate the ADLSg2 integration while deciding on option 1 or option 2.

I hope this helps. If you or your customers have any questions on this topic that aren’t answered here, please let me know!


[1] And they’re all documented. Nothing in this blog post is new, but hopefully it will help to have this summary online and sharable.