Quick Tip: Implementing “IsNumeric” in Power Query

I’ve had many people tell me that they have a blog so they will have a place to look to find the information that they need, and answers to their own future questions. This isn’t typically the case for me… but this post is an exception.

Every few months I find myself searching for an IsNumeric function[1] in Power Query. Each time I end up spending a few frustrating hours before I find the solution I’m looking for, because the search terms I’m using don’t match the terms used by the people who have already solved the problem and shared their solutions.

Last night, when I was once again failing to find the solution I needed, I reached out to Twitter for help, and Imke Feldman delivered.

If you’re not sure how you’d use this solution, consider the problem I was working on last night. My source data includes a “release-date” column that can include a wide range of values, and because I do not have any influence over the source system, I need to work with what it contains. Specifically, I need to find records that contain a year, and I need to extract that year into a new numeric column.

01 data

There are a few more pattern variations in the data including two-digit years, but this sample shows the basic problem I need to solve. And in order to solve it, I need to be able to determine if the two characters at the end of the text represent a number.

Using Imke’s approach, this is what I ended up doing:

  1. Extract the last two characters from the source column
  2. When the extracted characters contain a number, convert them to numeric
  3. Standardize the resulting value to represent the correct year

This last step is specific to my particular “date as year encoded in different text formats” problem, but I’m including it here in case someone else finds it useful.

It looks like this:

02 solution
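
In case the screenshot above doesn’t make it to wherever you’re reading this, here’s a minimal sketch of the general pattern in “M” – it uses hypothetical sample data, and isn’t necessarily the exact steps shown above:

let
    // Hypothetical sample data standing in for the real "release-date" column
    Source = Table.FromRecords({
        [#"release-date" = "Released in 98"],
        [#"release-date" = "July 2003"],
        [#"release-date" = "Unknown"]
    }),
    // 1. Extract the last two characters from the source column
    LastTwo = Table.AddColumn(Source, "YearText", each Text.End([#"release-date"], 2), type text),
    // 2. When the extracted characters contain a number, convert them to numeric (otherwise null)
    YearNumber = Table.AddColumn(LastTwo, "YearNumber", each try Number.From([YearText]) otherwise null, type nullable number),
    // 3. Standardize the resulting value to a four-digit year (the century cutoff here is an arbitrary assumption)
    Year = Table.AddColumn(YearNumber, "Year", each
        if [YearNumber] = null then null
        else if [YearNumber] >= 100 then [YearNumber]
        else if [YearNumber] > 30 then 1900 + [YearNumber]
        else 2000 + [YearNumber], type nullable number)
in
    Year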

Thanks to Imke for her quick response. Hopefully there are people out there other than me who will find this useful… but I know that I will.


[1] This is how I think of a function that accepts a text value and returns true if that value contains a number, and false if it does not. This is probably because of my decades-long love affair with Transact-SQL, since T-SQL has an ISNUMERIC function built in.
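
If you want an actual IsNumeric-style function to reuse, here’s a minimal sketch of one way to write it in “M” – one possible approach, and not necessarily the one Imke shared:

let
    // Returns true when the text value can be converted to a number, false otherwise
    IsNumeric = (value as text) as logical => not (try Number.From(value))[HasError]
in
    IsNumeric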

Are you building a BI house of cards?

Every few weeks I see someone asking about using Analysis Services as a data source for Power BI dataflows. Every time I hear this, I cringe, and then include advice like this[1] in my response.

Using Analysis Services as a data source is an anti-pattern – a worst practice. It is not recommended, and any solution built using this pattern is likely to produce dissatisfied customers. Please strongly consider using other data sources, likely the data sources on which the AS model is built.


There are multiple reasons for this advice.


Some reasons are technical. Extraction of large volumes of data is not what an Analysis Services model is designed for. Performance for the ETL process is likely to be poor, and you’re likely to end up with memory/caching issues on the Analysis Services server. Beyond this, AS models typically don’t include the IDs/surrogate keys that you need for data warehousing, so joining the AS data to other data sources will be problematic.[2]


For some specific examples and technical deep dives into how and why this is a bad idea, check out this excellent blog post from Shabnam Watson. The focus of the post is on SSAS memory settings, but it’s very applicable to the current discussion.


Some reasons for this advice are less technical, but no less important. Using analytics models as data sources for ETL processing is a strong code smell[3] (“any characteristic in the source code of a program that possibly indicates a deeper problem”) for business intelligence solutions.


Let’s look at a simple and familiar diagram:


01 good


There’s a reason this left-to-right flow is the standard representation of BI applications: it’s what works. Each component has specific roles and responsibilities that complement each other, and which are aligned with the technology used to implement the component. This diagram includes a set of logical “tiers” or “layers” that are common in analytics systems, and which mutually support each other to achieve the systems’ goals.
Although there are many successful variations on this theme, they all tend to have this general flow and these general layers. Consider this one, for example:


02 ok

This example has more complexity, but also has the same end-to-end flow as the simple one. This is pretty typical for scenarios where a single data warehouse and analytics model won’t fulfill all requirements, so the individual data warehouses, data marts, and analytics models each contain a portion – often an overlapping portion – of the analytics data.

Let’s look at one more:

03 - trending badly

This design is starting to smell. The increased complexity and blurring of responsibilities will produce difficulties in data freshness and maintenance. The additional dependencies, and the redundant and overlapping nature of those dependencies, mean that any future changes will require additional investigation and care to ensure that there are no unintended side effects on the existing functionality.

As an aside, my decades of working in data and analytics suggest that this care will rarely actually be taken. Instead, this architecture will be fragile and prone to problems, and the teams that built it will not be the teams who solve those problems.

And then we have this one[4]:

04 - hard no

This is what you get when you use Analysis Services as the data source for ETL processing, whether that ETL and downstream storage is implemented in Power BI dataflows or different technologies. And this is probably the best case you’re likely to get when you go down this path. Even with just two data warehouses and two analytics models in the diagram, the complex and unnatural dependencies are obvious, and are painful to consider.

What would be better here?[5] As mentioned at the top of the post, the logical alternative is to avoid using the analytics model and to instead use the same sources that the analytics model already uses. This may require some refactoring to ensure that the duplication of logic is minimized. It may require some political or cross-team effort to get buy-in from the owners of the upstream systems. It may not be simple, or easy. But it is almost always the right thing to do.

Don’t take shortcuts to save days or weeks today that will cause you or your successors months or years to undo and repair. Don’t build a house of cards, because with each new card you add, the house is more and more likely to fall.


[1] Something a lot like this. I copied this from a response I sent a few days ago.

[2] Many thanks to Chris Webb for some of the information I’ve paraphrased here. If you want to hear more from Chris on this subject, check out this session recording from PASS Summit 2017. The whole session is excellent; the information most relevant to this subject begins around the 26 minute mark in the recording. Chris also gets credit for pointing me to Shabnam Watson’s blog.

[3] I learned about code smells last year when I attended a session by Felienne Hermans at Craft Conference in Budapest. You can watch the session here. And you really should, because it’s really good.

[4] My eyes are itching just looking at it. It took an effort of will to create this diagram, much less share it.

[5] Yes, just about anything would be better.

Quick Tip: Power Query column by example

It seems like everyone knows about how Power Query in Power BI Desktop lets you use web pages as data sources, and lets you build queries by entering values from the web page. This is probably because so many people use the “from web page by example” feature for demos[1].

It also seems like no one knows about the “column from examples” feature in Power Query, even though this is just as exciting, and much more useful. They’re both built on the same underlying intelligence, but this one lets you work with data from any source.

Here’s the quick overview:

  1. In the Power Query editor in Power BI Desktop, choose “Column from Examples” from the “Add Column” tab.
  2. Enter the values that the new column should have for rows that are already in your data set.
  3. Review the values that Power Query is suggesting for the other rows, and when they are all correct, choose OK, and then say “Ooooooohhhhh” when Power Query does all the work for you.

Like this[2]:

column by example

In this example, I want to extract an ID value from a URL column where the URLs all follow the same pattern. I’ve copied the ID value from the first row, and when I paste it into the new column, Power Query immediately generates the M code to extract the ID and adds a new step to the query.

= Table.AddColumn(#"Renamed Columns", "Text After Delimiter", each Text.AfterDelimiter([Band Page URL], "/", 4), type text)

This code isn’t too complex. You could have written this code. I could have written it, but odds are it would have taken me a few minutes to look up the syntax for the Text.AfterDelimiter function, and to iterate a few times getting it right. Using this awesome feature I get the same code in seconds. For more complex scenarios you may need to enter more than one desired value before Power Query has enough information to get the function just right.
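
If you’re curious about what the generated step actually does, here’s a tiny standalone sketch with a made-up URL – the delimiter index you need depends on how many “/” characters come before the ID in your own URLs:

let
    // Hypothetical URL following the same sort of pattern as the "Band Page URL" column
    SampleUrl = "https://www.example.com/bands/1234",
    // Return everything after the fourth "/" (zero-based index 3) – the ID at the end
    BandId = Text.AfterDelimiter(SampleUrl, "/", 3)
in
    BandId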

The best part is that the step that is added is just another query step. You can delete it, you can edit it, you can add more steps after it. This is just a fast and simple way to add the step without needing to write any code or even know what functions to use.

There have been a few occasions where Power Query couldn’t quite figure out what I needed it to do, but it hasn’t happened often, or lately. This may be because I’ve been getting better, but it’s probably because the Power Query team has continued to improve the feature since it was introduced in April 2017. That’s right – this feature has been available for over two years, and you’re just learning about it now.

Or are you[3]?

Either way, check it out the next time you’re working in Power Query, and let me know what you think.


[1] To be fair, it does make for an awesome demo.

[2] There also used to be a GIF here, but apparently my GIF-making skills need some refinement before I start using GIFs in blog posts.

[3] When I demo Power Query in Power BI Desktop to customers, maybe 10% have already seen this feature. I suspect that the high quality demographic that this blog is certain to attract will be more aware and more engaged than the general population, so I’m genuinely curious to know if you already knew about this gem.

Power BI dataflows enhanced compute engine

This month the Power BI dataflows team announced an exciting new capability available in preview for dataflows in workspaces backed by Power BI Premium dedicated capacity[1]. This compute engine builds on the Excel-like capabilities delivered through linked and computed entities to dramatically increase refresh performance for dataflow entities that get their data from other dataflow entities.

This post won’t go deep into the technical details – I’ll defer to the folks who built the feature to share that – but it will share important highlights to answer the most common questions I’ve seen since the preview was announced.

Let’s begin with a quick review. Consider this typical scenario:

excel white

  1. Data is loaded into entities in the “Ingest from Dynamics Sales” and “Product Telemetry in Azure” dataflows from data sources external to Power BI.
  2. The “Clean and enrich sales data” and “Add Telemetry Customer Attributes” dataflows reference these dataflows as their sources, and the “Final Business View” dataflow in turn references them.
  3. The first two dataflows are configured for scheduled refresh. The other three dataflows will automatically refresh[2] to keep the data in their entities up to date.

But what is actually happening in this scenario?

The boxes and arrows in this type of diagram could represent any number of technologies, but for Power BI dataflows the key technology is the Mashup Engine. The Mashup Engine is the runtime engine behind Power Query – it takes the Power Query “M” script and performs the extract, transform, and load (ETL) processes that the script defines. The Mashup Engine runs in Excel, in Power BI Desktop, in the Power BI on-premises data gateway, and in the Power BI service, and it is a cornerstone for data connectivity in Power BI.

A key characteristic of Power Query and the Mashup Engine is query folding[3]. Query folding is a process by which the Mashup Engine offloads the heavy lifting of the data transformation to the data source, as enabled by the capabilities of the data source. For example, consider a query that extracts three columns from one table in a SQL Server database, joins them with four columns from another table in the same database, and then groups the join results by one of the columns. The Mashup Engine will automatically send the appropriate JOIN and GROUP BY statements to the server, and the server will return the results. If the same query is executed against two folders full of CSV files[4], there is no “server” to perform the required join and grouping operations, so the Mashup Engine performs them in memory.

To summarize this example, the query is largely the same, and the Mashup Engine uses query folding to choose the best location[5] for the required compute operations to be performed.
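
To make this a little more concrete, here’s a hedged sketch of the kind of query described above, using hypothetical server, database, table, and column names. Against SQL Server, the join and grouping can fold into a single statement that the server executes; against CSV files, the Mashup Engine performs the same operations in memory.

let
    // Hypothetical Azure SQL Database source – server, database, schema, table, and column names are assumptions
    Source = Sql.Database("myserver.database.windows.net", "sales"),
    Orders = Source{[Schema = "dbo", Item = "Orders"]}[Data],
    Customers = Source{[Schema = "dbo", Item = "Customers"]}[Data],
    // A join and a grouping – operations that can fold to the server when the source supports them
    Joined = Table.NestedJoin(Orders, {"CustomerID"}, Customers, {"CustomerID"}, "Customer", JoinKind.Inner),
    Expanded = Table.ExpandTableColumn(Joined, "Customer", {"Region"}),
    Grouped = Table.Group(Expanded, {"Region"}, {{"Total Amount", each List.Sum([Amount]), type number}})
in
    Grouped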

This brings us back to dataflows, CDM folders, and why the new advanced compute engine is exciting. When a Power BI dataflow is refreshed, the data produced by the Mashup Engine’s execution of the dataflow entities’ queries is stored in Azure in the CDM folder format, which uses CSV files in folders for the storage. This means that when one entity references another entity, it is accessing CSV data and is using the Mashup Engine for its processing[6]. This can result in slow refresh performance for some linked entity scenarios when the data volume or transformations can’t be optimized by the Mashup Engine.

The new advanced compute engine addresses this problem by loading the CDM folder data into an Azure SQL DB-based cache, and using query folding against this cache. This can produce significant performance improvements for the downstream refresh of computed entities. If we consider again the scenario shown above, the new advanced compute engine will change the underlying processing without requiring any change to the dataflows or entities themselves.

  1. When data is loaded into entities in the dataflows, the data is also loaded into the SQL-based cache.
  2. This may increase the refresh times for the “Ingest from Dynamics Sales” and “Product Telemetry in Azure” dataflows, because the caching is an additional operation not otherwise required.
  3. When the “Clean and enrich sales data” and “Add Telemetry Customer Attributes” and “Final Business View” dataflows refresh, the Mashup Engine can use query folding against the SQL-based cache. This can result in dramatic improvements in performance for the refresh of computed entities.

If you want to know more, you should watch this session from the Microsoft Business Applications Summit and read this blog post after you watch the session. Both of these excellent resources are from the architect responsible for much of the technical underpinnings of Power BI, and both go into more technical detail than I’ve gone into here.

As mentioned above, this post is designed to address some questions I’ve heard since this new preview capability was announced. Please let me know if this post is helpful, or if you have any questions of your own that aren’t answered here.


[1] Yes – this is a Premium-only feature.

[2] In case you didn’t click through already, this is why we call this behavior “Excel-like.” The Power BI service maintains the dependencies between dataflow entities like Excel maintains dependencies between cells and ranges.

[3] If you want to know more about query folding, check out this awesome post from MVP Matt Allington.

[4] This is one of the aspects of Power Query that are most exciting for me, as only the “source” function in the query would need to be changed. This data source agnostic nature of Power Query is enabled by query folding.

[5] Or locations, because a single query can combine data from multiple data sources.

[6] This is also why dataflows are an import-only data source in Power BI Desktop.

 

Power BI dataflows and CDM Sessions from MBAS 2019

Last week Microsoft held its annual Microsoft Business Applications Summit (MBAS) event in Atlanta. This two-day technical conference covers the whole Business Applications platform – including Dynamics, PowerApps, and Flow – and not just Power BI, but there was a ton of great Power BI content to be had. Now that the event is over, the session recordings and resources are available to everyone.

MBAS 2019 Banner

There’s a dedicated page on the Power BI community site with all of the sessions, but I wanted to call out a few sessions on dataflows and the Common Data Model that readers of this blog should probably watch[1].

Power BI dataflows sessions

Microsoft Power BI: Democratizing self-service data prep with dataflows

This session is something of a “deep technical introduction” to dataflows in Power BI. If you’re already familiar with dataflows a lot of this will be a repeat, but there are some gems as well.

Microsoft Power BI: Enterprise-grade BI with Power BI dataflows

This session is probably my favorite dataflows session from any conference. This is a deep dive into the dataflows architecture, including the brand-new-in-preview compute engine for performance and scale.

Common Data Model sessions

As you know, Power BI dataflows build on CDM and CDM folders. As you probably know, CDM isn’t just about Power BI – it’s a major area of investment across Azure data services as well. The session lineup at MBAS reflected this importance with three dedicated CDM sessions.

Common Data Model: All you need to know

This ironically-named session[2] provides a comprehensive overview of CDM. It’s not really everything you need, but it’s the right place to begin if you’re new to CDM and want the big-picture view.

Microsoft Power BI: Common Data Model and Azure Data Services

This session covers how CDM and CDM folders are used in Power BI and Azure data services. If you’ve been following dataflows and CDM closely over the past six months much of this session might be review, but it’s an excellent “deep overview” nonetheless.

Microsoft Power BI: Advanced concepts in the Common Data Model

This session is probably the single best resource on CDM available today. The presenters are the key technical team behind CDM[3], and the session goes into details and concepts that aren’t available in any other presentation I’ve found. I’ve been following CDM pretty closely for the past year or more, and I learned a lot from this session. You probably will too.

Once you’re done watching these sessions, remember that there’s a huge library of technical sessions you can watch on-demand. Also some less-technical sessions.


[1] I have a list of a dozen or more sessions that I want to watch, and only a few of them are dataflows-centric. If you look through the catalog you’ll likely find some unexpected gems.

[2] If this is all you need to know, why do we have these other two sessions?

[3] Including Jeff Bernhardt, the architect behind CDM. Jeff doesn’t have the rock star reputation he deserves, but he’s been instrumental in the design and implementation of many of the products and services on which I’ve built my career. Any time Jeff is talking, I make a point to listen closely.

Power BI Dataflows and Additional Data Sources

A few recent posts have touched on different aspects of how dataflows have capabilities that aren’t exposed in the current preview user experience in Power Query Online. This is true of Power Query “M” functions, and of using custom functions, and it is also true of data sources.

Not every data source that is supported in Power BI Desktop is supported today in dataflows, but these data sources should work[1]:

  • SAP Business Warehouse
  • SAP HANA
  • Azure Analysis Services
  • Google Analytics
  • Adobe Analytics
  • ODBC
  • OLE DB
  • Folder
  • SharePoint Online folder
  • SharePoint folder
  • Hadoop HDFS
  • Azure HDInsight (HDFS)
  • Hadoop file HDFS
  • Informix (beta)

 

For any of these data sources, you should be able to build a query in Power BI Desktop, copy it and paste it into Power Query Online and have it work, even though the data sources are not yet listed in the UX.
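
For example, ODBC isn’t in the Power Query Online UX today, but a query like this hypothetical one – the DSN and SQL statement are placeholders – authored in Power BI Desktop and pasted into a blank query should still work:

let
    // Hypothetical DSN and query text – replace with your own
    Source = Odbc.Query("dsn=MyWarehouseDsn", "SELECT ProductID, ProductName FROM Products")
in
    Source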

Let’s look at an example using the Folder source, both because the Folder data source is not yet supported in the Power Query Online user experience, and because of how it relates to yesterday’s post on using custom functions[2]. We’ll begin by choosing a folder that contains our data files.

 

 

Once we’ve connected to the folder, we’ll edit the query to load the data from all CSV files that the folder contains, and then combine the contents of the files.

 

 

Once we’re done, Power Query in Power BI Desktop will give us this:

2018-12-09_17-53-17

There are three points worth mentioning in this image:

  1. The data itself – this is probably what grabs your attention, because it’s big and data
  2. The set of queries built by Power BI Desktop includes multiple queries and a custom function
  3. The main query loaded into the PBIX data model references the other query and the custom function

Now let’s take it all from Power BI Desktop and use it in a dataflow. As covered in the post on authoring Power BI dataflows in Power BI Desktop, right-clicking on the query and choosing “Copy” will copy all of the script we need. Like this:

// Demo File Source
let
Source = Folder.Files("C:\Demo File Source"),
#"Filtered Rows" = Table.SelectRows(Source, each Text.EndsWith([Name], ".csv")),
#"Filtered Hidden Files1" = Table.SelectRows(#"Filtered Rows", each [Attributes]?[Hidden]? <> true),
#"Invoke Custom Function1" = Table.AddColumn(#"Filtered Hidden Files1", "Transform File from Demo File Source", each #"Transform File from Demo File Source"([Content])),
#"Renamed Columns1" = Table.RenameColumns(#"Invoke Custom Function1", {"Name", "Source.Name"}),
#"Removed Other Columns1" = Table.SelectColumns(#"Renamed Columns1", {"Source.Name", "Transform File from Demo File Source"}),
#"Expanded Table Column1" = Table.ExpandTableColumn(#"Removed Other Columns1", "Transform File from Demo File Source", Table.ColumnNames(#"Transform File from Demo File Source"(#"Sample File"))),
#"Changed Type" = Table.TransformColumnTypes(#"Expanded Table Column1",{{"Source.Name", type text}, {"Date received", type date}, {"Product", type text}, {"Sub-product", type text}, {"Issue", type text}, {"Sub-issue", type text}, {"Consumer complaint narrative", type text}, {"Company public response", type text}, {"Company", type text}, {"State", type text}, {"ZIP code", type text}, {"Tags", type text}, {"Consumer consent provided?", type text}, {"Submitted via", type text}, {"Date sent to company", type date}, {"Company response to consumer", type text}, {"Timely response?", type text}, {"Consumer disputed?", type text}, {"Complaint ID", Int64.Type}})
in
#"Changed Type"

// Transform File from Demo File Source
let
Source = (#"Sample File Parameter1") => let
Source = Csv.Document(#"Sample File Parameter1",[Delimiter=",", Columns=18, Encoding=1252, QuoteStyle=QuoteStyle.Csv]),
#"Promoted Headers" = Table.PromoteHeaders(Source, [PromoteAllScalars=true])
in
#"Promoted Headers"
in
Source

// Sample File
let
Source = Folder.Files("C:\Demo File Source"),
#"Filtered Rows" = Table.SelectRows(Source, each Text.EndsWith([Name], ".csv")),
Navigation1 = #"Filtered Rows"{0}[Content]
in
Navigation1

Isn’t that sweet?

We can then paste each of these queries into the “Blank query” template in Power Query Online. Power Query Online is smart enough to automatically prompt for a gateway and credentials. The only edit we’ll need to make is to remove the one line that’s highlighted in red[3], and then… it just works.

2018-12-09_18-04-57

There’s certainly a little irony involved in using the phrase “it just works” after going through all of these steps… but it does work.

While Power BI dataflows and Power Query Online are in preview, there are likely to be some rough edges. But because dataflows are built on the same Power Query function language and connectivity platform, in many scenarios you can use dataflows for tasks that are not yet fully supported in the interface.

During preview there are still functions and data sources that aren’t yet supported in dataflows, but this technique can unlock data sources and capabilities that aren’t yet exposed through the UI. Check it out.


[1] As the rest of the blog makes clear, I have not yet tested all of these. If you try any of them and they don’t work, remember this: Getting Help for Power BI Dataflows.

[2] But mainly because I don’t have any of the other listed data sources handy to use.

[3] Because this is not yet supported in dataflows.

Using Custom Functions in Power BI Dataflows

You can use custom Power Query “M” functions in Power BI dataflows, even though they’re not exposed and supported in the preview Power Query Online editor to the same extent they are supported in Power BI Desktop.[1]

As mentioned in a recent post on Authoring Power BI Dataflows in Power BI Desktop, the Power Query “M” queries that define your dataflow entities can contain a lot more than what can be created in Power Query Online. One example of this is support for custom functions in a dataflow. Functions work the same way in dataflows as they work in Power Query Desktop – there’s just not the same UX support.

Let’s see how this works. Specifically, let’s build a dataflow that contains a custom function and which invokes it in one of the dataflow entities. Here’s what we’ll do:

  1. We’ll define a custom function that accepts start and end dates, and returns a table with one row for each day between these dates. Specifically, we’ll use the date dimension approach that Matt Masson published five years ago[2], when Power Query functions were new.
  2. We’ll pull in sales order data from the SalesOrderHeader table in the AdventureWorks sample database to be an “Order” entity in the dataflow.
  3. We’ll use the min and max of the various date columns in the SalesOrderHeader table to get the parameter values to pass into the custom function. We’ll then call the custom function to build a Date entity in the dataflow.
  4. We’ll close our eyes and imagine doing the rest of the work to load other entities in the dataflow to make what we’d need to build a full star schema in a Power BI dataset, but we won’t actually do the work.

Let’s go. Since we’re just copying the code from Matt’s blog, we’ll skip the code here, but the result in Power Query Online is worth looking at.

2018-12-08_12-58-57

Even though Power Query Online doesn’t have a dedicated “create function” option, it does recognize when a query is a function, and does include a familiar UX for working with a function. You will, however, need to clear the “Enable load” option for the query, since a function can’t be loaded directly.
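
If you’re wondering what a function query looks like under the hood, here’s a minimal sketch of a date table function in the same spirit – this is not Matt’s code, and the optional third parameter is included only to mirror the three-argument call used later in this post:

let
    fn_DateTable = (StartDate as date, EndDate as date, optional Culture as nullable text) as table =>
        let
            // One row per day between StartDate and EndDate, inclusive (Culture is unused in this sketch)
            DayCount = Duration.Days(EndDate - StartDate) + 1,
            DateList = List.Dates(StartDate, DayCount, #duration(1, 0, 0, 0)),
            AsTable = Table.FromList(DateList, Splitter.SplitByNothing(), {"Date"}),
            Typed = Table.TransformColumnTypes(AsTable, {{"Date", type date}}),
            // A couple of representative attribute columns
            AddYear = Table.AddColumn(Typed, "Year", each Date.Year([Date]), Int64.Type),
            AddMonth = Table.AddColumn(AddYear, "MonthOfYear", each Date.Month([Date]), Int64.Type)
        in
            AddMonth
in
    fn_DateTable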

The Order entity is super simple – we’re just pulling in a table and removing the complex columns that Power Query adds to represent related tables in the database. Here’s the script:

let
Source = Sql.Database("myserver.database.windows.net", "adventureworks"),
SalesLT_SalesOrderHeader = Source{[Schema = "SalesLT", Item = "SalesOrderHeader"]}[Data],
#"Removed columns" = Table.RemoveColumns(SalesLT_SalesOrderHeader, {"SalesLT.Address(BillToAddressID)", "SalesLT.Address(ShipToAddressID)", "SalesLT.Customer", "SalesLT.SalesOrderDetail"})
in
#"Removed columns"

Now we need to put the two of them together. Let’s begin by duplicating the Order entity. If we referenced the Order entity instead of duplicating it, we would end up with a computed entity, which would require Power BI Premium capacity to refresh.

This is what the query looks like before we invoke the custom function. With all of the awesome options on the “Add Column” tab in Power BI Desktop, implementing this logic was surprisingly easy.

let
Source = Sql.Database("myserver.database.windows.net", "adventureworks"),
SalesLT_SalesOrderHeader = Source{[Schema="SalesLT",Item="SalesOrderHeader"]}[Data],
#"Removed Other Columns" = Table.SelectColumns(SalesLT_SalesOrderHeader,{"OrderDate", "DueDate", "ShipDate"}),
#"Added Custom" = Table.AddColumn(#"Removed Other Columns", "Custom", each "Group by all"),
#"Grouped Rows" = Table.Group(#"Added Custom", {"Custom"}, {{"Min Order Date", each List.Min([OrderDate]), type datetime}, {"Max Order Date", each List.Max([OrderDate]), type datetime}, {"Min Due Date", each List.Min([DueDate]), type datetime}, {"Max Due Date", each List.Max([DueDate]), type datetime}, {"Min Ship Date", each List.Min([ShipDate]), type datetime}, {"Max Ship Date", each List.Max([ShipDate]), type datetime}}),
#"Changed Type" = Table.TransformColumnTypes(#"Grouped Rows",{{"Min Order Date", type date}, {"Max Order Date", type date}, {"Min Due Date", type date}, {"Max Due Date", type date}, {"Min Ship Date", type date}, {"Max Ship Date", type date}}),
#"Inserted Earliest" = Table.AddColumn(#"Changed Type", "Min Date", each List.Min({[Min Order Date], [Min Due Date], [Min Ship Date]}), type date),
#"Inserted Latest" = Table.AddColumn(#"Inserted Earliest", "Max Date", each List.Max({[Max Order Date], [Max Due Date], [Max Ship Date]}), type date),
#"Removed Other Columns1" = Table.SelectColumns(#"Inserted Latest",{"Min Date", "Max Date"})
in
#"Removed Other Columns1"

At the end of this interim query, we have two columns to pass in to the custom function. And once we do, it looks like this:

2018-12-08_13-48-40

And here’s the final script used to define the Date entity.

 
let
Source = Sql.Database("myserver.database.windows.net", "adventureworks"),
SalesLT_SalesOrderHeader = Source{[Schema="SalesLT",Item="SalesOrderHeader"]}[Data],
#"Removed Other Columns" = Table.SelectColumns(SalesLT_SalesOrderHeader,{"OrderDate", "DueDate", "ShipDate"}),
#"Added Custom" = Table.AddColumn(#"Removed Other Columns", "Custom", each "Group by all"),
#"Grouped Rows" = Table.Group(#"Added Custom", {"Custom"}, {{"Min Order Date", each List.Min([OrderDate]), type datetime}, {"Max Order Date", each List.Max([OrderDate]), type datetime}, {"Min Due Date", each List.Min([DueDate]), type datetime}, {"Max Due Date", each List.Max([DueDate]), type datetime}, {"Min Ship Date", each List.Min([ShipDate]), type datetime}, {"Max Ship Date", each List.Max([ShipDate]), type datetime}}),
#"Changed Type" = Table.TransformColumnTypes(#"Grouped Rows",{{"Min Order Date", type date}, {"Max Order Date", type date}, {"Min Due Date", type date}, {"Max Due Date", type date}, {"Min Ship Date", type date}, {"Max Ship Date", type date}}),
#"Inserted Earliest" = Table.AddColumn(#"Changed Type", "Min Date", each List.Min({[Min Order Date], [Min Due Date], [Min Ship Date]}), type date),
#"Inserted Latest" = Table.AddColumn(#"Inserted Earliest", "Max Date", each List.Max({[Max Order Date], [Max Due Date], [Max Ship Date]}), type date),
#"Removed Other Columns1" = Table.SelectColumns(#"Inserted Latest",{"Min Date", "Max Date"}),
#"Invoked Custom Function" = Table.AddColumn(#"Removed Other Columns1", "fn_DateTable", each fn_DateTable([Min Date], [Max Date], null)),
fn_DateTable1 = #"Invoked Custom Function"{0}[fn_DateTable],
#"Changed Type1" = Table.TransformColumnTypes(fn_DateTable1,{{"Year", Int64.Type}, {"QuarterOfYear", Int64.Type}, {"MonthOfYear", Int64.Type}, {"DayOfMonth", Int64.Type}, {"DateInt", Int64.Type}, {"DayInWeek", Int64.Type}, {"MonthName", type text}, {"MonthInCalendar", type text}, {"QuarterInCalendar", type text}, {"DayOfWeekName", type text}})
in
#"Changed Type1"

 

Most of the complexity in this approach is in the work required to get min and max values from three columns in a single table. The topic of the post – calling a custom function inside a dataflow entity definition – is trivial.

When we’re done, the list of entities only shows Order and Date, because these are the only two queries that are being loaded into the dataflow’s CDM folder storage. But the definition of the Date query includes the use of a custom function, which allows us to have rich and possibly complex functionality included in the dataflow code, and referenced by one or more entities as necessary.


[1] I was inspired to write this post when I saw this idea on ideas.powerbi.com. If this capability is obscure enough to get its own feature request and over a dozen votes, it probably justifies a blog post.

[2] And the same code, copied and pasted, like real developers do.