Recipe: Spicy Cheese Sauce

I’m on vacation for the holidays, and plan to spend much of the next few weeks in the kitchen. This will result in more posts about food and fewer posts about Power BI. You’ve been warned.

This is my favorite cheese sauce. The recipe is adapted from one originally published by Modernist Cuisine, and made much better by the application of chilies.

Please note that the recipe uses proportions by weight, rather than specific amounts. You can easily scale the recipe up or down so long as the proportion is the same, but you will need an accurate kitchen scale for this one.

Ingredients

  • 100% extra sharp orange cheddar cheese
  • 94% jalapeno juice
  • 4% sodium citrate

Procedure

  • Juice 10-50 jalapeno chili peppers and weigh the juice.
  • Grate an equal weight of cheddar cheese.
  • In a saucepan over medium-high heat, combine the juice and sodium citrate and bring to a boil.
  • Gradually add grated cheddar to the liquid, blending with an immersion blender to incorporate each batch of cheese before adding more.
  • That’s it. It doesn’t get much simpler than this.

Storage

Store in an airtight container and refrigerate.

Applications

  • Dip tortilla chips into the cheese sauce and eat them. Eat all of them!!
  • Use on nachos.
  • Use on chili cheese fries, with homemade french fries and green chili sauce from El Patio De Albuquerque.

Notes

  • If you really love spicy food, you may be tempted to use habaneros or another spicier chili to start with. Don’t do that. Start with jalapenos and scale the heat up as appropriate by adding a few spicier chilies to the mix. Trust me on this one.
  • Alternatively, you can scale down the heat by replacing some of the liquid with water. Do what is right for you.
  • So long as you keep the same proportions, you can use any liquid and any cheese you want. The possibilities are limitless.
  • You can also use jalapeno juice in other recipes where you want to add flavor and heat. My other favorite application is in yeast breads, where you can replace the water with chili juice. I’ve made this recipe multiple times with great results using this technique.

Recipe: Sea Salt Caramels

This is my favorite caramel. The recipe is adapted from one originally published in Dessert Professional Magazine. As with any caramel, you want to be careful with this one – don’t take your eyes off hot caramel!

[Photo: sea salt caramels]

Ingredients

  • 123 grams glucose syrup
  • 821 grams granulated sugar
  • 10 grams Fleur de Sel sea salt
  • 657 grams unsalted butter
  • 388 grams heavy cream

Procedure

  • Line a 9×13″ baking/casserole dish with parchment paper. Have the prepared dish ready by the stove.
  • Combine the glucose, sugar and salt in a large pot over medium-high heat and cook until it reaches 293 F, stirring frequently.
  • Meanwhile, combine the butter and cream in a medium pot and bring to a boil.
  • Once the sugar mixture reaches 293 F, slowly and carefully pour the cream and butter into the pot with the sugar, whisking constantly.
  • Bring the mixture back up to 122 C / 252 F, stirring constantly.
  • Pour carefully into the prepared dish.
  • Cool the caramel to room temperature, 3-4 hours or overnight.
  • Cut the cooled caramels into squares.

Storage

Store in an airtight container at room temperature.

Vacuum seal and freeze indefinitely.

Applications

  • Eat them as is
  • Enrobe with tempered chocolate, and top with a few flakes of sea salt
  • Bake peanut butter cookies with a small thumbprint depression in their tops, and fill the depression with a generous lump of caramel

What is Your Gift?

I have a theory that everyone has a gift. Not a gift that they have been given, but a gift that they can give to others.


Your gift is something you’re good at – something that enables you to expend minimal effort to achieve a disproportionately large benefit in return.

If you’re an IT professional, you’ve probably encountered situations where you could spend a few minutes configuring a loved one’s hardware or software. For you, this was a trivial task, done in minutes or seconds without any real effort, but for your father or aunt it would have been a stressful ordeal, with no guarantee of success.

Regardless of the context, there’s something that you’re better at. Cooking, baking, drawing, painting, photography, singing, rhyming, writing, carpentry, plumbing… something. As you read this list, hopefully something leaped to mind. For me, this list includes a few things where I have serious skills, and a bunch of things where I feel hopelessly challenged[1].

Once you recognize and acknowledge your gift, you just need to keep your eyes open for opportunities to give it. Each day, look for the places where you could invest a few minutes to spare someone a few hours. Look for the chance to invest an hour to save someone days or weeks. And once you identify the opportunities, choose which of them to act upon. It’s not your responsibility to solve every problem you can, but your gift is only valuable when you share it.

What is your gift?

Who will you share it with?


[1] One day I’ll encounter a situation where someone desperately needs a skilled swordsman. But it is not likely to be this day.

Power BI Dataflows and Additional Data Sources

Important: This post was written and published in 2018, and the content below no longer represents the current capabilities of Power BI. Please consider this post to be an historical record and not a technical resource. All content on this site is the personal output of the author and not an official resource from Microsoft.

A few recent posts have touched on different aspects of how dataflows have capabilities that aren’t exposed in the current preview user experience in Power Query Online. This is true of Power Query “M” functions and custom functions, and it is also true of data sources.

Not every data source that is supported in Power BI Desktop is supported today in dataflows, but these data sources should work[1]:

  • SAP Business Warehouse
  • SAP HANA
  • Azure Analysis Services
  • Google Analytics
  • Adobe Analytics
  • ODBC
  • OLE DB
  • Folder
  • SharePoint Online folder
  • SharePoint folder
  • Hadoop HDFS
  • Azure HDInsight (HDFS)
  • Hadoop file HDFS
  • Informix (beta)


For any of these data sources, you should be able to build a query in Power BI Desktop, copy it and paste it into Power Query Online and have it work, even though the data sources are not yet listed in the UX.
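
For example, here’s a hypothetical sketch of the kind of script you might copy from Power BI Desktop for the SharePoint folder source – the site URL is made up, but the pattern is the same for the other connectors listed above:

let
    // Connect to a (made-up) SharePoint site and list the files it contains
    Source = SharePoint.Files("https://contoso.sharepoint.com/sites/Sales", [ApiVersion = 15]),
    // Keep only the CSV files
    #"Filtered Rows" = Table.SelectRows(Source, each Text.EndsWith([Name], ".csv"))
in
    #"Filtered Rows"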

Let’s look at an example using the Folder source, both because the Folder data source is not yet supported in the Power Query Online user experience, and because of how it relates to yesterday’s post on using custom functions[2]. We’ll begin by choosing a folder that contains our data files.


Once we’ve connected to the folder, we’ll edit the query to load the data from all CSV files that the folder contains, and then combine the contents of the files.


Once we’re done, Power Query in Power BI Desktop will give us this:

[Screenshot: the combined query results in Power Query in Power BI Desktop]

There are three points worth mentioning in this image:

  1. The data itself – this is probably what grabs your attention, because it’s big and data
  2. The set of queries built by Power BI Desktop includes multiple queries and a custom function
  3. The main query loaded into the PBIX data model references the other query and the custom function

Now let’s take it all from Power BI Desktop and use it in a dataflow. As covered in the post on authoring Power BI dataflows in Power BI Desktop, right-clicking on the query and choosing “Copy” will copy all of the script we need. Like this:

// Demo File Source
let
    Source = Folder.Files("C:\Demo File Source"),
    #"Filtered Rows" = Table.SelectRows(Source, each Text.EndsWith([Name], ".csv")),
    #"Filtered Hidden Files1" = Table.SelectRows(#"Filtered Rows", each [Attributes]?[Hidden]? <> true),
    #"Invoke Custom Function1" = Table.AddColumn(#"Filtered Hidden Files1", "Transform File from Demo File Source", each #"Transform File from Demo File Source"([Content])),
    #"Renamed Columns1" = Table.RenameColumns(#"Invoke Custom Function1", {"Name", "Source.Name"}),
    #"Removed Other Columns1" = Table.SelectColumns(#"Renamed Columns1", {"Source.Name", "Transform File from Demo File Source"}),
    #"Expanded Table Column1" = Table.ExpandTableColumn(#"Removed Other Columns1", "Transform File from Demo File Source", Table.ColumnNames(#"Transform File from Demo File Source"(#"Sample File"))),
    #"Changed Type" = Table.TransformColumnTypes(#"Expanded Table Column1",{{"Source.Name", type text}, {"Date received", type date}, {"Product", type text}, {"Sub-product", type text}, {"Issue", type text}, {"Sub-issue", type text}, {"Consumer complaint narrative", type text}, {"Company public response", type text}, {"Company", type text}, {"State", type text}, {"ZIP code", type text}, {"Tags", type text}, {"Consumer consent provided?", type text}, {"Submitted via", type text}, {"Date sent to company", type date}, {"Company response to consumer", type text}, {"Timely response?", type text}, {"Consumer disputed?", type text}, {"Complaint ID", Int64.Type}})
in
    #"Changed Type"

// Transform File from Demo File Source
let
    Source = (#"Sample File Parameter1") => let
        Source = Csv.Document(#"Sample File Parameter1",[Delimiter=",", Columns=18, Encoding=1252, QuoteStyle=QuoteStyle.Csv]),
        #"Promoted Headers" = Table.PromoteHeaders(Source, [PromoteAllScalars=true])
    in
        #"Promoted Headers"
in
    Source

// Sample File
let
    Source = Folder.Files("C:\Demo File Source"),
    #"Filtered Rows" = Table.SelectRows(Source, each Text.EndsWith([Name], ".csv")),
    Navigation1 = #"Filtered Rows"{0}[Content]
in
    Navigation1

Isn’t that sweet?

We can then paste each of these queries into the “Blank query” template in Power Query Online. Power Query Online is smart enough to automatically prompt for a gateway and credentials. The only edit we’ll need to make is to remove the one line that’s highlighted in red[3], and then… it just works.

[Screenshot: the pasted queries in Power Query Online]

There’s certainly a little irony involved in using the phrase “it just works” after going through all of these steps… but it does work.

While Power BI dataflows and Power Query Online are in preview, there are likely to be some rough edges. But because dataflows are built on the same Power Query function language and connectivity platform, in many scenarios you can use dataflows for tasks that are not yet fully supported in the interface.

During preview there are still functions and data sources that aren’t yet supported in dataflows, but this technique can unlock data sources and capabilities that aren’t yet exposed through the UI. Check it out.


[1] As the rest of the blog makes clear, I have not yet tested all of these. If you try any of them and they don’t work, remember this: Getting Help for Power BI Dataflows.

[2] But mainly because I don’t have any of the other listed data sources handy to use.

[3] Because this is not yet supported in dataflows.

Using Custom Functions in Power BI Dataflows

Important: This post was written and published in 2018, and the content below no longer represents the current capabilities of Power BI. Please consider this post to be an historical record and not a technical resource. All content on this site is the personal output of the author and not an official resource from Microsoft.

You can use custom Power Query “M” functions in Power BI dataflows, even though they’re not exposed and supported in the preview Power Query Online editor to the same extent they are supported in Power BI Desktop.[1]

As mentioned in a recent post on Authoring Power BI Dataflows in Power BI Desktop, the Power Query “M” queries that define your dataflow entities can contain a lot more than what can be created in Power Query Online. One example of this is support for custom functions in a dataflow. Functions work the same way in dataflows as they do in Power Query in Power BI Desktop – there’s just not the same UX support.

Let’s see how this works. Specifically, let’s build a dataflow that contains a custom function and which invokes it in one of the dataflow entities. Here’s what we’ll do:

  1. We’ll define a custom function that accepts start and end dates, and returns a table with one row for each day between these dates. Specifically, we’ll use the date dimension approach that Matt Masson published five years ago[2], when Power Query functions were new.
  2. We’ll pull in sales order data from the SalesOrderHeader table in the AdventureWorks sample database to be an “Order” entity in the dataflow.
  3. We’ll use the min and max of the various date columns in the SalesOrderHeader table to get the parameter values to pass into the custom function. We’ll then call the custom function to build a Date entity in the dataflow.
  4. We’ll close our eyes and imagine doing the rest of the work to load the other entities in the dataflow that we’d need to build a full star schema in a Power BI dataset, but we won’t actually do the work.

Let’s go. Since we’re just copying the code from Matt‘s blog, we’ll skip the code here, but the result in Power Query Online is worth looking at.

[Screenshot: the custom date table function in Power Query Online]

Even though Power Query Online doesn’t have a dedicated “create function” option, it does recognize when a query is a function, and does include a familiar UX for working with a function. You will, however, need to clear the “Enable load” option for the query, since a function can’t be loaded directly.
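
We’re skipping Matt’s code as promised, but for context, here’s an abbreviated, hypothetical sketch of the general shape of a date table function like this one, as it would be entered into a blank query. This is not the code from Matt’s post (which adds many more columns) – just enough to show the pattern. In this walkthrough the function query is named fn_DateTable, which is the name used in the entity script later in the post:

(StartDate as date, EndDate as date, optional Culture as nullable text) as table =>
let
    // Number of days between the two dates, inclusive
    DayCount = Duration.Days(EndDate - StartDate) + 1,
    // One list item per day in the range
    DateList = List.Dates(StartDate, DayCount, #duration(1, 0, 0, 0)),
    AsTable = Table.FromList(DateList, Splitter.SplitByNothing(), {"Date"}, null, ExtraValues.Error),
    ChangedType = Table.TransformColumnTypes(AsTable, {{"Date", type date}}),
    // A few representative columns – the full function adds many more
    AddYear = Table.AddColumn(ChangedType, "Year", each Date.Year([Date]), Int64.Type),
    AddMonth = Table.AddColumn(AddYear, "MonthOfYear", each Date.Month([Date]), Int64.Type),
    AddMonthName = Table.AddColumn(AddMonth, "MonthName", each Date.ToText([Date], "MMMM", Culture), type text)
in
    AddMonthName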

The Order entity is super simple – we’re just pulling in a table and removing the complex columns that Power Query adds to represent related tables in the database. Here’s the script:

let
    Source = Sql.Database("myserver.database.windows.net", "adventureworks"),
    SalesLT_SalesOrderHeader = Source{[Schema = "SalesLT", Item = "SalesOrderHeader"]}[Data],
    #"Removed columns" = Table.RemoveColumns(SalesLT_SalesOrderHeader, {"SalesLT.Address(BillToAddressID)", "SalesLT.Address(ShipToAddressID)", "SalesLT.Customer", "SalesLT.SalesOrderDetail"})
in
    #"Removed columns"

Now we need to put the two of them together. Let’s begin by duplicating the Order entity. If we referenced the Order entity instead of duplicating it, we would end up with a computed entity, which would require Power BI Premium capacity to refresh.
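
For contrast, here’s a minimal, hypothetical sketch of what a reference-based version of this query would look like – it isn’t part of this dataflow, but if it were, the reference to the Order entity is what would make it a computed entity:

let
    // Reference the Order entity by name instead of repeating its source query.
    // In a dataflow, this reference is what turns the query into a computed entity.
    Source = Order,
    #"Removed Other Columns" = Table.SelectColumns(Source, {"OrderDate", "DueDate", "ShipDate"})
in
    #"Removed Other Columns"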

This is what the query looks like before we invoke the custom function. With all of the awesome options on the “Add Column” tab in Power BI Desktop, implementing this logic was surprisingly easy.

let
    Source = Sql.Database("myserver.database.windows.net", "adventureworks"),
    SalesLT_SalesOrderHeader = Source{[Schema="SalesLT",Item="SalesOrderHeader"]}[Data],
    #"Removed Other Columns" = Table.SelectColumns(SalesLT_SalesOrderHeader,{"OrderDate", "DueDate", "ShipDate"}),
    #"Added Custom" = Table.AddColumn(#"Removed Other Columns", "Custom", each "Group by all"),
    #"Grouped Rows" = Table.Group(#"Added Custom", {"Custom"}, {{"Min Order Date", each List.Min([OrderDate]), type datetime}, {"Max Order Date", each List.Max([OrderDate]), type datetime}, {"Min Due Date", each List.Min([DueDate]), type datetime}, {"Max Due Date", each List.Max([DueDate]), type datetime}, {"Min Ship Date", each List.Min([ShipDate]), type datetime}, {"Max Ship Date", each List.Max([ShipDate]), type datetime}}),
    #"Changed Type" = Table.TransformColumnTypes(#"Grouped Rows",{{"Min Order Date", type date}, {"Max Order Date", type date}, {"Min Due Date", type date}, {"Max Due Date", type date}, {"Min Ship Date", type date}, {"Max Ship Date", type date}}),
    #"Inserted Earliest" = Table.AddColumn(#"Changed Type", "Min Date", each List.Min({[Min Order Date], [Min Due Date], [Min Ship Date]}), type date),
    #"Inserted Latest" = Table.AddColumn(#"Inserted Earliest", "Max Date", each List.Max({[Max Order Date], [Max Due Date], [Max Ship Date]}), type date),
    #"Removed Other Columns1" = Table.SelectColumns(#"Inserted Latest",{"Min Date", "Max Date"})
in
    #"Removed Other Columns1"

At the end of this interim query, we have two columns to pass in to the custom function. And once we do, it looks like this:

[Screenshot: the query results after invoking the custom function]

And here’s the final script used to define the Date entity.

let
    Source = Sql.Database("myserver.database.windows.net", "adventureworks"),
    SalesLT_SalesOrderHeader = Source{[Schema="SalesLT",Item="SalesOrderHeader"]}[Data],
    #"Removed Other Columns" = Table.SelectColumns(SalesLT_SalesOrderHeader,{"OrderDate", "DueDate", "ShipDate"}),
    #"Added Custom" = Table.AddColumn(#"Removed Other Columns", "Custom", each "Group by all"),
    #"Grouped Rows" = Table.Group(#"Added Custom", {"Custom"}, {{"Min Order Date", each List.Min([OrderDate]), type datetime}, {"Max Order Date", each List.Max([OrderDate]), type datetime}, {"Min Due Date", each List.Min([DueDate]), type datetime}, {"Max Due Date", each List.Max([DueDate]), type datetime}, {"Min Ship Date", each List.Min([ShipDate]), type datetime}, {"Max Ship Date", each List.Max([ShipDate]), type datetime}}),
    #"Changed Type" = Table.TransformColumnTypes(#"Grouped Rows",{{"Min Order Date", type date}, {"Max Order Date", type date}, {"Min Due Date", type date}, {"Max Due Date", type date}, {"Min Ship Date", type date}, {"Max Ship Date", type date}}),
    #"Inserted Earliest" = Table.AddColumn(#"Changed Type", "Min Date", each List.Min({[Min Order Date], [Min Due Date], [Min Ship Date]}), type date),
    #"Inserted Latest" = Table.AddColumn(#"Inserted Earliest", "Max Date", each List.Max({[Max Order Date], [Max Due Date], [Max Ship Date]}), type date),
    #"Removed Other Columns1" = Table.SelectColumns(#"Inserted Latest",{"Min Date", "Max Date"}),
    #"Invoked Custom Function" = Table.AddColumn(#"Removed Other Columns1", "fn_DateTable", each fn_DateTable([Min Date], [Max Date], null)),
    fn_DateTable1 = #"Invoked Custom Function"{0}[fn_DateTable],
    #"Changed Type1" = Table.TransformColumnTypes(fn_DateTable1,{{"Year", Int64.Type}, {"QuarterOfYear", Int64.Type}, {"MonthOfYear", Int64.Type}, {"DayOfMonth", Int64.Type}, {"DateInt", Int64.Type}, {"DayInWeek", Int64.Type}, {"MonthName", type text}, {"MonthInCalendar", type text}, {"QuarterInCalendar", type text}, {"DayOfWeekName", type text}})
in
    #"Changed Type1"


Most of the complexity in this approach is in the work required to get min and max values from three columns in a single table. The topic of the post – calling a custom function inside a dataflow entity definition – is trivial.

When we’re done, the list of entities only shows Order and Date, because these are the only two queries that are being loaded into the dataflow’s CDM folder storage. But the definition of the Date query includes the use of a custom function, which allows us to have rich and possibly complex functionality included in the dataflow code, and referenced by one or more entities as necessary.


[1] I was inspired to write this post when I saw this idea on ideas.powerbi.com. If this capability is obscure enough to get its own feature request and over a dozen votes, it probably justifies a blog post.

[2] And the same code, copied and pasted, like real developers do.

Dataflows in Power BI: Overview Part 8 – Using an Organizational Azure Data Lake Resource

Important: This post was written and published in 2018, and the content below no longer represents the current capabilities of Power BI. Please consider this post to be an historical record and not a technical resource. All content on this site is the personal output of the author and not an official resource from Microsoft.

One key aspect of Power BI dataflows is that they store their data in CDM folders in Azure Data Lake Storage gen2.[1] When a dataflow is refreshed, the queries that define the dataflow entities are executed, and their results are stored in the underlying CDM folders in the data lake storage that’s managed by the Power BI service.

By default the Power BI service hides the details of the underlying storage. Only the Power BI service can write to the CDM folders, and only the Power BI service can read from them.

NARRATOR:

But Matthew knew that there are other options beyond the default…

Please note: At the time this post is published, the capabilities it describes are being rolled out to Power BI customers around the world. If you do not yet see these capabilities in your Power BI tenant, please understand that the deployment process may take several days to reach all regions.

In addition to writing to the data lake storage that is included with Power BI, you can also configure Power BI to write to an Azure Data Lake Storage gen2 resource in your own Azure subscription. This configuration opens up powerful capabilities for using data created in Power BI as the source for other Azure services. This means that data produced by analysts in a low-code/no-code Power BI experience can be used by data scientists in Azure Machine Learning, or by data engineers in Azure Data Factory or Azure Databricks.

Let that sink in for a minute, because it’s more important than it seemed when you first read it. Business data experts – the people who may not know professional data tools and advanced concepts in depth, but who are intimately involved with how the data is used to support business processes – can now use Power BI to produce data sets that can be easily used by data professionals in their tools of choice. This is a Big Deal. Not only does this capability deliver the power of Azure Data Lake Storage gen2 for scale and compute, it enables seamless collaboration between business and IT.

The challenge of operationalization/industrialization – a challenge that has been part of self-service BI for as long as self-service BI has been around – has typically been solved by the business handing off to IT the solution they created. Ten years ago the artifact being handed off may have been an Excel workbook full of macros and VLOOKUPs. IT would then need to reverse-engineer and re-implement the logic to reproduce it in a different tool and a different language. Power Query and dataflows have made this story simpler – an analyst can develop a query that can be re-used directly by IT. But now an analyst can easily produce data that can be used – directly and seamlessly – by IT projects. Bam.

Before I move on, let me add a quick sanity check here. You can’t build a production data integration process on non-production data sources and expect it to deliver a stable and reliable solution, and that last paragraph glossed over this fact. When IT starts using a business-developed CDM folder as a data source, this needs to happen in the context of a managed process that eventually includes the ownership of the data source transitioning to IT. The integration of Power BI dataflows and CDM folders in Azure Data Lake Storage gen2 will make this process much simpler, but the process will still be essential.

Now let’s take a look at how this works.

I’m not going to go into details about the data lake configuration requirements here – but there are specific steps that need to be taken on the Azure side of things before Power BI can write to the lake. For information on setting up Azure Data Lake Storage gen2 to work with Power BI, check the documentation.

The details are in the documentation, but once the setup is complete, there will be a filesystem[2] named powerbi, and the Power BI service will be authorized to read from it and write to it. As the Power BI service refreshes dataflows, it writes entity data in a folder structure that matches the content structure in Power BI. This approach – with folders named after workspaces, dataflows, and entities, and files named after entities – makes it easier for all parties to understand what data is stored where, and how the file storage in the data lake relates to the objects in Power BI.
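
If you’re curious about what this folder structure looks like in your own lake, one way to browse it is with the ADLS Gen2 connector in Power Query. This is just a hypothetical sketch – the storage account name is made up, it assumes your account has read access to the powerbi filesystem, and the exact columns returned by the connector may vary:

let
    // Browse the powerbi filesystem in a (made-up) storage account
    Source = AzureStorage.DataLake("https://contosolake.dfs.core.windows.net/powerbi"),
    // Each dataflow's CDM folder includes a model.json metadata file
    #"Model Files" = Table.SelectRows(Source, each Text.EndsWith([Name], "model.json"))
in
    #"Model Files"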

To enable this feature, a Power BI administrator first needs to use the Power BI admin portal to connect Power BI to Azure Data Lake Storage gen2. This is a tenant-level setting. The administrator must enter the Subscription ID, the Resource Group ID, and the Storage Account name for the Azure Data Lake Storage gen2 resource that Power BI will use, and then turn on the option labeled “Allow workspace admins to assign workspaces to this storage account.” Once this is turned on, we’re ready to go.

And of course, by “we” I mean “workspace admins” and by “go” I mean “configure our workspaces’ storage settings.”

When creating a new app workspace, in the “Advanced” portion of the UI, you can see the “Dataflow storage (Preview)” option. When this option is enabled, any dataflow in the workspace will be created in the ADLSg2 resource configured by the Power BI admin, rather than in the default internal ADLSg2 storage that is managed by the Power BI service.

[Screenshot: workspace creation settings showing the “Dataflow storage (Preview)” option]

There are a few things worth mentioning about this screen shot:

  1. This is not a Premium-only feature. Although the example above shows a workspace being created in dedicated Premium capacity, this is not required to use your own data lake storage account.
  2. If no Power BI administrator has configured an organizational data lake storage account, this option will not be visible.
  3. Apparently I need to go back and fix every blog post I’ve made up until now to replace “gen2” with “Gen2” because we’re using an upper-case G now.

There are a few limitations mentioned in the screen shot, and a few that aren’t, that are worth pointing out as well:

  1. Because linked and computed entities use in-lake compute, you need to be using the same lake for them to work.
  2. You can’t change this setting for a workspace that already has dataflows in it. This option is always available when creating a new workspace, and will also be available in existing workspaces without dataflows, but if you have defined dataflows in a workspace you cannot change its storage location.
  3. Permissions… get a little complicated.

…so let’s look at permissions a little[3].

When you’re using the default Power BI storage, the Power BI service manages data access through the workspace permissions. The Power BI service is the only reader and the only writer for the underlying CDM folders, and the Power BI service controls any access to the data the CDM folders contain.

When you’re using your organization’s data lake resource, ADLSg2 manages data access through the ACLs set on the folders and files. The Power BI service will grant permissions to the dataflow creator, but any additional permissions must be manually set on the files and folders in ADLSg2[4]. This means that for any user to access the dataflow through Power BI or the CDM folder through ADLSg2, they need to be granted permissions on all files and folders in ADLSg2.

Between the ability to store dataflow data in your organization’s Azure Data Lake Storage gen2 resource, and the ability to attach external CDM folders as dataflows, Power BI now enables a wide range of collaboration scenarios.


[1] This time I just copied the opening sentence from the last blog post. Since I was writing them at the same time, that was much easier.

[2] Basically a root folder, says the guy who doesn’t really know much about Azure Data Lake Storage gen2.

[3] I’m planning a post dedicated to dataflows security, but it’s not ready yet. Hopefully this will be useful in the interim.

[4] This early experience will improve as the integration between Power BI and ADLSg2 continues to evolve.

More Resources: Power BI Dataflows and Azure Data

Important: This post was written and published in 2018, and the content below no longer represents the current capabilities of Power BI. Please consider this post to be an historical record and not a technical resource. All content on this site is the personal output of the author and not an official resource from Microsoft.

I’m not the only one who’s been busy sharing news and content this weekend about the integration of Power BI dataflows and Azure data services. Check out these additional resources and share the news.

  • Power BI Blog: This is the main Power BI announcement for the availability of Power BI dataflows integration with Azure Data Lake Storage Gen2.
  • Azure SQL Data Warehouse Blog: This is the main Azure announcement for the new integration capabilities, with lots of links to additional information for data professionals.
  • End-to-end CDM Tutorial on GitHub: This is the big one! Microsoft has published an end-to-end tutorial that includes Azure Data Factory, Azure Databricks, Azure SQL Data Warehouse, Azure SQL Database, and Azure Machine Learning.
  • CDM Documentation for ADLSg2: This is the official documentation for the Common Data Model including the model.json metadata file created for Power BI dataflows.

If you’re as excited as I am about today’s announcements, you’ll want to take the time to read all of these posts and to work through the tutorial as well. And probably do a happy dance of some sort.

Dataflows in Power BI: Overview Part 7 – External CDM Folders

Important: This post was written and published in 2018, and the content below no longer represents the current capabilities of Power BI. Please consider this post to be an historical record and not a technical resource. All content on this site is the personal output of the author and not an official resource from Microsoft.

One key aspect of Power BI dataflows is that they store their data in CDM Folders in Azure Data Lake Storage gen2.[1] When a dataflow is refreshed, the queries that define the dataflow entities are executed, and their results are stored in the underlying CDM Folders in the data lake.

By default the Power BI service hides the details of the underlying storage. Only the Power BI service can write to the CDM folders, and only the Power BI service can read from them.

NARRATOR:

But Matthew knew that there are other options beyond the default…

Because the CDM folder format is an open standard, any service or application can create them. A CDM folder can be produced by Azure Data Factory, Azure Databricks, or any other service that can output text and JSON files. Once the CDM folder exists, we just need to let Power BI know that it’s there.

Like this.

When creating a new dataflow, select the “Attach an external CDM folder” option. If you don’t see the “Attach an external CDM folder” and “Link entities from other dataflows” options, the most likely reason is that you’re not using a new “v2” workspace. These capabilities are available only in the new workspaces, which are currently also in preview.

[Screenshot: the “Attach an external CDM folder” option when creating a new dataflow]

You’ll then be prompted to provide the same metadata you would enter when saving a standard Power BI dataflow (required name and optional description) and also to enter the path to the CDM folder in Azure Data Lake Storage gen2.

Just as you need permissions to access your data sources when building a dataflow in Power BI, you also need permission on the CDM folder in Azure Data Lake in order to attach the CDM folder as an external dataflow.

[Screenshot: attaching an external CDM folder]

And that’s it!

The other steps that would normally be required to build a new dataflow are not required when attaching an external CDM folder. You aren’t building queries to define the entities, because a service other than Power BI will be writing the data in the CDM folder.

Once this is done, users can work with this external CDM folder as if it were a standard Power BI dataflow. An analyst working with this data in Power BI Desktop will likely never know (or care) that the data came from somewhere outside of Power BI. All that they will notice is that the data source is easy to discover and use, because it is a dataflow.
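
To make that concrete, here’s a hypothetical sketch of the kind of navigation query Power BI Desktop builds when connecting to a dataflow – the GUIDs and entity name are placeholders, and the exact navigation keys the connector generates may differ:

let
    // Connect to Power BI dataflows using the current user's credentials
    Source = PowerBI.Dataflows(null),
    // Navigate to the workspace, dataflow, and entity (placeholder IDs and name)
    Workspace = Source{[workspaceId = "00000000-0000-0000-0000-000000000000"]}[Data],
    Dataflow = Workspace{[dataflowId = "00000000-0000-0000-0000-000000000000"]}[Data],
    Entity = Dataflow{[entity = "Sales"]}[Data]
in
    Entity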

One potential complication[2] is that Power BI Desktop users must be granted permissions both in Power BI and in Azure Data Lake in order to successfully consume the data. In Power BI, the user must be a member of the workspace that contains the dataflow. If this is not the case, the user will not see the workspace in the list of workspaces when connecting to Power BI dataflows in Power BI Desktop. In Azure Data Lake, the user must be granted read permissions on the CDM folder and the files it contains. If this is not the case, the user will receive an error when attempting to connect to the dataflow.

One additional consideration to keep in mind is that linked entities are not supported when referencing dataflows created from external CDM folders. This shouldn’t be a surprise given how linked entities work, but it’s important to mention nonetheless.

Now that we’ve seen how to set up external folders, let’s look at why we should care. What scenarios does this feature enable? The biggest scenario for me is the ability to seamlessly bridge the worlds of self-service and centralized data, at the asset level.

Enabling business users to work with IT-created data obviously isn’t a new thing – this is the heart of many “managed self-service” approaches to BI. But typically this involves a major development effort, and it involves the sharing of complete models. IT builds data warehouses and cubes, and then educates business users on how to find the data and connect to it. But with external CDM folders, any data set created by a data professional in Azure can be exposed in Power BI without any additional IT effort. The fact that the data is in CDM folder format is enough. Once the CDM folder is attached in Power BI, any authorized user can easily discover and consume the data from directly within Power BI Desktop. And rather than sharing complete models, this approach enables the sharing of more granular reusable building blocks that can be used in multiple models and scenarios.

There doesn’t even need to be a multi-team or multi-persona data sharing aspect to the scenario. If a data engineer or data scientist is creating CDM folders in Azure, she may need to visualize that data, and Power BI is an obvious choice. Although data science tools typically have their own visualization capabilities, their options for distributing insights based on those visuals tend to fall short of what Power BI delivers. For data that is in CDM folders in Azure Data Lake Storage gen2, any data producer in Azure has a seamless way to expose and share that data through Power BI.

And of course, there are certainly many possibilities that I haven’t even thought of. I can’t wait to hear what you imagine!

Please also check out the blog post from Ben Sack on the dataflows team, because he goes into some details I do not.


[1] If you click through to read the CDM folders post you’ll see that I used almost exactly the same opening sentence, even though I hadn’t read that post since I wrote it over a month ago. That’s just weird.

[2] At least during the preview. I plan on going into greater depth on dataflows security in a future post, and you should expect to see things get simpler and easier while this feature is in preview.

Recipe: Candied Orange Peels

This is my favorite holiday treat. The recipe is adapted from one originally published in the mid-90s in Chocolatier Magazine. I’ve made it scores of times, and the results are always wonderful. For me and my family, the smell of candied orange peels is the smell of Christmas[1].

[Photo: candied orange peels]

Ingredients

  • 12 fresh ripe navel oranges
  • 1 gallon water (3,785 ml)
  • 1 tablespoon salt (18 g)
  • 4 pounds granulated sugar (1.8 kg)
  • 1 quart water (946 ml)

Procedure

Phase 1:

  • Combine one gallon of water and one tablespoon salt, and stir to combine
  • Using a fork, thoroughly perforate the peels of each orange, stabbing through the peel into the flesh
  • Using a sharp knife, divide each orange into quarters
  • Using your thumb, remove the peel from the flesh of each orange
  • Place the peels in the salt water, and use the flesh for some other purpose
  • Cover the bowl with plastic wrap to prevent the peels from rising above the surface of the salt water, and allow to rest for 8 hours or overnight

Phase 2:

  • Combine four pounds sugar with one quart water in a large pot over high heat and bring to a boil, stirring occasionally
  • Drain the orange peels
  • Add the orange peels to the sugar syrup and bring the syrup back to a boil
  • Reduce the heat to medium and simmer for 45 minutes, stirring frequently
  • Remove the syrup and peels from the heat, and allow to cool to room temperature
  • Cover the syrup with plastic wrap and allow to rest for 8 hours or overnight

Phase 3:

  • Remove the peels from the syrup and place on baking racks over baking sheets
  • Allow the peels to rest for 24-48 hours or until the excess syrup has drained

Storage

The peels will keep refrigerated for weeks, and will keep indefinitely in the freezer, although it is rare that they last very long at all before being eaten.

Applications

  • Cut peels into strips, and dip into melted (ideally tempered) dark chocolate
  • Chop peels into small chunks, and add to oatmeal cookies with white chocolate and dried cherries
  • Chop peels into small chunks and mix into chocolate ganache with toasted nuts and dried fruit – refrigerate and cut into squares

Honestly, these are good in just about anything.

Notes

  • The recipe easily doubles or triples if you have a big enough pot, and enough racks to drain the candied peels
  • Select oranges for the thickness of their peels
  • Since the finished product is vegan, gluten-free, and everything-but-oranges-sugar-and-chocolate-free, chocolate-dipped peels make great gifts for people with common dietary restrictions, unlike cookies
  • If you accidentally stab your hand with the fork while you’re perforating the oranges, you’ll quickly realize just how much you liked not having fresh orange juice in a stab wound



[1] If you ever wondered why the blog was titled “BI Polar” at least one reason is that the topics covered will swing dramatically from time to time. Please try to keep up.

Choosing Between Power BI Premium and Azure Analysis Services

Yesterday I posted an article comparing Power BI dataflows, Power BI datasets, and Azure Analysis Services. Although I’d like to believe that the article was useful, I used the disclaimer “I’m not an expert” in multiple places where I was talking about the differences between models in Power BI and AAS. I may not be an expert, but I do know quite a few people who are.

Specifically I know Gabi Münster and Oliver Engels from oh22data AG in Germany, and Paul Turley from Intelligent Business LLC in the United States.

Gabi and Oliver presented last month at the PASS Summit conference in Seattle on this very topic. Their session “Azure Analysis Services or Power BI – Which service fits you best?” looked at the history of the two services, their current capabilities and strengths, and future capabilities that Microsoft has announced in its business intelligence release notes. They even included a decision flowchart!

If you weren’t able to attend their session in November, I have good news and I have bad news and I have more good news.

The good news is that this session is included in the PASS Summit session recordings, which you can purchase and download today.

The bad news is that the session recordings cost $699, which may be difficult to justify if this is the only session you’re interested in[1].

The good news is that Oliver and Gabi were kind enough to share the slide deck and let me share it with you. You can download it here: AAS or PBI – Which service fits – from PASS Summit 2018.

And I’m very happy to see that their conclusions line up pretty well with my previous post.

[Slide: summary of Gabi and Oliver’s conclusions]

Paul has also presented a conference session[2] related to this topic, and has recently blogged an excellent feature comparison table covering the different options in SQL Server Analysis Services, Azure Analysis Services, and Power BI.

If you’re in a position where you need to select a BI platform, I highly recommend checking out these resources, as they include both valuable information and a framework for using that information in different common scenarios.

Update: Check out this new post from James Fancke at selfservedbi.com: Why Azure Analysis Services is a great value proposition. This article provides a great counterpoint to my post, drills down specifically into Azure Analysis Services, and is well worth a read.

Update: Also check out this new post from Dave Ruijter at moderndata.ai: Guide To Selecting Between Azure Analysis Services And Power BI Premium.

And if after reading these posts and this slide deck you still have unanswered questions, please seek professional help. Specifically, please find a Microsoft Partner who specializes in business intelligence, or a similar expert consultant who can help evaluate your specific needs and how the different available technical options can be applied to address them.


[1] The conference had many awesome sessions, so this should not be the only one you’re interested in.

[2] A delightfully themed conference session, at that.