
Power Query Editor – Avoiding the creation of all the helper Queries and Folders in Power BI

A current project starts with a folder in SharePoint containing multiple files that need to be imported together (all with the same spec).

There are lots of groups of files to import into Power BI

  • We need the File Name – it will be used to create the file snapshot date
  • There is a header over the columns – this needs removing and then the headers promoting

Get Data > Folder creates helper queries and folders that clutter up the back end and aren’t the best solution.

This is the code that gets created:

    Source = Folder.Files("C:\Users\Name\Company\Project\Data"),
    #"Filtered Rows" = Table.SelectRows(Source, each Text.Contains([Name], "FileName")),
    #"Filtered Hidden Files1" = Table.SelectRows(#"Filtered Rows", each [Attributes]?[Hidden]? <> true),
    #"Invoke Custom Function1" = Table.AddColumn(#"Filtered Hidden Files1", "Transform File (8)", each #"Transform File (8)"([Content])),
    #"Renamed Columns1" = Table.RenameColumns(#"Invoke Custom Function1", {"Name", "Source.Name"}),
    #"Removed Other Columns1" = Table.SelectColumns(#"Renamed Columns1", {"Source.Name", "Transform File (8)"}),
    #"Expanded Table Column1" = Table.ExpandTableColumn(#"Removed Other Columns1", "Transform File (8)", Table.ColumnNames(#"Transform File (8)"(#"Sample File (8)"))),
    #"Promoted Headers" = Table.PromoteHeaders(#"Expanded Table Column1", [PromoteAllScalars=true]),

You don’t want lots of these files and folders cluttering up the back end of Power BI, slowing things down. Try the following.

We are going to run through this again but change what we do

Get Data / File

Let’s collect all FileA files

Select Binary and Name and Right Click – Remove Other Columns

Now go to Add Column – Custom Column

Let’s build this function up.

Excel.Workbook returns the contents of the Excel workbook; the Content column is passed in here. Normally, when you click the arrow button on Content, this is where Power BI creates the helper queries for you, which we don’t want.
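Built as a custom column, the first version might look like this (assuming the previous step is named #"Removed Other Columns", as in the generated code earlier):

    = Table.AddColumn(#"Removed Other Columns", "Custom", each Excel.Workbook([Content]))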

If you click the new custom column (Just next to Table)  you can see the content

Now we can amend this custom column even more

= Table.AddColumn(#"Removed Other Columns", "Custom", each Table.Skip(Excel.Workbook([Content]){0}[Data]))

Table.Skip allows us to skip over the extra header row, and {0}[Data] selects the data from the first object returned by Excel.Workbook.

And we can even add promote headers to this one bit of code with Table.PromoteHeaders.

= Table.AddColumn(#"Removed Other Columns", "Custom", each Table.PromoteHeaders(Table.Skip(Excel.Workbook([Content]){0}[Data])))

Now we can click the Arrow on the Custom column and bring through all the columns in the table.

No helper queries are created, and we have done numerous tasks within that one line of code.

If you have already created everything, you can still do this: grab the code, replace the old code at the start of each query, and then remove the helper queries.


Power BI – Using flags in a junk dimension

We currently have a project where the metrics are actually flags that record whether something is true or false, rather than business metrics like Amount, SaleAmount, etc.

  • Is something Completed? 1 or 0
  • Is something In Development? 1 or 0
  • Is something Out of Scope? 1 or 0


Now, if you left these in the fact table you could sum them to create a measure. But you are left with thousands of rows of just 1s and 0s against all the keys in the fact table.

Also, they aren’t really metrics. They are true and false flags and as a consequence should not be in a fact table.

The above solution is not what Kimball would recommend. Kimball recommends a junk dimension.

Definition from Kimball: — A junk dimension is the combination of several row-level cardinality flags and attributes into a single dimension table rather than modeling them as a separate dimension

It should also be noted that a junk dimension can contain flags and other attributes that don’t quite fit anywhere else.

The star schema looks like this (This is just an example STAR)

Flags are split between two areas, D1 and D2. Area has also been added to the junk dimension, as it’s basically just a label used to split the metrics into two rows in a table.

These two areas are two completely separate low-level fact tables, with different metrics and dimensions. Think, for example, of ‘Human Resources’ and ‘Marketing’.

We have them here because we have a high-level granularity star schema comparing some of the similar flags across the two departments.

Area could have been added as a separate dimension, but as it’s just one data item it seemed more sensible to add it here.

So instead of having them repeated over and over again in the fact table, we just have every available option set once in the junk dimension. So every single record where F1 is true for area D1 goes to Key 0.
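For illustration (column names assumed), a junk dimension enumerating the combinations for a single flag might look like this:

    JunkKey  Area  Flag
    0        D1    TRUE
    1        D1    FALSE
    2        D2    TRUE
    3        D2    FALSE

With more flags, the dimension holds one row per distinct combination, which stays tiny compared to the fact table.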

Summing the flags

The flags are Boolean (true/false) columns, so we can’t simply sum them up. If we were to do that, we would first need to convert them to integers.

Flag int = CONVERT([Flag],INTEGER)

Once converted we could sum them.

SUM([Flag Int])

The junk dimension is used against multiple fact tables and here is the first issue.

If we create DAX this way, there is no mention of the central fact table. Therefore, if you use it, it just counts all the true flags in the dimension, completely separately from the fact table.

We need that join to the fact table in the DAX

So we don’t need to convert the flag to an int to use it in a measure. We can do the following:

Measure is true = CALCULATE(COUNT('Fact1'[JunkKey]), 'Dim Junk'[Flag] = TRUE() && 'Dim Junk'[Area] = "D1")


The CALCULATE function evaluates the count of the junk keys (which are already integers) within the fact table. This is important because we have now created a reference to one of the fact tables.

And we are counting where the flag is equal to TRUE in the dimension.

The && allows us to add another condition. In this case we are only looking at the D1 area.

We can repeat this logic for each of the separate fact tables:

Measure is true = CALCULATE(COUNT('Fact2'[JunkKey]), 'Dim Junk'[Flag] = TRUE() && 'Dim Junk'[Area] = "D1")

And now we have a fact table with measures we can use for reporting. And we have made sure our Power BI data model is as small as it can be in memory.

Power BI – Concatenating two measures together under one column header for a table display

Another requirement came through for a table that showed the current month’s figures against last month’s figures.

However, to save space, the two values were to be shown in the same column, with last month contained in a ().

So part one was to create the date/time intelligence for last month.

And then to attempt to concatenate the two figures, just for the table.

Time intelligence to see this time last month

There are various ways of doing this

Prev Month At Risk = CALCULATE([At Risk],PARALLELPERIOD('Dim'[Date], -1,MONTH))

Here we use CALCULATE to evaluate the SUM of At Risk (created in a base measure already) in a modified filter context, which is PARALLELPERIOD using the date from the Date dimension.

PARALLELPERIOD takes the current set of dates (For us its month and year) and shifts the first and last date in the period specified a number of intervals. -1 takes us back a month.

This is the chosen method but you can also use

CALCULATE([At Risk],DATEADD('DimDate'[Date],-1,MONTH))

DATEADD returns a table that contains a column of dates, shifted either forward or backward in time by the specified number of intervals from the dates in the current context. Again, we are using month and year in the current context.

CALCULATE([At Risk],PREVIOUSMONTH('Dim Report Snapshot Date'[Date]))

PREVIOUSMONTH returns all dates from the previous month, using the first date in your context. As we are using month, our first date is 01/03 so it goes back to 01/02.


There is also SAMEPERIODLASTYEAR, which returns a table (it’s a table expression) with a single column of date values from the same period in your current context, but from last year. SAMEPERIODLASTYEAR can only go back a year.
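The same CALCULATE pattern applies here too. As a sketch, using the same date dimension as the DATEADD example:

    CALCULATE([At Risk],SAMEPERIODLASTYEAR('DimDate'[Date]))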

Concatenate current month and Last Month into one single value

This is for a table only. Not to be used for Proper analytics.

So first we need to convert the number into a string.

And then we can concatenate. The following was used

At Risk Display =
VAR varMeasure = CONVERT([At Risk],STRING)
VAR varPrev = CONVERT([Prev Month At Risk],STRING)
VAR varConcata = CONCATENATE(varMeasure,"(")
VAR varConcatb = CONCATENATE(varPrev,")")
RETURN CONCATENATE(varConcata,varConcatb)



There will be lots of other ways to do this but I decided on the following
Creating Variables to hold:

  • The measure converted to a string
  • The previous (Month) measure converted to a string
  • CONCATENATE (you can only concatenate two things using this method), so the measure was concatenated with (
  • CONCATENATE, doing the second part of the concatenation: the previous month string value with )
  • Finally we return a concatenation of a and b to make a string value we can use for the table.

To CONCATENATE more than two columns you can use the & operator:

Column D = [column A] & " - " & [column B] & " - " & [column C]

So where we have 5 measures and a total, to view them in the table under one column header we have concatenated them together.

Power BI Composite Modelling (Multiple datasets Report) – Couldn’t load the data for this visual

This blog is for anyone using the new App experience (August 2022) who has created a report using multiple datasets, where the users can’t see the data.

We have

  • A workspace
  • A Dataflow
  • Multiple Datasets
  • A report using all the datasets
  • An App with testers
  • There are two testers with access to the testing report

The app is published but the users only see visuals with no data. When they try to refresh they see this error

This seems to be an issue with the composite model. It turns out that for users of composite model reports you need to have Build permission turned on.

This means that the people in the testers group can view the composite report. But as an after-effect, they can also build reports over the datasets.

I believe Microsoft may be aware and are looking into this. But for the time being, any users of composite reports need to have this permission selected.

Power BI – Deployment Pipeline Quick Tips – Setting dataflow Environment sources and Publishing (Direct Query) Datasets containing multiple Datasets

You need Premium or Premium Per user to work with Deployment Pipelines

This happens right at the beginning of the Deployment Pipeline process when you have just added Dev to the Pipeline and you need to deploy Test and Prod

Tip – Changing Data source Rules for Dataflow

You now need to deploy your dev dataflow, which is connected to the dev database, into Test. You can’t change the data source rule until you have a data source to work with.

After deploying, the test dataflow is still pointed at the dev data source (Azure SQL Database).

Click Test Deployment Settings

Deployment Rules – Click on your dataflow

Data Source Rules – Change This (Your Dev Details) to this (Select and choose your Test SQL Server and Database)

And Save

The tip here is to then deploy your dataflow from Dev to Test again. Only then will it use the new settings.

To check go back to the workspace and go to settings for the dataflow

Deploying datasets that connect to multiple datasets

This is specific to setting up Power BI With the Following Option

With this option set you can create smaller datasets, probably based on a star schema. Then, if required, you can connect to another dataset, and then connect to more datasets and data. Without this option you can only connect to one dataset.

This changes the connection from a Live Query connection (the standard way, one dataset only) to a Direct Query connection over Analysis Services (multiple datasets).

Tip- Move your hybrid data set after the original data sets

So here, what we can do is move the dataflows, and datasets A B and C at the same time.

Once completed, move Star A B and C so it goes after the schemas it’s based on.

Then do the report.

If you try and do them all together you will get errors.

So these are just a couple of tips to look out for when setting up your pipelines for the first time, and when you use the setting that allows you to connect to multiple datasets.

Power BI – App Viewers can’t see the data in the report

We recently had an issue where a shared dataset (pbix) had been set up over a SQL Database.

This was then published to Power BI

A new pbix was created.

Power Platform – Power BI datasets was chosen and the shared dataset was selected. Then reports were created and published to Service.

An App was set up and a user was added to view the report.

However when they came to view the report, they would see the report but not the data. All they had was messages about not having access to the data.

At first we struggled to understand what the problem was and then it started to add up.

Previously we had worked on a project with dataflows and multiple datasets being used for one report. So we have the following ticked

This worked great for this specific project. We were in Premium. There were dataflows.

However, this project is just a test report, not set up in Premium and without dataflows.

The above setting is a blanket setting that changes every pbix you create from Live Query to Direct Query.

Live Query is where it live-connects to just one dataset only. When you publish your report over the dataset it uses that initial shared dataset and doesn’t create a new dataset, because the DAX, model etc. are all set up in that specific dataset.

Direct Query is a slight change. You Direct Query the data source (the dataset) and, crucially, you can also Direct Query other datasets, even other data sources like databases and flat files, all together. But that shared dataset is also direct querying its data source.

Direct Query is a good option for real-time analysis from a transactional database. But many DAX expressions aren’t available over Direct Query straight over a database, for example time-based intelligence DAX. So the reports are much simpler in Power BI, and more complex to set up at the database end for the users.

In this instance, the reason we have issues is because there is no dataflow at the start of the Power BI process. 

If you are using Direct Query over a dataflow, the data is imported into Power BI into the dataflow. The dataset Direct Queries the Dataflow.  Your users are then added to the workspace App and they can see the data because they have access to the dataflow.

Without the dataflow, your dataset is calling data directly as Direct Query, which essentially means Power BI always calls from the database and not from the Power BI columnar data store.

So the users were opening up the App, and trying to access data straight from the database because there is no dataflow holding the data. Because the user doesn’t have access to the database, there is no data to be seen.

So the issue here, I think, is that Power BI should be allowing us to switch this option on and off depending upon the choices we make on set up, not just have it as a blanket option over every single report like it does now.

Without dataflows you want to Live connect to the shared dataset. Not Direct Query right down to the datasource.

With a dataflow it’s fine to Direct Query, because the users have access to the dataflow data in the workspace.

Power BI Datamarts (New May 2022)

Difference between dataflows, datamarts and datasets


Let’s have a quick look at the history of the dataset.

Here we see everything in one pbix file. Only one person can work with the file at any one time. We can’t reuse anything or work on anything separately. Our dataset is in the one pbix file. Depending on Import or Direct Query, the dataset is in the Power BI columnar data storage.

The only use case for this now would be if you were simply working on your own small projects outside of a working team environment, in Pro or even with a Power BI Free license.

Here we can see that the dataset is now separate from the dataflow (the data transformation) and the actual reporting pbix files. The dataset is the model and the measures.

This is currently the use case that we use. However our main transformations are outside of this within the SQL database.


Dataflows are packaged ETL-type transformations. We are packaging them up into a dataflow to be reused. These are really good for reusable dimensions, like Dates, Locations, etc.

They are for individual datasets that you bring together later on in the process

Dataflow data sits in a data lake, so you can use dataflows for machine learning tasks really easily. This is one of the big wins for dataflows.

But can you do all of your transformations in them?

Some of the Power Query transformations can be really time consuming and memory intensive. Especially when you are trying to create a star schema from transactional tables and lots of separate data sources.

You also need to think about Premium or Pro, because there are certain things that you can’t do in Pro within the dataflow, as they need Premium in-lake compute (append and duplicate, for example).

If you do all this in your Pbix file this can easily grind the file to a halt. Moving it to a dataflow means that this can be done at a different time and you refresh your pbix file with work that has already been done.

However, even this can be too much. Imagine you are developing: you have to go to the dataflow and refresh, and Power BI has to grind through all the steps, which can be really complicated.

You can go wrong, backtrack by creating more steps, and leave the incorrect steps in very easily, making a great number of activities. All the activities have to be refreshed, even the wrong ones.

It is still recommended to do the heavy processing work outside of Power BI. say with Azure (Data Factory and SQL Database)

Then when Developing in the dataflow you can do things quickly and they can be transferred to the SQL Database at another time. Still allowing the user to develop quickly.


The new Premium Feature announced at Microsoft Build May 2022

The self-service database. It doesn’t replace the data warehouse.

Datamarts allow you to combine and work with data from all sources in a single place.

Datamarts replace the step we would call the shared dataset previously.

We would have a pbix file where we would bring in the dataflow (Which is used over the SQL datamart and we do all the friendly naming in the dataflow)

The Shared Data set contains the model and measures (I don’t use calculated columns as they can bloat the model)

The pbix file would be published to service. Then report pbix files are created over the top of the published dataset. In this example there are two pbix files.

Datamarts allow you to just have the one report pbix file instead.

Premium or PPU only. So, as a user, you have to understand that with Power BI Pro this isn’t a feature we can use.

Datamarts are about self service data analytics. Bridging the gap between business users and IT. How do we create the data warehouse without having to go to central IT?

No Code Low Code

But does it mean you don’t have to create your database and ELT inside Azure?

There is still the need to create full enterprise solutions with SQL datamarts and warehouses.

Just like with the dataflows, transforming to an OLAP schema from OLTP (or data sources that aren’t even OLTP sources, but just scattered data sources) can be very memory and processing intensive.

Creating a datamart with better control and governance should still be done pre-Power BI for large, more complex projects.

So what other use cases and plus points are there for the datamart?

Data Refresh

Another good example of a use case for the datamart is that datamarts refresh the dataflow and then the dataset. There is no need to use APIs to run the datasets straight after the dataflows, or to set up refreshes in Power BI for both, guessing the amount of time it will take to run the dataflow.

Our Datamart users

This is a great option for people who use Macs and can’t use Desktop. It enables a SQL endpoint for you.

Datamarts are geared towards self-service: the citizen data analyst.

“A person who creates or generates models that leverage predictive or prescriptive analytics, but whose primary job function is outside of the field of statistics and analytics.”


Would you use the Datamart in an Enterprise setting?

In an enterprise setting you have data engineers and developers. You will have a BI team as well as analysts. There is a place for the datamart in the self-service bronze approach, still with the aim of moving to the more governed approach of having the logic set in a SQL Database centrally.

Our analysts creating self-service content probably aren’t creating star schemas or fully attempting to transform within the dataflow. This will still need to be done by the BI devs.

However, it’s probable that without the datamart, all the relationships and measures were created inside one pbix file, and there may not be a SQL Database layer, just datasets created from files etc.

The datamart allows for a better governed blended approach

Would a Developer or a data engineer use a datamart?

The BI developers and data engineers are probably working outside of Power BI, in the SQL Database and with Data Factory or other ETL packages. However, they can now leverage the datamart features if they want to quickly look at the data for a project.

The Datamart model

So how does this change our dataset and dataflow models above?

We can see how the Datamart unifies the dataflow and the dataset that is usually created in the shared pbix files. It also raises lots of questions.

  • Do we still create dataflows separately?
  • What is this new layer, the SQL Database?
  • If we have our datamart in SQL, do we need to use the datamart in Power BI?

The Datamart SQL Database layer

Dataflows store their data in a data lake.

Datamarts are stored in an Azure SQL Database. You will hear this being called the data warehouse. When we think of the DW, we think in terms of star schemas.

If your logic is complex and the datasets are large, it’s always best to use technologies outside of Power BI (Data Factory, SQL Database).

The data warehouse being spoken about here is simply data storage, like the staging layer in an Azure SQL database. Our users here probably aren’t users who understand how to create OLAP schemas, so you can see this as your staging layer.

Then you have the dataset layer with the relationships and calculations. So the SQL layer is the middle layer between the dataflow and the dataset.

But what can you do with the SQL Layer and what can’t you do?

You can’t write DDL (ALTER, CREATE) or DML (INSERT, UPDATE, DELETE etc.) queries, just DQL (SELECT).

So you can’t write stored procedures or do any real transformations within SQL. This still has to be done in the dataflow. You can only query it.
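For example (table name assumed), against the datamart’s SQL endpoint only read queries succeed:

    -- Allowed (DQL): read-only queries
    SELECT Area, COUNT(*) AS RecordCount
    FROM dbo.FactTable
    GROUP BY Area;

    -- Rejected (DDL/DML): statements like these will fail
    -- CREATE TABLE dbo.NewTable (Id INT);
    -- UPDATE dbo.FactTable SET Area = 'D1';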

The SQL Database is not part of the transform layer

How to set up the Datamart in Power BI service

New DataMart

At the moment you can only set this in the admin tenant settings. You either allow the whole organisation to use datamarts or no one. Hopefully they will change this soon so you can allow a small group of users to test the functionality.

I will do another example post soon but basically, you can create the model (really great for Mac users who can’t use Desktop)

And you can also write measures in DAX. My main concern here is that simple base measures are fine, but for complex ones I always test them against a visual, and you don’t have the ability to do this here.

Also, you can’t create calculated columns or calculated tables. This is a good thing; you don’t want to be creating these anyway, as they bloat your model due to poor compression.

Behind the scenes, managed SQL Server is running the SQL layer, and you still have the Power BI columnar data store layer for the dataset.

Row-level security can also be done here, at the SQL layer and the dataset layer. (Two layers are created by applying security on the dataset as you would usually do, but in Service, not in Desktop.)

Ad hoc analysis can be done in Power Query by the user on the SQL layer, and if you know SQL you can write T-SQL too within Power Query.

You can also take your SQL Endpoint into SSMS for example (Read Only)

You can manage Roles in the Datamart and Assign Internal and External Users to the Role. Or Share the Endpoint with them if XMLA endpoints are on.

This is a really exciting new development for the self-service side of Power BI. We now need to understand where it sits, who our users are, and how we can apply it to projects.


If you create reporting in Power BI Service at the moment, you can’t publish to other workspaces or tenants. That’s where a pbix file comes in that is separate from Service, so you can republish to other tenants. How will the datamart help with this kind of functionality?

What are the future developments of the datamart going to be? For example, slowly changing dimensions, monitoring, version control?

Will this cost any more money over having a Premium license?

Will the SQL layer ever become part of the transform functionality?

Azure Logic App – Copying a file from Sharepoint to a Data Lake

I have been asked to set up a Logic App in Azure (similar to Power Automate, for anyone outside Azure) to copy specific file(s) from a SharePoint folder into an Azure Data Lake.

The first example file is around 16,00 rows and not likely to grow too significantly. This is the same with the other files.

There is a specific use case behind this First logic app:

  • The data in the csv file(s) is updated every day, so the file name remains the same
  • We need to copy the file and overwrite the file in the data lake every day, after the task to update the SharePoint file has been done (around 5PM every day)
  • We want the Logic App to run via Data Factory
  • Once the Logic App has run, we want to trigger the pipeline to populate the SQL database from the file in the data lake

Set up the Logic App

In azure go to Logic App and New

Log Analytics: to get richer debugging information about your logic apps during runtime

Consumption Plan: Easiest to get started and fully managed (Pay as you go model). Workflows increase slowly or are fairly static

Standard Plan: Newer than the consumption plan. Works on a single tenant. Works on a flat monthly fee which gives you potential cost savings.

Create the Logic App

Once you have added tags and created its time to create the logic App

Because we want to trigger it from Azure Data Factory, we want to go for "When a HTTP request is received".

The HTTP Post URL will be used in Data Factory to trigger the Logic App.

I have added a JSON schema that supports some of the important information for this project, like the container for the data lake, folder, file name, and isFolder (which becomes more important a little later).

     "properties": {       
        "Container": {            
           "type": "string"        
        "fileName": {            
           "type": "string"        
       "folder": {           
            "type": "string"        
       "isFolder": {            
           "type": "boolean"        
"type": "object"

List Folder

Now we want to list the SharePoint folder. So create a new step and search for List Folder.

Returns files contained in a Sharepoint Folder.

Next you have to Sign into Sharepoint with a valid account that has access to the Sharepoint site.

Here is where we have a question. For this test my own username and password have been used, but obviously I change my password at certain points, which means that this will need manually updating when that happens.

What we need is a way of logging into Sharepoint that isn’t user related and we can use within the logic app. This needs further thought and investigation.

When you log in you create a Sharepoint API connection in Azure Resource Group

To get the site address you can go into Sharepoint, Click on the … against the file and copy link.

The link needed amending slightly because it needs to be

If you have access you should then be able to click the folder against File Identifier and select the correct area

For Each

Next step: for each ‘Body’ from the List Folder step, we get the file content. Go to Next Step and choose the For Each condition (because there will be multiple files).

Get File Content

Now we want to Get File Content from SharePoint.

This gets file contents using the File Identifier. The contents can be copied somewhere else or used as an attachment.

You need to access the same Sharepoint site address as before. Then click on File identifier and choose ID from the Sharepoint Dynamic Content pop up

So here we can see that from the List Folder step we have lots of file metadata we can use, like DisplayName, ID, LastModified, etc.

We know we need ID for Get File Content

We are at a point where we can run this now as a test.

Note that so far we have this set up

But we hit two specific issues:

Status 404 File not found

cannot write more bytes to the buffer than the configured maximum buffer size of 10457600

So we have two issues to resolve, and after a bit of help on the Q&A forums we find out that:

List Folder "returns files contained in a SharePoint folder", but it actually also returns folders, which error because they are not files.

Logic Apps aren’t really set up for large files. There doesn’t appear to be any way we can get past the size issue, so we need to check our files and also think of ways to bring through smaller datasets if need be.

Thankfully our files are way below the threshold and the business thinks that they won’t increase too much.

So here is where we can start applying criteria, which we want to do anyway because we only want certain files.

  1. If it’s a folder, we don’t want to use it
  2. If it’s over 10457600 bytes in size, we don’t want to use it
  3. Only bring through files called…….

So we need to change our For Each

Within the For Each, add a new step and search for Condition.

And add your conditions (And Or)

Then you can move the Get File Content step into the True branch.

So if IsFolder is false and the size is less than 10457600, we can grab File A OR File B.
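Expressed as a single Logic Apps condition expression, it would look something like this (the file names are placeholders, and the property names are those exposed by the List Folder step, so adjust to match your payload):

    @and(equals(item()?['IsFolder'], false),
         less(item()?['Size'], 10457600),
         or(equals(item()?['Name'], 'FileA.csv'),
            equals(item()?['Name'], 'FileB.csv')))

Building the same logic through the designer’s And/Or groups produces an equivalent expression behind the scenes.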

When you now test this Logic App, Get File Content should succeed, with most items not even hitting the criteria.

Create Blob

Finally within the True section we need to add the file to our Data Lake.

Search for Create Blob

Here you have to sign into your Blob Storage which again creates another API Connection in Azure

You have to supply the storage account name and choose an authentication type. Access Key has been used, with the details added here. Normally in Data Factory the access key is obtained through a Key Vault, so more information is needed to come up with the most secure way of doing this. There are two other authentication types to choose from.

More investigation is needed into these other approaches.

Now we can do a full test of the Logic App

Testing the Logic App

When you trigger the logic app

The Body contains a long list of every object. Really handy to know what the details are inside this action.

To test, this was copied into a Word document.

Next comes the Get File Content

Now most of the files don’t satisfy the condition.

Next was clicked to get to a file in Get File Content (first one appeared as number 32)

And now we can see the Body of the Create Blob. (This happens for every file specified in the criteria.)

And if you use the Microsoft Storage Explorer app you can check that they have indeed been updated (either it’s a new file or it updates what is already there).

Data Factory

Now we have saved the Logic App we want to trigger it in Data Factory

Create a pipeline and choose a web activity

Copy the URL from the Logic App and paste here

For the Body, the simple JSON from the start of this article was used.
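The steps above amount to a Web activity along these lines in the pipeline JSON. The URL is a placeholder — paste your own Logic App HTTP POST URL — and the body should be whatever JSON your Logic App trigger expects:

```json
{
  "name": "Trigger Logic App",
  "type": "WebActivity",
  "typeProperties": {
    "url": "https://prod-00.uksouth.logic.azure.com/workflows/<workflow-id>/triggers/manual/paths/invoke",
    "method": "POST",
    "headers": { "Content-Type": "application/json" },
    "body": { "example": "value" }
  }
}
```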

Now you can trigger this pipeline along with all your other pipelines to run the Data into your Data Lake and then into SQL to be used for Analytics.

Power BI February 2022 Updates Dynamic M Query Parameters

Now supports SQL Server and more data sources

But what are Dynamic M Query Parameters and what does this mean?

It feels like they have been upgraded to use with direct query data sources so you can restrict the amount of data being asked for at the lowest level.

Let’s have a look at a simple example using Taxi data from a Microsoft learning path.

First of all you need to open Power BI – Options and Settings – Options

It’s in Preview, so make sure that is ticked before continuing

Get Data – Azure SQL Database (the guidance mentions SQL Server but it seems that both can be used for this test)


Then go to Transform data.

Right click on TripFares to get to the Advanced Editor

    Source = Sql.Database("", "taxi-data-db"),
    dbo_TripFares = Source{[Schema="dbo",Item="TripFares"]}[Data]

It’s currently bound to a table but we need to bind it to a query for this process.

Click the cog against source.

Go into Advanced Options and add the SQL Statement

SELECT * FROM dbo.TripFares

And then go back and look at advanced editor

    Source = Sql.Database("", "taxi-data-db", [Query="SELECT * FROM dbo.TripFares"]),
    dbo_TripFares = Source{[Schema="dbo",Item="TripFares"]}[Data]

So now it’s nearly bound to a query, but you will note that it looks like the table is erroring.

You can go back to Advanced Editor and change to

    Source = Sql.Database("", "taxi-data-db", [Query="SELECT * FROM dbo.TripFares"])
    in
        Source

 We only need the query and not dbo_TripFares

Now we can add the  Dynamic M Query parameters. I will go for an easy one first as a demo.

And then I change the advanced code again

Source = Sql.Database("", "taxi-data-db", [Query="SELECT * FROM dbo.TripFares Where payment_type = '" & paramPaymentType & "'"])

Note the new WHERE Clause that concatenates the value in our parameter

It will read in SQL SELECT * FROM dbo.TripFares Where payment_type = ‘CRD’

When it runs the first time you are asked to approve, and you can actually see the SQL it’s going to use, which is good. (Note I had to change to CSH to get the message up, but I am running with CRD.)

When it comes through it’s restricting to the selected payment type

We are going to change the code again

    filterQueryPaymentType = "SELECT * FROM dbo.TripFares Where payment_type = '" & paramPaymentType & "'",
    Source = Sql.Database("", "taxi-data-db", [Query=filterQueryPaymentType])

This sets the SQL command first and then we pass the filter query into the data source
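Putting those two steps together, the whole query would look something like this (the server name is left blank here, as in the snippets above, and `paramPaymentType` is the Dynamic M Query Parameter created earlier):

```powerquery
let
    // Build the SQL statement first, concatenating in the parameter value
    filterQueryPaymentType = "SELECT * FROM dbo.TripFares Where payment_type = '" & paramPaymentType & "'",
    // Then pass the filter query into the data source
    Source = Sql.Database("", "taxi-data-db", [Query=filterQueryPaymentType])
in
    Source
```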

Now we know that the query works, let’s use it in Power BI reporting and assign it to a table.

This will need a lookup table of all the payment types to work

I am going to simply create the reference table in M

Source = Sql.Database("", "taxi-data-db", 
[Query="SELECT DISTINCT payment_Type FROM dbo.TripFares"])
in    #"Source"

Close and Apply

Now bind the table to the parameter in Modelling tab

Click on Payment_Type column. Advanced. Bind to parameter

Click Continue

A multi select is not going to be used for this demo

I have added a quick table. The metrics have come through as strings and there will be lots of things you need to test in direct query mode, but I will ignore that for the time being.

I dragged in Payment type from the Payment Type lookup into a slicer.

Click the slicer and see your data change. Every time you click the slicer a direct query will happen, but only for the payment type selected, hopefully making things much quicker.

And there you go. You have set up a restricted direct query. This will help with any direct query reports you need to create based on real time data.

You are still hitting the SQL DB a lot though, so this would need thinking out.

And remember, direct query doesn’t give you the full Power BI reporting suite, so your reports may be more basic. And usually I like to work with star schemas, but here we have the extra complexity of lookup tables to work with the parameters.

I will be looking at a date time example soon hopefully. This is clearly an extremely important piece of the Direct query real time puzzle.

Power BI Admin APIs to return a list of email subscriptions

Get Dashboard Subscriptions brings back a list of everyone who has subscribed to a dashboard

What is a Power BI Subscription?

Subscriptions are a great way to assign yourself and other users to get emails regarding report content.

There are certain governance rules we follow.

  1. Report viewers view content via an app. We don’t want report viewers coming into the App workspace; we want them to see carefully selected and brought-together content.
  2. If we use Subscriptions we want to push through a really nice screenshot of a report that changes and gets the users wanting to come and see more content within that app. Therefore we always have a report or dashboard with visuals that don’t need scroll bars, to engage the viewer so they want to see more.
  3. Because of this, we want to be able to subscribe people to App content.

Go to an App. Note you can add your Subscription here which is a link to the dashboard

For this example, the App dashboard is subscribed to.

Then we go to “Try it out” from the Microsoft API page

Dashboard Subscriptions

And try the API

Add the Dashboard ID to parameters
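Behind the “Try it out” page this is just a REST call. With the dashboard ID filled in, the request looks like this (the bearer token comes from the signed-in Azure AD account):

```http
GET https://api.powerbi.com/v1.0/myorg/admin/dashboards/{dashboardId}/subscriptions
Authorization: Bearer <access-token>
```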

But this is where the logic is not quite working (the hope is that this will be resolved fairly quickly). The above API doesn’t give you information back if you subscribe via the app, only when you subscribe to the actual Dashboard in the workspace.

We want all our report viewers accessing the pre built app so this is where the information is most required.

When the user is added to a dashboard subscription in the workspace, the API is tested again.

What this can show us is anyone in the workspaces that has subscribed to the actual dashboard.  We want all viewers with App access.

Get report Subscriptions as Admin

This is the same as above but with reports

Get user Subscriptions as Admin

I get my user ID from Azure Active Directory

And see what I’m subscribed to, but again, only workspace content

Logically, I feel like our viewers should be subscribing through the apps.

This is really good stuff, but I feel like they need to resolve the issue with Apps. Apps are the go-to areas for users to view content, so this is where we want people to subscribe.

If you look at the information coming back, we can see the artifact type is report, but there is nowhere that mentions whether the report is in an App or in the workspace, and I feel like this is actually important information. I only know because I have tested against both the App and the workspace.

If this could be resolved these APIs would be really useful to help us understand the subscription uptake.