
Power BI Composite Modelling (Multiple datasets Report) – Couldn’t load the data for this visual

This blog is for anyone using the new App experience (August 2022) who has created a report using multiple datasets, only to find that users can't see the data.

We have

  • A workspace
  • A Dataflow
  • Multiple Datasets
  • A report using all the datasets
  • An App with testers
  • There are two testers with access to the testing report

The app is published but the users only see visuals with no data. When they try to refresh they see this error

This seems to be an issue with the composite model. It turns out that users of composite model reports need to have the following permission turned on.

This means that the people in the testers group can view the composite report. But as an after-effect they can also build reports over the datasets.

I believe Microsoft may be aware and is looking into this, but for the time being any users of composite reports need to have this permission selected.


Power BI – Deployment Pipeline Quick Tips – Setting dataflow Environment sources and Publishing (Direct Query) Datasets containing multiple Datasets

You need Premium or Premium Per User to work with Deployment Pipelines.

This happens right at the beginning of the Deployment Pipeline process, when you have just added Dev to the pipeline and need to deploy to Test and Prod.

Tip – Changing Data source Rules for Dataflow

You now need to deploy your dev dataflow, which is connected to the dev database, into Test. You can't change the data source rule until you have a data source to work with.

After deploy, the test dataflow is still against the dev data source (Azure SQL database)

Click Test Deployment Settings

Deployment Rules – Click on your dataflow

Data Source Rules – Change This (Your Dev Details) to this (Select and choose your Test SQL Server and Database)

And Save

The tip here is to then deploy your dataflow from Dev to Test again. Only then will it use the new settings.

To check, go back to the workspace and open the settings for the dataflow.
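The same Dev-to-Test redeploy can also be scripted against the Power BI REST API ("Pipelines - Deploy All"), which is handy once the data source rules are saved and you just need to push again. This is a minimal sketch, not a full client: the pipeline id is a placeholder, and the commented-out POST assumes you already have a bearer token.

```python
# Sketch of the "Pipelines - Deploy All" call used to redeploy Dev -> Test.
# The pipeline id and token are placeholders, not values from this article.
import json

API_BASE = "https://api.powerbi.com/v1.0/myorg"

def build_deploy_request(pipeline_id: str, source_stage_order: int = 0):
    """Return (url, json_body) for deploying everything from one stage to
    the next. Stage order 0 = Dev (deploys to Test), 1 = Test (to Prod)."""
    url = f"{API_BASE}/pipelines/{pipeline_id}/deployAll"
    body = json.dumps({
        "sourceStageOrder": source_stage_order,
        "options": {
            "allowCreateArtifact": True,     # create items missing in the target stage
            "allowOverwriteArtifact": True,  # overwrite items that already exist
        },
    })
    return url, body

url, body = build_deploy_request("my-pipeline-id")  # placeholder id
# To actually run it, POST with a bearer token, e.g.:
# requests.post(url, data=body, headers={"Authorization": f"Bearer {token}",
#                                        "Content-Type": "application/json"})
```

A second call with the same body is exactly the "deploy again" tip above: the redeploy picks up the saved data source rules.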

Deploying Datasets that contain multiple data sets

This is specific to setting up Power BI With the Following Option

With this option set you can create smaller datasets, probably based on star schemas. Then, if required, you can connect to another dataset, and then to more datasets and data. Without this option you can only connect to one dataset.

This changes the connection from a Live Connection (the standard way: one dataset only) to a DirectQuery connection (Analysis Services and multiple datasets).

Tip- Move your hybrid data set after the original data sets

So here, what we can do is move the dataflows and datasets A, B and C at the same time.

Once completed, move Star A, B and C so it goes after the schemas it's based on.

Then do the report.

If you try to do them all together you will get errors.

So these are just a couple of tips to look out for when setting up your pipelines for the first time, especially if you use the setting that allows you to connect to multiple datasets.

Power BI – App Viewers can’t see the data in the report

We recently had an issue where a shared dataset (pbix) had been set up over a SQL Database.

This was then published to Power BI

A new pbix was created.

Power Platform – Power BI datasets was chosen and the shared dataset was selected. Then reports were created and published to Service.

An App was set up and a user was added to view the report.

However, when they came to view the report, they could see the report but not the data. All they had were messages about not having access to the data.

At first we struggled to understand what the problem was and then it started to add up.

Previously we had worked on a project with dataflows and multiple datasets being used for one report, so we had the following ticked.

This worked great for this specific project. We were in Premium. There were dataflows.

However, this project is just a test report, not set up in Premium and without dataflows.

The above setting is a blanket setting that changes every pbix you create from Live Query to Direct Query.

Live Query live-connects to just one dataset. When you publish your report over that dataset it uses the initial shared dataset and doesn't create a new one, because the DAX, model etc. are all set up in that specific dataset.

Direct Query is a slight change. You Direct Query the data source (the dataset), and crucially you can also Direct Query other datasets, and even other data sources like databases and flat files, all together. But that shared dataset is also direct querying its own data source.

Direct Query is a good option for real-time analysis over a transactional database, but many DAX expressions aren't available over Direct Query straight against a database. For example, time-intelligence DAX. So the reports are much simpler in Power BI, and more complex to set up at the database end for the users.

In this instance, the reason we have issues is because there is no dataflow at the start of the Power BI process. 

If you are using Direct Query over a dataflow, the data is imported into Power BI into the dataflow. The dataset Direct Queries the Dataflow.  Your users are then added to the workspace App and they can see the data because they have access to the dataflow.

Without the dataflow, your dataset is calling data directly via Direct Query, which means Power BI always calls the database and not the Power BI columnar data store.

So the users were opening up the App, and trying to access data straight from the database because there is no dataflow holding the data. Because the user doesn’t have access to the database, there is no data to be seen.

So the issue here, I think, is that Power BI should allow us to switch this option on and off depending on the choices we make at set-up, not just have it as a blanket option over every single report like it does now.

Without dataflows you want to Live connect to the shared dataset. Not Direct Query right down to the datasource.

With a dataflow it's fine to Direct Query, because the users have access to the dataflow data in the workspace.

Power BI Datamarts (New May 2022)

Difference between dataflows, datamarts and datasets


Let's have a quick look at the history of the dataset.

Here we see everything in one pbix file. Only one person can work with the file at any one time. We can't reuse anything or work on anything separately. Our dataset is in the one pbix file and, depending on Import or Direct Query, the dataset is held in the Power BI columnar data storage.

The only use case for this now would be if you were simply working on your own small projects outside of a working team environment, on a Pro or even Power BI Free license.

Here we can see that the dataset is now separate from the dataflow (the data transformation) and the actual reporting pbix files. The dataset is the model and the measures.

This is currently the use case that we use. However our main transformations are outside of this within the SQL database.


Dataflows are packaged ETL-type transformations. We package them up into a dataflow to be reused. These are really good for reusable dimensions, like dates, locations, etc.

They are for individual datasets that you bring together later on in the process

Dataflow data sits in a data lake, so you can use it for machine learning tasks really easily. This is one of the big wins for dataflows.

But can you do all of your transformations in them?

Some of the Power Query transformations can be really time consuming and memory intensive. Especially when you are trying to create a star schema from transactional tables and lots of separate data sources.

You also need to think about Premium or Pro, because there are certain things you can't do in Pro within the dataflow as they need Premium in-lake compute (Append and Duplicate, for example).

If you do all this in your pbix file it can easily grind the file to a halt. Moving it to a dataflow means the work can be done at a different time, and you refresh your pbix file with work that has already been done.

However, even this can be too much. Imagine you are developing: you have to go to the dataflow and refresh, and Power BI has to grind through all the steps, which may be really complicated.

You can go wrong, backtrack by creating more steps, and very easily leave the incorrect steps in, building up a great number of activities. All the activities have to be refreshed. Even the wrong ones.

It is still recommended to do the heavy processing work outside of Power BI, say with Azure (Data Factory and a SQL Database).

Then when Developing in the dataflow you can do things quickly and they can be transferred to the SQL Database at another time. Still allowing the user to develop quickly.


The new Premium Feature announced at Microsoft Build May 2022

The Self-Service Database. It doesn't replace the data warehouse.

Datamarts allow you to combine and work with data from all sources in a single place.

Datamarts replace the step we would call the shared dataset previously.

We would have a pbix file where we would bring in the dataflow (which is used over the SQL datamart, and we do all the friendly naming in the dataflow).

The shared dataset contains the model and measures (I don't use calculated columns as they can bloat the model).

The pbix file would be published to service. Then report pbix files are created over the top of the published dataset. In this example there are two pbix files.

Datamarts allow you to just have the one report pbix file instead.

Premium or PPU only. So as a user you have to understand that with Power BI Pro this isn't a feature we can use.

Datamarts are about self service data analytics. Bridging the gap between business users and IT. How do we create the data warehouse without having to go to central IT?

No Code Low Code

But does it mean you don’t have to create your database and ELT inside Azure?

There is still the need to create full Enterprise solutions with SQL datamarts and warehouses.

Just like with dataflows, transforming from OLTP (or data sources that aren't even OLTP sources, just scattered data sources) to an OLAP schema can be very memory and processing intensive.

Creating a datamart with better control and governance should still be done pre Power BI for larger, more complex projects.

So what other use cases and plus points are there for the datamart?

Data Refresh

Another good use case for the datamart is that datamarts refresh the dataflow and then the dataset. No need to use APIs to run the datasets straight after the dataflows, or to set up refreshes in Power BI for both, guessing how long the dataflow will take to run.

Our Datamart users

This is a great option for people who use Macs and can't use Desktop. It enables a SQL endpoint for you.

Datamarts are geared towards self-service: the Citizen Data Analyst.

“A person who creates or generates models that leverage predictive or prescriptive analytics, but whose primary job function is outside of the field of statistics and analytics.”


Would you use the Datamart in an Enterprise setting?

In an enterprise setting you have Data Engineers and developers. You will have a BI team as well as analysts. There is a place for the data mart for the self service bronze approach. Still with the aim to move to the more governed approach of having the logic set in a SQL Database centrally.

Our analysts creating self service probably aren’t creating star schemas and fully attempting to transform within the dataflow. This will still need to be done by the BI Devs.

However, it's probable that without the datamart, all the relationships and measures were created inside one pbix file, and there may not be a SQL Database layer. Just datasets created from files etc.

The datamart allows for a better governed blended approach

Would a Developer or a data engineer use a datamart?

The BI Developers and Data Engineers are probably working outside of Power BI, in the SQL Database and with Data Factory or other ETL packages. However, they can now leverage the datamart features if they want to quickly look at the data for a project.

The Datamart model

So how does this change our datasets and dataflow models above?

We can see how the Datamart unifies the dataflow and the dataset that is usually created in the shared pbix files. It also raises lots of questions.

  • Do we still create dataflows separately?
  • What is this new layer, the SQL Database?
  • If we have our Datamart in SQL do we need to use the datamart in Power BI?

The Datamart SQL Database layer

Dataflows store the data in a data lake.

Datamarts are stored in an Azure SQL Database. You will hear this being called the data warehouse. When we think of the DW we think in terms of star schemas.

If your logic is complex and the datasets are large, it's always best to use technologies outside of Power BI (Data Factory, SQL Database).

The data warehouse being spoken about here is simply data storage, like a staging layer in the Azure SQL database. Our users here probably don't understand how to create OLAP schemas, so you can see this as your staging layer.

Then you have the dataset layer with the relationships and calculations. So the SQL layer is the middle layer between the dataflow and the dataset.

But what can you do with the SQL Layer and what can’t you do?

You can't write DDL (ALTER, CREATE) or DML (INSERT, UPDATE, DELETE etc.) queries. Just DQL (SELECT).

So you can’t write stored procedures or do any real transformations within SQL. This still has to be done in the dataflow. You can only query it.

The SQL Database is not part of the transform layer

How to set up the Datamart in Power BI service

New DataMart

At the moment you can set this in the Admin tenant settings. You either allow the whole organisation to use datamarts or no one. Hopefully this will change soon so you can allow a small group of users to test the functionality.

I will do another example post soon but basically, you can create the model (really great for Mac users who can’t use Desktop)

And you can also write measures in DAX. My main concern here is that simple base measures are fine, but for complex ones I always test them against a visual, and you don't have the ability to do that here.

Also, you can't create calculated columns or calculated tables. This is a good thing: you don't want to be creating these anyway, as they bloat your model due to non-compression.

Behind the scenes a managed SQL Server instance runs the SQL layer, and you still have the Power BI columnar data store layer for the dataset.

Row-level security can also be done here, at the SQL layer and the dataset layer. (Two layers are created by applying security on the dataset as you would usually do, but in Service, not in Desktop.)

Ad hoc analysis can be done in Power Query by the user on the SQL layer, and if you know SQL you can write T-SQL too within Power Query.

You can also take your SQL Endpoint into SSMS for example (Read Only)

You can manage Roles in the Datamart and Assign Internal and External Users to the Role. Or Share the Endpoint with them if XMLA endpoints are on.

This is a really exciting new development for the self-service side of Power BI. We now need to understand where it sits, who our users are, and how we can apply it to projects.


If you create reporting in Power BI Service at the moment you can't publish to other workspaces or tenants. That's where a pbix file comes in that is separate from Service, and you can republish to other tenants. How will the datamart help with this kind of functionality?

What are the future developments of the datamart going to be? For example slowly changing dimensions, monitoring, version control?

Will this cost any more money over having a Premium license?

Will the SQL layer ever become part of the transform functionality?

Azure Logic App – Copying a file from Sharepoint to a Data Lake

I have been asked to set up a Logic App in Azure (that is Power Automate, for anyone outside Azure) to copy specific file(s) from a SharePoint folder and add them to an Azure Data Lake.

The first example file is around 16,00 rows and not likely to grow too significantly. This is the same with the other files.

There is a specific use case behind this First logic app:

  • The data in the csv file(s) is updated every day, so the file name remains the same
  • We need to copy the file and overwrite the file in the data lake every day, after the task to update the SharePoint file has been done (around 5PM every day)
  • We want the Logic App to run via Data Factory
  • Once the logic app has run we want to trigger the pipeline to populate the SQL database from the file in the data lake.

Set up the Logic App

In Azure, go to Logic App and New.

Log Analytics: to get richer debugging information about your logic apps during runtime

Consumption Plan: Easiest to get started and fully managed (Pay as you go model). Workflows increase slowly or are fairly static

Standard Plan: Newer than the consumption plan. Works on a single tenant. Works on a flat monthly fee which gives you potential cost savings.

Create the Logic App

Once you have added tags and created the resource, it's time to create the Logic App.

Because we want to trigger it from Azure Data Factory, we go for the “When a HTTP request is received” trigger.

The HTTP Post URL will be used in Data Factory to trigger the Logic App.

I have added a JSON schema that supports some of the important information for this project: the container for the data lake, folder, file name, and isFolder (which becomes more important a little later).

{
    "type": "object",
    "properties": {
        "Container": {
            "type": "string"
        },
        "fileName": {
            "type": "string"
        },
        "folder": {
            "type": "string"
        },
        "isFolder": {
            "type": "boolean"
        }
    }
}

List Folder

Now we want to list the SharePoint folder, so create a new step and search for List folder.

Returns files contained in a Sharepoint Folder.

Next you have to Sign into Sharepoint with a valid account that has access to the Sharepoint site.

Here is where we have a question. For this test my own username and password have been used, but obviously I change my password at certain points, which means this will need manually updating when that happens.

What we need is a way of logging into SharePoint that isn't user related and that we can use within the Logic App. This needs further thought and investigation.

When you log in, you create a SharePoint API connection in the Azure resource group.

To get the site address you can go into SharePoint, click on the … against the file and copy the link.

The link needed amending slightly because it needs to be

If you have access you should then be able to click the folder against File Identifier and select the correct area

For Each

Next step: For Each. For each ‘Body’ item from the List folder step we get the file content. Go to the next step and choose the For Each condition (because there will be multiple files).

Get File Content

Now We want to Get File Content From Sharepoint

Gets File contents using the File Identifier. The contents can be copied somewhere else or used as an attachment

You need to access the same Sharepoint site address as before. Then click on File identifier and choose ID from the Sharepoint Dynamic Content pop up

So here we can see that from the List folder step we have lots of file metadata we can use, like DisplayName, ID, LastModified etc.

We know we need ID for Get File Content

We are at a point where we can run this now as a test.

Note that so far we have this set up

but we hit specific issues

Status 404 File not found

cannot write more bytes to the buffer than the configured maximum buffer size of 10457600

So we have two issues to resolve and after a bit of help on the Q&A Forums we find out that:

List Folder “Returns files contained in a Sharepoint Folder” actually also returns folders, which error because they are not files.

Logic Apps aren't really set up for large files. There doesn't appear to be any way past the size issue, so we need to check our files and also think of ways to bring through smaller datasets if need be.

Thankfully our files are way below the threshold and the business thinks that they won’t increase too much.

So here is where we can start applying criteria, which we want to do anyway because we only want certain files.

  1. If it's a folder we don't want to use it
  2. If it's over 10457600 bytes in size we don't want to use it
  3. Only bring through files called…….

So we need to change our For Each

Within For each add a new step and search for Condition

And add your conditions (And Or)

Then you can move the Get File content into True

So if isFolder is false and the size is less than 10457600, we can grab file A OR file B.
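The condition block above boils down to a single predicate over each item's metadata. Here is a minimal sketch of it in Python; the metadata field names (IsFolder, Size, DisplayName) come from the List folder step, and the two file names are hypothetical placeholders for "file A OR file B".

```python
# Sketch of the For Each condition: skip folders, skip anything over the
# Logic App buffer limit, keep only the named files.
MAX_BYTES = 10457600  # the configured maximum buffer size hit above

WANTED = {"FileA.csv", "FileB.csv"}  # placeholder file names

def should_copy(item: dict) -> bool:
    """Mirrors the condition: IsFolder is false AND Size < limit
    AND (DisplayName is file A OR file B)."""
    return (not item["IsFolder"]
            and item["Size"] < MAX_BYTES
            and item["DisplayName"] in WANTED)

# A folder entry, an oversized file, or an unwanted name all fail the test,
# so only the wanted files reach the Get File Content / Create Blob steps.
```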

When you now test this Logic App, Get File Content should succeed, with most items not even hitting the criteria.

Create Blob

Finally within the True section we need to add the file to our Data Lake.

Search for Create Blob

Here you have to sign into your Blob Storage which again creates another API Connection in Azure

You have to supply the storage account name and choose an authentication type. Access Key has been used, with the details added here. Normally in Data Factory the access key is obtained through a Key Vault, so more information is needed to come up with the most secure way of doing this. There are two other authentication types to choose from.

More investigation is needed into these other approaches.

Now we can do a full test of the Logic App

Testing the Logic App

When you trigger the logic app

The Body contains a long list of every object. Really handy to know what the details are inside this action.

To test, this was copied into a Word document.

Next comes the Get File Content

Now most of the files don’t satisfy the condition.

Next was clicked to get to a file in Get File Content (the first one appeared as number 32).

And now we can see the Body of the Create Blob. (This happens for every file specified in the criteria.)

And if you use the Microsoft Storage Explorer app you can check that they have indeed been updated (either it's a new file or it updates what is already there).

Data Factory

Now we have saved the Logic App we want to trigger it in Data Factory

Create a pipeline and choose a web activity

Copy the URL from the Logic App and paste here

For the Body I simply used the JSON from the start of this article.

Now you can trigger this pipeline along with all your other pipelines to run the Data into your Data Lake and then into SQL to be used for Analytics.
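What the Web activity does can be sketched outside Data Factory too: POST the JSON body (matching the schema at the start of this article) to the Logic App's HTTP POST URL. The container, folder and file name values below are placeholders, and the URL is truncated deliberately.

```python
# Sketch of the Web activity: POST a body matching the trigger schema to the
# Logic App's HTTP URL. All values here are placeholders.
import json

body = {
    "Container": "raw",          # placeholder data-lake container
    "folder": "sharepoint",      # placeholder target folder
    "fileName": "FileA.csv",     # placeholder file name
    "isFolder": False,
}

payload = json.dumps(body)
# import requests
# resp = requests.post("https://prod-00.uksouth.logic.azure.com/workflows/...",
#                      data=payload,
#                      headers={"Content-Type": "application/json"})
```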

Power BI February 2022 Updates Dynamic M Query Parameters

Now supports SQL Server and more data sources

But what are Dynamic M Query Parameters and what does this mean?

It feels like they have been upgraded to use with direct query data sources so you can restrict the amount of data being asked for at the lowest level.

Lets have a look at a simple example using Taxi data from a Microsoft learning path.

First of all you need to open Power BI – Options and Settings – Options

It's in Preview, so make sure that is ticked before continuing.

Get Data Azure SQL Database (The guidance mentions SQL Server but it seems that both can be used for this test)


Then go to Transform data.

Right-click on TripFares to get to the Advanced Editor.

Source = Sql.Database("", "taxi-data-db"),
dbo_TripFares = Source{[Schema="dbo",Item="TripFares"]}[Data]

Its currently bound to a table but we need to bind it to a query for this process.

Click the cog against source.

Go into Advanced Options and add the SQL Statement

SELECT * FROM dbo.TripFares

And then go back and look at advanced editor

Source = Sql.Database("", "taxi-data-db", [Query="SELECT * FROM dbo.TripFares"]),    
dbo_TripFares = Source{[Schema="dbo",Item="TripFares"]}[Data]

So now it's nearly bound to a query, but you will note that it looks like the table is erroring.

You can go back to Advanced Editor and change to

Source = Sql.Database("", "taxi-data-db", [Query="SELECT * FROM dbo.TripFares"])
in
    Source

We only need the query, not the dbo_TripFares step.

Now we can add the Dynamic M Query parameters. I will go for an easy one first as a demo.

And then I change the advanced code again

Source = Sql.Database("", "taxi-data-db", [Query="SELECT * FROM dbo.TripFares Where payment_type = '" & paramPaymentType & "'"])

Note the new WHERE Clause that concatenates the value in our parameter

It will read in SQL: SELECT * FROM dbo.TripFares WHERE payment_type = ‘CRD’

When it runs the first time you are asked to approve and you can actually see the SQL its going to use which is good. (Note I had to change to CSH to get the message up but I am running with CRD)

When it comes through its restricting to the selected payment type

We are going to change the code again

filterQueryPaymentType = "SELECT * FROM dbo.TripFares Where payment_type = '" & paramPaymentType & "'",
Source = Sql.Database("", "taxi-data-db", [Query=filterQueryPaymentType])

This sets the SQL command first, then passes the filter query into the data source.
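The M above builds the SQL text by plain string concatenation before handing it to Sql.Database. A small Python mirror of that concatenation makes the shape of the final query easy to check; the function name is mine, and the value comes from the bound parameter (later the slicer), not free text.

```python
# Sketch of the concatenation the M query performs: the Dynamic M Query
# parameter value is spliced into the WHERE clause before execution.
def build_filter_query(param_payment_type: str) -> str:
    return ("SELECT * FROM dbo.TripFares "
            "WHERE payment_type = '" + param_payment_type + "'")

# With the parameter set to CRD this is exactly the SQL shown above:
assert build_filter_query("CRD") == \
    "SELECT * FROM dbo.TripFares WHERE payment_type = 'CRD'"
```

Because the values later come from a bound lookup table rather than a text box, the set of possible queries is constrained to known payment types.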

Now we know that the query works. Let's use it in Power BI reporting and assign it to a table.

This will need a lookup table of all the payment types to work

I am going to simply create the reference table in M

Source = Sql.Database("", "taxi-data-db", [Query="SELECT DISTINCT payment_Type FROM dbo.TripFares"])
in
    Source

Close and Apply

Now bind the table to the parameter in Modelling tab

Click on Payment_Type column. Advanced. Bind to parameter

Click Continue

A multi select is not going to be used for this demo

I have added a quick table. The metrics have come through as strings, and there will be lots of things you need to test in Direct Query mode, but I will ignore that for the time being.

I dragged in Payment type from the Payment Type lookup into a slicer.

Click the slicer and see your data change. Every time you click the slicer a direct query will happen, but only for the selected payment type, hopefully making things much quicker.

And there you go. You have set up a restricted direct query. This will help with any direct query reports you need to create based on real time data.

You are still hitting the SQL DB a lot though, so this would need thinking out.

And remember, Direct Query doesn't give you the full Power BI reporting suite, so your reports may be more basic. And usually I like to work with star schemas, but here we have the extra complexity of lookup tables to work with the parameters.

I will be looking at a date time example soon hopefully. This is clearly an extremely important piece of the Direct query real time puzzle.

Power BI Admin APIs to return a list of email subscriptions

Get Dashboard Subscriptions brings back a list of everyone who has subscribed to a dashboard

What is a Power BI Subscription?

Subscriptions are a great way to assign yourself and other users to get emails regarding report content.

There are certain governance rules we follow.

  1. Report viewers view content via an App. We don't want report viewers coming into the App workspace; we want them to see carefully selected and brought-together content.
  2. If we use subscriptions, we want to push through a really nice screenshot of a report that changes and gets the users wanting to come and see more content within that App. Therefore we always have a report or dashboard with visuals that don't need scroll bars, to engage the viewer so they want to see more.
  3. Because of this, we want to be able to subscribe people to App content.

Go to an App. Note you can add your Subscription here which is a link to the dashboard

For this example, the App dashboard is subscribed to.

Then we go to Try it out from the Microsoft API page.

Dashboard Subscriptions

and try the API

Add the Dashboard ID to parameters
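Outside the Try-it page, the same call can be sketched in a few lines. The endpoint is Admin - Dashboards GetDashboardSubscriptionsAsAdmin; the dashboard id and bearer token below are placeholders.

```python
# Sketch of calling the admin subscriptions API programmatically.
API_BASE = "https://api.powerbi.com/v1.0/myorg"

def dashboard_subscriptions_url(dashboard_id: str) -> str:
    """URL for Admin - Dashboards GetDashboardSubscriptionsAsAdmin."""
    return f"{API_BASE}/admin/dashboards/{dashboard_id}/subscriptions"

# import requests
# subs = requests.get(dashboard_subscriptions_url("your-dashboard-id"),
#                     headers={"Authorization": f"Bearer {token}"}).json()
# subs["value"] lists each subscriber and the subscribed artifact.
```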

But this is where the logic is not quite working (the hope is that this will be resolved fairly quickly). The above API doesn't give you information back if you subscribe via the App, only when you subscribe to the actual dashboard in the workspace.

We want all our report viewers accessing the pre built app so this is where the information is most required.

When the user is added to a dashboard subscription in the workspace, the API is tested again.

What this can show us is anyone in the workspace who has subscribed to the actual dashboard. We want all viewers with App access.

Get report Subscriptions as Admin

This is the same as above but with reports

Get user Subscriptions as Admin

I get my user ID from Azure Active Directory

And see what I'm subscribed to. But again, only workspace content.

Logically, I feel like our viewers should be subscribing through the Apps.

This is really good stuff, but I feel like they need to resolve the issue with Apps. Apps are the go-to areas for users to view content, so this is where we want people to subscribe too.

If you look at the information coming back, we can see the artifact type is report, but there is nowhere that mentions whether the report is in an App or in the workspace, and I feel this is actually important information. I only know because I have tested against both the App and the workspace.

If this could be resolved these APIs would be really useful to help us understand the subscription uptake.

Power BI Sparklines

Sparklines are small charts you can add to a table or Matrix and are new for the start of 2022.

Above we have a Matrix showing Products by country and I would like to also see this measure on a timeline.

Select the metric Value : Add a Sparkline Button

Here we choose the measure Total Sales which has already been summarised. The Date will be used as the X Axis so we can see the trend across time.

Rename to Sparkline if you want (double click the value to highlight as editable) and now you can see the trends

And you can also format the spark line (Right at the bottom of formatting)

So Sparklines are great for adding to your tabular information. I think the line charts are pretty good.

Current restrictions

Only 5 are supported per visual, so if you have more categories that won't work.

Apparently date hierarchies don't work at the moment, but they will in the future.

It would be good to assign conditional formatting to the colours. So going up could be green, staying around the same amber, and going down red.

They really do bring tables to life though, and are a really great addition to Power BI.

Investigating the Power BI Scanner APIs Part 1

Delegating Permissions

Part 1 is an introduction to the Scanner APIs and how you delegate permissions in order to not have to use the Power BI Admin account.

The Power BI Scanner APIs are fairly new, and there have been a lot of updates in the September and October 2021 Power BI releases.

These Scanner APIs scan, catalog and report on all the metadata of your organisation's Power BI artifacts.

The Scanner APIs are Admin REST APIs that can extract tenant-level metadata, supporting the Power BI good-governance pillars of discoverability and monitoring.

So the scanner APIs are a set of Admin REST APIs.

What is an API?

“API is the acronym for Application Programming Interface: a software intermediary that allows two applications to talk to each other.”

What is a REST API?

“An application programming interface that conforms to the constraints of REST (Representational State Transfer).”

Scanner APIs

When you look at the list of Power BI APIs these are the ones specific to the Scanner API group

Your Power BI Admin needs to set this up. I have used the following information for this

And as usual I’m extremely thankful to everyone providing such fantastic information.

Service Principal support for read-only Admin APIs

Service Principal support for the Scanner admin APIs became available in September 2021.

“Service Principal is an authentication method  that can be used to let an Azure AD applications access Power BI APIs. This removes the need to maintain a service account with an admin role. To allow your app to use the admin API’s, you need to provide your approval once as part of the tenant settings configuration.”

The Scanner APIs and Service Principal support could well be a game changer for governance, if everything is actually in place.

So we are able to delegate permissions to use the APIs

I followed the documentation…

Create an Azure AD app.

So, as a prerequisite, here is my list of everything that needs to be done, based on the documentation I read.


Create Azure AD App

  • Go to Azure AD
  • App registration – New Registration
  • Make sure it's a web application

Assign Role

  • You will need to choose an Azure area for this project (subscription and resource group)
  • Go to the Azure level (in my case the level is the resource group, not the subscription)
  • Go to IAM and add a role assignment for the app
  • I selected Contributor

Get Tenant and AppID

  • Go back to App
  • Get tenant ID
  • Get Application (Client ID)
  • Both will be stored in the application code at a later date, so keep this in mind.

Create Application Secret

  • App Registration – Select the app again
  • Client Secret – New Secret
  • I have added the secret to key vault for use later (the Key Vault is in the same Resource group as selected above)

Configure Access Policies on Resources

  • I have a Key vault.
  • Added the Service principal of the app in Access Policies with Get and List on Secrets

Create Security Group in Azure AD 

  • Go to Azure AD Groups
  • New Security Group
  • Add the App as a member in Members

Enable Power BI service Admin Settings

  • Power BI Admin portal – Tenant settings (you must be a Power BI admin)
  • Allow Service principal to use ReadOnly Power BI admin APIs
  • Add the Security Group created above which has the service principal as a member
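Once that tenant setting allows the security group (and so the service principal) to use the read-only admin APIs, a token can also be acquired from code rather than Data Factory. Here is a minimal, stdlib-only sketch of the OAuth2 client-credentials flow; the tenant ID, client ID and secret are hypothetical placeholders, and in practice the secret would come from Key Vault as described above.

```python
# Sketch: acquire an Azure AD token for the service principal via the
# OAuth2 client-credentials flow. All IDs and the secret are hypothetical.
import json
import urllib.parse
import urllib.request

# Resource scope used by the Power BI REST APIs for app-only tokens
POWER_BI_SCOPE = "https://analysis.windows.net/powerbi/api/.default"


def token_request(tenant_id: str, client_id: str, client_secret: str):
    """Build the token endpoint URL and form-encoded body (no network)."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": POWER_BI_SCOPE,
    })
    return url, body


def get_token(tenant_id: str, client_id: str, client_secret: str) -> str:
    """POST the request and return the bearer token (network call)."""
    url, body = token_request(tenant_id, client_id, client_secret)
    with urllib.request.urlopen(url, data=body.encode()) as resp:
        return json.load(resp)["access_token"]
```

The returned bearer token is then sent in the `Authorization` header of each Scanner API call, which is exactly what the Data Factory linked service does for us behind the scenes.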

Start using read only admin APIs?

The documentation finishes at this point, so how do you use these APIs?

We will look at this in part 2, but at this point we want to be able to set them up for use in a Data Factory.

Linked Service

Note that here we use the AAD Service Principal

We added the App Secret into Key Vault which we used here

The Service Principal ID (blanked out here) is taken from the app, under Azure Active Directory – App registrations

And after a test, it's successful.

We can go on to swap out the authentication type of the web activities later in the process, like this one for getting workspace info (Scanner API 2).

Later we will look at how to set up the Scanner APIs in a Data Factory, but in the meantime here is a possible error you can get when attempting to work with the Service Principal:

Operation on target Post WorkspaceInfo failed: GetSpnAuthenticationToken: Failed while processing request for access token with error: Failed to get access token by using service principal. Error: unauthorized_client, Error Message: AADSTS700016: Application with identifier 'IDENTIFIER DETAILS' was not found in the directory 'Peak Indicators'. This can happen if the application has not been installed by the administrator of the tenant or consented to by any user in the tenant. You may have sent your authentication request to the wrong tenant. 

After closer inspection, I had missed the last number from the Application/Client ID, so this was a quick fix.

When you use the Scanner APIs in Data Factory you use all four in sequence. There is lots more to think about here: the Data Factory set up, and how you switch to modified workspaces so you only take updates after a certain time.
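To make the sequence concrete, here is a hedged stdlib-only sketch of the four calls, assuming a service-principal bearer token is already in hand; the workspace IDs and batch flow are illustrative, not the exact Data Factory pipeline.

```python
# Sketch: the four Scanner APIs in sequence (modified -> getInfo ->
# scanStatus -> scanResult). The token and workspace IDs are hypothetical.
import json
import time
import urllib.request

BASE = "https://api.powerbi.com/v1.0/myorg/admin/workspaces"


def modified_url(since_utc=None):
    """1. GetModifiedWorkspaces - optionally only those changed since a UTC time."""
    return f"{BASE}/modified" + (f"?modifiedSince={since_utc}" if since_utc else "")


def batches(workspace_ids, size=100):
    """PostWorkspaceInfo takes a limited number of workspaces per call, so batch the IDs."""
    return [workspace_ids[i:i + size] for i in range(0, len(workspace_ids), size)]


def _get(url, headers):
    """GET a JSON payload with the bearer token attached (network call)."""
    return json.load(urllib.request.urlopen(urllib.request.Request(url, headers=headers)))


def scan(token, workspace_ids):
    """2. PostWorkspaceInfo, 3. poll GetScanStatus, 4. fetch GetScanResult."""
    headers = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}
    for batch in batches(workspace_ids):
        body = json.dumps({"workspaces": batch}).encode()
        req = urllib.request.Request(
            f"{BASE}/getInfo?lineage=True&datasourceDetails=True",
            data=body, headers=headers)
        scan_id = json.load(urllib.request.urlopen(req))["id"]
        while _get(f"{BASE}/scanStatus/{scan_id}", headers)["status"] != "Succeeded":
            time.sleep(5)  # wait before polling again
        yield _get(f"{BASE}/scanResult/{scan_id}", headers)
```

The same pattern maps onto a Data Factory pipeline: one web activity per endpoint, with an Until loop doing the scanStatus polling.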

So, lots more to come.

Power BI DAX – Create a percentage across days of week (ALLEXCEPT, ALL, DIVIDE)

We just looked at the following request

We have sold specific quantities of products. Can we look at a week for a year in a slicer and see the percentage of items sold on each day? Which days do we sell the most products overall?

We have the following model (just the parts that will be required for this exercise).

And thanks to the Power BI Forums for giving me some help on figuring this out.

What we want to end up with is:

  • A year Slicer
  • Product category on rows of a matrix
  • The Day Name from Date on the Columns of the Matrix
  • And % Quantity for that day based on all the days of the week as the value

Power Query Editor

At first I believed that I needed to add some kind of ranking order for the DAX to use, and as this will be a column it's better to do in Power Query Editor (or in the source DB).

To speed things up I created a Year Week column in the SQL DB, consisting of values like the following:

  • 202101
  • 202102
  • 202135

So weeks 1 to 9 are padded out with a zero. I then formatted this as a number field and called it Year Week. All seven rows within each week in the date table have the same Year Week number applied.
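The padding logic is simple enough to sketch outside SQL; a tiny illustration (the function name is mine):

```python
def year_week(year: int, week: int) -> int:
    """Combine year and week number, padding weeks 1-9 with a leading
    zero so the combined values sort correctly: 2021, 1 -> 202101."""
    return int(f"{year}{week:02d}")
```

Without the zero padding, week 10 (202110) would sort before week 2 (20212) as text, which is exactly what the padded numeric column avoids.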


I can now create the measure. Let's have a look at it in a bit more detail.

Order Quantity by Week % =
VAR varOrderQtyWeek = CALCULATE(SUM(FactResellerSales[Order Quantity]), FILTER(ALL(DimDate), DimDate[Year Week]))
RETURN DIVIDE(SUM(FactResellerSales[Order Quantity]), varOrderQtyWeek)

And this measure is then set as a percentage.

First of all we create a variable. It gets the sum of Order Quantity, and it filters by using ALL on the Year Week column we just created in the date table.

“ALL Returns all the rows in a table, or all the values in a column, ignoring any filters that might have been applied (On Year Week).”

And we return the sum of Order Quantity (which in the current filter context is, for example, Friday and Accessories) divided by the variable's sum of Order Quantity.

This appears to work

We have implicitly filtered by Product Category and Day Name of Week.

The question here is: did we even need to use ALL on the Year Week column in Date? Could we have just used the entire date table?

Order Quantity by Week % =
VAR varOrderQtyWeek = CALCULATE(SUM(FactResellerSales[Order Quantity]), ALL(DimDate))
RETURN DIVIDE(SUM(FactResellerSales[Order Quantity]), varOrderQtyWeek)

This works in the same way and makes sense to me. We are using all the dates in the date table, and removing the FILTER will produce a faster DAX query.

It's looking at the entire data set, and we can see that everything totals to 100%. So for us, taking all years into account, Thursdays look like a good day, especially for Accessories.

However, we don't want to analyse the full data set, and when we add in a year slicer the logic fails.

As you can see, the story is still true but it's telling the wrong story. Now for Accessories we have 31% of sales happening in the selected year, contributing to smaller percentages across the days.

So we want to change the DAX to accept the Year Slicer

Order Quantity by Week % =

VAR VarALLEXCEPT = CALCULATE(SUM(FactResellerSales[Order Quantity]), ALLEXCEPT(DimDate, DimDate[CalendarYear]))

RETURN DIVIDE(SUM(FactResellerSales[Order Quantity]), VarALLEXCEPT)

And this now appears to work, taking just the year 2012 into consideration, because we are using ALLEXCEPT.

Thursday is definitely a good day.

“ALLEXCEPT removes all context filters in the table except filters that have been applied to the specified columns.”

So we create a variable to get the sum of Order Quantity with ALLEXCEPT on our date slicer column, which in this case is Calendar Year.

So we are using 2012 from the date dimension in the slicer, Bikes from Product, and Thursday as the Day Name from Date in the matrix.

We divide the sum of Quantity which has all the filters applied: Bikes, Thursday, 2012.

By the sum of Quantity with ALL, except we do apply the Calendar Year filter.

  • DIVIDE(50, 100)
  • 50 is Bikes on a Thursday in 2012
  • 100 is Bikes in 2012
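The ALL versus ALLEXCEPT arithmetic can be mimicked outside DAX. Here is a stdlib-only sketch with made-up quantities (the numbers are chosen to match the DIVIDE(50, 100) bullets above, not taken from AdventureWorks): the ALLEXCEPT denominator is the total for the sliced year only, while ALL would divide by the grand total across all years.

```python
# Hypothetical quantities: (year, category, day, quantity)
SALES = [
    (2012, "Bikes",       "Thursday", 50),
    (2012, "Bikes",       "Friday",   30),
    (2012, "Accessories", "Thursday", 20),
    (2013, "Bikes",       "Thursday", 200),
]


def pct_allexcept_year(year, category, day):
    """DIVIDE(cell total, year total) - mimics ALLEXCEPT(DimDate, CalendarYear)."""
    cell = sum(q for y, c, d, q in SALES if (y, c, d) == (year, category, day))
    year_total = sum(q for y, _, _, q in SALES if y == year)
    return cell / year_total if year_total else 0  # DIVIDE's safe-divide behaviour


def pct_all(year, category, day):
    """DIVIDE(cell total, grand total) - mimics ALL(DimDate), ignoring the year slicer."""
    cell = sum(q for y, c, d, q in SALES if (y, c, d) == (year, category, day))
    grand = sum(q for *_, q in SALES)
    return cell / grand if grand else 0
```

With these figures, Bikes on a Thursday in 2012 is 50 out of the 100 items sold in 2012, giving 50% under ALLEXCEPT, whereas the ALL version would dilute it against 2013's sales too, which is exactly the wrong-story behaviour seen before fixing the measure.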

DAX is something I do struggle with a little. You think you understand it, then you turn your head from the screen and it's gone again.

But hopefully this goes a little way towards understanding how the DAX in this specific context has been approached.