Power BI Premium Per User now in preview (December 2020 updates)

One of the main benefits of Power BI Premium is the ability to share with hundreds of users who don't each need their own Power BI Pro license at £7.50 a month.

At a certain scale, Premium becomes less costly than giving everyone who needs one a Pro license. The rule of thumb often quoted is that you need 500 or more users for Premium to make sense.

However, it's not just about having hundreds of people viewing your reports. There are many more reasons to have Premium: it includes lots of enterprise-style features that are not part of Pro, such as larger and faster models.

This is a real issue for smaller businesses who can't take advantage of Premium per capacity (P1, P2 or P3 nodes).

However, there may now be an option to have Premium capabilities without having a full Premium capacity license.

Premium Per User is targeted at small and medium businesses because, if you are not a large enterprise, the Premium price point of just under £4K every month can be eye-watering.

Premium Per User extends Power BI's licensing model

Let's have a look at how some features compare between Pro, Premium Per User and Premium Per Capacity.

Model Size

  • Power BI Pro: 1 GB per dataset (workspace max 10 GB)
  • Premium Per User: 10 GB per model
  • Premium Per Capacity: 10 GB per model (can grow up to 12 GB when refreshed)

Refresh Rate

  • Power BI Pro: 8 refreshes a day
  • Premium Per User: 48 refreshes a day
  • Premium Per Capacity: 48 refreshes a day (Gen 2 has significantly improved refreshes)

Paginated reporting

Paginated reporting comes with Report Builder, the free tool for creating paginated reports. You can build paginated reports over a model created with Power BI, or over other data sources, and publish them to a Power BI workspace in the same way as a Power BI report.

Use a paginated report if you need printing or PDF generation; they are great for sales invoices, for example. Power BI reports are better suited to exploring the data.

  • Power BI Pro: can use the free Report Builder but can't publish to a Pro workspace
  • Premium Per User: Yes
  • Premium Per Capacity: Yes

AI Capabilities

For example: applying ML models in dataflows, impact analysis in the service, and AI Insights in the Power Query Editor and dataflows.

  • Power BI Pro: No
  • Premium Per User: Yes
  • Premium Per Capacity: Yes

Advanced Dataflows

For example, DirectQuery over dataflows and the ability to create computed and linked entities (transformations that perform in-storage computations).

  • Power BI Pro: No
  • Premium Per User: Yes
  • Premium Per Capacity: Yes

Usage-based Aggregate Optimisation

Aggregations help you manage large tables. You keep a summarised version of the data, for example at year level, which is imported into Power BI, while the detailed data stays in DirectQuery mode; when you want to drill down to lower-level detail, the query moves to the detailed DirectQuery data. Aggregations should generally only be used for really large models (a rough SQL sketch of the idea follows the list below).

  • Power BI Pro: Yes?
  • Premium Per User: Yes
  • Premium Per Capacity: Yes
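As an illustration only (the table, view and column names here are hypothetical), the summarised object below is the kind of thing you would import into Power BI as the aggregation table, while the detail table itself stays in DirectQuery for low-level drill down:

-- Hypothetical year-level summary over a large detail table.
-- Import this into Power BI as the aggregation table; keep dbo.FactSales in DirectQuery.
CREATE VIEW dbo.FactSalesAggYear
AS
SELECT
    d.CalendarYear
  , f.ProductKey
  , SUM(f.SalesAmount)   AS SalesAmount
  , SUM(f.OrderQuantity) AS OrderQuantity
FROM dbo.FactSales AS f
INNER JOIN dbo.DimDate AS d
    ON f.OrderDateKey = d.DateKey
GROUP BY
    d.CalendarYear
  , f.ProductKey;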

Deployment Pipelines for Application Lifecycle Management

  • Power BI Pro: No
  • Premium Per User: Yes
  • Premium Per Capacity: Yes

XMLA Endpoint Connectivity

XMLA is the XML for Analysis protocol. It is used to connect to the Analysis Services engine behind Power BI, which gives Power BI datasets the features of Analysis Services.

A major draw of this feature is the ability to create your shared, single version of the truth data model within Power BI, and allow that model to be used by other analytics tools, not just Power BI.
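For example, tools such as SQL Server Management Studio, Tabular Editor or DAX Studio can connect to a Premium or Premium Per User workspace through its XMLA endpoint, which takes the form below (the workspace name is a placeholder):

powerbi://api.powerbi.com/v1.0/myorg/Your Workspace Name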

  • Power BI Pro: No
  • Premium Per User: Yes
  • Premium Per Capacity: Yes

Enhanced Automatic Page Refresh

Available as settings within the Power BI Admin Portal.

  • Power BI Pro: No
  • Premium Per User: Yes
  • Premium Per Capacity: Yes

Multi Geo Support

Helps multinational customers deploy to data centres around the world, rather than just the home data centre.

  • Power BI Pro: No
  • Premium Per User: No
  • Premium Per Capacity: Yes

Unlimited Distribution

This is the big one for Power BI Premium: the ability to share content with many users without individual Pro licenses. We will look at this in more detail later.

  • Power BI Pro: No
  • Premium Per User: No
  • Premium Per Capacity: Yes

Power BI Reports On Premises

Using Power BI Report Server. This option gears up an enterprise for moving fully to the Power BI Premium service later.

Updates to the dedicated Power BI Desktop for Report Server (for on-premises usage) are much slower than Power BI Desktop for the service, and you get reduced functionality; for example, no dashboards.

  • Power BI Pro: No
  • Premium Per User: No
  • Premium Per Capacity: Yes

Bring your Own Key

Power BI encrypts data at rest and in process, using Microsoft-managed keys to do so. Premium allows you to use your own keys, which can make it easier to meet compliance requirements and gives you extra control.

  • Power BI Pro: No
  • Premium Per User: No
  • Premium Per Capacity: Yes

Getting Premium Per User

Upgrade to Power BI Pro and then upgrade to a Premium Per User license.

This arrangement runs until general availability.

Do we know what the price point is yet? No. At the moment it's free, but without knowing the actual pricing this is a hard one to take up before general release.

Premium Per User supersedes the Power BI Pro user license, so there is no need for both.

The Power BI Premium Per User Workspace

You need to create your workspace, then go to its settings and assign Premium Per User capacity afterwards.

Only other developers with a Premium Per User license will be able to access the workspace.

This is a major point. If you are a team of four working in one workspace, each user needs a Premium Per User license, so none of your developers with only Pro licenses can work within this workspace.

None of your report users can access the reports via the app without a Premium Per User license either.

Without the price it's incredibly difficult right now to assess how this will affect the business as a whole; essentially it would mean switching everyone up to Premium Per User licenses.

Do we know if Premium Per User will be bundled into the Office 365 E5 offering?

Currently, if you have E5 licenses, Power BI Pro comes with the package. With Premium Per User, will this become part of the E5 package too?

It looks like E5 customers can purchase Premium Per User as an add-on to Pro, but again there is no specific information on this as yet, so it's difficult to tell how this will affect things.

So, as usual, there are lots of pros and cons to this new license, and we need a lot more information on pricing to be able to make any decisions.

But the ability to use the features that have been tempting you across to Premium is really interesting. I thought there might be a case for going through your reporting portfolio to see if you have a mixture of use cases for Pro and Premium Per User, but with dataflows, deployment pipelines and so on it would be difficult to establish a split between the workloads.

This is one to watch.

Azure SQL Database: Publishing from Development to Production, Part 2

The Dev to Prod Process

In part one we set up the process with Visual Studio 2019 and DevOps and moved all our objects across to production. Then, with Data Factory, we moved data from the production data lake into the production Azure SQL database.

We have all the source data in a data lake (and we have confirmed that the production data lake matches the development data lake).

We have a Data Factory in production, deployed through a DevOps release pipeline, so we should now be able to use the production Data Factory to load all the production data into the production SQL database on a regular basis.

What happens when you already have objects and data in the target?

Only the changes will be released, so the next time you release into production you are releasing the delta.

Let's see this in action:

  • The initial SQL database was released into production with Visual Studio
  • A production Data Factory moved all the data into the new objects
  • Now we have an updated dev SQL database

Open your Visual Studio Project

Open the project that was created in the last session.

In SQL Server Object Explorer

You have the Azure server and database. Right-click and Refresh.

You also have the local project DB, which contains all the objects. We can check the schema differences between the project DB and the latest DB within Azure.

Go to the dev SQL database (click on the icon to the left of the server to open it up).

On the Azure SQL database, right-click and choose Schema Compare.

For the target, click Select Target.

Open the local DB project. The database in this case has the same name as your solution in Solution Explorer. (Now I know I should have given my solution the project name and my project a _DB extension to differentiate the two.)

Click OK

Click Compare.

Now you get to see what has been deleted; in this case a table and a procedure have been dropped.

Next we can see changes. If you click on the table change, you get more information about that change in the object definitions. In this case a test column has been added.

This creates a quandary when it comes to loading the data, because this table is already fully populated but the new column will be blank. Is it possible to do a full load for these updated tables with Data Factory, or do we need to look at something more complex?
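For context, the publish step typically applies an added column as a non-destructive change along these lines (a sketch with a hypothetical table name; the column is the test column mentioned above), which is why the existing rows end up empty in that column until they are reloaded:

-- Sketch of the incremental change generated for an added column.
-- Existing rows keep their data; the new column is NULL until it is populated.
ALTER TABLE dbo.ExistingTable
    ADD TestColumn VARCHAR(50) NULL;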

And finally, additions. In this case there are lots of new tables and procedures, and two new functions.

Once happy, click Update and your changes will be published into Solution Explorer.

To check, have a look for some of the new tables, stored procedures and so on in Solution Explorer.

Once completed, you can click the X icon to close the comparison window, and you can save your comparison information.

Rebuild the Project in Visual Studio

Now we want to rebuild our project within Solution Explorer.

Right-click on the project in Solution Explorer and choose Rebuild; this rebuilds all the files.

  • Rebuild rebuilds your entire project
  • Build only builds the changes

Process your Changes with Git

Now it's in your project, you need to process those changes with Git.

In Git Changes, choose Commit All and Push.

And remember to add a message.

These objects should now be in DevOps. You can go to DevOps Repos, then to your database-specific project, and check for the new tables, stored procedures and so on.

My new junk dimension objects are there, so this is all working.

Release the new database objects into the production database

Now all the code is in the repo, we can push the new and updated objects into production with a DevOps release pipeline.

There is already data in my production database. As an initial starting point I do a quick check on a few tables to get a feel for the data.

This SQL script allows you to do a quick check on row counts in the production database:

SELECT
    QUOTENAME(SCHEMA_NAME(sOBJ.schema_id)) + '.' + QUOTENAME(sOBJ.name) AS [TableName]
  , SUM(sPTN.Rows) AS [RowCount]
FROM
    sys.objects AS sOBJ
    INNER JOIN sys.partitions AS sPTN
        ON sOBJ.object_id = sPTN.object_id
WHERE
    sOBJ.type = 'U'
    AND sOBJ.is_ms_shipped = 0x0
    AND sPTN.index_id < 2 -- 0:Heap, 1:Clustered
GROUP BY
    sOBJ.schema_id
  , sOBJ.name
ORDER BY [TableName]
GO

Azure DevOps

Choose the database repository (You should also have a repository for data factory)

Build Pipelines

Go to Pipelines. Before releasing to production, we first have to build all our code into an artifact for the release pipeline.

Click on your project's _DBComponentsCI pipeline (continuous integration) set up in part 1.

Let's remind ourselves of this pipeline by clicking Edit.

We build the solution file from the repo in DevOps, then copy the files to the staging directory, and finally publish the artifact ready for release.

Come out of Edit and this time choose Run Pipeline.

And Run.

Once it has run, there are warnings again, but for the time being I'm going to ignore these.

Otherwise the build pipeline runs successfully.

Release pipeline

Now we have rebuilt the artifact using the build pipeline, go to Pipelines, then Releases, and then to your DB_CD release (continuous delivery).

We have a successful release. I can run the SQL above and check for differences; for a start, there were 39 objects and now there are 43, so you can immediately see that our production database has been updated.
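If you want a quick way to get that object count for the before-and-after comparison, something along these lines works (a sketch; adjust the filter to the object types your project deploys):

-- Hypothetical helper: count user-created objects by type in the target database.
SELECT type_desc, COUNT(*) AS ObjectCount
FROM sys.objects
WHERE is_ms_shipped = 0
GROUP BY type_desc
ORDER BY type_desc;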

The row count shows that we haven’t lost any data. We have simply updated the objects.

Part three will look more in depth at the data side: how we deal with the data in dev and prod.

Power BI Premium Gen 2 First Look

Currently Premium Gen 2 is in preview (January 2021), but it looks really exciting for those with Premium capacity. Let's have a look at Gen 2 and see how it differs from Gen 1.

Gen 1

With Premium Gen 1 we are bound by the number of vCores and by the memory we have. Let's take the P1 SKU as an example.

P1, P2, P3 and P4 are all Premium SKUs; each step up has more vCores for better performance.

SKU = stock keeping unit.

vCores (virtual cores) are the purchasing model that gives you more control over compute and memory requirements.

When it comes to Gen 1 Premium, reports would slow down if there were too many queries running, and if people were using many models at the same time you would have to wait for your time slot.

With a P1 there is only 25 GB of memory shared across the datasets. Once a report isn't being used any more, that memory gets dropped and is added back into the pool.

Report users and report developers are also fighting with report refreshes for that same capacity.

Premium Gen 2

Gen 2 is not generally available yet, but if you have Premium you can switch to Gen 2 in preview.

In the Power BI admin portal (as the Power BI administrator or global admin):

Go to Capacity settings and switch Gen 2 from disabled to enabled.

You can also go back to Gen 1 if you need to, but if you do, make sure you flag up any issues you were having.

Let's have a look at the model compared to Gen 1.

Autoscaling

End users encounter the throttling and performance issues with Gen 1 because they physically only have 4 backend vCores (on a P1). With Gen 2, autoscaling will allow you to deal with spikes. It is not available yet but is coming, and it is helped by the fact that there are other vCores that can be called on.

If you do come up to the 4-core limitation, it may lend you a vCore so your end users don't see the impact.

Previously our admins had to deal with this kind of problem, but this will really help automate away these kinds of issues.

Memory

Datasets can now go over the 25 GB memory capacity. Previously Premium had 25 GB shared across all the datasets; now datasets are gated individually.

This is a fantastic update. We don't have to worry about the collective size of all our datasets.

Refreshes

Previously there was a maximum of six refreshes at any one time; beyond that you could get throttled.

With Gen 2, refreshes get spread out over a 24-hour period and don't impact other queries from users; refreshes just run.

This looks great. People are seeing refreshes of an hour and a half coming down to 10 minutes.

Capacity Usage Metrics

This is coming soon and will include a breakdown by item.

It's a little annoying when you have set up Gen 2 and want to view the metrics to see how everything is working but currently can't.

With Gen 2 we will also be able to work against a chargeback model. This means that we can spread the costs of Premium between distinct areas of an organisation dependent upon their usage.

Workloads

Again, the workload settings aren't fully functional at the moment, but more will be coming.

For example, for dataset workloads we can specify minimum refresh intervals and execution intervals, and we can detect changes in our metrics.

We don't have settings for dataflows and AI yet.

Why go with Premium Gen 2: a checklist

  • Performance benefits
  • End users see faster reports
  • Refreshes: no more refresh bottlenecks, and refresh failures due to throttling are removed
  • Premium Per User
  • Improved metrics will be introduced soon
  • Autoscaling
  • Proactive admin notifications

Why it may be worth waiting until Preview becomes GA

It looks like people are having some issues with dataflows, and there is already a known issue logged about this.

It looks like this might be fixed quickly. In the meantime, a workaround is to move your dataflows out into another workspace and then back in, but hopefully this will get much better.

Questions

Is Premium Gen 2 going to be the same price as Gen 1?

Is there any way to find out how many dataflows you have, if dataflows are an issue?

Will Power BI Pro users still get great functionality?

Power BI – December 2020 – Small Multiples

Small multiples are now part of some of the main visuals, so let's take a look at them using AdventureWorks data.

To take advantage of this at present, you have to switch it on under Options / Preview features.

Column

Using the clustered column chart.

Note that there is a small multiples area in which to place data items.

Currently the small multiple categories, based on Group, are ordered alphabetically. It looks like better sorting will be coming in later releases.

It would be nice to be able to shade the multiples slightly as a background, because Europe and North America are difficult to differentiate.

Also, there are only three groups, so the last grid is empty. I attempted to make the visual long and narrow to try and get all three multiples on the same row, but it wouldn't let me.

As a consequence, it may be worth thinking about only doing small multiples on groups that don't leave you with a blank grid.

The next possible issue is that the CalendarYear axis is only shown on the bottom charts; this may also be confusing when you are looking at a few multiples.

Bar

Be careful what you are adding to your bar chart. This started off as a visual counting Total Quantity by product subcategory. I thought it would be nice to split these up by the product categories Accessories, Bikes, Clothing and Components.

However, this would only work if each small multiple group had only its own product subcategories available to it. Every group shares the same axis list, so we get lots of empty values.

As you can see, this grouping does not work, so you will have to think carefully about it. For now, the small multiple groups really need to share the same axis information to make it work.

This now makes sense, because each group sells the same product categories.

So bar chart small multiples are not suitable for situations where each small multiple group doesn't have the same axis information, because the axis is shared.

Line

This is a good one: the line chart clearly shows where each group starts and ends.

There are four categories, which means there are no empty areas.

So far this is the best example of the small multiple in action.

Area

This is just as successful as the line chart, as they are so similar.

Each group is easily viewed.

After looking at the small multiples so far, I would definitely say that the line and area charts are the best use cases for them; they look good and are easy to understand.

But you need to put some thought into the best data items to display in your small multiple visuals to ensure your users still get the best from the report.

February 2021

Really good news on the small multiples update.

You can now apply formatting to make the visual easier for the user to read.

This resolves a few of my earlier issues about how confusing the column chart looked.

Azure Data Factory – Moving from Development to Production

When working on larger projects we need to merge changes from multiple developers. When all the changes are in the central branch, we can then have an automated process to move development to production.

Smoke tests

In computer programming and software testing, smoke testing is preliminary testing to reveal simple failures severe enough to, for example, reject a prospective software release

Integration testing

Integration testing is the phase in software testing in which individual software modules are combined and tested as a group. Integration testing is conducted to evaluate the compliance of a system or component with specified functional requirements. It occurs after unit testing and before validation testing

Resources Involved with the current Project

  • Azure DevOps
  • Azure SQL Server
  • Azure SQL Database
  • Azure Data Factory
  • Azure Data Lake Gen 2 Storage
  • Azure Blob Storage
  • Azure Key vault

Each resource has its own specific requirements when moving from Dev to Prod.

We will look at each of them separately, along with the security requirements needed to ensure that everything works on the production side.

This post specifically relates to Azure Data Factory and DevOps

Azure Data Factory CI/CD lifecycle

Git does all the creating of the feature branches and then merging them back into main (master).

Git is used for version control.

In terms of data factories, you will have a dev factory, a UAT factory (if used) and a prod data factory. You only need to integrate your development data factory with Git.

The pull request merges the feature branch into master.

Once published, we need to move the changes to the next environment, in this case prod (when ready).

This is where Azure Devops Pipelines come into play

If we are using Azure DevOps pipelines for continuous deployment, the following things will happen:

  • The DevOps pipeline gets the PowerShell script from the master branch
  • It gets the ARM template from the publish branch
  • It deploys the PowerShell script to the next environment
  • It deploys the ARM template to the next environment

Why use Git with Data Factory?

  • Source control allows you to track and audit changes
  • You can do partial saves when, for example, you have an error; Data Factory won't allow you to publish, but with Git you can save where you are and resolve the issue another time
  • It allows you to collaborate more with team members
  • Better CI/CD when deploying to multiple environments
  • Data Factory is many times faster with a Git back end than when authoring directly against the Data Factory service, because resources are downloaded from Git
  • Adding your code into Git rather than simply into the Azure service is more secure and faster to process

Setting Up Git

We already have an Azure DevOps project with Repos and Pipelines turned on.

We already have Azure subscriptions and resource groups for both the production and development environments.

There is already a working Data Factory in development

In this example Git was set up through the Data Factory management hub (the toolbox)

DevOps Git was used for this project rather than GitHub because we have Azure DevOps

Settings

The project name matches the project in DevOps.

The collaboration branch is used for publishing, and by default it's the master branch. You can change this setting if you want to publish from another branch.

Import existing resources to repository means that all the work done before adding Git can be added to the repository.

DevOps is now set up.

Close Azure Data Factory (if it is open) so we can reopen it and go through the Git process.

Where to find your Azure DevOps

You should now be able to go to your own area in DevOps, open Repos, and select the project created within Azure DevOps.

You will need an account in Azure DevOps, and that account must have an access level higher than Stakeholder to access Repos.

You can then select the Project you have created

Using Git with Azure Data Factory

In Azure, open up your Data Factory.

Git has been enabled (go to Manage to review the Git configuration).

The master branch is the main branch with all the development work on it

We now develop a new feature. Create a feature branch with + New branch.

We are now in the feature branch, and I am simply adding a description to a stored procedure activity in the pipeline. From now on, this is where you do your development work rather than within master.

For the test, the description of a pipeline is updated. Once completed, my changes are in the feature 1 branch and I can now save my feature.

You don't need to publish to save the work. Save All will save your feature, even if there are errors.

You can go across to DevOps to see the files and history created in Azure DevOps for the feature branch (we will look at this once it is merged back into master).

Once happy, create the pull request.

This takes you to a screen where you can include more details.

Here I have also included the iteration we are currently working on in DevOps Boards.

A few tags are also added. Usually, whoever will review the work would also be added here.

The next screen allows you to approve the change and complete it.

In this case I have approved. You can also do other things, like make suggestions or reject.

Completing allows us to finish the work and removes the feature branch. Now all the development in the feature branch will be added to the main (master) branch.

In Data Factory, go back to the master branch and note that your feature updates are included

We now publish the changes in the master branch, which creates the adf_publish branch. This publish branch holds the ARM template that represents the pipelines, linked services, triggers and so on.

Once published, there are now files to work with in DevOps Repos.

You can see your change within the master branch

(The changes would normally be highlighted on the two comparison screens)

Here we open the pipelines folder, go to the Compare tab and find the before and after code.

And you can also see your history

The ARM templates are in the adf_publish branch, if you select that branch.

Once done, we need to move the changes to the next environment, in this case prod (when ready).

This is where Azure DevOps release pipelines come into play.

Continuous Deployment using Azure DevOps

We need another Data Factory to publish changes to.

In this case, the production Data Factory has been created in the Azure portal within the production subscription and production resource group.

Git configuration is not needed on the production resource, so skip this step.

Create your tags, then Review and Create.

DevOps Pipelines

For this specific project, we don't want to update production automatically when we publish to dev. We want this to be something that we can do manually.

Go to Pipelines in DevOps and create a new release pipeline.

Click on Empty job, because we don't want to start with a template.

And because for this project there is no UAT, just production, name the release pipeline Prod.

Click on the X to close the blade

We need to sort out the artefact section of the pipeline.

Click on Add an artefact and choose an artefact from Azure Repos.

We may as well add the adf_publish branch, which contains the ARM templates, and the master branch.

The source alias was updated to _adf_publish.

Both artefacts come from Azure Repos.

Next we move to the Prod stage and start adding tasks.

Click on 1 job, 0 tasks to get to tasks

Click + against the agent job to add the task. Our task is ARM template deployment.

Click Add.

Then click on the new task to configure it.

The first section is where you select your production environment

Next you need to select the ARM template and the ARM template parameters file. These are updated in the DevOps artefact every time you publish to dev.

The JSON templates are in the adf_publish branch

Now you need to override the template parameters, because these are all for dev and we need them to be production. These are:

These will be specific to your own Data Factory environment. In this instance we need to sort out the information for the key vault and the data lake storage account.

factoryName

This one is easy. The only difference is changing dev to prd

AzureDataLakeStorageGen2_LS_properties_typeProperties_url

The data lake storage account must be set up in both dev and prod before continuing. Go to the production storage account resource.

This information is also stored in our key vault as a secret, which we can hopefully use at a later date.

It is taken from the storage account's Properties; we want the primary endpoint for the Data Lake Storage.

Copy the primary endpoint URL and override the old value with the new prod URL in DevOps.

AzureKeyVault1_properties_typeProperties_baseUrl

We need to update https://dev-uks-project-kv.vault.azure.net/

Let's get this overridden. We already have a key vault set up in production; get the URI from the Overview page of the production key vault.

And add this into our DevOps parameter.

AzureDataLakeStorageGen2_LS_accountKey

This is empty, but we could add to it later in the process.

Account keys are the kind of thing that should be kept as secrets in Key Vault in both dev and prod.

Let's get them set up. For the time being, let's ensure we have the data lake storage account key within our development and production key vaults.

Key Vault

Within the development key vault, create a secret with the name AzureDataLakeStorageGen2LSaccountKey.

And the key from the storage account comes from……

And repeat for the production key vault.

For the time being though, let's leave this parameter blank now we have captured the information in the key vault. It should come in useful at a later date.

AzureSqlDatabaseTPRS_LS_connectionString

This was also empty within the parameters for dev.

You can get the connection string value by going to your SQL database, then Connection strings, then the PHP tab, and finding it in the try statement.

And here is the connection string value for production:

Server=tcp:prd-uks-project-sql.database.windows.net,1433; Database=prd-uks-project-sqldb;

You can also add this information into Key Vault as a secret, and repeat for production.

For the first instance we are going to leave this empty, as per the dev parameters. At some point we should be able to set up the security principal so we can change the hardcoded values to secrets.

The parameters created in dev are now overridden with the production values
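As a rough sketch only (the production values below are placeholders that follow the dev naming convention, not confirmed values), the override ends up looking something like this in the task's override template parameters box, shown here across several lines for readability:

-factoryName "prd-uks-project-adf"
-AzureDataLakeStorageGen2_LS_properties_typeProperties_url "https://prduksprojectdatalake.dfs.core.windows.net/"
-AzureKeyVault1_properties_typeProperties_baseUrl "https://prd-uks-project-kv.vault.azure.net/"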

The Pipeline is then named

Create a release

Once saved, click back on Releases.

For this type of release we only want to do it manually.

Create a release for our very first manual release.

Click back on Releases.

And click on release 1 to see how it is doing.

You can click on Logs under the Stages box to get more information

Now you should be able to go back to the production Data Factory and see that everything has been set up exactly like dev.

Go and have a look at the linked services in the production Data Factory.

Note that they are all set with the Production information

We now have a process to move Dev to Prod whenever we want

The Process

Throughout the sprint, the development team will have been working on feature branches. These branches are then committed and merged into master and deployed to dev.

Once you are happy that you want to move your Data Factory across from dev into prod, go to the DevOps release pipeline.

Click Create release to create a new release.

It uses the ARM template artefact, which is always up to date after a publish.

This will create a new release and move the new information to Prod

All your resources will then be able to move quickly from dev to prod, and we will look at this in further posts.
