Power BI New Visuals, February 2020 Release: The Funnel and Pyramid Chart by XViz

Let's have a look at the new visuals, again using the AdventureWorks data warehouse as the sample data set.

Funnel and Pyramid chart by XViz

This visual is great for looking at the lead-to-customer conversion process in sales analysis, and also for visualising business structures such as organisation hierarchies.

A very simple example is to look at Sales by Education.

It's quite possible that those currently in college (Partial College) are providing us with the most revenue.

YTD Internet Sales was changed to 'Show value as Percent of Grand Total', and in the Format pane, Labels were turned on.

You can also change it into a Pyramid

As an example, the sales team keep their own data on prospective buyers, and a code changes when a prospect converts to an actual customer.

In Power Query Editor, a new column has been added to track conversions.

A custom column was also added and set to 1 for an easy customer count. A sketch of both steps is below.
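As a minimal sketch of what those two steps can look like in M (the source step, table and column names here are assumptions, not the real ones):

// Hypothetical flag column: 1 once the prospect's code has changed to a customer code
#"Added Conversion Flag" = Table.AddColumn(Source, "Converted", each if [CustomerCode] <> null then 1 else 0, Int64.Type),
// Constant column so a simple SUM gives an easy customer count
#"Added Customer Count" = Table.AddColumn(#"Added Conversion Flag", "CustomerCount", each 1, Int64.Type)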

Only 4% of our contacts are leads at the moment, and over 51% have converted to large-scale projects with us.

1% have had one call from us, so the sales team need to get more leads and make more calls.

Click on the Advanced Editor for the visual.

And you can add Conditional Formatting.

However, there seems to be an issue here: we have six values in the range, but the colours only come out in three groups, which isn't correct.

Power BI March 2020: New Drill-through Functionality

Currently we have lots of visuals on the report pages, and the only way to know that they drill through to other reports is to hover over them.

In the March 2020 release there is new functionality in the form of buttons

Add a blank button

This button is set to the new drill-through action (a preview feature), and you choose the drill-through page. With normal drill through, everything that is available is shown, which is difficult when the report structure is complex. Buttons allow you to target and manage the drill-through structure.


Nothing selected
31 days selected

And this drills through to the detail report for 31 days. Note that the tooltip in Desktop tells you how to drill through. Let's see this in the Service.

Change the text on the button

Let's customise the button text using conditional formatting.

The data item behind this drill through is 'Resolved in', which contains number-of-days categories.

Create a new measure

A new measure has been added to the dimension table containing the Resolved in Item

String for Resolved in Button logic =
IF (
    SELECTEDVALUE ( 'dim table'[Resolved in ], 0 ) == 0,
    "See Resolved in Details",
    "See Details for " & SELECTEDVALUE ( 'dim table'[Resolved in ] )
)

If nothing is selected, SELECTEDVALUE returns the alternate result 0. So if it's zero, show "See Resolved in Details"; if it's not zero, the selected value is appended to the button text.

Add New Measure to the Button

Resolving a noisy drill through with Drill through Buttons

The visual in this report has three drill throughs: Incident, then the stage of the incident, and finally Business Property. All are valid, but we would prefer not to have Stage here, because it should only be accessible via Incident.

Add two buttons. Link one to the Incident Drill through report. The other to the Business Property Drill through report

We want the button to be dark red when it's active.

Create a new measure for Incident in the Incident dimension.

String for Incident in Button logic =
IF (
    SELECTEDVALUE ( 'dim BPIncident'[Incident Ref], 0 ) == 0,
    "See Incident Details",
    "See Incident Details for " & SELECTEDVALUE ( 'dim BPIncident'[Incident Ref] )
)

And add this as conditional formatting to the button text.

Create a new measure in the Business Property Dimension

String for Business Property in Button logic =
IF (
    SELECTEDVALUE ( 'dim BusProperties'[BP Number], 0 ) == 0,
    "See Business Property Details",
    "See Business Property Details for " & SELECTEDVALUE ( 'dim BusProperties'[BP Number] )
)

And add this as conditional formatting to the button text.

And now you get drill throughs that the user can immediately see and use, avoiding the Stage drill through because it doesn't match the journey we want.

Note that the buttons are greyed out

Weaknesses

  • The button takes up more space.
  • Currently you have both the drill-through button and the drill through on the visual itself. This could be confusing to the user; it would be great if you could turn off the drill through on the visual.

A Roundup of Cool New Power BI Visuals (Pre-January 2020 Release)

So many awesome new visuals have come out that we need to take a moment to look at a few, using AdventureWorks as the example data set.

Advanced Gauge by OKViz

Looking at Year to date and Last Year to Date Measures of Internet Sales and Quantity of Products Bought

We have some DAX to create the following measures for year to date and last year's year to date:

YTD Internet Sales = TOTALYTD(SUM(FactInternetSales[SalesAmount]),DimDate[FullDateAlternateKey].[Date])
YTD Quantity = TOTALYTD(SUM(FactInternetSales[OrderQuantity]),DimDate[FullDateAlternateKey].[Date])
LY YTD Quantity = CALCULATE([YTD Quantity],SAMEPERIODLASTYEAR(DimDate[FullDateAlternateKey].[Date]))
LY YTD Internet Sales = CALCULATE([YTD Internet Sales],SAMEPERIODLASTYEAR(DimDate[FullDateAlternateKey].[Date]))

These would look great in a KPI or two gauges. However, the Advanced Gauge allows you to view them together

Download the visual from the marketplace.

First of all I’m going to add a Filter for year to the report

The actual and target values for your primary measure form the outer ring, and your secondary measures for quantity make up the central ring.

You could also add Minimum and Maximum measures if you had them.

This is a really nice visual, but at a glance, without the table info, it can be quite hard to read, and I find the tooltips a little confusing.

Once you get a handle on it, it's clear that we sold more in 2012 than in 2011, but we made less money.

Straight away, that leaves you wanting to set up analytics to find out why that happened.

Some reviews of this visual say it doesn't work with percentages, though, which is disappointing.

XViz Hierarchy Tree

This is another custom visual to show off your hierarchies

This is very similar to the decomposition tree, which is maybe preferable in terms of look and usability; you can't choose the next step down in the tree like you can with the decomposition tree, for example.

You need a LOT of space for this one.

There is already a nice hierarchy for Location data set up. Add Product category to this and analyse by Sales Amount

We are interested in the 20 items against North America, so I can click on the end of the node to find out more.

You can see why you need space for this one. There is a bar that allows you to zoom in and out, but it's quite hard to control where your visual will end up on the page; I couldn't get it looking any better than the visual above.

Still, users will love this because it gives them the ability to interact with the data. Would I choose this over the decomposition tree? Probably not.

It doesn't look too good when you bring the visual down to size. You would need a full page for this one.

XViz Hierarchical Filter

This visual is similar to an older hierarchical filter.

This is the new version

And this is the old version

The finding is that if you have multiple hierarchy items, the new version takes up much more room than the original hierarchy filter. The original may be the preferred choice for taking up less space on a page.

And don't forget, hierarchical filtering is now available on the slicer visual that comes with Power BI.

XViz Bullet Chart

This bullet chart was developed as a replacement for gauges and meters. It saves space and can be either horizontal or vertical.

We have three measures I want to look at by Product Sub Category, and I want to be able to slice by Product Category so I can see, for example, all the Bikes:

YTD Revenue = TOTALYTD(SUM(FactResellerSalesXL_CCI[Sales Amount]),DimDate[FullDateAlternateKey].[Date])
LY YTD Revenue = CALCULATE([YTD Revenue],SAMEPERIODLASTYEAR(DimDate[FullDateAlternateKey].[Date]))
2 Years ago YTD Revenue = CALCULATE([YTD Revenue],PARALLELPERIOD(DimDate[FullDateAlternateKey].[Date],-24,MONTH))

We wanted to add year to date for the same period two years ago, and for this the PARALLELPERIOD function can be used. In this case, it's looking at 24 months previous to the date.

Bands have been set within the format, rather than using data

Straight away we can see that Jerseys are producing high sales, that there seemed to be a dip in sales last year, and that we are doing much better in the current year.

XViz Parallel Coordinates

This visual allows you to compare series on a set of numeric variables.

Using the previous visual's measures as an example:

This immediately shows us that there are 6 product categories leading the way and we are doing much better in the current year than in the last two years.


This visual tells us that we are selling more road frames than road bikes; however, we make a lot more money with road bikes.

XViz Gantt Chart

In one of our reporting projects we have a Gantt Chart by MAQ Software.

This gives a really nice timeline overview of incidents per location, what the impact is, and the stage of each incident. It has already been well received, so the question is, how will the XViz Gantt chart compare?

The MAQ Software visualisation was created with the following fields.

Essentially, the MAQ Gantt chart has been repurposed here for something other than tracking a project. A brief look at the XViz Gantt chart suggests it is used specifically for project tracking.

Therefore you need data items like:

AssignedTo, CompletedDateTime, CreatedDateTime, DueDateTime, PercentCompleted, StartDateTime, TaskName, TaskStatus,TaskOwner etc.

This looks good, and if you want to track projects within Power BI on top of a data source, this may be the way to go. However, we use Azure DevOps Boards or Jira. The MAQ Software Gantt chart gives you more options to use the visual for other purposes, which is a really good reason to add it to your reports.

Zebra BI Charts

These are really great-looking charts. However, as soon as you try to add any more detail, like Category into Group (which would split your report into separate charts by group), you get pointed towards a free trial.

Annoyingly, I can't find anywhere that openly states what the costs are; it's not even mentioned in the frequently asked questions.

Personally, if there is a cost attached, I like the company to be open about it, so I'm not comfortable trying the free trial. This is a shame, and I think Zebra should state the costs before the trial.

Marketplace visuals are created by external companies. Always check your organisational policies before using them.

Power BI 2020 Updates: Incremental Refresh for Power BI Pro from an Azure SQL Database Source (Bug)

This is probably the most exciting update for all of us working with Power BI. We already have a backlog of reports in Pro that have needed incremental loading for some time, so it's great to finally get the chance to try it.

Our project, which surfaces tweets with sentiment scores and other details in a Power BI report, involves the following services:

Logic Apps

A Logic app that is triggered when a new tweet is posted regarding a specific company

Content Moderator – Logic Apps uses the content moderator to check for profanity

Cognitive Services – Logic apps uses the Cognitive Service to add a sentiment score and to find key phrases

There is also a second Logic App that uses the same logic for Tweets posted by the company.

Azure Storage Account – Table Storage

  • The Logic Apps load the tweet information into a Tweets table
  • The key phrases into a Keyphrases table that links to the Tweets table
  • The media items into a Media table that links to the Tweets table

Data Factory

Data Factory is used to load the information from Table Storage into a SQL database staging area incrementally.

The logic is fairly straightforward in that data items are only inserted; nothing is updated or deleted.

There is a pipeline for each table.

The SQL for the Lookup activity for the Mentions data set:

SELECT MAX(WatermarkValue) AS WatermarkValue From [staging].[watermarktable]
WHERE WatermarkValue IS NOT NULL
AND TableName = 'staging.mentions'

The watermark is a table in SQL that is updated with the max date at the end of the process.

The query for the Source

CreatedAt gt '@{activity('LookupWatermarkOld').output.firstRow.WaterMarkValue}'

This basically brings through records greater than the value in the lookup table.

There is a parent pipeline over the separate copy pipelines to run them all sequentially.

Next comes a pipeline to run all the stored procedures that move data from staging to the dimensions and facts in SQL.

At the end of these stored procedures we move the date on to the max date in the watermark table (and at the beginning too, in case there is an error in the SQL pipeline).

Doing this means that Data Factory only loads new records and doesn't have to reload the staging area every time.

The Top level pipeline runs all the incremental Copy pipelines and then the Stored Procedures

Let's have a look at our watermark table before and then after a load.

And a look at our last import of tweets in the audit table.

There are more audit tables to help find any issues. This is after the run on the 13th March (Test 2)

Data Factory Trigger

Because it's not in full use at the moment, the data set is loaded once a week, on a Sunday at 12:50, and until this is retested the trigger is set to off.

Azure SQL Database

In here we have the watermark table, all the audit tables, the staging tables, and the dimensions and facts.

The facts and dimensions are currently created via stored procedures, but the hope is to change over to dataflows.

Power BI

The data is imported into Power BI Pro (full process), so the model is dropped and recreated.

Azure Data Studio

Is there any way we can have a look at what is going on when we load? Yes, by using Azure Data Studio.

https://docs.microsoft.com/en-us/sql/azure-data-studio/download-azure-data-studio?view=sql-server-ver15

Once installed, connect to the SQL Database that is your data source

So long as you have the Profiler extension installed, you can use Launch Profiler.

If you don’t have it, you can download the extension

Once launched, start a Profiler session.

Now we need something to profile. Go into the Power BI Service, to Datasets.


Click on Refresh Now and then go to Data Studio to see what's happening.

From logon to logout during the run, it took 20 minutes because the entire model is refreshed. Obviously it would be really good if we could get the time down using incremental refresh.

Before you set up Incremental processing, ensure that the services preceding the Power BI Load have been well tested and signed off.

Incremental Processing in Power BI Pro

In Power BI Desktop, incremental refresh is now out of preview, so there is no need to go to Options and Settings to turn it on anymore.

Define your Incremental refresh policy

If the systems are acting as they should and there are no bugs or issues:

  • New rows are added into the data set
  • No historical data is updated or deleted
  • Incremental loading can be added to every table apart from the media tables; they don't hold that many records, so they can be left as a full load

Set up incremental refresh in Desktop. Create Parameters

It isn't practical to hold all your data in Desktop when you are working with a large model.

Go to Power Query Editor

Select Manage Parameters

The two parameters that need setting up for incremental loading are RangeStart and RangeEnd.

These are reserved parameter names used by Power BI for incremental refresh (a feature that was previously Premium-only but is now available in Pro).

RangeStart and RangeEnd are set in the background when Power BI runs a refresh; they are used to partition the data.
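Under the hood, each parameter is just an M query tagged with metadata. A minimal sketch (the suggested dates here are placeholders, not the real values):

RangeStart = #datetime(2019, 1, 1, 0, 0, 0) meta [IsParameterQuery = true, Type = "DateTime", IsParameterQueryRequired = true]

RangeEnd = #datetime(2020, 12, 31, 0, 0, 0) meta [IsParameterQuery = true, Type = "DateTime", IsParameterQueryRequired = true]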

You need to be aware of query folding here. This is when the transformation steps you write in M are, where possible, applied at the source, so the RangeStart and RangeEnd filters will be pushed down to the source system. It's not recommended to run incremental refresh on data sources that can't query fold (flat files, web feeds); you do get a warning message if the query can't fold.

The suggested values are simply ones you add that get amended later during processing.

This start date was chosen because data only started being collected in 2019, so at present there is only around a year of data.

Filter the data in the model

Still in Power Query Editor, all the tables that require incremental load need the RangeStart and RangeEnd parameters applied as a filter on the date column.

Incremental refresh isn’t designed to support cases where the filtered date column is updated in the source system.

With this in mind, imagine you have a sales table with an OrderDate and an UpdateDate. The OrderDate is static; the UpdateDate will be updated if there are any changes to the record.

OrderDate would need to be chosen as it's static. So let's go through the tweet tables and set the filters; click on the column header icon to get to the filters (a sketch of the resulting M step follows the list below).

In Desktop you don't need so much data for testing, so this is a great way to keep the model smaller; at the moment it's using the default values we provided.

  • dim.mentionsKeyphrases – Twittertimestamp
  • dim.mentionsTweet – CreatedAtDateTime
  • dim.BusinessKeyphrases – TwitterTimeStamp2
  • dim.BusinessTweets – CreatedAt
  • dim.BusinessReplies – CreatedAt
  • fact.Mentions – Date (for this, a date was created from the datetime key in Power Query Editor)
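The generated filter step is simply a Table.SelectRows over the chosen date column. A minimal sketch for the tweets table (the previous step name is an assumption); note the >= / < convention, which stops a row landing in two partitions:

// Folds to a WHERE clause at the source when the connector supports query folding
#"Filtered Rows" = Table.SelectRows(#"Changed Type", each [CreatedAtDateTime] >= RangeStart and [CreatedAtDateTime] < RangeEnd)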

Close and Apply

Define your Incremental Refresh Policy in Power BI Pro

Go to your first table and choose Incremental refresh.

We are storing everything for five years; it's set in months so the partitions are smaller.

If this runs every single day, you would only need to refresh rows from the last day. However, one month has been used just in case the job is suspended or doesn't run for any reason.

Detect Data Changes has also been used: the month's data will only be refreshed if the ImportDate for a record has changed (or there are new records).

No records are deleted so we don’t need to worry about this

Publish the new Power BI Report and Data Flow

You might be thinking at this point: but I don't want the filters I set for Desktop to be applied in the Service; I want to see all my data in the Service.

Don't worry: in the Service, RangeStart and RangeEnd don't keep the dates specified for the filters in Desktop.

They are set via your incremental refresh policy, as the boundaries of the partitions for our 60 months. Instead of one RangeStart and one RangeEnd for the whole five years, you get a RangeStart and RangeEnd for month 1, a RangeStart and RangeEnd for month 2, and so on, breaking your five years down into much smaller partitions to work with.

You need to set up the Incremental Refresh policy for every table that has been filtered with RangeStart and RangeEnd

Test the Process

I have a visual for Number of Tweets

Today so far there are 11 Tweets

I also have an audit report

  1. The Logic App has been processing tweets in real time into Table Storage
  2. Run Data Factory (2 new records)
  3. Reprocess the Power BI Pro data set. Error: 'Resource name and Location need to match'
  4. If there hadn't been an error, we would move on to the Azure Data Studio check. Note that it now takes a second to run
  5. Check the visual

Error: 'Resource name and Location need to match'

The data set now has a warning sign. After speaking to Microsoft, this is a known issue and should be fixed in April; it is believed to be something to do with Detect Data Changes. So basically… to be continued.

Power BI: Switching Tenants and User Accounts (Emails) but Only Seeing Dataflows from the Previous Tenant

Recently we ran a training course with two developers in our own tenant under two training accounts. The training was done on their own computers, but they logged into our tenant with our training details.

Then they moved back to their own tenant in Power BI Pro with their own details to start creating a data flow.

They have an App workspace and are both Admins in the workspace

Developer 1 created dataflows within this app workspace. Developer 1 can open these dataflows, but Developer 2 doesn't have access because they are not the owner, which seems like the correct logic; they should only be able to use them in Desktop.

However, when Developer 1 and Developer 2 go into Power BI Desktop and attempt to connect to the dataflow (both Desktop and the Service are connected to the company account), they see a dataflow that was created in our tenant during the training course. The new dataflows can't be seen by either developer.

They have both tried to refresh the view (above), but this didn't work. This does seem to be something that happens when users log out and sign in with a different account in a different tenant.

How to resolve the Issue

Use File – Options and settings – Data source settings.

Then find Power BI Dataflows under Global permissions and clear its permissions.

When you connect to Power BI Dataflows again, you have to sign in. Sign in as the current user, and you can then see the dataflows of your current account.

So if you are ever working in more than one tenant, remember this fix if you can only see the dataflows from the previous account.

Power BI 2020 Updates: Hierarchical Slicer

We currently have a report with a hidden slicer panel, and one of the slicers is for Date. We want the user to choose the month of the report

Click on the visual and you can see that it's the hierarchical slicer from the marketplace.

However, the Power BI February 2020 update includes this feature on the built-in slicer. Can we change our hierarchical slicer to the original slicer and see how it looks?

Options and Settings

Currently this is a preview feature, so it needs to be turned on and Power BI restarted.

Use Selection Pane to check where the original slicer is


The Slicer is currently in the Slicer Group on this specific Bookmark

Now, with the Slicer visual selected, simply click on the original slicer in Visuals to change the Visual


Let's see the difference.

It looks great and there is no need to add an additional visual from the market place


I’ve set single select to ON so you can only choose one month

I think this looks like a great new update and one that clears extra visuals from your reports

How does Power BI cope with Source Data Changes?

You have a data source that you are connected to, with some nice reports and DAX.

However, at some point your data source changes. In this example I have moved from this

To

The question is, what happens to our report when we load in this updated data set?


Straight away when we attempt to Refresh the pbix file we get an error

This would also happen if the Power BI Service was refreshing automatically. We need to go into Power Query Editor to get everything sorted.

Also, we don't want to have to go and change the DAX; we want to get this resolved with the least amount of effort possible.

Click on Transform Data

When I click on Refresh All I get the same error

The new column names will need updating in your code so Power BI understands that they are the same columns.

It starts to go wrong here at Changed Type

You can click into this code and update it to the new names. From:

= Table.TransformColumnTypes(#"Split Column by Position",{{"street", type text}, {"city", type text}, {"zip", Int64.Type}, {"state", type text}, {"beds", Int64.Type}, {"baths", Int64.Type}, {"sq__ft", Int64.Type}, {"type", type text}, {"sale_date", type text}, {"price", Int64.Type}, {"latitude", type number}, {"longitude", type number}, {"sale_date - Copy.1", type text}, {"sale_date - Copy.2", Int64.Type}})

To

= Table.TransformColumnTypes(#"Split Column by Position",{{"street", type text}, {"city", type text}, {"zip", Int64.Type}, {"state", type text}, {"Number of beds", Int64.Type}, {"Number of baths", Int64.Type}, {"Square Feet", Int64.Type}, {"Type of House", type text}, {"sale_date", type text}, {"House Price", Int64.Type}, {"latitude", type number}, {"longitude", type number}, {"sale_date - Copy.1", type text}, {"sale_date - Copy.2", Int64.Type}})

This section now works

Click on each applied step just to check; in this case, I have resolved all the issues in this one step. Time to refresh and Close and Apply.

Annoyingly the DAX is still referencing the original column names

The hope was that the DAX would recognise the column changes

Attempt 2

Let's go back to Power Query Editor and return to the original code of:

= Table.TransformColumnTypes(#"Split Column by Position",{{"street", type text}, {"city", type text}, {"zip", Int64.Type}, {"state", type text}, {"beds", Int64.Type}, {"baths", Int64.Type}, {"sq__ft", Int64.Type}, {"type", type text}, {"sale_date", type text}, {"price", Int64.Type}, {"latitude", type number}, {"longitude", type number}, {"sale_date - Copy.1", type text}, {"sale_date - Copy.2", Int64.Type}})

This sets up the error again.

Add a new applied step before Changed Type. What we are going to do is rename the new columns back to the old column names (bear with me; this seems odd because we don't want to use the old column names, but it will make sense a little later).

I click on the applied step previous to the one that is erroring and rename the column headers in the table.

Once done, move to your next step and check they now all work.

If you Close and Apply, your DAX should work again, because your column names now match the old ones.

But we don’t want to do that. Lets go back to Transform Data

Go to the end of your applied steps and rename your columns to the new column names (by clicking on the column header and renaming). A sketch of the two rename steps in M is below.
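As a minimal sketch, using the beds and baths columns from the code above (the step names are illustrative, not the generated ones):

// Inserted before Changed Type: put the old names back so the existing steps and DAX still bind
#"Renamed To Old" = Table.RenameColumns(#"Split Column by Position", {{"Number of beds", "beds"}, {"Number of baths", "baths"}}),
…
// Final step: rename to the new names so the model picks the change up as a simple rename
#"Renamed To New" = Table.RenameColumns(#"Changed Type", {{"beds", "Number of beds"}, {"baths", "Number of baths"}})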

And this time when you refresh the data, DAX will accept your changes

So, it's a little long-winded, but make sure you know exactly what your old column names and new column names are.

  • Before the error, rename back to old in Power Query
  • At the end of applied steps rename back to the new column names
  • And you should be in a position where all your reports have accepted the column name changes

The Dos and Don'ts of Power BI Relationships and Modelling

Bad Modelling 1: Single flat file table e.g. Salesextract

When people first start out using Power BI as their analytics platform, there is a tendency to say: let's import all the data in one big flat file, like an Excel worksheet.

This way of working is just not well organised and doesn’t give you a friendly analytics structure.

Avoid Wide Tables

Narrow tables are much better to work with in Power BI. As data volume grows, a wide table will affect performance, bloat your model and become inefficient. Then, when you create measures, things get even more complex in the one long, wide table.

Not to mention the point when you have to add another table and create joins; you may be faced with a many-to-many join because of your wide table.

The star schema is the recommended approach to modelling in Power BI.

Stars with a few Snowflaked dimensions are also ok.

If you have a wide flat-file table, it's always important to convert it to a data model like the above, with narrow dimension tables and a fact table in the middle holding all your measures.

Remember, Chaos is a flat file.

Model Relationships propagate filters to other tables.

In this example, the ProductID filter propagates down to the Sales table; one product can be sold many times (one-to-many).

With a snowflake you can add another level

CategoryA Propagates down to the Sales Fact table

Deliver the right number of tables with the right relationships in place.

Power BI was designed for people who never had to think about the design of data warehouses. Originally, this self-service tool would allow anyone with little or no knowledge of best practice to import data from their own sources (Excel spreadsheets, databases, etc.) without any knowledge of how they were set up.

This becomes an issue because the recommended Power BI model is the fact and dimension schema shown above.

Understanding OLAP models goes a long way towards helping you set up Power BI:

  • Dimensions filter and group
  • Facts summarise measures

Bad Modelling 2: Direct Query your Transactional Database

When you connect to an OLTP database and drag in all your tables (there may be hundreds of them) using DirectQuery, there are lots of things to consider.

The overall performance depends on the underlying data source.

When you have lots of users opening shared reports, lots of visuals are refreshed and queries are sent to the underlying source. This means that the source MUST handle these query loads for all your users AND maintain reasonable performance for those using the OLTP as they enter data.

You are not the most important person in this scenario. The person(s) using the database to add data is the most important person

OLTP is designed for speedy data input; OLAP is designed for speedy retrieval of data for analytics. These are two very different things.

With OLTP you have row-store indexes (clustered and non-clustered), which are perfect for OLTP-style workloads but slow for data analysis. Data warehouse queries consume a huge amount of data, which is another reason why using OLTP as your DirectQuery source isn't the best approach.

DirectQuery also means you lose a fair amount of functionality: time-based DAX calculations, What-if parameters, etc.

I was chatting to someone about this on the forums and they gave me a fantastic analogy

When you connect into a transactional database with DirectQuery, it's like being in a busy restaurant and getting all the customers to go and get their food from the kitchen.

It slows down the customers because of the layout of the kitchen. They don’t know where anything is, and other customers are also milling around trying to find where their starter is.

The kitchen staff, who are now trying to prepare the food, have to fight for physical space. Look at the pastry chef, trying to work around ten customers asking where their various desserts are.

So you set up a reporting area. This is where the food gets placed, someone shouts service and a waiter will go and speedily deliver the course to the correct table.

No one needs to go into the kitchen unless they are in food prep. Everything works in the most efficient way.

Model relationships Dos

  • Join one ID column to one ID column; if you have composite keys, they need to be merged into a single column
  • No recursive relationships (relationships that go back to the same table; the example always used for this is the ManagerID in the Employee table)
  • The cardinality is one-to-many, one-to-one or many-to-one (many-to-many needs a specific approach in Power BI)
  • Cardinality determines whether a table filters and groups or is summarised
  • There can only be one active path (relationship) between two tables; all your other paths will be inactive (but you can set up DAX to use them)

In this example, OrderDateKey holds the active relationship to DateKey because it's the one we use the most.

ShipdateKey and DueDateKey also join to DateKey in the date table and are inactive.

DAX Functions for Relationships to help with modelling decisions

RELATED

When creating calculated columns, you can only reference fields from the same table, unless you use RELATED.

For example, I'm adding a Colour column to the SalesOrderDetail table, which has a many-to-one join to Products:

Colour = RELATED(Products[Colour])

RELATED allows you to use data from the 'one' side of the join on the 'many' side.

RELATEDTABLE

RELATEDTABLE works the other way, using data from the 'many' side of the join:

TotalSales = SUMX(RELATEDTABLE(SalesOrderDetail),SalesOrderDetail[LineTotal])

USERELATIONSHIP

Forces a calculation to use a specific (inactive) relationship instead of the active one:

=CALCULATE(SUM(InternetSales[SalesAmount]), USERELATIONSHIP(InternetSales[DueDate], DateTime[Date]))

CROSSFILTER

Modifies the filter direction of a relationship, or disables propagation entirely. You can do this in the model by changing the filter to both directions instead of single, or you can do it for a specific DAX calculation using CROSSFILTER.

TREATAS

Create virtual relationships between tables

CALCULATE([Total Budget],
    TREATAS(VALUES(Dates[Year]),'Unconnected Budged Data'[Year]))

Our unconnected budget data is at year level only, and it's not joined to our main model.

Here, TREATAS connects it up to Year in the Date dimension. Then we can create a visual with Date from the Date dimension, Total Sales from our connected data (which is at daily level), and Total Budget from our unconnected budget data at a different level of granularity.

PATH

Naturalise a recursive relationship (such as ManagerID in an Employee table) with the PATH function, which flattens the hierarchy into a delimited text column.

Getting your model right and understanding your data sources is the most important thing in Power BI. Make sure you don't have lots of headaches six months into your project; it's better to spend the time now than to have to start again later.

Moving Power BI to Different Environments (Dev, Test, Production, etc.)

Currently we are in development and I only have a development database to connect to. However, very soon there will be multiple environments to connect to:

  • Development
  • Q&A
  • Production
  • Test

Currently the development area is simply hardcoded

If you need to point the Power BI reports at a different environment for testing, this becomes quite a task:

Transform Data to get back to Power Query Editor

Add in the new environment, removing the old one and then check everything is still working

Change to Parameters

Go to Options and settings, then Options.

Tick Always allow parameterization in data source and transformation dialogs

Server and Database can now be created by text or parameters.

Click on Server and New Parameter

Each environment is on a different server

Next comes the database (Environment)
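With both parameters in place, the source step simply references them instead of hardcoded strings. A minimal sketch (the parameter names here are assumptions):

// ServerParameter and DatabaseParameter are the two parameters created above
Source = Sql.Database(ServerParameter, DatabaseParameter)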

Transform Data in Power Query Editor

You can see your environment parameters shown under Queries.

You can simply click on your parameter and change your current value

Then Publish to the correct Workspace

When you log into the App Workspace for that environment

Go to the data set and check the credentials

You can't change your environment parameters here, but it's a good way of double-checking that you have published to the correct app workspace.

Personally, I would like to see more here; it would be nice to be able to associate an app workspace with an environment.
