This function allows us to see every tag at resource level against the resource group, so this script doesn't quite give us what we want. We know that there are no tags at resource group level, but how do we establish this with PowerShell?
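A minimal sketch, assuming the AzureRm module is installed and using a hypothetical resource group name, is to read the Tags property of the group itself:

# Returns the tags set directly on the resource group ($null or empty if there are none)
(Get-AzureRmResourceGroup -Name "POC_Resource_Group").Tags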
We can then run (Get-AzureRmResource -Name "AdventureWorksDW").Tags to look at the tags on the resource. However, this script deletes all the existing tags and inserts just the one, which isn't the logic we want to use. We want to add tags to a resource that already has tags.
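A rough sketch of that pattern, again assuming the AzureRm module and an example resource group name, is to read the existing tags first, append the new pair, and write the merged set back:

# Get the resource and its current tags (the resource group name here is just an example)
$resource = Get-AzureRmResource -ResourceGroupName "POC_Resource_Group" -ResourceName "AdventureWorksDW"
$tags = $resource.Tags
if ($null -eq $tags) { $tags = @{} }

# Add the new tag and write the full set back so the existing tags are preserved
$tags["Environment"] = "Production"
Set-AzureRmResource -ResourceId $resource.ResourceId -Tag $tags -Force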
Tagging is a feature introduced with the Azure Resource Manager (ARM) model. Tags can be used to logically group and track resources. The old (Classic) version of Azure was Azure Service Manager.
Azure tagging can be done using the Azure Portal, Azure PowerShell, the CLI (Command Line Interface) or ARM (Azure Resource Manager) JSON templates
Tags can then be used to select resources or Resource Groups and are useful when you need to organize resources for billing or management
You apply tags to your Azure resources giving metadata to logically organize them into a taxonomy.
Each tag consists of a name and a value pair. For example, you can apply the name “Environment” and the value “Production” to all the resources in production.
You can only apply tags to resources that support Resource Manager operations
VMs, Virtual Networks and Storage created through the classic deployment model must be re-deployed through Resource Manager to support tagging
A good way around this is to tag the resource group they belong to instead.
All ARM resources support tagging
Each resource or resource group can have a maximum of 15 tags.
Tags are key/value pairs, name is limited to 512 characters, value is limited to 256 characters
Tags are free-form text so consistent correct spelling is very important
Tags defined on Resource Groups exist only on the group object and do not flow down to the resources under them
Through this relationship you can easily find resources by filtering on the tagged resource group. It's recommended to keep tags at the resource group level unless they are resource specific.
Each tag is automatically added to the subscription-wide taxonomy
Application or resource specific tags will “pollute” the tag list for the entire subscription.
Issues with Tags
Using the recommended procedure of tagging at resource group level causes issues because the tags don't get inherited at resource level.
The hope was that any tags you apply at one level of the hierarchy would be inherited by the lower levels within the hierarchy, and this doesn't happen.
You need to be careful that your tags stay logical and don't differ from higher levels. It may well be preferable to do this via PowerShell scripts to ensure correct logic is maintained between resources and resource groups.
The underlying technology that powers resource groups is the Azure Resource Manager (ARM).
ARM was built by Microsoft in response to the shortcomings of the old Azure Service Manager (ASM)
ARM requires that resources be placed in resource groups, which allows for logical grouping of related resources.
Although creating a resource group requires specifying a region for it to be stored in, the resources in that resource group could span multiple regions.
In the ARM architecture, resource groups not only become units of deployment, but also units of management of related resources.
It allows role-based access control (RBAC) at the resource group level, making it much easier to manage user access to the resources in the group.
When users log into the Azure Portal, they will only see the resource groups they have access to and not others within the subscription. Administrators will still be able to assign access control for users to individual resources within the resource group based on their roles. This also makes it easy to see the costs associated with each resource group.
Successful Azure Resource Groups
If an application requires different resources that need to be updated together, such as a SQL database, a web app, a mobile app, etc., then it makes sense to group these resources in the same resource group.
Use different resource groups for dev/test, staging, or production, as the resources in these groups have different lifecycles.
All the resources in the group should share the same environment (Dev, Test, etc.) because you deploy, update and delete them together
If, for example, you have the marketing analytics database in one resource group and a demo database in another resource group, each resource group needs its own server
You can't rename a resource group
A good naming convention to use is rg-projectorapp-subprojectorapp-dev or projectorapp-subprojectorapp-dev-rg
Examples of Resource groups
Now that you have logical resource groups set up, we can set up tags, which are especially useful for larger organisations.
Responsible Person or Party: a name or email address
Your internal billing code
Name of the Project
Environment <Production, Staging, QA>: identifies the environment
Data Sensitivity <Public, Confidential, Restricted, Internal>. Public: this information is public and can be openly shared on your website. Internal: internal information is company-wide and should be protected with limited controls. Confidential: confidential information is team-wide and its use should be contained within the business. Restricted: restricted information is highly sensitive and its use should be limited on a need-to-know basis.
Can this resource be shut down at 7pm?
Looking at this example, the tagging has been added to the resource groups. However, if you look at a resource, you won't see the tags.
We need to look at a way of ensuring that the tags applied to the resource group are also applied to each resource.
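One way of doing this is sketched below, assuming the AzureRm module, a hypothetical resource group name, and that the group actually has tags: loop over the resources in the group and merge in any resource group tags they are missing.

# Copy tags from the resource group down to every resource in it, keeping existing resource tags
$group = Get-AzureRmResourceGroup -Name "POC_Resource_Group"
foreach ($resource in Get-AzureRmResource -ResourceGroupName $group.ResourceGroupName) {
    $tags = $resource.Tags
    if ($null -eq $tags) { $tags = @{} }
    foreach ($key in $group.Tags.Keys) {
        # Only add tags the resource doesn't already have, so resource-specific values win
        if (-not $tags.ContainsKey($key)) { $tags[$key] = $group.Tags[$key] }
    }
    Set-AzureRmResource -ResourceId $resource.ResourceId -Tag $tags -Force
}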
Policies for Tags
Azure Policy is a service in Azure that you use to create, assign, and manage policies.
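For tags, one option is to assign one of the built-in tag policies with PowerShell. The sketch below is illustrative only: the policy display name, the subscription scope and the parameter names are assumptions that would need checking against the policy definitions in your own subscription.

# Find a built-in tag policy by display name (the name used here is an assumption) and assign it at subscription scope
$definition = Get-AzureRmPolicyDefinition |
    Where-Object { $_.Properties.DisplayName -eq "Require a tag and its value on resources" }

New-AzureRmPolicyAssignment -Name "require-environment-tag" `
    -PolicyDefinition $definition `
    -Scope "/subscriptions/<subscription-id>" `
    -PolicyParameterObject @{ tagName = "Environment"; tagValue = "Production" }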
We can now start adding Subscriptions for all our different Business domains and environments. Azure allows you to quickly get started adding Resource groups and Resources to your subscriptions. Using Management groups above subscriptions allows for even more control. You can manage access, policy, and compliance across multiple subscriptions.
This is all great, but what about controlling costs? You will very likely have team members who can't wait to try resources to see what they can do, but what happens when you come back after a weekend and your test Cosmos DB has racked up a huge amount of money?
We want to be able to control things more, so let's look at some options.
Azure Spending Limit
The following Subscriptions have the ability to apply Azure Spending Limits
0044P Free trial
0059P Visual Studio Professional subscribers
0060P Visual Studio Test Professional subscribers
0062P MSDN Platforms subscribers
0063P Visual Studio Enterprise subscribers
0064P Visual Studio Enterprise (BizSpark) subscribers
0029P Visual Studio Enterprise (MPN) subscribers
0025P Action Pack
0111P Azure in Open Licensing
0170P Azure for Students
0144P Microsoft Azure for Students Starter
When you get a free account, or are, for example, a Visual Studio Enterprise subscriber, you will get personal use credits.
For example, going to a personal subscription and looking at the overview:
This Visual Studio Enterprise subscription allows the user £115 of credits. You can't increase the set credits.
If you reach your spending Limit, all services are disabled until the next billing period.
If you need to uncap, you need to remove the spending limit by clicking on the above message.
Removing the spending limit for the current billing period will allow you to continue, and the credits will then reset.
Removing the spending limit indefinitely will allow you to go over your spending limit every month by adding your credit card details (pay-as-you-go).
However, the recommendation is to only remove it when required, to keep more control.
This is obviously a great choice for personal Azure Subscriptions, used for testing and trying things out.
Monitor Costs When using Azure Services
You can't cap the following subscriptions
0136P Microsoft Azure EA Sponsorship
0023P Pay-as-you-go Dev/Test
0148P Enterprise Dev/Test
0036P Microsoft Azure Sponsored Offer
0243P Azure Pass
0145P Azure in CSP
Track Costs with Budgets
Budgets allow you to set thresholds for your spending. You can then get alerts on these thresholds.
If, for example, you have a budget of £500 and a threshold of £400, you could also start adding in automation to shut down resources, such as VMs, at the threshold.
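As a rough sketch of the kind of automation you could trigger at that threshold (assuming the AzureRm module and a hypothetical resource group name; wiring it up to the budget alert is not shown here):

# Stop every VM in a resource group; this could sit in an Automation runbook triggered by the budget alert
foreach ($vm in Get-AzureRmVM -ResourceGroupName "POC_Resource_Group") {
    Stop-AzureRmVM -ResourceGroupName $vm.ResourceGroupName -Name $vm.Name -Force
}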
Setting the Budget at the Subscription Level
Within the Subscription Overview
Application Insights Smart Detection has been used as the action group; it fires if the costs go up to 90% of the budget that has been set.
‘Smart Detection automatically warns you of potential performance problems and failure anomalies in your web application’
Targeting a Resource with a Budget
Again going to a Subscription and Clicking Cost Analysis
The resource that has a habit of becoming expensive is the database in this subscription
We want to make sure it doesn’t get out of control
Create a new budget for this subscription and then add a filter (the filter will only work once the graph is displayed).
Resource Type is selected,
then Microsoft SQL Servers. You could also put tags to good use in budgets; you could, for example, check for tags of resources created by specific people.
You can now have Budgets that specifically check on certain services. Great if you have Services that have a tendency to create lots of costs.
So now we can rest a little easier knowing that Azure is going to let us know what is happening. It would be great if we could apply automation to shut something down if it was accumulating costs over the threshold. We can look at this next time
It's time to start tracking projects with Azure DevOps Boards.
With Boards, teams can manage software projects. They can track user stories, backlog items, tasks, features, etc. You can choose the process you want to work with, like Agile or Scrum.
For this example, there is only one developer (me) and I am tracking my progress on a project where I have been the single developer.
Agile is the process that is going to be used.
Agile is an iterative approach to project management and software development that helps teams deliver value to their customers faster and with fewer headaches. Instead of betting everything on a “big bang” launch, an agile team delivers work in small, but consumable, increments.
Now we have a new project we can start working with Boards, but first we need to understand what our Agile work items are and how they interact with each other.
I have specific Epics I want to achieve
Reporting from the company's main system
Social Media reporting
Reporting for the Surveys
Reporting for all the telephone enquiries
Reporting for Complaints
Main Reporting Area for all the data Auditing
Reporting for Report usage
So, just looking at this, I want 7 epics to work with (to start with).
A feature is some complete behavior that implements a new business process. So, for example, for the Social Media epic we want:
Overall View of Business Performance as provided by the surveys
Monthly level reporting on customer satisfaction with drill through
User stories are within a feature. These are the smallest changes that will result in a behavior change; if you can't observe a change then it can't be demonstrated.
For example: as the Customer Satisfaction Manager I want to see the survey results by month and have the ability to see how we are doing over the year and at the same point in previous years, because we need to know if we are doing well as a company and to see our trends in satisfaction.
As the Company Head of Service I want a full review of our performance using our scoring system against customer satisfaction and how our competitors are doing for benchmarking
Tasks are within a user story and are the smallest independently deployable change.
Get file of Survey data (Pilot project)
Move Survey data into the Azure Data Warehouse (Staging area) Incremental loading using Data Factory
Establish dimensions and facts
Create Dim 1
Create Dim 2…….
Create Power BI Data Flows
Create Top level report by Month of Customer Satisfaction containing last 12 months
Create KPIs for Satisfaction against this time last year
Drill through to detailed report
Drill through to lowest level
A bug is an error in the code
Incremental Refresh is causing Duplicates
NULL data Items in Survey Data set
An issue is more related to a process: when the system fails to meet user expectations.
Created a report based on poorly Served Customers but this needs changing to the new business logic.
Test cases can validate individual parts of your code. We will look at this item in another blog post
Let's start with boards. These boards are Kanban boards.
A Kanban board is one of the tools that can be used to implement Kanban to manage work at a personal or organizational level.
When the board is first opened up, Epics don’t seem to be available
With Boards selected, go to Configure Team Settings
Make sure that Epics are ticked under Backlogs
Now, with Epics selected, click on New Item and start adding in the required epics
Next we need to start adding some features. It would seem that you can't add the features first and then connect them to the epics; you have to create the features from the epics.
Go back to the epic, click on … and Add Feature
And you can then see the feature within the Epic
Now we have a feature we can add the user story. Go to Features and click on Add User Story
Same again. Move to User Stories and add Tasks
You can also go into the items and add lots more detail
This link to the Microsoft documentation gives you lots of information regarding effort, story points, business value, priority, etc.
It's always good to create the epic and work your way down into the tasks.
For this example, items are being added for a sprint that was closed some time ago because the project is being retrospectively moved into Azure Boards
I am starting them all from the beginning of this particular Project and for this I can add a Start Date
The Start Date also Applies to Stories but these will be set when the stories were originally created
However, when you close a task and move the whole story into Completed, you can't set a completed date.
If you click on History and look at the state graph, you can't change the New and Resolved times. These are set at the time of the action, which makes it difficult to add past information into the board.
Backlogs help you to Quickly define work (User Stories, backlog items, requirements)
You can reorder the backlog so you work on the highest priority first
Add details and estimates
Assign items to team members and sprints by either bulk update or drag and drop
Map items within a hierarchy
Review the portfolio of work
Forecast work to estimate deliveries
Display rollup progress, counts and totals to show completion of work
Basically your backlog displays work items as lists and boards display them as cards
The Remaining Active User Stories have been dragged to Iteration 1
All the work items you create can be viewed in here as well as created
Hopefully this gives you a little head start into the world of Azure DevOps boards
This is really the most exciting update for probably all of us working with Power BI. We already have a backlog of reports in Pro that have required incremental loading for some time, so it's great to finally get the chance to try it.
Our Project, to view Tweets with sentiments etc in a Power BI Report involves the following Services:
A Logic app that is triggered when a new tweet is posted regarding a specific company
Content Moderator – Logic Apps uses the content moderator to check for profanity
Cognitive Services – Logic apps uses the Cognitive Service to add a sentiment score and to find key phrases
There is also a second Logic App that uses the same logic for Tweets posted by the company.
Azure Storage Account – Tabular Data Storage
The Logic Apps loads the tweet information into a Tweets Table
The Keyphrases into a Keyphrases table that connects to the tweets table
The Media items into a Media table that connects to the tweets table
Data Factory is used to load the information from the tabular data storage into a SQL Server database staging area incrementally
The logic is fairly straightforward in that data items are inserted; nothing is updated or deleted
There is a Pipeline for each single table
The SQL for the Lookup activity for the Mentions dataset:
SELECT MAX(WatermarkValue) AS WatermarkValue From [staging].[watermarktable] WHERE WatermarkValue IS NOT NULL AND TableName = 'staging.mentions'
The watermark is a table in SQL that is updated with the max date at the end of the process.
Once Azure Data Studio is installed, connect to the SQL database that is your data source.
As long as you have the profiler extension installed, you can launch Profiler.
If you don’t have it, you can download the extension
Once Launched Start a Profiler Session
Now we need something to profile. Go into the Power BI Service, Datasets,
click on Refresh Now, and then go to Data Studio to see what's happening.
From logon to logout during the run, it took 20 minutes because the entire model is refreshed. Obviously, it would be really good if we could get the time down using incremental refresh.
Before you set up Incremental processing, ensure that the services preceding the Power BI Load have been well tested and signed off.
Incremental Processing in Power BI Pro
In Power BI Desktop, incremental refresh is now out of preview, so there is no need to go to Options and Settings to turn it on anymore.
Define your Incremental refresh policy
If the systems are acting as they should and there are no bugs or issues:
New rows are added into the data set
No historical data is updated or deleted
Incremental loading can be added to every table apart from the media tables because they don't contain that many records; they can be left as a full load.
Set up incremental refresh in Desktop. Create Parameters
It isn’t practical to hold all your data when you are working in desktop if you are working with a Large model.
Go to Power Query Editor
Select Manage Parameters
The two parameters that need setting up for incremental loading are RangeStart and RangeEnd.
These are predefined parameter names used specifically for incremental processing.
RangeStart and RangeEnd are set in the background when Power BI runs the refresh; they partition the data.
You need to be aware of query folding here. This is when you write steps in Power Query (M) to transform the data and, where possible, they are applied at source, so the RangeStart and RangeEnd filters will be pushed to the source system. It's not recommended to run incremental processing on data sources that can't query fold (flat files, web feeds); you do get a warning message if the query can't fold.
The suggested values are simply ones you add that get amended later during processing.
This start date was chosen because, at present, the data only started being collected in 2019, so there is only around a year of data.
Filter the data in the model
Still in Power Query Editor, all the tables that require incremental load need to have the RangeStart and RangeEnd parameters added as filters on the date column.
Incremental refresh isn’t designed to support cases where the filtered date column is updated in the source system.
With this in mind, imagine you have a sales table with an Orderdate and an UpdateDate. the OrderDate is static. The UpdateDate will be updated if there are any changes to the record.
OrderDate would need to be chosen as it's static, so let's go through the tweet tables and set the filters. Click on the column header icon to get to the filters.
In Power BI Desktop you don't need as much data for testing, so this is a great way to keep the data smaller in Desktop. At the moment, it's using the default values we provided.
dim.mentionsTweet: CreatedAtDateTime
fact.Mentions: Date (for this, Date was created from the datetime key in Power BI Query Editor)
Close and Apply
Define your Incremental Refresh Policy in Power BI Pro
Go to your first table and choose Incremental Refresh.
Storing everything for 5 years; it's set to months so the partitions are smaller.
If this is running every single day, you would only need to refresh rows from the last 1 day. However, 1 month has been used just in case the job is suspended or doesn't run for any reason.
Detect Data Changes has been used. The month's data will only be refreshed if the ImportDate for a record has changed (or there are new records).
No records are deleted so we don’t need to worry about this
Publish the new Power BI Report and Data Flow
You might be thinking at this point: but I don't want the filters that I have set for Desktop to be applied in the Service; I want to see all my data in the Service.
Don't worry: in the Service, RangeStart and RangeEnd don't keep the dates specified for the filters in Desktop.
They are set via your incremental refresh policy, so they become the partitions for our 60 months. Instead of one RangeStart and one RangeEnd covering the whole 5 years, you get a RangeStart and RangeEnd for month 1, a RangeStart and RangeEnd for month 2, and so on, breaking your 5 years down into much smaller partitions to work with.
You need to set up the Incremental Refresh policy for every table that has been filtered with RangeStart and RangeEnd
Test the Process
I have a visual for Number of Tweets
Today so far there are 11 Tweets
I also have an audit report
The Logic App has been processing tweets in real time into Table Storage
Run Data Factory (2 new records)
Reprocess the Power BI Pro data. Error: Resource name and Location need to match.
If there hadn't been an error, we would move to the Azure Data Studio check. Note that it now takes a second to run.
Check the Visual
Error: Resource name and Location need to match
The dataset now has a warning sign. After speaking to Microsoft, this is a known issue and should be fixed in April; it is believed to be something to do with Detect Data Changes. So basically… to be continued.
In a nutshell, you will cause yourself a lot of headaches in the future if you do this, because OLTP systems are specifically for users to enter data and not for analytics. You don't want to cause issues for the people trying to do the actual work by entering information into your systems.
So what do you do when there are some parts of your database that you need for real-time reporting?
The first thing to understand is that, just because you need some real time reporting, it does not mean that you have to treat everything as real time reporting. You need to separate out your needs. It may be that only one report needs to be real time and the rest can be dealt with as overnight imports etc.
You also need to make sure that you do need real time reporting. In many cases, after some investigation, this isn’t actually required.
There are a few key things to look at here
Power BI Push data sets
With a push dataset, data is pushed into the Power BI service. The Power BI service automatically creates a new database in the service to store the data. Reports can be created over this data set rather than over your OLTP with Direct Import and your visuals will be updated in real time.
You cannot do as much with this data set as we will see later but it separates the two functions to a greater extent as your analytics will be run over the push data set
You can only keep 200K rows; beyond that, old rows will be removed, which is something to keep in mind.
We will have a look at this in more detail later
Column Store Indexing in SQL
“A column store index is an index that was designed mainly for improving the query performance for workloads with very large amounts of data (eg. data warehouse fact tables). This type of index stores the index data in a column based format rather than row based as is done with traditional indexes.”
These can be added to OLTP systems to help you run real-time reporting from your transactional systems, but there are a lot of issues with this and it would need lots of admin time, so it may not be the way forward. This will be looked at in another post.
“A trigger is a stored procedure in database which automatically invokes whenever a special event in the database occurs. For example, a trigger can be invoked when a row is inserted into a specified table or when certain table columns are being updated.”
This could be used to create a separate table to work with, allowing you to run analysis over this rather than over the tables within your OLTP schema.
You could have this table in a separate database, which would ensure that no usability issues occur for your transactional system users.
I will have a look at triggers in another blog post
Example 1 Power BI Push data sets
Two very simple tables have been created in the Azure SQL Database. A customer table and a sales table (So we can see how more than one table is affected)
An Updatedate has been added by adding GetDate() into the binding for the column for both tables
Once the seed data is in (the starting test data), make sure the ID of the dim auto-populates and you can set the key as the primary key.
The last table I want is an audit table to record what the max update date is for each table
Push Data Restrictions
It should be noted that you can't work with more than one dataset if you use push.
Notice here, where an example push dataset is imported into Desktop, that's the only dataset you can add.
This is the same if you create the Report in the Service
If you go into Power BI Service and to the data set (This is just an example, we haven’t created our data set yet)
And Click to Create Report
A table is added, but you can't do much to the data. You can't create measures or remove rows you don't want in Power BI within the Service.
The two tables need to become one. For this, a view is added in SQL:
CREATE VIEW dbo.VW_RealTimeDataStreamExample
AS
SELECT d.CustKey,
       f.DateKey,
       f.ProductKey,
       f.DateOfSale,
       f.Quantity,
       f.Cost,
       d.UpdateDate AS CustomerUpdateDate,
       f.UpdateDate AS FactUpdateDate,
       d.Name
FROM [fact].[RealTimedatatestingDE] f
INNER JOIN [dim].[RealTimedatatestingcustomerDE] d ON f.CustKey = d.CustKey
Power BI Service
Go to the Power BI Service and choose + Create and then Streaming Dataset.
Create one push dataset for the view.
Setting Historical Data Analysis to On changes it into a push dataset. The only time you would ever want to leave this off is if you are looking at IoT data in real time.
Get the PowerShell script
You now have your streaming data sets in your data sets list within the App Workspace. Click on the i to see the API info for the streaming data set
This gives you the sample scripts for pushing to your data set. The Push URL is all you need to populate your dataset.
Copy the PowerShell script
We have one streaming table to work with over a view of two tables
Test the Streaming DataSet
We can either create a report in Service
By clicking on Create Report, or in desktop by connecting to the hybrid dataset RealTimeDataTestingDE
For this test I am going to create the report in Service
You can't create measures in the Service. I am simply going to add a table in order to view the data.
Search for PowerShell and then Run as Administrator.
Quick tip: cls will clear the screen if you make any mistakes.
Then you can copy the script into PowerShell. For this example, the dim table will be created first.
A few things are changed from the sample code taken from the push dataset. Firstly, a while loop is added simply to send rows from the dataset to Power BI incrementally, but you don't need to use the while loop.
Then the default values are replaced with values from the dataset.
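A rough sketch of that kind of script is below. It assumes the push URL copied from the dataset's API info, a SQL login, and the view created above; the endpoint, server and credential values are placeholders, and the row payload must match the columns defined on the push dataset.

# Placeholders: use the Push URL from the dataset's API info and your own server/login details
$endpoint = "https://api.powerbi.com/beta/<tenant>/datasets/<dataset-id>/rows?key=<key>"
$connectionString = "Server=yourserver.database.windows.net;Database=yourdatabase;User ID=youruser;Password=yourpassword;"

# Read the rows from the SQL view
$connection = New-Object System.Data.SqlClient.SqlConnection($connectionString)
$connection.Open()
$command = $connection.CreateCommand()
$command.CommandText = "SELECT CustKey, DateKey, ProductKey, DateOfSale, Quantity, Cost, CustomerUpdateDate, FactUpdateDate, Name FROM dbo.VW_RealTimeDataStreamExample"
$reader = $command.ExecuteReader()

# Push each row to the Power BI push dataset
while ($reader.Read()) {
    $payload = @{
        CustKey            = $reader["CustKey"]
        DateKey            = $reader["DateKey"]
        ProductKey         = $reader["ProductKey"]
        DateOfSale         = $reader["DateOfSale"].ToString("yyyy-MM-ddTHH:mm:ss")
        Quantity           = $reader["Quantity"]
        Cost               = $reader["Cost"]
        CustomerUpdateDate = $reader["CustomerUpdateDate"].ToString("yyyy-MM-ddTHH:mm:ss")
        FactUpdateDate     = $reader["FactUpdateDate"].ToString("yyyy-MM-ddTHH:mm:ss")
        Name               = $reader["Name"]
    }
    Invoke-RestMethod -Method Post -Uri $endpoint -Body (ConvertTo-Json @($payload)) -ContentType "application/json"
}

$reader.Close()
$connection.Close()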
Straight away you can spot an issue. This is fine for new records but what if the data is also being updated?
This would involve actually deleting a record in real time streaming based on the key and at present I don’t know if that’s possible.
In the meantime, let's stick to inserts only.
Test the PowerShell Script
Time to add some data via SQL to see if it's working:
INSERT INTO [dim].[RealTimedatatestingcustomerDE] (Name) SELECT 'Example 3'
INSERT INTO [fact].[RealTimedatatestingDE]
(CustKey, DateKey, ProductKey, DateOfSale, Quantity, Cost)
SELECT CustKey, 20200220, 255, '2020-02-20 00:00:00.000', 100, 90 FROM [dim].[RealTimedatatestingcustomerDE] WHERE Name = 'Example 3'
Go back to the report in the Power BI Service
The data can now be viewed in the report
The push dataset can only show data added from the time it's turned on, so we don't get to see any of the data that was already in the tables before the creation of the push dataset.
Using the information above, it's not going to be able to deal with updates, only inserts. This can be looked at in a later post, but it would actually involve updating data in the push dataset, or deleting based on ID and then recreating.
Orchestrating the PowerShell Script
Add another clean record with the above insert SQL Script
Going back to the Service, you can click Refresh, but note that there is no new data in the push dataset; it only worked for the one record. We need to automate pushing the data to Power BI.
Azure Automation with PowerShell
Currently the services being used, such as the Azure SQL Database, are in POC_Resource_Group, so it would be good to add an Automation account in here.
Note that Create Azure Run As Account is set to Yes. This means it's a service principal and has a Contributor role in this subscription.
Set up Credentials
Runbook credentials are very similar to Azure SQL Server credentials. In your new Automation account, click on Credentials.
The new account is set as azure_pocmaintenance, with the same user name and a password that has been saved for later.
Next, make sure that all the PowerShell modules are up to date.
Next go to Runbooks and Create a Runbook
Now we are ready to add the PowerShell script that we have created.
You can use the Test pane to check everything is working.
It looks like there is an issue: the server isn't accessible to the runbook, so we need to give it access.
We need to amend the script slightly in order for it to work in the runbook, with one extra row (row 1):
$connectionString ="Server=$dataSource;Database=$database;trusted_connection=false; user id =$userName;Password=$password;Integrated Security=False"
relates to the data source variable
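A sketch of how the top of the runbook script might look, assuming the azure_pocmaintenance credential asset created above and placeholder server and database names (Get-AutomationPSCredential is the internal cmdlet available inside Azure Automation runbooks):

# Retrieve the credential asset stored in the Automation account
$credential = Get-AutomationPSCredential -Name "azure_pocmaintenance"
$userName = $credential.UserName
$password = $credential.GetNetworkCredential().Password

# Placeholder server and database names
$dataSource = "yourserver.database.windows.net"
$database = "yourdatabase"

# Row 1 from above: build the full connection string for the runbook
$connectionString = "Server=$dataSource;Database=$database;trusted_connection=false;user id=$userName;Password=$password;Integrated Security=False"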
This time the Test Completes.
(At this point you will need to be able to add some records via SSMS (SQL Server Management Studio) in order to test, using the INSERT INTO SQL above.)
Publish the RunBook
Has simply publishing the runbook worked? Let's add a record and check the report.
No, the new records haven't come through.
Schedule the Run Book
Back in the run book
We want to link the runbook to a schedule: Link to Schedule.
The schedules are only hourly, daily, weekly or monthly. Is this a one-time runbook, given that it just continually pushes data into Power BI? I have set it to start at 11 AM, in 10 minutes' time, so this can be tested then.
Ran the views again from above
It's missing the attempts made before the schedule was created. Let's try adding another one.
Having a look at the runbook in Azure:
It completed, but it seems to have just processed the one record and nothing after the last update. This means it's not a one-time runbook; it has to be scheduled for a recurrence.
You can set the runbook to run once an hour via Recurring, but we need it to run more often than this.
Run via a Logic App
Can we sort out this issue using a Logic App?
Start with a Recurrence trigger
For this example, set to 3 minutes. Choose Start Time and Time Zone as extra parameters for the recurrence trigger
Next, search for Azure Automation and choose the Create Job action
For this example, sign in to the Tenant
Once Signed in, Add the Subscription, Resource Group, Automation Account and Runbook name of the Runbook above
You can Save and Test the Logic App to ensure it runs successfully
Fully Test the Logic App
Now the Logic App is active and set to run every three minutes from the start date, start inserting records using the insert scripts above. You can check that the Logic App is running jobs.
So 4 jobs have run. Let's have a look at the report in the Power BI Service.
The records are being transferred to Power BI, although it seems that you have to refresh the screen for them to appear.
So, there are issues with this: you can only add new records, and you miss data added to the table before the push dataset was created. Also, the Logic App probably needs setting up with a service principal. But for this example we have:
Set up a Power BI Real Time Data set (Set to Push)
Created a PowerShell script moving data from the SQL view to the push dataset
An Azure Automation account and a runbook containing the PowerShell script
An Azure Logic App to run the Automation Runbook every 3 minutes
Let's have a look in a little more detail at the settings at the DevOps organisation level and at what the organisation is.
When setting up DevOps, an organisation of [My Name] was created, and another organisation was then created manually for the company.
Organisations can be treated as accounts and each organisation has its own URL. You will also hear organisations being called collections.
You must always start with one organisation but you can have multiple.
Go to Organisation Settings
This gives you the overview of the DevOps organisation and there are a few options in here to be aware of.
Q: Do you have a privacy document already in place?
Release Management (Any application within Release Management Service)
Analytics (Any application within the Analytics Service)
And time period, for example, the last 7 days.
You can browse the marketplace for additional DevOps services, like Analytics above, to gain insight into the health and status of your DevOps projects.
Just searching for Git brings up lots of free Git services that you can use.
Azure Active Directory
When DevOps was accessed, an email was used that was connected to a Tenant in Azure because we use Office 365
“Office 365 uses Azure Active Directory (Azure AD) to manage user identities behind the scenes. Your Office 365 subscription includes a free subscription to Azure AD so that you can integrate Office 365 with Azure AD if you want to sync passwords or set up single sign-on with”
Because of this, Azure DevOps connected up to the Azure Active Directory.
If we never want to allow Public Projects we can simply set the above policy to Off.
We can also disallow external guest access through policies.
Project Collection Administrator has permissions to administer build resources and permissions for the collection.
A collection is a container for a number of projects in Azure DevOps and corresponds to the Organisation.
Let's set up another Project Collection Administrator.
We want everyone in the POC Admin Group to also be a Project Collection Administrator
The DevOps Groups that have been created and Collection Level (Enterprise Level) Groups can all be seen here
You can also look at the users and access all the Settings per user from the User tab.
This gives you more information on what will be available on the board for each type, e.g. Agile, Scrum, etc.
DevOps pipeline pools are scoped to the entire organisation, so they can be shared across projects. You don't need to manage agents individually; you can organise them into pools.
An agent is installable software that runs one job at a time.
With Microsoft hosted agents, every time you run a pipeline you get a fresh virtual machine which is discarded after use
This is an example status badge in Boards
You can set variables at queue time unless the above option is enabled. If it is limited, only variables marked as settable at queue time can be set.
By default, the collection (Enterprise)-scoped identity is used, unless the Limit job authorization scope to current project is set in Project Settings > Settings.
An agent pool (above) defines the sharing boundary for all agents in that pool, whereas deployment pools are about deployment target machines that have agents installed on them.
Deployment pools are convenient when you want to deploy a project to different machines
Deployment groups represent the physical environments; for example, “Dev”, “Test”, “UAT”, and “Production”.
The deployment group is a layer over the deployment pool which makes the targets available to the release definitions within a project
As we have already established, the free tier allows you to have 1 parallel job, but what is a parallel job?
A job is a set of one or more build tasks that run sequentially on the same target.
You could, for example, have a project that contains a web app and a Web API. They don't need to be run sequentially; they can be run in parallel to save time.
Each app type can equate to 1 job within a build definition, and each build definition has its own build pool/agent.
If you have two licensed build pipelines you could run them concurrently which will decrease overall build time.
You could also split up tests across jobs to be executed across parallel builds which will reduce the overall time
You need to look at your Projects and what you are building and see if Parallel jobs are relevant to you, or if you are happy for the agents to run your builds sequentially
Use this option if you already have a GitHub account
In this example, the Start for Free option is used. Because I have an Azure account, DevOps is already logged in and it knows what my tenant is.
At this point there is no suggestion of a 30-day trial or any other information, so for the time being let's get started by adding a project. (This is because I have a Visual Studio account; this may be different for users without a Visual Studio subscription.)