Azure SQL Database Dev to Production Part 4

I have had quite a lot of issues with the whole dev to prod process for the SQL database. My last attempt, which I wrote about in this blog, worked well until I shut the project down. Then, once reopened, I would always lose my project or Git connection, so I went back to the drawing board, did a lot more research, and here are my new findings.

There is a Data Factory part to this, but I have already written a blog post about it, and that process has worked consistently ever since it was set up.

Resources used

  • Azure SQL Database and Server
  • Visual Studio (Enterprise)
  • Azure DevOps

Azure DevOps Repository

First of all you need to have Azure DevOps set up (I won't go into detail on this here).

In the DevOps repository I have a folder for Data Factory. The folder for SQLDB will be created later.

  • In DevOps, ensure you have a Git repository created, then click Clone to copy the Git location URL

In this example I am cloning right at the top of the repository

  • Click the Copy Button

Visual Studio

You cannot do this yet in Visual Studio Code. It has to be Visual Studio, and I have Visual Studio Enterprise 2019.

  • You can copy the repository location from the Clone copy (or browse a repository – Azure DevOps or GitHub)
  • Make sure you are happy with the path for the local copy
  • Click Clone. Your local repository then shows in Solution Explorer
  • This has added the folder on your C drive (it added the top level and the Data Factory and SQLDB folders)
  • And you can see this project in Solution Explorer
  • Copy the path on the C drive (and the folder, for example SQLDB)
  • In Visual Studio, in the top bar choose File – New – Project
  • Choose SQL Server Database Project – Next

The project will be SQLDB and will contain the SQL objects.

  • Click Create

In Visual Studio SOLUTION EXPLORER: You can see your empty database objects.

On the C drive, note that you now have a SQLDB folder along with the Data Factory folder.

  • Right click on the database name in Solution Explorer and go to Properties

It's important to be on the right version for the target platform.

  • Right click on the database name and choose Import – Database
  • Select the connection for the development database (and use Show Connection Properties to make sure your username and password are OK and the database connects)
  • Import the objects into the local project (no need to select anything else in the above box)
  • Then click Finish. Note that all your objects are now in Solution Explorer and on the C: drive (your local copy)
  • Is the project complete? Build – Rebuild Solution, which checks and validates the objects

Any time anything changes you need to rebuild your solution to update the code.

Warnings and Errors

For warnings and errors you can see all the issues by clicking on them. The build may fail because of errors; these always need resolving before you deploy to the target DB, e.g. Production.

Example warning: SQL71558: The object reference [staging].[].[KEY] differs only by case from the object definition [staging].[ST2].[Key].

  • Right click on the database in Solution Explorer and go to Properties
  • In Project Settings, untick Validate casing on identifiers
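
To picture what triggers this, here is a minimal hypothetical sketch (the procedure name is made up) – the column is defined as [Key] but referenced as [KEY], which differs only by case:

-- Hypothetical example: the column is defined as [Key]...
CREATE TABLE [staging].[ST2]
(
    [Key] INT NOT NULL
);
GO
-- ...but is referenced as [KEY]. With casing validation ticked,
-- this mismatch raises warning SQL71558 at build time
CREATE PROCEDURE [staging].[USP_CasingExample]
AS
    SELECT [KEY] FROM [staging].[ST2];
GO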

Example warning: SQL71502: Procedure: [dim].[USP_Date] has an unresolved reference to object [sys].[all_objects].

You can add the master database as a reference (right click on References).

Add Database Reference
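
As an illustration, this is the kind of procedure that raises the warning (the body here is a made-up sketch). [sys].[all_objects] is a system view that lives in the master database, so the project build cannot resolve it until master is referenced:

-- [sys].[all_objects] lives in master, so without a database
-- reference to master the project build raises SQL71502
CREATE PROCEDURE [dim].[USP_Date]
AS
    SELECT [name], [object_id], [create_date]
    FROM [sys].[all_objects]
    WHERE [type] = 'U';
GO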

Rebuild your codebase. It's important here to make sure your warnings and errors have been dealt with.

Rebuild updates your project locally after changes – I will look at how making changes with, for example, SQL Server Management Studio alters the process in a later blog.

  • Publish to the Git repository – go to Git Changes, make a note of your change and Commit All

Then click the arrow to push changes to the Git repository.

We now have the code in the repository in DevOps

It seems annoyingly easy to slightly mess your folder structure up. Here I have a SQL DB folder and another SQLDB folder inside it.

I only wanted the one. This does keep happening to me and it's very frustrating. Any pointers as to where I went wrong would be really appreciated.

Create your CI (Continuous Integration) Pipeline in Azure DevOps

Now we have the Code in GIT we can create our artifacts for the release pipeline.

  • In Azure DevOps go to Pipelines
  • Click New Pipeline
  • And choose your repository
  • Select a template

Right click on tasks and remove selected tasks until you are left with the following

You don’t really have to do much with these three jobs

At the pipeline level, ensure you use the right agent, for example windows-2019. We had errors because we use the OPENJSON function in the SQL code, but setting the right agent resolved the issue.
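
For context, this is the kind of OPENJSON call we have in the SQL code (a simplified sketch, not the project's actual procedure). OPENJSON needs SQL Server 2016 or later (database compatibility level 130+), which is presumably why the older default agent failed to build it:

-- Simplified OPENJSON sketch: shred a JSON array into rows
DECLARE @json NVARCHAR(MAX) = N'[{"Id":1,"Name":"Example"}]';

SELECT Id, [Name]
FROM OPENJSON(@json)
WITH (Id INT '$.Id', [Name] NVARCHAR(50) '$.Name');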

All the other jobs are parameterised. This should now be all set

  • Save and Queue – you can then run the pipeline to create your artifacts (Save and Run)
  • There may be warnings here. For some reason the warnings you clear in Visual Studio seem to show in DevOps. I would like to do a bit more research on this.
  • If a warning hasn't failed the process, you should now have your continuous integration artifacts.

Create your CD (Continuous Delivery) Release Pipeline in Azure Devops

Now we are onto Continuous Delivery. Moving the new code into Azure SQL DB

  • In Azure DevOps go to Pipelines – Releases – New release
  • For Artifacts, click Add

We want to use the latest build

Now add a stage. In our case we are only using Prod, so it's a simple release.

  • Start with an Empty Job

Add a task to the job

Here we chose the production Subscription

And we link to the DACPAC file that was created by the Visual Studio build. The DACPAC contains all the SQL objects.

The database uses variables and you can set these up in the variables tab

You can create a release to update your Production database

Once pushed, check your SQL database in Production to make sure you are happy that your changes have gone through.

And you can save your Visual Studio project and reopen it. The next stage is to update some objects and go through the process again, so watch this space.

Azure Data Factory Moving from Development to Production – Part 2. Using Key Vault for Linked Services

In Azure Data Factory Moving from Development to Production we looked at how we can use Azure DevOps to move the JSON code for the development Data Factory into production.

It's going well; however, I have been left with an issue. Every time I move into production, the details for the linked services have to be re-added. Let's have a look at the SQL Server and the Data Lake Gen2 account.

Development

Notice that the information has been entered manually, including the storage account key.

Again, in this instance the information has been entered manually. SQL Server authentication is being used because we have a user in the SQL DB with all the privileges that Data Factory needs.

DevOps Data Factory release Pipeline

Go into Edit on the release pipeline.

Within Prod Stage we have an Agent Process

We are looking for the Override Template Parameters section.

Note that currently Account Key and SQL Database Connection String are null.

Provisioning Azure Key Vault to hold the Secrets

Managed Identity for Data Factory

Copy your Azure Data Factory Name from Data Factory in Azure

You need to have a Key vault set up in Development

Get and List allow Data Factory to read secret information from the Key Vault.

Paste the data factory name into Select Principal

Key Vault, create a Secret for the Azure Data Lake Storage

For the Key Vault secret, I gave it the secret value by copying across the access key from the storage account's Access Keys section.

The content type was simply set as the name of the storage account for this exercise.

In Data Factory Create a Linked Service to the Key Vault

Test and ensure it successfully connects

Use the New Key Vault to reset the data Lake Linked Service

How does this Data Lake Linked Service change the DevOps Release Pipeline?

It's time to release our new Data Factory settings into production. Make sure you have published Data Factory into the DevOps Git repository.

Production Key vault Updates

We need to update Production in the same way as Development

  1. In the production Key Vault, add the production Data Factory name to Access Policies (as an application) with Get and List on secrets
  2. Ensure that there is a secret for the production Data Lake key, AzureDataLakeStorageGen2_LS_accountKey
  3. Check your Key Vault connection works in production before the next step

Azure DevOps Repos

In Azure Devops go to your Data Factory Repos

Notice that your linked service information for the Data Lake now mentions the Key Vault secret. It's not hardcoded anymore, which is exactly what we want.

Azure DevOps Release Pipeline

Go to Edit in the Data Factory release pipeline

When the job in Prod is clicked on, you can go to the Override Template Parameters section. And notice there is now an error.

AzureKeyVault1_properties_typeProperties_baseUrl is the missing parameter. Basically, at this point you need to delete the code in the Override Template Parameters box and then click the … button to regenerate the new parameters.

Override with production information (I saved the code so I could re-copy the bits I need).

Once done, notice that the -AzureDataLakeStorageGen2_LS_accountKey "" parameter is now gone, because it's being handled by the Key Vault.

Let's save and create a release.

New failures in the Release

2021-02-08T13:45:13.7321486Z ##[error]ResourceNotFound: The Resource 'Microsoft.DataFactory/factories/prod-uks-Project-adf' under resource group 'prd-uks-Project-rg' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix

Make sure that your override parameters are OK. I updated:

  • the Data Factory name, from Data Factory
  • the primary endpoint for Data Lake Storage, from Properties
  • the Vault URI, from the Key Vault Properties

Repeat the Process for SQL Database

With everything in place we need to introduce a connection string into Key Vault

I have a user set up in my SQL database. The user has GRANT SELECT, INSERT, UPDATE, DELETE, EXEC, ALTER on all schemas.

I want to include the user name and password in the Connection string and use SQL authentication
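
For reference, the user was set up along these lines – a sketch using the placeholder name and password from the connection string below, with the grant repeated for each schema:

-- Contained database user with SQL authentication (placeholder values)
CREATE USER [projectDBowner] WITH PASSWORD = 'Password1';
GO
-- Repeat per schema: everything Data Factory needs to read,
-- write and run procedures
GRANT SELECT, INSERT, UPDATE, DELETE, EXECUTE, ALTER
    ON SCHEMA::[staging] TO [projectDBowner];
GO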

Secret Name

AzureSQLDatabase-project-ConnectionString

Secret

 Server=tcp:dev-uks-project-sql.database.windows.net,1433;Database=dev-uks-project-sqldb;User Id=projectDBowner;Password=Password1;

The connection string has been set as above. For more information on connection strings see SQL Server Connection Strings.

Go back to Data Factory and set up the new secret for SQL Server.

This is successful

Data Factory and DevOps

  1. Back in Data Factory, publish the new linked service code
  2. Go into the dev repos and check the linked service code – make sure you are happy with the new Key Vault information
  3. Go to the prod Key Vault and make sure the secret is set with the connection string for the SQL DB
  4. Test that the Key Vault secret works in prod
  5. Back in DevOps, go to Release pipelines and Edit for the adf release CD pipeline (release pipelines are Continuous Delivery; build pipelines are CI, Continuous Integration)
  6. Edit the Prod stage (I only have Prod) ARM Template Deployment task, and copy the Override Template Parameters code into a file for later
  7. Delete the code and click the … to get the latest parameter information
  8. Re-add your production parameters; most can be taken from the code you just copied
  9. Create a new release
  10. Go to Linked Services in Data Factory and check they are still production. They still use Key Vault and they still work

Now this is all in place, the development Data Factory can be published up to production. There is no need to reset linked services, and all your information about keys and passwords is hidden in the Key Vault.

Azure SQL Database. Publishing from Development to Production Part 2

The Dev to Prod Process

Initially, in part one, we set up the process with Visual Studio 2019 and DevOps and moved all our objects across to production. Then with Data Factory we moved data from the production data lake into the production Azure SQL DB.

We have all the source data in a data lake (and it has been assured that the production data lake is the same as the development data lake).

We have a Data Factory in Production that goes through a DevOps release pipeline so we should now be able to use the Production Data Factory to Load all the Production Data into the Production SQL database on a regular basis.

What happens when you already have Objects and data in Target?

Only the changes will be released. So the next time you release into production you are releasing the delta

Lets see this in action

  • The Initial SQL database was released into Production with Visual Studio
  • A Production Data Factory moved all the data into the new Objects
  • Now we have an updated Dev SQL Database.

Open your Visual Studio Project

Open the project that was created in the last session.

In SQL Server Object Explorer

You have the Azure Server and Database. Right click and Refresh

And you have the local project DB, which contains all the objects. We can check the schema differences between the project DB and the latest DB within Azure.

Go to the dev SQL Database (Click on the icon to the left of the server to open it up)

On the Azure SQL Database, right click and choose Schema Compare.

For the target, click Select Target.

Open the local DB project. The database in this case has the same name as your solution in Solution Explorer. (Now I know I should have given my solution the project name and my project an extension of _DB to differentiate the two.)

Click OK

Click Compare.

Now you get to see what has been deleted. In this case a table and a procedure have been dropped.

Next we can see changes. If the table change is clicked on, you get more information about that change in the object definitions. In this case a test column has been added.
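
To make that concrete, the deployment generated from the DACPAC scripts just the delta, along these lines (the table and column names here are illustrative):

-- Illustrative delta: the existing table is altered in place
-- rather than dropped and re-created, so existing rows are kept
ALTER TABLE [dim].[Example]
    ADD [TestColumn] NVARCHAR(50) NULL;
GO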

This creates a quandary when it comes to loading in the data, because this table should be fully populated but the new column will be blank. Is it possible to do a full load for these updated tables with Data Factory, or do we need to look at something more complex?

And finally, additions. In this case there are lots of new tables and procedures, and two new functions.

Once happy, click Update and your changes will be published into Solution Explorer.

To Check, have a look for some of the new tables, SPs etc in Solution Explorer

Once completed, you can click the x icon to close the comparison window, and you can save your comparison information.

Rebuild the Project in Visual Studio

Now we want to Rebuild our Project within Solution Explorer

Right click on the project in Solution Explorer and choose Rebuild. This rebuilds all the files.

  • Rebuild rebuilds your entire project
  • Build just builds the changes

Process your Changes with GIT

Now it's in your project, you need to process those changes with Git.

In Git Changes, Commit All and Push.

And remember to add a message


These objects should now be in DevOps. You can go to DevOps Repos, then to your database-specific project, and check for new tables, SPs etc.

My new junk dimension objects are there, so this is all working.

Release the new database objects into Production database

Now all the code is in the repos, we can push the new and updated objects into production with a DevOps release pipeline.

There is already data in my production database. As an initial starting point I do a quick check on a few tables to get a feel for the data.

This SQL Script allows you to do a quick check on row counts in the production database

SELECT
    QUOTENAME(SCHEMA_NAME(sOBJ.schema_id)) + '.' + QUOTENAME(sOBJ.name) AS [TableName]
  , SUM(sPTN.Rows) AS [RowCount]
FROM sys.objects AS sOBJ
INNER JOIN sys.partitions AS sPTN
    ON sOBJ.object_id = sPTN.object_id
WHERE sOBJ.type = 'U'
  AND sOBJ.is_ms_shipped = 0x0
  AND index_id < 2 -- 0:Heap, 1:Clustered
GROUP BY sOBJ.schema_id
  , sOBJ.name
ORDER BY [TableName]
GO

Azure DevOps

Choose the database repository (you should also have a repository for Data Factory).

Build Pipelines

Go to Pipelines. Before releasing to Prod we actually have to build all our code into an artifact for the release pipeline

Click on your Project _DBComponentsCI pipeline (Continuous integration) set up in Part 1

Let's remind ourselves of this pipeline by clicking Edit.


We build the solution file from the repos in DevOps, then copy the files to the staging directory, and finally publish the artifact ready for release.

Come out of Edit and this time choose Run Pipeline

And Run.

Once it's run, there are warnings again, but for the time being I'm going to ignore these.

Otherwise the build pipeline runs successfully.

Release pipeline

Now we have rebuilt the artifact using the build pipeline, go to Pipelines – Releases and to your DB_CD release (continuous delivery).

We have a successful release. I can run the above SQL and check for differences. For a start, there were 39 objects and now there are 43, so you can immediately see that our production database has been updated.
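
If you want the object count as well as the row counts, a quick sketch like this works (it counts the common user object types – adjust the type list to suit):

-- Quick count of user objects to compare before and after a release
SELECT COUNT(*) AS [UserObjectCount]
FROM sys.objects
WHERE is_ms_shipped = 0
  AND [type] IN ('U', 'P', 'V', 'FN', 'IF', 'TF');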

The row count shows that we haven’t lost any data. We have simply updated the objects.

Part three will allow us to look more in-depth at the data side: how we deal with the data in dev and prod.

Azure Data Factory – Moving from Development to Production

When working on larger projects we need to merge changes from Developers. When all the changes are in the central branch we can then have an automated process to move development to Production

Smoke tests

In computer programming and software testing, smoke testing is preliminary testing to reveal simple failures severe enough to, for example, reject a prospective software release

Integration testing

Integration testing is the phase in software testing in which individual software modules are combined and tested as a group. Integration testing is conducted to evaluate the compliance of a system or component with specified functional requirements. It occurs after unit testing and before validation testing

Resources Involved with the current Project

  • Azure DevOps
  • Azure SQL Server
  • Azure SQL Database
  • Azure Data Factory
  • Azure Data Lake Gen 2 Storage
  • Azure Blob Storage
  • Azure Key vault

Each resource has its own specific requirements when moving from Dev to Prod.

We will be looking at all of them separately along with all the security requirements that are required to ensure that everything works on the Production side

This post specifically relates to Azure Data Factory and DevOps

Azure Data Factory CI/CD Lifecycle

Git does all the creating of the feature branches and then merging them back into main (master).

Git is used for version controlling.

In terms of Data Factories, you will have a dev factory, a UAT factory (if used) and a prod Data Factory. You only need to integrate your development Data Factory with Git.

The Pull request merges feature into master

Once published we need to move the changes  to the next environment, in this case Prod (When ready)

This is where Azure Devops Pipelines come into play

If we are using Azure DevOps pipelines for continuous deployment, the following things will happen:

  • The DevOps pipeline will get the PowerShell script from the master branch
  • Then get the ARM template from the publish branch
  • Deploy the PowerShell script to the next environment
  • Deploy the ARM template to the next environment

Why use Git with Data Factory

  • Source control allows you to track and audit changes
  • You can do partial saves when, for example, you have an error. Data Factory won't allow you to publish, but with Git you can save where you are and then resolve issues another time
  • It allows you to collaborate more with team members
  • Better CI/CD when deploying to multiple environments
  • Data Factory is many times faster with a Git back end than it is when authoring against the Data Factory service, because resources are downloaded from Git
  • Adding your code into Git rather than simply into the Azure service is actually more secure and faster to process

Setting Up Git

We already have an Azure DevOps project with Repos and Pipelines turned on.

We already have Azure subscriptions and resource groups for both the production and development environments.

There is already a working Data Factory in development

In this example Git was set up through the Data Factory management hub (the toolbox)

DevOps Git was used for this project rather than GitHub because we have Azure DevOps

Settings

The project name matches the project in DevOps.

The Collaboration branch is used for Publishing and by default it’s the master branch. You can change the setting in case you want to publish from another branch.

Import existing resources to repository means that all the work done before adding Git can be added to the repository.

DevOps is now set up.

Close Azure Data Factory (if it is open) so we can reopen it and go through the Git process.

Where to find your Azure DevOps

You should now be able to go to your own area in DevOps, open Repos, and select the created project.

You will need an account in Azure DevOps; click on Repos, and that account must be higher than Stakeholder to access Repos.

You can then select the Project you have created

Using Git with Azure Data Factory

In Azure Open up your Data Factory

Git has been Enabled (Go to Manage to review Git)

The master branch is the main branch with all the development work on it

We now develop a new feature. Create a feature branch: + New Branch.

We are now in the feature branch, and I am simply adding a description to a Stored Procedure activity in the pipeline. However, this is where you will now do your development work rather than within master.

For the test, the description of a Pipeline is updated. Once completed my changes are in the feature 1 branch. I can now save my feature

You don’t need to publish to save the work. Save all will save your feature, even if there are errors

You can go across to DevOps to see the files and history created in Azure DevOps for the feature branch (we will look at this once the branch is merged back in).

Once happy, create the pull request.

This takes you to a screen to include more details

Here I have also included the iteration we are currently working on in DevOps Boards.

A few tags are also added. Usually, someone will review the work and will also be added here.

The next screen allows you to approve the change and Complete the change

In this case I have approved. You can also do other things like make suggestions and reject

Completing allows us to complete the work and removes the feature branch. Now all the development in the feature branch will be added to the main, master branch.

In Data Factory, go back to the master branch and note that your feature updates are included

We now publish the changes in the master branch, which creates the adf_publish branch. This publish branch holds the ARM template that represents the pipelines, linked services, triggers etc.

Once published, in DevOps Repos there are now files to work with.

You can see your change within the master branch

(The changes would normally be highlighted on the two comparison screens)

Here we open the pipelines folder, go to the Compare tab and find the before and after code.

And you can also see your history

The ARM templates are in the adf_publish branch, if you select this branch.

Once done, we need to move the changes to the next environment, in this case prod (when ready).

This is where Azure Devops Pipelines come into play

Continuous Development using Azure DevOps

We need another Data Factory object to publish changes to

In this case, the production Data Factory has been created in the Azure Portal, within the production subscription and production resource group.

Git Configuration is not needed on the Production resource. Skip this step

Create your tags and Review and Create

DevOps Pipelines

For this specific project, we don't want to update production automatically when we publish to dev. We want this to be something that we can do manually.

Go to Pipelines and create a new release pipeline (in DevOps).

Click on Empty job because we don’t want to start with a template

And because for this project there is no UAT, just production, name the release pipeline Prod.

Click on the X to close the blade

We need to sort out the artefact section of the pipeline.

Click on Add an Artefact and choose an artefact from Azure Repos.

We may as well add the adf_publish branch, which contains the ARM templates, and the master branch.

The source alias was updated to _adf_publish.

Both are Azure Repos artefacts.

Next we move to Prod and start adding tasks.

Click on "1 job, 0 tasks" to get to the tasks.

Click + against Agent Job to add a task. Our task is ARM Template Deployment.


Click Add

Then click on the new task to configure it.

The first section is where you select your production environment

Next you need to select the ARM template and the ARM template parameters file. These are updated in the DevOps artefact every time you publish to dev.

The JSON templates are in the adf_publish branch

Now you need to override the template parameters, because these are all for dev and we need them to be production. These are:

These will be specific to your own Data Factory environment. In this instance we need to sort out the information for the Key Vault and the data lake storage account.

factoryName

This one is easy. The only difference is changing dev to prd

AzureDataLakeStorageGen2_LS_properties_typeProperties_url

The Production Data lake Storage account must be set up in dev and prod before continuing. Go to this Storage account resource

This information is also stored in our Key Vault as a secret which we can hopefully use at a later date.

It is taken from Storage Account – Properties. We want the primary endpoint for the Data Lake Storage.

Copy the Primary Endpoint URL and override the old with the new Prod URL in DevOps

AzureKeyVault1_properties_typeProperties_baseUrl

We need to update https://dev-uks-project-kv.vault.azure.net/

Let's get this overridden. We already have a Key Vault set up in production. Get the URI from Overview in the production Key Vault service.

And let's add this into our DevOps parameter.

AzureDataLakeStorageGen2_LS_accountKey

This is empty but we could add to it later in the process.

Account keys are the kind of thing that should be kept as secrets in Key Vault in both dev and prod.

Let's get them set up. Just for the time being, let's ensure we have the data lake storage account key within our development and production Key Vaults.

Key Vault

Within the Key Vault in development, create a secret with the name AzureDataLakeStorageGen2LSaccountKey.

And the key from the storage account comes from…

And repeat for the production Key Vault.

For the time being though, let's leave this blank now we have captured the information in the Key Vault. It should come in useful at a later date.

AzureSqlDatabaseTPRS_LS_connectionString

This was also empty within the parameters for dev.

You can get the connection string value by going to your SQL database – Connection Strings – PHP, and finding the try statement.

And here is the Connection String value for production

Server=tcp:prd-uks-project-sql.database.windows.net,1433;Database=prd-uks-project-sqldb;

You can also add this information into Key Vault as a secret and repeat for Production

For the first instance we are going to leave this empty, as per the dev parameters. At some point we should be able to set up the security principal so we can change the hardcoded values to secrets.

The parameters created in dev are now overridden with the production values
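
Putting that together, the Override Template Parameters box ends up looking something like this (the production resource names here are illustrative – use your own values):

-factoryName "prd-uks-project-adf" -AzureDataLakeStorageGen2_LS_properties_typeProperties_url "https://prduksprojectlake.dfs.core.windows.net/" -AzureKeyVault1_properties_typeProperties_baseUrl "https://prd-uks-project-kv.vault.azure.net/"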

The Pipeline is then named

Create a release

Once Saved. Click back on Releases

For this type of release we only want to do it manually.

Create a Release for our very first manual release

Click back on releases

And click on release 1 to see how it is doing

You can click on Logs under the Stages box to get more information

Now you should be able to go back to the production Data Factory and see that everything has been set up exactly like dev.

Go and have a look at Linked Services in the production Data Factory.

Note that they are all set with the Production information

We now have a process to move Dev to Prod whenever we want

The Process

Throughout the sprint, the development team will have been working on feature branches. These branches are then committed into the master branch and deployed to dev.

Once you are happy that you want to move your Data Factory across from dev into prod, go to the DevOps release pipeline.

Create Release to create a new release

It uses the artefact of the ARM template, which is always up to date after a publish.

This will create a new release and move the new information to Prod

All your resources will then be able to move quickly from dev to prod, and we will look at this in further posts.

Setting up a Board in Azure DevOps (AGILE)

It's time to start tracking projects with Azure DevOps Boards.

With Boards, teams can manage software Projects. They can track user stories, backlog items, tasks, features etc. You can choose the environment you want to work with like AGILE or SCRUM.

For this example, there is only one developer (me), and I am tracking my progress on a project where I have been the single developer.

Agile is the process that is going to be used.

Agile is an iterative approach to project management and software development that helps teams deliver value to their customers faster and with fewer headaches. Instead of betting everything on a “big bang” launch, an agile team delivers work in small, but consumable, increments.

First, open Azure DevOps

https://azure.microsoft.com/en-gb/services/devops/

Sign into your DevOps account.

And create a new Project under your Enterprise

Work Items

Now we have a new project we can start working with Boards but first we need to understand what our AGILE work items are and how they interact with each other.

Epic

I have specific Epics I want to achieve

  • Reporting from the company's main system
  • Social media reporting
  • Reporting for the surveys
  • Reporting for all the telephone enquiries
  • Reporting for complaints
  • Main reporting area for all the data auditing
  • Reporting for report usage

So, just looking at this, I want 7 epics to work with (to start with).

Feature

A feature is some complete behavior that implements a new business process. So, for example, for the social media epic we want:

  • Overall View of Business Performance as provided by the surveys
  • Monthly level reporting on customer satisfaction with drill through

User Stories

User stories sit within a feature. These are the smallest changes that will result in a behavior change. If you don't observe a change, then it can't be demonstrated.

For example: as the Customer Satisfaction Manager I want to see the survey results by month, and have the ability to see how we are doing over the year and at the same point in the year, because we need to know if we are doing well as a company and to see our trends in satisfaction.

As the Company Head of Service I want a full review of our performance using our scoring system against customer satisfaction and how our competitors are doing for benchmarking

Task

These are within a User Story and are the smallest independently deployable change.

  • Get file of Survey data (Pilot project)
  • Move Survey data into the Azure Data Warehouse (Staging area) Incremental loading using Data Factory
  • Establish dimensions and facts
  • Create Dim 1
  • Create Dim 2…….
  • Create Power BI Data Flows
  • Create Top level report by Month of Customer Satisfaction containing last 12 months
  • Create KPIs for Satisfaction against this time last year
  • Drill through to detailed report
  • Drill through to lowest level

Bug

A bug is an error in the code.

  • Incremental Refresh is causing Duplicates
  • NULL data Items in Survey Data set

Issue

An issue is more related to a process, when the system fails to meet user expectations.

  • Created a report based on poorly Served Customers but this needs changing to the new business logic.

Test Case

Test cases can validate individual parts of your code. We will look at this item in another blog post

Boards

Let's start with boards. These boards are Kanban boards.

A Kanban board is one of the tools that can be used to implement Kanban to manage work at a personal or organizational level.

When the board is first opened up, Epics don’t seem to be available

With Boards selected, go to Configure Team Settings.

Make sure that Epics are ticked under Backlogs.

And now, with Epics selected, click on New Item and start adding in the required epics.

Next we need to start adding some features. It would seem that you can't add the features and then connect them to the epics; you have to create the features from the epics.

Go back to the epic, click on … and Add Feature.

And you can then see the feature within the Epic

Now we have a feature, we can add the user story. Go to Features and click on Add User Story.

Same again. Move to User Stories and add Tasks

You can also go into the items and add lots more detail

This link to the Microsoft Documentation gives you lots of information regarding, effort, story points, business value, Priority etc.

It's always good to create the epic and work your way down into the tasks.

Retrospective Items

For this example, items are being added for a sprint that was closed some time ago because the project is being retrospectively moved into Azure Boards

Epics

I am starting them all from the beginning of this particular Project and for this I can add a Start Date

Stories

The start date also applies to stories, but these will be set when the stories were originally created.

However, when you close a task and move the whole story into completed, you can't set a completed date.


If you click on History and look at the state graph, you can't change the New and Resolved times. These are set at the time of the action, which makes it difficult to add past information into the board.

Backlogs

  • Backlogs help you to quickly define work (user stories, backlog items, requirements)
  • You can reorder the backlog so you work on the highest priority first
  • Add details and estimates
  • Assign items to team members and sprints by either bulk update or drag and drop
  • Map items within a hierarchy
  • Review the portfolio of work
  • Forecast work to estimate deliveries
  • Display rollup progress, counts and totals to show completion of work

Basically your backlog displays work items as lists and boards display them as cards

The Remaining Active User Stories have been dragged to Iteration 1

Work Items

All the work items you create can be viewed in here as well as created

Hopefully this gives you a little head start into the world of Azure DevOps boards

DevOps Organisation Settings

Let's have a look in a little more detail at the settings at the DevOps organisation level, and at what the organisation is.

When setting up DevOps, an organisation was created at the level of [My Name], and another organisation was then created manually for the company.

Organisations can be treated as accounts, and each organisation has its own URL. You will also hear organisations being called collections.

You must always start with one organisation but you can have multiple.

Go to Organisation Settings

General

Overview

This gives you the overview of the DevOps organisation, and there are a few options in here to be aware of.

Privacy URL

Your privacy URL can be added here to link to your organisation's document describing how you handle internal and external guest data. If you have a public website or app, you are required to have a dedicated privacy policy URL.

Q Do you have a privacy document already in place?

https://www.freeprivacypolicy.com/blog/privacy-policy-url/

Organisation Owner

The owner is set to the person who created the organisation account, but this can be changed.

Projects

New Projects can be set up here. See https://debbiesmspowerbiazureblog.home.blog/2020/03/06/create-a-devops-project/

Users


A good area to find out the Access levels for each user

  • Stakeholder: Partial access and can be assigned to unlimited users for free
  • Basic: Provides access to everything but test plans. Up to 5 users free, then £4.48 per month
  • Basic + Test Plans: Includes test plans. £38.76 per month
  • Visual Studio subscription: For users with a Visual Studio Subscription and features are enabled based on Visual Studio Enterprise, Professional Test Professional or MSDN Platform

Group Rules

DevOps includes group based licensing for Azure Active Directory (Azure AD) Groups and Azure DevOps groups.

Azure Active Directory Groups are created in Azure Active Directory


DevOps Groups and Collection Level Groups can be found within the Permissions section so we can look at this in more detail later.

Add a group rule to assign an access level or extension to the group, and resources in Azure DevOps are assigned to all members of the group.

When users leave a group the licenses are freed and returned to your pool.

Imagine this scenario

It will be easier to add the following Groups

  • Project A Contributor Group and add Debbie and Jo
  • Project A Reader Group and add Tess
  • Project A Administrator Group and add Sarah
  • Project B Reader Group and add Debbie and Tess
  • Project C Contributor and add Jo
  • Project C Administrator and add Debbie

To manage licenses and group rules, you must be a Project Collection Administrator (PCA) for the organization. Check this within Security > Permissions

Click Add a group rule

Both users within this new group have a Visual Studio account, so the group level is set to Visual Studio. However, with group rules you assign the access level for the users at group rule level.

At this point I also added a new Group.

You can click on the … Button after Last Evaluated to get to either Manage Group Rules or Manage Members

These are the two rules for this group

What happens when your users have a mix of access levels within the one User Group?

The users get their access levels from the rule, so their access level would be reset.

Billing

Before Looking at Billing you can use the Azure Calculator to get more of a feel of what you need

https://azure.microsoft.com/en-gb/pricing/calculator/?service=azure-devops

Note that for the 5 Basic plans, the first 5 are free, but the 10 developers who need the Basic + Test Plans license are the ones adding to the monthly cost.


Why would you add additional parallel CI/CD jobs on either Microsoft-hosted or self-hosted pipelines?

See Parallel jobs for more information

Azure DevOps billing is through Azure, and because billing has not yet been set up we only have access up to the free tier limits.

If you click on Set Up Billing, you need to choose an Azure subscription to add it to.

You can't add it to your personal credits because of the spending limit caps. Once set up, you can then manage the paid access for your users, bearing in mind the free usage tier.

https://docs.microsoft.com/en-us/azure/devops/organizations/billing/buy-basic-access-add-users?view=azure-devops

Auditing

Allows you to see all the auditable events; you can export the log and filter on specific events.

Global Notifications

You will receive notifications for lots of actions in DevOps, like when a build completes or a deployment approval is pending. You can configure the organisation notifications from here.

Usage

Usage allows you to see what's been going on in DevOps.


You can also filter the information and choose the columns that you are interested in.

You can Select

  • TFS (Any application within the Organisation Service account, e.g. TFS https://dev.azure.com/Organisation/)
  • Release Management (Any application within Release Management Service)
  • Analytics (Any application within the Analytics Service)

Statuses

  • Normal
  • Delayed
  • Blocked

And time period, for example, the last 7 days.

Extensions


You can browse the marketplace for additional DevOps services, like Analytics above, to gain insight into the health and status of your DevOps projects.

Just Searching for Git brings up lots of free Git Services that you can use

Azure Active Directory

When DevOps was accessed, an email was used that was connected to a Tenant in Azure because we use Office 365

"Office 365 uses Azure Active Directory (Azure AD) to manage user identities behind the scenes. Your Office 365 subscription includes a free subscription to Azure AD so that you can integrate Office 365 with Azure AD if you want to sync passwords or set up single sign-on."

Because of this, Azure DevOps connected up the Azure Active Directory

Security

Policies

If we never want to allow Public Projects we can simply set the above policy to Off.

We can also disallow external guest access through policies.

Permissions

A Project Collection Administrator has permissions to administer build resources and permissions for the collection.

A collection is a container for a number of projects in Azure DevOps and corresponds to the Organisation.

Let's set up another Project Collection Administrator.

We want everyone in the POC Admin Group to also be a Project Collection Administrator

The DevOps Groups that have been created and Collection Level (Enterprise Level) Groups can all be seen here

You can also look at the users and access all the Settings per user from the User tab.

Boards

Process

Gives you more information on what will be available on the board for each type, e.g. Agile, Scrum etc.

Pipelines

Agent Pools

DevOps Pipeline Pools are scoped to the entire organisation so they can be shared across Projects. You don’t need to manage Agents individually. You can organise them into Pools

An agent is installable software that runs one job at a time.

With Microsoft hosted agents, every time you run a pipeline you get a fresh virtual machine which is discarded after use

Settings

This is an example status badge in Boards

You can set variables at queue time unless the above option is enabled; if it is limited, only variables marked as settable at queue time can be set.

By default, the collection (Enterprise)-scoped identity is used, unless the Limit job authorization scope to current project is set in Project Settings > Settings.

Deployment Pools

An agent pool (above) defines the sharing boundary for all agents in that pool, whereas deployment pools are about deployment target machines that have agents installed on them.

Deployment pools are convenient when you want to deploy a project to different machines

Deployment groups represent the physical environments; for example, “Dev”, “Test”, “UAT”, and “Production”.

The deployment group is a layer over the deployment pool which makes the targets available to the release definitions within a project

Parallel Jobs

As we have already established, the free tier allows you to have 1 parallel job. But what is a parallel job?

A job is a set of one or more build tasks that run sequentially on the same target.

You could have, for example, a project that contains a web app and a web API. They don't need to be run sequentially; they can be run in parallel to save time.

Each app type can equate to 1 job within a build definition. Each build definition has its own build pool/agent.

If you have two licensed build pipelines you could run them concurrently which will decrease overall build time.

You could also split up tests across jobs to be executed across parallel builds which will reduce the overall time

You need to look at your Projects and what you are building and see if Parallel jobs are relevant to you, or if you are happy for the agents to run your builds sequentially

OAuth Configurations

Register and Authorise your app

https://docs.microsoft.com/en-us/azure/devops/integrate/get-started/authentication/oauth?view=azure-devops

Artifacts

Storage

Check the storage sizes of your artifacts. Remember you get 2 GB free for your artifacts.

Hopefully this gives you a little more information on each section of your organisational settings. You need to understand at a basic level:

  • Who your teams will be and what they will require (set access levels, groups and rules up)
  • What you will be creating, and can they be built in parallel?
  • What environments will you be releasing to?

Each section can then be looked at in much more detail

Create a DevOps Project

At this point we know that we are using the DevOps Service rather than the on-premises DevOps Server, so go to https://azure.microsoft.com/en-us/services/devops/?nav=min

To get to your initial DevOps page

Which gives you three routes in. Do you…

  • Start Free?
  • Start Free with GitHub?
  • Or Sign in to Azure DevOps if you already have an account?

At this point, we don’t have an account

Making Sense of DevOps Pricing

Start Free or Start Free with GitHub

Start free with GitHub

Use this option if you already have a GitHub account

In this example the Start free option is used. Because I have an Azure account, DevOps is already logged in and it knows what my tenant is.

At this point there is no suggestion of a 30 day trial or any other information, so for the time being let's get started by adding a project. (This is because I have a Visual Studio account; this may be different for users without a Visual Studio subscription.)

Also Note the level https://dev.azure.com/debbieedwards

This is the organisation level, and this is my own personal DevOps account. What happens if you want to set up a new organisation to connect related projects and scale out to enterprise level?

Select New organisation

And New Project

The new organisation level has been set up with the company's name.

I can now start working with DevOps at an organisational level. We can have 5 basic users for free so for the time being, this is what we will stick to.

We know we want a Private DevOps area

Version Control Git has been selected because this seems to be the one that other team members are the most comfortable with

Work Item Process Agile, Basic, CMMI, Scrum

The default is Agile, but let's have a quick look at each of these processes.

Basic

The simplest model.

Agile

Agile includes Scrum. It works great if you want to track user stories and bugs on a Kanban board.

Scrum

Supports the Scrum methodology. Really good for tracking backlog items and bugs on a Kanban board.

CMMI

If your team follows more formal methods, use CMMI (Capability Maturity Model Integration).

For this Project, the Agile approach is selected

Invite a user into the Project.

With under 5 users, DevOps is free. We won't be doing anything with test plans at the moment, so let's add one other user into this area.

DevOps Access Levels

  • Stakeholder: can be assigned to users with no license or subscription who need a limited set of features
  • Basic: provides access to most features; up to 5 users is free
  • Basic + Test Plans: the user has access to test plans, but this costs around £34 a month extra and you don't get any free accounts
  • Visual Studio Subscription: assign to users with a Visual Studio subscription

Check Levels of Access


Go back up to the organisational level and select Organisation settings in the bottom left hand corner of the screen


Next go to Users and you can check the settings. Note that one account is under a Visual Studio Enterprise subscription; this is assigned to users who already have a Visual Studio subscription.

We have 1 basic account which will be free at the moment

Checking Spending

Obviously the one thing you want to do is check that you aren’t spending money unnecessarily.

Still in Organisation Settings

Currently there is no billing applicable but this section will need a more detailed how to later on

Checking what has been done at organisation level

Here you can see that Tess has been added by myself to a Group and her access was set to basic.

There should be a post coming along soon that will look into Organisation Settings in more detail.

We have created a project and set up new users with Basic level accounts.

Getting back to your Project

Close DevOps down and then reopen it.


This time you can sign in and go straight to your new project

Next time we will start using some of the services on offer, like Boards, Repos, Pipelines, Test Plans and Artifacts.

Making Sense of Pricing for Azure DevOps to get started

You decide that Azure DevOps is the way to go because you want to make use of all the features. Specifically

  • Azure pipelines to build and release code
  • Boards to do all your planning
  • Repos so you can use, for example GiT as your code repository
  • Artifacts to share packages across projects
  • Test Plans to help you test what you have built

Azure DevOps Services Costings

DevOps is free for Open Source projects and small projects up to 5 users

https://azure.microsoft.com/en-gb/pricing/details/devops/azure-devops-services/

Azure DevOps Services

Individual Services

Taking Azure DevOps Services as the starting point, the first area to look at is individual services.


There are only two individual services to choose from: Pipelines and Artifacts. This would be useful if you simply want to be able to build code, release it into specific environments, and save your artifacts for use across projects.

CI/CD – Continuous Integration and Continuous Delivery (or Deployment)

Along with these two services there are sliders so you can optimise the services for your requirements, and it would be helpful if there were more information about the different options.

Azure pipeline Options

First of all we need to understand that Microsoft-hosted will be in the public cloud. The jobs are run on a pool of Microsoft-hosted agents; each time a job is run, a fresh VM gets created and is then discarded after use. Self-hosted will be on premises, using self-hosted agents.

With the above option you can only run one job at a time for free, with up to 1,800 minutes per month. And remember that your job is building and releasing code.

Let's see what happens with Microsoft-hosted if we move the slider to 10.


It's not clear what this actually means. Would you be paying £298 per month at the top end if you had 10 developers using the service concurrently, or do you simply pay for what you use? So if you don't go to more than one concurrent job at a time, is it still free?

Azure Artifact Options

This is clearer. Artifacts are stored so you are paying for storage.

User Licenses

If you want to use all or most of the services, you can get an individual user license, much in the same way that you would buy a Power BI Pro per-user license.

The only difference between the Basic plan and Basic + Test Plans is the test plans, but there is a fairly big price difference.

The question is, how useful are test plans and can you do without them? Test plans will be looked at in more detail later.

Basic Plan

If you are happy to go without test plans, it's worth looking in more detail at the finer points.


However….

Azure Pipelines: Includes the free offer from INDIVIDUAL SERVICES and the free offer is specifically for 1 Free parallel job

Does that mean that, even paying £4.48 per license, you may have extra charges if you run a job in parallel with another developer?

If two developers are running at the same time who gets hit with the charges?

Could this be understood as: if one user puts two jobs out concurrently they would be charged, but if two users have one job running each, this wouldn't be charged, as it's per user?

Artifacts: 2 GB free per month, and then the assumption is that you move on to paying for extra storage. Is this a pay as you go model?

Basic + Test Plans

The same criteria apply to this plan, so the same questions still apply.

There are no free plans with this, and the cost is £38.76 per user per month, so the assumption is that this plan would only be required for users who need to test the system?

More information is required in regard to test plans: are they worth the extra £34.28 a month?

Azure DevOps Server

DevOps Server is the on-premises offering built on a SQL Server back end. DevOps Server is a good option when all your services are on premises and you have, for instance, Microsoft SQL Server 2019.

You can either pay month to month through Azure or buy a 3 year software license.

If you buy through Azure it entitles you to use the cloud service.

With either option you need Windows or Windows Server Licenses for the Servers running Azure DevOps Server 2019

Team Foundation Server is now Azure DevOps Server.

Pricing is not established on the web site so it may need to be a call to the Microsoft Sales team to ensure you get the right fit for your needs.

If you already use the cloud the recommendation will be to go for a DevOps Service

However there are still some questions in relation to this and what is the best option, and can these options be mixed and matched when dealing with different types of users?

Introduction to Azure DevOps

What is Azure?

Taking the first part of Azure DevOps, Azure is Microsoft's cloud computing platform. It hosts hundreds of services in over 58 regions (e.g. North Europe, West US, UK South) and is available in over 140 countries.

As you can see, lots of Azure services have already been consumed throughout these blogs. Azure SQL Databases, Azure Data Lake gen2, Azure Blob Storage, Azure Data Factory, Azure Logic Apps, Cognitive Services etc.

Business processes are split into Infrastructure as a Service, IaaS (VMs etc.), Platform as a Service, PaaS (see the services above) and Software as a Service, SaaS (Office 365, Dropbox, etc.).

You can save money by moving to this OpEx model (operational expenditure) from the CapEx model (capital expenditure), because you pay for what you need as you go, rather than having to spend money on your hardware, software, data centers etc.

Cloud services use economies of scale, in that Azure can do everything at a lower cost because it's operating at such a large scale, and these savings are passed on to customers.

On Demand Provisioning

When there are suddenly more demands on your service you don’t have to buy in more hardware etc. You can simply provision extra resources very quickly

Scalability in Minutes

Once demand goes down you can easily scale down and reduce your costs, unlike on premises, where you have to provision for maximum hardware requirements just in case.

Pay as you Consume

You only pay for what you use

Abstract Resources

You can focus on your business needs and not on the hardware specs (Networking, physical servers, patching etc)

Measurable

Every unit of usage is managed and measurable.

What is DevOps?

A set of practices intended to reduce the time between committing a change to a system and the change being placed into normal production, also ensuring high quality

Testing, Reviews, Moving to production. This is the place where developers and the Operations team meet and work together

Pre DevOps

If we don't work within a DevOps framework, what do we do?

Developers will build their apps, etc and finally add it into Source Control

Source Control or Version Control allows you to track and manage code changes. Source Control Management Systems provide a history of development. They can help resolve conflicts when merging code from different sources

Once in source control, the testing team can take the source code and create their own builds to do testing.

This will then be pushed back to the development team and will go back and forwards until everyone is happy. Here we can see that the environments we are using are Dev and Test

Once Complete, it is released into Production.

This is a very siloed approach. Everyone is working separately and things can take time and you will get bottlenecks

The DevOps Approach

Everyone works together. You become a team and time to market becomes faster. Developers and Operations are working as a single team

DevOps Tools

Each stage uses specific tools from a variety of providers and here are a few examples

  • Code – Eclipse, Visual Studio, Team Foundation Services, Jira, Git
  • Build – Maven, Gradle, Apache Ant
  • Test – JUnit, Selenium
  • Release – Jenkins, Bamboo
  • Deploy – Puppet, Chef, Ansible, SaltStack
  • Monitor – New Relic, Sensu, Splunk, Nagios

We need all these tools to work together so we don’t need to do any manual intervention. This means that you can choose the ones that you have experience in.

Components of Azure DevOps

Azure Boards

People with a Scrum and Project Management background will know how to create the features within the Boards. Epics, Stories, Tasks etc

Developers create and work on tasks. Bugs can be logged here by the testers

Azure Repos

Push the development into Source Control to store your information. Check in your code within Azure Repos.

There are lots of repository options to choose from in Repos to suit your needs, like Git or TFVC.

Azure Pipelines

Developers build code, and that code needs to get to the repos via a pipeline. The code is built within the pipeline.

The code is then released into Dev, Test, Prod, QA etc. And from, say, the Test or Dev environments we can……

Azure Test Plans

Test, using Azure Test Plans. For example, if you have deployed a web service, you want to make sure it's behaving correctly. Once tested, the code will go back to the pipeline to be built and pushed to another environment.

Azure Artifacts

Collect dependencies and put them into Azure Artifacts

What are dependencies?

Dependencies are logical relationships between activities or tasks that mean the completion of one task is reliant on another.

Azure Boards

Work Items

The artifact that is used to track work on the Azure board.

  • Bug
  • Epic
  • Feature
  • Issue
  • Task
  • Test Case
  • User Story

So you create work items here and interact with them on the board

Boards

Work with Epics, Features, Tasks, Bugs etc.

Includes support for Scrum (agile process framework for managing complex work with an emphasis on software development) and Kanban (a method for managing and improving work across human systems. Balances demands with capacity)

Backlogs

How do you prioritise your work items?

Sprints

Say your sprint is 10 working days (2 weeks). What work can be accomplished within this sprint?

Dashboards

Overall picture of the particular sprint or release

Repos

We can use Git or Team Foundation Version Control (TFVC). The example uses Git.

  • Files
  • Commits
  • Pushes
  • Branches
  • Tags
  • Pull Requests

You create your own branch from the master branch. Do your testing and changes and push back from your branch to the master branch.

Pipelines

Where is your code? It's in Git.

Select the Git source, like Azure Repos Git or GitHub etc.

Get the code from the master branch

How do you want to build the project?

Choose from lots of templates: Azure Web App, ASP.NET, Maven, Ant, ASP.NET with containers, C# Function, Python package, Android etc.

Next provide the Solution path and the Azure Subscription that you want to deploy to

This takes the source code from the GiT repository and builds the source code.

The build will then give you logs to show you how the build of the project happened

Next time when you check in code, it will automatically trigger the pipeline to build the code

Then the build needs to be released via a release pipeline.

This is where you release to the correct Azure Subscription and the code will be deployed. You can also add in approvals to ensure you get the pre approval required to release the code.

Conclusion

This is just a whistle-stop tour of DevOps. Test Plans and Artifacts haven't been discussed in much detail, but it gives you the basics of what is included in DevOps and how you can start to think about using it.

What do you create in Azure and can it be handled within DevOps?

Can we start using the Boards?

How do we get started with Azure DevOps?

Which teams members have the right interests in the specific DevOps areas?
