In this post I want to share some tips for deploying to multiple SQL Server database types using Azure DevOps, because I have had a few questions about this topic.
To explain why I am writing this post, I should cover a demo I did last week. Last Monday, Sander Stad and I presented our ‘Azure DevOps Duet’ session. Because it was the last time we were presenting that session in 2020, I decided to do something special.
With this in mind, I added some more database types to one of our existing demos. For good measure, I also decided to run the demo completely live for the first time ever, which meant that everybody got to see the below pipeline run in real time.
Types of SQL Server databases
To give the above pipeline a bit more context, the below types of SQL Server databases were updated after being unit tested (initial unit testing YAML courtesy of Sander):
- Three SQL Server 2019 instances in three Docker containers, representing Integration, Staging and Production environments.
- At the same time, the Git repository in Azure Repos would sync with a GitHub repo, which would then start a GitHub Action to update another database.
- An Azure SQL Database.
- Finally, a Synapse Analytics Dedicated SQL Pool was also updated.
The pipeline was able to do this by creating four different dacpacs: one right at the start, and three more during the course of the pipeline to deploy to the three different types of SQL Server databases in Azure.
I wanted to show that it is possible to update various database types from one single database project, which can save the hassle of managing a separate database project for each type.
In reality, I also decided to do this demo live for other reasons.
For a start, I wanted to make this session unique. In addition, I wanted to show what can be done in Azure Pipelines with a bit of imagination. Plus, the testing had gone well.
As you can see below, the whole pipeline completed on Monday evening in ten minutes and forty seconds. You can watch the whole session in detail here.
After sharing the link to the video online, I was asked if I could provide some tips. With this in mind, I put some together, and I hope you all enjoy the below tips.
Take advantage of the YAML schema
Take advantage of the YAML schema to make the flow of your pipeline as efficient as possible. Split the work up logically using stages and jobs.
If you intend to create a complex pipeline, use the dependsOn syntax as well. Doing this allows you to manage the order of the stages better. In addition, use the displayName syntax to make your overview more readable.
Be aware that if multiple stages will run in parallel, they might not run in the order that they are in your YAML. You can find the full YAML schema reference here.
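As a rough illustration, a two-stage pipeline using stages, dependsOn and displayName might look like the sketch below. The stage, job and step names are placeholders I have made up, not the ones from my demo:

```yaml
# Hypothetical outline only: stage and job names are made up
trigger:
  - main

stages:
  - stage: Build
    displayName: 'Build dacpac'
    jobs:
      - job: BuildJob
        displayName: 'Build the database project'
        pool:
          vmImage: 'windows-latest'
        steps:
          - script: echo Build steps go here

  - stage: DeployIntegration
    displayName: 'Deploy to Integration'
    dependsOn: Build   # will not start until the Build stage has finished
    jobs:
      - job: DeployJob
        displayName: 'Deploy dacpac to Integration'
        pool:
          vmImage: 'windows-latest'
        steps:
          - script: echo Deployment steps go here
```

Because DeployIntegration declares dependsOn Build, its position in the run is explicit rather than being left to the scheduler.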
Never put passwords in your code
Do not put your passwords directly in the code, even if you are just testing at the start, because those passwords will stay in your Git repository history, which could become public in the future. Look to use local or library variables right from the start.
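For example, instead of hard-coding a password you could reference a secret from a library variable group. The group and variable names below are made up for illustration:

```yaml
variables:
  - group: SqlCredentials   # hypothetical variable group created under Pipelines > Library

steps:
  # $(SqlPassword) is resolved at runtime; mark it as secret in the library
  # so it is masked in the logs and never stored in the repository
  - script: sqlcmd -S $(SqlServerName) -U $(SqlUserName) -P "$(SqlPassword)" -Q "SELECT @@VERSION"
    displayName: 'Test the connection'
```

If the password ever needs to change, you update it once in the library instead of rewriting your YAML.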
Keep Windows and Docker Desktop updated
As I mentioned in a previous post here, if you are looking to test deployments using local containers in Windows, I highly recommend making sure your operating system is current. In addition, make sure you keep Docker Desktop updated as well.
If you do this, you can use WSL 2 for your containers instead, which will improve their performance. Another advantage of using the latest version of Docker Desktop is that it includes a new Docker Dashboard, so you can see your Docker containers visually.
On a side note, if you have upgraded Docker Desktop and enabled WSL 2 support, I strongly recommend removing Hyper-V if you are not using it for anything else. I have done this myself and it worked fine.
Keep database schema simple
If you decide to test deploying to different SQL Server database types from one project, keep your database schema simple initially. Otherwise, you may end up having to resolve incompatibility issues straight away instead of proving that your pipeline works.
This is especially true if you are testing updates to a Synapse Analytics Dedicated SQL Pool based on a SQL Server database schema, because the two are very different.
In fact, Microsoft provides a query here to help you, because it identifies data types in your SQL Server database which are not supported in a SQL pool.
Installing Managed Instances
Managed Instances currently take a fair amount of time to provision. In fact, I think the fastest time I have managed is three hours and forty-two minutes. With this in mind, make sure you select the lowest spec possible when creating your initial managed instance.
Test latest Microsoft-hosted agents
Test using the latest Microsoft-hosted agents instead of older ones in your pipeline. I did this and it made a couple of the build stages a lot faster.
If your computer has a decent spec, you can test using it as a self-hosted agent instead of a Microsoft-hosted one, because that will be faster than a Microsoft-hosted agent. You can read more about how you can do that here.
I used my laptop as both an Azure DevOps agent and a self-hosted GitHub Actions runner, so I decided to run them both from the command line instead of running them as services.
Doing this means you do not have to remember to disable unused services. Plus, you get updates at the command line once a job has completed.
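Once the agent is registered, pointing a job at your own machine is just a pool change in the YAML. Here I assume the agent was registered in the default pool:

```yaml
jobs:
  - job: DeployFromLaptop
    displayName: 'Run on my self-hosted agent'
    pool:
      name: Default   # self-hosted agent pool; adjust if you registered the agent in a custom pool
    steps:
      - script: echo Running on the self-hosted agent
```

Everything else in the job stays the same, which makes it easy to switch back to a Microsoft-hosted agent later.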
Reduce number of artifacts
I strongly suggest you use as few artifacts as possible. In the above example I created a different artifact for each database type. However, in reality you can use the same compatibility level for some of them.
For example, the compatibility level for SQL Server 2019 also works with Azure SQL Managed Instance. In addition, the current Azure SQL Database compatibility level also appears to work for SQL Server 2019 deployments.
Reducing the number of dacpacs created makes your pipeline more efficient.
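As a sketch of this, you could publish the dacpac once as a pipeline artifact and download that same artifact in each deployment stage, rather than rebuilding it per database type. The project name, folder path and artifact name below are assumptions for illustration:

```yaml
stages:
  - stage: Build
    jobs:
      - job: BuildDacpac
        pool:
          vmImage: 'windows-latest'
        steps:
          - task: VSBuild@1
            displayName: 'Build the database project'
            inputs:
              solution: 'MyDatabase.sqlproj'   # hypothetical project name
          - publish: 'MyDatabase/bin/Debug'    # folder containing the built dacpac
            artifact: dacpac

  - stage: DeployAzureSQL
    dependsOn: Build
    jobs:
      - job: Deploy
        pool:
          vmImage: 'windows-latest'
        steps:
          - download: current                  # pulls the dacpac built above
            artifact: dacpac
          # deploy the downloaded dacpac here, e.g. with a dacpac deployment task
```

Any stage that can accept the same compatibility level simply downloads the one artifact instead of building its own.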
Checking compatibility level
You may find you have to change the Database Schema Provider (DSP) in your sqlproj file, like I have in this pipeline. You can search online to find the value you need to change it to.
However, there is a quicker solution if you have Visual Studio installed: make a copy of an existing project locally and then change the target platform in the copy.
It is important to do this on a copy because the sqlproj file is changed as soon as you change the target platform. Afterwards, you can open the sqlproj file in Notepad to see what the DSP value has been changed to.
Another advantage of doing this is that the link underneath the target platform changes as well, so you can find out more about that specific version.
I hope my tips for deploying to multiple SQL Server database types using Azure DevOps prove to be useful.
I will add more tips in a follow-up post. However, if you have any queries or comments about any of these, feel free to reach out to me.