In this post I cover two ways to perform CI/CD for SQL databases in Fabric using YAML pipelines in Azure DevOps.
I show two different ways you can create the dacpac file with YAML pipelines: one based on a database project and another by directly extracting the schema of the database into a dacpac file. As per the below diagram.
To clarify, YAML pipelines are the pipelines you work with in the Azure Pipelines service in Azure DevOps that are defined by YAML syntax.
Those who wish to see a GUI-based method to perform CI/CD for SQL databases in Fabric can read my other post, where I cover two ways to perform CI/CD for SQL databases in Fabric using classic pipelines in Azure DevOps.
SQL database in Fabric is a new offering that is currently in public preview. You can read more about it in a post I wrote that covers spreading your SQL Server wings with SQL database in Fabric.
This post consists of six main sections:
- Sample repository.
- Prerequisites.
- Method one to create a dacpac file.
- Method two to create a dacpac file.
- Deploying the created dacpac to other SQL databases in Fabric.
- Final words.
For good measure I show you how to configure an approvals process. Plus, I provide plenty of links along the way.
If you need help with any jargon used in this post, then I recommend that you read one of my other posts, which is a Microsoft Fabric Git integration jargon guide for Fabricators.
Sample repository
I created a public GitHub repository called AzureDevOps-SQLDatabaseinFabric to accompany this post.
It is free for you to download/clone and use as a template to perform either of the methods shown in this post. My only request is that if it proves to be useful, please give it a star on GitHub.
Prerequisites to perform CI/CD for SQL databases in Fabric using YAML Pipelines
For the benefit of this post, I specify Microsoft-hosted agents so you can set up your pipeline quickly. However, I recommend configuring a self-hosted agent in the long term.
Plus, for the benefit of this post I configured an Azure Pipelines variable group, which is why you can see variables used in the pipelines. For example, $(BuildConfiguration).
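To help you create a similar variable group, below is a summary of the variables referenced in the pipelines in this post. The descriptions are my suggestions for how to populate them, so adjust them to suit your own setup.

# Variables in the SQLDBinFabric variable group referenced in this post
# BuildConfiguration - configuration passed to dotnet build (for example, Release)
# SQLDBartifactname  - name given to the published artifact
# SourceConnString   - connection string for the source SQL database in Fabric
# DestConnString     - connection string for the destination SQL database in Fabric
# DestinationDB      - name of the destination database
# AzureSubscription  - name of the Azure DevOps service connection
# Targetfile         - file name of the dacpac (for example, SQLDBinFabric.dacpac)
# agentpool          - name of a self-hosted agent pool (only if you use one)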
Method one to create a dacpac: From a SQL database in Fabric database project
In order to create a dacpac file based on a database project you must first create the initial database project and synchronize it to Azure DevOps.
Create a database project based on a SQL database in Fabric
In reality, this section to create a database project is the same as it was in my previous post that covered two ways to perform CI/CD for SQL databases in Fabric using Azure DevOps.
Therefore, if you know about this already you can go straight to the next section about creating the dacpac file via a YAML pipeline.
You can create a database project based on a SQL database in Fabric with a variety of applications. In this post I show how to create one using Azure Data Studio with the SQL Database Projects extension installed.
You can get the details to connect to the SQL database in Fabric by going into the SQL query editor for the database in Microsoft Fabric. From there, select the "Open in" button to copy the details to connect with Azure Data Studio.
You can then create the database project, initialize the folder as a Git repository, and synchronize it with a repository in Azure DevOps.
I covered how to do this in detail in a post about how to create a dacpac for an Azure Synapse Analytics dedicated SQL Pool using Azure DevOps. You can find the relevant details in the below sections of that post.
- Create Git repository for SQL Pool in Azure DevOps
- Synchronize Azure Synapse SQL Pool repository to Azure DevOps
You can follow the above steps in that post, because the only difference is that you synchronize a database project for a SQL database in Fabric to Azure Repos instead of one for a SQL Pool.
One thing I did notice is that the wrong target platform was specified for the database project. I changed the target platform to 160 (SQL Server 2022). However, you can decide which one will work best for yourself.
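For reference, the target platform is stored in the DSP element of the sqlproj file. Below is how it looks after changing it to SQL Server 2022.

<DSP>Microsoft.Data.Tools.Schema.Sql.Sql160DatabaseSchemaProvider</DSP>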
Creating the dacpac file via a YAML Pipeline
Once the database project is synchronized in Azure DevOps you can create a dacpac via a YAML pipeline by going into Azure Pipelines, selecting "New pipeline" and then selecting "Azure Repos Git". Like in the below example.
You can specify the variable groups, triggers and agent at the start of the YAML pipeline. Like in the below code.
variables:
- group: SQLDBinFabric

trigger:
- main

# To reference a self-hosted agent swap around the commented and uncommented references
pool:
  vmImage: 'windows-latest'
  # name: $(agentpool)
Afterwards, you can add the code to create and publish the dacpac with two additional tasks within the same job. As per the below example.
# Build the project using a .NET task
- task: DotNetCoreCLI@2
  displayName: 'Build the dacpac using dotnet'
  inputs:
    command: 'build'
    projects: 'FabricDWProject.sqlproj'
    arguments: '--configuration $(BuildConfiguration)'

# Then publish it as part of an artifact
- task: PublishBuildArtifacts@1
  displayName: 'Publishes dacpac as an artifact'
  # Publishes the dacpac as part of an artifact within Azure DevOps
  inputs:
    PathtoPublish: 'bin/$(BuildConfiguration)'
    ArtifactName: $(SQLDBartifactname)
    publishLocation: 'Container'
In the ‘Build dacpac’ .NET Core task you can state the location of the sqlproj file and then set the configuration in the ‘Arguments’ section. In addition, you can set the ‘Path to Publish’ section of the Publish Artifact task to be ‘bin/$(BuildConfiguration)’ to match the build output.
You can then run the pipeline. Once the pipeline has completed you can see that an artifact has been published, which contains the dacpac file.
Method two to create a dacpac: Extracting the schema from an existing SQL database in Fabric
Alternatively, you can use SqlPackage in Azure DevOps to extract the schema of an existing SQL database in Fabric into a dacpac file.
In order to read from the database, you can add a service principal to the database by managing the database access. Personally, I added the service principal manually first.
CREATE USER [MSFabric] FROM EXTERNAL PROVIDER;
However, to keep this post simple I also added the service principal to the db_ddladmin role using the functionality to manage SQL database-level roles from the Fabric portal.
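For those who prefer to stay in T-SQL instead of using the Fabric portal, the below statement adds the user created above to the db_ddladmin role.

-- Add the service principal's user to the db_ddladmin database role
ALTER ROLE db_ddladmin ADD MEMBER [MSFabric];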
Just like in the first method, you create your YAML pipeline in Azure Pipelines. You can then specify the variable groups, triggers and agent at the start of the YAML pipeline.
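For example, the below header reuses the same variable group, trigger and Microsoft-hosted agent as method one.

variables:
- group: SQLDBinFabric

trigger:
- main

pool:
  vmImage: 'windows-latest'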
Afterwards, you can add the code to create and publish the dacpac with three tasks within a single job.
First of all, you must install the latest version of SqlPackage with the below task.
# Install updated version of SqlPackage
- powershell: 'dotnet tool update -g microsoft.sqlpackage'
  displayName: 'Install SqlPackage'
After doing so you can extract the schema of an existing SQL database in Fabric with SqlPackage, setting the destination to be the artifact staging directory so that the pipeline only publishes the dacpac file. As you can see in the below two tasks.
# Extract schema from existing database into a dacpac file
- task: PowerShell@2
  displayName: 'Extract schema using SqlPackage'
  inputs:
    targetType: 'inline'
    script: 'SqlPackage /Action:Extract /TargetFile:"$(Build.ArtifactStagingDirectory)\SQLDBinFabric.dacpac" /SourceConnectionString:"$(SourceConnString)"'

# Publish dacpac
- task: PublishBuildArtifacts@1
  displayName: 'Publish Artifact'
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: '$(SQLDBartifactname)'
I opted to add the entire connection string as a variable to keep my syntax nice and clean. Below is an example of a connection string format that works with a service principal.
Server=tcp:{PROVIDED SERVER NAME IN FABRIC}.database.fabric.microsoft.com,1433;Initial Catalog={PROVIDED CATALOG/DATABASE NAME};MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;Authentication=Active Directory Service Principal; User Id={SERVICE PRINCIPAL CLIENT ID}; Password={SERVICE PRINCIPAL SECRET}
Note: The database name required in the connection string will differ from the database name you provided when creating the database. Make sure you enter the right one.
You can get the correct server and database names in Microsoft Fabric by going into the SQL query editor. From there, click to connect to Visual Studio Code or navigate to get the connection strings.
After adding the tasks, the pipeline can be run to confirm it works before adding the deployment logic.
Verifying the dacpac has the SQL database in Fabric target platform
After the pipeline has completed you will see the published dacpac in the pipeline results.
You can verify that the dacpac exists by selecting the published artifact within the pipeline results. From there you can click to the left of the artifact name to view its contents.
To verify the contents of the dacpac you can download it and rename it with a ".zip" extension in order to extract its contents. You can then check that the correct target platform is stated in the "model.xml" file, as below.
DspName="Microsoft.Data.Tools.Schema.Sql.SqlDbFabricDatabaseSchemaProvider"
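If you prefer to script these checks, below is a minimal PowerShell sketch. It assumes you downloaded the dacpac created in method two to your Downloads folder, so adjust the paths and file names to match your own artifact.

# Copy the dacpac with a .zip extension and extract its contents
Copy-Item "$HOME\Downloads\SQLDBinFabric.dacpac" "$HOME\Downloads\SQLDBinFabric.zip"
Expand-Archive "$HOME\Downloads\SQLDBinFabric.zip" "$HOME\Downloads\SQLDBinFabric" -Force
# Check the target platform stated in the model.xml file
Select-String -Path "$HOME\Downloads\SQLDBinFabric\model.xml" -Pattern 'DspName'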
Deploying the dacpac to other SQL databases in Fabric using YAML Pipelines
Once the dacpac is created it is time to add the deployment logic to the YAML pipeline. This logic can be added to either of the above methods for creating the dacpac.
However, you must address three prerequisites first.
- The service principal needs the relevant permissions in the target database.
- You must ensure the connection string for the destination database is added as a variable. Alternatively, choose another connection strategy.
- Ideally, you should create the same environment you specify in your pipeline. Plus, configure any approvals or checks you wish. Like in the below example where the Production environment has an approval configured.
You can then create a new stage in the YAML pipeline that contains your chosen method to create the dacpac.
First, you configure what is known as a deployment job, which allows you to specify an environment. Like in the below example, which specifies the Production environment configured in the Environments feature.
- stage: SQLDBinFabric
  displayName: 'Deploy dacpac to SQL DB in Fabric'
  jobs:
  - deployment: 'SQLDBinFabric'
    displayName: 'SQL DB in Fabric'
    environment: Production
    strategy:
      runOnce:
        deploy:
          steps:
          # The deployment tasks shown below go here
Currently, you do not need to update SqlPackage on a Microsoft-hosted agent if you created a dacpac with either of the above methods. However, you can add the below task at the start of the deployment job steps to update SqlPackage for certainty.
- task: PowerShell@2
  displayName: 'Install SqlPackage'
  inputs:
    targetType: 'inline'
    script: 'dotnet tool install --global Microsoft.SqlPackage'
From there, you can download the artifact that contains the dacpac file onto the agent and then specify it as part of the deployment.
# Download the artifact that contains the dacpac file
- task: DownloadBuildArtifacts@0
  displayName: 'Download Artifact'
  inputs:
    buildType: 'current'
    downloadType: 'specific'
    artifactName: '$(SQLDBartifactname)'
    downloadPath: '$(System.ArtifactsDirectory)'

# Deploy the dacpac to the destination database
- task: SqlAzureDacpacDeployment@1
  displayName: 'Install DACPAC on SQL DB in Fabric'
  inputs:
    azureSubscription: $(AzureSubscription)
    AuthenticationType: connectionString
    ConnectionString: $(DestConnString)
    DatabaseName: '$(DestinationDB)'
    DacpacFile: '$(System.ArtifactsDirectory)\$(SQLDBartifactname)\$(Targetfile)'
    # AllowIncompatiblePlatform permits deploying a dacpac built for another target platform (such as SQL Server 2022)
    AdditionalArguments: '/p:AllowIncompatiblePlatform=True'
If you added an approval to your environment, you will be asked to approve the deployment whilst it is in progress. Once approved, the pipeline will complete. As you can see in the below completed pipeline results for the two different methods.
Of course, you can customize these pipelines as you see fit.
Final words about these two ways to perform CI/CD for SQL databases in Fabric using YAML Pipelines
I really hope that showing these two ways to perform CI/CD for SQL databases in Fabric using YAML Pipelines in Azure DevOps helps to speed up your own implementations of either of these methods.
Do bear in mind that there are other ways you can update your SQL databases in Fabric to other environments. Such as Microsoft Fabric deployment pipelines which I covered in a previous post.
Of course, if you have any comments or queries about this post feel free to reach out to me.