Azure DevOps version of the FUAM deploymenator

In this post I introduce the Azure DevOps version of the FUAM deploymenator, a FUAM deployment accelerator that I developed to push FUAM deployments from Azure DevOps to a Microsoft Fabric tenant, as shown in the overview diagram below.

Overview of FUAM deploymenator process with Azure DevOps

As you can see above, the solution utilizes both the Fabric Command Line Interface (Fabric CLI) and the fabric-cicd Python library. In reality, there is a bit more to the process than shown in the above diagram, which I cover in this post.

In my previous post I introduced a GitHub version of the FUAM deploymenator. I decided to create an Azure DevOps version as well, since Azure DevOps is very popular in enterprises. It contains some slight variations from my previous post, including:

  • The deployment process has been separated into four stages, which can be changed.
  • The ID value of the new workspace is identified once and carried between stages.
  • Display names in some tasks dynamically reference the new workspace name.
  • An important point about authentication for fabric-cicd is highlighted.
  • Slight modifications to some PowerShell tasks.

I show how the solution deploys to a single Microsoft Fabric tenant in this post. However, you can extend this to deploy to multiple tenants, just like the recommended CI/CD option for managing multiple customers/solutions.

You can find a template for the FUAM deploymenator in my AzureDevOps-FUAM-Deploymenator GitHub repository, which you can either clone or import and customize to suit your needs.

About FUAM

FUAM stands for Fabric Unified Admin Monitoring and is a popular monitoring solution developed by two Microsoft employees. You deploy FUAM in a Fabric workspace so that it can extract metrics and provide a holistic monitoring view.

Typically, you follow the steps in the FUAM deployment guide to perform a pull-based deployment. However, with this accelerator you run a YAML pipeline that automates the majority of steps 1-4 in the FUAM deployment guide with a push-based method.

I provide an overview of how the FUAM deploymenator works in this post, highlighting key points along the way, including an important manual task that needs to be performed after deployment. Plus, I share plenty of links.

Prerequisites

All the prerequisites covered in the FUAM deployment guide still apply. Plus, you need the below details.

  • Object ID for a Microsoft Entra user, so that they are granted permissions to view the created connections and can act as an additional administrator for FUAM alongside the service principal. Ideally, this user should be a Fabric admin.
  • Name of the Fabric capacity that the new workspace will be connected to in the target tenant.

In addition, the below tenant admin settings need to be enabled. If they are currently disabled, I recommend that you add the service principal to a Microsoft Entra group. You can then enable each setting for that Entra group only.

  • Service principals can create workspaces, connections, and deployment pipelines – Required for the workflow to complete.
  • Service principals can call Fabric public APIs – Also required for the workflow to complete.
  • Service principals can access read-only APIs – Required for Data Pipelines to work after deployment.

Variable group for Azure DevOps version of FUAM deploymenator

You must create a variable group in Azure DevOps and add the below variables. I recommend calling the variable group fuam-ns.

  • resourceUrl – Resource URL used to get the access token.
  • FirstItemsInScope – First set of items deployed by fabric-cicd.
  • SecondItemsInScope – Second set of items deployed by fabric-cicd.
  • Notebook1Name – Name of the first post-deployment notebook.
  • Notebook2Name – Name of the second post-deployment notebook.
  • Notebook3Name – Name of the third post-deployment notebook.
  • Notebook4Name – Name of the fourth post-deployment notebook.
  • pbi_connection_name – Name of the Power BI connection that gets created.
  • pbibaseUrl – Base URL added to the Power BI connection.
  • pbiaudience – Audience added to the Power BI connection.
  • fabric_connection_name – Name of the Fabric connection that gets created.
  • fabricbaseUrl – Base URL added to the Fabric connection.
  • fabricaudience – Audience added to the Fabric connection.

You can find my recommended values for the variables in the README file in the GitHub repository.
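
To illustrate, once the variable group is linked in the YAML pipeline, its values are referenced with macro syntax in later tasks. Below is a minimal sketch that reuses names from the list above.

variables:
- group: fuam-ns

steps:
- task: PowerShell@2
  displayName: 'Show variable group usage'
  inputs:
    targetType: 'inline'
    script: |
      # Macro syntax $(name) resolves values from the linked variable group
      Write-Host "Connection to create: $(pbi_connection_name) with base URL $(pbibaseUrl)"
    pwsh: true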

In addition, you must import or clone the GitHub repository that I made available. From there, create a new YAML pipeline from the existing YAML file in the repository, so that you can run the pipeline in your own Azure DevOps environment.

To start the FUAM deploymenator

To start the FUAM deploymenator, go to the pipeline in Azure DevOps, click the “Run pipeline” button, and enter the requested parameters as below.

The FUAM deploymenator pipeline parameters in Azure DevOps
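
In case the screenshot is hard to read, below is a hedged sketch of how the parameters could be defined in the YAML. The names are taken from the tasks shown later in this post, so check the repository for the exact definitions.

parameters:
- name: azure_client_id
  type: string
- name: azure_client_secret
  type: string
- name: azure_tenant_id
  type: string
- name: WorkspaceName
  type: string
- name: CapacityName
  type: string
- name: EntraObjectId
  type: string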

When the pipeline first runs, you may need to grant permissions to resources. Once the pipeline finishes, the below stages are shown as completed.

Completed stages of the FUAM deploymenator in Azure DevOps

I cover what the stages do in the next section.

FUAM deploymenator stages in Azure DevOps

At the start of the YAML pipeline, the parameters shown in the previous section are specified. It then specifies the variable group to work with, which contains non-sensitive values.

variables:
- group: fuam-ns

Afterwards, it sets the trigger to “none” so that the pipeline is run manually, and specifies a Microsoft-hosted agent.

trigger: none

pool: 
   vmImage: 'windows-2025'
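
As a side note, if you would rather run the pipeline on your own infrastructure, you could point it at a self-hosted agent pool instead. A sketch with a hypothetical pool name:

pool:
  name: 'MySelfHostedPool' # hypothetical self-hosted agent pool name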

After specifying the above, the YAML pipeline performs four stages, which I cover in the next sections.

Stage to create the connections with the FUAM deploymenator

The first step in the job to create the connections is to specify the version of Python to work with, via a Use Python Version task.

- task: UsePythonVersion@0
  displayName: 'Use Python 3.12'
  inputs:
    versionSpec: 3.12  

Afterwards, a PowerShell task installs the ms-fabric-cli library in order to work with the Fabric Command Line Interface (Fabric CLI).

- task: PowerShell@2
  displayName: 'Install necessary libraries'
  inputs:
    targetType: 'inline'
    script: |
      python -m pip install --upgrade pip
      pip install ms-fabric-cli
    pwsh: true

Once done, another PowerShell task runs two fab commands: one to set “encryption_fallback_enabled” to true and another to log in with the Fabric CLI as a service principal.

fab config set encryption_fallback_enabled true
fab auth login -u ${{parameters.azure_client_id}} -p ${{parameters.azure_client_secret}} --tenant ${{parameters.azure_tenant_id}} 
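
If you want to verify that the login worked before later tasks run, one option is to check the authentication state afterwards. A minimal sketch, assuming the fab auth status command is available in your version of the Fabric CLI:

# Optional sanity check after the service principal login
fab auth status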

After the login, another PowerShell task adds the “fuam pbi-service-api admin” connection in Microsoft Fabric if the connection does not exist.

$pbiconnections = fab ls .connections | Select-String "$(pbi_connection_name)"
$pbiconnections

if ($pbiconnections) {
    Write-Host "✅ Connection $(pbi_connection_name) already exists."
} else {
    Write-Host "Creating Connection $(pbi_connection_name)."
    fab create ".connections/$(pbi_connection_name).connection" -P "connectionDetails.type=WebForPipeline,connectionDetails.creationMethod=WebForPipeline.Contents,connectionDetails.parameters.baseUrl=$(pbibaseUrl),connectionDetails.parameters.audience=$(pbiaudience),credentialDetails.type=ServicePrincipal,credentialDetails.tenantId=${{ parameters.azure_tenant_id }},credentialDetails.servicePrincipalClientId=${{ parameters.azure_client_id }},credentialDetails.servicePrincipalSecret=${{ parameters.azure_client_secret }}"
}

Note that there is one slight difference between the above code and the version in the GitHub version of the FUAM deploymenator. When specifying the fab create command in a YAML pipeline, you must put double quotes around the connection name.

Anyway, afterwards the same logic is applied in another task to add the “fuam fabric-service-api admin” connection.

Adding the connections like this is fine. However, because the service principal created them, you will not be able to see them yourself in Microsoft Fabric, even if you are a Fabric admin.

To resolve this issue, a separate task assigns permissions to the Power BI connection for the Entra user stated in the parameters with the below code.

$permissions = fab acl get .connections/$(pbi_connection_name).Connection -q [*].principal.id | Select-String ${{parameters.EntraObjectId}}

if ($permissions) {
    Write-Host "✅ Permissions for ${{parameters.EntraObjectId}} already exist on the Power BI connection."
} else {
    Write-Host "Adding permissions for ${{parameters.EntraObjectId}} to the Power BI connection."

    $pbiconnectionid = fab get .connections/$(pbi_connection_name).connection -q id

    $body = @{
    principal = @{
        id = "${{parameters.EntraObjectId}}"
        type = "User"  
    }
    role = "Owner"
    } | ConvertTo-Json -Compress
    
    # Create a temp file path and write the JSON body with UTF-8 WITHOUT BOM
    $tempFile = [System.IO.Path]::GetTempFileName() + ".json"
    $utf8Encoding = New-Object System.Text.UTF8Encoding $false
    [System.IO.File]::WriteAllText($tempFile, $body, $utf8Encoding)

    fab api -X post "connections/$pbiconnectionid/roleAssignments" -H "Content-Type=application/json" -i $tempFile
}

As you can see above, you can call the role assignment API directly with the fab api command, which saves creating a more complex API statement.
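
For comparison, below is a hedged sketch of what the equivalent direct REST call could look like in PowerShell. It assumes you have already acquired a bearer token into a $fabricToken variable, which is not something this stage does.

# Hypothetical equivalent of the fab api call using Invoke-RestMethod
$headers = @{
    Authorization  = "Bearer $fabricToken"
    "Content-Type" = "application/json"
}
Invoke-RestMethod -Method Post -Uri "https://api.fabric.microsoft.com/v1/connections/$pbiconnectionid/roleAssignments" -Headers $headers -Body $body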

The final task in this stage repeats the same permissions logic for the Fabric connection that was created.

Stage to create the new workspace

The second stage of the YAML pipeline creates a new workspace. It will only start after the stage to create the connections has completed, thanks to the dependsOn syntax below.

- stage: CreateWorkspace
  displayName: 'Create Workspace'
  dependsOn: CreateConnections 

The first three tasks in the stage specify the Python version, install the ms-fabric-cli Python library, and log in to the Fabric CLI again.

Afterwards, a PowerShell task creates the new workspace and connects it to the capacity specified within the parameters, specifying the workspace name dynamically as part of the task display name.

- task: PowerShell@2
  displayName: 'Create the ${{parameters.WorkspaceName}} workspace with ms-fabric-cli'
  inputs:
    targetType: 'inline'
    script: |
      fab create ${{parameters.WorkspaceName}}.Workspace -P capacityname=${{parameters.CapacityName}}
    pwsh: true 
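
As a side note, if you expect to rerun the pipeline, you could guard the creation with an existence check. A minimal sketch, assuming the fab exists command behaves as documented for your CLI version:

# Only create the workspace if it does not already exist
$exists = fab exists ${{parameters.WorkspaceName}}.Workspace
if ($exists -match "true") {
    Write-Host "Workspace already exists, skipping creation."
} else {
    fab create ${{parameters.WorkspaceName}}.Workspace -P capacityname=${{parameters.CapacityName}}
}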

Another PowerShell task then adds the Entra user specified in the parameters as an admin of the workspace.

fab acl set ${{parameters.WorkspaceName}}.Workspace -I ${{parameters.EntraObjectId}} -R admin -f

The final task in this particular stage gets the ID value of the newly created workspace, which can then be passed on to other stages.

$WorkspaceId = fab get ${{parameters.WorkspaceName}}.Workspace -q id
Write-Host "##vso[task.setvariable variable=WorkspaceId;isOutput=true]$($WorkspaceId)"
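
For this hand-off to work, the emitting job and task need names that later stages can reference. Below is a hedged sketch of how a downstream stage could map the output variable, assuming the job is named CreateWorkspaceJob and the task is named GetWorkspaceId; the names in the repository may differ.

- stage: PopulateWorkspace
  dependsOn: CreateWorkspace
  variables:
    # Pattern: stageDependencies.<stage>.<job>.outputs['<task>.<variable>']
    WorkspaceId: $[ stageDependencies.CreateWorkspace.CreateWorkspaceJob.outputs['GetWorkspaceId.WorkspaceId'] ]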

Stage to populate the new workspace created by the Azure DevOps version of the FUAM deploymenator

The third stage of the YAML pipeline populates the new workspace. It will only start after the stage to create the workspace has completed.

The first task in the stage specifies the Python version, followed by a task which installs the ms-fabric-cli and fabric-cicd Python libraries.

python -m pip install --upgrade pip
pip install ms-fabric-cli
pip install fabric-cicd

Like in previous steps, a PowerShell task logs into the Fabric CLI again. Afterwards, a separate PowerShell task dynamically updates the copy of the “parameter.yml” file with the Power BI and Fabric connection IDs that exist on the target tenant.

$pbiconnectionid = fab get .connections/$(pbi_connection_name).connection -q id
$fabricconnectionid = fab get .connections/$(fabric_connection_name).connection -q id

# Path to parameter.yml file
$filePath = "$(System.DefaultWorkingDirectory)\workspace\parameter.yml"

# Read file, replace value, overwrite file
(Get-Content $filePath) -replace 'pbiconnectionid', $pbiconnectionid | Set-Content $filePath
(Get-Content $filePath) -replace 'fabricconnectionid', $fabricconnectionid | Set-Content $filePath

I must stress that only the copy of the file on the Azure Pipeline Agent gets updated. It does not affect the source repository in Azure DevOps.
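
For context, fabric-cicd reads the parameter.yml file to perform find-and-replace substitutions during deployment. Below is a hypothetical excerpt showing how a placeholder token could appear before the above task runs; the actual file in the repository may differ.

# Hypothetical excerpt of parameter.yml using the fabric-cicd find_replace section
find_replace:
  - find_value: "00000000-0000-0000-0000-000000000000" # example connection id referenced in the source items
    replace_value:
      Prod: "pbiconnectionid" # literal token swapped for the real id by the PowerShell task above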

The next task authenticates as a service principal to work with fabric-cicd.

Install-Module -Name Az.Accounts -AllowClobber -Force

$SecureStringPwd = ConvertTo-SecureString '${{parameters.azure_client_secret}}' -AsPlainText -Force
$pscredential = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList ${{parameters.azure_client_id}}, $SecureStringPwd
                  
Connect-AzAccount -ServicePrincipal -Credential $pscredential -Tenant ${{parameters.azure_tenant_id}}

$fabricToken = (Get-AzAccessToken -ResourceUrl $(resourceUrl)).Token

Note that there is one slight difference between the above code and the version in the GitHub version of the FUAM deploymenator.

When specifying the ConvertTo-SecureString command in your YAML pipeline, you must put quotes around the secret parameter.

If you do not, the authentication will fail but will not show as a failure in your pipeline. Instead, you will get an error when you attempt to deploy with fabric-cicd.
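
To make the pitfall concrete, here is a sketch of the difference, with the failing variant commented out:

# Without quotes, the expanded secret can be misinterpreted by PowerShell and fail silently:
# $SecureStringPwd = ConvertTo-SecureString ${{parameters.azure_client_secret}} -AsPlainText -Force
# With single quotes, the expanded value is treated as a literal string:
$SecureStringPwd = ConvertTo-SecureString '${{parameters.azure_client_secret}}' -AsPlainText -Force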

Why there are two fabric-cicd deployments in the Azure DevOps version of the FUAM deploymenator

Both of the required semantic models for the deployment process require the correct databaseid values for the SQL Endpoint created in the new workspace.

Currently, there is no easy way to replace the databaseid value for the SQL Endpoint with fabric-cicd, so I opted for a method that performs two deployments with fabric-cicd.

During the first deployment, a Python script task utilizes fabric-cicd to deploy all items apart from reports and semantic models.

- task: PythonScript@0
  displayName: 'Run script for first FUAM items to deploy'
  inputs:
    scriptPath: 'auth_spn_secret_AzDo.py'
    arguments: '--WorkspaceId $(WorkspaceId) --Environment "Prod" --RepositoryDirectory "$(Build.SourcesDirectory)\workspace" --ItemsInScope $(FirstItemsInScope)'

Another task then gets the databaseid value for the deployed SQL Endpoint via a combination of PowerShell and the fab api command, in order to replace the databaseid text in the “parameter.yml” file on the Azure Pipelines agent.

$response = fab api -X get "/workspaces/$(WorkspaceId)/items?itemType=SQLEndpoint"

$data = $response | ConvertFrom-Json

$databaseid = ($data.text.value | Where-Object {
    $_.displayName -eq "FUAM_Lakehouse" -and $_.type -eq "SQLEndpoint"
}).id

# Path to parameter.yml file
$filePath = "$(Build.SourcesDirectory)\workspace\parameter.yml"
# Write-Host "Path to parameter file is $($filePath)"

# Read file, replace value, overwrite file
(Get-Content $filePath) -replace 'databaseid', $databaseid | Set-Content $filePath

Once the parameter file has been updated, fabric-cicd is utilized again to deploy the reports and semantic models.

- task: PythonScript@0
  displayName: 'Run script to deploy reports and semantic models'
  inputs:
    scriptPath: 'auth_spn_secret_AzDo.py'
    arguments: '--WorkspaceId $(WorkspaceId) --Environment "Prod" --RepositoryDirectory "$(Build.SourcesDirectory)\workspace" --ItemsInScope $(SecondItemsInScope)'

One key point I want to highlight is that I commented out the unpublish command in the “auth_spn_secret_AzDo.py” file, so that the script does not attempt to unpublish the default semantic models for Lakehouses.

Stage to perform post-deployment tasks for the FUAM deploymenator

When you look in the original “deploy.ipynb” notebook to deploy FUAM, you can see there are some post-deployment tasks.

I extracted these tasks and separated them into multiple notebooks in different subfolders, to make it clear that the tasks in the notebooks are to be performed after the main deployment.

These tasks are part of the final stage in the YAML pipeline, which only starts after the new workspace has been populated.

The first two tasks in this stage specify the Python version and log into the Fabric CLI. Afterwards, a PowerShell task imports the notebooks with the fab import command.

$env:PYTHONIOENCODING = "utf-8"
fab import -f /${{parameters.WorkspaceName}}.Workspace/$(Notebook1Name) -i $(Notebook1Name) 
fab import -f /${{parameters.WorkspaceName}}.Workspace/$(Notebook3Name) -i $(Notebook3Name) 
# Note that this imports the Refresh_Semantic models notebook, which you can run manually
fab import -f /${{parameters.WorkspaceName}}.Workspace/$(Notebook4Name) -i $(Notebook4Name)

Note that the above PowerShell task specifies UTF-8. This is to resolve a character error that can occur.

Plus, the above code is missing Notebook2Name. That is because I decided to reserve the Notebook2Name variable for the “Init_FUAM_Lakehouse_Tables” notebook, so that the final step to run the notebooks has a more logical flow.

This is reflected in the final PowerShell task, which runs the post-deployment notebooks.

$env:PYTHONIOENCODING = "utf-8"
# Run the post-deployment notebook which contains all tasks up until refreshing SQL Endpoint for Config_Lakehouse
fab job run /${{parameters.WorkspaceName}}.Workspace/$(Notebook1Name)  -P _inlineInstallationEnabled:bool=true
# Run the Init_FUAM_Lakehouse_Tables notebook separately to avoid permission issues
fab job run /${{parameters.WorkspaceName}}.Workspace/$(Notebook2Name)  -P _inlineInstallationEnabled:bool=true
# Run the Refresh_SQLEndpoints_and_SemanticModels notebook separately to separate processes
fab job run /${{parameters.WorkspaceName}}.Workspace/$(Notebook3Name)  -P _inlineInstallationEnabled:bool=true

Manual tasks to perform after the deployment

After the pipeline for the Azure DevOps version of the FUAM deploymenator has completed, you can follow the FUAM deployment guide from step five, where you need to configure the Capacity Metrics app.

However, I need to highlight one very important manual task you must perform before you run the orchestration pipeline.

You must make a change to the Load_Capacity_Metrics_E2E Data Pipeline and save your change before you run the Load_FUAM_Data_E2E Data Pipeline. Otherwise, this pipeline will fail due to the permissions context.

I recommend copying the name of one of the notebooks into the description for the activity as below.

In addition, you need to do this with the Load_Inventory_E2E Data Pipeline as well if you are not using the Key Vault parameters for the orchestration pipeline.

Even though this workaround is unorthodox, it works.

Final words about the Azure DevOps version of the FUAM deploymenator

I hope my overview of the Azure DevOps version of the FUAM deploymenator has proven to be insightful.

I am proud to provide this solution to the community because I believe it will help a lot of people, which is one of the reasons why I decided to create a unique name for it.

For further advice about implementing FUAM feel free to read my other post that covers tips on implementing FUAM in Microsoft Fabric.

Please remember to give the repository a star in GitHub if you work with the solution. If you have any comments or queries about this post, feel free to reach out to me.
