In this post I will be introducing the FUAM deploymenator, a FUAM deployment accelerator that I developed to push FUAM deployments from GitHub to a Microsoft Fabric tenant, as shown in the overview diagram below.

As you can see, it utilizes both the Fabric Command Line Interface (Fabric CLI) and the fabric-cicd Python library, just like in my previous post. In reality, there is a bit more to the process than shown in the above diagram, which I cover in this post.
In this post I show how the solution deploys to a single Microsoft Fabric tenant. However, you can extend it to deploy to multiple tenants, just like the recommended CI/CD option for managing multiple customers/solutions.
You can find a template for the FUAM deploymenator in my GitHub-FUAM-Deploymenator GitHub repository, which you can either clone or fork and customize to suit your needs.
About FUAM
FUAM stands for Fabric Unified Admin Monitoring and is a popular monitoring solution developed by two Microsoft employees. You deploy FUAM in a Fabric workspace so that it can extract metrics and provide a holistic monitoring view.
Typically, you follow the steps in the FUAM deployment guide to perform a pull-based deployment. When working with this accelerator, you instead run a GitHub workflow that automates the majority of steps 1-4 in the FUAM deployment guide with a push-based method.
I decided to publish this post to coincide with the new release of FUAM, which was made available on Friday, July 18th, 2025. In addition, it also contains the fix that was made available on Tuesday, July 22nd.
In this post I provide an overview of how the FUAM deploymenator works, highlighting key points along the way, including an important manual task that needs to be performed after deployment. Plus, I share plenty of links.
Prerequisites
All the prerequisites covered in the FUAM deployment guide still apply. Plus, you need the below details.
- Object ID for a Microsoft Entra user, so that they can be granted permissions to view the created connections and be an additional administrator for FUAM alongside the service principal. Ideally, this user should be a Fabric admin (one way to look up the Object ID is shown after this list).
- Name of the Fabric capacity that the new workspace will be connected to in the target tenant.
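If you need to look up the Object ID for the user, one quick way is with the Azure CLI, as in the below example. The user principal name is illustrative.
az ad user show --id adele.vance@contoso.com --query id --output tsv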
In addition, the below Tenant admin settings need to be enabled. If they are currently disabled, I recommend that you add the service principal to a Microsoft Entra group. You can then enable each setting for that Entra group only.
- Service principals can create workspaces, connections, and deployment pipelines. – Required for the workflow to complete.
- Service principals can call Fabric public APIs. – Also required for the workflow to complete.
- Service principals can access read-only admin APIs. – Required for Data Pipelines to work after deployment.
For the FUAM deploymenator to work, it is crucial that the “parameter.yml” file is in place and accurate, because it replaces the below values (a sketch of what such a file looks like follows the list):
- Original workspace GUID.
- FUAM_Backup_Lakehouse GUID.
- FUAM_Lakehouse GUID.
- SQL endpoint connection string.
- Value for the fuam pbi-service-api admin connection Id (dynamically updated).
- Value for the fuam fabric-service-api admin connection Id (also dynamically updated).
- SQL Endpoint database id (again dynamically updated).
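To illustrate, below is a rough sketch of how find_replace entries in a fabric-cicd “parameter.yml” file look. The GUIDs are illustrative placeholders and the real file contains more entries. Note how the replacement for the connection Id is the literal pbiconnectionid token, which the workflow swaps for the real GUID later on, and how the workspace GUID can use the $workspace.id variable that fabric-cicd resolves at deployment time.
find_replace:
    # Original workspace GUID, replaced with the id of the new target workspace
    - find_value: "11111111-1111-1111-1111-111111111111"
      replace_value:
          Prod: "$workspace.id"
    # fuam pbi-service-api admin connection Id; the workflow replaces the
    # pbiconnectionid token with the real GUID before deployment
    - find_value: "22222222-2222-2222-2222-222222222222"
      replace_value:
          Prod: "pbiconnectionid"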
In addition, you must fork, import, or clone the GitHub repository that I made available, so that you can start the workflow from the Actions tab within your own GitHub repository. It will not work in my public repository.
To start the FUAM deploymenator
To start the FUAM deploymenator you must start the workflow in GitHub. You do this by going to the Actions tab and selecting the “Deploy FUAM” workflow. You must then click on the “Run workflow” button and enter the requested parameters as below.

Once the GitHub Actions workflow has started it goes through the steps specified in the next section. You can find the file that contains the workflow in the “.github/workflows” subfolder.
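For reference, the workflow_dispatch section of that file declares inputs along the lines of the below sketch. The descriptions are illustrative, but the input names match how they are referenced throughout the workflow.
on:
  workflow_dispatch:
    inputs:
      WorkspaceName:
        description: "Name of the new FUAM workspace"
        required: true
      CapacityName:
        description: "Fabric capacity to assign to the workspace"
        required: true
      EntraObjectId:
        description: "Object ID of the Microsoft Entra user"
        required: true
      Azure_Tenant_ID:
        description: "Microsoft Entra tenant ID"
        required: true
      Client_ID:
        description: "Service principal client ID"
        required: true
      Client_Secret:
        description: "Service principal client secret"
        required: true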
FUAM deploymenator steps
At the start, the workflow defines a workflow_dispatch event that declares the previously shown parameters. It then specifies the below environment variables.
env:
  resourceUrl: https://api.fabric.microsoft.com
  FirstItemsInScope: "Notebook,Environment,Lakehouse,DataPipeline"
  SecondItemsInScope: "Report,SemanticModel"
  Notebook1Name: "Deploy_FUAM_post_deployment.Notebook"
  Notebook2Name: "Init_FUAM_Lakehouse_Tables.Notebook"
  Notebook3Name: "Refresh_SQLEndpoints.Notebook"
  Notebook4Name: "Refresh_SemanticModels.Notebook"
  pbi_connection_name: "fuam pbi-service-api admin"
  pbibaseUrl: "https://api.powerbi.com/v1.0/myorg/admin"
  pbiaudience: "https://analysis.windows.net/powerbi/api"
  fabricbaseUrl: "https://api.fabric.microsoft.com/v1/admin"
  fabricaudience: "https://api.fabric.microsoft.com"
  fabric_connection_name: "fuam fabric-service-api admin"
After specifying the variables the workflow performs two jobs. One to create the connections in Microsoft Fabric and another to create and populate the workspace.
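In other words, the workflow resembles the below skeleton. The job identifiers and runner label are my own illustrative choices; the key detail is the needs keyword, which makes the second job wait for the first to succeed.
jobs:
  create-connections:
    runs-on: windows-latest
    steps:
      - name: Placeholder for the connection steps
        run: echo "Setup Python, install ms-fabric-cli, login, create connections"
  deploy-workspace:
    needs: create-connections   # only starts if create-connections succeeds
    runs-on: windows-latest
    steps:
      - name: Placeholder for the deployment steps
        run: echo "Create workspace, update parameter.yml, deploy with fabric-cicd"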
Job to create the connections with the FUAM deploymenator
The first step in the job that creates the connections is to specify the version of Python to work with, via a setup-python action.
- name: Setup Python
  uses: actions/setup-python@v5.5.0
  with:
    python-version: 3.12
Afterwards the job installs the ms-fabric-cli library in order to work with the Fabric Command Line Interface (Fabric CLI).
- name: Install necessary python libraries
  run: |
    python -m pip install --upgrade pip
    pip install ms-fabric-cli
Once done, the job runs two fab commands: one to set “encryption_fallback_enabled” to true and another to login with the Fabric CLI as a service principal.
fab config set encryption_fallback_enabled true
fab auth login -u ${{github.event.inputs.Client_ID}} -p ${{github.event.inputs.Client_Secret}} --tenant ${{github.event.inputs.Azure_Tenant_ID}}
After the login the job adds the “fuam pbi-service-api admin” connection in Microsoft Fabric if the connection does not exist.
$connections = fab ls .connections | Select-String '${{env.pbi_connection_name}}'
if ($connections) {
    Write-Host "✅ Connection ${{env.pbi_connection_name}} already exists."
} else {
    Write-Host "Creating Connection ${{env.pbi_connection_name}}."
    fab create .connections/${{env.pbi_connection_name}}.connection -P connectionDetails.type=WebForPipeline,connectionDetails.creationMethod=WebForPipeline.Contents,connectionDetails.parameters.baseUrl=${{env.pbibaseUrl}},connectionDetails.parameters.audience=${{env.pbiaudience}},credentialDetails.type=ServicePrincipal,credentialDetails.tenantId=${{github.event.inputs.Azure_Tenant_ID}},credentialDetails.servicePrincipalClientId=${{github.event.inputs.Client_ID}},credentialDetails.servicePrincipalSecret=${{github.event.inputs.Client_Secret}}
    # fab create .connections/${{env.pbi_connection_name}}.connection -P connectionDetails.type=WebForPipeline,connectionDetails.creationMethod=WebForPipeline.Contents,connectionDetails.parameters.baseUrl="https://api.powerbi.com/v1.0/myorg/admin",connectionDetails.parameters.audience="https://analysis.windows.net/powerbi/api",credentialDetails.type=Anonymous
}
Afterwards, the same logic is applied to add the “fuam fabric-service-api admin” connection.
Adding the connections like this is fine. However, because the service principal created them, you will not be able to see them yourself in Microsoft Fabric, even if you are an administrator.
To resolve this issue, permissions are added to the connections for the Entra user stated in the parameters, with the below code.
$permissions = fab acl get .connections/${{env.pbi_connection_name}}.Connection -q [*].principal.id | Select-String ${{github.event.inputs.EntraObjectId}}
if ($permissions) {
    Write-Host "✅ Permissions for ${{github.event.inputs.EntraObjectId}} already exist on the Power BI connection."
} else {
    Write-Host "Adding permissions for ${{github.event.inputs.EntraObjectId}} to the Power BI connection."
    $pbiconnectionid = fab get .connections/${{env.pbi_connection_name}}.connection -q id
    $body = @{
        principal = @{
            id   = "${{github.event.inputs.EntraObjectId}}"
            type = "User"
        }
        role = "Owner"
    } | ConvertTo-Json -Compress
    # Create a temp file path and write with UTF-8 WITHOUT BOM
    $tempFile = [System.IO.Path]::GetTempFileName() + ".json"
    $utf8Encoding = New-Object System.Text.UTF8Encoding $false
    [System.IO.File]::WriteAllText($tempFile, $body, $utf8Encoding)
    fab api -X post "connections/$pbiconnectionid/roleAssignments" -H "Content-Type=application/json" -i $tempFile
}
As you can see, you can call the role assignment API directly with the fab api command, which saves creating a more complex API statement.
The final step in this particular job is a repeat of the above code to add permissions to the “fuam fabric-service-api admin” connection.
Job to create and populate the FUAM workspace with the deploymenator
The second job, which creates the workspace, only starts when the first job has completed successfully. The first step in the second job specifies the Python version again. From there, a step installs the necessary libraries.
python -m pip install --upgrade pip
pip install ms-fabric-cli
pip install fabric-cicd
Another step logs in to Fabric CLI as a service principal after the libraries are installed. Once done, another step creates the new workspace and connects it to the capacity specified within the parameters.
fab create ${{github.event.inputs.WorkspaceName}}.Workspace -P capacityname=${{github.event.inputs.CapacityName}}
The following step then adds the Entra user specified in the parameters as an admin of the workspace.
fab acl set ${{github.event.inputs.WorkspaceName}}.Workspace -I ${{github.event.inputs.EntraObjectId}} -R admin -f
From there, a checkout GitHub Action is specified to ensure that the GitHub Runner has a local copy of the Git repository.
- uses: actions/checkout@v4.2.2
The next step dynamically updates the copy of the “parameter.yml” file with the PBI and Fabric connection IDs that exist in the target tenant.
$pbiconnectionid = fab get .connections/${{env.pbi_connection_name}}.connection -q id
$fabricconnectionid = fab get .connections/${{env.fabric_connection_name}}.connection -q id
# Path to parameter.yml file
$filePath = "workspace\parameter.yml"
# Read file, replace value, overwrite file
(Get-Content $filePath) -replace 'pbiconnectionid', $pbiconnectionid | Set-Content $filePath
(Get-Content $filePath) -replace 'fabricconnectionid', $fabricconnectionid | Set-Content $filePath
I must stress that only the copy of the file on the GitHub Runner gets updated. It does not affect the source repository in GitHub.
Next step authenticates as a service principal to work with fabric-cicd.
Install-Module -Name Az.Accounts -AllowClobber -Force
$SecureStringPwd = ConvertTo-SecureString ${{github.event.inputs.Client_Secret}} -AsPlainText -Force
$pscredential = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList ${{github.event.inputs.Client_ID}}, $SecureStringPwd
Connect-AzAccount -ServicePrincipal -Credential $pscredential -Tenant ${{github.event.inputs.Azure_Tenant_ID}}
Both of the semantic models required for deployment need the correct databaseid value for the new SQL Endpoint.
Currently, there is no easy way to get the databaseid value for the SQL Endpoint with fabric-cicd, so I opted for a method that performs two deployments with fabric-cicd.
During the first deployment, the new workspaceId is identified and then fabric-cicd deploys all items apart from reports and semantic models.
$WorkspaceId = fab get ${{github.event.inputs.WorkspaceName}}.Workspace -q id
python auth_spn_secret_AzDo.py --WorkspaceId $WorkspaceId --Environment "Prod" --RepositoryDirectory ".\workspace" --ItemsInScope ${{env.FirstItemsInScope}}
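For context, the repository contains the auth_spn_secret_AzDo.py script, but the core of a fabric-cicd deployment script along those lines looks roughly like the below sketch. FabricWorkspace and publish_all_items are the actual fabric-cicd entry points; the argument handling is a simplified assumption on my part.
import argparse

from fabric_cicd import FabricWorkspace, publish_all_items

parser = argparse.ArgumentParser()
parser.add_argument("--WorkspaceId", required=True)
parser.add_argument("--Environment", default="Prod")
parser.add_argument("--RepositoryDirectory", default=r".\workspace")
parser.add_argument("--ItemsInScope", required=True)  # comma-separated item types
args = parser.parse_args()

# Deploy only the item types in scope; the first run excludes reports and
# semantic models, the second run deploys them once parameter.yml is updated
target_workspace = FabricWorkspace(
    workspace_id=args.WorkspaceId,
    environment=args.Environment,
    repository_directory=args.RepositoryDirectory,
    item_type_in_scope=args.ItemsInScope.split(","),
)
publish_all_items(target_workspace)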
Another step then gets the databaseid value for the deployed SQL Endpoint via a combination of PowerShell and the fab api command, in order to replace the databaseid text in the “parameter.yml” file on the GitHub Runner.
$WorkspaceId = fab get ${{github.event.inputs.WorkspaceName}}.Workspace -q id
$response = fab api -X get "/workspaces/$WorkspaceId/items?itemType=SQLEndpoint"
$data = $response | ConvertFrom-Json
$databaseid = ($data.text.value | Where-Object {
    $_.displayName -eq "FUAM_Lakehouse" -and $_.type -eq "SQLEndpoint"
}).id
# Path to parameter.yml file
$filePath = "workspace\parameter.yml"
# Read file, replace value, overwrite file
(Get-Content $filePath) -replace 'databaseid', $databaseid | Set-Content $filePath
Once the parameter file has been updated, fabric-cicd deploys the reports and semantic models.
$WorkspaceId = fab get ${{github.event.inputs.WorkspaceName}}.Workspace -q id
python auth_spn_secret_AzDo.py --WorkspaceId $WorkspaceId --Environment "Prod" --RepositoryDirectory ".\workspace" --ItemsInScope ${{env.SecondItemsInScope}}
Post-deployment steps
When you look in the original “deploy.ipynb” notebook used to deploy FUAM, you can see there are some post-deployment tasks. I extracted these tasks and separated them into multiple notebooks, so that they can be performed after the main deployment.
I opted to create ipynb notebooks for the separate tasks and import them, in order to keep them separate from the original Fabric items created for FUAM. The next step imports them with the fab import command.
fab import -f /${{github.event.inputs.WorkspaceName}}.Workspace/${{env.Notebook1Name}} -i ${{env.Notebook1Name}}
fab import -f /${{github.event.inputs.WorkspaceName}}.Workspace/${{env.Notebook3Name}} -i ${{env.Notebook3Name}}
# Note that this imports the Refresh_SemanticModels notebook, which you can run manually
fab import -f /${{github.event.inputs.WorkspaceName}}.Workspace/${{env.Notebook4Name}} -i ${{env.Notebook4Name}}
As you can see, the above code does not import a notebook for Notebook2Name. That is because I reserved the Notebook2Name variable for the “Init_FUAM_Lakehouse_Tables” notebook, which is deployed with the original FUAM items, so that the final step to run the notebooks has a more logical flow.
You can see this in the final step which runs the post-deployment notebooks.
# Run the post-deployment notebook which contains all tasks up until refreshing SQL Endpoint for Config_Lakehouse
fab job run /${{github.event.inputs.WorkspaceName}}.Workspace/${{env.Notebook1Name}} -P _inlineInstallationEnabled:bool=true
# Run the Init_FUAM_Lakehouse_Tables notebook separately to avoid permission issues
fab job run /${{github.event.inputs.WorkspaceName}}.Workspace/${{env.Notebook2Name}} -P _inlineInstallationEnabled:bool=true
# Run the Refresh_SQLEndpoints notebook separately to keep its processing separate
fab job run /${{github.event.inputs.WorkspaceName}}.Workspace/${{env.Notebook3Name}} -P _inlineInstallationEnabled:bool=true
Manual tasks to perform after the deployment
After the deployment, you can follow the FUAM deployment guide from step five, where you need to configure the Capacity Metrics app.
However, I need to highlight one very important manual task you must complete before you run the orchestration pipeline.
You must make a change to the Load_Capacity_Metrics_E2E Data Pipeline and save that change before you run the Load_FUAM_Data_E2E Data Pipeline. Otherwise, the pipeline will fail due to the permissions context.
I recommend copying the name of one of the notebooks into the description for the activity as below.

In addition, you need to do this with the Load_Inventory_E2E Data Pipeline as well if you are not using the Key Vault parameters for the orchestration pipeline.
Even though this workaround is unorthodox, it works.
Updating FUAM deploymenator with items in your own tenant
One thing I must stress is that if you decide to update the FUAM deploymenator based on Fabric items you deployed in another Fabric workspace, there are a few things to do. Note that this only applies if you wish to overwrite the existing Fabric items in the Git repository.
- First of all, enable Git integration for your own FUAM workspace.
- Then extract the contents and overwrite the items in the workspace folder of the FUAM deploymenator repository, whilst keeping the “parameter.yml” file in the root location (see the sketch after this list).
- Update the “parameter.yml” file based on your replacement values. I added comments so you can see which items require updates.
- Check the post-deployment notebooks and update where required.
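For example, once both repositories are cloned locally, the overwrite step could look something like the below PowerShell sketch. The paths are illustrative.
# Paths are illustrative; adjust to where you cloned each repository
$source = "C:\repos\my-fuam-workspace"             # synced via Fabric Git integration
$target = "C:\repos\FUAM-Deploymenator\workspace"  # workspace folder in the deploymenator repo

# Overwrite the exported items; parameter.yml is not part of the Fabric
# export, so the existing copy stays in place in the root of the folder
Get-ChildItem -Path $source -Exclude ".git" |
    Copy-Item -Destination $target -Recurse -Force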
Final words
I hope my Introduction to the FUAM deploymenator has proven to be insightful.
I am proud to provide this solution to the community because I believe it will help a lot of people, which is one of the reasons why I decided to create a unique name for it.
For further advice about implementing FUAM feel free to read my other post that covers tips on implementing FUAM in Microsoft Fabric.
Please remember to give the repository a star on GitHub if you work with the solution. If you have any comments or queries about this post, feel free to reach out to me.