Hello everyone! We continue the topic of cross-region Workspace migration, this time handling Large Semantic Models. I highly recommend checking the previous articles on this topic to get a better understanding of what we are trying to achieve here:
- Workspace migration between different regions
- Cross-region Workspace migration using PowerShell Scripts
The algorithm and overall approach are very similar to the PowerShell script, but there is one major difference in this approach. You can get the full code from my GitHub.
Problem statement
Prerequisites
- Tenant Admin role activated on your account
- By definition, a Workspace will not be moved to a different region if it contains Fabric items -> the script works for Power BI items only.
- Import the notebook to Microsoft Fabric
- Finish the configuration in the Configuration section of the code:
  - SCRIPT_MODE: 0 to fetch all workspaces from the specified Capacity, 1 to provide the list of workspaces manually
  - ADMIN_UPN: User Principal Name of your organizational account; it will be used to grant Workspace access (needed to handle Semantic Models) in case it is missing
  - SOURCE_CAPACITY / SOURCE_WORKSPACES: depending on the Script Mode selected, provide either the Capacity ID or the IDs of the Workspaces you would like to migrate
  - TARGET_CAPACITY: Capacity to which the workspaces will be migrated
  - OUTPUT_FILE_PATH: if provided, the script will save logs to the given location
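The configuration above can be sketched as a plain cell of constants. This is only an illustration of the shape of the settings; every value below is a placeholder, not a real ID:

```python
# Configuration (all values are placeholders -- replace with your own)

SCRIPT_MODE = 0  # 0 = fetch all workspaces from SOURCE_CAPACITY, 1 = use SOURCE_WORKSPACES

ADMIN_UPN = "admin@contoso.com"  # account used to grant missing Workspace access

SOURCE_CAPACITY = "00000000-0000-0000-0000-000000000000"   # used when SCRIPT_MODE == 0
SOURCE_WORKSPACES = [                                      # used when SCRIPT_MODE == 1
    "11111111-1111-1111-1111-111111111111",
]

TARGET_CAPACITY = "22222222-2222-2222-2222-222222222222"   # destination Capacity

OUTPUT_FILE_PATH = None  # e.g. a Lakehouse Files path; None = do not write a log file
```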
How script handles authentication
The script authenticates as a Service Principal whose credentials are stored in Azure Key Vault; the resulting token is then used to call the REST API endpoints under:
- https://api.powerbi.com/v1.0/myorg/xxx
```python
from notebookutils import mssparkutils
from azure.identity import ClientSecretCredential

def get_access_token():
    # Service Principal credentials are stored in Azure Key Vault
    client_id = mssparkutils.credentials.getSecret('https://akv-reference.vault.azure.net/', 'your_client_id')
    client_secret = mssparkutils.credentials.getSecret('https://akv-reference.vault.azure.net/', 'your_client_secret')
    tenant_id = mssparkutils.credentials.getSecret('https://akv-reference.vault.azure.net/', 'your_tenant_id')

    api = 'https://analysis.windows.net/powerbi/api/.default'
    auth = ClientSecretCredential(authority='https://login.microsoftonline.com/',
                                  tenant_id=tenant_id,
                                  client_id=client_id,
                                  client_secret=client_secret)
    token = auth.get_token(api)
    return token.token
```
```python
import requests

# Get Fabric Capacities (Fabric Admin API, using the Service Principal token)
def get_fabric_capacities(p_access_token):
    api_url = "https://api.fabric.microsoft.com/v1/admin/capacities"
    headers = {
        'Authorization': f'Bearer {p_access_token}',
        'Content-Type': 'application/json'
    }
    response = requests.get(api_url, headers=headers)
    return response.json()
```

```python
import sempy.fabric as fabric

# Initiate the client
client = fabric.PowerBIRestClient()

# Get Fabric Capacities
def get_capacities():
    api_url = "v1.0/myorg/admin/capacities"
    response = client.get(api_url)
    return response.json()
```
Script test scenario
- Workspace 1:
  - Contains Large Semantic Models
  - All Models are expected to convert successfully
  - Workspace will be migrated to the new Capacity
- Workspace 2:
  - Contains Large Semantic Models, one of them bigger than 10 GB
  - One Semantic Model is expected to fail conversion
  - Workspace will not be migrated, due to the failed conversion
- Workspace 3:
  - Doesn't contain Large Semantic Models
  - Workspace should be migrated to the new Capacity
How script works
1. Check if the End User has Workspace access; grant it if needed.
2. Generate a dataframe with all Semantic Models in the Workspace that have Large Storage Format enabled. If there are no Large Semantic Models, the Workspace is ready for migration and the script continues with step 7.
3. Process Semantic Models one by one:
   1. Try converting the Semantic Model to Small Storage Format.
   2. If the conversion is successful -> log the item in the proper collection. If the conversion fails -> raise conversion_error and log the item in the proper collection.
4. Wait for the conversion to complete. This information is recorded in the basic Workspace-level API as capacityMigrationStatus; the script continues when the status changes to "Migrated".
5. If no conversion_error was raised, the Workspace is migrated to the new Capacity.
6. Semantic Models are restored to Large Storage Format. This happens regardless of step 5: if a conversion failed in step 3.2, the script checks whether any Semantic Models in the current Workspace were already converted, and restores them in this step to keep the Workspace in an "untouched" state. When all conversions succeed, the script simply restores the Large Semantic Models after moving the Workspace to the new Capacity.
7. Revoke Workspace access if it was granted at the beginning.
The code prints a summary for each Workspace, allowing you to track the progress. Let's look at the summary generated for Workspace 1:
As you can see, the script executed as expected. Large Semantic Models were found and processed, the Workspace was moved to the new Capacity, and the Semantic Models were reverted back to Large Storage Format (PremiumFiles in the API). Now, let's have a look at what happened in Workspace 2:
Here we see the expected complications. First of all, one Semantic Model was too big to be converted; therefore, the script no longer processes the remaining Semantic Models. A conversion error is detected, so the Workspace was not moved to the new Capacity. Finally, the script noticed that one Semantic Model had been converted to Small Storage Format before conversion_error was raised; therefore, it is now being restored to its initial state.
Finally, let’s have a look at Workspace 3:
Again, the script executed as expected. No Large Semantic Models were detected; therefore, the Workspace was moved to the new Capacity without any problem.
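For completeness, the access-granting bookends (steps 1 and 7 of the flow) can be sketched with the standard Groups user APIs. Again a hedged sketch, not the author's code; the access right granted here ("Admin") is an assumption:

```python
import requests

BASE = "https://api.powerbi.com/v1.0/myorg"

def grant_access(token, workspace_id, upn, right="Admin"):
    # Step 1: add the user so the script can manage Semantic Models in the Workspace
    url = f"{BASE}/groups/{workspace_id}/users"
    r = requests.post(url, headers={"Authorization": f"Bearer {token}"},
                      json={"emailAddress": upn, "groupUserAccessRight": right})
    r.raise_for_status()

def revoke_access(token, workspace_id, upn):
    # Step 7: remove the user again once the migration is finished
    url = f"{BASE}/groups/{workspace_id}/users/{upn}"
    r = requests.delete(url, headers={"Authorization": f"Bearer {token}"})
    r.raise_for_status()
```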