(For this post I’m going to be referencing a project that I uploaded to GitHub last week. It creates some VMs and installs the self-hosted integration runtime used by ADF activities. That last bit is not really relevant, but to get the most out of this post some experience with ARM and Terraform would be handy. You should be able to look at that project and follow it, hopefully!)

It’s amazing to think that it is possible to have a blob of JSON in source control and that it can be “deployed” into a VM. It is even more amazing that, with an expression as simple as “copy”, I can parallelise that whole process and have a bunch of VMs all deployed into a VNET, with an exe running on the servers connected to something else in Azure.

However, JSON tends to be a little… bloated. If you’ve ever worked with Terraform you’ll know it is much pithier to describe a resource using HCL than it is with an ARM template. One of the things that’s neat about Terraform is that I can have separate .tf files for the different resources that I want deployed: I could, for example, have one for the resource group, another for a Key Vault to go into that resource group, and then separate files for keys, secrets and so on. By referencing resources by name, and using depends_on where an explicit ordering is needed, I can keep the resources in separate files yet still have them deployed together. Overall, this makes resources much neater to organise.
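As a sketch of what that looks like (the resource names here are illustrative and assume the azurerm provider), a Key Vault in one file can reference a resource group defined in another, and Terraform works out the deployment order from the reference alone:

```hcl
# resource_group.tf
resource "azurerm_resource_group" "rg" {
  name     = "example-rg"
  location = "North Europe"
}

# keyvault.tf -- referencing azurerm_resource_group.rg creates an implicit
# dependency, so Terraform deploys the resource group first without any
# explicit depends_on
data "azurerm_client_config" "current" {}

resource "azurerm_key_vault" "kv" {
  name                = "example-kv"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  tenant_id           = data.azurerm_client_config.current.tenant_id
  sku_name            = "standard"
}
```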

Sadly, the same cannot be said for ARM templates. When deploying via ARM, the expectation is that everything is shoved into one file. This is pretty lousy, because no one wants to open up a file with over 1,000 lines of JSON and hunt for the pertinent bit they need to update. “But wait!” I hear you cry: “there is such a thing as nested templates!” OK yes, I am omitting nested templates from this. Part of the problem with nested templates is that you have to tell ARM where the nested templates are, unlike Terraform, where you can reference another resource by its HCL-based name. Another part of the problem is that you cannot deploy nested templates locally. Well, you can, but not really. When you deploy an ARM template using New-AzureRmResourceGroupDeployment, the ARM template is pushed to Azure and the deployment takes place there. Note that it is only the -TemplateFile that is pushed, nothing else. So any files that you reference have to already exist at the referenced location: either pushed to the artifacts location in the repo, or pushed to a storage account beforehand and referenced from there. Let’s separate these two options and talk about why they’re not ideal.
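To make the “tell ARM where the nested templates are” point concrete, here is roughly what a linked template reference looks like in the parent template (a minimal sketch; the deployment name, nested file path and SAS token parameter are illustrative):

```json
{
  "type": "Microsoft.Resources/deployments",
  "apiVersion": "2017-05-10",
  "name": "linkedVnetDeployment",
  "properties": {
    "mode": "Incremental",
    "templateLink": {
      "uri": "[concat(parameters('_artifactsLocation'), 'nested/vnet.json', parameters('_artifactsLocationSasToken'))]",
      "contentVersion": "1.0.0.0"
    }
  }
}
```

The `uri` has to resolve to somewhere Azure can reach at deployment time, which is exactly why the two options below exist.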

Push To The Artifacts Location

Chances are you are using GitHub or some other repo to host the nested templates, so any file you reference has to exist there with your changes already applied. The issue here is that you have to push changes to the remote before you can test them locally. You probably could do something fancy with branching: setting the branch name as a parameter and then building the artifacts location as a variable. So you could go from this -

"_artifactsLocation": {
  "type": "string",
  "metadata": {
    "description": "The base URI where artifacts required by this template are located."
  },
  "defaultValue": "https://raw.githubusercontent.com/richiebzzzt/vms-with-selfhost-integration-runtime-and-custom-vm-config/master/"
}

…to this, but now it is getting messy.

#set param
"branch_name": {
  "type": "string",
  "metadata": {
    "description": "The branch from which artifacts required by this template are pulled."
  },
  "defaultValue": "master"
}
#... set var
"artifactslocation": "[concat('https://raw.githubusercontent.com/richiebzzzt/vms-with-selfhost-integration-runtime-and-custom-vm-config/', parameters('branch_name'), '/')]"
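For completeness, a local test deployment against a feature branch would then look something like this (a sketch; the template file name and branch name are illustrative):

```powershell
# Deploy locally, pointing the artifacts location at a feature branch
New-AzureRmResourceGroupDeployment `
    -ResourceGroupName $ResourceGroupName `
    -TemplateFile .\azuredeploy.json `
    -branch_name "my-feature-branch"
```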

Push Nested Templates To Storage Account

The alternative, which is the most popular way, is to push the nested templates to a storage account and then reference them from there. If you are going to take this route then I suggest that your deployment pipelines follow the same practice, if only so that there is no deviation, no “two ways of doing the same thing.”

In fact, this method is even suggested by Microsoft: if you are using the DeployResourceGroup template there is a whole section dedicated to managing this process. Personally I think it is a little over-engineered, so here is a pithier version of the same -

## upload artifacts for linked templates

$ArtifactStagingDirectory = $PSScriptRoot

# Derive a deterministic storage account name from the subscription id
$StorageAccountName = 'aeg' + ((Get-AzureRmContext).Subscription.SubscriptionId).Replace('-', '').Substring(0, 8)

# Use the existing storage account, or create it if it does not exist yet
$StorageAccount = Get-AzureRmStorageAccount -StorageAccountName $StorageAccountName -ResourceGroupName $ResourceGroupName -ErrorVariable noStorageAccountForYou -ErrorAction SilentlyContinue
if ($noStorageAccountForYou) {
    $StorageAccount = New-AzureRmStorageAccount -StorageAccountName $StorageAccountName -Type 'Standard_LRS' -ResourceGroupName $ResourceGroupName -Location $ResourceGroupLocation
}

# One container per deployment, named after the current timestamp
$StorageContainerName = Get-Date -Format "yyyy-MM-dd-HH-mm-ss"
$_artifactsLocation = $StorageAccount.Context.BlobEndPoint + $StorageContainerName + '/'

New-AzureStorageContainer -Name $StorageContainerName -Context $StorageAccount.Context -ErrorAction SilentlyContinue *>&1

# Upload every file in the staging directory, preserving relative paths as blob names
$ArtifactFilePaths = Get-ChildItem $ArtifactStagingDirectory -Recurse -File | ForEach-Object -Process { $_.FullName }
foreach ($SourcePath in $ArtifactFilePaths) {
    Set-AzureStorageBlobContent -File $SourcePath -Blob $SourcePath.Substring($ArtifactStagingDirectory.Length + 1) `
        -Container $StorageContainerName -Context $StorageAccount.Context -Force | Out-Null
}

Write-Host "Uploaded to $($_artifactsLocation)"

So, we’re uploading our linked templates to a storage account and setting the artifacts location to that storage account for use by the ARM deployment, so we’re home and dry, correct? Wrong. We could not even be said to be home and vigorously towelling ourselves off. You see, somewhere along the bottom of that bit of PowerShell a SAS token is generated, which is then passed as a parameter when the ARM template deployment is kicked off. This token is for accessing the storage account. However, and this is the thing that really makes me grit my teeth, you are not able to use SAS token authentication when running a custom script extension. In that instance you have to use the storage account key. This is relatively simple to add to an ARM template as a parameter, and then in turn pass to the nested template that executes the custom script -

$OptionalParameters["customScriptStorageAccountName"] = $StorageAccountName
$OptionalParameters['customScriptStorageAccountKey'] = (Get-AzureRmStorageAccountKey -ResourceGroupName $ResourceGroupName -AccountName $StorageAccountName).Value[0]
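In the nested template that runs the script, those two parameters end up in the extension’s protectedSettings, along these lines (a sketch for a Windows VM; the VM name, script path and handler version are illustrative):

```json
{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "apiVersion": "2017-03-30",
  "name": "[concat(parameters('vmName'), '/CustomScriptExtension')]",
  "location": "[resourceGroup().location]",
  "properties": {
    "publisher": "Microsoft.Compute",
    "type": "CustomScriptExtension",
    "typeHandlerVersion": "1.9",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "fileUris": [
        "[concat(parameters('_artifactsLocation'), 'scripts/install.ps1')]"
      ]
    },
    "protectedSettings": {
      "commandToExecute": "powershell -ExecutionPolicy Unrestricted -File install.ps1",
      "storageAccountName": "[parameters('customScriptStorageAccountName')]",
      "storageAccountKey": "[parameters('customScriptStorageAccountKey')]"
    }
  }
}
```

Note that the account name and key go in protectedSettings, not settings, so they are encrypted rather than sitting in the template output in plain text.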

OK, so: one extra piece of configuration required; not too much trouble then, I suppose. But really, if Terraform can cope with resources being stored in multiple files, why does ARM struggle so much?