Run Terraform Locally and In Azure DevOps Pipeline
Hello!
I’ve been working with Terraform on and off for about 3 years now. In fact I first heard about Terraform way back in 2016, whilst I was busy cursing ARM templates and their verbosity. It was around this time that Microsoft and HashiCorp announced their working partnership (this was also around the time that Microsoft announced that they hearted Open Source). I’m not going to go into ARM template bashing though; all Infrastructure as Code solutions have their advantages and drawbacks.
Instead, it is one of the drawbacks of Terraform that I want to talk about today: running Terraform both locally and in a pipeline, and how to set up the backend for each. When I work on a software project, one of the first things I focus on is “how am I going to deliver this thing that I’m writing on this machine via a release pipeline?” This is important to me because builds and releases can be a pain if they’re not properly implemented. I want the build and release process/pipeline to be runnable locally, so that if a pipeline fails I can run the same process on my machine to see what failed. And I want to write tests around the code that runs the builds/releases so that I can say with confidence that it works.
In the case of Terraform, there is the process of setting up the backend to store the state files. State files can be stored on any number of configurable backends, but because I’m using Azure it makes sense to use containers in Azure Storage accounts. When I’m running Terraform locally, though, I don’t necessarily want to be using containers: it’s just not necessary, I’d need to write scripts for devs to create and manage the containers, and we’d probably end up with many containers each holding a single state file and people too scared to drop any of them in case it broke an actual pipeline. But because I have to declare the backend in the Terraform files, I sort of have to use them anyway, and this has troubled me.
What I have been doing recently is not running Terraform locally at all and using an Azure DevOps pipeline to run everything, but this is slow and cumbersome. What I could do is comment out the backend, put a local backend in its place, and change it back before I check in, but I can’t trust myself not to forget and then break the pipeline.
And so, spurred on by a tweet, I decided to try to find a solution that would let me run Terraform locally and also in a pipeline, without having to remember to update the backend config before I check in.
Let’s start by taking a look at how I set up the backend for the pipelines, before moving on to the local setup. Instead of passing in any vars to authenticate with the backend, I set some environment variables. I also pass in variables via backend-config to configure the AzureRM backend. By not having to pass in any vars I won’t have to touch the variables.tf file.
- script: |
    export ARM_CLIENT_ID=$(spId)
    export ARM_CLIENT_SECRET=$(spKey)
    export ARM_SUBSCRIPTION_ID=${{parameters.subscriptionId}}
    export ARM_TENANT_ID=$(tId)
    terraform init -input=false \
      -backend-config="resource_group_name=$(resource_group_name)" \
      -backend-config="storage_account_name=$(storage_account_name)" \
      -backend-config="container_name=$(container_name)" \
      -backend-config="key=$(key)"
    terraform get --update=true
  displayName: terraform init
  workingDirectory: $(System.DefaultWorkingDirectory)/tf
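For context, the $(resource_group_name), $(storage_account_name), $(container_name) and $(key) values above come from pipeline variables, and the subscription id comes in as a pipeline parameter. A minimal sketch of how these might be declared is below; the names match the snippet above but the values are illustrative assumptions, not my actual setup.

parameters:
  - name: subscriptionId
    type: string

variables:
  # illustrative values only; point these at the storage account that holds your state
  resource_group_name: 'rg-terraform-state'
  storage_account_name: 'sttfstate'
  container_name: 'tfstate'
  key: 'myproject.tfstate'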
This means that my backend setup in my main.tf can look like this:
terraform {
  backend "azurerm" {
  }
}
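For comparison, this is roughly what the same backend would look like if the settings were hard-coded rather than supplied via -backend-config (the values here are illustrative assumptions); keeping the block empty means none of this ends up in source control.

terraform {
  backend "azurerm" {
    # normally supplied at init time via -backend-config
    resource_group_name  = "rg-terraform-state"
    storage_account_name = "sttfstate"
    container_name       = "tfstate"
    key                  = "myproject.tfstate"
  }
}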
The environment variables and the passed-in values do the rest. So theoretically I could simply comment out this block when working locally and replace it with something like this:
terraform {
  backend "local" {
  }
}
but I don’t want to rely on remembering to change the backend setup each and every time I check in. I want a way for this to work locally and remotely with as little effort/forethought as possible.
One way that this could work is by using an override.tf file. Override files should only be used in special circumstances, but in the absence of any other way to manage different backend setups I think this is the best way to go, until someone finds a better one.
In the root of my terraform directory I can create an override.tf file and place the local backend config above in it. Then when I run terraform init, the contents of the override file replace the backend in the main.tf file. Pretty simple. All I need to do is add override.tf to the .gitignore file and include an override.tf.example file in my repo that other users can copy to mimic the setup (the .example file isn’t picked up by Terraform), and we’re done!
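To make that concrete, here is a sketch of the override file; the backend block is the same local backend shown above, and the comments are just reminders of why the file exists.

# override.tf -- not committed (add "override.tf" to .gitignore);
# the pipeline never sees this file, so it keeps using the azurerm backend from main.tf
terraform {
  backend "local" {
  }
}

The committed override.tf.example contains the same content, so a new developer only needs to copy it to override.tf to get a local backend, with state written to a terraform.tfstate file in the working directory.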
I’m glad that I have come to a (sort of) non-hacky resolution to this issue; as I said, it has troubled me how to work with Terraform locally while also running it in a pipeline, because the backend setup has proved to be a bit problematic.
Bonus Content: How To Set Service Principal Credentials to Environment Variables
In the YAML above I’m setting the env vars of the service principal using some other variables. Looking at this in isolation, I can appreciate that it would make no sense without some explanation. In an Azure DevOps pipeline you first need to use the AzureCLI@2 task. This has an option to include the SPN details, or in other words the login details of the Azure Resource Manager subscription you have specified in the azureSubscription parameter. You do this by setting addSpnToEnvironment to true. In this task you then create pipeline variables from the service principal id, the key, and the tenant id.
- task: AzureCLI@2
  displayName: "Get Service Principal Variables"
  inputs:
    azureSubscription: $(azureSubscription)
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: |
      echo "##vso[task.setvariable variable=spId]$servicePrincipalId"
      echo "##vso[task.setvariable variable=spKey]$servicePrincipalKey"
      echo "##vso[task.setvariable variable=tId]$tenantId"
    addSpnToEnvironment: true
Now you have these values set and can use them multiple times throughout the pipeline. The values are still masked as sensitive, but you can also dispose of them once you no longer need them.
- task: PowerShell@2
  displayName: "Remove Vars"
  inputs:
    targetType: "inline"
    script: |
      Remove-Item env:\SPKEY
      Remove-Item env:\SPID
      Remove-Item env:\TID
    pwsh: true
This has the advantage that you’re using the credentials set up in the service connections in Azure DevOps, as opposed to having to pass them in as variables. You could also use a variable group that is linked to an Azure Key Vault, which is equally valid, but we have service connections set up, so it makes sense to use them. The point of re-using stored credentials instead of storing them in many places is that refreshing values becomes much less of a pain to deal with.
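For completeness, if you did go the variable-group route instead, the pipeline side of it is a one-liner; this sketch assumes a variable group named terraform-secrets has already been created in the Library and linked to a Key Vault (the group name and the secrets it exposes are hypothetical).

variables:
  # hypothetical variable group linked to an Azure Key Vault;
  # the secrets it exposes can then be referenced like any other variable, e.g. $(spId)
  - group: terraform-secrets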