Over the last 6 months or so I’ve been using terraform to deploy infrastructure to Azure. Prior to that I was using ARM Templates and publishing them with Azure PowerShell, again for about 6 months. So I reckon I’ve got a reasonable grasp of both technologies, their similarities and differences, and their advantages and disadvantages. But one thing I have done for both is run Pester tests post-deploy to verify the deployment was correct. When using ARM I used Azure PowerShell cmdlets to return the deployed objects and check their config. And so when I started using terraform I copied those scripts verbatim and ran them.

However it became clear pretty quickly that this was less than optimal: you see, with terraform I’m logged in using the az cli instead of the Azure PowerShell method, so I had to log in again and ensure that whatever boxes were running the tests had the Azure PowerShell module installed. To add to this, there were variables I was having to pass to the script that already existed in the terraform.tfvars file, so I had to duplicate them in the PowerShell scripts, which could take a long time. This extra overhead was a headache from the start. And so when I wrote a new module recently I thought to myself “if I’m already logged in and terraform already has the variables set, what is stopping me from running Pester tests and using the az cli instead?” Turns out there’s not really all that much.

So if you mosey on over to GitHub you’ll find a repo that demonstrates running Pester tests as part of the deploy. Neat! But make sure you read the readme before anything else, as there are some variables you need to set, plus a secrets file. And if you’ve never used terraform before you may want to go and look at the terraform getting started pages before getting into this, as I’m going to assume some prior knowledge.

So once you’ve run terraform init and the providers have been downloaded, you can go ahead and run terraform apply. This will create any resources specified in the .tf files in Azure. In this repo we have a resource group, an Azure SQL Server instance, a database, and some firewall rules: one to allow objects in Azure to connect to the database and another to allow the local machine to connect to the instance. This last one is critical to running the tests!

But there are two other resources, or rather null_resources, that are created. The first is an InstallTools resource, which checks that Pester is installed, and the second runs the tests. The two are structured identically, so let’s look at the one that runs the tests -

resource "null_resource" "run-pestertest" {
  provisioner "local-exec" {
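    # Call the Pester wrapper script, passing the terraform variables straight through as parameters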
    command = ".\\scripts\\runTests.ps1 -resourceGroupname ${azurerm_resource_group.rg.name} -databaseName ${var.database_name} -serverName ${var.environment_prefix}${var.application}sql${var.location}${var.environment_suffix} -sqlAdministratorLogin ${var.sqladminuser} -sqlAdministratorLoginPassword ${var.sqladminpassword}"
    interpreter = ["PowerShell"]
  }
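  # Don't run until the database exists and the Pester install step has completed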
  depends_on = ["azurerm_sql_database.db", "null_resource.run-installtools"]
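  # timestamp() changes on every apply, so this provisioner fires every time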
  triggers = {
    always_run = "${timestamp()}"
  }
}

The null_resource type means that it is not tied to any particular piece of infrastructure, and the local-exec provisioner means it will execute a PowerShell file in the repo. Happily, we’re able to reuse the terraform variables when calling the PowerShell script. The trigger ensures that the resource is always run: despite being a null resource it is still added to the state file, so without the trigger it would only run on the first apply. Note that there are also two depends_on entries: one requiring the database resource to exist, and the other requiring that Pester has been installed by the InstallTools resource.
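For reference, the InstallTools step doesn’t need to do anything clever. The repo has its own script, but a minimal sketch of that kind of check looks something like the following (the script name and the install options are my assumptions rather than a copy of what’s in the repo):

# installTools.ps1 (name assumed) - make sure Pester is available before the tests run
if (-not (Get-Module -ListAvailable -Name Pester)) {
    # -Force and -SkipPublisherCheck keep the install non-interactive on build boxes
    Install-Module -Name Pester -Scope CurrentUser -Force -SkipPublisherCheck
}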

The script that runs is a standard PowerShell file that calls Invoke-Pester. The [test itself](https://github.com/RichieBzzzt/terraform-azure-sql-pester/blob/master/tests/testSql.ps1) establishes that it can connect and that the database name is correct. The first time you run this it will fail, as the database name is hard-coded into the test. If you re-run the apply, both PowerShell scripts will be re-run.
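To give a flavour of how the variables flow from terraform into Pester, here is a rough sketch of a wrapper and a test along those lines. The parameter names match the ones the null_resource passes in, but the rest (Pester v4 syntax, the connection string, the hard-coded database name) is my own illustration rather than a copy of the repo’s files:

# runTests.ps1 - thin wrapper that hands the terraform variables over to Pester
param (
    [string] $resourceGroupName,
    [string] $databaseName,
    [string] $serverName,
    [string] $sqlAdministratorLogin,
    [string] $sqlAdministratorLoginPassword
)

# Pester v4 style: pass the test file and its parameters in one hashtable
Invoke-Pester -Script @{
    Path       = ".\tests\testSql.ps1"
    Parameters = @{
        serverName                    = $serverName
        databaseName                  = $databaseName
        sqlAdministratorLogin         = $sqlAdministratorLogin
        sqlAdministratorLoginPassword = $sqlAdministratorLoginPassword
    }
} -EnableExit

And the test itself might look roughly like this:

# testSql.ps1 (sketch) - can we connect, and is the database the one we expect?
param (
    [string] $serverName,
    [string] $databaseName,
    [string] $sqlAdministratorLogin,
    [string] $sqlAdministratorLoginPassword
)

$connectionString = "Server=tcp:$serverName.database.windows.net,1433;Database=$databaseName;" +
                    "User ID=$sqlAdministratorLogin;Password=$sqlAdministratorLoginPassword;Encrypt=True;"

Describe "Azure SQL deployment" {
    It "can open a connection to the database" {
        $connection = New-Object System.Data.SqlClient.SqlConnection $connectionString
        { $connection.Open() } | Should -Not -Throw
        $connection.Close()
    }

    It "is the database we expect" {
        # the expected name is hard-coded ("pesterdb" is just a placeholder here),
        # which is why the very first apply fails until you update it
        $databaseName | Should -Be "pesterdb"
    }
}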

All told this made testing far easier, and the tests now run on every environment we deploy to. And whilst this sample doesn’t include any tests that use the az cli, because we’re already authenticated with az we won’t have to re-establish any connection to Azure to run those tests. There is practically zero overhead in setting up and running the Pester tests.
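The sample doesn’t ship with an az cli test, but one could drop straight into the same pipeline. Something along these lines (the test name and query are mine, and it assumes the same parameters as the sketch above) would simply reuse the login that terraform itself is running under:

Describe "Azure SQL database via az cli" {
    It "is visible to the az cli" {
        # no extra authentication needed - this piggybacks on the existing az login
        $name = az sql db show --resource-group $resourceGroupName --server $serverName --name $databaseName --query "name" --output tsv
        $name | Should -Be $databaseName
    }
}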

But wait! One thing that has troubled me about this test is whether it really is a test or not. What I like about it is that I can be confident I can establish a connection to an Azure SQL instance and confirm that the database exists, so when I come to deploy the schema I’ll know there shouldn’t be any issues connecting to the box. However, this isn’t really a “test” so much as a “check”. I think it is very important to make this distinction when talking about testing and I’ll cover it in another post… eventually.