
Hello!

Time for another T-SQL Tuesday, and this month Garry Bargsley invites us to write on two questions for this month's topic:

• What do you want to automate, or what automation are you proud of completing?
• What is your go-to technology for automation?

Go-To Technology For Automation

Second part first: for the automation I have done, which has largely been around building and deploying SQL/SSIS/SSAS, and more recently Azure tech like Azure Data Factory and infrastructure in Azure, my go-to tool has been PowerShell. I've heard it described as "glue", and that is exactly how I see it: it joins together the constituent parts of a pipeline to get code from source to production. But it's important not to come unstuck (pun intended) when using large amounts of PowerShell to create a pipeline. What I mean is that, much like the code you're deploying, it needs to be stored in source control, it needs to be tested, and the consumers of the PowerShell need to be able to control which version of the code they are using.

A great example of this being done badly is Octopus Deploy and their step templates/script modules: they cannot be source-controlled easily (in fact Asos have released the code they use to version them, and it shows how hard it is to do), and whilst users can control which version of the steps they are using, they cannot control which version of a script module is used: it's always the latest one. This is horrible for any company with more than three or four teams relying on shared code.

A better way of managing script modules for Octopus Deploy would be to create them not as script modules but as NuGet packages, store them in a package manager, and add the NuGet packages to a release. When a new release is created, a version number can be specified, or left blank to always pull the latest. This allows a pipeline to be created for the module itself, so that it can be stored in source control, have tests run against it, have its changes tracked, and so on.
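As a rough sketch of that approach (the package id, file names, and version here are illustrative, not from any real project), the module could be described with a minimal `.nuspec` so it can be packed, pushed to a feed, and pinned by version in a release:

```xml
<?xml version="1.0"?>
<!-- Minimal .nuspec sketch for packaging a PowerShell script module
     as a NuGet package; id, version, and file paths are hypothetical. -->
<package>
  <metadata>
    <id>MyCompany.DeploymentHelpers</id>
    <version>1.2.0</version>
    <authors>MyTeam</authors>
    <description>Shared deployment functions, versioned per release.</description>
  </metadata>
  <files>
    <file src="DeploymentHelpers.psd1" target="tools" />
    <file src="DeploymentHelpers.psm1" target="tools" />
  </files>
</package>
```

Packing this with `nuget pack` and pushing it to a feed means each release can reference a specific package version, which is exactly the control that script modules lack.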

Automation I’ve Done

Now, whilst it sounds a bit much to create a pipeline for the code that runs a release, and I'll concede it can get a bit like Inception at this point, having a build/deployment pipeline ready to go for a given technology can be super-useful. By having something source-controlled and tested, such as the code for building and deploying a database, you can add value quickly in a project by focusing on the requirements the business really cares about rather than on the mechanics of delivering the code. And more importantly, you can get those requirements deployed to production quickly.

If you take a look at the projects I've worked on in GitHub, you can see I really have worked on quite a bit of automation. And whilst PoshSSDTBuildDeploy is great, the one I'm probably most pleased with is SQLDWSchemaMigrate, because I basically had about two months to write a PowerShell module that does for Azure SQL DW what SSDT does for SQL Server databases. However, PoshSSDTBuildDeploy is very much my own thing, and not only is the code on GitHub, but the project to publish the module to the PowerShell Gallery and NuGet is public in VSTS, so anyone should be able to view the process.

But then there's also salt and AssistDeploy that I've worked on, which seem to be popular with the people who have been kind enough to give me feedback on them. And there's a common theme amongst these four projects I've mentioned: they're all very light on dependencies. You see, some years ago I was in a build/release/DBA/dogsbody sort of role in a small company, and large parts of it I enjoyed a great deal. But one thing I really didn't enjoy was managing dependencies on the build servers. So when it came to writing deployment modules, I wanted to find the path of least dependencies. I also wanted the modules to be runnable on a developer's machine, so that prior to a check-in, a developer could download the module and run the build/deploy locally. This is also super-useful in terms of debugging issues.
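That local workflow can be sketched with nothing but built-in PowerShellGet cmdlets (this is just an illustration; check each module's README for its actual exported functions before relying on them):

```powershell
# Sketch: pull a published module from the PowerShell Gallery and poke at it
# locally before a check-in. Requires access to the PowerShell Gallery.
Install-Module -Name PoshSSDTBuildDeploy -Scope CurrentUser
Import-Module -Name PoshSSDTBuildDeploy

# List the exported build/deploy functions to see what the module offers
Get-Command -Module PoshSSDTBuildDeploy
```

Because the module carries its own dependencies, the same commands work on a build agent and on a laptop, which is what makes local debugging of a failed deploy practical.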

Automation I’ve Not Done

It was acknowledged in the intro to the invite that "Automate All The Things" is a buzz phrase, and I'll totally agree on that. Much more pertinent is "Automate Most of the Things": sometimes it's just not worth spending an age writing something to automate a task that is going to be done very infrequently. The trick is to know what to automate and what to ignore. For my part, if you have to write a manual or a run-book to do something every release, and it usually goes wrong, that needs automating. If it's a one-shot when a new environment is created in Azure, then forget about it. In fact, adding an ADF key to a self-hosted Integration Runtime (a first-time-only task) is a manual process, so it's interesting to see that even new tech can involve manual steps.

Sum Up

I wanted to keep this short (like 500 words or so) to avoid rambling, but I've gone some way over that. However, as the saying goes, a picture paints a thousand words, so if there's one thing I want to leave you with, it's this xkcd, which we have taped up on our wall at work: as engineers we seek to solve all our problems with computers, but maybe we should seek to solve our problems with common sense first.

xkcd