I’ll admit this post is a little off-piste, so let’s see how this works out. I’ve had this topic in the back of my head as a potential blog post for a couple of years now, and what with it being T-SQL Tuesday, and having just come off the back of playing Minecraft with my nephew for the first time in a while, now is as good a time as any to write it. At the very least, if I fail in articulating the similarities between Minecraft and DevOps, chances are you’re going to come away with a better understanding, if not appreciation, of Minecraft.

What Has Minecraft Got To Do With DevOps?

Glad you asked. For those of you who are not ITK, Minecraft is a fairly recent phenomenon that has taken the gaming world by storm. Minecraft is set in a world that is largely uninhabited, save for all sorts of spawning mobs. These mobs range from harmless animals like chickens and pigs, to the downright destructive Ender Dragon, and every sort of hazardous creature in between. Upon starting the game, our protagonist/player (let’s call him “Steve”) is armed with nothing but his fist to begin punching blocks in earnest. During the day the surface world is relatively harmless, but come the night there are all sorts of nefarious creatures, such as Zombies, arrow-shootin' Skeletons and self-destructive Creepers, intent on inflicting some fatal blows on Steve. In short, it is a hostile world, and time is not on your side.

And let’s be clear on something: you will get hurt. Poor Steve will occasionally get blown up by a Creeper, or bitten by a spider (just hope it’s not one of those poisonous cave spiders), or simply freaked out by an Enderman. The point is, mistakes are made and lessons are learned.

Some things are of course already apparent: lava burns. You shouldn’t need to walk into lava to learn this in the context of the game. However, consider this trick: carry a bucket of water around with you, so that when you are mining for obsidian you can pour some water over the top to extinguish any lava underneath. This prevents the lava from either burning you or destroying your obsidian. And considering how precious your diamond pickaxe is, you don’t want to lose that obsidian, amirite?!

Depending on your level of knowledge of Minecraft, your response to that previous paragraph will broadly range from “I knew that already” to “I have no idea what any of that meant”.

And so from hereon in you can create increasingly sophisticated tools to help you achieve your end goals.

Confession time: you know when I said that there was no end game in Minecraft? That’s not entirely true. There is a realm called The End, where there be the Ender Dragon, a boss mob. Once defeated it can re-spawn, but the player has, in a sense, completed the game. But it all starts at the same place: Punching Blocks With Your Bare Fist.

If you take one thing from this section about Minecraft, it is this: You can’t go mining obsidian and building portals to go and slay the Ender Dragon before the first nightfall: you need to build up the tools you have until you are capable of doing this.

No Really, What Has All This Got To Do With DevOps?

For fear of banging on about Minecraft for another 700 words, let’s move on and draw some parallels with DevOps.

For the other half of the readership that makes up this Venn diagram: DevOps is a fairly recent phenomenon that has taken the IT world by storm. There are many definitions of DevOps, but broadly it is the idea of being able to deploy changes faster, with cooperation between development and operations. I don’t believe it is a role in a company: it is a working practice, and an understanding that, in order to increase the pace of deployments, we must be able to deploy in a repeatable manner across the different environments, from Dev to Production.

But this doesn’t mean that DevOps is solely about automating deployments. Well OK it does. But it’s not the thing and the whole of the thing. It sort of is, and yet it isn’t.

Let me be clear about something: if you don’t have your databases in source control, there’s no point in thinking about anything else; everything else follows on from this point. Getting your code into source control is the absolute starting point of all deployment pipelines. Some people have very strong views about whether to use Git or TFS, but frankly I’m less concerned about the version control system of choice and more concerned about whether all code that is deployed is in source control. There’s no point in fretting about how to use Octopus Deploy if you haven’t got your code in source control.

Just like in Minecraft, though, you will make mistakes, and you will move on and learn from them. At one point I was obsessed with the idea that the script SSDT generates had to be the one executed, so I spent ages writing PowerShell to execute the generated scripts. Come to find out, there are issues with Invoke-Sqlcmd, and scripts were failing without me even realising it. I got burnt pretty badly here.
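For illustration, here is a minimal sketch of the sort of defensive wrapper I wish I’d written from day one. The server, database and script names are assumptions; the point is that without -ErrorAction Stop, some SQL errors come through as non-terminating errors and the deployment happily carries on:

```powershell
# Run a deployment script and actually fail when it fails.
# Server/database/script names here are hypothetical.
$serverInstance = "localhost"
$database       = "MyDb"
$scriptPath     = ".\deploy.sql"

try {
    # -ErrorAction Stop promotes SQL errors to terminating errors,
    # so a failed script can't slip through unnoticed.
    Invoke-Sqlcmd -ServerInstance $serverInstance `
                  -Database $database `
                  -InputFile $scriptPath `
                  -ErrorAction Stop
    Write-Host "Deployed $scriptPath successfully."
}
catch {
    Write-Error "Deployment of $scriptPath failed: $_"
    exit 1   # fail the build/release step loudly
}
```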

Likewise, I’m also less concerned about whether you use something like ReadyRoll or SSDT (cough use SSDT cough), but again, there’s little point in thinking about writing your own version of the Chaos Monkey if you’re not compiling your code every time someone checks in and, provided the build goes green, packaging the changes into something like NuGet packages. And have you thought about version numbers? Thankfully, someone already has, so you don’t have to. But you still need to implement it.
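As a rough sketch of what that packaging step can look like, assuming nuget.exe is on the path; the nuspec path and versioning scheme here are made up, and $env:BUILD_NUMBER is the counter a build server like TeamCity happens to expose:

```powershell
# After a green build, pack the build output into a NuGet package,
# stamped with a version derived from the build counter.
$version = "1.0.$env:BUILD_NUMBER"   # e.g. 1.0.42 on build 42

& "nuget.exe" pack ".\MyDb.nuspec" `
    -Version $version `
    -OutputDirectory ".\artifacts"
```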

Essentially, as mentioned, if you aim to slay the Ender Dragon within the first half hour of spawning in Minecraft, you’re going to fail, or worse, burst into flames whilst you try to mine obsidian with your wooden pickaxe. Likewise, if you think you’re going to go from no automated deployments to achieving Continuous Delivery to Production for all databases in one go, without putting the hard graft in, you’re going to fail (though hopefully not burst into flames), even if you already have some level of Continuous Integration set up.

A Case Study

I’m loath to bring up the Chaos Monkey as some sort of gold standard of DevOps. Yes, it is great that Netflix have a rogue application that will delete AWS instances to see how they cope, and yes, it’s great that they have shared that code on GitHub. But there are two things that bother me: people bring it up in presentations and blog posts like it’s something new, and it isn’t (it’s been talked about for seven years); and it’s still the only example I know of a company doing such a thing. The number of companies doing it is clearly vanishingly small. So whilst I’m not saying it’s unachievable, I just wish to be a little more pragmatic, as opposed to presenting something I have never done as “if you’re not doing this, then you’re failing at DevOps”. And rather than talk about something someone else has done and attempt to bask in its reflected glory, I want to bring up an experience I had of helping implement a DevOps way of working some years ago.

Some years ago I worked for a small company that had a decent setup when it came to Continuous Integration: upon check-in, code was compiled and unit tests were run. If this went green, another build was run, changes were automatically deployed, and more automated tests were run. Code was merged, more tests run, and when a release was ready we branched off main and deployed manually to PreProd/Prod. So the Continuous Deployment story was not even halfway there, but there was something; we just couldn’t build on it (more on why later).

For the databases, we used SSDT to create a diff script from a dacpac that we stored in source control. This dacpac was, as far as we knew, the version that was live. There was of course a risk that the dacpac was out of date, but seeing as I both did the deployments and was the DBA, we were all fairly confident that I could communicate any changes to myself.
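In SqlPackage terms (the command-line side of SSDT deployments), that process looked roughly like the sketch below; the paths and names are illustrative rather than what we actually had:

```powershell
# Generate an incremental upgrade script: make the "live" schema
# (the dacpac kept in source control) match the freshly built dacpac.
& "SqlPackage.exe" /Action:Script `
    /SourceFile:".\bin\Release\MyDb.dacpac" `
    /TargetFile:".\LiveBaseline\MyDb.dacpac" `
    /TargetDatabaseName:"MyDb" `
    /OutputPath:".\upgrade.sql"
```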

But clearly this was not the proper way to work. The automated deployment in the dev/test environments was a combination of Exec commands in MSBuild and Custom Actions as part of the MSI installs. In the context of having something to automate deployments to dev/test it worked, but it wouldn’t cut it in Production. We needed to move this stuff out into a proper deployment process. Worse, a lot of the manual process was stored in txt files on servers in Production, and a lot of knowledge lived only in my head.

And so, as a new app/database project was being written, it made sense to write a deployment process that would deploy the code from dev all the way to production. And so licences for Octopus Deploy were procured. And proper versioning of builds was implemented. And instead of using MSIs to deploy the applications, NuGet packages were used (I loathe WiX, which is a story for another day, but I was happy with the idea of moving away from MSIs). Dacpacs were also packaged into NuGet packages, and publish.xml files were packaged with the dacpacs. And the diff files were created at deploy time from the same dacpac, irrespective of which environment we were deploying to.
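The deploy-time step then becomes the same command in every environment, with a per-environment publish profile supplying the connection details and deployment options. A sketch, with assumed file names:

```powershell
# Deploy the very same dacpac to any environment; only the
# publish profile (connection, options) differs per environment.
& "SqlPackage.exe" /Action:Publish `
    /SourceFile:".\MyDb.dacpac" `
    /Profile:".\MyDb.Test.publish.xml"   # swap for .Production.publish.xml etc.
```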

It took a couple of months, but then these new projects were deployed via Octopus all the way to production. And so, with a proven track record of how to get a task on a sprint board deployed all the way to production using this new pipeline, the company were able to roll out the changes to the rest of the projects. I had left before they had managed to get everything done, but they successfully got there in the end.

What Tools Do I Need?

I’m not going to shill any particular tool here. Depending on how mature your DevOps processes are, you may find some tools more useful than others.

However: just as crafting is fundamental to Minecraft, learning a scripting language like PowerShell is fundamental to DevOps: in TeamCity, Jenkins, Visual Studio Online, Octopus Deploy etc, much of the automation process will work by scripting tasks. There’s plenty of help out there if you know where to look, for PowerShell just as for Minecraft.
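To give a flavour of what “scripting tasks” means in practice, here is a toy deployment step of the kind these tools end up running; every name in it is made up:

```powershell
# A typical glue script: stop the service, swap in the new build, restart.
$serviceName = "MyAppService"                       # hypothetical service
$dropFolder  = "\\buildserver\drops\MyApp\latest"   # hypothetical drop share
$installDir  = "C:\Apps\MyApp"

Stop-Service -Name $serviceName -ErrorAction Stop
Copy-Item -Path "$dropFolder\*" -Destination $installDir -Recurse -Force
Start-Service -Name $serviceName
Write-Host "Deployed latest build of $serviceName from $dropFolder."
```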

And then, regardless of what tool you use, there will be shortcomings. SSDT has a concept of pre-deploy scripts, so that changes can be made before the main diff script is executed. Trouble is, the diff script is generated before the pre-deploy script is executed, so you can’t use pre-deploy scripts to stage a change that the diff script can then build on. This means you have to use something like a pre-model script: a script that is executed before the diff script is generated.
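A sketch of that ordering, under the same assumed names as before: the pre-model script runs first, and only then is the diff generated, so the diff is computed against the already-staged schema:

```powershell
# 1. Run the hand-written pre-model script against the target first.
Invoke-Sqlcmd -ServerInstance "localhost" -Database "MyDb" `
              -InputFile ".\PreModel\001-stage-change.sql" -ErrorAction Stop

# 2. Only now generate the diff script from the dacpac, so it takes
#    the pre-model changes into account.
& "SqlPackage.exe" /Action:Script `
    /SourceFile:".\MyDb.dacpac" `
    /TargetServerName:"localhost" `
    /TargetDatabaseName:"MyDb" `
    /OutputPath:".\deploy-diff.sql"
```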

Again, depending on your level of experience with SSDT, your response will broadly range from “I knew that already” to “I have no idea what any of that means”.

Ideally, it would be great to get to the point where a developer checks in code and it goes through several rounds of testing and deployment before reaching production, all in one go. But in suitably large organisations that is unlikely: some manual intervention or checking may be required until confidence in the pipeline is such that you are able to deploy automatically.

And just because you’re deploying to a different environment does not mean your deployment process should deviate in any way: if you look at the tooling recipes in Minecraft, they don’t change, irrespective of the mineral (two sticks, three blocks of mineral). If you deploy your pre-model scripts via an automated step in dev/test, why hand the script over to a DBA in Production? The whole point of deploying to previous environments is to test the deployment.

In the context of automating database deployments: it’s really hard! Don’t let anyone fool you into thinking it is easy. What makes database deployments unique is that you’re only ever deploying a diff between the model database and what is currently live, if you’re using SSDT at least. For migration-based deployments it is even vaguer, as scripts drift further away from what was originally written. So unless you’re going to deploy every single change you make straight to Production, the changes waiting to go to Production are going to build up, increasing the risk of something failing. You need to plan your changes so that each one is non-breaking; a sketch of what that looks like follows. This really is deviating from my original topic though…
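To make that concrete, here is a hypothetical example of splitting one breaking change (renaming a column) into non-breaking steps; the table, column and connection names are all made up:

```powershell
# Deployment 1: add the new column alongside the old one, then backfill it.
# Both old and new application code keep working during this release.
Invoke-Sqlcmd -ServerInstance "localhost" -Database "MyDb" -ErrorAction Stop -Query @"
ALTER TABLE dbo.Customer ADD EmailAddress nvarchar(256) NULL;
"@

Invoke-Sqlcmd -ServerInstance "localhost" -Database "MyDb" -ErrorAction Stop -Query @"
UPDATE dbo.Customer SET EmailAddress = Email WHERE EmailAddress IS NULL;
"@

# Deployment 2, in a later release, once every reader uses the new column:
# drop the old column. Each deployment on its own is non-breaking.
```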

Of course, as mentioned, once you have slain the Ender Dragon, it will re-spawn. It truly never ends, just like improving your process for deploying code to live. But it always begins with punching blocks.