Deploying SSDT Scripts (With My Good Eye Closed)
(If anyone actually gets the reference: kudos. If not, it just means you’re not a Soundgarden fan.)
So, there’s something that has been troubling me about deploying to databases with SSDT for some time. Actually, it’s not an SSDT-exclusive problem per se; it’s more a problem of the database release-management process.
What it pertains to is that most people like to create the deployment script for review prior to deploying to production. This makes sense if you are new to the Continuous Deployment process, or if SSDT sometimes creates scripts that are not quite what you expected. You may skip the review because you feel confident in your deployments, or because the database is simple enough to deploy without one. But generating the script as a deployment artefact is still a good idea, because auditing.
At any rate, the script is generated and maybe reviewed… so then what? In SSDT there is no way to create and deploy a script in one step; they are two distinct steps. And even if they were one step, this would still not resolve the issue that troubles me. So what is this issue?
The issue is that by creating a script and then running the deploy, you cannot be sure that the database is in the exact same state it was in when the initial script was generated. If you don’t already know, SSDT runs a deploy entirely in memory: no script is written out unless you explicitly request one as part of the process. And if you have already created one, the publish step does not run it; it re-creates the comparison from scratch.
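To make that two-step nature concrete, here is a minimal sketch (Python purely for illustration) of driving sqlpackage.exe: one call with /Action:Script to produce the reviewable artefact, and a completely separate call with /Action:Publish to deploy. The dacpac, server and database names are hypothetical placeholders. Note that Publish diffs the dacpac against the target all over again, in memory; it does not execute the script produced by the first call.

```python
import subprocess

# Hypothetical names for illustration; substitute your own dacpac, server and database.
DACPAC = "MyDatabase.dacpac"
SERVER = "PRODSQL01"
DATABASE = "MyDatabase"

# Step 1: generate the deployment script as a reviewable artefact.
# /Action:Script writes the T-SQL to a file but changes nothing on the target.
subprocess.run([
    "sqlpackage",
    "/Action:Script",
    f"/SourceFile:{DACPAC}",
    f"/TargetServerName:{SERVER}",
    f"/TargetDatabaseName:{DATABASE}",
    "/OutputPath:deploy_script.sql",
], check=True)

# Step 2 (distinct, and possibly much later): publish.
# /Action:Publish re-compares the dacpac against the target in memory and
# executes the result -- it does NOT run deploy_script.sql.
subprocess.run([
    "sqlpackage",
    "/Action:Publish",
    f"/SourceFile:{DACPAC}",
    f"/TargetServerName:{SERVER}",
    f"/TargetDatabaseName:{DATABASE}",
], check=True)
```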
And OK, if you create a script and then publish straight after because you have no review process, you can be very confident that the database is still in the same state. And in a huge number of instances, this is exactly what will happen. But going back to the manual review step: the delay between reviewing and deploying could be 30 minutes, an hour, or even more. The longer the delay, the more likely it is that something has changed which renders the reviewed script invalid. So if a script is generated, and SSDT then publishes from the dacpac as opposed to running the script that was created and reviewed in the first place, what actually executes could be an entirely different script.
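One way to at least detect that window closing in on you, sketched below under the assumption that sqlpackage.exe is on the path: generate a DeployReport (a read-only XML summary of what a publish would do) at review time and again immediately before deployment, and refuse to deploy if the two differ, since a difference means the target has drifted since the review. All names are hypothetical, and hashing the report assumes its output is deterministic for an unchanged target.

```python
import hashlib
import subprocess

def deploy_report_hash(dacpac, server, database, out_path):
    """Generate a DeployReport (read-only: nothing is deployed) and return
    a hash of the resulting XML so two reports can be compared cheaply."""
    subprocess.run([
        "sqlpackage",
        "/Action:DeployReport",
        f"/SourceFile:{dacpac}",
        f"/TargetServerName:{server}",
        f"/TargetDatabaseName:{database}",
        f"/OutputPath:{out_path}",
    ], check=True)
    with open(out_path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# At review time:
before = deploy_report_hash("MyDatabase.dacpac", "PRODSQL01", "MyDatabase", "report_review.xml")

# ... review happens, half an hour passes ...

# Immediately before deployment:
after = deploy_report_hash("MyDatabase.dacpac", "PRODSQL01", "MyDatabase", "report_deploy.xml")

if before != after:
    raise SystemExit("Target has drifted since review; the reviewed script is stale.")
```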
And so I’ve been thinking: which is worse? Executing the reviewed script against a database that may have drifted since the script was generated, or letting SSDT re-generate and deploy from the dacpac, so that what runs was never reviewed at all?
The reason I prefer the “execute the script I created” option is that I’d rather deploy the script that has been reviewed than the potentially entropy-inducing alternative of deploying something that may be different from the script generated for review. What exacerbates this is that most people have a manual review step precisely because they have no confidence that an automated process will work; a process in which the script that is created is not the one that is executed only vindicates that lack of confidence. And if the deployment fails because something did change, then the feedback loop between the script being generated and the script being deployed needs to be shortened, to reduce the window in which changes can creep in.
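For completeness, here is what the “execute the script I created” option might look like, again as a sketch rather than a prescription: handing the reviewed artefact to sqlcmd instead of publishing from the dacpac. SSDT-generated publish scripts generally contain SQLCMD directives (:setvar and the like), which sqlcmd understands natively; the -I switch enables quoted identifiers, which these scripts typically expect, and -b makes a failed batch fail the whole run. Server and file names are placeholders.

```python
import subprocess

# Placeholders for illustration; use your own server, database and artefact path.
SERVER = "PRODSQL01"
DATABASE = "MyDatabase"
REVIEWED_SCRIPT = "deploy_script.sql"

# Run the exact script that was reviewed, rather than re-diffing the dacpac.
# -I turns on QUOTED_IDENTIFIER, -b exits with an error code if a batch
# fails, and -i points at the reviewed artefact.
subprocess.run([
    "sqlcmd",
    "-S", SERVER,
    "-d", DATABASE,
    "-I",
    "-b",
    "-i", REVIEWED_SCRIPT,
], check=True)
```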
Some people have disagreed with my approach because the drift will probably never happen, and there’s always the chance that this is overthinking the problem. I could of course be way too cautious, but database deployments are hard to do properly, especially in a live deployment model where the rollback process is pretty much non-existent.
The second reason I feel the generated script should be the one executed is that, if we consider the alternative for a moment, there’s no way we’d know the deployment was different until after it had run, and even then we’d have to rely on telemetry or failing operations within the system. In other words, there’s no way to know what you’ve deployed unless you deploy exactly the thing you reviewed and generated in the first step. It’s the proverbial Schrödinger’s Cat…
There’s a third, less significant reason why I prefer the “run the script I created” option, and that is time. It can take a long time to generate a diff script, and if you’re running the comparison twice to get (in 99% of cases) the exact same script, you slow down the release process and tie up the deployment box for no benefit.
Finally, in the purest form of Continuous Deployment, the process is build once, deploy many: in the case of Octopus Deploy, when you have deployed to one environment, you then “promote” to the next. You don’t re-build what was already compiled; you take the build artefacts and re-deploy them. Is it too much of a stretch, then, to treat the script that was reviewed and retained as a deployment artefact as the thing we actually deploy? I certainly don’t think it is: you need to have utter confidence that what you review and retain is what you deploy, not some in-memory object you have no way of auditing.
Ultimately, this is all theory about the process of Continuous Deployment for databases in a live environment, where you don’t really have the chance to roll back, but it comes from practical experience. And I’d be interested to hear other people’s views on what they do here.