Testing That Your Deployments Aren't Lousy
Aloha!
It’s my birthday, and I’m also on holiday, so what better way to spend it than writing a blog post.
In any Software Development Lifecycle methodology, be it Scrum or Waterfall, there is a beginning and an end to the process, no matter how frequent those iterations are. That is to say, each step relies on input from an upstream step before it can proceed. Generally it goes like this: the customer wants, BAs spec, Devs code, Testers, umm, test, it gets released, and in between that whole dev/test thingy is the non-trivial matter of deployment.
And when I think of DevOps, I think that deployment is very much part of it; the misty bridge that connects the code to the boxes running it. And that part, the deployment, is not a role (as in you don’t hire people to create the process to deploy something that someone else has written); it’s a task owned by developers. This is an idea that most people seem to struggle to accept, which seems odd to me.
Over the space of six nights I’ve worked hard at getting the static data in SSDT.Pokedex to deploy properly. To give you an idea of the task, there’s one table with ~360,000 rows of data. And but so it is important that I’ve done this, because SSDT just can’t cope with the amount of static data it has to import: SSDT and the SQL Server instance I am deploying to use up all the memory on my box, and then the deployment falls over with an “out of memory” exception.
So I’ve ruminated over the deployment process and what its impact is going to be. And the long nights of trial and error have paid off, because I’ve worked out how to build and deploy while needing only about 200MB of spare memory on the box. A huge improvement.
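One way to keep memory usage down when deploying a table that big is to generate the static-data script in fixed-size batches rather than as one enormous statement, so neither the tooling nor the target instance has to swallow all ~360,000 rows in one go. Here’s a minimal sketch of that idea in Python; the table name, columns and rows are made-up examples, not the actual SSDT.Pokedex schema:

```python
import io

BATCH_SIZE = 5000  # rows per INSERT statement; tune to the memory you can spare

def write_batched_inserts(rows, table, columns, out, batch_size=BATCH_SIZE):
    """Emit INSERT statements in fixed-size batches, each terminated with GO,
    so the target instance processes one modest batch at a time."""
    for start in range(0, len(rows), batch_size):
        batch = rows[start:start + batch_size]
        values = ",\n".join(
            "(" + ", ".join("N'{0}'".format(str(v).replace("'", "''")) for v in row) + ")"
            for row in batch
        )
        out.write("INSERT INTO {0} ({1})\nVALUES\n{2};\nGO\n".format(
            table, ", ".join(columns), values))

# Hypothetical example: three rows with a batch size of two
rows = [("1", "Bulbasaur"), ("2", "Ivysaur"), ("3", "Venusaur")]
buf = io.StringIO()
write_batched_inserts(rows, "dbo.Pokemon", ["Id", "Name"], buf, batch_size=2)
print(buf.getvalue().count("INSERT INTO"))  # 3 rows / batches of 2 -> 2 statements
```

The generated script can then be pulled into the SSDT post-deployment script in the usual way; the point is simply that batch size, not total row count, drives peak memory.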
OK, I say I’m developing on a “box”, but I really mean “slate”… I’m developing on a Surface Pro 3. “But that’s not a proper dev box!” I hear some of you cry. Maybe true, but consider this: what’s the spec of your build boxes? Or of the SQL Server instances you’re deploying to? Or, if you’re using something like Octopus, how beefy are your Tentacles?
I could just fire up the behemoth box sat under my desk: even at five years old it’s a Sandy Bridge Extreme CPU with 32GB of memory, stocked to the gills with SSDs. It is, in a word, beefy. But even if SSDT and the SQL Server instance did cope on my dev machine, chances are other people’s boxes won’t. And so when it comes to getting the static data into the database, I know the process’s strengths and weaknesses.
And but so even though I’ve worked hard on the static data in SSDT.Pokedex, the post-deploy and post-post-deploy script process is very much a straw house: I recognise it, and I have a plan to improve it.
The fact is, no story is “done” until it’s deployed to live, preferably in an automated manner. There’s no one better equipped to find the optimal way to deploy a change than the person writing the code themselves. And because we have environments we deploy to before live, we need to use those opportunities to check not only that the code works, but that the deployment process itself works. To that end, you need to deploy to dev, test and any other upstream environment in exactly the same way you deploy to prod. If you don’t, then the first time you’re deploying properly is in production. And that is where the dreaded “known unknowns” lurk.
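To make that concrete, here’s a sketch of the “same command in every environment” idea using SqlPackage, the tool SSDT projects publish with. The `/Action:Publish`, `/SourceFile`, `/TargetServerName`, `/TargetDatabaseName` and `/Profile` switches are standard SqlPackage publish parameters; the dacpac, profile, database and server names are made up for illustration, not this post’s real settings:

```python
# Sketch: one publish command shape for every environment; only the
# target server varies. All names below are hypothetical examples.

def publish_command(target_server: str) -> list:
    """Build the SqlPackage publish command used identically in
    dev, test and prod; only the target server differs."""
    return [
        "sqlpackage",
        "/Action:Publish",
        "/SourceFile:SSDT.Pokedex.dacpac",
        "/TargetServerName:{0}".format(target_server),
        "/TargetDatabaseName:Pokedex",
        "/Profile:Pokedex.publish.xml",
    ]

# Every environment gets the exact same command shape:
for server in ("DEVSQL01", "TESTSQL01", "PRODSQL01"):
    print(" ".join(publish_command(server)))
# To actually deploy: subprocess.run(publish_command(server), check=True)
```

Because the command is identical everywhere, a deployment that succeeds in dev and test has genuinely rehearsed the production deployment, not some hand-rolled approximation of it.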