I got a PR from Peter Schott regarding PoshSSDTBuildDeploy not quite working as expected. It turns out there were a few bugs in a couple of the functions, and I noticed another one myself, so there's been a relative flurry of activity over on GitHub. What bothered me though was that I was sure I had tests to cover these bugs. It turns out I do have tests covering the issue of spaces in folder names when passing a NuGet path; the bug came down to some weird string concatenation.
And so it got me thinking about one of the talks I went to at SQLBits, where Ed Elliott spoke about testing. One thing he said that really stuck with me was the idea of complexity versus confidence in testing. Basically, it's quite easy to set up a Pester test that executes a function (and passes in a NuGet folder path with spaces) in complete isolation, but in reality the result of that test gives me 0% confidence that it will work in Bamboo, VSTS, TeamCity, Octopus Deploy, etc. What I really need to do, rather than just run my tests and then publish, is set up a whole build and deploy process for the most popular pipeline tools, and have them as release phases prior to publishing.
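To illustrate the "easy but low-confidence" kind of test: a minimal Pester sketch, using a hypothetical helper (`Get-NugetPackagePath` is invented here for illustration; the real module's functions differ). It passes in isolation, yet says nothing about how the path behaves inside an actual build agent.

```powershell
# Hypothetical helper for illustration only - not a real PoshSSDTBuildDeploy function.
function Get-NugetPackagePath {
    param (
        [string] $NugetFolder,
        [string] $PackageName
    )
    # Join-Path handles spaces safely; naive string concatenation is where bugs creep in.
    Join-Path -Path $NugetFolder -ChildPath $PackageName
}

Describe "Get-NugetPackagePath" {
    It "handles a NuGet folder path containing spaces" {
        $result = Get-NugetPackagePath -NugetFolder 'C:\Program Files\nuget' `
                                       -PackageName 'Microsoft.Data.Tools.Msbuild'
        $result | Should -Be 'C:\Program Files\nuget\Microsoft.Data.Tools.Msbuild'
    }
}
```

The test is green, but it exercises the function on its own, not wired into Bamboo or VSTS, which is exactly the complexity-vs-confidence gap Ed was talking about.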
The thought of setting up a box with that assortment of tools, plus the SQL instances to deploy to, is not something I'm necessarily relishing, never mind setting each one up with a build/deploy process for a database using my module. But to be honest, the only way I can really say with any confidence that PoshSSDT will work in the context it's going to be used in is to get complex.
In other news, I'm currently having fun with an ADF pipeline not failing when an activity fails. I suspect it has everything to do with this activity calling the final activity in the pipeline “on completion”. That means I will have to create an extra activity set to run “on failed”, and change the original dependency to run “on success”. Ultimately the same activity will run; it's just the context that differs. If this does indeed cause the pipeline to be marked as failed when said activity fails, then I will like ADF a little bit less. I've struggled to reconcile how I feel about ADF. It has some decent features, in that you can get up and running pretty quickly, but the inability to test what it does has been a pain point. The solution is to make sure that the underlying process is well tested, but that comes back to the complexity vs. confidence quandary, in that you cannot be totally confident that something works if you are only testing in isolation. There's also the issue of storing your ADF pipelines in source control. Gavin Campbell's talk at SQLBits covers its shortcomings very well.
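The dependency rework above can be sketched as a pipeline JSON fragment. This is a hedged illustration, not my actual pipeline: the activity names (`DoTheWork`, `NotifyOnSuccess`, `NotifyOnFailure`) and types are invented, but the `dependsOn` / `dependencyConditions` structure is how ADF expresses “on success” and “on failed” branches.

```json
{
  "activities": [
    {
      "name": "DoTheWork",
      "type": "Copy"
    },
    {
      "name": "NotifyOnSuccess",
      "type": "WebActivity",
      "dependsOn": [
        { "activity": "DoTheWork", "dependencyConditions": [ "Succeeded" ] }
      ]
    },
    {
      "name": "NotifyOnFailure",
      "type": "WebActivity",
      "dependsOn": [
        { "activity": "DoTheWork", "dependencyConditions": [ "Failed" ] }
      ]
    }
  ]
}
```

With “Completed” as the condition the downstream activity runs regardless of outcome, which is what lets the pipeline report success even when `DoTheWork` fails; splitting into “Succeeded” and “Failed” branches is the duplication I'm grumbling about.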