2016-10

This is by design.

So at first I thought “put nothing in script modules”, because they’re pretty pointless, right? Anything you write will pertain to a step exclusively, so why would you need to share code across steps? Turns out quite a bit of code is shared across steps, particularly checking and validating parameters.
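
As a rough illustration of the kind of code that ends up being shared, a parameter-checking helper in a script module might look something like this (the function name and messages are hypothetical, not from any of my actual step templates):

```powershell
# Hypothetical validation helper that a script module could expose to many step templates.
function Test-StepParameter {
    param(
        [Parameter(Mandatory = $true)][string]$Name,
        [string]$Value
    )

    # Octopus hands parameters to scripts as strings, so an unset parameter
    # typically arrives as an empty string rather than $null.
    if ([string]::IsNullOrWhiteSpace($Value)) {
        throw "The step parameter '$Name' has not been set."
    }
}

# Usage inside a step template:
# Test-StepParameter -Name "DatabaseName" -Value $OctopusParameters["DatabaseName"]
```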

Then, as step templates got more sophisticated, they got harder to read, so I thought “put everything in script modules!” It improved the legibility of the step templates and allowed them to focus mainly on setting up the process, with the script modules doing the actual work.

There is, however, an issue with this: script modules are the “magic” of a step template. If you’re not aware that a step template requires a script module, how is that dependency communicated? You could mention in the step template that a script module is required, but this is still a manual process that not everyone remembers, or even buys into.

There is also another issue: I had to make a change to a step template that required a change to the script module. The step template in question was in heavy use, so careful planning was required to make sure that the change to the step template and the script module was not a breaking change. Whereas a step template has to be explicitly updated for a given step in a deployment pipeline, the same cannot be said for a script module: whatever the latest version of the script module is, that is what a user gets. So if you make a few changes to the step template in an iterative manner, you have to be sure that whenever anyone updates the step template, all the versions still match one another.

2016-08

posts. I realised I had made a mistake in that post and had updated it. But I also realised that the script itself is actually part of a PowerShell module. So, wanting to make things as easy as possible, and being the helpful guy that I am, I have converted that example into a standalone script, which I am posting here.

It’s worth stressing that the errors output to the console are fairly generic; if anything goes wrong, the log file will contain all of the detail.
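
The shape of that pattern is roughly the following, a minimal sketch in which the log path and the work being done are placeholders of my own:

```powershell
# Generic message to the console, full detail to the log file.
$logFile = Join-Path $env:TEMP "deployment.log"   # placeholder log location

try {
    # Placeholder for the real work the script does.
    Invoke-RealWork -ErrorAction Stop
}
catch {
    # Write the full exception detail to the log file...
    $_ | Out-String | Add-Content -Path $logFile
    # ...and keep the console message short and generic.
    Write-Error "The operation failed. See $logFile for details."
}
```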

2016-07

In my previous post, I shared a script that used the Octopus API to create a defect. The idea was to add it to a TeamCity build and chain that build to a deploy/test build step in TeamCity, so that an Octopus defect is raised if a test fails even though the deployment to the environment succeeded. You can read more about it here.

What makes this a challenge is that there is no way to have a chained build that runs if, and only if, a build has failed. So, as with Octopus, you have to use the TeamCity API. In this script I get the status of the last build that deployed/ran the tests, and if that build succeeded I do nothing. So yes, this chained build always has to run after the deploy/test phase.
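
The check against the TeamCity REST API looks roughly like this; the server URL, build configuration id and credentials are placeholders, and the locator syntax may differ slightly between TeamCity versions:

```powershell
# Get the status of the most recent run of the deploy/test build configuration.
$teamCityUrl = "http://teamcity.local"         # placeholder
$buildTypeId = "MyProject_DeployAndTest"       # placeholder
$credential  = Get-Credential                  # or a stored service account

$uri = "$teamCityUrl/httpAuth/app/rest/builds?locator=buildType:$buildTypeId,count:1"
$response = Invoke-RestMethod -Uri $uri -Credential $credential -Headers @{ Accept = "application/json" }

$lastBuild = $response.build | Select-Object -First 1

if ($lastBuild.status -eq "SUCCESS") {
    Write-Output "Last deploy/test build succeeded; nothing to do."
    return
}
```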

Where it gets interesting, though, is if the build failed. Here we raise a defect, but not before checking whether any previously raised defects are still unresolved, as only one Octopus defect can be unresolved at any one time.
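
In outline the defect handling looks something like this; the endpoint and field names are as I recall them from the Octopus API, and the server URL, API key and release id are placeholders:

```powershell
# Raise an Octopus defect for the release, unless one is already unresolved.
$octopusUrl = "http://octopus.local"           # placeholder
$apiKey     = "API-XXXXXXXXXXXX"               # placeholder
$releaseId  = "Releases-123"                   # placeholder
$headers    = @{ "X-Octopus-ApiKey" = $apiKey }

$defects = Invoke-RestMethod -Uri "$octopusUrl/api/releases/$releaseId/defects" -Headers $headers
$unresolved = $defects.Items | Where-Object { $_.Status -eq "Unresolved" }

if (-not $unresolved) {
    $body = @{ Description = "Automated defect: tests failed after a successful deployment." } | ConvertTo-Json
    Invoke-RestMethod -Uri "$octopusUrl/api/releases/$releaseId/defects" -Method Post -Headers $headers -Body $body
}
```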

2014-11

Automate SSAS Syncing Part 1: Create SSAS Database Role
Automate SSAS Syncing Part 2: Create SSAS Role And Add Member

properties to this list to expand the info that you want. I’ve often mused about moving a lot of the logic that I constantly put in these scripts into higher-tier functions in PowerShell, or even creating an SSAS library, but time is always at a premium…

set the aggregation designs on your cube partitions. I recently created another script that contains a “Fix” flag to set the aggregation design: run it without the flag to just get the info, or set the flag to fix the aggregation designs. I also added a “process” step that will run a Process Index on any partition whose aggregations are not fully processed.

The advantages of using this over the older version include the aforementioned processing and checks, but also that you don’t have to specify the cube, measure group and aggregation design names. The disadvantage is that it assumes you want the latest aggregation design applied and processed. Somewhere along the way there will probably be a script which combines all of this nicely!
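
A stripped-down sketch of that approach using AMO is below; the server and database names and the -Fix switch are placeholders, it simply assumes the last aggregation design in each measure group is the “latest” one, and the real script does rather more checking than this:

```powershell
# Report, and optionally fix, the aggregation design on each cube partition via AMO.
param(
    [string]$ServerName   = "localhost",       # placeholder
    [string]$DatabaseName = "MyCubeDatabase",  # placeholder
    [switch]$Fix
)

[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices")

$server = New-Object Microsoft.AnalysisServices.Server
$server.Connect($ServerName)
$database = $server.Databases.FindByName($DatabaseName)

foreach ($cube in $database.Cubes) {
    foreach ($measureGroup in $cube.MeasureGroups) {
        # Assume the last aggregation design in the measure group is the one we want applied.
        $latestDesign = $measureGroup.AggregationDesigns | Select-Object -Last 1
        if (-not $latestDesign) { continue }

        foreach ($partition in $measureGroup.Partitions) {
            if ($partition.AggregationDesignID -ne $latestDesign.ID) {
                Write-Output "$($cube.Name)/$($measureGroup.Name)/$($partition.Name) is not using $($latestDesign.Name)"
                if ($Fix) {
                    $partition.AggregationDesignID = $latestDesign.ID
                    $partition.Update()
                    # Rebuild the aggregations on the partition.
                    $partition.Process([Microsoft.AnalysisServices.ProcessType]::ProcessIndexes)
                }
            }
        }
    }
}

$server.Disconnect()
```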

2013-11

sqlpackage.exe to extract the dacpac. Check out my September Six of The Best for another solution provided by Gianluca Sartori.

2013-07

Anti Virus running on SQL file-types.

“Good one,” I thought to myself. And then I wondered whether any of the files that make up the cubes on our SSAS boxes were being scanned, or whether they were excluded. We have some big cubes in our system, and by big I mean TBs’ worth of cubes, some of them over 700GB. So I fired up System Center 2012 Endpoint Protection on one of the SSAS boxes and, sure enough, all files were being scanned. But should they be? This Microsoft KB article confirmed my suspicion that they can be exempt from scanning: How to choose antivirus software to run on computers that are running SQL Server.
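
For what it’s worth, on boxes where the Defender/Endpoint Protection PowerShell cmdlets are available, adding the exclusion can be scripted along these lines; the data directory below is a placeholder (use the DataDir configured on your SSAS instance), and on SCEP 2012 estates this may need to be done via policy instead:

```powershell
# Add the SSAS data directory to the anti-virus path exclusions.
$ssasDataDir = "D:\OLAP\Data"   # placeholder - use your instance's DataDir

Add-MpPreference -ExclusionPath $ssasDataDir
```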

2013-01

Extract DacPacs Using PowerShell Via sqlpackage.exe
Automate Database Builds Part Two: Extract Database Structure for Visual Studio Solution
Automate Database Build Part Three: Create a Database Publish Profile

Introduction

I’m a big fan of PowerShell, and an even bigger fan of writing PowerShell modules, which let me share my functions with the team and enable them to extend and augment those functions. In this article I am going to introduce extracting dacpacs through sqlpackage.exe, and show how to create a nested module containing a simple function that calls a bat file to extract the dacpac of a database via sqlpackage. The function will be aliased to reduce the number of keystrokes required to extract a dacpac.
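
As a rough sketch of where this ends up, the module function might look something like the following. The sqlpackage path, output folder and alias name are placeholders of my own, and for brevity this calls sqlpackage.exe directly rather than going through the bat file:

```powershell
# Export-Dacpac.psm1 - minimal sketch of a module function that extracts a dacpac.
function Export-Dacpac {
    param(
        [Parameter(Mandatory = $true)][string]$ServerName,
        [Parameter(Mandatory = $true)][string]$DatabaseName,
        [string]$TargetFolder = "C:\Dacpacs"   # placeholder output folder
    )

    # The sqlpackage.exe location varies with the SQL Server / DAC Framework version installed.
    $sqlPackage = "C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\sqlpackage.exe"
    $targetFile = Join-Path $TargetFolder "$DatabaseName.dacpac"

    & $sqlPackage /Action:Extract `
        /SourceServerName:$ServerName `
        /SourceDatabaseName:$DatabaseName `
        /TargetFile:$targetFile
}

# Alias to cut down on the keystrokes needed to extract a dacpac.
Set-Alias -Name exdac -Value Export-Dacpac

Export-ModuleMember -Function Export-Dacpac -Alias exdac
```

With the module imported, extracting a dacpac is then just `exdac -ServerName "localhost" -DatabaseName "AdventureWorks"`.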