When releasing a new version of our software there are typically cube changes that have to be deployed. We have only one SSAS solution, but we deploy it many times for reasons I won't go into here. Suffice it to say that cube deployment and processing is by far the biggest task of any release, due to the number of databases we deploy and the size of the cubes.

We have been able to automate much of the deployment thanks to PowerShell and MSBuild. It is a two-step process: first we compile the solution, then an MSBuild post-compile target updates the values in the Analysis Services deployment files. These values include the members of the roles defined in our solution, the names of the databases, the server they are deployed to (this last one can be very important, or not important at all; I will explain that later), whether we want to retain partitions (again, very important depending on the deployment type, more later) and so on. We use MSBuild because it works well for iteratively reading the files and updating values using XPath, and anyone can run it from the command line if they need to. A process that might take someone several hours manually completes in about 30 seconds through MSBuild, including compile time.
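To give a feel for that post-compile step, here is a minimal sketch of such a target using MSBuild's built-in XmlPoke task (available in MSBuild 4.0 and later). The file paths, XPath queries and values here are illustrative rather than our actual files, and the local-name() predicate is simply a way of sidestepping the XML namespaces that the deployment wizard writes into these files:

<Target Name="UpdateSsasDeploymentFiles" AfterTargets="Build">
  <!-- Point the deployment at the right server (illustrative path and value) -->
  <XmlPoke XmlInputPath="$(OutDir)ssas.deploymenttargets"
           Query="//*[local-name()='Server']"
           Value="olap01" />
  <!-- Keep the partitions that already exist in the target database -->
  <XmlPoke XmlInputPath="$(OutDir)ssas.deploymentoptions"
           Query="//*[local-name()='PartitionDeployment']"
           Value="RetainPartitions" />
</Target>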

The second part of the process is to deploy the databases. I use two PowerShell functions: one to execute Microsoft.AnalysisServices.Deployment.exe and another to execute the XMLA. In the PowerShell below you will see switches that define whether we deploy directly (i.e. bypass generating an XMLA file and run the deployment files straight against the SSAS instance) or generate an XMLA script so we can sanity-check the deployment before running it. The big advantage of deploying directly is that we have many partitions, generated when loading data, that are not present in the solution. If we choose to create a script, then even if the deploymentoptions file says to retain partitions, the XMLA script will not include those partitions. The only way to keep them is to deploy directly to a server where the database already exists. Remember earlier when I said that specifying the server can be important or not, depending on your choice of deployment? Well, in this case it is very important: you need to specify the server the database already exists on, or the wizard will simply create a new database. This value lives in the deploymenttargets file.
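For reference, a stripped-down deploymenttargets file looks something like this; the database and server names are made up, and I have left out the namespace attributes the wizard generates. The Server element is the one that matters for direct deployment:

<DeploymentTarget>
  <Database>CubeDb01</Database>
  <Server>olap01</Server>
  <ConnectionString>DataSource=olap01;Timeout=0</ConnectionString>
</DeploymentTarget>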

Another advantage of deploying the database directly is that if you have several cubes in the solution and a change affects only one of them, you only have to re-process that one cube as opposed to the whole solution. So with regard to our deployments, we can reduce the downtime required by selecting this option.

Creating a script is also a valid choice: you may want to check its content first, or you may wish to copy it out and run it on another server afterwards, and it is sometimes instructive to see what actually gets generated from the deployment files. In the PowerShell function I have included a choice of creating and deploying the script, or just creating it. This time, however, it is irrelevant which server you specify in the deploymenttargets file, because the PowerShell uses ADOMD.NET to connect to the SSAS instance and run the script, so we have to specify the instance as a parameter to the function. I suppose if you wanted to be really clever you could use PowerShell to read the server from the deploymenttargets file instead of using a parameter, but I took the easy option here (see the sketch below). Generally, I'll deploy the script when the changes to the cubes are drastic enough to require a full re-process of the database anyway.
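For what it's worth, that "really clever" option would only take a few lines. A sketch of what you might drop into publish-SsasDb, reusing its $sourceDb parameter and again using local-name() to avoid declaring the file's namespaces:

# Derive the deploymenttargets path from the .asdatabase path, then pull out the server name
$deploymenttargets = [System.IO.Path]::ChangeExtension($sourceDb, "deploymenttargets")
[xml]$targets = Get-Content $deploymenttargets
$TargetSsasSvr = $targets.SelectSingleNode("//*[local-name()='Server']").InnerText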


function publish-SsasDb
{
    [CmdletBinding()]
    param(
        [Parameter(Position=0,Mandatory=$true)]
        [string] $TargetSsasSvr,
        [Parameter(Position=1,Mandatory=$true)]
        [string] $sourceDb,
        [Parameter()]
        [switch] $runscript,
        [Parameter()]
        [switch] $createScript,
        [Parameter()]
        [switch] $deployScript)

    begin
    {
        <# When a function declares named blocks, all code must live inside one of them,
           so the path to the deployment wizard is set here rather than at function level #>
        $script:ASDeployWizard = "C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Binn\ManagementStudio\Microsoft.AnalysisServices.Deployment.exe"
    }
    process
    {
        if(!(Test-Path $sourceDb))
        {
            Write-Error -message "The path to [$sourceDb] is either not accessible or does not exist" -category InvalidArgument
            return
        }
        if(!($runscript -or $createScript -or $deployScript))
        {
            Write-Error -message "Specify one of -runscript, -createScript or -deployScript" -category InvalidArgument
            return
        }

        Write-Host "$(Get-Date): Starting"

        Trap {Write-Host "Failed to publish database ($_.Exception.Message)" -foregroundcolor Black -backgroundcolor Red; break}

        <# Execute the Microsoft.AnalysisServices.Deployment wizard to produce the xmla file #>
        $path = [System.IO.Path]::GetDirectoryName($sourceDb)
        $targetDatabase = [System.IO.Path]::GetFileNameWithoutExtension($sourceDb)
        $outputScript = [System.IO.Path]::ChangeExtension($sourceDb,"xmla")
        $deploymenttargets = [System.IO.Path]::ChangeExtension($sourceDb,"deploymenttargets")
        $logPath = [System.IO.Path]::ChangeExtension($sourceDb,"log")

        if($runscript -or $createScript)
        {
            <# /o writes the xmla script to a file instead of deploying #>
            $arguments = @("`"$sourceDb`"", "/s:`"$logPath`"", "/o:`"$outputScript`"")
        }
        if($deployScript)
        {
            <# no /o: the wizard deploys the deployment files directly to the server named in the deploymenttargets file #>
            $arguments = @("`"$sourceDb`"", "/s:`"$logPath`"")
            Write-Verbose "Arguments: $arguments"
        }
        Write-Verbose "$(Get-Date): Start deployment wizard from deployment script located at [$sourceDb]. See log file [$logPath] for more info."
        Start-Process -FilePath $script:ASDeployWizard -ArgumentList $arguments -Wait
        Write-Verbose "$(Get-Date): End deployment wizard from deployment script located at [$sourceDb]. See log file [$logPath] for more info."

        if($runscript -or $createScript)
        {
            if(!(Test-Path $outputScript))
            {
                Write-Error -message "Path [$outputScript] does not exist. Deployment script has not been generated as expected." -category InvalidOperation
                return
            }
        }
        if($runscript)
        {
            <# Execute the generated xmla file #>
            $xmla = [string]::join([Environment]::NewLine, (Get-Content $outputScript))
            Write-Verbose "$(Get-Date): Start Database deployment script [$outputScript]"
            $xmlaresult = Execute-Xmla $TargetSsasSvr $xmla
            Write-Verbose "Xmla Result: $xmlaresult"
            Write-Verbose "$(Get-Date): Finished Database deployment script [$outputScript]"
        }
        Write-Host "$(Get-Date): Finished"
    }
}
function Execute-Xmla
{
    [CmdletBinding()]
    param(
        [Parameter(Position=0,Mandatory=$true)]
        [string] $TargetSsasSvr,
        [Parameter(Position=1,Mandatory=$true)]
        [string] $xmla
    )
    begin
    {
        Add-Type -AssemblyName "Microsoft.AnalysisServices.AdomdClient, Version=11.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91"
        $connectionString = "Data Source=$TargetSsasSvr;Provider=MSOLAP.4;Integrated Security=SSPI;Impersonation Level=Impersonate;"
        $connection = New-Object Microsoft.AnalysisServices.AdomdClient.AdomdConnection($connectionString)
    }
    process
    {
        $connection.Open()
        $command = $connection.CreateCommand()
        $command.CommandTimeout = 20000
        $command.CommandType = [System.Data.CommandType]::Text
        $command.CommandText = $xmla
        <# ExecuteXmlReader returns the server's xmla response so the caller can inspect it #>
        $reader = $command.ExecuteXmlReader()
        if($reader.Read())
        {
            $return = $reader.ReadOuterXml()
        }
        return $return
    }
    end
    {
        $connection.Close()
        $connection.Dispose()
    }
}
New-Alias deploySsas publish-SsasDb
Export-ModuleMember -alias * -function publish-SsasDb

Note that this is set up for SQL Server 2012, as the deployment executable lives in the 110 Binn folder, but you can change the path to suit whatever version of SQL Server you are running. I have removed one thing from this function: the code that creates the loading partitions in a blank database, because the partitions, and what you split them on, will be different for you. If you don't have a method for managing partitions, there is a post on ssas-info about managing partitions with SSIS that is worth reading.
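If you deploy against more than one version of SQL Server, one option is to probe for the executable rather than hard-code the version folder. A sketch, assuming the standard install locations (110 = 2012, 120 = 2014; later versions may move things around):

# Try each known Binn location and take the newest version that is actually installed
$script:ASDeployWizard = "110", "120" |
    ForEach-Object { "C:\Program Files (x86)\Microsoft SQL Server\$_\Tools\Binn\ManagementStudio\Microsoft.AnalysisServices.Deployment.exe" } |
    Where-Object { Test-Path $_ } |
    Select-Object -Last 1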

So the commands will look like this:

# generate the xmla script and run it against olap01
publish-ssasdb "olap01" "C:\CubeDeploy\ssas\ssas.asdatabase" -verbose -runscript

# generate the xmla script only, for sanity checking
publish-ssasdb "olap01" "C:\CubeDeploy\ssas\ssas.asdatabase" -verbose -createscript

# deploy the deployment files directly, retaining existing partitions
publish-ssasdb "olap01" "C:\CubeDeploy\ssas\ssas.asdatabase" -verbose -deployscript

You can easily turn these functions into a script, or you can create a module and add them to it, or you may already have a module you use for Analysis Services. After running this it is just a case of re-processing all the cubes; in a later post I will talk about how I have automated the processing of cubes using PowerShell. PowerShell really is the most valuable tool you can use when managing Analysis Services. As a final rough sketch, here is what the module route looks like, assuming you save the functions above as SsasDeploy.psm1 (the file name and path are mine, pick your own):
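# Import the module and call the function via its exported alias
Import-Module C:\CubeDeploy\SsasDeploy.psm1
deploySsas "olap01" "C:\CubeDeploy\ssas\ssas.asdatabase" -verbose -runscript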