Not long ago we were experiencing growing pains with our cubes: we have over 20 SSAS databases, each with 3 MOLAP cubes, and we had aggregations applied to one of the 3 cubes. To quote Jen Stirrup, aggregations are a “data structure that contains a pre-calculated summary of data. The aggregation size is smaller than the fact data it is based on, so it is expected that this will be quicker to query. Thus, aggregation design is an important method of optimising performance.” (1)

On our Data Warehouse we use partitioned fact tables, and in turn we use partitioned measure groups in a direct 1:1 mapping. So we create a new partition, insert the data and run a ProcessUpdate on the dimensions. What we didn’t know was that a ProcessUpdate will drop aggregations unless you include the “Process affected objects” option, either through the UI:

[Image: processupdate.png]

or through the XMLA:


<Batch ProcessAffectedObjects="true" xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Parallel>
    <Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2" xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100" xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200" xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200" xmlns:ddl300="http://schemas.microsoft.com/analysisservices/2011/engine/300" xmlns:ddl300_300="http://schemas.microsoft.com/analysisservices/2011/engine/300/300">
      <Type>ProcessUpdate</Type>
      <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
    </Process>
  </Parallel>
</Batch>

What happens is that when a dimension is updated, the aggregations that depend on it become out of date. If you include the affected objects (in other words, the aggregations), the ProcessUpdate rebuilds them; otherwise it simply drops them. This caused us quite a few problems, because without our aggregations our cubes were hideously slow.

It was going to be some time before we could deploy a hotfix (running a ProcessAdd instead of a ProcessUpdate), so we needed a pragmatic solution to mitigate the performance impact in the meantime. We decided to turn on lazy aggregations at the partition level. Typically, when you run a full process or a ProcessUpdate (with affected objects), the data is loaded first (ProcessData) and then the aggregations and indexes are built (ProcessIndexes). Lazy aggregations allow partitions to be queried immediately after the data has been loaded: the aggregations are built by a separate background process (using far less CPU) while users are already querying the partition. So each time we lost the aggregations, the LazyAggregations processing mode would rebuild them in the background.
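For reference, the eventual hotfix was to run a ProcessAdd on the dimensions, which only inserts new members and so leaves the aggregations intact. A minimal XMLA sketch of such a command might look like the following; the database and dimension IDs here are hypothetical placeholders, not our real objects:

```xml
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Parallel>
    <Process>
      <Object>
        <DatabaseID>MyDatabase</DatabaseID>
        <DimensionID>MyDimension</DimensionID>
      </Object>
      <Type>ProcessAdd</Type>
    </Process>
  </Parallel>
</Batch>
```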

Through the UI, the fix would involve clicking on every partition individually, opening its properties, and changing its ProcessingMode.

[Image: noprocessinghere1.png]

It’s not possible to select multiple partitions together in Object Explorer and change their properties in one go, as you can see below; it really would be each and every partition individually. And as you can imagine, 20 MOLAP cubes, each with 3 measure groups of 100+ partitions, is a lot of partitions to enable, particularly when you consider that every time a ProcessUpdate was run on a dimension, the aggregations were lost again and we’d have to go back and pick up the new partitions as well!

[Image: noprocesshere.png]

So, enter PowerShell. The script below, which is part of a larger module I use for managing SSAS through PowerShell, connects to an instance of SSAS, matches by name the one cube we have aggregation designs on, and checks the processing mode of every partition in every measure group. If a partition is not set to LazyAggregations, the script enables it. Either way, it writes the result out to the console.

# Load the AMO assembly and connect to the SSAS instance
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices") | Out-Null
$svr = New-Object Microsoft.AnalysisServices.Server
$svr.Connect("sqla\direct1")

foreach ($db in $svr.Databases)
{
    # We only have aggregation designs on this one cube per database
    $cub = $db.Cubes.GetByName("AdventureWorks")

    foreach ($mg in $cub.MeasureGroups)
    {
        foreach ($p in $mg.Partitions)
        {
            if ($p.ProcessingMode -ne "LazyAggregations")
            {
                Write-Host $db.Name $mg.Name $p.Name "requires LazyAggregations to be applied as ProcessingMode is currently" $p.ProcessingMode -ForegroundColor DarkGreen -BackgroundColor White
                # Switch the partition to lazy aggregations and save the change to the server
                $p.ProcessingMode = [Microsoft.AnalysisServices.ProcessingMode]::LazyAggregations
                $p.Update()
                Write-Host $db.Name $mg.Name $p.ProcessingMode "has been applied to" $p.Name -ForegroundColor White -BackgroundColor DarkGreen
            }
            else
            {
                Write-Host $db.Name $mg.Name $p.Name "already has" $p.ProcessingMode "applied."
            }
        }
    }
}
$svr.Disconnect()
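
As a quick sanity check after a run, you can group the partitions by processing mode rather than walking through them one by one. This is a sketch along the same lines as the script above, with the same assumed server and cube names, and it naturally needs a live SSAS instance to run against:

```powershell
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices") | Out-Null
$svr = New-Object Microsoft.AnalysisServices.Server
$svr.Connect("sqla\direct1")

# Count partitions per ProcessingMode across the target cube in every database
$svr.Databases |
    ForEach-Object { $_.Cubes.FindByName("AdventureWorks") } |
    Where-Object { $_ -ne $null } |
    ForEach-Object { $_.MeasureGroups } |
    ForEach-Object { $_.Partitions } |
    Group-Object ProcessingMode |
    Select-Object Count, Name

$svr.Disconnect()
```

If everything has been applied, you should see a single LazyAggregations group; any Regular count means partitions were missed (or created since the last run).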

We ran this script several times before we were able to release a hotfix and it really saved us from a lot of complaints from our clients.

(1) http://www.jenstirrup.com/2009/04/optimising-aggregations-in-analysis.html