When I first started administering SSAS, HardMemoryLimit was a property I never really took much notice of. After all, there's LowMemoryLimit and TotalMemoryLimit, and once these are set, they are set in stone, right?
Wrong. When SSAS does not have enough memory, it will be allocated more. And this is fine as long as there is plenty of memory to go around. However, even once TotalMemoryLimit is breached, SSAS will continue to be allocated memory if it still requires more. This memory will be de-allocated eventually, but as we all know SQL is a ravenous, resource-intensive monster that takes and takes and takes. HardMemoryLimit is the threshold at which the instance stops being polite: once it is breached, SSAS aggressively terminates active user sessions to reduce memory usage. By default, this property is set to 0. Obviously, we all know that this represents some other calculated number: the midway point between TotalMemoryLimit and the total physical memory of the system.
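That "midway point" rule is easy to sketch out. The percentages below are the documented defaults, not values read from a live instance, and the function name is mine:

```python
def effective_hard_memory_limit(hard_limit_pct: float, total_limit_pct: float) -> float:
    """Resolve HardMemoryLimit the way SSAS does: a configured value of 0
    means 'midway between TotalMemoryLimit and total physical memory (100%)'."""
    if hard_limit_pct == 0:
        return (total_limit_pct + 100) / 2
    return hard_limit_pct

# With the default TotalMemoryLimit of 80%, the effective hard limit is 90%.
print(effective_hard_memory_limit(0, 80))   # 90.0

# An explicitly configured value is used as-is.
print(effective_hard_memory_limit(85, 80))  # 85
```

So unless you set it yourself, the kill-sessions threshold floats with whatever TotalMemoryLimit happens to be.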
This is important to take in: by default TotalMemoryLimit is 80% of the total memory of the server, which puts the effective HardMemoryLimit at 90%. So broadly speaking, SSAS can quite easily take 90% of the box before severe action is taken. But these numbers don't take into account factors like these:
- Lousy memory allocation. Consider this post on dbareactions: with only 16GB of memory on a new server, the default limits would leave less than 3GB of RAM for everything else on the box. Which might be OK, but most likely not really.
- Multiple instances/services running on one box. Imagine a server running three separate instances of SSAS, supposedly for security and for logical separation of data. In this scenario, the low and total memory limits need to be cut to roughly a third of their defaults (say from 65% and 80% down to 20% and 25%). But if HardMemoryLimit is not altered, an instance can still take a large amount of the memory before any sessions are killed, leaving the other instances to suffer.
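The three-instance scenario above can be made concrete with the same midpoint rule. This is a minimal sketch; the even three-way split and the explicit 30% hard limit are illustrative assumptions, not recommendations:

```python
def resolved_limits(low_pct: float, total_pct: float, hard_pct: float = 0):
    """Return (low, total, hard) limits as percentages of physical RAM.
    A HardMemoryLimit of 0 resolves to midway between TotalMemoryLimit and 100%."""
    hard = (total_pct + 100) / 2 if hard_pct == 0 else hard_pct
    return (low_pct, total_pct, hard)

# Defaults on a single-instance box:
print(resolved_limits(65, 80))      # (65, 80, 90.0)

# Three instances, low/total limits cut to roughly a third, HardMemoryLimit untouched:
print(resolved_limits(20, 25))      # (20, 25, 62.5)

# One instance can still grab 62.5% of the box before sessions are killed,
# so the hard limit needs reducing explicitly alongside the other two:
print(resolved_limits(20, 25, 30))  # (20, 25, 30)
```

The middle case is the trap: shrinking TotalMemoryLimit pulls the implicit hard limit down, but nowhere near far enough for three instances sharing one box.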