Configuring Available Memory for SQL Server Reporting Services
This post has now moved to my new site. Why don’t you check it out?
(If anyone actually gets the reference: kudos. If not, then it’s just that you’re not a Soundgarden fan)
So, there's something that has been troubling me about deploying to databases with SSDT for some time. Actually, it's not an SSDT-exclusive problem per se; it's more of a database/release management process problem.
What it pertains to is that most people like to create the deployment script for review prior to deploying to production. This makes sense if you are new to the Continuous Deployment process, as sometimes SSDT creates scripts that are not quite what you expected. You may skip the review because you feel rather more confident in your deployments, or because the database is simple enough to deploy without one. But generating the script as a deployment artefact is still a good idea, because auditing.
At any rate, the script is generated and maybe reviewed… so then what? In SSDT there is no way to create and deploy a script in one step; they are two distinct steps. And even if they were one step, it would still not resolve the issue that troubles me. So what is this issue?
The issue is that by creating a script and then running the deploy, you cannot be sure that the database is in the exact same state it was in when the script was generated. If you don't already know, SSDT runs a deploy entirely in memory, so as mentioned no script is created: you have to explicitly create the script as part of the process. Or, if you have already created one, you have to re-create it.
Unit testing can sometimes throw up unexpected issues that are entirely concerned with the unit tests and not the application under test. Recently I've been having issues with DateTime formats affecting the results of unit tests: the format of the DateTime was different on the build server from what it was on my local machine. The stored procedure could not be altered (because reasons*).
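The stored procedure itself isn't shown in this excerpt, so here is a minimal sketch of the underlying gotcha (values hypothetical): the same date literal parses differently, or fails outright, depending on the server's language setting, whereas an ISO 8601 literal is unambiguous.

-- Under a British language setting, dd/mm/yyyy parses fine...
SET LANGUAGE british;
SELECT CONVERT(DATETIME, '13/07/2015');

-- ...but under us_english the same literal is read as mm/dd/yyyy and errors.
SET LANGUAGE us_english;
-- SELECT CONVERT(DATETIME, '13/07/2015'); -- out-of-range value error

-- An ISO 8601 literal is interpreted identically regardless of language:
SELECT CONVERT(DATETIME, '2015-07-13T00:00:00');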
Hello! I've been working on some pre-deployment scripts for SQL Server using PowerShell, and I chose to use Invoke-Sqlcmd as opposed to creating a SQL connection*; however, the first time I ran the script on my local box to test it, it seemed to hang. I thought it might be my local instance not running, but that was running fine. So I created the simplest command possible and tried it:

Invoke-Sqlcmd -Query "SELECT GETDATE() AS TimeOfQuery;" -ServerInstance "."
Quick note on NULLs: NULL means "unknown value", not "0". So adding a value to NULL results in NULL:

1 + NULL = NULL

And because NULL means "I don't know", two NULLs do not equate to the same value; they are still two indeterminate values, so NULL = NULL does not evaluate to TRUE. You can work around NULLs in T-SQL by using COALESCE.

DECLARE @i VARCHAR(8) = NULL
DECLARE @j VARCHAR(8) = NULL
SELECT @i -- NULL
SELECT @i + 'bob' -- NULL
SELECT COALESCE(@i, 'bob') -- 'bob'
IF @i = @j SELECT 'yes' ELSE SELECT 'no' -- 'no'
I’m going a bit meta in this blog post; I’m talking about blogs and blogging.
The MSDN page for SQLPackage is great for all the information you could possibly need to use SQLPackage via the command line. And there are a lot of options, especially the Properties: although these can be set in a publish.xml file, you can override whatever is specified within the publish file by specifying the property name and setting the value. This is especially useful when you don't want to maintain many publish.xml files.
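For example, a property set in the publish profile can be overridden on the command line with the /p: switch (a sketch; the file names are hypothetical):

SqlPackage.exe /Action:Publish /SourceFile:"MyDb.dacpac" /Profile:"MyDb.publish.xml" /p:BlockOnPossibleDataLoss=False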
The subject of my last post was to do with returning DBCC CHECKDB results from a temp table, which was populated by running DBCC DBINFO against each database hosted on the SQL instance. I shared a query that returned any databases that had not had a CHECKDB run against them for over 7 days. The query is below:
SELECT dbi.Value
FROM #db dbi
WHERE dbi.Field LIKE '%lastknow%'
AND (CASE WHEN ISDATE(dbi.Value) = 1
          THEN CAST(dbi.Value AS DATETIME2(3))
     END) < DATEADD(DAY, -7, GETDATE())
That CASE expression stands out quite a bit, doesn't it? Let's explore why this query needs it. Begin by creating a temp table that contains a sample of the data, as sketched below.
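Something like this (a sketch: DBCC DBINFO WITH TABLERESULTS returns Field/Value pairs, so the Value column holds dates, numbers and strings alike; the sample values here are made up):

CREATE TABLE #db
(
Field VARCHAR(128),
Value VARCHAR(256)
)
INSERT INTO #db (Field, Value)
VALUES ('dbi_dbccLastKnownGood', '2015-06-01 09:15:04.000'),
       ('dbi_version', '782'),
       ('dbi_dbname', 'IX')

The likely reason the query needs the CASE: SQL Server is free to evaluate the CAST before the LIKE filter, and CAST('IX' AS DATETIME2(3)) would throw a conversion error; guarding the cast with CASE WHEN ISDATE(...) = 1 prevents that.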
Hello!
Recently I needed to apply data compression to a particularly large table. One of the main reasons for applying compression was that the database was extremely low on space, in both the data and the log files. To make matters worse, the data and log files were nowhere near big enough to accommodate compressing the entire table in one go. If the table had been partitioned, then I could have done one partition at a time and all my problems would have gone away. No such luck.
So I had to think quite outside of the box to compress the data. The solution I came up with was to create a new, empty table, copy 50 rows at a time (that was the largest batch I could move at any one time; yes, I really was THAT low on space), and then delete the 50 rows copied. But instead of COPY/DELETE, which would seem the logical way round, I ran the DELETE first and then ran the INSERT from its output. This way I could be certain that the same rows that were deleted were copied into the new table, and it meant I was only scanning the original table once.
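A sketch of one such batch, with hypothetical table and column names; the OUTPUT clause guarantees that the rows landing in the new table are exactly the rows just deleted, with only one scan of the source:

DELETE TOP (50)
FROM dbo.BigTable
OUTPUT deleted.Id, deleted.Payload
INTO dbo.BigTable_Compressed (Id, Payload)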
I had to build further checks into the script: the database was set to the SIMPLE recovery model, but the log file would still grow because of ETL processes and the like running on other parts of the database, and so I needed to check that I wasn't saturating the log file with each delete. The logic here will work for the SIMPLE, BULK_LOGGED and FULL recovery models.
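For the log check itself, something along these lines works (a sketch; the threshold you pause at depends on your log sizing):

-- Instance-wide view of log usage per database:
DBCC SQLPERF (LOGSPACE)

-- Or, for the current database (SQL Server 2012 and later):
SELECT used_log_space_in_percent
FROM sys.dm_db_log_space_usage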
SET NOCOUNT ON
DECLARE @SQL NVARCHAR(MAX) = ''
DECLARE @EXEC NVARCHAR(MAX) = ''
DECLARE @RUN BIT = 0
DECLARE @newdatalocation NVARCHAR(256) = 'L:\temptempdb'
DECLARE @newLogLocation NVARCHAR(256) = 'M:\tempdb_log'
DECLARE @fileName NVARCHAR(64)
DECLARE @fileLocation NVARCHAR(256)
DECLARE cur CURSOR FOR
SELECT f.NAME, f.filename
FROM sys.sysfiles f
OPEN cur
FETCH NEXT FROM cur INTO @fileName, @fileLocation
PRINT @fileName
PRINT @fileLocation
WHILE @@FETCH_STATUS = 0
BEGIN
SELECT @SQL = 'ALTER DATABASE ' + DB_NAME() + ' MODIFY FILE ( NAME = ' + f.
Disclaimer: I'm not saying that these three things are the only things you need to know about missing index hints in execution plans; I'm sure whole chapters of SQL books could be dedicated to the topic! And I'm not advocating adding new indexes to a database indiscriminately.
Hello!
Missing index hints in query plans are a potentially quick and fantastic way of speeding up a slow-running query. However, there are a few things that are vital to check before adding an index recommended via a missing index hint. Let's set up a test database to give some working examples.
CREATE DATABASE [IX]
GO
ALTER DATABASE [IX] SET RECOVERY SIMPLE
GO
USE IX
GO
CREATE TABLE Me_Heap
(
Me_Heap_ID BIGINT IDENTITY (1,1),
FK_Id_BIG INT,
DateChange DATETIME2 DEFAULT (CURRENT_TIMESTAMP) NOT NULL,
DataCol NCHAR (8) DEFAULT ( 'IX' )
)
GO
BEGIN TRANSACTION
DECLARE @i INT = 1
WHILE @i <= 30000
BEGIN
INSERT INTO Me_Heap
( FK_Id_BIG )
VALUES ( @i % 10 )
SET @i = @i + 1
END
COMMIT TRANSACTION
CHECKPOINT
GO 40
CREATE TABLE Me_Heap_2
(
Me_Heap_2_ID INT IDENTITY (1,1),
A_N_Other_Val INT,
DateChange DATETIME2 DEFAULT (CURRENT_TIMESTAMP) NOT NULL,
DataCol NCHAR (8) DEFAULT ( 'IX' )
)
BEGIN TRANSACTION
DECLARE @i INT = 1
WHILE @i <= 30
BEGIN
INSERT INTO Me_Heap_2
( A_N_Other_Val )
VALUES ( @i % 10 )
SET @i = @i + 1
END
COMMIT TRANSACTION
CHECKPOINT
GO
You will notice that there are no indexes, and this is intentional.
I was wondering if there was a way to script out Database Mail so that the configuration was identical across multiple SQL Server instances. Unlike most other objects in SQL Server, there's no obvious way to do this (typically I'd expect to see the option in the SSMS UI when right-clicking an object).
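One route is the documented Database Mail stored procedures in msdb, which can be saved into a script and run on each instance. A minimal sketch (the account, profile and SMTP names are hypothetical):

EXEC msdb.dbo.sysmail_add_account_sp
    @account_name = 'DBA Account',
    @email_address = 'dba@example.com',
    @mailserver_name = 'smtp.example.com'

EXEC msdb.dbo.sysmail_add_profile_sp
    @profile_name = 'DBA Profile'

EXEC msdb.dbo.sysmail_add_profileaccount_sp
    @profile_name = 'DBA Profile',
    @account_name = 'DBA Account',
    @sequence_number = 1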
I've been working hard on my backup/restore skills lately, getting to grips with the fundamentals, making sure that I understand backup/restore best practices, and running examples so I'm ready in the event of a disaster. It's a cliché to say that you don't want to be learning this in a real-life disaster when everyone is huddled around your desk, but it's true! Tail log backups, a type of backup that backs up the log even if the database is offline or damaged, are something you may need to run in some scenarios.
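For reference, a sketch of a tail-log backup (database name and path hypothetical): NO_TRUNCATE allows the log to be backed up even when the database is damaged, and NORECOVERY leaves the database in a restoring state, ready for the restore sequence.

BACKUP LOG [MyDb]
TO DISK = N'D:\Backups\MyDb_tail.trn'
WITH NO_TRUNCATE, NORECOVERY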
I received an email from the TechNet Evaluation Centre this morning stating that the CTP for SQL Server 2016 (v2.2) has been replaced as of yesterday. Apparently the previous release caused downtime with AlwaysOn Availability Groups during rolling upgrades. Though I have SQL Server 2016 installed on an Azure box, I've not played around with AlwaysOn in 2016. An upgrade is recommended for any users already running SQL Server 2016 CTP 2.2.
This has been done many times before by many other people, but I set this challenge to a friend of mine who was interested in programming, and then realised that I had no solution. So I spent the train journey home writing my effort below. Like a lot of things in computer programming, there’s more than one correct way, but many more incorrect ways! I like this way because it’s quite small and readable.
Ever get tired of the ugly-looking tabs at the top of SSMS, full of impertinent information? I was recently told of a method to tidy up the info displayed on the tabs, making them more legible: go to Tools > Options, then navigate to Text Editor > Editor Tab and Status Bar. The options underneath "Tab Text" determine what is displayed on the tab; here you can set each one on or off.
I was catching up on some blogs yesterday, and one I really liked was this post on Spaghetti DBA. The author's main complaint was that he had grown disillusioned with Connect and the way in which so many issues are dealt with (i.e. closed as "Won't Fix"). All the author wanted was better feedback from the engineers when responding to issues. I'm adding my voice to this request, because in the past year I have raised a couple of issues myself, only to be disappointed with the feedback:
Hello! And Happy New Year!
I've spent much of the past month off work looking after my daughter whilst my wife is at work. It seems that while this time of year is quiet for me at the office, in the NHS it's the busiest period of the year. So it has been great to spend time with Phoebe at home, which has resembled a building site since the end of October. Indeed, as I work from home I have had to move the computer five different times whilst work was completed.

During that time I've learnt more things about plumbing than I've ever wanted to know, and surprised myself by keeping a remarkably cool head when I noticed water leaking out of the ceiling into our living room (from the room whose radiator I had just removed successfully (I thought) and whose pipes I had capped). And here is some advice which is as unrelated to technology as you'll ever read on this site, but invaluable nonetheless: try not to reuse caps to cap off radiator pipes, as you have to turn them so tight that they tend to break up when you try to use them again. Which is exactly what I had done. I thought they were screwed on well enough until I turned the heating on and water got flowing around the system, which was when the water started to leak out of the busted cap.

Fortunately for me no damage was done, and I was able to drain the entire heating system, which unfortunately coincided with us living without heating during the coldest days of 2014 until the plastering was done. It's all part of us paying our dues until the house is finished. Currently we are without a shower/bath, though mercifully we are not far from friends who are kind enough to let us use their bathroom.
Been working from home lately, and this morning I needed to change my password in an RDP session inside another RDP session. So I needed to find a way to press CTRL+ALT+DELETE without the first session receiving it. I had hoped that the control panel might help, but sadly not. One suggestion that worked for me is to use the on-screen keyboard (OSK, osk.exe). But don't press CTRL+ALT+DEL on the OSK exclusively: hold CTRL+ALT on the physical keyboard and click DEL on the OSK.
When I first started administering SSAS, the HardMemoryLimit was a property on SSAS instances that I never really took much notice of. After all, there's the LowMemoryLimit and the TotalMemoryLimit, and once these are set, they are set in stone, right?
Right?
If you're using, or have at least read up on, Clustered Columnstore Indexes, then you'll know that the data is stored in segments. You'll also know that there are 4 states the segments can be in at any one time:
A real quickie this evening, and it's something I discovered yesterday the hard way: using NOLOCK table hints can cause blocking. Yep, you read that right. Given that the table hint is called NOLOCK, it's counterintuitive to what you'd instinctively think, but it's a fact, it's by design, and there's nothing wrong with it.
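The mechanism: even a NOLOCK read takes a schema stability (Sch-S) lock, and Sch-S is incompatible with the schema modification (Sch-M) lock taken by DDL. A quick two-session sketch (table name hypothetical):

-- Session 1: take a Sch-M lock and hold it.
BEGIN TRANSACTION
ALTER TABLE dbo.SomeTable ADD SomeNewCol INT NULL
-- (leave the transaction open)

-- Session 2: blocks despite NOLOCK, waiting on the Sch-M lock above.
SELECT COUNT(*) FROM dbo.SomeTable WITH (NOLOCK)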
Before I get into what exactly it is that I have paid in full, let's have Eric B and Rakim drop a beat for you this morning: [youtube https://www.youtube.com/watch?v=E7t8eoA_1jQ&w=420&h=315] So, that's it then: my student debt is paid off! A mere 7 years after leaving university, I have fulfilled my side of the deal. I was going to write a large piece about the state of education today, but I think I'll just put a link to a sobering news story on the BBC website about the state of the student loan system in the UK.
To paraphrase a saying: learning is priceless, but education must be cheap. And if you're like me, then your training budget is usually whatever you find down the back of the couch!
I’ve been going through the process of upgrading from SQL Server 2012 to SQL Server 2014 and I’ve hit a bug when running snapshots. Any snapshot that was in place from SQL Server 2012 cannot be dropped in SQL Server 2014. There’s a Connect article already open.
Today I needed to update our cube aggregation designs using the Usage-Based Optimisation Wizard, but I got stuck on an error I'd not seen before: "Estimated Count or Partition Count must be specified for cube objects that require counting." OK, not too helpful. I highlighted each dimension and clicked the "Count" button for each one, but that still did not fix it. I then looked a little closer and noticed that one of the dimensions had a red squiggle underneath it.
xp_regread is an undocumented stored procedure. It enables you to read the literal registry path that you specify via T-SQL.

EXEC [master].[dbo].[xp_regread]
    @rootkey = 'HKEY_LOCAL_MACHINE',
    @key = 'Software\Microsoft\MSSQLServer\Setup',
    @value_name = 'SQLPath'

You can use the OUTPUT parameter with xp_regread to return the value back.

DECLARE @outputValue NVARCHAR(128)
EXEC [master].[dbo].[xp_regread]
    @rootkey = 'HKEY_LOCAL_MACHINE',
    @key = 'Software\Microsoft\MSSQLServer\Setup',
    @value_name = 'SQLPath',
    @value = @outputValue OUTPUT
SELECT @outputValue

You can check who has access to xp_regread by running the following query:

SELECT OBJECT_NAME(major_id) AS [Extended Stored Procedure],
       USER_NAME(grantee_principal_id) AS [User]
FROM sys.
Over the weekend a SQL Server went live in one of our environments, and there's a permission script I needed to run on it this morning. This script I've used a fair few times on different servers and have never had a problem with it. However, today it failed with the following error message:
“Implicit conversion of varchar value to varchar cannot be performed because the collation of the value is unresolved due to a collation conflict between “Latin1_General_CI_AS” and “SQL_Latin1_General_CP1_CI_AS” in add operator.”
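The usual fix is to make the collation explicit on one side of the offending expression; a sketch with hypothetical tables and columns:

SELECT a.Name + ' ' + b.Name COLLATE DATABASE_DEFAULT AS Combined
FROM dbo.TableA AS a
JOIN dbo.TableB AS b ON a.Id = b.Id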
Today I was writing a script that would go through all the databases on a server and add a user as a member of the db_datareader and db_denydatawriter roles. It's a script that I'll probably need again, and so I discovered the sp_MSforeachdb stored procedure. This undocumented and clearly powerful sproc needs to be respected; however, it's certainly useful for batching a lot of commands quickly.

EXEC master.sys.sp_MSforeachdb 'SELECT ''?'''
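A sketch of how such a script might look (the user/login name is hypothetical; sp_MSforeachdb substitutes ? with each database name):

EXEC master.sys.sp_MSforeachdb
'USE [?];
IF DATABASE_PRINCIPAL_ID(''MyUser'') IS NULL
    CREATE USER [MyUser] FOR LOGIN [MyUser];
ALTER ROLE db_datareader ADD MEMBER [MyUser];
ALTER ROLE db_denydatawriter ADD MEMBER [MyUser];'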
Today I am going to try to write a blog post in 10 minutes, based on an issue that occurred yesterday.
We use SQL Server Enterprise, which gives us access to database snapshots, which we use during releases. Recently we added a FILESTREAM filegroup to one of our databases. We did not know this at the time, but you cannot take a snapshot of a database that has a FILESTREAM filegroup… at least, that is what we thought. It turns out all you need to do is not reference the FILESTREAM file when you are taking the snapshot. An example for you to try is below.
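A sketch along those lines (database, logical file and path names hypothetical): list only the rowset data files and omit the FILESTREAM file entirely.

CREATE DATABASE MyDb_Snapshot
ON ( NAME = MyDb_Data, FILENAME = 'D:\Snapshots\MyDb_Data.ss' )
AS SNAPSHOT OF MyDb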
Yesterday evening there was a very special SQL Supper hosted at the Microsoft offices in London Victoria. It was an open session with some of the Microsoft Data Platform group: Mark Souza, Nigel Ellis, Conor Cunningham, Hatay Tuna and Ewan Fairweather. So I made my way by foot from the offices here in Covent Garden to London Victoria. Although February in London is rather chilly, with a brisk walk you can make it in half an hour, plus you pass several landmarks such as the piazza at Covent Garden, Trafalgar Square (and its controversial fourth plinth sculpture, the blue Hahn/Cock), the Mall, Buckingham Palace and finally Westminster Cathedral.
I'm on holiday now until the beginning of January, getting some well-earned rest. That is, until I remember I have a 4-month-old daughter who is currently unwell and wishes to vocalise her displeasure at this rather than get some sleep.
With Service Packs and Cumulative Updates rapidly released for SQL Server, it can be hard to keep up with which version you have installed where. It's easy enough to find out which version of SQL Server you have: just run a little T-SQL and you can find all the info you need. But what about SSAS?
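For the database engine, a few SERVERPROPERTY calls cover it (a sketch):

SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('ProductLevel') AS ProductLevel,
       SERVERPROPERTY('Edition') AS Edition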
So, you've got your brand new Windows Server 2012 box configured: all the power-saving methods are switched off through the UI, and all the BIOS power-saving features are turned off, powering your new box to the best it can get. There's only one problem: you keep losing network connectivity! Why is this? We were recently working on a new cluster, and after doing the customary checks to ensure all power-saving features were switched off, I spoke to one of the sysadmins to verify that everything had been done.