Configuring Available Memory for SQL Server Reporting Services
This post has now moved to my new site. Why don’t you check it out?
SSDT is great for many things, but one thing it's not great at is giving the user control over data and log file creation. It uses the predefined variables DefaultDataPath and DefaultLogPath. These variables are system-defined and cannot be overridden; instead SSDT uses the default locations that are set on the instance we are deploying to. Humph. And I'd wager that if these are null then it will just use the locations of the system databases.
Today Microsoft have released Service Pack 2 for SQL Server 2014. In addition to some new fixes, SQL Server 2014 SP2 includes the hotfixes that were included in SQL Server 2014 SP1 CU1 to SQL Server 2014 SP1 CU7. So this is a new branch of releases, and we will see cumulative updates separately for each of the branches of SQL Server 2014 (RTM, SP1 and SP2). In addition to the CU fixes, there are some notable improvements in this release: Showplan XML will now include any trace flags in effect, actual rows read, per-operator performance metrics, and more details on spills.
Unit testing can sometimes throw up unexpected issues that are entirely concerned with the unit tests and not the application under test. Recently I’ve been having issues with DateTime formats affecting the results of the unit tests: the format of the DateTime was different on the build server than it was on my local machine. The stored procedure could not be altered (because reasons*).
Hello! Thanks to everyone who turned up at last night's SQL Supper to see me talk about preventing plan regressions during database testing. The testing framework I demoed was written by me, but the decision whether to make it open source or not is not mine. So the best I can do today is share the slides, and hopefully the framework will go online and be improved upon by the community.
When restoring a database you can specify one of three recovery options: RECOVERY, NORECOVERY and STANDBY. Today I'm going to touch briefly on the RECOVERY and NORECOVERY options before delving deeper into STANDBY mode, as it has a few features that separate it from the other two options that are worth explaining.
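For illustration, here is what a STANDBY restore looks like. The database and file paths are made up for the example; the undo file is where SQL Server keeps uncommitted transactions so the database stays readable between log restores.

```sql
-- Restore a full backup, leaving the database read-only between restores.
RESTORE DATABASE [SalesDB]
FROM DISK = N'D:\Backups\SalesDB_Full.bak'
WITH STANDBY = N'D:\Backups\SalesDB_Undo.dat';

-- Subsequent log restores can also specify STANDBY,
-- so the database remains readable after each one:
RESTORE LOG [SalesDB]
FROM DISK = N'D:\Backups\SalesDB_Log1.trn'
WITH STANDBY = N'D:\Backups\SalesDB_Undo.dat';
```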
I recently had a comment posted on one of the first articles I wrote, way back in 2012, about adding a column that details the used space as a percentage. So without any further ado, here is an updated version of the script originally blogged about here. select sf.FILEID AS [File ID], [File Size in MB] = convert(decimal(12,2),round(sf.size/128.000,2)), [Max Size in MB] = convert(decimal(12,2),round(sf.maxsize/128.000,2)), [Space Used in MB] = convert(decimal(12,2),round(fileproperty(sf.name,'SpaceUsed')/128.000,2)), [Free Space in MB] = convert(decimal(12,2),round((sf.
Hello! Last week I presented for the first time at a SQL User Group (SQL Supper). There was a lot of content, and plenty of demos, and a few people came up to me afterwards and asked for the scripts. In case you were too shy to ask, or just want a simple database that uses partitioning and Clustered Columnstore Indexes, the scripts attached will provide you with such a thing.
Recently I needed to apply data compression to a particularly large table. One of the main reasons for applying compression was because the database was extremely low on space, in both the data and the log files. To make matters worse, the data and log files were nowhere near big enough to accommodate compressing the entire table in one go. If the table was partitioned then I could have done one partition at a time and all my problems would have gone away. No such luck.
So I had to think quite outside the box to compress the data. The solution I came up with was to create a new empty table, and copy 50 rows at a time (that was the most I could move at any one time; yes, I really was THAT low on space) and then delete the 50 rows copied. But instead of COPY/DELETE, which would seem the logical order, I ran the DELETE first and then ran an INSERT from its output. This way I could be certain that the same rows that were deleted were copied into the new table, and it meant I was only scanning the original table the once.
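The delete-then-insert trick above can be done in a single statement with the OUTPUT clause, which guarantees the inserted rows are exactly the deleted ones. The table and column names below are illustrative, not the originals:

```sql
-- Delete a small batch from the source and, in the same statement,
-- route the deleted rows into the new (compressed) table.
DELETE TOP (50)
FROM dbo.BigTable
OUTPUT deleted.Id, deleted.Col1, deleted.Col2
INTO dbo.BigTable_Compressed (Id, Col1, Col2);
```

Wrapped in a loop (checking @@ROWCOUNT until it hits zero), this moves the whole table in batches small enough to keep the log file happy.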
I had to build further checks into the script: the database was in SIMPLE recovery, but the log file would still grow because of ETL processes etc. running on other parts of the db, and so I needed to check to make sure that I wasn't saturating the log file with each delete. The logic here will work for the SIMPLE, BULK_LOGGED and FULL recovery models.
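A quick way to check log pressure between batches, regardless of recovery model, is a sketch like this:

```sql
-- Classic instance-wide view of log usage per database:
DBCC SQLPERF(LOGSPACE);

-- Or, for the current database only (SQL Server 2012 onwards):
SELECT total_log_size_in_bytes / 1048576.0 AS [Log Size MB],
       used_log_space_in_bytes / 1048576.0 AS [Used MB],
       used_log_space_in_percent           AS [Used %]
FROM sys.dm_db_log_space_usage;
```

If the used percentage climbs past a threshold you choose, pause the loop (or issue a log backup in FULL recovery) before carrying on.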
This is something of an epic post, so grab a tea and get ready…I’m probably showing my age by quoting an old Ren and Stimpy cartoon here, but to be fair it probably sums up log shipping pretty well. This post is focusing on using a read-only log shipping database for reporting purposes, and the limitations of read-only log shipped databases. I also share some monitoring scripts and a few ideas on how to improve restore performance without having to upgrade the hardware/software.
Despite the development of AlwaysOn in recent releases of SQL Server, log shipping is still a great way to set up a copy of databases to be used for reporting. One of the main reasons it is great is because, unlike AlwaysOn, it is available in less expensive editions like Standard and Web from SQL Server 2008 onwards. Sure, in 2016 AlwaysOn will be available in Standard, but in a heavily restricted form, and you cannot read from the secondary. So it will be good for DR, but not for reporting (as an aside, it still might be easier to set up log shipping for DR than AlwaysOn Basic, because for the latter you need to set up a failover cluster. Read through the "how to set up Standard Edition Availability Groups" here.) However, you do need to be careful when setting up log shipping across different editions of SQL Server: whilst you can log ship from Enterprise to Standard/Web, if the database uses any Enterprise features then you'll need to log ship to an Enterprise edition of SQL Server. And because you'll be using the database for reporting, you'll need to get it licensed.
Like all DBAs, I have a collection of random scripts put together over time that reside in a folder and are then frequently forgotten about. Today I found a file called "useful log shipping script", and being the generous guy I am, I'm putting it up here. SELECT ls.primary_server AS [primaryServer] ,ls.primary_database AS [primaryDB] ,lsd.secondary_database AS [secondaryDB] ,lsd.restore_delay AS [restoreDelay] ,CONVERT(VARCHAR, DATEDIFF(SECOND, lms.last_restored_date, GETDATE()) / (60 * 60 * 24)) + '_' + CONVERT(VARCHAR, DATEADD(s, DATEDIFF(SECOND, lms.
Although it is not an activity that I regularly do, I needed to write a script to move TempDB. This was whilst I was testing different RAID arrays on a SAN. The TempDB had quite a few files and I got bored writing out the commands over and again, so came up with a script that will print out the commands to alter the files for TempDB, and if necessary will execute the command also.
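The idea behind the script is easy to sketch: generate the ALTER DATABASE commands from tempdb's own file metadata rather than writing them out by hand. The target path below is an assumption for illustration:

```sql
-- Print (rather than immediately execute) the commands needed to move
-- every tempdb file to a new location.
DECLARE @newPath nvarchar(260) = N'T:\TempDB\';

SELECT 'ALTER DATABASE tempdb MODIFY FILE (NAME = [' + name + '], FILENAME = '''
     + @newPath + name
     + CASE WHEN type = 1 THEN '.ldf' ELSE '.mdf' END
     + ''');'
FROM tempdb.sys.database_files;

-- Run the generated commands, then restart the instance:
-- tempdb files are only created in the new location on startup.
```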
Super quick script to find all objects that are compressed. This script is grouped by table and compression type, as partitioned tables may have different types of compression enabled per partition. If we do not include the "group by" then all partitions will be listed regardless of whether the compression type is different or not, and that looks really messy. SELECT SCHEMA_NAME(sys.objects.schema_id) AS [Schema] ,OBJECT_NAME(sys.objects.object_id) AS [Object] ,[data_compression_desc] AS [CompressionType] FROM sys.
Disclaimer: I’m not saying that these three things are the only things you need to know about missing index hints in Execution Plans, I’m sure whole chapters in SQL Books can be dedicated to the topic! And I’m not advocating the use of creating new indexes indiscriminately to a database.
Missing index hints in query plans are a potentially quick and fantastic way of speeding up a slow running query. However, there are a few things that are vital before adding a new index recommended via the Missing Index Hint. Let’s setup a test database to give some working examples.
CREATE DATABASE [IX]
GO
ALTER DATABASE [IX] SET RECOVERY SIMPLE
GO
USE IX
GO
CREATE TABLE Me_Heap
(
    Me_Heap_ID BIGINT IDENTITY (1,1),
    FK_Id_BIG INT,
    DateChange DATETIME2 DEFAULT ( CURRENT_TIMESTAMP ) NOT NULL,
    DataCol NCHAR (8) DEFAULT ( 'IX' )
)
GO
BEGIN TRANSACTION
DECLARE @i INT = 1
WHILE @i <= 30000
BEGIN
    INSERT INTO Me_Heap ( FK_Id_BIG ) VALUES ( @i % 10 )
    SET @i = @i + 1
END
COMMIT TRANSACTION
CHECKPOINT
GO 40
CREATE TABLE Me_Heap_2
(
    Me_Heap_2_ID INT IDENTITY (1,1),
    A_N_Other_Val INT,
    DateChange DATETIME2 DEFAULT ( CURRENT_TIMESTAMP ) NOT NULL,
    DataCol NCHAR (8) DEFAULT ( 'IX' )
)
BEGIN TRANSACTION
DECLARE @i INT = 1
WHILE @i <= 30
BEGIN
    INSERT INTO Me_Heap_2 ( A_N_Other_Val ) VALUES ( @i % 10 )
    SET @i = @i + 1
END
COMMIT TRANSACTION
CHECKPOINT
GO
You will notice that there are no indexes, and this is intentional.
I was wondering if there was a way to script out Database Mail so that the configuration was identical across multiple SQL Server instances. Unlike most other objects in SQL, there’s no obvious way to do this (typically I expect to see the option through the SSMS UI when right clicking on an object).
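There's no "Script As" option for Database Mail, but the configuration all lives in msdb, so a starting point (a sketch for reading the settings back out, not a full scripting solution) is:

```sql
-- Read the Database Mail profiles, accounts and SMTP servers
-- so the configuration can be compared or rebuilt elsewhere.
SELECT p.name       AS [Profile],
       a.name       AS [Account],
       a.email_address,
       s.servername AS [SMTP Server],
       s.port
FROM msdb.dbo.sysmail_profile        AS p
JOIN msdb.dbo.sysmail_profileaccount AS pa ON pa.profile_id = p.profile_id
JOIN msdb.dbo.sysmail_account        AS a  ON a.account_id  = pa.account_id
JOIN msdb.dbo.sysmail_server         AS s  ON s.account_id  = a.account_id;
```

From there it's a short step to generating the matching sysmail_add_profile_sp / sysmail_add_account_sp calls for the other instances.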
I've been working with data warehouses again recently and needed to help out the rest of the team who were not familiar with partitioning in SQL Server. It's one of those subjects that really requires hands-on experience to truly appreciate the mechanics of, and for me it especially helps if I can visualize a concept.
I've always thought that the best way to explain partitioning is to imagine that any one table in SQL Server has a partition count of exactly one. When we partition the table on a range of values in a column, we introduce multiple copies of that table, and it is the metadata for the table that is replicated. And it is this copying of the table behind the scenes that helps with archiving and query performance. This is probably not far off how partitioning actually works.
Recently I have been looking at the COPY_ONLY feature for taking backups of a database. COPY_ONLY was introduced in SQL Server 2005, and allows you to take ad-hoc backups of either the log or a full backup without breaking the backup chain for differential backups. The COPY_ONLY full backup is not part of the restore sequence: when restoring from a full backup you can ignore it. The same can also be said for COPY_ONLY log backups; it's an ad-hoc backup that does not affect the transaction log at all; there's no clearing down of the log.
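Both flavours are a single WITH option on the usual backup commands. Database name and paths here are illustrative:

```sql
-- Ad-hoc full backup that does NOT reset the differential base:
BACKUP DATABASE [SalesDB]
TO DISK = N'D:\Backups\SalesDB_AdHoc.bak'
WITH COPY_ONLY, INIT;

-- Ad-hoc log backup that does NOT truncate the log
-- or disturb the log backup chain:
BACKUP LOG [SalesDB]
TO DISK = N'D:\Backups\SalesDB_AdHoc.trn'
WITH COPY_ONLY, INIT;
```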
I've been working hard on my backup/restore skills lately, getting to grips with the fundamentals and making sure that I understand backup/restore best practices, and running examples to make sure I'm ready in the event of a disaster. It's a cliche to say that you don't want to be doing this in a real-life disaster when everyone is huddled around your desk, but it's true! Tasks like tail-log backups, a type of backup that backs up the log file even if the database is offline or damaged, are something that you may need to run in some scenarios.
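For reference, a tail-log backup of a damaged database looks like this (names are illustrative):

```sql
-- NO_TRUNCATE lets the log be backed up even when the data files
-- are damaged or offline; NORECOVERY leaves the database in a
-- restoring state, ready for the restore sequence.
BACKUP LOG [SalesDB]
TO DISK = N'D:\Backups\SalesDB_Tail.trn'
WITH NO_TRUNCATE, NORECOVERY;
```

Capturing the tail means the subsequent restore can recover right up to the point of failure rather than just to the last scheduled log backup.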
A couple of weeks back I posted a script that checks the progress of any backups running on an instance. A few days later I needed a script to check the progress of a restore. Fortunately enough, the backup script can be re-used with a very simple change. This simple script checks if any restores are running, how long they have been running and what the anticipated completion time will be.
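For illustration, the "very simple change" is just the command filter. A sketch of the restore version (not necessarily the exact original script):

```sql
-- Progress of any RESTORE currently running on the instance.
SELECT r.session_id,
       r.percent_complete                   AS [%],
       r.total_elapsed_time / 60000         AS [Running (min)],
       r.estimated_completion_time / 60000  AS [Remaining (min)],
       t.text                               AS [Command]
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.command LIKE '%RESTORE%'
ORDER BY r.percent_complete DESC;
```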
I received an email from the TechNet Evaluation Centre this morning, stating that the CTP for SQL Server 2016 (v2.2) was replaced as of yesterday. Apparently the previous release caused downtime with AlwaysOn Availability Groups during rolling upgrades. Though I have SQL Server 2016 installed on an Azure box, I've not played around with AlwaysOn in 2016. An upgrade is recommended for any users currently running SQL Server 2016 CTP 2.
Edit: MSSISBuild has been updated and moved to GitHub. Read about it here. The rest of this post is still relevant, it's just where you download the code from that has changed.
This has proven to be a popular post, and the software has been downloaded a few times. I will post an update for SQL Server 2016 soon, though I think it is just references in the solution that need updating.
Today I am pleased to announce the release of MSBuildSsis2012 on
Codeplex. But before I get into what it does, first a bit of background:
Back in May, and subsequently a few more times since, I've posted about an error I get occasionally in one of our custom tasks that run in our builds. This custom task is a special case: as the SSIS 2012 project extension is dtproj, it cannot be compiled using MSBuild. The typical solution would be to use DevEnv in MSBuild. And to automate deployment, the solution (that would work for most people) is to use the SSIS Deployment Wizard in silent mode. However, we don't use the SSIS Deployment Wizard in our automated builds as it does not work in silent mode when you have assembly references in the SSIS packages: it destroys those references and the dtsx packages fail whenever they run.
Hello! If you have backups running through jobs you may want to know the progress of the backup and where the backup media is. The script below returns this information for you. Happy scripting! SELECT A.NAME ,B.TOTAL_ELAPSED_TIME / 60000 AS [Running Time] ,B.ESTIMATED_COMPLETION_TIME / 60000 AS [Remaining] ,B.PERCENT_COMPLETE AS [%] ,( SELECT TEXT FROM sys.dm_exec_sql_text(B.SQL_HANDLE) ) AS COMMAND FROM MASTER..SYSDATABASES A ,sys.dm_exec_requests B WHERE A.DBID = B.DATABASE_ID AND B.COMMAND LIKE '%BACKUP%' ORDER BY percent_complete DESC ,B.
Hello! As an environment grows bigger and you have many jobs across many server instances, it’s important to automate as much as you can. And starting with the small, tedious but time consuming jobs is never a bad idea! A good example is having to enable/disable SQL Agent jobs during maintenance, or when a snapshot for replication is being pushed out you may not want certain jobs to run to prevent failures.
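As a sketch of the kind of automation I mean: disabling every job in a category before maintenance, then re-enabling afterwards. The category name is an assumption for the example; swap in your own filter:

```sql
-- Disable all enabled jobs in a given category; rerun with
-- @enabled = 1 in sp_update_job to switch them back on.
DECLARE @job_id uniqueidentifier;

DECLARE jobs CURSOR LOCAL FAST_FORWARD FOR
    SELECT j.job_id
    FROM msdb.dbo.sysjobs AS j
    JOIN msdb.dbo.syscategories AS c ON c.category_id = j.category_id
    WHERE c.name = N'Database Maintenance'   -- illustrative category
      AND j.enabled = 1;

OPEN jobs;
FETCH NEXT FROM jobs INTO @job_id;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC msdb.dbo.sp_update_job @job_id = @job_id, @enabled = 0;
    FETCH NEXT FROM jobs INTO @job_id;
END;
CLOSE jobs;
DEALLOCATE jobs;
```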
Today I am going to share with you a SQL Script that can help monitor the status on your distribution agents in a snapshot/transactional replication topology.
Despite there being some built-in monitoring for replication agents in the Replication Monitor, one check that seems to be missing is whether the agents are updating the distributor. Sure, there are alerts for agents shutting down for whatever reason, but sometimes latency can be reported as "excellent" for a subscriber despite it not having updated the distributor for some time. And because it has not updated the distributor, there are no calculations for transactional latency, which means that whilst on the surface everything is fine, you are slowly heading towards a real headache with replication.
I’ve been working with SQL Server Replication recently, and there are plenty of resources available online to help you get started, including
Ever get tired of the ugly-looking tabs at the top of SSMS, full of impertinent information? I was recently told of a method to tidy up the info displayed on the tabs, making them more legible: go to Tools > Options, then navigate to Text Editor > Editor Tab and Status Bar. The options underneath "Tab Text" determine what is displayed on the tab. Here you can set them to be on or off.
I was catching up on some blogs yesterday, and one I really liked was this post on Spaghetti DBA. The author's main complaint was that he had grown disillusioned with Connect and the way in which so many issues are dealt with (i.e., closed as "Won't Fix"). All the author wanted was better feedback from the engineers when responding to issues. I'm adding my voice to this request because in the past year I have raised a couple of issues myself, only to be disappointed with the feedback:
Hello! And Happy New Year!
I've spent much of the past month off work looking after my daughter whilst my wife is at work. It seems that while this time of year is quiet for me at the office, in the NHS it's the busiest period of the year. So it has been great to spend time with Phoebe at home, which has resembled a building site since the end of October. Indeed, as I work from home I have had to move the computer from room to room 5 different times whilst work was completed. During that time I've learnt more things about plumbing than I've ever wanted to know, and surprised myself when I kept a remarkably cool head when I noticed water leaking out of the ceiling (from the room I had just removed the radiator from, successfully I thought, and whose pipes I had capped) into our living room. And here is some advice which is as unrelated to technology as you'll ever read on this site, but invaluable nonetheless: try not to reuse caps to cap off radiator pipes, as you have to turn them so tight that they tend to break up when you try to use them again. Which is exactly what I had done. I thought they were screwed on well enough until I turned the heating on and water got flowing around the system, which was when the water started to leak out of the busted cap. Fortunately for me no damage was done and I was able to drain the entire heating system, which unfortunately coincided with us living without heating during the coldest days of 2014, until the plastering was done. It's all part of us paying our dues until the house is done. Currently we are without a shower/bath, though mercifully we are not far away from friends who are kind enough to let us use their bathroom.
Every day this week I am going to be posting part of a script that I recently wrote, but broken down into separate components that are useful in their own right. The full script, which I will be sharing at the end of the week, automates the synchronisation of SSAS cubes from one instance to another.
So to make this an automated process that was robust to any deviation, I had to make sure that the destination service account had full admin access to the source database, that the destination disk could stand the anticipated growth as well as the copy of the sync'd cube, and that the datasource's connection string on the destination database was reset to its original value. Typically the datasource does not need updating, but the destination databases are used in a different environment with their own datasource. And as the sync also syncs the datasource, it is necessary to update the datasource back to its original value.
SQL Relay 2014 resumes in the UK next week. SQL Relay is a series of 1 day events that are run throughout the UK.
When I first started administering SSAS, HardMemoryLimit was a property on SSAS instances that I never really took much notice of. After all, there's the LowMemoryLimit and the TotalMemoryLimit, and once these are set, they are set in stone right?
Despite the fact that mainstream support for SQL Server 2008 ended earlier this year, Microsoft have released SQL Server 2008 Service Pack 4. This arrives a few days after Service Pack 3 for SQL Server 2008 R2, but three years since the last one for this version of SQL. That's a lot of updates!
Despite the fact that mainstream support for SQL Server 2008 R2 ended earlier this year, Microsoft have released SQL Server 2008 R2 Service Pack 3.
As part of their Incremental Service Modelling, Microsoft have released SQL Server 2012 Service Pack 1 Cumulative Update #12.
As part of their Incremental Service Modelling, Microsoft have released SQL Server 2012 Service Pack 2 Cumulative Update 2!
October this year is shaping up to be a month of learning, especially for those of us who live in London. Because not only is SQL Relay taking place on the 30th, there’s also RedGate’s SQL In The City happening a week earlier on the 24th.
Hey all, today I am going to show a quick demo on QUOTENAME. When I first discovered this string function, it was one I wished I had learnt long ago as it helps simplify writing out dynamic SQL.
To quote Books Online, QUOTENAME "Returns a Unicode string with the delimiters added to make the input string a valid SQL Server delimited identifier." So basically it formats an identifier with delimiters so that it remains valid even when the identifier does not comply with the rules for regular identifiers.
The example below returns all the tables in AdventureWorks 2014 with the different types of delimiters that can be added using QUOTENAME.
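A condensed sketch of the same idea, runnable in any database (the second argument picks the delimiter; square brackets are the default):

```sql
-- QUOTENAME wraps an identifier in the chosen delimiter,
-- escaping any embedded delimiter characters as it goes.
SELECT name,
       QUOTENAME(name)        AS [Brackets],     -- [My Table]
       QUOTENAME(name, '"')   AS [DoubleQuotes], -- "My Table"
       QUOTENAME(name, '''')  AS [SingleQuotes]  -- 'My Table'
FROM sys.tables;
```

This is what makes it so handy for dynamic SQL: concatenating QUOTENAME(@table) instead of @table protects you from identifiers with spaces, odd characters, or worse.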
Throughout October, SQL Relay 2014 is taking place in the UK. If you're unfamiliar with SQL Relay, it is a series of 1 day events that are run throughout the UK. Each event is a single track (or more in the case of London) conference, with between 50 and 200 attendees and 6-8 hour-long talks by SQL Server professionals, MVPs, authors, technical experts etc.
As part of their Incremental Service Modelling, Microsoft have released SQL Server 2012 Service Pack 2 Cumulative Update 1!
With every new release of a software product, some features are altered at the request of the majority of users with the aim of improving the product. And generally these changes are correct. Most, but not all. Some are short-sighted at best. And if there was one feature change that Microsoft acquiesced to which I just cannot abide, it was removing the automatic creation of the folder structure in Visual Studio database projects.
I've been looking into using processor affinity on our SSAS instances to help control the workload on our servers. Sometimes a process can run which will grab as much CPU resource as it can, which means that queries on the cubes cannot run, grinding the front end to a halt. Unlike the database engine, there is no Resource Governor for Analysis Services. Sure, you can set the min/max memory simply enough, but setting limits on IO and CPU is a more complex task.
Recently, I discovered a job running on an instance of SQL Server (not a production server, test server mercifully!) that ran this:
Apologies for lack of truly original content lately, moving house + work + family are taking up all my spare time. Here’s some updates from around the web.
UPDATE: You should probably hold off installing this until the first CU is available (see linked post)
As part of their Incremental Service Modelling, Microsoft have released SQL Server 2012 Service Pack 1 Cumulative Update #10.
Xp_regread is an undocumented stored procedure. It enables you to read the literal registry path that you specify via T-SQL. EXEC [master].[dbo].[xp_regread] @rootkey='HKEY_LOCAL_MACHINE', @key='Software\Microsoft\MSSQLServer\Setup', @value_name='SQLPath' You can use the OUTPUT parameter with xp_regread to return the value back. declare @outputValue nvarchar (128) EXEC [master].[dbo].[xp_regread] @rootkey='HKEY_LOCAL_MACHINE', @key='Software\Microsoft\MSSQLServer\Setup', @value_name='SQLPath', @value = @outputValue OUTPUT SELECT @outputValue You can check who has access to xp_regread by running the following query: SELECT OBJECT_NAME(major_id) AS [Extended Stored Procedure], USER_NAME(grantee_principal_id) AS [User] FROM sys.
(If you attended my talk at SQL Supper in January 2016 and want a copy of my scripts, then this post here has the full details)
One of the features of SQL 2014 which got fewer column inches (pun intended) than Hekaton / In-Memory OLTP, but was of more interest to me, was clustered columnstore indexes, or CCI for brevity.
I've worked with partitioning for quite some time now, and though it has been around for a while and lots has been written about the concept, I recently presented an example of partitioning to a few people within a team and felt it was worth sharing on this blog. In today's post I am going to demonstrate how to create partitions for a table, what happens when you insert data, how to split data out, how to switch partitions and finally how to merge partitions.
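To set the scene, here is a minimal sketch of the moving parts involved (all names are illustrative): a partition function defines the boundary values, a partition scheme maps each range to a filegroup, and the table is created on the scheme.

```sql
-- Partition function: four partitions with RANGE RIGHT boundaries.
CREATE PARTITION FUNCTION pf_OrderYear (int)
    AS RANGE RIGHT FOR VALUES (2012, 2013, 2014);
GO
-- Partition scheme: map every partition to PRIMARY for simplicity;
-- in practice you would spread these across filegroups.
CREATE PARTITION SCHEME ps_OrderYear
    AS PARTITION pf_OrderYear ALL TO ([PRIMARY]);
GO
-- The table is created ON the scheme, partitioned by OrderYear.
CREATE TABLE dbo.Orders
(
    OrderId   int NOT NULL,
    OrderYear int NOT NULL
) ON ps_OrderYear (OrderYear);
```

Splitting, switching and merging then operate on this same function and scheme, which is what the rest of the demo walks through.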
As part of their Incremental Service Modelling, no sooner had they released SQL 2014 RTM than Microsoft released SQL Server 2014 Cumulative Update #1. Cumulative Update 1 includes all hotfixes which were released in SQL Server 2012 SP1 CU 6, 7, 8, and 9. The build number of the cumulative update package is 12.0.2342.0. You may have to restart the computer after you apply this cumulative update package. This CU contains 111-121 fixes, depending on your sources.
As part of their Incremental Service Modelling, Microsoft have released SQL Server 2012 Service Pack 1 Cumulative Update #9.
Apologies for the lack of snappy title! Some weeks back I posted about the Management Pack for SQL Server Analysis Services 2008 and 2012 that was recently released. If you do not know anything about the management pack I recommend a read of my older post before reading this one. We recently installed the Management Pack for SQL Server Analysis Services 2012 and as promised I am posting some notes on the whole experience and install:
As my more regular readers will know, I am working towards my MCSA SQL Server 2012. So I was pretty pleased to see this tweet the other day. https://twitter.com/MSLearning/status/435877633067593729 I signed up and booked my exam for March 28th. I’ve booked the “Administering Microsoft SQL Server 2012 Databases”. This is my day job after all! A few caveats with the voucher: This voucher is valid for 20% off any MCSA: SQL Server 2012 exam: 461, or 462, or 463, plus a free retake of the exam if you need it.
Yesterday evening there was a very special SQL Supper hosted at the Microsoft offices in London Victoria. It was an open session with some of the Microsoft Data Platform group: Mark Souza, Nigel Ellis, Conor Cunningham, Hatay Tuna and Ewan Fairweather. So I made my way by foot from the offices here in Covent Garden to London Victoria. Although February in London is rather chilly, with a brisk walk you can make it in half an hour, plus you pass several landmarks such as the piazza at Covent Garden, Trafalgar Square (and its controversial fourth-plinth sculpture, the blue Hahn/Cock), the Mall, Buckingham Palace and finally, Westminster Cathedral.
I've been using TRUNCATE TABLE to clear out some temporary tables in a database. It's a very simple statement to run, but I never really knew why it was so much quicker than a DELETE statement. So let's look at some facts:
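A quick illustration of the difference (table name is made up for the demo): DELETE is fully logged, writing a log record per row and firing any delete triggers, whereas TRUNCATE deallocates whole pages, logs only the deallocations, and resets any IDENTITY seed.

```sql
-- A throwaway table with some rows to clear out.
CREATE TABLE dbo.Demo
(
    Id      int IDENTITY(1,1),
    Payload char(100) NOT NULL DEFAULT 'x'
);
INSERT INTO dbo.Demo (Payload)
SELECT TOP (10000) 'x'
FROM sys.all_columns a CROSS JOIN sys.all_columns b;

DELETE FROM dbo.Demo;     -- fully logged: one log record per row

TRUNCATE TABLE dbo.Demo;  -- logs page deallocations only,
                          -- and resets the IDENTITY seed
```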
I was browsing Twitter last Sunday night and a tweet from Chris Webb caught my eye:
Great! Finally, a Management Pack for SSAS! I don't know how much people know about System Center Operations Manager (SCOM), so a brief introduction: SCOM is, according to the Microsoft Marketing Executives:
“System Center 2012 – Operations Manager provides infrastructure monitoring that is flexible and cost-effective, helps ensure the predictable performance and availability of vital applications, and offers comprehensive monitoring for your datacenter and cloud, both private and public.”
As part of their Incremental Service Modelling, Microsoft have released SQL Server 2012 Service Pack 1 Cumulative Update #8.
(Part 2 focuses on clearing out the SSISDB by creating a new stored proc based on the one used by the maintenance job. If your maintenance job is taking hours to run, you need to check this out.)
When deploying our Ispacs through my builds, I’ve still been getting the intermittent “timeout” issues I first wrote about back in May, and this has been exacerbated by extra SSIS Projects added to the SSISDB catalog.
SQL Relay R2 2013 resumes in the UK next week. SQL Relay is a series of 1 day events that are run throughout the UK.
As part of their Incremental Service Modelling, Microsoft have released SQL Server 2012 Service Pack 1 Cumulative Update #7.
This morning I am happy to announce the release of an alpha version of the SSAS Activity Viewer for SQL Server 2014 CTP2. The name has had a slight change and is now called Analysis Services Server Activity Monitor (ASSAM) 2014.
One of my first posts was a simple PowerShell module that used a bat file to run SqlPackage in cmdline mode to extract dacpacs. I’ve recently made some changes that make it more robust and allow other users to run it. The PowerShell module is just a wrapper around a cmd file that executes sqlpackage.exe to extract the dacpac. Check out my September Six of The Best for another solution provided by Gianluca Sartori.
Throughout November, SQL Relay R2 2013 is taking place in the UK. This is the 2nd series of events this year. SQL Relay is a series of 1 day events that are run throughout the UK. Each event is a single track (or more in the case of London) conference, with between 50 and 200 attendees and 6-8 hour-long talks by SQL Server professionals, MVPs, authors, technical experts etc.
At the PASS Summit keynote today Quentin Clark announced SQL Server 2014 CTP2, which is public (and the last public preview) and is available right now to download here. It is also available as a VM on Azure. According to a tweet from Glenn Berry, the version number in the demo is 12.0.1524.
Before you download and install, here's what you ought to know:
This morning I am happy to announce the release of a stable version of SSAS Activity Viewer 2012. The new features have been highlighted below in the release notes section.
(Worth noting that there is a more recent version available here!)
Hi all, yesterday morning I created a project based on an old blog post, Upgrade Analysis Services Activity Viewer 2008 to SQL Server 2012. Over the course of the day I made a few changes that I wanted to see myself, and this afternoon I have released a beta version, SQL Server Analysis Services Activity Viewer 2012, or SSASAV 2012 for short.
Below is a script that I use to find the backup history for all the databases on a single instance. There is an inline ‘WHERE’ clause that is currently commented out that you can add in to filter by database name.
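The script itself is cut off in this excerpt, so here is a minimal sketch of a backup-history query of the kind described, against msdb (not necessarily the original script; the commented WHERE clause is the inline database filter mentioned above):

```sql
-- Backup history for all databases on the instance, newest first.
SELECT bs.database_name,
       bs.backup_start_date,
       bs.backup_finish_date,
       CASE bs.type WHEN 'D' THEN 'Full'
                    WHEN 'I' THEN 'Differential'
                    WHEN 'L' THEN 'Log'
       END AS backup_type,
       bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf
  ON bmf.media_set_id = bs.media_set_id
--WHERE bs.database_name = N'SalesDB'   -- uncomment to filter
ORDER BY bs.backup_start_date DESC;
```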
Here’s a roundup of 6 links and updates from the past month I recommend as good reading. I decided that I needed to post a list of Blogs posts, technology info and other random-but-related stuff. So I decided to limit myself to a monthly post of sharing no more than 6 articles across the web. The Full Story of Nokia and Microsoft - How we got here, and why Microsoft will fail with Nokia handsets
Being a DBA is about more than just knowing SQL: it's about the technology which SQL runs on, which includes servers and storage. I'm naturally interested in hardware, so I enjoy keeping up-to-date. Becoming acquainted and staying current with new technologies will not take up too much time, plus it will make you a better DBA. Below are a few resources that I have used to keep up-to-date with hardware and storage.
When extracting dacpacs it's quite straightforward to go through the UI and extract, but what if you want to do this for more than one database, or automate the process for a build? Sqlpackage.exe enables us to extract a dacpac in silent mode. Below is the command, and I will go through each parameter: "C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\sqlpackage.exe" /action:Extract /OverwriteFiles:False /tf:"C:\Users\richard.lee\Documents\dacpacs\dbname_Baseline.dacpac" /SourceConnectionString:"Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=AdventureWorks;Data Source=SQLA" /action: we want to extract the dacpac to a location.
As part of their Incremental Service Modelling, Microsoft have released SQL Server 2012 Service Pack 1 Cumulative Update #6. To apply this cumulative update package, you must be running SQL Server 2012 SP1. You may have to restart the computer after you apply this cumulative update package. This CU contains 23 fixes and is pertinent for versions 11.0.3000 – 11.0.3373.0 but is not for SQL Server 2012 RTM (11.0.2100 – 11.
There are known knowns; there are things we know that we know. There are known unknowns; that is to say, there are things that we now know we don’t know. But there are also unknown unknowns – there are things we do not know we don’t know.

Last July saw me work my last release weekend for quite some time. But the last weekend was not without its challenges, as we were attaching a new JBOD to our SQL Cluster (upgrading from an 8 disk RAID 5 to a 20 disk RAID 10 for the OLAP databases, really excited to see how they work out), amongst some other big changes. And you can plan for everything, except for the Unknown Unknowns.
(Donald Rumsfeld, United States Secretary of Defense).
A 9th Cumulative Update is available for those users still on the RTM branch of SQL Server 2012. This is in keeping with Microsoft’s Incremental Service Modelling. http://support.microsoft.com/kb/2867319 You may have to restart your machine before the update is applied. Please check that the hotfixes which are included in this cumulative update package are relevant to you, though some of these seem fairly important, such as poor performance when you run a query that uses the LIKE operator in the WHERE clause.
Here’s a roundup of 6 links and updates from the past month I recommend as good reading. I decided that I needed to post a list of blog posts, technology info and other random-but-related stuff. So I decided to limit myself to a monthly post of sharing no more than 6 articles across the web. Determine if a computer is virtual with powershell SSD RAID Load Testing Results from a Dell PowerEdge R720
Here’s a roundup of 6 links and updates from the past month I recommend as good reading. I decided that I needed to post a list of blog posts, technology info and other random-but-related stuff. So I decided to limit myself to a monthly post of sharing no more than 6 articles across the web. Enabling OpenSource Software using Windows Server 2012 R2 Architectural overview of SQL Server 2014 In Memory OLTP
As part of their Incremental Service Modelling, Microsoft have released SQL Server 2012 Service Pack 1 Cumulative Update #5. To apply this cumulative update package, you must be running SQL Server 2012 SP1. You may have to restart the computer after you apply this cumulative update package. This CU contains 30 fixes and is pertinent for versions 11.0.3000 – 11.0.3368 but is not for SQL Server 2012 RTM (11.0.2100 – 11.0.2999).
This week has been the week of the release: we’ve had the Visual Studio 2013 Preview and now the SQL Server 2014 CTP1. And that’s just the ones I’m focusing on! There’s the Windows 8.1 Preview as well as Windows Server 2012 R2. What with all the keynotes from Build and both TechEds this month, it is clear that Microsoft have certainly accelerated the release cycle and embraced a cloud-first development model. For me this is most noticeable in Team Foundation Services, the Azure based source control solution. Features were turned on regularly before they were released in the three updates we’ve had since the release of TFS 2012 back in August. This is a big change from Microsoft’s previous strategy of developing first for its products hosted on-premises and then pushing the features to the cloud afterwards.
At a later date I’ll dig more into features of SQL 2014, as this week my focus has been on Visual Studio 2013, which is out sometime this year, as well as doing my job in real life. The duties of blogging…
A week late and more than a few dollars short, this blog post is focusing on a few of the features announced for SQL Server 2014. I’ve watched the keynote and a few of the videos from the sessions at TechEd North America 2013, and read the whitepapers and datasheets. Information is still limited though, like which versions the features will be added to, so it’s certainly not a deep dive into the features. Nor is it an exhaustive list of the features; rather, it’s a look at those that immediately piqued my interest in the Mission Critical Performance category of improvements.
As part of their Incremental Service Modelling, Microsoft have released SQL Server 2012 Service Pack 1 Cumulative Update #4. http://support.microsoft.com/kb/2833645 To apply this cumulative update package, you must be running SQL Server 2012 SP1. You may have to restart the computer after you apply this cumulative update package. This CU contains 39 fixes and is pertinent for versions 11.0.3000 – 11.0.3367 but is not for SQL Server 2012 RTM (11.0.2100 – 11.0.2999).
Recently I posted about a timeout issue I was getting when deploying SSIS projects. It was mainly down to too many logs and previous project versions being kept in the SSISDB on our test environments. My solution was to run the SSIS Server Maintenance Job prior to every deployment to mitigate the timeout. By and large this has been super effective. However, that job can take a considerable amount of time to run.
(Update: Part 2 focuses on clearing out the SSISDB by creating new stored proc based on the one used by the maintenance job. If your Maintenance Job is taking hours to run, you need to check this out.)
No WiX Wednesday this week, owing to commitments in real life. Instead, here is something regarding SSIS 2012 Deployment. Enjoy!
As part of our CI and Test Builds we have automated the deployment of two SSIS Projects. One is fairly large and the other one contains only two dtsx packages. Recently we have been getting timeout issues with the deployment of the solutions.
I feel like I haven’t posted in a while despite posting only yesterday. This is because I wrote all the WiX posts in a batch and have them on auto release (definitely a good idea to write all the posts for a series and have them done before anything else.)
Last year, we planned to move our data warehouse from a 2 socket server to a 4 socket server that was left in the data centre from an abandoned project to virtualize our desktops.
In this post I’m going to share some T-SQL that will show you the compression state of tables and how to quickly find objects that are worth compressing.
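A sketch of the sort of query involved, assuming SQL Server 2008 or later (where sys.partitions exposes the compression setting); sorting by row count is one simple way to surface candidates:

```sql
-- Current compression setting per table/index, biggest partitions first.
-- data_compression_desc is NONE, ROW or PAGE.
SELECT  s.name AS schema_name,
        t.name AS table_name,
        i.name AS index_name,          -- NULL for heaps
        p.partition_number,
        p.data_compression_desc,
        p.rows
FROM    sys.partitions AS p
JOIN    sys.indexes    AS i ON p.object_id = i.object_id
                           AND p.index_id  = i.index_id
JOIN    sys.tables     AS t ON p.object_id = t.object_id
JOIN    sys.schemas    AS s ON t.schema_id = s.schema_id
ORDER BY p.rows DESC;                  -- large, uncompressed objects are the candidates
```

For a proper estimate of the savings on a specific table, sp_estimate_data_compression_savings will do the arithmetic for you.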
Last week brentozar.com ran one of their Technology Triage Tuesdays on Compression in SQL Server. The video does not appear to be up yet, however it provided good insight, certainly better than the Technet pages on the same subject. As we have a multi-terabyte data warehouse at work, which is on an Enterprise licensed instance of SQL Server 2012, I’m familiar with the subject of data compression in SQL. Until recently however, another of our SQL Servers was on a Standard license. Recently this instance was upgraded to Enterprise, and so I was able to compress the database. Although not as large as our data warehouse, it was well worth compressing some of the larger tables.
A 7th Cumulative Update is available for those users still on the RTM branch of SQL Server 2012. This is in keeping with Microsoft’s Incremental Service Modelling. http://support.microsoft.com/kb/2823247 You may have to restart your machine before the update is applied. Please check that the hotfixes which are included in this cumulative update package are relevant to you, though some of these seem fairly important. This CU has 17 fixes, and the version number will now be 11.
With Service Packs and Cumulative Updates rapidly released for SQL, it can be hard to keep up with what version you have installed where. It’s easy enough to find out which version of SQL Server you have installed: just run the following T-SQL and you can find all the info you need. But what about SSAS?
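For the SQL Server side, something along these lines covers it (the SSAS answer is what the post goes on to explore):

```sql
-- Instance version, servicing level and edition in one query.
SELECT  SERVERPROPERTY('ProductVersion') AS product_version,  -- e.g. 11.0.3000.0
        SERVERPROPERTY('ProductLevel')   AS product_level,    -- e.g. SP1
        SERVERPROPERTY('Edition')        AS edition;

-- Or, less structured but familiar:
SELECT @@VERSION;
```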
As part of their Incremental Service Modelling, Microsoft have released SQL Server 2012 Service Pack 1 Cumulative Update #3. http://support.microsoft.com/kb/2812412 This CU contains 38 fixes and is pertinent for versions 11.0.3000 – 11.0.3348 but is not for SQL Server 2012 RTM (11.0.2100 – 11.0.2999). Please check that the hotfixes included in this cumulative update package are relevant to you; unlike the last CU2, which fixed a major bug, it is generally recommended that you only install a CU if one of the fixes is pertinent to you.
It’s been a massively busy week: deploying clustered SQL servers on Sunday, a release this weekend, a new project starting next week for the dev team (which means it starts this week for me), plus the usual drama of month end loading. Today I have been up since 5am working with cubes using PowerShell, so I am still telecommuting. My own proclivities to the activity are documented here, and to further the conversation, the article linked below is from the Business Insider website about how metrics were used to make the decision.
Update: turns out this job doesn’t actually execute (d’oh!) so I made a stored proc. Read the updated post by clicking this linky One of our databases is exclusively used for staging tables of raw data, before it is extracted and transformed by a SSIS package and inserted into our data warehouse. These tables are large and pretty much useless after one run. The database has over 1.5TB of storage, but even after a few days of heavy month end data loading, it can fill up quickly.
Here is a neat little script: it gets all the status info of an Analysis Services database and the cubes within it. The status ought to be “Processed”; if it isn’t, you know that something has gone wrong in your database. This script can quite easily be expanded to include dimensions, measure groups and partitions, but I personally split those out into different scripts, otherwise the screen looks too full of info and you cannot see the forest for the trees.
Automate Database Builds Part One: Extract DacPacs Using PowerShell Via sqlpackage.exe
Automate Database Builds Part Two: Extract Database Structure for Visual Studio Solution
Automate Database Build Part Three: Create a Database Publish Profile
(There is a more up to date version of this script here) Hey folks, and welcome to my first proper blog post. One of the things that I like to monitor through my daily checks is the file size and the free space of each file within the databases. During a busy loading period, some databases can grow massively, so it’s important to keep an eye on the growth. Through the UI, SSMS only gives us the size of the files and the max size they can grow to, which is clearly not very useful. Fortunately, a quick query on the dbo.sysfiles of each database that we want to monitor gives us some info:
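A sketch of the kind of query meant here (the column aliases are my own; sysfiles reports sizes in 8KB pages, hence the multiply by 8 to get KB):

```sql
-- Per-file size and used space for the current database.
-- FILEPROPERTY(..., 'SpaceUsed') returns the pages actually in use.
SELECT  name                                AS logical_name,
        size * 8                            AS size_kb,
        FILEPROPERTY(name, 'SpaceUsed') * 8 AS used_kb,
        maxsize                             AS max_size_pages
FROM    dbo.sysfiles;
```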
but this isn’t entirely useful. For starters, the size is in KB. This makes sense, as databases store data in 8KB pages. Whilst it may be OK for a small database like this one, our data warehouses are far too big for sizes in KB to be useful. Also, we can infer the remaining space, but again it’s not too helpful having to figure this out for ourselves.