2018-10

2018-09

2018-08

2018-07

2018-06

2018-05

2018-04

2018-03

2018-02

2018-01

2017-12

2017-11

2017-10

2017-09

2017-08

2017-07

2017-06

2017-05

2017-04

2017-03

2017-02

2017-01

2016-12

2016-11

2016-10

This is by design.

So at first I thought “put nothing in script modules” because they’re pretty pointless, right? Anything you write will pertain to a step exclusively, so why would you need to share code across steps? It turns out quite a bit of code is shared across steps, particularly code for checking and validating parameters.

Then, as step templates got more sophisticated, they got harder to read, so I thought “put everything in script modules!” This aided the legibility of the step templates and allowed them to focus mainly on setting up a process, using script modules to execute the functions.

There is, however, an issue with this: script modules are the “magic” of a step template. If a step template requires a script module, how do you communicate that dependency? You could mention in the step template that a script module is required, but this is still a manual process that relies on everyone remembering it, or even buying into it.

There is also another issue: I had to make a change to a step template that required a change to the script module. The step template in question was in heavy use, so careful planning was required to make sure that the change to the step template and the script module was not a breaking change. Whereas a step template has to be explicitly updated for a given step in a deployment pipeline, the same cannot be said for a script module: whatever the latest version of the script module is, that is what a user gets. So if you make a few changes to the step template in an iterative manner, you have to be sure that whenever anyone updates the step template, all the versions still match one another.

Two of my most recent posts have concerned pre-model scripts: scripts that need to be run before the dacpac is compared against the production database. These are sometimes necessary, usually because SSDT may not produce an optimal script. One such example is index creation: no index is created “ONLINE”, which can be a problem if the database being deployed is in use during the operation, and even worse if the table is particularly large.

In fairness to the SSDT team, I can see why this is the case: some versions of SQL Server have the online index feature and some don’t. So one solution is to write a pre-model script that creates the index with the online option included. And while there’s nothing wrong with this, there is an alternative: deployment contributors.
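
For illustration, a pre-model script of that kind might look something like the sketch below. It is only a sketch: the table, index, and column names are placeholders, and the ONLINE option still needs an edition that supports online index operations.

-- Illustrative pre-model script (not from the original post): create the index
-- ONLINE ourselves so the dacpac comparison finds it already in place and does
-- not script its own offline CREATE INDEX. All object names are placeholders.
IF NOT EXISTS (SELECT 1
               FROM sys.indexes
               WHERE name = 'IX_Orders_CustomerId'
                 AND object_id = OBJECT_ID('dbo.Orders'))
BEGIN
    CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
        ON dbo.Orders (CustomerId)
        WITH (ONLINE = ON); -- requires an edition with online index operations
END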

2016-09

2016-08

log shipping. One of the key points I mentioned was how slow restoring can be for a read-only log shipped database. If reading the whole thing is too much effort for you right now, I’ll save you the trouble (you child of the internet, you) and tell you it’s because the database needs to be kept transactionally consistent between restores when making it available for read-only access. It creates a .tuf file (transaction undo file) to keep track of all the pages that have uncommitted transactions saved to them. The checking, creation, applying and re-applying of this file can take some time when there is a significant number of uncommitted transactions within a log backup.
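
For context, the restore that a read-only (standby) log shipping configuration effectively runs looks something like the sketch below; the database name and paths are placeholders.

-- Restoring WITH STANDBY keeps the database readable between restores.
-- Uncommitted transactions are rolled back into the transaction undo file
-- (.tuf) and reapplied before the next log restore - which is where the time
-- goes when a log backup holds a lot of uncommitted work.
-- Database name and file paths are placeholders.
RESTORE LOG [SalesDB]
    FROM DISK = N'\\backupshare\SalesDB\SalesDB_0100.trn'
    WITH STANDBY = N'D:\SQLData\SalesDB_RollbackUndo.tuf';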

posts. I realised I had made a mistake in that post and had updated it. But I also realised that the script itself is actually part of a PowerShell module. So, wanting to make things as easy as possible, and being the helpful guy that I am, I have altered that example into a script which I am posting here.

It’s worth stressing that the errors that output to the console are fairly generic, and that there is a log file that will contain all the information if there are any errors.

2016-07

Incremental Service Modelling, Microsoft have released SQL Server 2016 Cumulative Update #1 for SQL Server 2016 RTM.

previous post, I shared a script that used the Octopus API to create a defect, with the aim of adding it to a TeamCity build chained to a deploy/test build step: raise an Octopus defect if a test fails even though the deployment to the environment succeeded. You can read more about it here.

What makes this a challenge is that there is no way to have a chained build that runs if, and only if, a build has failed. So, as with Octopus, you have to use the TeamCity API. In this script I get the status of the last build that deployed/ran the tests, and if that build succeeded I do nothing. So yes, this chained build always has to run after the deploy/test phase.

Where it gets interesting though is if the build failed. Here we raise a defect, but not before checking to see if there are any defects raised that are still unresolved, as only one Octopus defect can be unresolved at any one time.

2016-06

Incremental Service Modelling, Microsoft have released SQL Server 2014 Cumulative Update #7 for SQL Server 2014 SP1.

2016-05

2016-04

preview mode, you are able to set up feeds and then publish NuGet packages either through NuGet command line or as a step in a Visual Studio build. I'm going to assume you've read the documentation and set up a feed, and that as part of your build you wish to publish a NuGet package to your new feed, and then you want to add the feed as a service on Octopus. I'm also going to assume you are using the NuGet package step instead of OctoPack to make your NuGet packages.

2016-03

"Ronseal" title for a post.... and whilst this may not be something you'll have to do regularly, searching for values in a query plan may be useful when running unit tests for SQL: you may be using it to confirm that a certain operator is used in the query plan, or whether a seek or scan is used... the possibilities are really endless.

2016-02

Incremental Service Modelling, Microsoft have released SQL Server 2014 Cumulative Update #5 for SQL Server 2014 SP1.

2016-01

returning DBCC CHECKDB results from a temp table, which was populated by running DBCC DBINFO against each database hosted on the SQL instance. I shared a query that returned any databases that had not had a CHECKDB run against them for over 7 days. The query is below:

 
SELECT dbi.Value
FROM #db AS dbi
WHERE dbi.Field LIKE '%lastknow%'
  AND (CASE WHEN ISDATE(dbi.Value) = 1
            THEN CAST(dbi.Value AS datetime2(3))
       END) < DATEADD(DAY, -7, GETDATE())

That “CASE” expression stands out quite a bit, doesn’t it? Let’s explore why this query needs it. Begin by creating a temp table that contains a sample of the data stored in the original temp table.
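
Something along these lines will do. This is a cut-down, made-up sample of the Field/Value pairs DBCC DBINFO produces; the field names are real, the values and the simplified table definition are illustrative.

-- Cut-down sample of the Field/Value pairs captured from DBCC DBINFO.
-- dbi_dbccLastKnownGood is the field the CHECKDB query cares about; the other
-- rows show why a straight date comparison on Value cannot work.
CREATE TABLE #db
(
    Field nvarchar(128),
    Value nvarchar(256)
);

INSERT INTO #db (Field, Value)
VALUES (N'dbi_dbname',            N'SalesDB'),
       (N'dbi_dbccLastKnownGood', N'2016-01-02 03:15:00.000'),
       (N'dbi_version',           N'782'),
       (N'dbi_dbname',            N'StagingDB'),
       (N'dbi_dbccLastKnownGood', N'1900-01-01 00:00:00.000');

The usual reason the CASE/ISDATE guard is needed here is that nothing forces the LIKE filter to be evaluated before the date comparison, so a non-date Value such as the dbi_version row can reach the CAST and blow the query up.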

2015-12

Incremental Service Modelling, Microsoft have released SQL Server 2014 Cumulative Update #4 for SQL Server 2014 SP1.

greatly deprecated form, and you cannot read from the secondary. So it will be good for DR, but not for reporting (as an aside, it might still be easier to set up log shipping for DR than AlwaysOn Basic, because for the latter you need to set up a failover cluster. Read through “how to set up Standard Edition Availability Groups” here.) However, you do need to be careful when setting up log shipping across different editions of SQL Server: whilst you can log ship from Enterprise to Standard/Web, if the database uses any Enterprise features then you’ll need to log ship to an Enterprise edition of SQL Server. And because you’ll be using the database for reporting, you’ll need to get it licensed.
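
One quick way to check for those Enterprise-only features is the sketch below, run in the database you intend to log ship; any rows returned mean a Standard or Web edition secondary won’t be able to restore it.

-- Lists Enterprise-only features persisted in the current database
-- (e.g. Compression, Partitioning). No rows means the database can be
-- restored onto a Standard or Web edition instance.
SELECT feature_name, feature_id
FROM sys.dm_db_persisted_sku_features;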

2015-11

2015-10

Incremental Service Modelling, Microsoft have released SQL Server 2014 Cumulative Update #3 for SQL Server 2014 SP1.

2015-09

  1. Paradise Lost: Remember, the hiring manager will have a pile of CVs and will perhaps dedicate all of 5 minutes to deciding who makes the initial shortlist and which to bin, so help them make a quick decision by keeping your CV short. The optimal length of a CV is no more than a page, or 2 pages if you have to. No one is going to care that 20 years ago you worked on a Saturday washing cars. You may have a great job history, but try not to go back too far, and the older the job, the less detail you need to include. This still gives you a chance to show something you’re really proud of in an old role without going into too much detail.
  2. Keep it Pertinent: You need to keep in mind that no 2 jobs are the same, yet chances are you will apply for these different roles with the same CV. So you need to focus on the profession you’re applying for. If you’re applying for a DBA job, keep that as your focus as you write it. The hiring manager is not going to care if your hobbies are going out with your friends and taking long walks on the beach. Cutting out fluff like this will also help keep it short. You don’t want your CV to be the next Voynich Manuscript.
  3. Keep it Presentable: So you’ve got to keep it pithy and pertinent, yet you also need to keep it clear. This is where most people who do keep it short and pertinent may fail, by trying to cram everything in. Consider your font type: use something like Calibri over Arial or Verdana. By all means keep the margins narrow, but don’t overdo it. Use bullet points as opposed to full sentences. Brevity is the key. If you want to draw the manager’s attention to something, highlight a word or two in bold. And use spell and grammar check! Despite technology’s best efforts, I write with many typos, because I am lazy when it comes to typing. Rather embarrassingly, as a DBA, I still misspell “FROM” as “FORM”, yet spell check will not pick this up. So get someone to proofread it. This is different from critiquing your CV: just make sure the sentences make sense and that there are no daft typos.
  4. Show The Value: The benefits of the technologies you know may not be immediately obvious to the hiring manager, so you need to demonstrate that you understand the business value of your expertise. Listing technologies shows nothing other than that you know how to write a list of technologies. You know how to write PowerShell modules… great. Who cares? You wrote a series of PowerShell modules that transferred backup files over a dodgy FTP connection and retried any failed transfers so that no one had to log in over the weekend to check the progress of transfers? OK, that sounds much better.
  5. Plan and Prepare: There’s a military adage called the 7 P’s. Mild expletive aside, it’s important to take the time to write your CV and give it the attention it deserves. Your CV is the first impression a hiring manager has of you. If you just bash out some bullet points and don’t put the effort into applying the four points above, you’re giving the impression that you aren’t really that serious about looking for a new job, so why should the hiring manager be serious about calling you in for an interview?

2015-08

Incremental Service Modelling, Microsoft have released SQL Server 2014 Cumulative Update #2 for SQL Server 2014 SP1.

2015-07

Read about it here. The rest of this post is still relevant; it’s just where you download the code from that has changed. This has proven to be a popular post, and the software has been downloaded a few times. I will post an update for SQL Server 2016 soon, though I think it is just references in the solution that need updating.

Today I am pleased to announce the release of MSBuildSsis2012 on Codeplex. But before I get into what it does, first a bit of background:

Back in May, and subsequently a few more times since, I’ve posted about an error I get occasionally in one of our custom tasks that run in our builds. This custom task is a special case: as the SSIS 2012 project extension is dtproj, it cannot be compiled using MSBuild. The typical solution would be to use DevEnv in MSBuild. And to automate deployment, the solution (that would work for most people) is to use the SSIS Deployment Wizard in silent mode. However, we don’t use the SSIS Deployment Wizard in our automated builds, as it does not work in silent mode when you have assembly references in the SSIS packages: it destroys those references and the dtsx packages fail whenever they run.

2015-06

previous post I shared some SQL scripts which would give you the number of undistributed commands by publication, by publication and article, and by distribution database. So you can measure undistributed commands at a very high granularity (articles), or at a very low granularity (the entire distribution database.) How granular you go depends entirely on where your pain is: you could of course filter out to only a few articles, or a particular publication.

best to avoid running Replication Monitor too often. Instead, you can use the information stored in your distribution database to get information such as agents hitting latency thresholds and a rise in undistributed transactions. Using the information stored in the distribution database ensures that monitoring is centralised. (I’m aware that there is a built-in check for replication exceeding a latency threshold; however, I have seen it report a high number of undistributed commands when in fact there were none, and false positives can be frustrating.)
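
As a rough illustration of pulling that information straight from the distribution database, the sketch below sums undistributed commands per publication; it assumes the default distribution database name and the standard distribution schema.

-- Undistributed commands per publication, taken from the distribution
-- database rather than Replication Monitor. Assumes the distribution
-- database is called [distribution].
USE distribution;

SELECT da.publication,
       SUM(ds.UndelivCmdsInDistDB) AS UndistributedCommands
FROM dbo.MSdistribution_status AS ds
INNER JOIN dbo.MSdistribution_agents AS da
    ON da.id = ds.agent_id
GROUP BY da.publication
ORDER BY UndistributedCommands DESC;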

2015-05

2015-04

However, replication is really something that you’re not going to learn through a book. You’re going to learn by working with replication in a real environment. And chances are a real replication environment is going to use more than one publication against groupings of subscribers.

Incremental Service Modelling, Microsoft have released SQL Server 2014 Cumulative Update #7.

2015-03

Data Loading performance considerations with Clustered Columnstore indexes. In it they talk of loading data into partitioned tables where the load is aligned. Interestingly, their recommended process is to load into a staging table with the clustered columnstore index created first.

This is still based on the assumption that you can hit the magic number of 1,048,576 rows, or at the very least 102,400, per bulk insert, in order to create compressed segments. If you cannot, then I still feel that the method of loading into a heap, and then creating the clustered columnstore index, is preferable. This is because if you trickle insert into a CCI, you will end up with a mixture of open/closed delta stores and some compressed segments. You could leave it to the tuple mover to compress these, and you would still be left with some open delta stores. However, as Remus Rusanu points out, this is not recommended. To complete the process quickly, and to remove any open delta stores, you would need to rebuild the partition.
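
Rebuilding just the affected partition is a one-liner; the index, table, and partition number below are placeholders.

-- Compresses any remaining open delta stores in partition 3 of the
-- clustered columnstore index. Names and partition number are placeholders.
ALTER INDEX CCI_FactSales ON dbo.FactSales
    REBUILD PARTITION = 3;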

2015-02

Incremental Service Modelling, Microsoft have released SQL Server 2014 Cumulative Update #6.

2015-01

2014-12

2014-11

Automate SSAS Syncing Part 1: Create SSAS Database Role
Automate SSAS Syncing Part 2: Create SSAS Role And Add Member

properties to this list to expand the info that you want. I’ve often mused about moving a lot of the logic that I constantly put in these scripts into higher-tier functions in PowerShell, or even creating an SSAS library, but time is always at a premium…

set the aggregation designs on your cube partitions. I recently created another script that contains a “Fix” flag to set the aggregation design, so you can just run it without the flag to get the info, or set the flag and fix the aggregation designs. I also added a “process” part that will run a process index on any partition whose aggregations are not fully processed.

The advantages of using this over the older version include the aforementioned process and checks, but also that you don’t have to specify the cube, measure group and aggregation design name. The disadvantage is that it assumes you want the latest aggregation design applied and processed. Somewhere along the way there will probably be a script which combines all of this nicely!

2014-10

Incremental Service Modelling, Microsoft have released SQL Server 2014 Cumulative Update #4.

LowerMemoryLimit and the UpperMemoryLimit, and once these are set, they are set in stone right?

Right?

SQL Server 2008 Service Pack 4. This arrives a few days after Service Pack 3 for SQL Server 2008 R2, but three years since the last one for this version of SQL. That’s a lot of updates!

when a query contains table joins in SQL Server 2014.

The poor performance may show up if all the following apply:

  • Your query contains an inner join
  • Both tables have a clustered columnstore index
  • There is a hash join executing in batch mode
  • The plan decides that a bitmap filter will be selective enough to be useful
  • The bitmap filter used is a complex bitmap filter, i.e. built over multiple columns or data types, rather than a simple single-column filter (the sketch below shows a join shape that ticks all of these boxes)
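
As a hedged sketch of a query shape that can hit all of the conditions above: both tables have a clustered columnstore index and the inner join is on more than one column, so any bitmap the optimizer pushes down is a complex one. All table and column names are made up.

-- Multi-column inner join between two clustered columnstore tables: if the
-- optimizer chooses a batch mode hash join and pushes a bitmap filter down,
-- that filter is a complex (multi-column) bitmap. Names are illustrative.
SELECT s.StoreId,
       SUM(s.Amount) AS TotalAmount
FROM dbo.FactSales AS s
INNER JOIN dbo.FactReturns AS r
    ON  r.StoreId  = s.StoreId
    AND r.SaleDate = s.SaleDate
GROUP BY s.StoreId;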

2014-09

truncating and deleting data from a table. Today I’m going to look into the behaviour of an identity column on a table when you use either a delete or a truncate to clear the data out of the table.
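
A minimal demonstration of the difference, using a throwaway table:

-- DELETE leaves the identity counter where it was; TRUNCATE resets it to the seed.
CREATE TABLE dbo.IdentityTest (Id int IDENTITY(1,1), Payload char(1));

INSERT INTO dbo.IdentityTest (Payload) VALUES ('a'), ('b'), ('c');

DELETE FROM dbo.IdentityTest;
INSERT INTO dbo.IdentityTest (Payload) VALUES ('d');
SELECT Id FROM dbo.IdentityTest;   -- 4: the counter carries on after a DELETE

TRUNCATE TABLE dbo.IdentityTest;
INSERT INTO dbo.IdentityTest (Payload) VALUES ('e');
SELECT Id FROM dbo.IdentityTest;   -- 1: TRUNCATE reseeds back to the original seed

DROP TABLE dbo.IdentityTest;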

Incremental Service Modelling, Microsoft have released SQL Server 2012 Service Pack 1 Cumulative Update #12.

Incremental Service Modelling, Microsoft have released SQL Server 2012 Service Pack 2 Cumulative Update 2!

SQL Relay 2014 is taking place in the UK. If you’re unfamiliar with SQL Relay, it is a series of 1-day events run throughout the UK. Each event is a single-track (or more, in the case of London) conference, with between 50 and 200 attendees and 6-8 hour-long talks by SQL Server professionals, MVPs, authors, technical experts etc.

Incremental Service Modelling, Microsoft have released SQL Server 2014 Cumulative Update #3.

2014-08

SQL Server 2014 Clustered Columnstore Indexes and Partitioning. The script below will create the database and the objects necessary. I’m going to state the obvious, but read through the scripts first before you run them; things like file locations on boxes are always different:
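
The original script isn’t reproduced in this excerpt, but a minimal sketch of the kind of setup it describes, with placeholder names, boundary values, and file locations, would look something like this:

-- Rough sketch only - not the original script from the post. Database name,
-- file locations, partition boundaries, and table definition are placeholders.
CREATE DATABASE ColumnstoreDemo
ON PRIMARY
    (NAME = ColumnstoreDemo_data, FILENAME = 'D:\SQLData\ColumnstoreDemo.mdf')
LOG ON
    (NAME = ColumnstoreDemo_log, FILENAME = 'D:\SQLLogs\ColumnstoreDemo.ldf');
GO

USE ColumnstoreDemo;
GO

CREATE PARTITION FUNCTION pf_SaleDate (date)
    AS RANGE RIGHT FOR VALUES ('2014-01-01', '2014-02-01', '2014-03-01');

CREATE PARTITION SCHEME ps_SaleDate
    AS PARTITION pf_SaleDate ALL TO ([PRIMARY]);

CREATE TABLE dbo.FactSales
(
    SaleDate date NOT NULL,
    StoreId  int  NOT NULL,
    Amount   decimal(18, 2) NOT NULL
) ON ps_SaleDate (SaleDate);

CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales
    ON dbo.FactSales
    ON ps_SaleDate (SaleDate);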

2014-07

Incremental Service Modelling, Microsoft have released SQL Server 2012 Service Pack 2 Cumulative Update 1!

min/max memory simple enough, but setting limits on IO and CPU is a more complex task.

2014-06

Incremental Service Modelling, no sooner have they released SQL 2014 RTM, Microsoft have now released SQL Server 2014 Cumulative Update #2.

Connect article already open.

SQL Server 2012 Service Pack 2 is available - but there’s a catch!

As part of their Incremental Service Modelling, Microsoft have released SQL Server 2012 Service Pack 2!

Incremental Service Modelling, Microsoft have released SQL Server 2012 Service Pack 1 Cumulative Update #10.

2014-05

here has the full details)

2014-04

2014-03

Incremental Service Modelling, Microsoft have released SQL Server 2012 Service Pack 1 Cumulative Update #9.

2014-02

ApexSQL Recover) if you’re quick enough.

https://twitter.com/Technitrain/status/429582908010274816

Great! Finally, a Management Pack for SSAS! I don’t know how many people know about System Center Operations Manager (SCOM), so a brief introduction: SCOM is, according to the Microsoft marketing executives:

“System Center 2012 – Operations Manager provides infrastructure monitoring that is flexible and cost-effective, helps ensure the predictable performance and availability of vital applications, and offers comprehensive monitoring for your datacenter and cloud, both private and public.”
This post on TechRepublic offers a decent intro on using SCOM to manage the performance of Windows Server 2012 with System Center.

yesterday. So I’m 1/3 of the way towards achieving the Microsoft Certified Solutions Associate (MCSA): SQL Server certification.

The second exam I passed is the “Delivering Continuous Value with Visual Studio 2012 Application Lifecycle Management”, which (along with the testing and TFS exams) is necessary for achieving the Microsoft Certified Solutions Developer Certification.

2014-01

Incremental Service Modelling, Microsoft have released SQL Server 2012 Service Pack 1 Cumulative Update #8.

2013-12

Part 2 focuses on clearing out the SSISDB by creating a new stored proc based on the one used by the maintenance job. If your maintenance job is taking hours to run, you need to check this out.)

When deploying our Ispacs through my builds, I’ve still been getting the intermittent “timeout” issues I first wrote about back in May, and this has been exacerbated by extra SSIS Projects added to the SSISDB catalog.

2013-11

SSAS Activity Viewer 2012 is the ability to cancel sessions and queries. Though there is no obvious way to do this through SSMS, you can in fact write an XMLA query to kill either the session, connection or command:

Incremental Service Modelling, Microsoft have released SQL Server 2012 Service Pack 1 Cumulative Update #7.

blogs.msdn site showed how to create a custom MSBuild task to associate changesets and work items only since the last successful build. Recently I needed to write this custom task myself, but for builds that were “PartiallySuccessful” as well as “Successful”, and was pleased to find such a complete target available. However as it is written for TFS 2008 it won’t work with TFS 2010 onwards. As the blog appears to be inactive now I’ve made the changes and put the code below. Hopefully anyone who needs it for TFS 2010 onwards can use the pingback to get here for the up to date code. The rest of the solution works fine.

sqlpackage.exe to extract the dacpac. Check out my September Six of The Best for another solution provided by Gianluca Sartori. If you’ve never created a module before, it’s very simple and I go through the steps below. But first, create a cmd file. The cmd file must contain one line like this for each database you want to extract. The %1 will be the location that we pass in through PowerShell.

SQL Relay R2 2013 is taking place in the UK. This is the 2nd series of events this year. SQL Relay is a series of 1-day events run throughout the UK. Each event is a single-track (or more, in the case of London) conference, with between 50 and 200 attendees and 6-8 hour-long talks by SQL Server professionals, MVPs, authors, technical experts etc.

2013-10

to download here. It is also available as a VM on Azure. According to a tweet from Glenn Berry, the version number in the demo is 12.0.1524.

Before you download and install it, here’s what you ought to know:

SSAS Activity Viewer 2012. The build number is 1.1.0.0. The new features have been highlighted below in the release notes section.

here!)

Hi all, yesterday morning I created a project based on an old blog post Upgrade Analysis Services Activity Viewer 2008 to SQL Server 2012. Over the course of the day I made a few changes that I wanted to see myself and this afternoon I have released a beta version, SQL Server Analysis Services Activity Viewer 2012 1.0.0.1, or SSASAV 2012 1.0.0.1 for short.

2013-09

2013-08

2013-07

this song stuck in my head. I’m sure (hopeful!) most of you are probably old enough to remember the Windows 95 Commercial. It’s a great song and a short post so if you want a bit of background music while you’re reading then open this in a new tab and enjoy.)

Recently I wanted to get the start times for a few SQL instances I have running. I immediately thought of the Get-Process cmdlet, so I checked out the members of get-process:


Anti Virus running on SQL file-types.

“Good one,” I thought to myself. And then I wondered if any of the files that make up the cubes on our SSAS boxes were being scanned, or whether they were excluded. We have some big cubes in our system, and by big I mean TBs’ worth of cubes, some of them over 700GB. So I fired up System Center 2012 Endpoint Protection on one of the SSAS boxes, and sure enough, all files were being scanned. But should they be? This Microsoft KB article confirmed my suspicion that they can be exempt from scanning: How to choose antivirus software to run on computers that are running SQL Server

How to: Use the Same Target in Multiple Project Files.

Further to my post yesterday about importing projects in MSBuild, it’s worth having a quick read over the “Order of Evaluation” section to understand that it matters where you import a project if you wish to set the properties and items of the imported project. (TL;DR -

The Story of Synchronized Settings - The Visual Studio Blog - Site Home - MSDN Blogs.

This blog from the Visual Studio team provides an insight into how and why synchronized settings came about.

2013-06

Visual Studio 2013 Preview and now the SQL Server 2014 CTP1. And that’s just the ones I’m focusing on! There’s the Windows 8.1 Preview as well as Windows Server 2012 R2. What with all the keynotes from Build and both TechEds this month, it is clear that Microsoft have certainly accelerated the release cycle and embraced a cloud-first development model. For me this is most noticeable in Team Foundation Services, the Azure-based source control solution. Features were turned on regularly before they were released in the three updates we’ve had since the release of TFS 2012 back in August. This is a big change from Microsoft’s previous strategy of developing for its on-premises products first and then pushing the features to the cloud afterwards.

At a later date I’ll dig more into features of SQL 2014, as this week my focus has been on Visual Studio 2013, which is out sometime this year, as well as doing my job in real life. The duties of blogging…

Extract DacPacs Using PowerShell Via sqlpackage.exe
Automate Database Builds Part Two: Extract Database Structure for Visual Studio Solution
Automate Database Build Part Three: Create a Database Publish Profile

Introduction

Continuing with my automated database deployment series, this post touches on the build.publish.xml file that is used to script and publish a SQL Server 2012 project in Visual Studio. This step is necessary as we will need this file in our automated build.

2013-05

SlowCheetah being one of them. This only works if you know at runtime the servers you are deploying to, or you only want a direct 1-to-1 connection. So on those occasions when you want to deploy to more than one box, updating at runtime does not work. Fortunately there is a way to update using WiX, and it is surprisingly straightforward.

Part 2 focuses on clearing out the SSISDB by creating a new stored proc based on the one used by the maintenance job. If your maintenance job is taking hours to run, you need to check this out.)

No WiX Wednesday this week, owing to commitments in real life. Instead, here is something regarding SSIS 2012 Deployment. Enjoy!

As part of our CI and Test Builds we have automated the deployment of two SSIS Projects. One is fairly large and the other one contains only two dtsx packages. Recently we have been getting timeout issues with the deployment of the solutions.

WebAppPool Element, the features available do not completely cover all the functionality that you may want to configure in the installer. For example, you can set how often you want the app pool recycled in minutes, but you can’t define when you want it done (say at 5am). Setting the app pool to recycle every 1440 minutes isn’t the same, as you’re restricted to installing at the time you want the 24-hour refresh to occur.

Testing The Waterhouse)

Firstly, let me introduce myself, my name is Gareth (author of Testing The Waterhouse), I’m a Senior QA Engineer and started working in QA the same time as Richard for the same company. I got into QA for similar reasons to Rich, I’d graduated and was finding it hard to get a job so ended up working for a small consultancy firm as a QA analyst.

When Richard got in touch with me a couple of weeks ago about guest blogging on his blog, I wasn’t really sure what to write about, seeing as he used to be a tester but has moved into the DevOps world. I started reading The Phoenix Project so I could try to gain an idea of what his new world is like; unfortunately, I’m only halfway through, so I didn’t really want to write a blog post about that…

2013-04

Extract DacPacs Using PowerShell Via sqlpackage.exe
Automate Database Builds Part Two: Extract Database Structure for Visual Studio Solution
Automate Database Build Part Three: Create a Database Publish Profile

Introduction

In Visual Studio 2012, the old Data Dude projects have been replaced by SQL Server Data Tools (SSDT), which encompasses a list of improvements to database development in Visual Studio and was formerly known by the codename “Juneau”. When SSDT was first released there were several issues with compatibility levels, but it has improved greatly since the December update.

brentozar.com ran one of their Technology Triage Tuesdays on compression in SQL Server. The video does not appear to be up yet; however, it provided good insight, certainly better than the TechNet pages on the same subject. As we have a multi-terabyte data warehouse at work, which is on an Enterprise-licensed instance of SQL Server 2012, I’m familiar with the subject of data compression in SQL. Until recently, however, another of our SQL Servers was on a Standard license. Recently this instance was upgraded to Enterprise, and so I was able to compress the database. Although it is not as large as our data warehouse, it was well worth compressing some of the larger tables.
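
For the larger tables, the pattern was essentially: estimate the savings first, then rebuild with compression. A hedged sketch, with a placeholder table name:

-- Estimate what PAGE compression would save on a large table, then apply it.
-- Table name is a placeholder; data compression requires Enterprise edition
-- on SQL Server 2012.
EXEC sp_estimate_data_compression_savings
     @schema_name      = 'dbo',
     @object_name      = 'FactSales',
     @index_id         = NULL,
     @partition_number = NULL,
     @data_compression = 'PAGE';

ALTER TABLE dbo.FactSales
    REBUILD WITH (DATA_COMPRESSION = PAGE);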

TFS Build Server Best Practices, I thought I’d share the stack I install and a few of the steps that I go through to get the build server working. Obviously your experiences will be different to mine, and I don’t recommend that you follow this to the letter unless you have a similar set of solutions to build. I’m also going to assume you have Windows Server installed and your build service up and running; there are plenty of blogs and MSDN articles that show you how to set this up.

2013-03

“you know, Write-Host isn’t all bad… so long as your verb is Show.”

Jeffrey Snover

I’m very aware that using the Write-Host cmdlet in PowerShell is generally frowned upon by PowerShell experts, mainly because it bypasses the pipeline, so it limits what you can do with that info. The reason I like to use it in my scripts is that I generally plan to write to the console in a variety of colours in order to convey the information and to highlight that there has been a change. If a cube is only partially processed it generally means that there is a partition within a measure group that is not processed, and when you have hundreds of partitions stored away it really can become a mammoth task to find out which one is unprocessed. So I have a script that searches for all partitions with a status of unprocessed and writes them to the screen in red-on-black writing. Little did I know that Write-Error existed, which does the exact same thing!

2013-02

2013-01

Extract DacPacs Using PowerShell Via sqlpackage.exe
Automate Database Builds Part Two: Extract Database Structure for Visual Studio Solution
Automate Database Build Part Three: Create a Database Publish Profile

Introduction

I’m a big fan of PowerShell and an even bigger fan of writing PowerShell modules which enable me to share my functions amongst the team and enable them to extend and augment these functions. In this article I am going to introduce extracting dacpacs through sqlpackage.exe, and how to create a nested module that contains a simple function to call a bat file which extracts the dacpac of a database via sqlpackage. This function will be aliased in order to reduce the number of keystrokes required to extract a dacpac.

2012-12

here) Hey folks, and welcome to my first proper blog post. One of the things that I like to monitor through my daily checks is the file size and the free space of each file within the databases. During a busy loading period, some databases can grow massively, so it’s important to keep an eye on the growth. Through the UI, SSMS only gives us the size of the files and the max size that they can grow to, which is clearly not very useful. Fortunately, a quick query on the dbo.sysfiles of each database that we want to monitor gives us some info:

" rel="attachment wp-att-11

but this isn’t all that useful. For starters, the size is in KB. This makes sense, as databases store data in 8 KB pages. Whilst it may be OK for a small database like this one, our data warehouses are far too big for sizes in KB to be useful. Also, we can infer the remaining space, but it’s not very helpful to have to figure this out for ourselves.
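
A query along the lines of the sketch below, run in each database you care about, returns both the size and the free space in MB directly rather than leaving you to do the maths. It uses sys.database_files and FILEPROPERTY, and is a sketch rather than the exact check script from the post.

-- Size and free space per file in the current database, converted from
-- 8 KB pages into MB so the numbers stay readable on bigger databases.
SELECT name                                                               AS FileName,
       size / 128.0                                                       AS FileSizeMB,
       size / 128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) / 128.0 AS FreeSpaceMB
FROM sys.database_files;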