by Andrzej Kowal

MSSQL on Amazon RDS runs out of space!

Let’s take a simple case which I needed to solve today: our MSSQL Server database hosted in Amazon RDS used up all the storage space we assigned when it was created. OK, no problem, let’s increase it. Hmm, but there is no such option. So maybe take an RDS snapshot and restore it to a larger instance? Nope. OK, then let’s create a backup file, create a new RDS instance with larger storage and restore the backup file to that instance? Wrong again. This is just not supported!
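Before hunting for a migration path, it is worth confirming how much of the allocated storage the data files actually consume. A quick sanity check like this one (standard catalog views, nothing RDS-specific) shows allocated size and internal free space per file:

```sql
-- Allocated size and internal free space per database file.
-- size and FILEPROPERTY(...,'SpaceUsed') are both in 8 KB pages.
SELECT
    name AS FileName,
    size * 8 / 1024 AS AllocatedMB,
    (size - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)) * 8 / 1024 AS InternalFreeMB
FROM sys.database_files;
```

If InternalFreeMB is near zero across the data files, the instance really is full and a move to larger storage is the only way forward.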

The solution is the SQL Azure Migration Wizard. It has many features, but the one we need is moving an existing database between servers. The migration includes schema and data, and it supports any MSSQL server, including Azure, RDS and standalone installations. To solve my problem I first created a new RDS instance with the same settings and larger storage, then migrated the database from the old instance to the new one (with full schema and data) using this tool.

Let’s see the migration process in detail:

  1. Download and unzip the application. If you don’t have SQL Server installed on the computer where you run this tool, you will need to install Shared Management Objects (SMO) and Command Line Tools from this link: (this is for MSSQL 2012). You can find the necessary links after expanding the “Install Instructions” section.
  2. In some cases you need to edit the SQLAzureMW.exe.config file and adjust the path to the folder where the scripts are stored (by default it is c:\SQLAzureMW\BCPData). These scripts can get large, depending on your database size.
  3. Run the tool.
  4. Choose Analyze / Migrate -> Database.
  5. Connect to the source database (it can be RDS too – in my case it was the RDS instance where we had reached the storage size limit). If you don’t have a source DB but only a backup file, simply restore it to any MSSQL server you have access to (outside RDS, of course), then connect to that database.
  6. Generate the scripts. You might want to take a look at the advanced options – there are quite a few of them.
  7. After the script is generated, connect to the target database (if it does not exist, you can create it during the migration process) and execute the import.


That’s it. I was working on a 20 GB database; it took 15 minutes to pull the data and prepare the scripts, and around 90 minutes to push the data to the target DB. To speed things up I ran the tool on an AWS EC2 instance launched in the same region as the RDS instance. After the import I even managed to rename the new RDS instance to the old name, so I could keep the old connection strings in all the config files.

And what about Amazon’s guidelines for importing a database into RDS? If you take a close look, you will see that one of the suggested scenarios involves bcp.exe (bulk copy, which is part of the SQL Command Line Tools). SQLAzureMW actually uses bcp.exe under the hood to perform the migration. My verdict: go for SQLAzureMW. Its simplicity makes it the best choice for an RDS MSSQL DBA.

by Wojciech Sura

Number of rows in all tables in MSSQL Server

During reverse engineering of a project, I suspected that some of the database tables were not used at all. I wanted to get the number of rows in each table, but there were around 200 tables and I didn’t want to go through them one by one. A quick search on the Internet revealed this useful query:

SELECT
    t.NAME AS TableName,
    i.name AS IndexName,
    p.[Rows] AS RowCounts,
    SUM(a.total_pages) AS TotalPages,
    SUM(a.used_pages) AS UsedPages,
    SUM(a.data_pages) AS DataPages,
    (SUM(a.total_pages) * 8) / 1024 AS TotalSpaceMB,
    (SUM(a.used_pages) * 8) / 1024 AS UsedSpaceMB,
    (SUM(a.data_pages) * 8) / 1024 AS DataSpaceMB
FROM sys.tables t
INNER JOIN sys.indexes i ON t.OBJECT_ID = i.object_id
INNER JOIN sys.partitions p ON i.object_id = p.OBJECT_ID AND i.index_id = p.index_id
INNER JOIN sys.allocation_units a ON p.partition_id = a.container_id
WHERE i.index_id <= 1
GROUP BY t.NAME, i.object_id, i.index_id, p.[Rows]

Source: Stack Overflow.
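Since the goal here was to spot unused tables, the same catalog views can be narrowed down to list only the empty ones. A small sketch of that variant:

```sql
-- List tables with zero rows. index_id <= 1 keeps only the heap (0)
-- or clustered index (1), so each table is counted once.
SELECT t.name AS TableName
FROM sys.tables t
INNER JOIN sys.partitions p
    ON p.object_id = t.object_id AND p.index_id <= 1
GROUP BY t.name
HAVING SUM(p.rows) = 0
ORDER BY t.name;
```

Note that sys.partitions row counts are maintained by the engine and can be marginally out of date, but for finding dead tables that is usually good enough.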

by Njål

Windows Azure SQL – Entity Framework 6 Retry Logic


One of the biggest drawbacks of using Azure SQL is that the connection to the database is a lot more fragile and unstable than with a database server on your LAN. Also – your Azure database runs on servers alongside several hundred other databases – so the database server might disconnect you at any given time to ensure fairness of service, etc.

These are transient errors – meaning that a retry of the query will most often succeed.

Microsoft has released several code blocks and guidelines to help developers use this retry logic –

And luckily – the EntityFramework team has added support for this in version 6 as well.

Here’s how you can get started with EF6 (beta 1), using the built-in SqlAzureExecutionStrategy:


1. Install the EF6 beta by opening the NuGet console in Visual Studio and typing
     PM> Install-Package EntityFramework -Pre

2. Install the EF6 Template generator – this will generate the C# code for you, which handles the communication with the database.

3. Restart Visual Studio

4. Add New Item -> EDMX file. Connect to the database and/or specify your data model, just like in older versions of Entity Framework.

5. Right click in the EDMX editor – and select Add Code Generation Item. Select EF 6.x DbContext Generator.

6. Compile. You should now have EF6 up and running. Only one crucial part is missing – the Azure retry logic.

7. Add a class inside your project with the following code. It will get picked up on application start and ensure that all EF database queries are retried if they fail.


using System;
using System.Data.Entity.Config;
using System.Data.Entity.Infrastructure;
using System.Data.Entity.SqlServer;

namespace Www.Business.Database
{
    public class SqlAzureDbConfiguration : DbConfiguration
    {
        public SqlAzureDbConfiguration()
        {
            AddExecutionStrategy(() => new SqlAzureExecutionStrategy(), null);
        }

        protected void AddExecutionStrategy(Func<IExecutionStrategy> getExecutionStrategy, string serverName)
        {
            AddDependencyResolver(new ExecutionStrategyResolver<IExecutionStrategy>("SqlAzureExecutionStrategy", serverName, getExecutionStrategy));
        }
    }
}

(The example provided here: does not work/compile)

by Njål

Quick row count from Microsoft SQL Database


Recently, I’ve been involved in a cloud provider project where I had to do a row count on a Microsoft SQL database table with more than 50,000,000 rows in it.



A plain SELECT COUNT(*) FROM myTable took about 10 seconds – since it performs a full table scan.


Here’s an alternative (still accurate) way of doing it that takes under 1 second:

SELECT SUM (row_count)
FROM sys.dm_db_partition_stats
WHERE object_id=OBJECT_ID('myTable')
AND (index_id=0 or index_id=1);


You can run this from Microsoft Entity Framework as well:

long total = dbContext.Database.SqlQuery<long>("SELECT SUM (row_count) FROM sys.dm_db_partition_stats WHERE object_id=OBJECT_ID('myTable') AND (index_id=0 or index_id=1);").First();

by Stian

Long wait-time when taking SQL Server database offline

When trying to take a SQL Server database offline, you might find yourself waiting, waiting and waiting because some process is hogging the database. This is how you fix it. Run:

ALTER DATABASE dbname SET OFFLINE
In some cases you might get an error saying:

ALTER DATABASE failed because a lock could not be placed on database 'dbname'. Try again later.

After you get this error, run

EXEC sp_who2

Look for the database in the result and find the SPID for that connection, then run

KILL <SPID>
That should do the trick!
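Alternatively, the waiting and the manual KILL can often be avoided in a single step by forcing a rollback of open transactions (standard T-SQL; dbname stands for your database):

```sql
-- Take the database offline immediately,
-- rolling back any open transactions that hold locks on it.
ALTER DATABASE dbname SET OFFLINE WITH ROLLBACK IMMEDIATE;
```

Be aware that ROLLBACK IMMEDIATE kicks out every connected session, so only use it when you are sure nothing important is mid-transaction.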

by Stian

Missing relations in EntityFramework model when generating from database

We recently had a problem where the generated model in Entity Framework 4 would not create all the relationships we had in our database.

We were moving from a Linq2Sql model to Entity Framework and tried to generate the model automatically from the database, but a few of the relationships kept going missing. After hours of looking at the missing relationships in the database, comparing them to see why these went missing while all the others were fine, we got a hint on Stack Overflow to check the indexes.

Having a unique non-clustered index on the primary key caused a problem for the EF model; changing these to a unique clustered index fixed the problem.
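As a sketch of the fix (the table, column and index names below are illustrative, not from our actual schema), the unique non-clustered index is dropped and recreated as clustered:

```sql
-- Illustrative names: replace with your own table and index.
DROP INDEX UX_Customer_Id ON dbo.Customer;
CREATE UNIQUE CLUSTERED INDEX UX_Customer_Id
    ON dbo.Customer (CustomerId);
```

If the unique index is the one backing a PRIMARY KEY constraint, you drop and re-add the constraint with PRIMARY KEY CLUSTERED instead of using DROP INDEX.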

by Njål

Problems with Recycle Bin in Umbraco – No Document exists with Version '00000000-0000-0000-0000-000000000000'

I recently had some problems emptying the Recycle Bin in Umbraco 4.6.1. I clicked Empty Recycle Bin, but after a few items got removed the progress suddenly stopped, showing 201 items remaining. Crap.

Internet Explorer displayed a JavaScript error message (with a stack trace):

No Document exists with Version '00000000-0000-0000-0000-000000000000'

Waiting, restarting IIS and trying to empty the recycle bin again did not help. Time to Google.

I found part of the solution here:

The SQL

SELECT * FROM umbracoNode, cmsContent -- return nodes
WHERE nodeObjectType = 'C66BA18E-EAF3-4CFF-8A22-41B16D66A972' -- that are of type 'Content'
AND umbracoNode.id NOT IN (SELECT nodeId FROM cmsContent) -- but are not in the 'Content' table

helped me identify orphan nodes that should be removed.

For each unique umbracoNode.id in the result (you will get a lot of rows with the same id using the SQL above) I ran the SQL below to remove these nodes.

DECLARE @TEMPID int
SET @TEMPID = 1843 -- TYPE IN ID HERE - e.g. 1843

DELETE FROM cmsContent
WHERE (nodeId IN (SELECT id FROM umbracoNode WHERE parentID = @TEMPID))

DELETE FROM cmsPropertyData
WHERE (contentNodeId IN (SELECT id FROM umbracoNode WHERE parentID = @TEMPID))

DELETE FROM umbracoNode
WHERE (id IN (SELECT id FROM umbracoNode WHERE parentID = @TEMPID))

DELETE FROM umbracoNode
WHERE id = @TEMPID

Then I tried emptying the recycle bin again. Now it deleted 10 more items before it stopped.

I repeated the steps above – first I ran the select SQL to find the new orphan nodes, then I ran the delete statements. I was able to remove 43 more items from the recycle bin this way.

I repeated this procedure until my Recycle Bin was empty (I had to do it 6-7 times or so).
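To see whether another pass is needed, the orphan check can be reduced to a simple count (same table and nodeObjectType as in the select above):

```sql
-- Number of distinct orphaned content nodes still present.
SELECT COUNT(DISTINCT id) AS OrphanCount
FROM umbracoNode
WHERE nodeObjectType = 'C66BA18E-EAF3-4CFF-8A22-41B16D66A972'
AND id NOT IN (SELECT nodeId FROM cmsContent);
```

When OrphanCount reaches zero, emptying the recycle bin should complete without the version error.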

Make sure to backup your database before trying this approach.

by Aage Andre

Access error in the SharePoint config database in SharePoint 2007

We ran into a problem where we could not start the search service after installing MOSS 2007.

Apparently WSS does not like it when you use the FQDN (fully qualified domain name) in the Sharepoint_Config database.

Quick fix: run the following against the config database in SQL Manager:
(replace 'NETBIOS' and 'FQDN' with the server names etc.)

We just went for it, but best practice is probably to run a backup of the database first 😉

by Andreas

Free .DBF Query Manager

A recent request from a client involved digging into some data from an old CRM system that uses the .DBF format. A quick search online showed that people actually charge money for query managers supporting this format – which, by the way, is obsolete and slow. I therefore sacrificed a couple of hours of my precious time (..) and put together a very simple DBF Query Manager, written in managed C# code.

This might be developed further, but I honestly doubt it. It solved my issues, and might solve yours as well.

1. Choose a directory where the DBF files are stored (DBF is a flat-file database system where each table is represented as a file within a directory). In my example the .DBF files are stored in Q:\DBFDemo:


2. Click Connect to open the Main Query Window:


3. Run your queries as you would in an overpriced, commercial, dishonest version! Including nested queries – any syntactically correct query will work.
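For example, a query like this works fine (Customers is a hypothetical table here, i.e. a Customers.DBF file in the chosen directory):

```sql
-- Each .DBF file in the selected directory appears as a table named after the file.
SELECT Name, City
FROM Customers
WHERE City = 'Oslo'
ORDER BY Name;
```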

Note: this thing is free. Free like it used to be in the old days – with no strings attached, and no “max 30 days use before we rip you off” scheme hidden somewhere in the code. It took me a couple of hours to build – this blog post included! But for that very reason there is hardly any error handling: it will display a “Table not found” error message whenever it detects something wrong – like if you screw up the query. There is no intellisense (or intelligence..), and it is READ-ONLY! At least for now..

But it works! Like a charm! And THAT’s the main point!

Click here to download (requires .NET Framework 4.0)