by Andrzej Kowal

MSSQL on Amazon RDS runs out of space!

Let’s take a simple case which I needed to solve today: our MSSQL Server database hosted in Amazon RDS used up all the storage space we had assigned when it was created. OK, no problem, let’s increase it. Hmm, but there is no such option. So maybe take an RDS snapshot and restore it to a larger instance? Nope. OK, then let’s create a backup file, create a new RDS instance with larger storage and restore the backup file to that instance? Wrong again. This is simply not supported!

The solution is the SQL Azure Migration Wizard. It has many features, but the one we need moves an existing database between servers, including both schema and data. It supports any MSSQL server: Azure, RDS and standalone installations. To solve my problem I first created a new RDS instance with the same settings and larger storage, then migrated the DB from the old instance to the new one (with full schema and data) using this tool.

Let’s see the migration process in details:

  1. Download and unzip the application. If you don’t have SQL Server installed on the computer where you run the tool, you will also need to install Shared Management Objects (SMO) and the Command Line Tools from this link (this is for MSSQL 2012). You can find the necessary links after expanding the “Install Instructions” section.
  2. In some cases you need to edit the SQLAzureMW.exe.config file and adjust the path to the folder where the scripts are stored (by default it is c:\SQLAzureMW\BCPData). These scripts can get large, depending on your database size.
  3. Run the tool.
  4. Choose Analyze / Migrate -> Database.
  5. Connect to the source database (it can be RDS too – in my case it was the RDS instance where we had reached the storage limit). If you don’t have a source DB but a backup file – simply restore it to any MSSQL server you have access to (outside RDS, of course), then connect to that database.
  6. Generate the scripts. You might want to take a look at the advanced options – there are quite a few of them.
  7. After the script is generated, connect to the target database (if it does not exist, you can create it during the migration process) and execute the import.


That’s it. I was working on a 20 GB database: it took 15 minutes to pull the data and prepare the scripts, and around 90 minutes to push the data to the target DB. To speed things up I ran the tool on an AWS EC2 instance launched in the same region as the RDS instance. After the import I even managed to rename the new RDS instance to the old name, so I could keep the old connection strings in all the config files.

And what about Amazon’s guidelines for importing a database into RDS? If you take a close look, you will see that one of the suggested scenarios involves bcp.exe (bulk copy, which is part of the SQL Command Line Tools). In fact, SQLAzureMW uses bcp.exe under the hood to perform the migration. My verdict: go for SQLAzureMW. Its simplicity makes it the best choice for an RDS MSSQL DBA.
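To make the bcp round-trip concrete, here is a small sketch that builds the kind of export/import command lines SQLAzureMW drives under the hood. The server names, database, table name and credentials are placeholders, not values from the migration above; only the data folder matches the tool's default.

```python
# Hypothetical sketch of the bcp round-trip SQLAzureMW performs under the hood.
# Server names, database, table and credentials below are placeholders.
def bcp_command(direction, server, table="MyDb.dbo.MyTable",
                datafile=r"C:\SQLAzureMW\BCPData\MyTable.dat"):
    # direction is "out" (export to a data file) or "in" (bulk-load from it);
    # -n keeps bcp's native data format, so no type conversion on reload.
    return f"bcp {table} {direction} {datafile} -n -S {server} -U admin -P ***"

# Export from the full instance, then load into the new, larger one.
print(bcp_command("out", "old-instance.xxxx.rds.amazonaws.com"))
print(bcp_command("in", "new-instance.xxxx.rds.amazonaws.com"))
```

Running one such pair of commands per table is essentially what the "generate scripts" and "execute import" steps automate for you.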

by Wojciech Sura

Number of rows in all tables in MSSQL Server

During reverse-engineering of a project, I suspected that some of the database tables were not used at all. I wanted to get the number of rows in each table, but there were around 200 tables and I didn’t want to go through them one by one. A quick Internet search revealed this useful query:

SELECT
    t.NAME AS TableName,
    i.name AS indexName,
    p.[Rows],
    sum(a.total_pages) AS TotalPages,
    sum(a.used_pages) AS UsedPages,
    sum(a.data_pages) AS DataPages,
    (sum(a.total_pages) * 8) / 1024 AS TotalSpaceMB,
    (sum(a.used_pages) * 8) / 1024 AS UsedSpaceMB,
    (sum(a.data_pages) * 8) / 1024 AS DataSpaceMB
FROM
    sys.tables t
INNER JOIN
    sys.indexes i ON t.OBJECT_ID = i.object_id
INNER JOIN
    sys.partitions p ON i.object_id = p.OBJECT_ID AND i.index_id = p.index_id
INNER JOIN
    sys.allocation_units a ON p.partition_id = a.container_id
WHERE
    i.index_id <= 1
GROUP BY
    t.NAME, i.object_id, i.index_id, i.name, p.[Rows]

Source: Stack Overflow.
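The size columns rely on SQL Server allocating storage in fixed 8 KB pages, which is what the `(sum(...) * 8) / 1024` expressions encode. A quick sanity check of that arithmetic:

```python
# SQL Server stores data in fixed 8 KB pages, so the query's
# (pages * 8) / 1024 expressions convert page counts to whole megabytes.
def pages_to_mb(pages: int) -> int:
    return (pages * 8) // 1024  # integer division, matching SQL's int math

print(pages_to_mb(1280))    # 1280 pages * 8 KB = 10240 KB -> 10 MB
print(pages_to_mb(131072))  # 131072 pages -> 1024 MB (1 GB)
```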

by Njål

Quick row count from Microsoft SQL Database


Recently, I’ve been involved in a cloud-provider project where I had to do a row count on a Microsoft SQL database table with more than 50,000,000 rows in it.



SELECT COUNT(*) FROM myTable;

took about 10 seconds – since it performs a full table scan.


Here’s an alternative (still accurate) way of doing it that takes under 1 second – it reads the per-partition row counts SQL Server already maintains (index_id 0 is the heap, index_id 1 the clustered index):

SELECT SUM (row_count)
FROM sys.dm_db_partition_stats
WHERE object_id=OBJECT_ID('myTable')
AND (index_id=0 or index_id=1);


You can run this from Microsoft Entity Framework as well:

long total = dbContext.Database.SqlQuery<long>("SELECT SUM (row_count) FROM sys.dm_db_partition_stats WHERE object_id=OBJECT_ID('myTable') AND (index_id=0 or index_id=1);").First();