Last Backup Occurred

This script shows when the last backup occurred, as well as the backup type, for each database. It covers all the available backup types and pivots the output into a simple, easily readable format.
It also states the reason, if any, for the log reuse wait, and the database recovery model in use.
Depending on your requirements you may also wish to alter the script to exclude copy-only backups. This can be done by adding the following filter; a sketch showing where it fits in the main script is included after the script below:

SELECT [database_name],[backup_finish_date],[type]
FROM msdb..backupset
WHERE [is_copy_only] <> 1

--http://jongurgul.com/blog/last-backup-occurred/
SELECT
 QUOTENAME(d.[name]) [DatabaseName]
,SUSER_SNAME(d.[owner_sid]) [DatabaseOwner]
,d.[compatibility_level] [Compatibility]
,d.[collation_name] [CollationName]
,d.[is_read_only] [IsReadOnly]
,d.[is_auto_close_on] [IsAutoClose]
,d.[is_auto_shrink_on] [IsAutoShrink]
,d.[recovery_model_desc] [RecoveryModel]
,d.[page_verify_option_desc] [PageVerify]
,d.[state_desc] [State]
,d.[log_reuse_wait_desc] [LogReuse]
,pivbus.[D] [Database]
,pivbus.[I] [DifferentialDatabase]
,pivbus.[L] [Log]
,pivbus.[F] [FileOrFilegroup]
,pivbus.[G] [DifferentialFile]
,pivbus.[P] [Partial]
,pivbus.[Q] [DifferentialPartial]
FROM sys.databases d
LEFT OUTER JOIN
(
SELECT
piv.[database_name],[D],[I],[L],[F],[G],[P],[Q]
FROM
(
SELECT [database_name],[backup_finish_date],[type]
FROM msdb..backupset
) bus PIVOT
(MAX([backup_finish_date]) FOR [type] IN ([D],[I],[L],[F],[G],[P],[Q])) piv
) pivbus ON d.[name] = pivbus.[database_name]
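
For example, the derived table (bus) that feeds the pivot in the script above could be filtered like this; a rough sketch, assuming copy-only backups should simply be ignored:

--Sketch: last backup date per type, excluding copy-only backups
SELECT piv.[database_name],[D],[I],[L],[F],[G],[P],[Q]
FROM
(
SELECT [database_name],[backup_finish_date],[type]
FROM msdb..backupset
WHERE [is_copy_only] <> 1 --filter added here
) bus PIVOT
(MAX([backup_finish_date]) FOR [type] IN ([D],[I],[L],[F],[G],[P],[Q])) piv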

Get-VolumeFreeSpace

Here is an example function to return free space remaining on a volume with associated details. Updated 27/04/2012 to fix mount points.

Usage Examples:
Get-VolumeFreeSpace|ft
Get-VolumeFreeSpace|Select ComputerName,Name,MountPoint,FreeSpaceGiB|ft
Get-VolumeFreeSpace|Where-Object {!$_.SystemVolume}|Select ComputerName,Name,MountPoint,FreeSpaceGiB|ft
Get-VolumeFreeSpace "ComputerA","ComputerB" | ft

#http://jongurgul.com/blog/get-volumefreespace/
Function Get-VolumeFreeSpace{ 
    Param([String[]]$Computers) 
    If (!$Computers) {$Computers = $ENV:ComputerName} 
    $Base = New-Object PSObject; 
    $Base | Add-Member Noteproperty ComputerName -Value $Null; 
    $Base | Add-Member Noteproperty DeviceID -Value $Null; 
    $Base | Add-Member Noteproperty SystemVolume -Value $Null; 
    $Base | Add-Member Noteproperty DriveType -Value $Null; 
    $Base | Add-Member Noteproperty Name -Value $Null; 
    $Base | Add-Member Noteproperty MountPoint -Value $Null; 
    $Base | Add-Member Noteproperty FreeSpaceGiB -Value $Null; 
    $Results =  New-Object System.Collections.Generic.List[System.Object]; 
 
    ForEach ($Computer in $Computers){ 
        $Volume = Get-WmiObject -Class "Win32_Volume" -ComputerName $Computer; 
        $MountPoint = Get-WmiObject -Class "Win32_MountPoint" -ComputerName $Computer | Select @{Name="DeviceID";Expression={$_.Volume.ToString().Substring($_.Volume.ToString().IndexOf("`"")).Replace("`"","").Replace("\\","\")}},@{Name="MountPoint";Expression={$_.Directory.ToString().Substring($_.Volume.ToString().IndexOf("`"")).Replace("`"","").Replace("\\","\")}};   
        [String[]]$Mounts = $MountPoint|%{$_.DeviceID} 
        $Volume |%{ 
        $Entry = $Base | Select-Object * 
        $Entry.ComputerName = $Computer; 
        $Entry.DeviceID =  $_.DeviceID; 
        $Entry.SystemVolume =  $_.SystemVolume; 
        $Entry.DriveType = $_.DriveType; 
        $Entry.Name = $_.Name; 
        $Entry.FreeSpaceGiB = [Math]::Round($_.FreeSpace/1GB,3); 
        $DeviceID = $_.DeviceID;         
        $MountPoint| Where-Object{$DeviceID -contains $_.DeviceID}|%{
        $Local = $Entry | Select-Object *
        $Local.MountPoint = $_.MountPoint;
        [Void]$Results.Add($Local);
        }; 
        $_|Where-Object{$Mounts -notcontains $_.DeviceID}|%{[Void]$Results.Add($Entry);}; 
        } 
    }     
    $Results 
}

Dell Service Tag

The Dell SOAP API now returns "HTTP Error 503. The service is unavailable", so the code listed below no longer functions.

Dell provide an alternative method to retrieve the information via a REST API. Please search for Dell Support Services APIs for more details.

A simple call can be made via the following link (note that you will need to register with Dell to get an API key to use this service):

https://api.dell.com/support/v2/assetinfo/header/tags.xml?svctags={svctags}&apikey={apikey}

Here is an example of how to call the Dell GetAssetInformation web service so that you can retrieve information about your server from its service tag. It simply returns the results in XML format; however, if anyone has any ideas or requests please feel free to ask.

Updated:
http://xserv.dell.com/jigsawwebservices/AssetService.asmx?WSDL changed to http://xserv.dell.com/services/AssetService.asmx?WSDL

Usage Examples:
Get-DellAssetInformation
Get-DellAssetInformation "MyTag"



Function Get-DellAssetInformation{
Param([String]$ServiceTag = $(Get-WmiObject -Class "Win32_Bios").SerialNumber);
	Try{
	$AssetService = New-WebServiceProxy -Uri "http://xserv.dell.com/services/AssetService.asmx?WSDL";
	$ApplicationName = "AssetService";
	$Guid = [Guid]::NewGuid();
	$Asset = $AssetService.GetAssetInformation($Guid,$ApplicationName,$ServiceTag);
	$Writer = New-Object "System.IO.StringWriter";
	$XmlSerializer = New-Object System.Xml.Serialization.XmlSerializer($Asset.GetType());
	$XmlSerializer.Serialize($Writer,$Asset);
	[String]$Result = $Writer.ToString();
	$Writer.Flush();
	$Writer.Close();
	Return $Result;
	}
	Catch{
	Write-Host $($_.Exception.Message);	
	}
}

Get-DatabaseSizes

Simple example using SMO to get database storage information. You will need SMO installed for this script to work.

Note that Size relates to the file size on disk, whereas space refers to the storage used or available within the file.

Usage Examples:

Get-DatabaseSizes
Get-DatabaseSizes . | Select *
Get-DatabaseSizes . | Select DatabaseName,LogicalName,FileName,Size_MiB
Get-DatabaseSizes . | Select FileName,Size_MiB | Format-Table
Get-DatabaseSizes . | Where-Object {$_.DatabaseName -like "m*"}

#http://jongurgul.com/blog/get-databasesizes/
[void][Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | out-null;
Function Get-DatabaseSizes([String] $SQLInstanceName = $Env:COMPUTERNAME)
{
 $SQLInstance = New-Object "Microsoft.SqlServer.Management.Smo.Server" $SQLInstanceName;
 $Results =  New-Object System.Collections.Generic.List[System.Object];
 ForEach ($db in $SQLInstance.Databases)
 {
  ForEach ($fg in $db.FileGroups)
  {
  $Results += $fg.Files | Select-Object @{Name="DatabaseName";Expression={$db.Name}},
  @{Name="FileGroupName";Expression={$fg.Name}},
  @{Name="SpaceUsed_MiB";Expression={([Math]::Round($_.UsedSpace/1KB,3))}},
  @{Name="AvailableSpace_MiB";Expression={([Math]::Round(($_.Size-$_.UsedSpace)/1KB,3))}},
  @{Name="Size_KiB";Expression={$_.Size}},
  @{Name="Size_MiB";Expression={([Math]::Round($_.Size/1KB,3))}},
  @{Name="Size_GiB";Expression={([Math]::Round($_.Size/1MB,3))}},
  @{Name="Size_bytes";Expression={($_.Size*1024)}},
  @{Name="AutoGrowth";Expression={if($_.GrowthType-eq "Percent"){$_.Growth.ToString()+" %"}else{([Math]::Round($_.Growth/1KB,3)).ToString()+" MiB"}}},
  @{Name="MaxSize";Expression={($_.MaxSize)}},
  @{Name="FileType";Expression={"ROWS"}},
  @{Name="IsOffline";Expression={($_.IsOffline)}},
  @{Name="IsReadOnly";Expression={($_.IsReadOnly)}},
  @{Name="LogicalName";Expression={($_.Name)}},
  @{Name="FileID";Expression={($_.ID)}},
  @{Name="FileName";Expression={($_.FileName.Substring($_.FileName.LastIndexOf("\")+1))}},
  @{Name="Path";Expression={($_.FileName)}}
  }
  $Results += $db.LogFiles| Select-Object @{Name="DatabaseName";Expression={$db.Name}},
  @{Name="FileGroupName";Expression={$null}},
  @{Name="SpaceUsed_MiB";Expression={([Math]::Round($_.UsedSpace/1KB,3))}},
  @{Name="AvailableSpace_MiB";Expression={([Math]::Round(($_.Size-$_.UsedSpace)/1KB,3))}},
  @{Name="Size_KiB";Expression={$_.Size}},
  @{Name="Size_MiB";Expression={([Math]::Round($_.Size/1KB,3))}},
  @{Name="Size_GiB";Expression={([Math]::Round($_.Size/1MB,3))}},
  @{Name="Size_bytes";Expression={($_.Size*1024)}},
  @{Name="AutoGrowth";Expression={if($_.GrowthType-eq "Percent"){$_.Growth.ToString()+" %"}else{([Math]::Round($_.Growth/1KB,3)).ToString()+" MiB"}}},
  @{Name="MaxSize";Expression={($_.MaxSize)}},
  @{Name="FileType";Expression={"LOG"}},
  @{Name="IsOffline";Expression={($_.IsOffline)}},
  @{Name="IsReadOnly";Expression={($_.IsReadOnly)}},
  @{Name="LogicalName";Expression={($_.Name)}},
  @{Name="FileID";Expression={($_.ID)}},
  @{Name="FileName";Expression={($_.FileName.Substring($_.FileName.LastIndexOf("\")+1))}},
  @{Name="Path";Expression={($_.FileName)}}
 }
 RETURN $Results
}

http://gallery.technet.microsoft.com/scriptcenter/da2de04e-4f3d-4a84-8e2a-7950b562cb84

Transaction Log Usage By Session ID

A script to show the log usage on a per session basis, which can be useful when determining the potential impact of a query. It can be good to know, for example, if a session has generated a lot of log records, because killing that session would then take a substantial amount of time to roll back. I bring this up because sometimes a session may have been running for a long time, but it might not have actually done anything that requires rollback.

--http://jongurgul.com/blog/transaction-log-usage-session-id/
SELECT 
 DB_NAME(tdt.[database_id]) [DatabaseName]
,d.[recovery_model_desc] [RecoveryModel]
,d.[log_reuse_wait_desc] [LogReuseWait]
,es.[original_login_name] [OriginalLoginName]
,es.[program_name] [ProgramName]
,es.[session_id] [SessionID]
,er.[blocking_session_id] [BlockingSessionId]
,er.[wait_type] [WaitType]
,er.[last_wait_type] [LastWaitType]
,er.[status] [Status]
,tat.[transaction_id] [TransactionID]
,tat.[transaction_begin_time] [TransactionBeginTime]
,tdt.[database_transaction_begin_time] [DatabaseTransactionBeginTime]
--,tst.[open_transaction_count] [OpenTransactionCount] --Not present in SQL 2005
,CASE tdt.[database_transaction_state]
 WHEN 1 THEN 'The transaction has not been initialized.'
 WHEN 3 THEN 'The transaction has been initialized but has not generated any log records.'
 WHEN 4 THEN 'The transaction has generated log records.'
 WHEN 5 THEN 'The transaction has been prepared.'
 WHEN 10 THEN 'The transaction has been committed.'
 WHEN 11 THEN 'The transaction has been rolled back.'
 WHEN 12 THEN 'The transaction is being committed. In this state the log record is being generated, but it has not been materialized or persisted.'
 ELSE NULL --http://msdn.microsoft.com/en-us/library/ms186957.aspx
 END [DatabaseTransactionStateDesc]
,est.text [StatementText]
,tdt.[database_transaction_log_record_count] [DatabaseTransactionLogRecordCount]
,tdt.[database_transaction_log_bytes_used] [DatabaseTransactionLogBytesUsed]
,tdt.[database_transaction_log_bytes_reserved] [DatabaseTransactionLogBytesReserved]
,tdt.[database_transaction_log_bytes_used_system] [DatabaseTransactionLogBytesUsedSystem]
,tdt.[database_transaction_log_bytes_reserved_system] [DatabaseTransactionLogBytesReservedSystem]
,tdt.[database_transaction_begin_lsn] [DatabaseTransactionBeginLsn]
,tdt.[database_transaction_last_lsn] [DatabaseTransactionLastLsn]
FROM sys.dm_exec_sessions es
INNER JOIN sys.dm_tran_session_transactions tst ON es.[session_id] = tst.[session_id]
INNER JOIN sys.dm_tran_database_transactions tdt ON tst.[transaction_id] = tdt.[transaction_id]
INNER JOIN sys.dm_tran_active_transactions tat ON tat.[transaction_id] = tdt.[transaction_id]
INNER JOIN sys.databases d ON d.[database_id] = tdt.[database_id]
LEFT OUTER JOIN sys.dm_exec_requests er ON es.[session_id] = er.[session_id]
LEFT OUTER JOIN sys.dm_exec_connections ec ON ec.[session_id] = es.[session_id]
--AND ec.[most_recent_sql_handle] <> 0x
OUTER APPLY sys.dm_exec_sql_text(ec.[most_recent_sql_handle]) est
--WHERE tdt.[database_transaction_state] >= 4
ORDER BY tdt.[database_transaction_begin_lsn]
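
If the aim is specifically to spot the sessions that would take longest to roll back, a cut-down variation such as this sketch (built on the same DMVs as the script above) can be ordered by log bytes used:

--Sketch: total log bytes used per session, largest first
SELECT
 es.[session_id] [SessionID]
,SUM(tdt.[database_transaction_log_record_count]) [LogRecordCount]
,SUM(tdt.[database_transaction_log_bytes_used]) [LogBytesUsed]
FROM sys.dm_exec_sessions es
INNER JOIN sys.dm_tran_session_transactions tst ON es.[session_id] = tst.[session_id]
INNER JOIN sys.dm_tran_database_transactions tdt ON tst.[transaction_id] = tdt.[transaction_id]
GROUP BY es.[session_id]
ORDER BY SUM(tdt.[database_transaction_log_bytes_used]) DESC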

Coalesce vs ISNULL

The difference between ISNULL and COALESCE?

It is quicker to type ISNULL. Plus COALESCE always takes me a few goes to spell correctly; luckily there is IntelliSense and I can wait for the color of the text to change.

Well yes, quicker to type, but probably going to throw the odd issue because you neglected to read the small print with regard to the return type. Stepping over the fact that ISNULL is not ANSI standard syntax, it more importantly uses the data type of the check expression.

Compare a VARCHAR(3) with a VARCHAR(5) and you get? Uh oh…

DECLARE @a VARCHAR(3),@b VARCHAR(5)
SET @a = NULL
SET @b = '12345'

SELECT COALESCE(@a,@b) --Returns 12345
SELECT ISNULL(@a,@b) --Returns 123

ISNULL returns a truncated value because it uses the data type of the check value.

COALESCE uses data type precedence, is ANSI standard and can compare multiple values.
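
To further illustrate the data type precedence point, here is a sketch (with assumed variable names) where the check expression is a VARCHAR and the replacement is an INT:

DECLARE @s VARCHAR(5),@i INT
SET @s = 'abc'
SET @i = 1

SELECT ISNULL(@s,@i)   --Returns abc; the result is typed VARCHAR(5), the check expression type
SELECT COALESCE(@s,@i) --Fails; the result is typed INT (higher precedence), so 'abc' cannot be converted

The COALESCE call fails at runtime with a conversion error, which is exactly the kind of surprise the return type rules below describe.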

Which one should you use?

Extracts from Books Online:

ISNULL (Transact-SQL)
http://technet.microsoft.com/en-us/library/ms184325.aspx

ISNULL ( check_expression , replacement_value )

Return Types
Returns the same type as check_expression.
If a literal NULL is provided as check_expression, returns the datatype of the replacement_value.
If a literal NULL is provided as check_expression and no replacement_value is provided, returns an int.

COALESCE (Transact-SQL)
http://technet.microsoft.com/en-us/library/ms190349.aspx

COALESCE ( expression [ ,…n ] )

Return Types
Returns the data type of expression with the highest data type precedence. If all expressions are nonnullable, the result is typed as nonnullable.

Suppress messages in SQL log

Previously we had looked at how to configure alerts to log the messages that we are interested in picking up. In that post I very briefly showed how to add logging for messages that are not normally logged, such as permission denied (message 229).

SELECT * FROM sys.messages WHERE [message_id] = 229
The %ls permission was denied on the object '%.*ls', database '%.*ls', schema '%.*ls'.

sp_altermessage (Transact-SQL)
http://msdn.microsoft.com/en-us/library/ms175094.aspx

EXEC msdb.dbo.sp_altermessage 229,'WITH_LOG','true';

But just as there are extra messages you want to see, there are also those that perhaps you do not. You can see what is, and is not, logged by viewing the is_event_logged column in sys.messages.
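
For example, a quick check of which messages are currently written to the log might look like this (filtered to the English message texts purely as an assumption):

--Messages that are written to the SQL Server error log and event log
SELECT [message_id],[severity],[is_event_logged],[text]
FROM sys.messages
WHERE [language_id] = 1033
AND [is_event_logged] = 1
ORDER BY [message_id];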

The example below will suppress the message.

EXEC msdb.dbo.sp_altermessage 229,'WITH_LOG','false';
GO
SELECT * FROM sys.messages WHERE [message_id] = 229;

Previously the example above used message ids 3197 and 3198, which are generated by Data Protection Manager (DPM). However, it would appear that these cannot be suppressed; the most likely reason is that they are raised WITH LOG, meaning that they always log regardless.

Finally, for those who want to suppress successful backup messages, you can use trace flag 3226.

Trace Flags (Transact-SQL)
http://technet.microsoft.com/en-us/library/ms188396.aspx

By default, every successful backup operation adds an entry in the SQL Server error log and in the system event log. If you create very frequent log backups, these success messages accumulate quickly, resulting in huge error logs in which finding other messages is problematic.

With this trace flag, you can suppress these log entries. This is useful if you are running frequent log backups and if none of your scripts depend on those entries.
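
As a rough sketch, the trace flag can be enabled for the running instance with DBCC TRACEON (or made persistent by adding -T3226 as a startup parameter):

--Enable trace flag 3226 globally to suppress successful backup messages
DBCC TRACEON(3226,-1);
--Check which trace flags are currently active
DBCC TRACESTATUS(-1);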

SQL Server Row Internals

Row Storage

A Row is stored on a Page, which is 8192 Bytes, and this is broken down into two key areas when considering physical storage. Firstly, on all pages there is a 96 Byte header. In addition to this we also need to store the Row structure, as well as its location, in the remainder of the Page.

The location of a Row is stored at the end of the Page working backwards using 2 Bytes per Row. These 2 Byte pairs are generally called the Slot Array. I have seen a few articles/explanations of the Slot Array listing this as 36 Bytes which is incorrect, unless of course you are storing 18 Rows.

Now as the Page Header is always 96 Bytes, if you were to create a table and insert a Row into it, the earliest point that the Row could be placed is immediately after the Page Header.

This location immediately after the Page Header would mean that the final 2 Bytes of the Page (Byte 8190 and Byte 8191) would be 60 00, which read as a little-endian value is 0x0060, or 96 as an integer.

If another Row is added then Byte 8188 and Byte 8189 would be used to store the location of this second Row. The order of these 2 Byte pairs within the Slot Array is determined by how they are indexed, not necessarily how they are stored on the Page. That is to say a Row stored towards the front of the Page may or may not appear towards the beginning of the Slot Array which grows from the end of the Page towards the start.
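
If you want to see the page header and slot array for yourself, the undocumented DBCC IND and DBCC PAGE commands can be used against a test database; a sketch, assuming a database and table of your own, and intended for inspection only:

--List the pages allocated to a table (hypothetical database and table names)
DBCC IND('MyTestDB','dbo.MyTable',-1);
--Send DBCC PAGE output to the client instead of the error log
DBCC TRACEON(3604);
--Dump one of the pages returned above: database, file id, page id, print option 3 (header plus per-row detail)
DBCC PAGE('MyTestDB',1,256,3);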

Here is a representation of how a row is stored internally within SQL Server using the standard structure.

Desc                                                  Bytes
Page Header                                           96
Status Byte A                                         1
Status Byte B                                         1
Null Bitmap Offset                                    2
Fixed Length Columns                                  X
Column Count                                          2
Null Bitmap (at least 1 Byte)                         1 (+1 per 8 Columns)
Variable Length Column Count                          2
Variable Length Offset Array                          2 per column*
Uniqueifier                                           4 (if needed)
Variable Length Columns                               X
Row Version (timestamp + pointer to original record)  14
Slot Array                                            2 per Record/Row/Slot

Now you will see that the fixed length columns are stored towards the start of the structure, followed by the variable length columns. *I have highlighted that the variable length offset array is 2 Bytes per column; however, if a column is NULL then its entry in the offset array is not needed unless it has a non-NULL variable length column to its right.

[Location A][Location B][Location C] AAA,BBB,CCC
[Location A][Location B][Location C] AAA,NULL,CCC
[Location A][Location B][Location C] AAA,BBB,NULL

In the first 2 examples all 3 location pointers are needed; however, the 3rd example does not require the 3rd location pointer. This raises the interesting point that column order does matter, and that you should place variable length columns that have a high probability of remaining NULL after the variable length columns that do not.
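
A rough way to see this effect is to compare the record sizes of two tables that differ only in where the nullable VARCHAR sits; a sketch with hypothetical table names, and the expected 2 Byte difference assumes the record layout described above:

--Hypothetical tables: the nullable VARCHAR is last in T_NullLast, but not in T_NullFirst
CREATE TABLE dbo.T_NullFirst ([i] INT NOT NULL,[v_null] VARCHAR(10) NULL,[v_data] VARCHAR(10) NOT NULL);
CREATE TABLE dbo.T_NullLast ([i] INT NOT NULL,[v_data] VARCHAR(10) NOT NULL,[v_null] VARCHAR(10) NULL);

INSERT INTO dbo.T_NullFirst ([i],[v_null],[v_data]) VALUES (1,NULL,'x');
INSERT INTO dbo.T_NullLast ([i],[v_data],[v_null]) VALUES (1,'x',NULL);

--Compare record sizes; T_NullLast should report a record 2 Bytes smaller,
--because no offset array entry is needed for its trailing NULL column
SELECT OBJECT_NAME([object_id]) [TableName],[min_record_size_in_bytes],[max_record_size_in_bytes]
FROM sys.dm_db_index_physical_stats(DB_ID(),OBJECT_ID('dbo.T_NullFirst'),NULL,NULL,'DETAILED')
UNION ALL
SELECT OBJECT_NAME([object_id]),[min_record_size_in_bytes],[max_record_size_in_bytes]
FROM sys.dm_db_index_physical_stats(DB_ID(),OBJECT_ID('dbo.T_NullLast'),NULL,NULL,'DETAILED');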

Smallest Row Size

The smallest row size using the above-mentioned format would be for a single CHAR(1) column:
+1 Status Byte A
+1 Status Byte B
+2 Null Bitmap Offset
+1 Char(1)
+2 Column Count
+1 Null Bitmap (This is always stored regardless)

This would work out as 8 Bytes for the Row, plus 2 Bytes for the Row location in the Slot Array. It is worth noting, however, that this is not what actually gets stored, as internally the Row would be padded by 1 additional Byte. If you store a CHAR(2) then no padding will take place. The reason this happens is because of an optimization which is best explained as follows:

The reason that the data record is 9 bytes is that for heaps we need to guarantee that the RID that is used to uniquely identify a row in a heap is never changed. To do this, SQL Server needs to be able to store at least a forwarding pointer on a page. A forwarding pointer is 9 bytes (1 byte header, and 8 byte RID of the actual record.

For B-Trees, the 9 byte limit is used as well. The reason here is that SQL Server quickly likes to convert a B-Tree to a heap without having to touch all the individual records. This is only possible if we can guarantee that the records are at least 9 bytes long.

Marcel van der Holst – MSFT

http://social.msdn.microsoft.com/Forums/en-US/sqldatabaseengine/thread/4e35ba31-748b-4a3b-947d-b56006b47ab5

So although the Row works out at 8 Bytes, it is stored using 9 Bytes. The smallest optimally stored Row therefore uses a fixed data type of 2 Bytes.
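
This is easy to check; a sketch using a hypothetical table name, which creates a single CHAR(1) column heap, inserts a row and looks at the record size reported by sys.dm_db_index_physical_stats (it should come back as 9 rather than 8):

--Hypothetical table to confirm the 9 Byte minimum record size
CREATE TABLE dbo.T_Char1 ([c] CHAR(1) NOT NULL);
INSERT INTO dbo.T_Char1 ([c]) VALUES ('a');

SELECT [alloc_unit_type_desc],[record_count],[min_record_size_in_bytes],[max_record_size_in_bytes]
FROM sys.dm_db_index_physical_stats(DB_ID(),OBJECT_ID('dbo.T_Char1'),NULL,NULL,'DETAILED');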

Most Rows per Page

On that note, if we say that we have a 9 Byte Row structure + 2 Bytes for the Slot Array to store in the 8096 Bytes available (8192 minus the 96 Byte Page Header), we can work out the most Rows per Page as follows:

8096/11 = 736 Rows

Acknowledgement

This post builds on information gathered from research, but the factual knowledge was gathered from Paul Randal's blog posts.

http://www.sqlskills.com/blogs/paul/inside-the-storage-engine-anatomy-of-a-page/

http://www.sqlskills.com/blogs/paul/inside-the-storage-engine-anatomy-of-a-record/

SQL Server Partitioning Data Movement

Introduction

This is a continuation of the partitioning overview post to help explain how data movement is performed using ALTER PARTITION FUNCTION with the MERGE and SPLIT commands.

Getting Started

In order to move data in or out of a partitioned object, the part of the object being moved must reside on the same file group as its destination. In the first post all the partitions were placed on the primary file group. Now if the middle partition P2 was removed using the MERGE syntax, this would result in P3 being renumbered as P2. So, to make things easier to follow, the next example will create file groups called One, Two, Three and Four, which will correspond to the initial partition locations.

The code below is for a RANGE RIGHT partition, but the equivalent code for a RANGE LEFT partition is also included (commented out).

CREATE DATABASE [MyTest]
GO
USE [master]
GO
ALTER DATABASE [MyTest] ADD FILEGROUP [One]
ALTER DATABASE [MyTest] ADD FILEGROUP [Two]
ALTER DATABASE [MyTest] ADD FILEGROUP [Three]
ALTER DATABASE [MyTest] ADD FILEGROUP [Four]
GO
ALTER DATABASE [MyTest] ADD FILE
(NAME = N'One_File01',FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\One_File01.ndf')
TO FILEGROUP [One]
GO
ALTER DATABASE [MyTest] ADD FILE
(NAME = N'Two_File01',FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\Two_File01.ndf')
TO FILEGROUP [Two]
GO
ALTER DATABASE [MyTest] ADD FILE
(NAME = N'Three_File01',FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\Three_File01.ndf')
TO FILEGROUP [Three]
GO
ALTER DATABASE [MyTest] ADD FILE
(NAME = N'Four_File01',FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\Four_File01.ndf')
TO FILEGROUP [Four]
GO
USE [MyTest]
CREATE PARTITION FUNCTION pfR(INT)
AS RANGE RIGHT FOR VALUES (100,200,300);
CREATE PARTITION SCHEME psR AS PARTITION pfR TO
([One],[Two],[Three],[Four]);
CREATE TABLE R ([ID] INT) ON psR([ID]);
INSERT INTO R VALUES (NULL)
INSERT INTO R VALUES (-1)
INSERT INTO R VALUES (0)
INSERT INTO R VALUES (1)
INSERT INTO R VALUES (100)
INSERT INTO R VALUES (101)
INSERT INTO R VALUES (200)
INSERT INTO R VALUES (201)
INSERT INTO R VALUES (300)
INSERT INTO R VALUES (301)
INSERT INTO R VALUES (1001)

/*
CREATE PARTITION FUNCTION pfL(INT) AS RANGE LEFT FOR VALUES (100,200,300);
CREATE PARTITION SCHEME psL AS PARTITION pfL TO
([One],[Two],[Three],[Four]);
CREATE TABLE L ([ID] INT) ON psL([ID]);
INSERT INTO L VALUES (NULL)
INSERT INTO L VALUES (-1)
INSERT INTO L VALUES (0)
INSERT INTO L VALUES (1)
INSERT INTO L VALUES (100)
INSERT INTO L VALUES (101)
INSERT INTO L VALUES (200)
INSERT INTO L VALUES (201)
INSERT INTO L VALUES (300)
INSERT INTO L VALUES (301)
INSERT INTO L VALUES (1001)

--DROP TABLE L
--DROP PARTITION SCHEME psL
--DROP PARTITION FUNCTION pfL
--DROP TABLE R
--DROP PARTITION SCHEME psR
--DROP PARTITION FUNCTION pfR
*/

You can check where each row is using the Detailed Table and Index Breakdown script, or with the quick $PARTITION check below.
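
A sketch using the built-in $PARTITION function, which shows which partition each row of R currently sits in:

--Row counts per partition for table R, evaluated with the pfR partition function
SELECT $PARTITION.pfR([ID]) [PartitionNumber],COUNT(*) [Rows]
FROM R
GROUP BY $PARTITION.pfR([ID])
ORDER BY [PartitionNumber];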

Merge

Using the RANGE RIGHT example below, what would happen if the second boundary (200) was removed?

[Image: RANGE RIGHT partition layout]

In order to remove the secondary boundary this command is needed:

ALTER PARTITION FUNCTION pfR() MERGE RANGE(200)

The resultant change is that the partition to the right of the boundary shown below is removed and its data is moved to partition 2. The example has rows in both partitions, but the same would be true if both P2 and P3 had been empty, except that no data would have been moved.

[Image: RANGE RIGHT partition after MERGE RANGE(200), with the data moved into P2]

But what if instead the values 100 and 101 had not been inserted and P2 was empty?

I need to add a note here to say that the following behaviour regarding empty partitions applies from SQL Server 2012 onwards; as was highlighted, it is not true for earlier versions. For those, see the following post: http://www.brentozar.com/archive/2013/01/best-practices-table-partitioning-merging-boundary-points/

[Image: RANGE RIGHT partition with P2 empty]

In this case data is not moved from P3 but instead P2 is removed.

[Image: RANGE RIGHT partition after MERGE RANGE(200) with P2 empty, so P2 is removed]

If the merge was on a RANGE LEFT example then it would remove the partition to the left of the boundary. The easiest way to think about this is that the partition that owns the boundary value will be removed, unless the other partition (the one that does not own the boundary value) is empty, in which case that empty partition is removed instead. Hopefully that is clear if you imagine the two sides of your boundary.

If NULLs are allowed on the partitioning column then they will be stored in P1, which is important when considering RANGE LEFT or RIGHT. A partition merge on a RANGE RIGHT function that has NULLs in P1 will remove P2, whereas a RANGE LEFT will remove P1.
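
To see the effect of a merge or split, a query such as this sketch (standard catalog views, with nothing specific to this example other than the table name R) lists each partition alongside its file group and row count:

--Partition number, file group and row count for table R
SELECT
 p.[partition_number] [PartitionNumber]
,fg.[name] [FileGroupName]
,p.[rows] [Rows]
FROM sys.indexes i
INNER JOIN sys.partition_schemes ps ON i.[data_space_id] = ps.[data_space_id]
INNER JOIN sys.destination_data_spaces dds ON ps.[data_space_id] = dds.[partition_scheme_id]
INNER JOIN sys.filegroups fg ON dds.[data_space_id] = fg.[data_space_id]
INNER JOIN sys.partitions p ON p.[object_id] = i.[object_id]
 AND p.[index_id] = i.[index_id]
 AND p.[partition_number] = dds.[destination_id]
WHERE i.[object_id] = OBJECT_ID('dbo.R')
ORDER BY p.[partition_number];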

Split

If you want to add partitions you must first specify the next file group to use. In order to put back the partition removed by the merge above, the following code is needed:

Right:

ALTER PARTITION SCHEME psR NEXT USED [Three]
GO
ALTER PARTITION FUNCTION pfR() SPLIT RANGE(200)

Left:

ALTER PARTITION SCHEME psL NEXT USED [Two]
GO
ALTER PARTITION FUNCTION pfL() SPLIT RANGE(200)

Switch

Switching is perhaps the most useful of the movement commands in that you can move data incredibly easily.

If the contents of P2 needed to be moved out, we would first need to create a destination table before switching P2 into it. The destination table does not need to be partitioned if we are moving only one source partition. In some cases it may be easier to create an exact copy of your source structure, as then you can move partitions specifying each partition number as needed.

CREATE TABLE R2 ([ID] INT) ON [Two];
GO
ALTER TABLE [dbo].[R] SWITCH PARTITION 2 TO R2

The two rows from P2 of the source table are now present in the destination table, and P2 of the source is showing as zero rows.

Now to put the rows back…

ALTER TABLE [dbo].[R2] SWITCH TO [dbo].[R] PARTITION 2

ALTER TABLE SWITCH statement failed. Check constraints of source table 'dbo.R2' allow values that are not allowed by range defined by partition 2 on target table 'dbo.R'.

In order for this to work we need to confirm that the data in the R2 table will fit into P2, which means a check constraint. P2 should hold values that are greater than or equal to 100, but less than 200 (NULLs are held in P1).

ALTER TABLE [dbo].[R2] ADD CONSTRAINT [PleaseBelieveMe]
CHECK ([ID]>=N'100' AND [ID]<N'200' AND [ID] IS NOT NULL)

The switch will now work correctly and the rows are returned. This process becomes slightly more complicated with indexes but the theory is the same and is explained well in Special Guidelines for Partitioned Indexes.

The only issue that I have had with SWITCH is when conflict detection is enabled on peer-to-peer replication, which is logged here on Connect.

SQL Server Partitioning Overview

Introduction

This is a guide to help explain partitioning setup, which, although introduced way back in SQL Server 2005, is still an area that throws up a lot of questions.

Requirements

First off, you will need SQL Server Enterprise Edition in your production environment, or alternatively you can explore this feature in Developer Edition.

Getting Started

In order to partition there are two objects that you will first need to create: a PARTITION FUNCTION and a PARTITION SCHEME.

Function

The partition function is the first building block that you need and requires that you specify boundaries.

CREATE PARTITION FUNCTION pfL(INT)
AS RANGE LEFT
FOR VALUES (100,200,300);

The syntax is simple and shows that in the above case the function expects an integer and will create boundaries based on a RANGE LEFT strategy. The example shows three boundaries, which means that there are four partitions (number of boundaries + 1).

RANGE LEFT means that the boundary points use less than or equal (<=), i.e. each boundary value belongs to the partition on its left. In the example below the first boundary is 100, which means all values less than or equal to 100 will go into the first partition; those above 100 and less than or equal to the next boundary will fit into partition two (P2), and so on.

The image below shows this, with example values sorted.

[Image: RANGE LEFT partition layout with example values]

Now obviously having shown RANGE LEFT, here is a RANGE RIGHT example. Note that now the boundary points use greater than or equal (>=), so the value 100, for example, is now in the second partition.

[Image: RANGE RIGHT partition layout with example values]

Scheme

Before we can apply the partition function to a table, a partition scheme needs to be created, which identifies the file groups to use.

CREATE PARTITION SCHEME psL
AS PARTITION pfL TO ([PRIMARY],[PRIMARY],[PRIMARY],[PRIMARY]);

The table below specifies that it will use the partition scheme and will be partitioned on the ID column; to keep this example simple, all the data is stored on the PRIMARY file group.

CREATE TABLE L ([ID] INT) ON psL([ID])
INSERT INTO L VALUES (NULL)
INSERT INTO L VALUES (-1)
INSERT INTO L VALUES (0)
INSERT INTO L VALUES (1)
INSERT INTO L VALUES (100)
INSERT INTO L VALUES (101)
INSERT INTO L VALUES (200)
INSERT INTO L VALUES (201)
INSERT INTO L VALUES (300)
INSERT INTO L VALUES (301)
INSERT INTO L VALUES (1001)

You can check where each row is using the Detailed Table and Index Breakdown script.