Database Corruption Week 5

This week I managed to achieve the goal, though not very elegantly: the only way I could do it was with a hex editor.

Basically I copied page 9 from the backup and replaced it in the live version with a hex editor, which then enabled access to the database in emergency mode. After this I noticed that the rows missing after the repair could be found in the old backup. Both of these methods felt like hacks (I thought there must be a clean way to get an online database), so I look forward to a prettier solution, which hopefully Steve will post this week. I did at first try the method that Steve has posted as the winning solution to week 5, but I could not attach a copy of the database with a missing primary mdf:

“Msg 5173, Level 16, State 1, Line 66
One or more files do not match the primary file of the database. If you are attempting to attach a database, retry the operation with the correct files. If this is an existing database, the file may be corrupted and should be restored from a backup.”
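
For reference, the emergency-mode route looks roughly like this. This is a sketch only; the database name [CorruptionChallenge5] and the exact repair options are my assumptions, and it is only sensible after patching the damaged page as described above:

```sql
-- Sketch, not the exact script used: database name [CorruptionChallenge5] is assumed.
-- After overwriting the bad page with the copy from the backup, allow access via
-- emergency mode, then repair (accepting possible data loss).
ALTER DATABASE [CorruptionChallenge5] SET EMERGENCY;
ALTER DATABASE [CorruptionChallenge5] SET SINGLE_USER;
DBCC CHECKDB ('CorruptionChallenge5', REPAIR_ALLOW_DATA_LOSS) WITH ALL_ERRORMSGS, NO_INFOMSGS;
ALTER DATABASE [CorruptionChallenge5] SET MULTI_USER;
-- Any rows deallocated by the repair can then be pulled back from the old backup.
```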

The original database and details for this can be found here:

http://stevestedman.com/server-health/database-corruption-challenge/week-5-database-corruption-challenge/week-5-challenge-details/

 

Update 24 May 2015.

Well, Steve posted his alternative solution and it basically mirrors mine (doh… I should have submitted). Anyway, here is the link: http://stevestedman.com/2015/05/week-5-alternate-solution/

Anyway, it looks like the attach solution is very environment-specific, so good luck if you are trying to get it to work. Please post any tips for getting it to work; it might be worth setting up a VM that mirrors the exact environment from the winning solution.

Database Corruption Week 4 Solution

Here is my late solution for week four. Blame the UK Bank Holiday. The original database and details for this can be found here:

http://stevestedman.com/server-health/database-corruption-challenge/week-4-database-corruption-challenge/week-4-challenge-details/

This week was harder, but the clue provided made it easier by pointing you towards the right area.

USE [master]
RESTORE DATABASE [CorruptionChallenge4] FROM DISK = N'C:\Path\CorruptionChallenge4_Corrupt.bak'
WITH FILE = 1,
MOVE N'CorruptionChallenge4' TO N'C:\Path\CorruptionChallenge4.mdf',
MOVE N'UserObjects' TO N'C:\Path\CorruptionChallenge4_UserObjects.ndf',
MOVE N'CorruptionChallenge4_log' TO N'C:\Path\CorruptionChallenge4_log.ldf',
NOUNLOAD, STATS = 5, KEEP_CDC;
GO

USE [CorruptionChallenge4];
WITH
f AS
(
SELECT [id],[FirstName] FROM [dbo].[Customers] WITH (INDEX([ncCustomerFirstname]))--(511740 row(s) affected)
),
l AS
(
SELECT [id],[LastName] FROM [dbo].[Customers] WITH (INDEX([ncCustomerLastname]))--(511740 row(s) affected)
)
SELECT f.[id],f.[FirstName],c.MiddleName,l.[LastName] INTO [tempdb].[dbo].[Customers]
FROM f 
INNER JOIN l ON l.[id] = f.[id]
INNER JOIN cdc.fn_cdc_get_net_changes_dbo_Customers(sys.fn_cdc_get_min_lsn('dbo_Customers'),sys.fn_cdc_get_max_lsn(),'all') c ON c.[id] = f.[id]
--SELECT * FROM cdc.captured_columns
--SELECT * FROM cdc.change_tables
GO
ALTER DATABASE [CorruptionChallenge4] SET SINGLE_USER
GO
DBCC CHECKTABLE ('[dbo].[Customers]',repair_allow_data_loss)--This will deallocate the pages and leave [dbo].[Customers] empty, which will cause constraint violations.
GO
DBCC CHECKDB('CorruptionChallenge4') WITH NO_INFOMSGS,ALL_ERRORMSGS;
GO
DBCC CHECKCONSTRAINTS WITH ALL_CONSTRAINTS
GO
ALTER TABLE [dbo].[Orders] WITH CHECK CHECK CONSTRAINT ALL--The ALTER TABLE statement conflicted with the FOREIGN KEY constraint "FK_Orders_People". The conflict occurred in database "CorruptionChallenge4", table "dbo.Customers", column 'id'.
GO
SET IDENTITY_INSERT [dbo].[Customers] ON
	INSERT INTO [dbo].[Customers] ([id],[FirstName],[MiddleName],[LastName])
	SELECT [id],[FirstName],[MiddleName],[LastName] FROM [tempdb].[dbo].[Customers]
SET IDENTITY_INSERT [dbo].[Customers] OFF
GO
ALTER TABLE [dbo].[Orders] WITH CHECK CHECK CONSTRAINT ALL
GO
ALTER DATABASE [CorruptionChallenge4] SET MULTI_USER
GO

SELECT * FROM [dbo].[Customers] WHERE [id] IN (510900,510901)
--id	FirstName	MiddleName	LastName
--510900	Steve	M	Stedman
--510901	William	V	STARK
GO
SELECT COUNT(*)--9
FROM sys.objects
WHERE is_ms_shipped = 0;

Database Corruption Week 3 Solution

Here is the solution for week three. The original database and details for this can be found here:

http://stevestedman.com/server-health/database-corruption-challenge/week-3-database-corruption-challenge/week-3-challenge-details/

This week was fairly easy, as all that is needed is to back up the transaction log with NO_TRUNCATE and then follow a standard restore. If for some reason the database had been detached this would have been slightly trickier, but creating a dummy database would only have taken a few minutes more.

BACKUP LOG [CorruptionChallenge3] TO DISK = N'C:\Path\tail.trn' WITH INIT, NO_TRUNCATE;
GO
USE [master]
RESTORE DATABASE [CorruptionChallenge3] FROM  DISK = N'C:\Path\CorruptionChallenge3_Full.bak' WITH  FILE = 1,
MOVE N'CorruptionChallenge3' TO N'C:\Path\CorruptionChallenge3.mdf',
MOVE N'CorruptionChallenge3_log' TO N'C:\Path\CorruptionChallenge3_log.LDF',  NORECOVERY,  NOUNLOAD,  STATS = 5
RESTORE LOG [CorruptionChallenge3] FROM  DISK = N'C:\Path\3\TransLog_CorruptionChallenge30.trn' WITH  FILE = 1,  NORECOVERY,  NOUNLOAD,  STATS = 5
RESTORE LOG [CorruptionChallenge3] FROM  DISK = N'C:\Path\TransLog_CorruptionChallenge31.trn' WITH  FILE = 1,  NORECOVERY,  NOUNLOAD,  STATS = 5
RESTORE LOG [CorruptionChallenge3] FROM  DISK = N'C:\Path\TransLog_CorruptionChallenge32.trn' WITH  FILE = 1,  NORECOVERY,  NOUNLOAD,  STATS = 5
RESTORE LOG [CorruptionChallenge3] FROM  DISK = N'C:\Path\tail.trn' WITH  FILE = 1,  NOUNLOAD,  STATS = 5
GO

Database Corruption Week 2 Solution

Unfortunately I missed week 2 submissions, but here is my solution for week two. The original database and details for this can be found here:

http://stevestedman.com/server-health/database-corruption-challenge/week-2-database-corruption-challenge/week-2-challenge-details/

 

The data corruption is confined to a single page: page 244. A non-clustered index is available, meaning that the only problem columns are [Year] and [Notes]; these are retrieved from the old backup file and cross-checked against the contents of the corrupt page.

USE [master]
RESTORE DATABASE [CorruptionChallenge2] FROM  DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\Backup\2\CorruptionChallenge2_LatestBackup.bak' 
WITH  FILE = 1, 
MOVE N'CorruptionChallenge2' TO N'C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA\CorruptionChallenge2.mdf',  
MOVE N'CorruptionChallenge2_log' TO N'C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA\CorruptionChallenge2_log.LDF'
GO
USE [master]
RESTORE DATABASE [CorruptionChallenge2_TwoDaysAgoBackup] FROM  DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\Backup\2\CorruptionChallenge2_TwoDaysAgoBackup.bak' 
WITH  FILE = 1, 
MOVE N'CorruptionChallenge2' TO N'C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA\CorruptionChallenge2_TwoDaysAgoBackup.mdf',  
MOVE N'CorruptionChallenge2_log' TO N'C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA\CorruptionChallenge2_TwoDaysAgoBackup.LDF'
GO

USE [CorruptionChallenge2];
WITH 
a AS(SELECT TOP (595) * FROM [dbo].[Revenue] ORDER BY [ID] ASC),
b AS(SELECT TOP (9450) * FROM [dbo].[Revenue] ORDER BY [ID] DESC),
Good AS(SELECT *,1 [RecordStatus] FROM a UNION ALL SELECT *,1 [RecordStatus] FROM b),
PartialGood AS(
	SELECT p.[ID],p.[DepartmentID],p.[Revenue],Old.[Year],Old.[Notes],0 [RecordStatus] FROM [CorruptionChallenge2].[dbo].[Revenue] p WITH (INDEX([ncDeptIdYear]))
	LEFT JOIN [CorruptionChallenge2_TwoDaysAgoBackup].[dbo].[Revenue] Old
	ON  p.[ID] = Old.[Id]
	WHERE NOT EXISTS (SELECT * FROM [Good] WHERE [Good].[ID] = p.[ID])
)

SELECT * INTO [dbo].[RevenueLatest] FROM Good UNION ALL SELECT * FROM PartialGood
GO
SELECT * FROM  [dbo].[RevenueLatest] WHERE [RecordStatus] = 0--These records were rebuilt from data held in an old backup
DBCC PAGE('CorruptionChallenge2',1,244,1) WITH TABLERESULTS--This can be cross-checked against the damaged page. It is only 12 rows, so we can check manually.
GO
TRUNCATE TABLE [Revenue] 
GO
DBCC CHECKDB('CorruptionChallenge2') WITH NO_INFOMSGS,ALL_ERRORMSGS;
GO
SET IDENTITY_INSERT [dbo].[Revenue] ON;
INSERT INTO [Revenue] ([id],[DepartmentID],[Revenue],[Year],[Notes])
SELECT [id],[DepartmentID],[Revenue],[Year],[Notes] FROM [dbo].[RevenueLatest]
SET IDENTITY_INSERT [dbo].[Revenue] OFF;
GO
EXEC [dbo].[checkCorruptionChallenge2Result]
GO

Database Corruption Week 1 Solution

Here is my solution for week one. The original database and details for this can be found here:

http://stevestedman.com/2015/04/introducing-the-database-corruption-challenge-dbcc-week-1-challenge/

USE [CorruptionChallenge1]
GO
SELECT [id],[Notes] INTO #s FROM [dbo].[Revenue] s WITH (INDEX([ncBadNameForAnIndex]))
GO
DBCC DBREINDEX ('dbo.Revenue',clustId);
GO
UPDATE t SET t.[Notes] = s.[Notes] FROM [dbo].[Revenue] t
INNER JOIN #s s ON t.[id] = s.[id] WHERE t.[Notes] <> s.[Notes] OR t.[Notes] IS NULL
GO
SELECT * FROM [dbo].[Revenue]
GO
DROP TABLE #s
GO
DBCC CHECKDB('CorruptionChallenge1') WITH NO_INFOMSGS,ALL_ERRORMSGS;
GO

 

 

New Lab Environment 2015

I have recently been looking at hardware for a new home lab. The primary need was a low-power solution with plenty of RAM and the ability to run Hyper-V.

The solution I have chosen is a small-form-factor PC from Intel, the NUC5i5MYHE. The hardware purchase list is as follows:

Crucial 16GB (2x 8GB) DDR3 1600 MT/s CL11 SODIMM 204 Pin 1.35V/1.5V Memory Module Kit CT2KIT102464BF160B ***NOTE THIS IS LOW VOLTAGE RAM compatible with the board***
http://www.amazon.co.uk/gp/product/B007B5S52C
Price: £97.60

BLKNUC5I5MYHE
http://www.scan.co.uk/products/intel-nuc-core-i5-5300u-dual-core-23ghz-ddr3l-so-dimm-m2-plus-sata-iii-6gb-s-25-internal-intel-hd-55
Price: £346.98

M.2 Type 2280 500GB Fast Solid State Drive/SSD Crucial MX200
http://www.scan.co.uk/products/500gb-crucial-mx200-ssd-m2-type-2280-with-555-mb-s-read-500-mb-s-write-100k-iops-random-read-87k-iop
Price: £168.60

Total Cost: £613.18

This enables the running of several virtualized guests; so far I have two domain controllers and one edge server providing a site-to-site VPN to Azure. This means my home lab can integrate seamlessly with the cloud offering from MS.

 

Help, I have lost a SQL data file and I have no backup

This week I stumbled across a forum post that was quite interesting in terms of recovery in dire situations. The OP had lost a SQL data file and had no backups. They asked whether the missing data file could be rebuilt, which unfortunately is not possible, as the very nature of the problem is data loss.

Anyway, the loss of a data file means data loss, and the best route is to use your backups. You there. Yes, reader, I am talking to you. BACKUPS, BACKUPS and did I mention BACKUPS!!! (Also, do not forget to test them!!!)

When you create a database using the defaults you will get a data file (Test.mdf) and a log file (Test_log.ldf). There are several posts on the internet covering last-resort scenarios when you have lost a log file, but what happens when you lose a data file? If you lose Test.mdf then, as you can imagine, you are pretty much dead in the water.

Now if you have additional data files you can still recover the database to some degree. SQL Server 2005 onwards made possible a new restore process called a piecemeal restore, in which you can restore the database in pieces, provided that you restore the PRIMARY filegroup first.
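
A piecemeal restore looks roughly like this. This is a sketch; the database, filegroup and backup file names are made up for illustration:

```sql
-- Sketch: restore the PRIMARY filegroup first (WITH PARTIAL), then the rest.
RESTORE DATABASE [Test] FILEGROUP = 'PRIMARY'
FROM DISK = N'C:\Backups\Test_Full.bak'
WITH PARTIAL, NORECOVERY;

RESTORE DATABASE [Test] FILEGROUP = 'NONPRIMARY'
FROM DISK = N'C:\Backups\Test_Full.bak'
WITH NORECOVERY;

-- Roll forward any log backups, then recover.
RESTORE LOG [Test] FROM DISK = N'C:\Backups\Test_Log.trn' WITH RECOVERY;
```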

Well, this got me thinking about the ability to take a file offline, and so this post will show how to do just that.

Example on a non-production server: a missing non-primary filegroup file.

	CREATE DATABASE [Test]
	 ON  PRIMARY
	( NAME = N'Test', FILENAME = N'C:\SQL\MSSQL11.MSSQLSERVER\MSSQL\DATA\Test.mdf' , SIZE = 5120KB , FILEGROWTH = 1024KB ),
	( NAME = N'Test_PRIMARY', FILENAME = N'C:\SQL\MSSQL11.MSSQLSERVER\MSSQL\DATA\Test_PRIMARY.ndf' , SIZE = 17408KB , FILEGROWTH = 1024KB ),
	 FILEGROUP [NONPRIMARY]
	( NAME = N'Test_NONPRIMARY', FILENAME = N'C:\SQL\MSSQL11.MSSQLSERVER\MSSQL\DATA\Test_NONPRIMARY.ndf' , SIZE = 17408KB , FILEGROWTH = 1024KB )
	 LOG ON
	( NAME = N'Test_log', FILENAME = N'C:\SQL\MSSQL11.MSSQLSERVER\MSSQL\DATA\Test_log.ldf' , SIZE = 3072KB , FILEGROWTH = 10%)
	GO
	USE [Test]
	CREATE TABLE [dbo].[Found]([i] INT) ON [Primary]
	CREATE TABLE [dbo].[Lost]([i] INT) ON [NONPRIMARY]
	GO
	INSERT INTO [dbo].[Found]
	SELECT TOP (1000000)  ROW_NUMBER() OVER(ORDER BY(SELECT NULL)) FROM sys.all_objects x,sys.all_objects y
	INSERT INTO [dbo].[Lost]
	SELECT TOP (1000000)  ROW_NUMBER() OVER(ORDER BY(SELECT NULL)) FROM sys.all_objects x,sys.all_objects y

In the example above a database is created with two additional files: one sits on the PRIMARY filegroup and the other on a filegroup called NONPRIMARY. Two tables are then created and populated with 1 million rows each.

Once this is done, stop the SQL Server (MSSQLSERVER) service for the instance and delete the file called Test_NONPRIMARY.ndf before starting SQL Server again.
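
From an elevated command prompt, that is something like the following (assuming the default MSSQLSERVER instance and the file path used above):

```
net stop MSSQLSERVER
del "C:\SQL\MSSQL11.MSSQLSERVER\MSSQL\DATA\Test_NONPRIMARY.ndf"
net start MSSQLSERVER
```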

Upon restarting you can check the logs to confirm, or run the following:

ALTER DATABASE [Test] SET ONLINE

This will throw the following error:

Msg 5120, Level 16, State 5, Line 1
Unable to open the physical file “C:\SQL\MSSQL11.MSSQLSERVER\MSSQL\DATA\Test_NONPRIMARY.ndf”. Operating system error 2: “2(The system cannot find the file specified.)”.
Msg 5181, Level 16, State 5, Line 1
Could not restart database “Test”. Reverting to the previous status.
Msg 5069, Level 16, State 1, Line 1
ALTER DATABASE statement failed.

Surprised? Well, we did just delete that file. Let's tell SQL Server to take it offline.

	ALTER DATABASE [Test] MODIFY FILE (NAME = 'Test_NONPRIMARY', OFFLINE);
	SELECT * FROM sys.master_files WHERE [database_id] = DB_ID('Test');
	ALTER DATABASE [Test] SET ONLINE;

Excellent, that worked; when checked, the table on our intact files, dbo.Found, has 1 million rows.

	SELECT COUNT(*) FROM [Test].[dbo].[Found]--1000000
	GO
	SELECT COUNT(*) FROM [Test].[dbo].[Lost]

The lost table throws this:

Msg 8653, Level 16, State 1, Line 1
The query processor is unable to produce a plan for the table or view ‘lost’ because the table resides in a file group which is not online.

 

Example on a non-production server: a missing primary filegroup file.

When you attempt the same process as above, but delete Test_PRIMARY.ndf instead, you will get an error when you try to take the file offline.

Msg 5077, Level 16, State 1, Line 1
Cannot change the state of non-data files or files in the primary filegroup.

However, you can trick SQL Server into doing it by creating a dummy database. Stop SQL Server and copy your database files somewhere safe. Then start SQL Server and create a dummy database following the same layout, but this time place the file that was on the PRIMARY filegroup on a dummy filegroup.

 

CREATE DATABASE [Test]
	ON  PRIMARY
( NAME = N'Test', FILENAME = N'C:\SQL\MSSQL11.MSSQLSERVER\MSSQL\DATA\Test.mdf' , SIZE = 5120KB , FILEGROWTH = 1024KB ),
	FILEGROUP [ItDoesNotMatter] --CHANGE the Test_PRIMARY file to be on the ItDoesNotMatter filegroup
( NAME = N'Test_PRIMARY', FILENAME = N'C:\SQL\MSSQL11.MSSQLSERVER\MSSQL\DATA\Test_PRIMARY.ndf' , SIZE = 17408KB , FILEGROWTH = 1024KB ),
	FILEGROUP [NONPRIMARY]
( NAME = N'Test_NONPRIMARY', FILENAME = N'C:\SQL\MSSQL11.MSSQLSERVER\MSSQL\DATA\Test_NONPRIMARY.ndf' , SIZE = 17408KB , FILEGROWTH = 1024KB )
	LOG ON
( NAME = N'Test_log', FILENAME = N'C:\SQL\MSSQL11.MSSQLSERVER\MSSQL\DATA\Test_log.ldf' , SIZE = 3072KB , FILEGROWTH = 10%)
GO

Once done, stop SQL Server and copy your original files over the dummy files, making sure to delete the dummy Test_PRIMARY.ndf; otherwise it will error like this:

Msg 5173, Level 16, State 1, Line 1
One or more files do not match the primary file of the database. If you are attempting to attach a database, retry the operation with the correct files. If this is an existing database, the file may be corrupted and should be restored from a backup.
Msg 5181, Level 16, State 5, Line 1
Could not restart database “Test”. Reverting to the previous status.
Msg 5069, Level 16, State 1, Line 1
ALTER DATABASE statement failed.

Then carry on as before, but get ready for the corruption that will be caused by losing a vital piece of the database.

ALTER DATABASE [Test] MODIFY FILE (NAME = 'Test_PRIMARY', OFFLINE);
ALTER DATABASE [Test] SET ONLINE;--The Service Broker in database "Test" will be disabled because the Service Broker GUID in the database () does not match the one in sys.databases ().
DBCC CHECKDB('Test') WITH ALL_ERRORMSGS, NO_INFOMSGS;
/*
ALTER DATABASE [Test] SET SINGLE_USER;
ALTER DATABASE [Test] SET EMERGENCY;
DBCC CHECKDB('Test',repair_allow_data_loss) WITH ALL_ERRORMSGS, NO_INFOMSGS;
ALTER DATABASE [Test] SET MULTI_USER;
GO
SELECT COUNT(*) FROM [Test].[dbo].[Found]--1000000
GO
SELECT COUNT(*) FROM [Test].[dbo].[Lost]
*/

In my example the recovery completed, but of my two tables I could only access 144,304 rows from the dbo.Found table, with dbo.Lost as inaccessible as if I had deleted it. I did not delete the file containing the dbo.Lost data, but the corruption meant that I might as well have. Where the corruption occurs and what is lost or saved is mostly down to luck.

The database is no longer viable for production after these steps; this post is just showing a way to get data out.

I will finish by saying that this post is really a last-chance saloon, and you would be crazy to do this for any reason other than that. (Unless you are strange and like playing with corruption in test environments, like me.)

BACKUPS BACKUPS BACKUPS

 

SQL Index dm_db_index_xxx_stats

There are three dm_db_index_xxx_stats objects that can be used to check and investigate index information.

sys.dm_db_index_usage_stats can be used to see how beneficial the index is, and its related maintenance cost. Note that this information is reset upon instance restart.

The other two index stats objects provide information on the physical and operational statistics of each index. These can be used to check for fragmentation and page splits, and can be useful when deciding on factors such as fill factor.

sys.dm_db_index_operational_stats
sys.dm_db_index_physical_stats
 

--http://jongurgul.com/blog/sql-index-stats-queries
SELECT
 SCHEMA_NAME(ao.[schema_id]) [SchemaName]
,ao.[object_id] [ObjectID]
,ao.[name] [ObjectName]
,ao.[is_ms_shipped] [IsSystemObject]
,i.[index_id] [IndexID]
,i.[name] [IndexName]
,i.[type_desc] [IndexType]
,ddius.[user_scans] [UserScans]
,ddius.[user_seeks] [UserSeeks]
,ddius.[user_lookups] [UserLookups]
,ddius.[user_updates] [UserUpdates]
FROM sys.all_objects ao
INNER JOIN sys.indexes i ON ao.[object_id] = i.[object_id]
LEFT OUTER JOIN sys.dm_db_index_usage_stats ddius ON i.[object_id] = ddius.[object_id] AND i.[index_id] = ddius.[index_id] AND ddius.[database_id] = DB_ID() 
--stats reset upon server restart
WHERE ao.[is_ms_shipped] = 0
--http://jongurgul.com/blog/sql-index-stats-queries
SELECT 
 SCHEMA_NAME(ao.[schema_id]) [SchemaName]
,ao.[object_id] [ObjectID]
,ao.[name] [ObjectName]
,ao.[is_ms_shipped] [IsSystemObject]
,i.[index_id] [IndexID]
,i.[name] [IndexName]
,ddios.[partition_number] [PartitionNumber]
,i.[type_desc] [IndexType]
,ddios.[leaf_insert_count]--Cumulative count of leaf-level inserts.
,ddios.[leaf_delete_count]--Cumulative count of leaf-level deletes. 
,ddios.[leaf_update_count]--Cumulative count of leaf-level updates. 
,ddios.[leaf_ghost_count]--Cumulative count of leaf-level rows that are marked as deleted, but not yet removed.
--These rows are removed by a cleanup thread at set intervals. This value does not include rows that are retained, because of an outstanding snapshot isolation transaction. 
,ddios.[nonleaf_insert_count] [NonleafInsertCount]--Cumulative count of inserts above the leaf level.
,ddios.[nonleaf_delete_count] [NonleafDeleteCount]--Cumulative count of deletes above the leaf level.
,ddios.[nonleaf_update_count] [NonleafUpdateCount]--Cumulative count of updates above the leaf level.
,ddios.[leaf_allocation_count] [LeafAllocationCount]--Cumulative count of leaf-level page allocations in the index or heap.For an index, a page allocation corresponds to a page split.
,ddios.[nonleaf_allocation_count] [NonLeafAllocationCount]--Cumulative count of page allocations caused by page splits above the leaf level. 
,ddios.[range_scan_count] [RangeScanCount]--Cumulative count of range and table scans started on the index or heap.
,ddios.[singleton_lookup_count] [SingletonLookupCount]--Cumulative count of single row retrievals from the index or heap. 
,ddios.[forwarded_fetch_count] [ForwardedFetchCount]--Count of rows that were fetched through a forwarding record. 
,ddios.[lob_fetch_in_pages] [LobFetchInPages]--Cumulative count of large object (LOB) pages retrieved from the LOB_DATA allocation unit.
,ddios.[row_overflow_fetch_in_pages] [RowOverflowFetchInPages]--Cumulative count of row-overflow data pages retrieved from the ROW_OVERFLOW_DATA allocation unit.
,ddios.[page_lock_wait_count] [PageLockWaitCount]--Cumulative number of times the Database Engine waited on a page lock.
,ddios.[page_lock_wait_in_ms] [PageLockWaitIn_ms]--Total number of milliseconds the Database Engine waited on a page lock.
,ddios.[row_lock_wait_count] [RowLockWaitCount]--Cumulative number of times the Database Engine waited on a row lock.
,ddios.[row_lock_wait_in_ms] [RowLockWaitIn_ms]--Total number of milliseconds the Database Engine waited on a row lock.
,ddios.[index_lock_promotion_attempt_count] [IndexLockPromotionAttemptCount]--Cumulative number of times the Database Engine tried to escalate locks.
,ddios.[index_lock_promotion_count] [IndexLockPromotionCount]--Cumulative number of times the Database Engine escalated locks.
FROM sys.all_objects ao 
INNER JOIN sys.indexes i ON ao.[object_id] = i.[object_id] 
LEFT OUTER JOIN sys.dm_db_index_operational_stats(DB_ID(),NULL,NULL,NULL) ddios ON i.[object_id] = ddios.[object_id] AND i.[index_id] = ddios.[index_id]
WHERE ao.[is_ms_shipped] = 0
--http://jongurgul.com/blog/sql-index-stats-queries
SELECT
 DB_NAME() [DatabaseName]
,ao.[object_id] [ObjectID]
,SCHEMA_NAME(ao.[schema_id]) [SchemaName]
,ao.[name] [ObjectName]
,ao.[is_ms_shipped] [IsSystemObject]
,i.[index_id] [IndexID]
,i.[name] [IndexName]
,i.[type_desc] [IndexType]
,au.[type_desc] [AllocationUnitType]
,p.[partition_number] [PartitionNumber]
,ds.[type] [IsPartition]
--,p.[data_compression_desc] [Compression]
,ds.[name] [PartitionName]
,fg.[name] [FileGroupName]
,p.[rows] [NumberOfRows]
,CASE WHEN pf.[boundary_value_on_right] = 1 AND ds.[type] = 'PS' THEN 'RIGHT'
 WHEN pf.[boundary_value_on_right] IS NULL AND ds.[type] = 'PS' THEN 'LEFT'
 ELSE NULL
 END [Range]
,prv.[value] [LowerBoundaryValue]
,prv2.[value] [UpperBoundaryValue]
,CONVERT(DECIMAL(15,3),(CASE WHEN au.[type_desc] = 'IN_ROW_DATA' AND p.[rows] > 0 THEN p.[rows]/au.[data_pages] ELSE 0 END)) [RowsPerPage]
,(CASE WHEN au.[type_desc] = 'IN_ROW_DATA' AND i.[type_desc] = 'CLUSTERED' THEN au.[used_pages]*0.20 ELSE NULL END) [TippingPointLower_Rows]
,(CASE WHEN au.[type_desc] = 'IN_ROW_DATA' AND i.[type_desc] = 'CLUSTERED' THEN au.[used_pages]*0.30 ELSE NULL END) [TippingPointUpper_Rows]
,au.[used_pages][UsedPages]
,CONVERT(DECIMAL (15,3),(CASE WHEN au.[type] <> 1 THEN au.[used_pages] WHEN p.[index_id] < 2 THEN au.[data_pages] ELSE 0 END)*0.0078125) [DataUsedSpace_MiB]
,CONVERT(DECIMAL (15,3),(au.[used_pages]-(CASE WHEN au.[type] <> 1 THEN au.[used_pages] WHEN p.[index_id] < 2 THEN au.[data_pages] ELSE 0 END))*0.0078125) [IndexUsedSpace_MiB]
,au.[data_pages] [DataPages]
,ddips.[avg_fragmentation_in_percent] [AverageFragmentationPercent]
FROM
sys.partition_functions pf
INNER JOIN sys.partition_schemes ps ON pf.[function_id] = ps.[function_id]
RIGHT OUTER JOIN sys.partitions p
INNER JOIN sys.indexes i ON p.[object_id] = i.[object_id] AND p.[index_id] = i.[index_id]
INNER JOIN sys.allocation_units au ON au.[container_id] = p.[partition_id] AND au.[type_desc] = 'IN_ROW_DATA' 
INNER JOIN sys.filegroups fg ON au.[data_space_id] = fg.[data_space_id]
INNER JOIN sys.data_spaces ds ON i.[data_space_id] = ds.[data_space_id]
INNER JOIN sys.all_objects ao ON i.[object_id] = ao.[object_id] ON ps.[data_space_id] = ds.[data_space_id]
LEFT OUTER JOIN sys.partition_range_values prv ON ps.[function_id] = prv.[function_id] AND p.[partition_number] - 1 = prv.[boundary_id]
LEFT OUTER JOIN sys.partition_range_values prv2 ON ps.[function_id] = prv2.[function_id] AND prv2.[boundary_id] = p.[partition_number]
INNER JOIN sys.dm_db_index_physical_stats(DB_ID(),NULL,NULL,NULL,'LIMITED') ddips ON i.[object_id] = ddips.[object_id] AND i.[index_id] = ddips.[index_id] AND ddips.[alloc_unit_type_desc] = 'IN_ROW_DATA'
WHERE ao.[is_ms_shipped] = 0
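
As an example of acting on the fragmentation figure above, a common rule of thumb (not an official threshold) is to reorganize indexes between roughly 5% and 30% fragmentation and rebuild above 30%. The following sketch generates the appropriate statements:

```sql
--Sketch: generate ALTER INDEX maintenance statements from physical stats.
SELECT 'ALTER INDEX ' + QUOTENAME(i.[name])
     + ' ON ' + QUOTENAME(SCHEMA_NAME(o.[schema_id])) + '.' + QUOTENAME(o.[name])
     + CASE WHEN ddips.[avg_fragmentation_in_percent] > 30 THEN ' REBUILD;'
            ELSE ' REORGANIZE;' END [MaintenanceStatement]
FROM sys.dm_db_index_physical_stats(DB_ID(),NULL,NULL,NULL,'LIMITED') ddips
INNER JOIN sys.indexes i ON ddips.[object_id] = i.[object_id] AND ddips.[index_id] = i.[index_id]
INNER JOIN sys.objects o ON i.[object_id] = o.[object_id]
WHERE ddips.[avg_fragmentation_in_percent] > 5
AND i.[index_id] > 0--skip heaps
AND o.[is_ms_shipped] = 0
```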

Surface Pro 3

My first impression of the Surface Pro 3 was that Microsoft had produced a tablet/laptop that was really tempting. The choice of which model to buy is obviously dependent on how you want to use it, but will more than likely be heavily influenced by price. As you step up the model specification it is important to note that as well as processor/memory changes there is also an associated graphics change.

Surface Pro 3 – 64 GB / Intel Core i3-4020Y HD 4200 / 4GB RAM £639.00 incl. VAT
Surface Pro 3 – 128 GB / Intel Core i5-4300U HD 4400 / 4GB RAM £749.00 incl. VAT
Surface Pro 3 – 256 GB / Intel Core i5-4300U HD 4400 / 8GB RAM £849.00 incl. VAT
Surface Pro 3 – 256 GB / Intel Core i7-4650U HD 5000 / 8GB RAM £1,239.00 incl. VAT
Surface Pro 3 – 512 GB / Intel Core i7-4650U HD 5000 / 8GB RAM £1,549.00 incl. VAT
http://www.microsoft.com/surface/en-gb/products/surface-pro-3
http://en.wikipedia.org/wiki/Microsoft_Surface_Pro_3

I went for the i5 128GB model, as I was aiming for a mid-range spend. Luckily for me, I purchased it for around £670, as I got a returned unit from a retail store.

Issues

Storage:

The one thing that infuriates me most about this device is storage. Take for example the premium i7 models: for an additional £310 you can have 512GB instead of 256GB of SSD. Ouch… really?

I am not for one moment suggesting that this device is alone in creative pricing for hardware upgrades, but what is annoying is that this one is aimed at being a laptop replacement, yet it has the same zero upgrade possibilities as other tablets.

Now if you could open the Surface easily then the internal mSATA drive could be replaced, however this is practically impossible for all but the very determined. I am not even sure anyone has successfully done this with a Surface Pro 3, as the only detailed documented attempt resulted in damage.

https://www.ifixit.com/Teardown/Microsoft+Surface+Pro+3+Teardown/26595

http://uk.crucial.com/gbr/en/ssd/series/M550
Crucial M550 512GB mSATA Internal SSD £191.99 inc. VAT
Crucial M550 256GB mSATA Internal SSD £110.39 inc. VAT
Crucial M550 128GB mSATA Internal SSD £65.99 inc. VAT

USB:

Another issue with the device is that it has only one USB 3.0 port, which is not a major point but does limit what can be plugged in without adding a hub (or dock).

The problem I came across was that the USB port did not provide enough power to keep my Blu-ray player or external hard drive connected. It kept dropping out, which I suspected was a power issue, and a quick search found that many others had encountered this. On the plus side, the power brick for the device has a USB port for power, and another quick search found the cable I needed: “USB 3.0 Y-CABLE 2x TYPE A Male to TYPE A Female”.

Fan:

The fan does occasionally become a little noisy, but on the whole is fine.

Additional purchases

A must-have purchase is the Type Cover, and it is strange not to see it bundled.

Surface Pro Type Cover

Two other items worth considering are a decent mouse and a dock for additional ports.
Surface Pro 3 Docking Station
Sculpt Comfort Mouse

My final thoughts

Now I know I bashed the Surface Pro 3 on a few points; it is, however, a very good tablet/laptop replacement. I have in the past purchased several iPads and Nexus 10/7 devices, and I can say that by far the most productive device for me is the Surface. It can do all the things my laptop did, as well as providing similar functionality to my iPad. I would like to see the mSATA drive accessible in future versions of the hardware, and it might also be good if Microsoft put a note in the box about potential power issues with high-power USB devices.