Help, I have lost a SQL data file and I have no backup

This week I stumbled across a forum post that was quite interesting in terms of recovery in dire situations. The OP had lost a SQL data file and was in a situation where they had no backups. They asked whether the missing data file could be rebuilt, which unfortunately is not possible, as the very nature of the problem is data loss.

Anyway, the loss of a data file means data loss, and the best route back is your backups. You there. Yes, reader, I am talking to you. BACKUPS, BACKUPS and did I mention BACKUPS!!! (Also do not forget to test them!!!)

When you create a database using the defaults you will get a data file (Test.mdf) and a log file (Test_log.ldf). There are several posts on the internet covering last-resort scenarios when you have lost a log file, but what happens when you lose a data file? If you lose Test.mdf then, as you can imagine, you are pretty much dead in the water.

Now if you have additional data files you can still recover the database to some degree. From SQL Server 2005 onwards a new restore process became possible, called a piecemeal restore, in which you can restore the database in pieces, provided that you restore the PRIMARY filegroup first.
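
As a reminder of what a piecemeal restore looks like when you do have backups, the shape is roughly as below. The backup file paths here are hypothetical; the key point is that the PRIMARY filegroup comes back first and the rest can follow later.

```sql
--Piecemeal (partial) restore sketch; backup paths are hypothetical.
--1. Restore the PRIMARY filegroup first and bring the database online.
RESTORE DATABASE [Test]
 FILEGROUP = 'PRIMARY'
 FROM DISK = N'C:\Backup\Test_Full.bak'
 WITH PARTIAL, NORECOVERY;
RESTORE LOG [Test] FROM DISK = N'C:\Backup\Test_Log.trn' WITH RECOVERY;
--2. Restore the remaining filegroups later, rolling each forward with
--   whatever log backups are needed to make it consistent.
RESTORE DATABASE [Test]
 FILEGROUP = 'NONPRIMARY'
 FROM DISK = N'C:\Backup\Test_Full.bak'
 WITH NORECOVERY;
RESTORE LOG [Test] FROM DISK = N'C:\Backup\Test_Log.trn' WITH RECOVERY;
```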

Well this got me thinking about the ability to offline a file, and so this post will show how to do just that.

Example on a non-production server for a missing non-primary filegroup file.

	CREATE DATABASE [Test]
	 ON  PRIMARY
	( NAME = N'Test', FILENAME = N'C:\SQL\MSSQL11.MSSQLSERVER\MSSQL\DATA\Test.mdf' , SIZE = 5120KB , FILEGROWTH = 1024KB ),
	( NAME = N'Test_PRIMARY', FILENAME = N'C:\SQL\MSSQL11.MSSQLSERVER\MSSQL\DATA\Test_PRIMARY.ndf' , SIZE = 17408KB , FILEGROWTH = 1024KB ),
	 FILEGROUP [NONPRIMARY]
	( NAME = N'Test_NONPRIMARY', FILENAME = N'C:\SQL\MSSQL11.MSSQLSERVER\MSSQL\DATA\Test_NONPRIMARY.ndf' , SIZE = 17408KB , FILEGROWTH = 1024KB )
	 LOG ON
	( NAME = N'Test_log', FILENAME = N'C:\SQL\MSSQL11.MSSQLSERVER\MSSQL\DATA\Test_log.ldf' , SIZE = 3072KB , FILEGROWTH = 10%)
	GO
	USE [Test]
	CREATE TABLE [dbo].[Found]([i] INT) ON [Primary]
	CREATE TABLE [dbo].[Lost]([i] INT) ON [NONPRIMARY]
	GO
	INSERT INTO [dbo].[Found]
	SELECT TOP (1000000)  ROW_NUMBER() OVER(ORDER BY(SELECT NULL)) FROM sys.all_objects x,sys.all_objects y
	INSERT INTO [dbo].[Lost]
	SELECT TOP (1000000)  ROW_NUMBER() OVER(ORDER BY(SELECT NULL)) FROM sys.all_objects x,sys.all_objects y

In the example above a database is created with two additional files: one sits on the PRIMARY filegroup and the other on a filegroup called NONPRIMARY. Two tables are then created and populated with 1 million rows each.

Once this is done, stop the SQL Server (MSSQLSERVER) service for the instance and delete the file called Test_NONPRIMARY.ndf before starting SQL again.
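
From an elevated command prompt that might look something like this, assuming the default instance name and the file path used in the CREATE DATABASE above:

```
NET STOP MSSQLSERVER
DEL "C:\SQL\MSSQL11.MSSQLSERVER\MSSQL\DATA\Test_NONPRIMARY.ndf"
NET START MSSQLSERVER
```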

Upon restarting, you can check the error log to confirm the damage, or run the following:

ALTER DATABASE [Test] SET ONLINE

This will throw the following error:

Msg 5120, Level 16, State 5, Line 1
Unable to open the physical file "C:\SQL\MSSQL11.MSSQLSERVER\MSSQL\DATA\Test_NONPRIMARY.ndf". Operating system error 2: "2(The system cannot find the file specified.)".
Msg 5181, Level 16, State 5, Line 1
Could not restart database "Test". Reverting to the previous status.
Msg 5069, Level 16, State 1, Line 1
ALTER DATABASE statement failed.

Surprised? Well, we did just delete that file. Let's tell SQL Server to take the file offline.

	ALTER DATABASE [Test] MODIFY FILE (NAME = 'Test_NONPRIMARY', OFFLINE);
	SELECT * FROM sys.master_files WHERE [database_id] = DB_ID('Test');
	ALTER DATABASE [Test] SET ONLINE;

Excellent, that worked, and when checked, the table on our intact files, dbo.Found, still has 1 million rows.

	SELECT COUNT(*) FROM [Test].[dbo].[Found]--1000000
	GO
	SELECT COUNT(*) FROM [Test].[dbo].[Lost]

The Lost table throws this:

Msg 8653, Level 16, State 1, Line 1
The query processor is unable to produce a plan for the table or view 'lost' because the table resides in a file group which is not online.


Example on a non-production server for a missing primary filegroup file.

When you attempt the same process as above but delete Test_PRIMARY.ndf, you will get an error when you try to take the file offline.

Msg 5077, Level 16, State 1, Line 1
Cannot change the state of non-data files or files in the primary filegroup.

However, you can trick SQL Server into doing it by creating a dummy database. Stop SQL Server and copy your database files somewhere safe. Then start SQL Server and create a dummy database following the same layout, but this time put the file that was on the PRIMARY filegroup on a dummy filegroup.


	CREATE DATABASE [Test]
	 ON  PRIMARY
	( NAME = N'Test', FILENAME = N'C:\SQL\MSSQL11.MSSQLSERVER\MSSQL\DATA\Test.mdf' , SIZE = 5120KB , FILEGROWTH = 1024KB ),
	 FILEGROUP [ItDoesNotMatter] --CHANGE the Test_PRIMARY file to be on the ItDoesNotMatter filegroup
	( NAME = N'Test_PRIMARY', FILENAME = N'C:\SQL\MSSQL11.MSSQLSERVER\MSSQL\DATA\Test_PRIMARY.ndf' , SIZE = 17408KB , FILEGROWTH = 1024KB ),
	 FILEGROUP [NONPRIMARY]
	( NAME = N'Test_NONPRIMARY', FILENAME = N'C:\SQL\MSSQL11.MSSQLSERVER\MSSQL\DATA\Test_NONPRIMARY.ndf' , SIZE = 17408KB , FILEGROWTH = 1024KB )
	 LOG ON
	( NAME = N'Test_log', FILENAME = N'C:\SQL\MSSQL11.MSSQLSERVER\MSSQL\DATA\Test_log.ldf' , SIZE = 3072KB , FILEGROWTH = 10%)
	GO

Once done, stop SQL Server and copy your original files over the dummy files, making sure to delete the dummy Test_PRIMARY.ndf, else it will error like this:

Msg 5173, Level 16, State 1, Line 1
One or more files do not match the primary file of the database. If you are attempting to attach a database, retry the operation with the correct files. If this is an existing database, the file may be corrupted and should be restored from a backup.
Msg 5181, Level 16, State 5, Line 1
Could not restart database "Test". Reverting to the previous status.
Msg 5069, Level 16, State 1, Line 1
ALTER DATABASE statement failed.

Then carry on as before, but get ready for the corruption that losing a vital piece of the database is going to cause.

ALTER DATABASE [Test] MODIFY FILE (NAME = 'Test_PRIMARY', OFFLINE);
ALTER DATABASE [Test] SET ONLINE;--The Service Broker in database "Test" will be disabled because the Service Broker GUID in the database () does not match the one in sys.databases ().
DBCC CHECKDB('Test') WITH ALL_ERRORMSGS, NO_INFOMSGS;
/*
ALTER DATABASE [Test] SET SINGLE_USER;
ALTER DATABASE [Test] SET EMERGENCY;
DBCC CHECKDB('Test',repair_allow_data_loss) WITH ALL_ERRORMSGS, NO_INFOMSGS;
ALTER DATABASE [Test] SET MULTI_USER;
GO
SELECT COUNT(*) FROM [Test].[dbo].[Found]--1000000
GO
SELECT COUNT(*) FROM [Test].[dbo].[Lost]
*/

In my example the recovery completed, but of my two tables I could only access 144,304 rows from the dbo.Found table, with dbo.Lost being inaccessible as if I had deleted it. I did not delete the file holding the dbo.Lost data, but the corruption means I might as well have. Where the corruption occurs and what is lost or saved is mostly down to luck.

The database is no longer viable for production after these steps; this post just shows a way to get data out.

I will finish by saying that this post really is the last chance saloon, and you would be crazy to do this for any other reason. (Unless you are strange and like playing with corruption in test environments, like me.)

BACKUPS BACKUPS BACKUPS


SQL Index dm_db_index_xxx_stats

There are 3 dm_db_index_xxx_stats objects that can be used to check and investigate index information.

sys.dm_db_index_usage_stats can be used to see how beneficial an index is, along with its related maintenance cost. Note that this information is reset upon instance restart.

The other two index stats objects provide information on the physical and operational statistics of each index. They can be used to check for fragmentation and page splits, and can be useful when deciding on factors such as fill factor.

sys.dm_db_index_operational_stats
sys.dm_db_index_physical_stats
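
Before the fuller queries below, a minimal fragmentation check against sys.dm_db_index_physical_stats for the current database might look like this (the LIMITED mode keeps the scan cheap; the 1000-page threshold is just an illustrative cut-off):

```sql
--Minimal fragmentation check for the current database.
SELECT
 OBJECT_NAME(ddips.[object_id]) [ObjectName]
,ddips.[index_id] [IndexID]
,ddips.[avg_fragmentation_in_percent] [AverageFragmentationPercent]
,ddips.[page_count] [PageCount]
FROM sys.dm_db_index_physical_stats(DB_ID(),NULL,NULL,NULL,'LIMITED') ddips
WHERE ddips.[page_count] > 1000 --small indexes are rarely worth defragmenting
ORDER BY ddips.[avg_fragmentation_in_percent] DESC;
```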

--http://jongurgul.com/blog/sql-index-stats-queries
SELECT
 SCHEMA_NAME(ao.[schema_id]) [SchemaName]
,ao.[object_id] [ObjectID]
,ao.[name] [ObjectName]
,ao.[is_ms_shipped] [IsSystemObject]
,i.[index_id] [IndexID]
,i.[name] [IndexName]
,i.[type_desc] [IndexType]
,ddius.[user_scans] [UserScans]
,ddius.[user_seeks] [UserSeeks]
,ddius.[user_lookups] [UserLookups]
,ddius.[user_updates] [UserUpdates]
FROM sys.all_objects ao
INNER JOIN sys.indexes i ON ao.[object_id] = i.[object_id]
LEFT OUTER JOIN sys.dm_db_index_usage_stats ddius ON i.[object_id] = ddius.[object_id] AND i.[index_id] = ddius.[index_id] AND ddius.[database_id] = DB_ID() 
--stats reset upon server restart
WHERE ao.[is_ms_shipped] = 0
--http://jongurgul.com/blog/sql-index-stats-queries
SELECT 
 SCHEMA_NAME(ao.[schema_id]) [SchemaName]
,ao.[object_id] [ObjectID]
,ao.[name] [ObjectName]
,ao.[is_ms_shipped] [IsSystemObject]
,i.[index_id] [IndexID]
,i.[name] [IndexName]
,ddios.[partition_number] [PartitionNumber]
,i.[type_desc] [IndexType]
,ddios.[leaf_insert_count]--Cumulative count of leaf-level inserts.
,ddios.[leaf_delete_count]--Cumulative count of leaf-level deletes. 
,ddios.[leaf_update_count]--Cumulative count of leaf-level updates. 
,ddios.[leaf_ghost_count]--Cumulative count of leaf-level rows that are marked as deleted, but not yet removed.
--These rows are removed by a cleanup thread at set intervals. This value does not include rows that are retained, because of an outstanding snapshot isolation transaction. 
,ddios.[nonleaf_insert_count] [NonleafInsertCount]--Cumulative count of inserts above the leaf level.
,ddios.[nonleaf_delete_count] [NonleafDeleteCount]--Cumulative count of deletes above the leaf level.
,ddios.[nonleaf_update_count] [NonleafUpdateCount]--Cumulative count of updates above the leaf level.
,ddios.[leaf_allocation_count] [LeafAllocationCount]--Cumulative count of leaf-level page allocations in the index or heap. For an index, a page allocation corresponds to a page split.
,ddios.[nonleaf_allocation_count] [NonLeafAllocationCount]--Cumulative count of page allocations caused by page splits above the leaf level. 
,ddios.[range_scan_count] [RangeScanCount]--Cumulative count of range and table scans started on the index or heap.
,ddios.[singleton_lookup_count] [SingletonLookupCount]--Cumulative count of single row retrievals from the index or heap. 
,ddios.[forwarded_fetch_count] [ForwardedFetchCount]--Count of rows that were fetched through a forwarding record. 
,ddios.[lob_fetch_in_pages] [LobFetchInPages]--Cumulative count of large object (LOB) pages retrieved from the LOB_DATA allocation unit.
,ddios.[row_overflow_fetch_in_pages] [RowOverflowFetchInPages]--Cumulative count of row-overflow data pages retrieved from the ROW_OVERFLOW_DATA allocation unit.
,ddios.[page_lock_wait_count] [PageLockWaitCount]--Cumulative number of times the Database Engine waited on a page lock.
,ddios.[page_lock_wait_in_ms] [PageLockWaitIn_ms]--Total number of milliseconds the Database Engine waited on a page lock.
,ddios.[row_lock_wait_count] [RowLockWaitCount]--Cumulative number of times the Database Engine waited on a row lock.
,ddios.[row_lock_wait_in_ms] [RowLockWaitIn_ms]--Total number of milliseconds the Database Engine waited on a row lock.
,ddios.[index_lock_promotion_attempt_count] [IndexLockPromotionAttemptCount]--Cumulative number of times the Database Engine tried to escalate locks.
,ddios.[index_lock_promotion_count] [IndexLockPromotionCount]--Cumulative number of times the Database Engine escalated locks.
FROM sys.all_objects ao 
INNER JOIN sys.indexes i ON ao.[object_id] = i.[object_id] 
LEFT OUTER JOIN sys.dm_db_index_operational_stats(DB_ID(),NULL,NULL,NULL) ddios ON i.[object_id] = ddios.[object_id] AND i.[index_id] = ddios.[index_id]
WHERE ao.[is_ms_shipped] = 0
--http://jongurgul.com/blog/sql-index-stats-queries
SELECT
 DB_NAME() [DatabaseName]
,ao.[object_id] [ObjectID]
,SCHEMA_NAME(ao.[schema_id]) [SchemaName]
,ao.[name] [ObjectName]
,ao.[is_ms_shipped] [IsSystemObject]
,i.[index_id] [IndexID]
,i.[name] [IndexName]
,i.[type_desc] [IndexType]
,au.[type_desc] [AllocationUnitType]
,p.[partition_number] [PartitionNumber]
,ds.[type] [IsPartition]
--,p.[data_compression_desc] [Compression]
,ds.[name] [PartitionName]
,fg.[name] [FileGroupName]
,p.[rows] [NumberOfRows]
,CASE WHEN pf.[boundary_value_on_right] = 1 AND ds.[type] = 'PS' THEN 'RIGHT'
 WHEN pf.[boundary_value_on_right] IS NULL AND ds.[type] = 'PS' THEN 'LEFT'
 ELSE NULL
 END [Range]
,prv.[value] [LowerBoundaryValue]
,prv2.[value] [UpperBoundaryValue]
,CONVERT(DECIMAL(15,3),(CASE WHEN au.[type_desc] = 'IN_ROW_DATA' AND p.[rows] >0 THEN p.[rows]/au.[data_pages] ELSE 0 END)) [RowsPerPage]
,(CASE WHEN au.[type_desc] = 'IN_ROW_DATA' AND i.[type_desc] = 'CLUSTERED' THEN au.[used_pages]*0.20 ELSE NULL END) [TippingPointLower_Rows]
,(CASE WHEN au.[type_desc] = 'IN_ROW_DATA' AND i.[type_desc] = 'CLUSTERED' THEN au.[used_pages]*0.30 ELSE NULL END) [TippingPointUpper_Rows]
,au.[used_pages][UsedPages]
,CONVERT(DECIMAL (15,3),(CASE WHEN au.[type] <> 1 THEN au.[used_pages] WHEN p.[index_id] < 2 THEN au.[data_pages] ELSE 0 END)*0.0078125) [DataUsedSpace_MiB]
,CONVERT(DECIMAL (15,3),(au.[used_pages]-(CASE WHEN au.[type] <> 1 THEN au.[used_pages] WHEN p.[index_id] < 2 THEN au.[data_pages] ELSE 0 END))*0.0078125) [IndexUsedSpace_MiB]
,au.[data_pages] [DataPages]
,ddips.[avg_fragmentation_in_percent] [AverageFragmentationPercent]
FROM
sys.partition_functions pf
INNER JOIN sys.partition_schemes ps ON pf.[function_id] = ps.[function_id]
RIGHT OUTER JOIN sys.partitions p
INNER JOIN sys.indexes i ON p.[object_id] = i.[object_id] AND p.[index_id] = i.[index_id]
INNER JOIN sys.allocation_units au ON au.[container_id] = p.[partition_id] AND au.[type_desc] = 'IN_ROW_DATA' 
INNER JOIN sys.filegroups fg ON au.[data_space_id] = fg.[data_space_id]
INNER JOIN sys.data_spaces ds ON i.[data_space_id] = ds.[data_space_id]
INNER JOIN sys.all_objects ao ON i.[object_id] = ao.[object_id] ON ps.[data_space_id] = ds.[data_space_id]
LEFT OUTER JOIN sys.partition_range_values prv ON ps.[function_id] = prv.[function_id] AND p.[partition_number] - 1 = prv.[boundary_id]
LEFT OUTER JOIN sys.partition_range_values prv2 ON ps.[function_id] = prv2.[function_id] AND prv2.[boundary_id] = p.[partition_number]
INNER JOIN sys.dm_db_index_physical_stats(DB_ID(),NULL,NULL,NULL,'LIMITED') ddips ON i.[object_id] = ddips.[object_id] AND i.[index_id] = ddips.[index_id] AND ddips.[alloc_unit_type_desc] = 'IN_ROW_DATA'
WHERE ao.[is_ms_shipped] = 0

Surface Pro 3

My first impression of the Surface Pro 3 was that Microsoft had produced a tablet/laptop that was really tempting. The choice of which model to buy obviously depends on how you want to use it, but will more than likely be heavily influenced by price. As you step up the model specification it is important to note that as well as processor/memory changes there is also an associated graphics change.

Surface Pro 3 – 64 GB / Intel Core i3-4020Y HD 4200 / 4GB RAM £639.00 incl. VAT
Surface Pro 3 – 128 GB / Intel Core i5-4300U HD 4400 / 4GB RAM £749.00 incl. VAT
Surface Pro 3 – 256 GB / Intel Core i5-4300U HD 4400 / 8GB RAM £849.00 incl. VAT
Surface Pro 3 – 256 GB / Intel Core i7-4650U HD 5000 / 8GB RAM £1,239.00 incl. VAT
Surface Pro 3 – 512 GB / Intel Core i7-4650U HD 5000 / 8GB RAM £1,549.00 incl. VAT
http://www.microsoft.com/surface/en-gb/products/surface-pro-3
http://en.wikipedia.org/wiki/Microsoft_Surface_Pro_3

Now I went for the i5 128GB model, as I was looking for a mid-range spend. Luckily for me I purchased it for around £670, as I got a returned unit from a retail store.

Issues

Storage:

The one thing that infuriates me most about this device is storage. Take for example the premium i7 models, for an additional £310 you can have a 512GB SSD. Ouch.. really?

I am not for one moment suggesting that this device is alone in creative pricing for hardware upgrades, but what is annoying is that this one is aimed at being a laptop replacement, yet it has the same zero upgrade possibilities as other tablets.

Now if you could open the Surface easily then the internal mSATA drive could be replaced; however, this is practically impossible for all but the very, very determined. I am not even sure anyone has successfully done this with a Surface Pro 3, as the only detailed documented attempt resulted in damage.

https://www.ifixit.com/Teardown/Microsoft+Surface+Pro+3+Teardown/26595

http://uk.crucial.com/gbr/en/ssd/series/M550
Crucial M550 512GB mSATA Internal SSD £191.99 inc. VAT
Crucial M550 256GB mSATA Internal SSD £110.39 inc. VAT
Crucial M550 128GB mSATA Internal SSD £65.99 inc. VAT

USB:

Another issue with the device is that it has only one USB 3.0 port, which is not a major point but does limit what can be plugged in without adding a hub (or dock).

The problem I came across was that the USB port did not provide enough power to keep my Blu-ray player or external hard drive connected. It kept dropping out, which I suspected was a power issue, and a quick search found that many others had encountered this. On the plus side, the power brick for the device has a USB port for power. Another quick search and I found the cable I needed: "USB 3.0 Y-CABLE 2x TYPE A Male to TYPE A Female".

Fan:

The fan does occasionally become a little noisy, but on the whole is fine.

Additional purchases

A must-have purchase is the Type Cover, and it is strange not to see this bundled.

Surface Pro Type Cover

Two other items worth considering are a decent mouse and a dock for additional ports.
Surface Pro 3 Docking Station
Sculpt Comfort Mouse

My final thoughts

Now I know I bashed the Surface Pro 3 on a few points; it is however a very good tablet/laptop replacement. I have in the past purchased several iPads and Nexus 10/7 devices, and I can say that by far the most productive device for me is the Surface. It can do all the things my laptop did as well as providing similar functionality to my iPad. I would like to see the mSATA drive accessible in future versions of the hardware, and it might also be good if Microsoft put a note in the box about potential power issues with high-power USB devices.

Find SQL Server Instances

Any environment will kick up the odd surprise, with extra servers you did not know about suddenly appearing, which is why I always like to have a look around the environment every so often to see what is there. So how do you find SQL instances without a 3rd party tool?

SQL Command Line

Luckily you can do this via the built-in query tools that Microsoft provides.

Depending on your version of SQL, the command line entries will look like this:

ISQL -L
OSQL -L
SQLCMD -L

Finding Servers that might have SQL

Active Directory

Some companies like to put "SQL" into the name of the server, or in the description field. We can look for those with this:

Import-Module ActiveDirectory;
Get-ADComputer -Filter 'Name -like "*sql*" -or Description -like "*sql*"' -Properties Description | Select-Object @{Name="ServerName";Expression={$_.Name}},@{Name="InstanceName";Expression={$null}};

While this only gets a list of servers that probably have SQL instances, we can use it to scout for the actual installs.

ManagedComputer

Using ManagedComputer only requires a computer name and it will return the instances it knows about. This can also easily be combined with the AD search shown above.

[Void][Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SqlWmiManagement");
$Computer = $Env:COMPUTERNAME
$ManagedComputer = New-Object "Microsoft.SqlServer.Management.Smo.Wmi.ManagedComputer" $Computer;
$ManagedComputer.ServerInstances | Select-Object @{Name="ServerName";Expression={$Computer}},@{Name="InstanceName";Expression={$_.Name}};

SqlDataSourceEnumerator

GetDataSources is very easy and only requires a simple call to return all the instances which it can see.

[System.Data.Sql.SqlDataSourceEnumerator]::Instance.GetDataSources()

I tend to use this version below as I only want server and instance name.

[System.Data.Sql.SqlDataSourceEnumerator]::Instance.GetDataSources() | Select-Object @{Name="ServerName";Expression={$_.ServerName}},@{Name="InstanceName";Expression={$_.InstanceName}};

Sledgehammer

The final option is to check whether the SQL Server service(s) are installed, which is basically the go-and-have-a-look-everywhere option.

Import-Module ActiveDirectory;

ForEach ($Computer in Get-ADComputer -Filter 'Name -like "*"'){
$ComputerName = $Computer.Name;
$Ping = Get-WmiObject Win32_PingStatus -Filter "Address='$ComputerName'";

 IF($Ping.StatusCode -eq 0)
 {
  Get-WmiObject -class "Win32_Service" -Computer $ComputerName|?{$_.Name -like "*MSSQL*"}|Select @{Name="ServerName";Expression={$ComputerName}},@{Name="InstanceName";Expression={$_.Name}};
 }
}

Finally…

Sometimes an instance will just refuse to show up. This can be for a variety of reasons, such as firewalls, permissions or network issues... and sometimes they are just hiding.

If you want a more complete solution with regard to searching and presentation, then you may want to look at the Microsoft Assessment and Planning (MAP) Toolkit.

A simple what is running script aka a modern sp_who2

There is a great deal of information that can be gathered to show what is currently running on SQL Server, but sometimes simple and brief is best. So here is a quick overview-gathering script with limited bells and whistles.

The most important information for user requests is shown which can then be used to expand and drill down to areas of interest.

--http://jongurgul.com/blog/simple-running-script-aka-modern-sp_who2
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT
DB_NAME(er.[database_id]) [DatabaseName]
,er.[session_id] AS [SessionID]
,er.[command] AS [CommandType]
,CASE WHEN er.[command] LIKE ('ALTER%') OR er.[command] LIKE ('CREATE%') THEN est.text
ELSE
(SUBSTRING(est.text,(er.[statement_start_offset]/2)+1,((CASE er.[statement_end_offset] WHEN -1 THEN DATALENGTH(est.text) ELSE er.[statement_end_offset] END - er.[statement_start_offset])/2)+1))
END [StatementCoreText]--http://msdn.microsoft.com/en-gb/library/ms181929.aspx
,est.text [StatementText]
,er.[open_transaction_count] [OpenTransactions]
,er.[status] AS [Status]
,CONVERT(DECIMAL(5,2),er.[percent_complete]) AS [Complete_Percent]
,CONVERT(DECIMAL(38,2),er.[total_elapsed_time] / 60000.00) AS [ElapsedTime_m]
,CONVERT(DECIMAL(38,2),er.[estimated_completion_time] / 60000.00) AS [EstimatedCompletionTime_m]
--,eqp.[query_plan] [QueryPlan]
,er.[plan_handle] [PlanHandle]
,er.[last_wait_type] [LastWait]
,er.[wait_resource] [CurrentWait]
,er.[cpu_time] [CPU]
,CONVERT(DECIMAL(15,3),(er.[granted_query_memory]/128)) [GrantedMemory_MiB]
,eqmg.[grant_time] [GrantTime]
,CONVERT(DECIMAL(15,3),eqmg.[requested_memory_kb]/1024) [RequestedMemory_MiB]
--,CONVERT(DECIMAL (15,3),eqmg.[ideal_memory_kb]/1024) [IdealMemory_MiB]
,CONVERT(DECIMAL(15,3),eqmg.[max_used_memory_kb]/1024) [MaxUsedMemory_MiB]
,CONVERT(DECIMAL(15,3),eqmg.[used_memory_kb]/1024) [UsedMemory_MiB]
,er.[logical_reads] [LogicalReads]
,er.[reads] [Reads]
,er.[writes] [Writes]
,(SELECT COUNT(*) FROM sys.dm_os_tasks ot WHERE er.[session_id] = ot.[session_id]) [NumberOfTasks]
,es.[host_name] [ConnectionHostName]
,es.[login_name] [ConnectionLoginName]
,es.[program_name] [ConnectionProgramName]
--,rgwg.[name] [WorkloadGroupName]
--,rgrp.[name] [ResourcePoolName]
--,tsu.[internal_objects_alloc_page_count]/128 [Task_UserObjectsAlloc_MiB]
--,tsu.[internal_objects_dealloc_page_count]/128 [Task_UserObjectsDeAlloc_MiB]
FROM sys.dm_exec_requests er
INNER JOIN sys.dm_exec_sessions es ON er.[session_id] = es.[session_id] AND es.[session_id] > 50
--LEFT JOIN
--(
--SELECT [session_id],
--SUM([internal_objects_alloc_page_count]) [internal_objects_alloc_page_count],
--SUM([internal_objects_dealloc_page_count]) [internal_objects_dealloc_page_count]
--FROM sys.dm_db_task_space_usage
--GROUP BY [session_id]
--) tsu
--ON tsu.session_id = es.[session_id]
--INNER JOIN sys.resource_governor_workload_groups rgwg ON es.[group_id] = rgwg.[group_id]
--INNER JOIN sys.resource_governor_resource_pools rgrp ON rgwg.[pool_id] = rgrp.[pool_id]
LEFT JOIN sys.dm_exec_query_memory_grants eqmg ON es.[session_id] = eqmg.[session_id]
CROSS APPLY sys.dm_exec_sql_text(er.[sql_handle]) est
--CROSS APPLY sys.dm_exec_query_plan(er.[plan_handle]) eqp
WHERE es.[session_id] <> @@SPID

T-SQL: Sequential Numbering

Introduction

It is as easy as 1, 2, 3. A generic numerical sequence is incremented from a known value. In this article we will look at the options available to create sequences, which have primarily used the identity property and, more recently, a new tool in the form of SEQUENCE, before finally looking at a custom solution.

1. IDENTITY (Property) (Transact-SQL)

The most familiar concept is the IDENTITY (Property) (Transact-SQL), which can be specified on a column of a table. The code below shows how an identity property is applied with a seed value and an increment value. The example will start with an ID of 1 and then increment by 1 for each subsequent insert, although these values can be changed as required.

DECLARE @d1 TABLE ([ID] BIGINT IDENTITY(1,1) PRIMARY KEY CLUSTERED,[Name]VARCHAR(50));
INSERT INTO @d1 ([Name]) VALUES ('Jon');
INSERT INTO @d1 ([Name]) VALUES ('Jo');
INSERT INTO @d1 ([Name]) VALUES ('Bob');
SELECT * FROM @d1;


The 3-row insert above has used the values 1, 2 and 3.

ID Name
1 Jon
2 Jo
3 Bob


2. CREATE SEQUENCE (Transact-SQL)

Starting with SQL Server 2012 a new option was made available via a construct called SEQUENCE (Transact-SQL). Unlike the identity property, which is bound to a single column of a table, a sequence is a stand-alone object. This removes the coupling to a particular column and allows generation of sequences that can span multiple objects, or perhaps be cyclical in nature.

A simple example is shown below to mimic the previous example, although there are several additional options that can be specified which affect not only the way the sequence behaves but also how it performs.

CREATE SEQUENCE dbo.Seq2012 AS BIGINT START WITH 1 INCREMENT BY 1;
--ALTER SEQUENCE dbo.Seq2012 RESTART WITH 1;--Restarting the sequence
DECLARE @d2 TABLE ([ID] BIGINT PRIMARY KEY CLUSTERED,[Name] VARCHAR(50))
INSERT INTO @d2 ([ID],[Name]) VALUES (NEXT VALUE FOR dbo.Seq2012,'Jon');
INSERT INTO @d2 ([ID],[Name]) VALUES (NEXT VALUE FOR dbo.Seq2012,'Jo');
INSERT INTO @d2 ([ID],[Name]) VALUES (NEXT VALUE FOR dbo.Seq2012,'Bob');
SELECT * FROM @d2;
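
For completeness, the cyclical behaviour mentioned above can be declared with MINVALUE/MAXVALUE and CYCLE. The sequence name here is hypothetical; this one wraps back to 1 after reaching 3:

```sql
CREATE SEQUENCE dbo.SeqCycle AS BIGINT
 START WITH 1 INCREMENT BY 1
 MINVALUE 1 MAXVALUE 3
 CYCLE;
SELECT NEXT VALUE FOR dbo.SeqCycle; --1
SELECT NEXT VALUE FOR dbo.SeqCycle; --2
SELECT NEXT VALUE FOR dbo.SeqCycle; --3
SELECT NEXT VALUE FOR dbo.SeqCycle; --1 (wrapped)
```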


3. Custom Sequence

If more control is needed over a sequence then in some cases the most appropriate answer is a custom solution. This needs a location to store the value and a way to increment it; the simplest way to achieve this is a dedicated table and stored procedure.

The table is best kept to a minimum with regard to implementation, as we do not want to introduce any performance degradation. The procedure is an update of the stored value based on a passed increment; however, the update itself uses syntax which may seem a little strange. The UPDATE syntax documentation shows this as

@variable = column = expression.

USE [tempdb];
GO
IF OBJECT_ID('dbo.NumberSequence','U') IS NOT NULL
DROP TABLE dbo.NumberSequence;
GO
CREATE TABLE dbo.NumberSequence(NextNumber BIGINT NOT NULL);
GO
--Seed the table with an initial value of 0, so that the first increment is a value of 1.
INSERT INTO dbo.NumberSequence(NextNumber) VALUES(0);
GO
CREATE PROCEDURE [dbo].[GetNextNumber]
@NextNumber AS BIGINT OUTPUT,
@IncrementBy BIGINT = 1
AS
SET NOCOUNT ON;
UPDATE [dbo].[NumberSequence]
SET @NextNumber = [NextNumber] = [NextNumber] + ISNULL(@IncrementBy,0);
GO
--Incrementing the custom sequence.
DECLARE @NextNumber BIGINT
EXECUTE [dbo].[GetNextNumber] @NextNumber OUTPUT,1
SELECT @NextNumber



The above solution allows the increment of a numerical value via a stored procedure call, in which we could increase the value by 1 or by 100. This enables ranges of numbers to be requested as well as individual values.
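
As a sketch of requesting a range rather than a single value, one call with an increment of 100 reserves a contiguous block, using the GetNextNumber procedure defined above. The procedure returns the upper end of the reserved range:

```sql
DECLARE @RangeEnd BIGINT;
EXECUTE [dbo].[GetNextNumber] @RangeEnd OUTPUT, 100;
--The reserved block is (@RangeEnd - 99) through @RangeEnd.
SELECT @RangeEnd - 99 [RangeStart], @RangeEnd [RangeEnd];
```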

4. No stored sequence

Another alternative is to use the highest value + 1 as the next in the sequence. However, as this relies on getting the maximum value of what could be a large table, it may have performance/concurrency implications if the corresponding inserts do not happen in a timely fashion.

DECLARE @d3 TABLE ([ID] BIGINT PRIMARY KEY CLUSTERED,[Name] VARCHAR(50))
INSERT INTO @d3 ([ID],[Name]) VALUES ((SELECT COALESCE(MAX([ID]),0)+1 FROM @d3),'Jon');
INSERT INTO @d3([ID],[Name]) VALUES ((SELECT COALESCE(MAX([ID]),0)+1 FROM @d3),'Jo');
INSERT INTO @d3 ([ID],[Name]) VALUES ((SELECT COALESCE(MAX([ID]),0)+1 FROM @d3),'Bob');
SELECT * FROM @d3;


http://social.technet.microsoft.com/wiki/contents/articles/25552.t-sql-sequential-numbering.aspx

Check status of SQL Jobs

Finding out the status of SQL Jobs is a simple task which can be accomplished via the GUI or in code using EXEC msdb.dbo.sp_help_job.

However, one issue is that the results from this procedure cannot easily be consumed. If an attempt is made to insert the results into another table, an error is thrown:

"An INSERT EXEC statement cannot be nested."

CREATE TABLE #Results
(
[job_id] UNIQUEIDENTIFIER,
[originating_server] NVARCHAR(256),
[name] SYSNAME,
[enabled] TINYINT,
[description] NVARCHAR(1024),
[start_step_id] INT,
[category] SYSNAME,
[owner] SYSNAME,
[notify_level_eventlog] INT,
[notify_level_email] INT,
[notify_level_netsend] INT,
[notify_level_page] INT,
[notify_email_operator] NVARCHAR(MAX),
[notify_netsend_operator] NVARCHAR(MAX),
[notify_page_operator] NVARCHAR(MAX),
[delete_level] INT,
[date_created] DATETIME,
[date_modified] DATETIME,
[version_number] INT,
[last_run_date] INT,
[last_run_time] INT,
[last_run_outcome] INT,
[next_run_date] INT,
[next_run_time] INT,
[next_run_schedule_id] INT,
[current_execution_status] INT,
[current_execution_step] SYSNAME,
[current_retry_attempt] INT,
[has_step] INT,
[has_schedule] INT,
[has_target] INT,
[type] INT
)

INSERT INTO #Results
EXEC msdb.dbo.sp_help_job
--Msg 8164, Level 16, State 1, Procedure sp_get_composite_job_info, Line 72
--An INSERT EXEC statement cannot be nested.

Workarounds

In order to capture the data there are several options.

0. Perhaps Microsoft will refactor the code and create a management view/tvf to retrieve jobs with status? ;-)

1. Use OPENROWSET and enable "ad hoc distributed queries" = lazy option, and it comes with security issues. There are also further issues with 2012; see the links.

http://blogs.msdn.com/b/sqlagent/archive/2012/07/12/workaround-sql-server-2012-openrowset-on-msdb-dbo-sp-help-job-throws-error.aspx
https://connect.microsoft.com/SQLServer/feedback/details/737341/sql-server-2012-openrowset-on-msdb-dbo-sp-help-job-throws-error

2. Check status via SMO

[Void][Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")|Out-Null;
[Void][Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SmoExtended")|Out-Null;
$Instance = New-Object "Microsoft.SqlServer.Management.Smo.Server" "localhost";
$Instance.JobServer.Jobs|ft

I suppose you could create a job step to put this data in a table and query that instead.

Note it is NOT possible to create a SQL CLR assembly and use SMO, as you will get one of the following errors:

System.Security.SecurityException: That assembly does not allow partially trusted callers.
System.Exception: This functionality is disabled in the SQLCLR. It is recommended that you execute from your client application.

3. Use master.dbo.xp_sqlagent_enum_jobs instead which is called as part of EXEC msdb.dbo.sp_help_job.

--http://jongurgul.com/blog/check-status-of-sql-jobs
DECLARE @t TABLE
(
[Job ID] UNIQUEIDENTIFIER,[Last Run Date] CHAR(8),[Last Run Time] CHAR(6),[Next Run Date] CHAR(8),[Next Run Time] CHAR(6),[Next Run Schedule ID] INT,
[Requested To Run] INT,[Request Source] INT,[Request Source ID] SQL_VARIANT,[Running] INT,[Current Step] INT,[Current Retry Attempt] INT,[State] INT
)

INSERT INTO @t
EXECUTE master.dbo.xp_sqlagent_enum_jobs @can_see_all_running_jobs=1,@job_owner='0x4A6F6E47757267756C'

SELECT
 es.[session_id] [SessionID]
,t.[Request Source ID] [Requester]
,t.[Job ID] [JobID]
,sj.[name] [JobName]
,sjs.[step_id] [StepID]
,sjs.[step_name] [StepName]
,CASE t.[State]
 WHEN 0 THEN 'Not idle or suspended'
 WHEN 1 THEN 'Executing'
 WHEN 2 THEN 'Waiting For Thread'
 WHEN 3 THEN 'Between Retries'
 WHEN 4 THEN 'Idle'
 WHEN 5 THEN 'Suspended'
 WHEN 6 THEN 'WaitingForStepToFinish'
 WHEN 7 THEN 'PerformingCompletionActions'
 ELSE ''
 END [State]
,sja.[start_execution_date] [FirstStepStartDate]
,sja.[last_executed_step_id] [LastStepID]
,sja.[last_executed_step_date] [LastStepStartDate]
,sja.[stop_execution_date] [LastStepEndDate]
FROM @t t
INNER JOIN msdb..sysjobs sj ON t.[Job ID] = sj.[job_id]
INNER JOIN msdb..sysjobsteps sjs ON sjs.[job_id] = sj.[job_id]
AND t.[Job ID] = sjs.[job_id]
AND t.[Current Step] = sjs.[step_id]
INNER JOIN
(
	SELECT * FROM msdb..sysjobactivity d
	WHERE EXISTS
	(
	SELECT 1
	FROM msdb..sysjobactivity l
	GROUP BY l.[job_id]
	HAVING l.[job_id] = d.[job_id]
	AND MAX(l.[start_execution_date]) = d.[start_execution_date]
	)
) sja
ON sja.[job_id] = sj.[job_id]
LEFT JOIN (SELECT SUBSTRING([program_name],30,34) p,[session_id] FROM sys.dm_exec_sessions
WHERE [program_name] LIKE 'SQLAgent - TSQL JobStep%') es
ON CAST('' AS XML).value('xs:hexBinary(substring(sql:column("es.p"),3))','VARBINARY(MAX)') = sj.[job_id]

http://social.msdn.microsoft.com/Forums/en-IN/transactsql/thread/831c2dcc-75fc-41ac-943d-6457d9fb2ca9

SQL Event for July 2014

It is almost July 2014 and I am looking forward to what will no doubt be another great SQLBits, this time in Telford.

SQLBits XII 17th – 19th July 2014 The International Centre, Telford
http://sqlbits.com/

However, before SQLBits there is another event that you can still attend if you are quick, and it is in the South West!

There are currently a few spots available to see Brent Ozar present "Are AlwaysOn Availability Groups Right for You?" in Bristol. So get in quick!

Updated 2 July 2014: no places left.

http://www.sqlserverclub.co.uk/sql-server-bristol-user-group-events.aspx

Checking what permissions they have in SQL Server

The function sys.fn_my_permissions is very useful for seeing what permissions you have, but it can be just as useful to check someone else's. For this, all that is needed is EXECUTE AS to impersonate them.

--http://jongurgul.com/blog/checking-permissions-sql-server
CREATE USER Meow WITHOUT LOGIN

SELECT SUSER_NAME() [LoginName],USER_NAME() [DatabaseLoginName];

EXECUTE AS USER = 'Meow'
--EXECUTE AS LOGIN = 'Meow'

SELECT * FROM
(
SELECT SUSER_NAME() [LoginName],USER_NAME() [DatabaseLoginName],2 [Level],ao.[name],p.[permission_name],p.[entity_name],p.[subentity_name]
FROM sys.all_objects ao
CROSS APPLY sys.fn_my_permissions(QUOTENAME(ao.[name]),'OBJECT') p
UNION ALL
SELECT SUSER_NAME() [LoginName],USER_NAME() [DatabaseLoginName],1,d.[name],p.[permission_name],p.[entity_name],p.[subentity_name]
FROM sys.databases d
CROSS APPLY sys.fn_my_permissions(QUOTENAME(d.name), 'DATABASE') p
UNION ALL
SELECT SUSER_NAME() [LoginName],USER_NAME() [DatabaseLoginName],0,@@SERVERNAME,p.[permission_name],p.[entity_name],p.[subentity_name]
FROM sys.fn_my_permissions(NULL, 'SERVER') p
) x
ORDER BY 1,2,3
REVERT 

SELECT SUSER_NAME() [LoginName],USER_NAME() [DatabaseLoginName];

DROP USER Meow