Rolling back transactions with cursors and local variables.

I’ve been doing some work on a credit card payment system lately. Obviously this needs to be robust and consistent so I’ve been working on error handling and “transactionalising” certain parts of the process.

I needed to answer a couple of questions that I had along the way. What happens to local variables when a transaction is rolled back? What happens to an open cursor when a transaction is rolled back?

The first question came about because I was looking for a way to log errors if my transaction had been rolled back. Any logging outside of the transaction would not have access to the error details while logging to tables inside the transaction would be rolled back as well (you laugh? … I’ve seen this done!). What about using a local variable to hold interim values? Would they persist across a rollback? This was pretty quick and easy to test.

DECLARE @test INT

SET @test = 1

BEGIN TRAN test

    SET @test += 10

ROLLBACK TRAN test

SELECT @test

--@test = 1 means the variable assignment is rolled back
--@test = 11 means the variable assignment persists

This returns 11 for SQL 2005, 2008, 2008 R2 and 2012 (the += syntax won't work on 2005). Local variables are outside the scope of the transaction. Intuitively this makes sense: there's no real requirement for variables to have ACID properties – they're only scoped to a particular session.
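
That opens up a simple pattern for the logging problem mentioned above. The sketch below isn't the actual payment code, just the shape of the pattern – dbo.error_log and its columns are made up for the example: capture the error details into local variables inside the CATCH block, roll back, and only then write them to the log table so the log entry itself isn't rolled back.

BEGIN TRY

	BEGIN TRAN payment

		--payment processing goes here...
		RAISERROR('BOOM',16,1)

	COMMIT TRAN payment

END TRY
BEGIN CATCH

	--local variables survive the rollback
	DECLARE @error_number INT, @error_message NVARCHAR(4000)

	SELECT	@error_number = ERROR_NUMBER(),
		@error_message = ERROR_MESSAGE()

	IF XACT_STATE() <> 0
		ROLLBACK TRAN payment

	--logged after the rollback, so this row is not undone
	INSERT dbo.error_log(error_number, error_message, logged_at)
	VALUES(@error_number, @error_message, GETDATE())

END CATCH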

The next question arose because I needed to do a couple of things within an explicit transaction. There was a business requirement to only update a single row at a time, so I chose to use a cursor within the transaction. I wrapped the logic in a try – catch block to roll back the transaction and handle any errors, and I wanted to know: what happens to the open cursor after a rollback?

Testing this I could use either CURSOR_STATUS() or sys.dm_exec_cursors() to check the status of my cursor. I stripped out all the code from my proc leaving just the control flow logic, used a simple query to populate the cursor, and faked out an error using raiserror().

DECLARE @db_name SYSNAME

BEGIN TRAN test

	BEGIN TRY

		DECLARE test_cursor CURSOR LOCAL FAST_FORWARD FOR
		SELECT [name] FROM sys.databases
			
		OPEN test_cursor
		FETCH NEXT FROM test_cursor INTO @db_name
			
		WHILE @@FETCH_STATUS = 0
		BEGIN

			PRINT @db_name

			--create a fake error
			IF @db_name = 'tempdb'
				RAISERROR('BOOM',16,1)

			FETCH NEXT FROM test_cursor INTO @db_name

		END
			
		CLOSE test_cursor
		DEALLOCATE test_cursor
			
		COMMIT TRAN test

	END TRY
	BEGIN CATCH
				
		IF XACT_STATE() <> 0 BEGIN
			ROLLBACK TRAN test;
		END

		--state of my cursor?
		SELECT CURSOR_STATUS('local','test_cursor')
		SELECT * FROM sys.dm_exec_cursors(@@SPID)

	END CATCH;

And on my system (SQL 2005, 2008, 2008R2 & 2012) I got:

[Screenshot: CURSOR_STATUS() and sys.dm_exec_cursors() results showing the cursor is still open]

Despite the cursor being declared and opened inside the transaction, it remains open after the transaction has rolled back. To me this was a little counterintuitive, but easy enough to deal with within the catch block:

IF EXISTS(SELECT 1 FROM sys.dm_exec_cursors(@@SPID))
	BEGIN
		CLOSE test_cursor
		DEALLOCATE test_cursor
	END

Or:

IF CURSOR_STATUS('local','test_cursor') > -1
	BEGIN
		CLOSE test_cursor
		DEALLOCATE test_cursor
	END

Happy transacting – but please clean up after yourself.


Quick Tip: Migrating a SQL Server Mirroring Witness server.

If you’re like me, you might be thinking “I want some more things to think about when I migrate between SQL Servers.” … No? Okay, fair enough. There’s a whole list of things to think about when migrating from one SQL instance to another including:

  1. Logins.
  2. Jobs/Alerts/Operators.
  3. Credentials/Proxies.
  4. Linked Servers.
  5. Encryption keys/certificates.
  6. SSIS packages.
  7. SSRS reports.
  8. Server configurations.
  9. Any jobs or sprocs that shell out to the OS, and use specific paths.
  10. Not to mention third party monitoring, application connection strings etc.
  11. And I imagine any DBA reading this list could add others of their own…

I came across another consideration today: the SQL Server that you are migrating from might be acting as a witness in a mirroring configuration. This isn't something that is immediately obvious – you have to look for it.

Risks: If you lose the witness server you lose the ability to fail over automatically. You will still have a partner quorum, so mirroring remains synchronised, the principal remains the principal, and manual failover is possible. As soon as a new witness is configured, automatic failover is possible again. See here for details.

Checks: My first thought was to check for a started mirroring endpoint. On the source server (the potential witness) run:

select * 
from sys.database_mirroring_endpoints
where state_desc = 'STARTED'
and role_desc = 'WITNESS';

But this isn’t conclusive – it could indicate an endpoint that had been used in the past but is not currently active. A more definitive query is:

select * from sys.database_mirroring_witnesses;

This catalog view returns one row for each witness role that the server performs. No rows returned means this server is not acting as a witness in any current mirroring configurations. See here for details.
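
You can also check from the other direction. On any instance you suspect is a principal or mirror, sys.database_mirroring shows which witness (if any) each mirrored database points at – useful for confirming exactly which databases reference the server you are about to decommission:

select 
	db_name(database_id) as [database],
	mirroring_role_desc,
	mirroring_witness_name,
	mirroring_witness_state_desc
from sys.database_mirroring
where mirroring_guid is not null;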

Mitigation: Script out the mirroring endpoint from the source witness server and run that script on the destination witness server.
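
If the original deployment script isn't to hand, the endpoint is simple enough to recreate manually. Something along these lines – the endpoint name, port, authentication and encryption settings below are placeholders and should match whatever the source witness was using:

CREATE ENDPOINT [mirroring_endpoint]
	STATE = STARTED
	AS TCP (LISTENER_PORT = 5022, LISTENER_IP = ALL)
	FOR DATABASE_MIRRORING (
		AUTHENTICATION = WINDOWS NEGOTIATE,
		ENCRYPTION = REQUIRED ALGORITHM AES,
		ROLE = WITNESS);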

Grant the connect privilege to the account being used for mirroring (or certificate if you’re using that method):

GRANT CONNECT ON ENDPOINT::[mirroring_endpoint] TO [principal];

On the principal server remove the witness from the mirroring configuration. This has to be done for each mirrored database.

ALTER DATABASE [database] SET WITNESS OFF;

Again on the principal, add in the new witness to each database mirroring configuration:

ALTER DATABASE  [database]
  SET WITNESS = 'TCP://[system-address]:[port]';

Best of luck with your migrations. I’ve got a few coming up so I probably have still more to discover.


Quick tip: Querying AD with xp_logininfo

We all have blind spots in our knowledge. We don’t know they are there and when we are thinking through an issue our mind quickly rejects some plans because of these blind spots. This happened to me recently when I was tasked with doing a security audit of a SQL instance.

Aside from a small handful of SQL logins, all access was via Windows groups. I knew that I could use xp_logininfo to discover whether a Windows user account had access to SQL Server – and via which groups.

exec xp_logininfo '[domain]\[username]','all'

But I needed to know which members belonged to which group. I was starting to look at cutting and pasting from the AD dialogs into Excel or writing LDAP queries when a colleague pointed out another xp_logininfo option that had been in my blind spot.

exec xp_logininfo '[domain]\[group]','members'

This will return all of the members of the Active Directory global group (it doesn't return information for universal groups).
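
To audit every Windows group on the instance in one pass you can loop over the group logins and capture the output of xp_logininfo into a temp table. The following is only a sketch – the temp table mirrors the five columns xp_logininfo returns (account name, type, privilege, mapped login name and permission path), and local or BUILTIN groups may raise errors and need to be excluded or handled:

declare @group sysname

create table #group_members
(
	account_name		sysname,
	[type]			varchar(10),
	[privilege]		varchar(10),
	mapped_login_name	sysname,
	permission_path		sysname null
)

declare group_cursor cursor local fast_forward for
	select [name] 
	from sys.server_principals 
	where [type] = 'G'		--windows groups
	and [name] not like 'NT %'	--skip NT AUTHORITY/NT SERVICE style principals

open group_cursor
fetch next from group_cursor into @group

while @@fetch_status = 0
begin
	insert #group_members
		exec xp_logininfo @group, 'members'

	fetch next from group_cursor into @group
end

close group_cursor
deallocate group_cursor

select * from #group_members

drop table #group_members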

Nothing a closer inspection of books online couldn’t have told me – but I’d just never noticed it. Not the greatest discovery in the world either but it made my day a lot easier.


Issue with sys.dm_db_index_usage_stats in SQL Server 2012

I’d be struggling to remember a week where I hadn’t run a query against the DMV sys.dm_db_index_usage_stats to find unused or underused indexes (or even heaps). It would have to be on the top shelf of many DBAs’ toolboxes. So I was alarmed when I came across this post from Joseph Sack over at SQL Skills.

The gist of this is that if you rebuild an index in SQL Server 2012 then that index’s stats are removed from the DMV! It doesn’t seem to be the case with reorganize, just with rebuilds. So the takeaway is to treat results from sys.dm_db_index_usage_stats with suspicion, and be aware of when you last ran a rebuild against the index. The best advice I’ve seen is to snapshot the DMV before running index maintenance.
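
A minimal version of that snapshot could be as simple as the following – the snapshot table name here is just an example, and on subsequent runs you would insert into an existing table rather than use select ... into:

--capture the usage stats for the current database before index maintenance
select 
	getdate() as capture_date,
	database_id,
	[object_id],
	index_id,
	user_seeks,
	user_scans,
	user_lookups,
	user_updates,
	last_user_seek,
	last_user_scan,
	last_user_lookup,
	last_user_update
into dbo.index_usage_snapshot
from sys.dm_db_index_usage_stats
where database_id = db_id();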

I wanted to see this for myself, and add my 2 cents’ worth, so I ran the following tests.

Create a table:

if exists(select 1 from sys.objects where name = 'table' 
          and type = 'U')
	drop table [table];
go

create table [table]
  (
  [id] int identity(1,1),
  [value] varchar(20) 
  default convert(varchar(20),datepart(millisecond,getdate()))
  );
go

Add a clustered and a non clustered index:

create clustered index ix_cls on [table](id);
go

create nonclustered index ix_noncls on [table](value);
go

Populate it with some fake data:

insert [table]([value])
	values(default);
go 1000

Run a couple of queries that use the indexes:

select * from [table] where [id] = 498;
select * from [table] where [value] = 498;

Because of the way values are populated you might need to tinker with the where clause of the second select to hit the non-clustered index, or rerun the drop and create table script until you get both indexes being used.

Check usage stats:

select 
	OBJECT_NAME(us.[object_id]) as [Table],
	i.name as [Index],
	user_seeks,
	user_scans,
	last_user_seek,
	last_user_scan
from sys.dm_db_index_usage_stats us
inner join sys.indexes i
	on us.index_id = i.index_id 
        and us.[object_id] = i.[object_id]
where database_id = db_id()
and OBJECT_NAME(us.[object_id]) = 'table'

Reorganize the indexes:

alter index ix_cls on [table] reorganize
alter index ix_noncls on [table] reorganize

Check usage stats:

select 
	OBJECT_NAME(us.[object_id]) as [Table],
	i.name as [Index],
	user_seeks,
	user_scans,
	last_user_seek,
	last_user_scan
from sys.dm_db_index_usage_stats us
inner join sys.indexes i
	on us.index_id = i.index_id 
        and us.[object_id] = i.[object_id]
where database_id = db_id()
and OBJECT_NAME(us.[object_id]) = 'table'

Rebuild the indexes:

alter index ix_cls on [table] rebuild
alter index ix_noncls on [table] rebuild

Check the stats:

select 
	OBJECT_NAME(us.[object_id]) as [Table],
	i.name as [Index],
	user_seeks,
	user_scans,
	last_user_seek,
	last_user_scan
from sys.dm_db_index_usage_stats us
inner join sys.indexes i
	on us.index_id = i.index_id 
        and us.[object_id] = i.[object_id]
where database_id = db_id()
and OBJECT_NAME(us.[object_id]) = 'table'

Results: I ran the above tests against the SQL Server Builds that I had available with the following results:

Build       Reorganize            Rebuild
9.0.5000    Usage stats persist.  Usage stats persist.
10.0.5500   Usage stats persist.  Usage stats persist.
10.50.2500  Usage stats persist.  Usage stats persist.
11.0.3000   Usage stats persist.  Usage stats cleared.
11.0.3401   Usage stats persist.  Usage stats cleared.

It seems that this bug/feature was introduced in SQL Server 2012 (at the time of writing 11.0.3401 is the latest build of SQL Server 2012 – SP1 CU8). My recommendation is that you keep this in mind when reviewing index usage.


SQL Server configuration files

Problem: You require the ability to perform a repeatable standardised installation of multiple instances of SQL Server or the ability to create as-built documentation of an installation.

Solution: I’ve made a few attempts at documenting a SQL Server install, and all the various options that were chosen at each step, with mixed success. Ultimately, following documentation is pretty susceptible to human error. While investigating something else entirely I came across the SQL Server configuration file – and I was sold.

How it works: When running through the SQL Server installation wizard, behind the scenes a configuration file is being created. This file is saved into the C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\<date_time_stamp>\ folder (YMMV – this location depends on the SQL version and whether you install to the default path). The file is created just before the final step to install SQL Server. You have the option of exiting the setup program at this point and using the configuration file, letting the install proceed and keeping the configuration file for further installs, or even copying a configuration file from another extant installation.

[Screenshot: the setup wizard showing the configuration file path]

Here is what my config file looks like for a named instance called CONFIG (with a little data scrubbing):

;SQL Server 2012 Configuration File
[OPTIONS]

; Specifies a Setup work flow, like INSTALL, UNINSTALL, or UPGRADE. This is a required parameter. 

ACTION="Install"

; Detailed help for command line argument ENU has not been defined yet. 

ENU="True"

; Parameter that controls the user interface behavior. Valid values are Normal for the full UI,AutoAdvance for a simplied UI, and EnableUIOnServerCore for bypassing Server Core setup GUI block. 

UIMODE="Normal"

; Setup will not display any user interface. 

QUIET="False"

; Setup will display progress only, without any user interaction. 

QUIETSIMPLE="False"

; Specify whether SQL Server Setup should discover and include product updates. The valid values are True and False or 1 and 0. By default SQL Server Setup will include updates that are found. 

UpdateEnabled="False"

; Specifies features to install, uninstall, or upgrade. The list of top-level features include SQL, AS, RS, IS, MDS, and Tools. The SQL feature will install the Database Engine, Replication, Full-Text, and Data Quality Services (DQS) server. The Tools feature will install Management Tools, Books online components, SQL Server Data Tools, and other shared components. 

FEATURES=SQLENGINE,REPLICATION

; Specify the location where SQL Server Setup will obtain product updates. The valid values are "MU" to search Microsoft Update, a valid folder path, a relative path such as .\MyUpdates or a UNC share. By default SQL Server Setup will search Microsoft Update or a Windows Update service through the Window Server Update Services. 

UpdateSource="MU"

; Displays the command line parameters usage 

HELP="False"

; Specifies that the detailed Setup log should be piped to the console. 

INDICATEPROGRESS="False"

; Specifies that Setup should install into WOW64. This command line argument is not supported on an IA64 or a 32-bit system. 

X86="False"

; Specify the root installation directory for shared components.  This directory remains unchanged after shared components are already installed. 

INSTALLSHAREDDIR="C:\Program Files\Microsoft SQL Server"

; Specify the root installation directory for the WOW64 shared components.  This directory remains unchanged after WOW64 shared components are already installed. 

INSTALLSHAREDWOWDIR="C:\Program Files (x86)\Microsoft SQL Server"

; Specify a default or named instance. MSSQLSERVER is the default instance for non-Express editions and SQLExpress for Express editions. This parameter is required when installing the SQL Server Database Engine (SQL), Analysis Services (AS), or Reporting Services (RS). 

INSTANCENAME="CONFIG"

; Specify the Instance ID for the SQL Server features you have specified. SQL Server directory structure, registry structure, and service names will incorporate the instance ID of the SQL Server instance. 

INSTANCEID="CONFIG"

; Specify that SQL Server feature usage data can be collected and sent to Microsoft. Specify 1 or True to enable and 0 or False to disable this feature. 

SQMREPORTING="False"

; Specify if errors can be reported to Microsoft to improve future SQL Server releases. Specify 1 or True to enable and 0 or False to disable this feature. 

ERRORREPORTING="False"

; Specify the installation directory. 

INSTANCEDIR="C:\Program Files\Microsoft SQL Server"

; Agent account name 

AGTSVCACCOUNT="NT Service\SQLAgent$CONFIG"

; Auto-start service after installation.  

AGTSVCSTARTUPTYPE="Manual"

; CM brick TCP communication port 

COMMFABRICPORT="0"

; How matrix will use private networks 

COMMFABRICNETWORKLEVEL="0"

; How inter brick communication will be protected 

COMMFABRICENCRYPTION="0"

; TCP port used by the CM brick 

MATRIXCMBRICKCOMMPORT="0"

; Startup type for the SQL Server service. 

SQLSVCSTARTUPTYPE="Automatic"

; Level to enable FILESTREAM feature at (0, 1, 2 or 3). 

FILESTREAMLEVEL="0"

; Set to "1" to enable RANU for SQL Server Express. 

ENABLERANU="False"

; Specifies a Windows collation or an SQL collation to use for the Database Engine. 

SQLCOLLATION="SQL_Latin1_General_CP1_CI_AS"

; Account for SQL Server service: Domain\User or system account. 

SQLSVCACCOUNT="NT Service\MSSQL$CONFIG"

; Windows account(s) to provision as SQL Server system administrators. 

SQLSYSADMINACCOUNTS=""

; The default is Windows Authentication. Use "SQL" for Mixed Mode Authentication. 

SECURITYMODE="SQL"

; Provision current user as a Database Engine system administrator for SQL Server 2012 Express. 

ADDCURRENTUSERASSQLADMIN="False"

; Specify 0 to disable or 1 to enable the TCP/IP protocol. 

TCPENABLED="0"

; Specify 0 to disable or 1 to enable the Named Pipes protocol. 

NPENABLED="0"

; Startup type for Browser Service. 

BROWSERSVCSTARTUPTYPE="Automatic"

Essentially this is a list of the various parameters and the options chosen for the installation. I won’t go through all the parameters – SQL Server installations are well documented elsewhere. I encourage you to seek out the config files of any SQL installations and scan through them to get a feel for the options.

Installing SQL Server using the configuration file: To use the configuration file for a SQL Server installation you run setup.exe from the command line. One of the options for the command is /ConfigurationFile, which accepts a filepath/filename argument.

No passwords are stored in the configuration files (and you wouldn’t want that would you?) so these will need to be passed in at the command line.

If you want an unattended install you can also pass in the /Q or /QS switches. /Q is Quiet mode and suppresses the user interface; /QS is QuietSimple and displays the user interface but does not require any user input. These parameters are also specified in the configuration file and can be set there. Any parameters passed in at the command line will override parameters in the file.

You will have to accept the license terms – you can do this via the command line by supplying the /IACCEPTSQLSERVERLICENSETERMS option.

Here is a (scrubbed) example of a command line install.

Setup.exe /SQLSVCPASSWORD="password" /AGTSVCPASSWORD="password" 
/SAPWD="password" /ConfigurationFile=C:\Temp\ConfigurationFile.ini 
/IAcceptSQLServerLicenseTerms

Conclusion: Whether you are looking for a way to standardise multiple SQL Server instances, automate SQL Server installations, or document and audit existing installations, configuration files might be what you are looking for.


Conditional Aggregates

The Problem: A legacy table contains amounts and a char column indicating whether the amount is a credit or a debit. We want an aggregate (over a given range) of the amounts. We need to add the credits and subtract the debits.

The Solution(s): Two possible approaches to the problem suggest themselves. One is to sum all of the credits and all of the debits and then subtract the total debit from the total credit. The second approach is to include a case statement inside the aggregate function to add or subtract the given row’s amount based on the credit/debit column.

The Code: Implement a cut down version of the table with some contrived sample data:

create table tbl_ledger
(
	id int identity(1,1),
	credit_debit char(1),
	amount money,
	[description] varchar(128) 
)

insert tbl_ledger(credit_debit,amount,[description])
values 
	('C',10,'Payment'),
	('D',10,'Refund'),
	('C',10,'Payment'),
	('C',10,'Payment'),
	('D',10,'Refund'),
	('C',10,'Payment'),
	('C',10,'Payment'),
	('D',10,'Refund'),
	('C',10,'Payment'),
	('C',10,'Payment')

This table has 10 rows, 7 are $10 credits and 3 are $10 debits so we would expect the sum to return $40 ($70 credit – $30 debit).

Implement option 1, subtract the aggregates (this could be done a number of ways, including using variables to hold interim results, or common table expressions. I’ve chosen a sub query as I think it reads well. The query plans for sub queries and CTEs are similar; using variables muddies the waters a bit):

select sum(amount) - (select sum(amount) as [debit]
			from tbl_ledger 
			where credit_debit = 'D'
			)
as [credit]
from tbl_ledger 
where credit_debit = 'C'

As hoped this returns 40.
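
For completeness, the same calculation written with common table expressions (one of the alternatives mentioned above) looks something like this – as noted, the plan it produces is similar to the sub query version:

with credits([credit]) as
(
	select sum(amount) from tbl_ledger where credit_debit = 'C'
),
debits([debit]) as
(
	select sum(amount) from tbl_ledger where credit_debit = 'D'
)
select c.[credit] - d.[debit] as [total cash]
from credits c
cross join debits d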

Implement option 2 using a case statement inside the sum function:

select
	sum(case
			when credit_debit = 'C' then amount
			when credit_debit = 'D' then -1 * amount
		end) as [total cash]
from tbl_ledger

This again returns a correct result.

The Analysis: To me option 2 is a more elegantly coded solution, but how does it stack up performance-wise? To get some measurable results rerun the insert multiple times.

insert tbl_ledger(credit_debit,amount,[description])
values 
	('C',10,'Payment'),
	('D',10,'Refund'),
	('C',10,'Payment'),
	('C',10,'Payment'),
	('D',10,'Refund'),
	('C',10,'Payment'),
	('C',10,'Payment'),
	('D',10,'Refund'),
	('C',10,'Payment'),
	('C',10,'Payment')
go 100000

And run the two queries side by side with statistics on:

set statistics io on
set statistics time on

select
	sum(case
			when credit_debit = 'C' then amount
			when credit_debit = 'D' then -1 * amount
		end) as [total cash]
from tbl_ledger

select sum(amount) - (select sum(amount) as [debit]
						from tbl_ledger 
						where credit_debit = 'D'
						)
as [credit]
from tbl_ledger 
where credit_debit = 'C' 

set statistics io off
set statistics time off

I got the following:

(1 row(s) affected)
Table ‘tbl_ledger’. Scan count 1, logical reads 4167, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times:
CPU time = 281 ms, elapsed time = 295 ms.

(1 row(s) affected)
Table ‘tbl_ledger’. Scan count 10, logical reads 8334, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times:
CPU time = 483 ms, elapsed time = 130 ms.

With the execution plans:

[Screenshot: execution plans for the two queries – the sub query version uses parallelism]

The sub query outperforms the conditional aggregate primarily because the query can be parallelised. The conditional aggregate does a single table scan versus two table scans for the sub query – 4167 logical reads versus 8334 logical reads – but when I ran the tests against cold caches the read-ahead mechanism pulled all data pages into cache for both queries, so I didn’t notice a significant change in the comparative result.

Running the second query with a maxdop hint brings the results closer:

(1 row(s) affected)
Table ‘tbl_ledger’. Scan count 1, logical reads 4167, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

(1 row(s) affected)

SQL Server Execution Times:
CPU time = 328 ms, elapsed time = 347 ms.

(1 row(s) affected)
Table ‘tbl_ledger’. Scan count 2, logical reads 8334, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

(1 row(s) affected)

SQL Server Execution Times:
CPU time = 249 ms, elapsed time = 296 ms.

And drops the parallel operators:

[Screenshot: execution plans with the MAXDOP hint – no parallel operators]

The Conclusion: After analysis it seems that the sub query outperforms the conditional aggregate option – even if we suppress parallelism. That said, these tests were on a cut-down table, and a wider table with more rows might bring IO more into play – especially if indexes are added to the mix. Adding an index on the credit_debit column with amount as an included column speeds up the sub query even more – replacing the two table scans with index seeks.
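
For reference, that index would look something like this:

create nonclustered index ix_credit_debit
on tbl_ledger(credit_debit)
include(amount)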


My Favourite SSMS productivity tips

It’s a well-known fact that any sitcom that runs for a certain length of time will release a clip show. Similarly, any blog about SQL Server will end up doing a top tips post for SQL Server Management Studio. Here then, in no particular order, are mine:

1. Filtered views in object explorer.

This is a great feature for navigating around large unfamiliar databases. Available on tables, views, stored procedures and functions. Simply right click the node in the object explorer:

 

[Screenshots: opening Filter Settings from the object explorer right-click menu]

Enter your filter conditions into the filter setting dialog and you can zoom in on the objects you need quickly:

[Screenshot: the Filter Settings dialog]

 

Object explorer will helpfully indicate that the view is filtered so you don’t think you’ve lost a bunch of tables!

 

[Screenshot: object explorer indicating the node is filtered]

 

2. Object explorer details.

I’ve overlooked this in the past – but it’s quite handy for a quick look at object properties. Available from the view menu or by pressing [F7]. The objects in this view can be filtered using the tip above. By right clicking on the column headers you can select the properties you want to see.

[Screenshot: Object Explorer Details – choosing columns via the header right-click menu]

 

Column headings can be rearranged by dragging and dropping the headers and the rows can be sorted ascending or descending by column.

[Screenshot: Object Explorer Details with rearranged and sorted columns]

 

I use this a lot to get a quick look at what is consuming space in the database. What are my biggest tables? How many rows in a given table?

3. Drag column names.

We all know that we can drag individual items from the object explorer into the Query window – but you can also drag the column node and get a nice comma separated list of column names.

[Screenshot: dragging the Columns node into the query window]

[Screenshot: the resulting comma-separated list of column names]

 

All right – it’s a bit annoying having them all on one line like that. But while the text is still highlighted press [ctrl]-[h] to get the find and replace dialog:

[Screenshot: the find and replace dialog]

 

Make sure you change the ‘look in’ drop down to ‘selection’ (or you can get some strange results). Expand the find options node and select Use > Regular expressions. In ‘find what’ put a comma followed by a space (", ") and in ‘replace with’ put a comma followed by a newline (",\n").

[Screenshot: the find and replace settings with regular expressions enabled]

 

For an extremely comprehensive look at using regular expressions in SSMS look here.

4. Block move.

Finally, if you have your column names each on a separate row from the tip above, but you want them indented, you could manually enter a [tab] at the start of each row (like the good old days) or you could use a regular expression find and replace as above. But, let’s face it, no one wants to deal with those things if we don’t have to. The third option is to hold the [alt] key down and drag across the first few characters of your column names and down the rows. Once you have the block selected simply press [tab] to indent.

[Screenshot: block selection using the alt key, ready to indent]

There’s a million tips and tricks in SSMS. I have shown you 4 of my current favourites. Feel free to share yours…


Apply SP1 to SQL Server 2012 on active/active Windows Server 2012 cluster.

A while ago I blogged about my approach to service packing a SQL Server 2008 R2 active/active cluster. Time to apply SP1 to SQL 2012 on top of a Windows 2012 failover cluster. The basic approach is similar but there are a couple of subtle differences.

The key to the approach with 2008 was to patch the passive node, while not allowing the SQL instances the option of failing back to the passive node. In Windows Server 2008 this was possible through the Failover Cluster Administration GUI tool. I haven’t seen any literature suggesting that this approach is no longer valid – although I have seen moves towards ‘cluster-aware’ updating. But this is only available from SQL 2012 SP1, so to apply SP1 I’m going to stick to my tried and tested method.

1. The management of possible failover partners is no longer available via the GUI and needs to be done with cluster.exe. Cluster.exe is not required for configuring a Server 2012 failover cluster, so it may or may not be installed in your environment. You can add the Failover Cluster Command Interface feature using the Server Manager tool.

[Screenshot: adding the Failover Cluster Command Interface feature in Server Manager]

2. Make a note of all node names (and/or IP addresses), SQL Server virtual names along with preferred nodes. If there are more than three nodes you may need to also take note of possible owners for each SQL resource group. For my example assume that I have a cluster with node1 and node2, SQL1 normally lives on node1 and SQL2 normally lives on node2.

3. To start with a clean slate and ensure any previous updates are completed, both nodes should be restarted if possible. Choose the physical node that you want to patch second and restart that node (in my example node2).

4. Restart the node you want to patch first (node1). This will mean that both active SQL instances are now running on node2. Some restarts will be essential, but you could avoid the first two restarts if you need to keep downtime to a minimum and just fail SQL1 over to node2. The main point here is to always patch a passive node.

5. Use cluster.exe to remove node1 from the possible owners lists of SQL1 and SQL2. This means that neither SQL instance can fail over to node1 while it is being patched. The relevant commands are:

  • Check the possible owners: cluster.exe resource "SQL1" /listowners
  • Remove passive node from possible owner list of all instances: cluster.exe resource "SQL1" /RemoveOwner:node1
  • Confirm: cluster.exe resource "SQL1" /listowners

(repeat for SQL2)

6. Run the service pack executable on node1 and apply SP1 to both SQL instances.

7. Restart node1.

8. Add node1 back into the possible owners lists of SQL1 and SQL2 using:

  • cluster.exe resource "SQL1" /AddOwner:node1

(repeat for SQL2)

9. Fail both instances over to node1. You could do this by restarting node2 or by using PowerShell to suspend node2 with the drain parameter. Once the instances have successfully moved and you have removed the possible owners, resume the node so it can be patched.

10. Repeat steps 5 – 8 on node2.

11. Add node2 back into the possible owners lists of SQL1 and SQL2 and fail both instances over to node2. Check that the build level is correct and review the SQL Server error logs.

12. Fail SQL1 over to node1. Check build levels and SQL Server error logs.
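
A quick way to confirm the build level on each instance is to query the server properties – after applying SP1 you should see 11.0.3000:

select 
	serverproperty('ProductVersion') as [Build],
	serverproperty('ProductLevel') as [Level],
	serverproperty('Edition') as [Edition];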

I want to acknowledge Sean Gallardy, who helped clarify my thinking by responding to my MSDN forum question.


Quick tip: Could not load package “xxx” because of error 0xC0014062

I struck this error in one of my SSIS packages. I found lots of tips and suggestions online for the error, but none of the solutions worked for me. So I want to add my solution to the mix so that it might give the next person searching an extra option to try.

Setup: I have a simple SSIS package that loads a couple of CSV files into SQL Server tables and then calls stored procedures to clean and scrub the data. Everything works on my development server, so I deploy to production by saving the package to the SQL Server and scheduling a job to run the package (I’m using the 2012 database engine and SSIS). But the job fails with the following error:

Could not load package “xxx” because of error 0xC0014062.
Description: The LoadFromSQLServer method has encountered OLE DB error code 0x80004005 (Login timeout expired).
The SQL statement that was issued has failed.

Investigation: My first thought is that this is some sort of permissions issue, and a lot of my search results seemed to suggest either that or a mistake in server name or other parameters.

- I was able to run the package manually in production so I reasoned that the agent service account might be the issue.

- Check that the SQL Agent service account has sufficient permissions to the various folders. I wasn’t able to get onto the server and use Configuration Manager for some reason, so I used the following to find the service account:

DECLARE @sn NVARCHAR(128);

EXEC master.dbo.xp_regread
    'HKEY_LOCAL_MACHINE',
    'SYSTEM\CurrentControlSet\services\SQLSERVERAGENT',
    'ObjectName', 
    @sn OUTPUT;

SELECT @sn;

- Check that the agent account had sufficient permissions to execute the stored procedures.

use [Database]
go

grant execute on object::[procedure 1] to [agent service account]
grant execute on object::[procedure 2] to [agent service account]
...

- Try again. Still not working!

- After much scratching of heads and comparing the failing job configuration with that of successful jobs it turned out that using the SQL Server’s fully qualified domain name in the Package configuration tab fixed the issue!

[Screenshot: the job step Package tab using the SQL Server’s fully qualified domain name]

Conclusion: I don’t really know if this is a quirk of my current AD environment, or a requirement of SSIS 2012. It worked for me, it might work for you. If anyone can shed any further light on what’s going on here – please do.


Duplicate Statistics

The Setup:

Standard best practice is to have auto create and auto update statistics set for SQL Server databases. But there is no corresponding setting to remove statistics that are no longer necessary. Unless explicitly managed by the DBA these statistics could be causing excess resource usage.

Demo:

Create a new database and port some data across from the AdventureWorks sample database.

if db_id('stats_duplicate') is not null
	drop database stats_duplicate
go

create database stats_duplicate
go

use stats_duplicate
go

select * into dbo.only_table
from AdventureWorks2012.Sales.SalesOrderDetail

Verify that auto create and auto update statistics are set as expected.

select 
	is_auto_create_stats_on
	, is_auto_update_stats_on
from sys.databases 
where name = 'stats_duplicate'

sys.indexes shows a single row for the new table (the heap) and there are no statistics.

select * from sys.indexes
where [object_id] = object_id('dbo.only_table')

select * from sys.stats
where [object_id] = object_id('dbo.only_table')

[Screenshot: sys.indexes returns only the heap row and sys.stats returns no rows]

Now we run a query with a predicate.

select * from dbo.only_table
where ProductID = 715

The query optimiser will automatically create a statistic on the ProductID column.

[Screenshot: sys.stats now shows the auto-created statistic _WA_Sys_00000005_0EA330E9]

And we can examine the statistic:

dbcc show_statistics('dbo.only_table',_WA_Sys_00000005_0EA330E9)

[Screenshot: DBCC SHOW_STATISTICS output for the auto-created statistic, including the Updated date]

We can see that the statistic is created on the ProductID column. Take note of when the statistic was last updated. (I know – I should have better things to do at 9:45pm!)

Now if we create an index on the ProductID column this will also create a statistic on the column.

create nonclustered index ix_productid 
on dbo.only_table(ProductID)

Which we can examine:

dbcc show_statistics('dbo.only_table',ix_productid)

[Screenshot: DBCC SHOW_STATISTICS output for ix_productid, including the Updated date]

Again – take note of the updated time.

Now let’s run an update of the ProductID column against all rows. This will mark the statistics as stale. If we run a query that will use the statistics this will trigger an automatic update.

update dbo.only_table
set ProductID += 1

select * from dbo.only_table
where ProductID = 715

And look at the stat headers.

dbcc show_statistics('dbo.only_table',_WA_Sys_00000005_0EA330E9) 
with stat_header

dbcc show_statistics('dbo.only_table',ix_productid) 
with stat_header

[Screenshot: the stat headers showing both statistics now have a new Updated date]

We can see by the dates that both statistics were updated even though only one will ever be used. We can confirm this by using a technique from Paul White that can show what statistics are considered, and what statistics are used for a given query.

DBCC FREEPROCCACHE
go
select * from dbo.only_table
where ProductID = 715
OPTION
(
    QUERYTRACEON 3604,
    QUERYTRACEON 9292,
    QUERYTRACEON 9204
)

Conclusion: Automatically created statistics can be rendered obsolete by subsequent index creations, but they will still be updated and managed by SQL Server – consuming resources. I’ll leave you with a script that I’ve based on ideas from Erin Stellato that will pick out any duplicate statistics in a given database.

with stats_on_indexes([object_id],[table_column_id],[index_name])
	as(
		select
			o.[object_id] as [object_id],
			ic.[column_id] as [table_column_id],
			i.name
		from sys.indexes i
		join sys.objects o on i.[object_id] = o.[object_id]
		join sys.stats st on i.[object_id] = st.[object_id] and i.name = st.name
		join sys.index_columns ic on i.index_id = ic.index_id and i.[object_id] = ic.[object_id]
		where o.is_ms_shipped = 0
		and i.has_filter = 0
		and ic.index_column_id = 1
	)
select
	o.[object_id] as [ID],
	o.name as [Table],
	c.name as [Column],
	s.name as [AutoCreatedStatistic],
	stats_on_indexes.index_name as [Index]
from sys.stats s
	join sys.stats_columns sc
		on s.stats_id = sc.stats_id and s.[object_id] = sc.[object_id]
	join sys.objects o 
		on sc.[object_id] = o.[object_id]
	join sys.columns c 
		on sc.[object_id] = c.[object_id] and sc.column_id = c.column_id
	join stats_on_indexes 
		on o.[object_id] = stats_on_indexes.[object_id] and stats_on_indexes.table_column_id = c.column_id
where s.auto_created = 1
and s.has_filter = 0