Migrating SQL Server logins

Once upon a migration I would use sp_help_revlogin to migrate logins and then remap the SQL Server logins using sp_change_users_login. During some recent migration work I realised that SQL Server Management Studio could do most of the work of sp_help_revlogin. I still needed sp_change_users_login – but I also came across a couple of other tips that solved a couple of issues.

Note: The terminology can be a bit ambiguous and confusing here. When I talk about logins I mean all security principals that can be used to access the SQL Server instance, either Windows or SQL principals. When I talk about SQL server logins – I mean logins that use SQL authentication, as opposed to logins that use Windows authentication.

SSMS: To script out all logins from a SQL Server instance simply select the logins node in the security node of object explorer and open the object explorer details window ([F7]). This window allows you to shift-click to multi-select logins and right-click to script them out as desired. One caveat here is that the script to create the SQL server logins will generate random passwords – these need to be replaced with the correct passwords.



Mapping SQL logins: Logins based on Windows AD accounts will remap automatically across SQL instances in the same domain because the SIDs are the same – set by Windows rather than SQL Server. But SQL server logins are assigned a random SID when created. Because the mapping between a login and a database user is based on SIDs, the Windows logins will remap but the SQL server logins will not. The well-known procedure sp_change_users_login can be run in a database context to identify these ‘orphaned’ users and remap them.
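A sketch of the remapping, run in the context of each restored database (the database and user names here are mine, purely for illustration):

use [MigratedDatabase];  --hypothetical database name

--report any orphaned users
exec sp_change_users_login 'Report';

--remap a database user to the SQL login with the same name
exec sp_change_users_login 'Auto_Fix', 'app_user';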

Creating SQL logins with specific SIDs: I had a situation recently where I migrated a data warehouse server. Production databases were restored to the server each night, data scrubbed and then set to read-only. I needed a login on the SQL Server to be mapped to a database user so that queries could be run against the warehouse. In this case the login I had was a SQL server login – so when I migrated instances the login SID did not match the SID of the user in the production databases. sp_change_users_login won’t work on a read-only database because the metadata can’t be written. My options were to remap the database users to the login after every restore but before setting the database to read-only, or to create the login on the server with a SID matching the SID on the production boxes. It turns out that a SQL login can be created with a specific SID. The following query can be run against the source server master database and will produce create scripts for all logins with explicit creation of SIDs – which can be run as required on the destination server.

select 'create login [' + p.name + '] ' 
+ case 
	when p.type in('U','G') then 'from windows ' 
	else '' 
  end
+ 'with ' 
+ case 
	when p.type = 'S' then 'password = ' 
		+ master.sys.fn_varbintohexstr(l.password_hash) 
		+ ' hashed, ' 
		+ 'sid = ' 
		+ master.sys.fn_varbintohexstr(l.sid) 
		+ ', check_expiration = ' 
		+ case 
			when l.is_expiration_checked > 0 then 'ON, ' 
			else 'OFF, ' 
		  end 
		+ 'check_policy = ' 
		+ case 
			when l.is_policy_checked > 0 then 'ON, ' 
			else 'OFF, ' 
		  end 
	else '' 
  end
+ 'default_database = [' 
+ p.default_database_name 
+ case 
	when len(p.default_language_name) > 0 then '], default_language = [' 
		+ p.default_language_name 
		+ ']'
	else ']' 
  end
FROM sys.server_principals p
	LEFT JOIN sys.sql_logins l
	ON p.principal_id = l.principal_id
WHERE p.type in('S','U','G')

Forgotten passwords: Another issue I came across in a recent migration was passwords that had fallen through the cracks and been forgotten. After spending some fruitless hours trying to get hashcat to work properly – and failing – I revisited the query above. It turns out I didn’t need to recover passwords – I could recreate the logins using the password_hash. To convince myself of this I created a login to test with.

USE [master]
CREATE LOGIN [lostpassword] WITH PASSWORD = 'I_f0rg0t!';

Run the login script from above which for [lostpassword] gives:

create login [lostpassword] 
with password = 0x02002c4b16cfbc80f3d07ab3d67042262a87370fc7f0a6300badcddd999ec304c9e69137d5f90b307a230330a25e41ca3e9d3a2664d2782aafe1611608afedfabf67b44ef22e hashed
, sid = 0xed5cd87e54a8bc43b7ff9435f052e2cd
, check_expiration = OFF
, check_policy = OFF
, default_database = [master]
, default_language = [us_english]

Drop the login that I originally created.

DROP LOGIN [lostpassword]

And recreate the login using the generated script.

create login [lostpassword] 
with password = 0x02002c4b16cfbc80f3d07ab3d67042262a87370fc7f0a6300badcddd999ec304c9e69137d5f90b307a230330a25e41ca3e9d3a2664d2782aafe1611608afedfabf67b44ef22e hashed
, sid = 0xed5cd87e54a8bc43b7ff9435f052e2cd
, check_expiration = OFF
, check_policy = OFF
, default_database = [master]
, default_language = [us_english]

And confirm that the recreated login’s password matches the original.

SELECT name FROM sys.sql_logins 
WHERE PWDCOMPARE('I_f0rg0t!', password_hash) = 1 ;

Which it does – the query returns [lostpassword].


Another tool for the migration toolkit.

Posted in Security, SQL Server, T-SQL, Uncategorized | 7 Comments

Rolling back transactions with cursors and local variables.

I’ve been doing some work on a credit card payment system lately. Obviously this needs to be robust and consistent so I’ve been working on error handling and “transactionalising” certain parts of the process.

I needed to answer a couple of questions that I had along the way. What happens to local variables when a transaction is rolled back? What happens to an open cursor when a transaction is rolled back?

The first question came about because I was looking for a way to log errors if my transaction had been rolled back. Any logging outside of the transaction would not have access to the error details while logging to tables inside the transaction would be rolled back as well (you laugh? … I’ve seen this done!). What about using a local variable to hold interim values? Would they persist across a rollback? This was pretty quick and easy to test.


DECLARE @test int
SET @test = 1

BEGIN TRAN
    SET @test += 10
ROLLBACK TRAN

SELECT @test

--@test = 1 means the variable assignment is rolled back
--@test = 11 means the variable assignment persists

This returns 11 for SQL 2005, 2008, 2008 R2 and 2012 (the += syntax won’t work on 2005 – use SET @test = @test + 10 there). Local variables are outside of the scope of the transaction. Intuitively this makes sense: there’s no real requirement for variables to have ACID properties – they’re only scoped to a particular session.
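Table variables behave the same way, which makes them a handy vehicle for carrying error details out of a doomed transaction so they can be logged afterwards. A minimal sketch (the table definition and values here are illustrative):

DECLARE @error_log table (error_number int, error_message nvarchar(4000));

BEGIN TRAN
    INSERT @error_log VALUES (50000, N'something went wrong');
ROLLBACK TRAN

--the row survives the rollback and can be written to a permanent logging table
SELECT * FROM @error_log;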

The next question arose because I needed to do a couple of things within an explicit transaction. There was a business requirement to only update a single row at a time so I chose to use a cursor within the transaction. I wrapped the logic in a try – catch block to roll back the transaction and handle any errors, and I wanted to know what would happen to the open cursor after a rollback.

Testing this I could use either CURSOR_STATUS() or sys.dm_exec_cursors() to check the status of my cursor. I stripped out all the code from my proc leaving just the control flow logic, used a simple query to populate the cursor, and faked out an error using raiserror().




DECLARE @db_name sysname

BEGIN TRY
	BEGIN TRAN

		DECLARE test_cursor CURSOR LOCAL FOR
		SELECT [name] FROM sys.databases
		OPEN test_cursor
		FETCH NEXT FROM test_cursor INTO @db_name
		WHILE @@FETCH_STATUS = 0
		BEGIN
			FETCH NEXT FROM test_cursor INTO @db_name
			PRINT @db_name

			--create a fake error
			IF @db_name = 'tempdb'
				RAISERROR('fake error', 16, 1)
		END
		CLOSE test_cursor
		DEALLOCATE test_cursor

	COMMIT TRAN
END TRY
BEGIN CATCH
	ROLLBACK TRAN

	--state of my cursor?
	SELECT CURSOR_STATUS('local','test_cursor')
	SELECT * FROM sys.dm_exec_cursors(@@SPID)
END CATCH


And on my system (SQL 2005, 2008, 2008R2 & 2012) the cursor was still reported as open:


Despite the cursor being declared and opened inside the transaction, it remains open after the transaction has rolled back. To me this was a little counterintuitive, but easy enough to deal with within the catch block:

IF EXISTS(SELECT 1 FROM sys.dm_exec_cursors(@@SPID))
BEGIN
		CLOSE test_cursor
		DEALLOCATE test_cursor
END

--or:

IF CURSOR_STATUS('local','test_cursor') > -1
BEGIN
		CLOSE test_cursor
		DEALLOCATE test_cursor
END

Happy transacting – but please clean up after yourself.

Posted in Database Theory, SQL Server, T-SQL | Leave a comment

Quick Tip: Migrating a SQL Server Mirroring Witness server.

If you’re like me, you might be thinking “I want some more things to think about when I migrate between SQL Servers.” … No? Okay, fair enough. There’s a whole list of things to think about when migrating from one SQL instance to another including:

  1. Logins.
  2. Jobs/Alerts/Operators.
  3. Credentials/Proxies.
  4. Linked Servers.
  5. Encryption keys/certificates.
  6. SSIS packages.
  7. SSRS reports.
  8. Server configurations.
  9. Any jobs or sprocs that shell out to the OS, and use specific paths.
  10. Not to mention third party monitoring, application connection strings etc.
  11. And I imagine any DBA reading this list could add others of their own…

I came across another consideration today: the SQL Server that you are migrating from may be acting as a witness in a mirroring configuration. This isn’t something that is immediately obvious – you have to look for it.

Risks: If you lose the witness server you lose the ability for an automatic failover. You will have a partner quorum so mirroring remains synchronised, the principal remains the principal, and manual failover is possible. As soon as a witness is configured – automatic failover is possible again. See here for details.

Checks: My first thought was to check for a started mirroring endpoint. On the source server and potential witness, run:

select * 
from sys.database_mirroring_endpoints
where state = 1
and role = 2;

But this isn’t conclusive – it could indicate an endpoint that had been used in the past but is not currently active. A more definitive query is:

select * from sys.database_mirroring_witnesses;

This view returns one row for each witness role that the server performs. No rows returned means this server is not acting as a witness in any current mirroring configurations. See here for details.

Mitigation: Script out the mirroring endpoint from the source witness server and run that script on the destination witness server.
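If you no longer have the original script handy, the endpoint script looks something like this (the endpoint name and port here are illustrative – use the values from the source witness):

CREATE ENDPOINT [mirroring_endpoint]
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022, LISTENER_IP = ALL)
    FOR DATABASE_MIRRORING (ROLE = WITNESS, AUTHENTICATION = WINDOWS NEGOTIATE, ENCRYPTION = REQUIRED ALGORITHM AES);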

Grant the connect privilege to the account being used for mirroring (or certificate if you’re using that method):

GRANT CONNECT ON ENDPOINT::[mirroring_endpoint] TO [principal];

On the principal server remove the witness from the mirroring configuration. This has to be done for each mirrored database.

ALTER DATABASE [database] SET WITNESS OFF;
Again on the principal, add in the new witness to each database mirroring configuration:

ALTER DATABASE  [database]
  SET WITNESS = 'TCP://[system-address]:[port]';

Best of luck with your migrations. I’ve got a few coming up so I probably have still more to discover.

Posted in Database options, Disaster Recovery, SQL Server, T-SQL | 1 Comment

Quick tip: Querying AD with xp_logininfo

We all have blind spots in our knowledge. We don’t know they are there and when we are thinking through an issue our mind quickly rejects some plans because of these blind spots. This happened to me recently when I was tasked with doing a security audit of a SQL instance.

Aside from a small handful of SQL logins, all access was via Windows groups. I knew that I could use xp_logininfo to discover whether a Windows user account had access to SQL Server – and via which groups.

exec xp_logininfo '[domain]\[username]','all'

But I needed to know which members belonged to which group. I was starting to look at cutting and pasting from the AD dialogs into Excel or writing LDAP queries when a colleague pointed out another xp_logininfo option that had been in my blind spot.

exec xp_logininfo '[domain]\[group]','members'

This will return all of the members of the Active Directory global group (it doesn’t return information for universal groups).
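For an audit this can be combined with a loop over the Windows groups on the instance to dump the membership of each group. A rough sketch (the cursor and variable names are mine):

DECLARE @group sysname;

DECLARE group_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT name FROM sys.server_principals WHERE type = 'G';

OPEN group_cursor;
FETCH NEXT FROM group_cursor INTO @group;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC xp_logininfo @group, 'members';
    FETCH NEXT FROM group_cursor INTO @group;
END
CLOSE group_cursor;
DEALLOCATE group_cursor;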

Nothing a closer inspection of books online couldn’t have told me – but I’d just never noticed it. Not the greatest discovery in the world either but it made my day a lot easier.

Posted in Security, SQL Server, T-SQL | 1 Comment

Issue with sys.dm_db_index_usage_stats in SQL Server 2012

I’d be struggling to remember a week where I hadn’t run a query against the DMV sys.dm_db_index_usage_stats to find unused or under-used indexes (or even heaps). It would have to be on the top shelf of many DBAs’ toolboxes. So I was alarmed when I came across this post from Joseph Sack over at SQLskills.

The gist of this is that if you rebuild an index in SQL Server 2012 then that index’s stats are removed from the DMV! It doesn’t seem to be the case with reorganize, just with rebuilds. So the takeaway is to treat results from sys.dm_db_index_usage_stats with suspicion, and be aware of when you last ran a rebuild against the index. The best advice I’ve seen is to snapshot the DMV before running index maintenance.
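A minimal sketch of such a snapshot, taken just before maintenance runs (the snapshot table name is mine – for repeat runs you would create the table once and INSERT into it instead):

SELECT GETDATE() AS capture_date, us.*
INTO dbo.index_usage_snapshot
FROM sys.dm_db_index_usage_stats us
WHERE us.database_id = DB_ID();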

I wanted to see this for myself, and add my 2 cents worth so I ran the following tests.

Create a table:

if exists(select 1 from sys.objects where name = 'table' 
          and type = 'U')
	drop table [table];

create table [table](
  [id] int identity(1,1),
  [value] varchar(20) 
  default convert(varchar(20),datepart(millisecond,getdate()))
);

Add a clustered and a non clustered index:

create clustered index ix_cls on [table](id);

create nonclustered index ix_noncls on [table](value);

Populate it with some fake data:

insert [table]([value])
values (default)
go 1000

Run a couple of queries that use the indexes:

select * from [table] where [id] = 498;
select * from [table] where [value] = '498';

Because of the way values are populated you might need to tinker with the where clause of the second select to hit the non-clustered index, or rerun the drop and create table script until you get both indexes being used.

Check usage stats:

select
	OBJECT_NAME(us.[object_id]) as [Table],
	i.name as [Index],
	us.user_seeks,
	us.user_scans,
	us.user_lookups,
	us.user_updates
from sys.dm_db_index_usage_stats us
inner join sys.indexes i
	on us.index_id = i.index_id 
        and us.[object_id] = i.[object_id]
where database_id = db_id()
and OBJECT_NAME(us.[object_id]) = 'table'

Reorganize the indexes:

alter index ix_cls on [table] reorganize
alter index ix_noncls on [table] reorganize

Check usage stats:

select
	OBJECT_NAME(us.[object_id]) as [Table],
	i.name as [Index],
	us.user_seeks,
	us.user_scans,
	us.user_lookups,
	us.user_updates
from sys.dm_db_index_usage_stats us
inner join sys.indexes i
	on us.index_id = i.index_id 
        and us.[object_id] = i.[object_id]
where database_id = db_id()
and OBJECT_NAME(us.[object_id]) = 'table'

Rebuild the indexes:

alter index ix_cls on [table] rebuild
alter index ix_noncls on [table] rebuild

Check the stats:

select
	OBJECT_NAME(us.[object_id]) as [Table],
	i.name as [Index],
	us.user_seeks,
	us.user_scans,
	us.user_lookups,
	us.user_updates
from sys.dm_db_index_usage_stats us
inner join sys.indexes i
	on us.index_id = i.index_id 
        and us.[object_id] = i.[object_id]
where database_id = db_id()
and OBJECT_NAME(us.[object_id]) = 'table'

Results: I ran the above tests against the SQL Server Builds that I had available with the following results:

Build       Reorganize            Rebuild
----------  --------------------  --------------------
9.0.5000    Usage stats persist.  Usage stats persist.
10.0.5500   Usage stats persist.  Usage stats persist.
10.50.2500  Usage stats persist.  Usage stats persist.
11.0.3000   Usage stats persist.  Usage stats cleared.
11.0.3401   Usage stats persist.  Usage stats cleared.

It seems that this bug/feature has been introduced in SQL Server 2012 (as of the time of writing 11.0.3401 is the latest build of SQL Server 2012, SP1, CU8). My recommendation is that you keep this in mind when reviewing index usage.

Posted in SQL Server | 5 Comments

SQL Server configuration files

Problem: You require the ability to perform a repeatable standardised installation of multiple instances of SQL Server or the ability to create as-built documentation of an installation.

Solution: I’ve made a few attempts at documenting a SQL Server install, and all the various options that were chosen at each step, with mixed success. And ultimately following documentation is pretty susceptible to human error. While investigating something else entirely I came across the SQL Server configuration file – and I was sold.

How it works: When running through the SQL Server installation wizard, behind the scenes a configuration file is being created. This file is saved into the C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\<date_time_stamp>\ folder. (YMMV – this location depends on SQL version and whether you install to the default path). The file is created just before the final step to install SQL Server. You have the option of exiting the setup program at this point and using the configuration file, letting the install proceed and use the setup file for further installs, or even copying a configuration file from another extant installation.


Here is what my config file looks like for a named instance called CONFIG (with a little data scrubbing):

;SQL Server 2012 Configuration File

; Specifies a Setup work flow, like INSTALL, UNINSTALL, or UPGRADE. This is a required parameter. 

ACTION="Install"

; Detailed help for command line argument ENU has not been defined yet. 


; Parameter that controls the user interface behavior. Valid values are Normal for the full UI,AutoAdvance for a simplied UI, and EnableUIOnServerCore for bypassing Server Core setup GUI block. 


; Setup will not display any user interface. 


; Setup will display progress only, without any user interaction. 


; Specify whether SQL Server Setup should discover and include product updates. The valid values are True and False or 1 and 0. By default SQL Server Setup will include updates that are found. 


; Specifies features to install, uninstall, or upgrade. The list of top-level features include SQL, AS, RS, IS, MDS, and Tools. The SQL feature will install the Database Engine, Replication, Full-Text, and Data Quality Services (DQS) server. The Tools feature will install Management Tools, Books online components, SQL Server Data Tools, and other shared components. 


; Specify the location where SQL Server Setup will obtain product updates. The valid values are "MU" to search Microsoft Update, a valid folder path, a relative path such as .\MyUpdates or a UNC share. By default SQL Server Setup will search Microsoft Update or a Windows Update service through the Window Server Update Services. 


; Displays the command line parameters usage 


; Specifies that the detailed Setup log should be piped to the console. 


; Specifies that Setup should install into WOW64. This command line argument is not supported on an IA64 or a 32-bit system. 


; Specify the root installation directory for shared components.  This directory remains unchanged after shared components are already installed. 

INSTALLSHAREDDIR="C:\Program Files\Microsoft SQL Server"

; Specify the root installation directory for the WOW64 shared components.  This directory remains unchanged after WOW64 shared components are already installed. 

INSTALLSHAREDWOWDIR="C:\Program Files (x86)\Microsoft SQL Server"

; Specify a default or named instance. MSSQLSERVER is the default instance for non-Express editions and SQLExpress for Express editions. This parameter is required when installing the SQL Server Database Engine (SQL), Analysis Services (AS), or Reporting Services (RS). 

INSTANCENAME="CONFIG"

; Specify the Instance ID for the SQL Server features you have specified. SQL Server directory structure, registry structure, and service names will incorporate the instance ID of the SQL Server instance. 

INSTANCEID="CONFIG"

; Specify that SQL Server feature usage data can be collected and sent to Microsoft. Specify 1 or True to enable and 0 or False to disable this feature. 


; Specify if errors can be reported to Microsoft to improve future SQL Server releases. Specify 1 or True to enable and 0 or False to disable this feature. 


; Specify the installation directory. 

INSTANCEDIR="C:\Program Files\Microsoft SQL Server"

; Agent account name 


; Auto-start service after installation.  


; CM brick TCP communication port 


; How matrix will use private networks 


; How inter brick communication will be protected 


; TCP port used by the CM brick 


; Startup type for the SQL Server service. 


; Level to enable FILESTREAM feature at (0, 1, 2 or 3). 


; Set to "1" to enable RANU for SQL Server Express. 


; Specifies a Windows collation or an SQL collation to use for the Database Engine. 


; Account for SQL Server service: Domain\User or system account. 


; Windows account(s) to provision as SQL Server system administrators. 


; The default is Windows Authentication. Use "SQL" for Mixed Mode Authentication. 


; Provision current user as a Database Engine system administrator for SQL Server 2012 Express. 


; Specify 0 to disable or 1 to enable the TCP/IP protocol. 


; Specify 0 to disable or 1 to enable the Named Pipes protocol. 


; Startup type for Browser Service. 


Essentially this is a list of the various parameters and the options chosen for the installation. I won’t go through all the parameters, SQL Server installations are well documented elsewhere. I encourage you to seek out the config files of any SQL installations and scan through them to get a feel for the options.

Installing SQL Server using the configuration file: To use the configuration file for a SQL Server installation requires you to use the setup.exe command from the command line. One of the options for the command is /ConfigurationFile, which accepts a filepath/filename argument.

No passwords are stored in the configuration files (and you wouldn’t want that would you?) so these will need to be passed in at the command line.

If you want an unattended install you can also pass in the /Q or /QS switches. /Q is Quiet mode and suppresses the user interface; /QS is QuietSimple and displays the user interface but does not require any user input. These parameters can also be specified in the configuration file. Any parameters passed in on the command line will override parameters in the file.

You will have to accept the license terms – you can do this via the command line by supplying the /IACCEPTSQLSERVERLICENSETERMS option.

Here is a (scrubbed) example of a command line install.

Setup.exe /SQLSVCPASSWORD="password" /AGTSVCPASSWORD="password" 
/SAPWD="password" /IACCEPTSQLSERVERLICENSETERMS 
/ConfigurationFile=C:\Temp\ConfigurationFile.ini 

Conclusion: Whether you are looking for a way to standardise multiple SQL Server instances, automate SQL Server installations, or document and audit existing installations – configuration files might be what you are looking for.

Posted in SQL Server | 2 Comments

Conditional Aggregates

The Problem: A legacy table contains amounts and a char column indicating whether the amount is a credit or a debit. We want an aggregate (over a given range) of the amounts. We need to add the credits and subtract the debits.

The Solution(s): Two possible approaches to the problem suggest themselves. One is to sum all of the credits and all the debits and then subtract the total debit from the total credit. The second approach is to include a case expression inside the aggregate function to add or subtract the given row’s amount based on the credit/debit column.

The Code: Implement a cut down version of the table with some contrived sample data:

create table tbl_ledger(
	id int identity(1,1),
	credit_debit char(1),
	amount money,
	[description] varchar(128) 
);

insert tbl_ledger(credit_debit,amount,[description])
values	('C',10,'credit'),('C',10,'credit'),('C',10,'credit'),
	('C',10,'credit'),('C',10,'credit'),('C',10,'credit'),
	('C',10,'credit'),
	('D',10,'debit'),('D',10,'debit'),('D',10,'debit');

This table has 10 rows, 7 are $10 credits and 3 are $10 debits so we would expect the sum to return $40 ($70 credit – $30 debit).

Implement option 1, subtract the aggregates. (This could be done a number of ways, including using variables to hold interim results, or common table expressions. I’ve chosen a sub query as I think it reads well. The query plans for sub queries and CTEs are similar; using variables muddies the waters a bit.)

select sum(amount) - (select sum(amount) as [debit]
			from tbl_ledger 
			where credit_debit = 'D')
as [credit]
from tbl_ledger 
where credit_debit = 'C'

As hoped this returns 40.

Implement option 2 using a case statement inside the sum function:

select sum(case 
			when credit_debit = 'C' then amount
			when credit_debit = 'D' then -1 * amount
		end) as [total cash]
from tbl_ledger

This again returns a correct result.

The Analysis: To me option 2 is a more elegantly coded solution, but how does it stack up performance wise? To get some measurable results rerun the insert multiple times.

insert tbl_ledger(credit_debit,amount,[description])
values	('C',10,'credit'),('C',10,'credit'),('C',10,'credit'),
	('C',10,'credit'),('C',10,'credit'),('C',10,'credit'),
	('C',10,'credit'),
	('D',10,'debit'),('D',10,'debit'),('D',10,'debit');
go 100000

And run the two queries side by side with statistics on:

set statistics io on
set statistics time on

select sum(case 
			when credit_debit = 'C' then amount
			when credit_debit = 'D' then -1 * amount
		end) as [total cash]
from tbl_ledger

select sum(amount) - (select sum(amount) as [debit]
						from tbl_ledger 
						where credit_debit = 'D')
as [credit]
from tbl_ledger 
where credit_debit = 'C' 

set statistics io off
set statistics time off

I got the following:

(1 row(s) affected)
Table ‘tbl_ledger’. Scan count 1, logical reads 4167, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times:
CPU time = 281 ms, elapsed time = 295 ms.

(1 row(s) affected)
Table ‘tbl_ledger’. Scan count 10, logical reads 8334, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times:
CPU time = 483 ms, elapsed time = 130 ms.

With the execution plans:


The sub query outperforms the conditional aggregate primarily because the query can be parallelised. The conditional aggregate does a single table scan versus two table scans for the sub query – 4167 logical reads versus 8334 logical reads – but when I ran the tests against cold caches the read-ahead mechanism pulled all data pages into cache for both queries, so I didn’t notice a significant change in the comparative result.

Running the second query with a maxdop hint brings the results closer:

(1 row(s) affected)
Table ‘tbl_ledger’. Scan count 1, logical reads 4167, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

(1 row(s) affected)

SQL Server Execution Times:
CPU time = 328 ms, elapsed time = 347 ms.

(1 row(s) affected)
Table ‘tbl_ledger’. Scan count 2, logical reads 8334, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

(1 row(s) affected)

SQL Server Execution Times:
CPU time = 249 ms, elapsed time = 296 ms.

And drops the parallel operators:


The Conclusion: After analysis it seems that the sub query outperforms the conditional aggregate option – even if we suppress parallelism. That said, these tests were on a cut down table; a wider table with more rows might bring IO more into play – especially if indexes are added to the mix. Adding an index on the credit_debit column with amount as an included column speeds up the sub query even more – replacing the two table scans with index seeks.
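The covering index described above can be sketched as (the index name is mine):

create nonclustered index ix_credit_debit 
on tbl_ledger(credit_debit)
include(amount);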

Posted in SQL Server, T-SQL | 6 Comments

My Favourite SSMS productivity tips

It’s a well known fact that any sitcom that runs for a certain length of time will release a clip show. Similarly any blog about SQL Server will end up doing a top tips for using SQL Server Management Studio. Here then, in no particular order, are mine:

1. Filtered views in object explorer.

This is a great feature for navigating around large unfamiliar databases. Available on tables, views, stored procedures and functions. Simply right click the node in the object explorer:





Enter your filter conditions into the filter setting dialog and you can zoom in on the objects you need quickly:



Object explorer will helpfully indicate that the view is filtered so you don’t think you’ve lost a bunch of tables!




2. Object explorer details.

I’ve overlooked this in the past – but it’s quite handy for a quick look at object properties. Available from the view menu or by pressing [F7]. The objects in this view can be filtered using the tip above. By right clicking on the column headers you can select the properties you want to see.



Column headings can be rearranged by dragging and dropping the headers and the rows can be sorted ascending or descending by column.



I use this a lot to get a quick look at what is consuming space in the database. What are my biggest tables? How many rows in a given table?

3. Drag column names.

We all know that we can drag individual items from the object explorer into the Query window – but you can also drag the column node and get a nice comma separated list of column names.





All right – it’s a bit annoying being all in one line like that. But while it’s still highlighted press [ctrl]-[h] to get the find and replace dialog:



Make sure you change the ‘look in’ drop down to ‘selection’ (or you can get some strange results). Expand the find options node and select Use > Regular expressions. In ‘find what’ put [,] & [space] and in ‘replace with’ put [,] & [\] & [n].



For an extremely comprehensive look at using regular expressions in SSMS look here.

4. Block move.

Finally, if you have your column names all on a separate row from the tip above, but you want them indented you could manually enter a [tab] at the start of each row (like the good old days) or you could use a regular expression find and replace as above. But, let’s face it, no one wants to deal with those things if we don’t have to. The third option is to hold the [alt] key down and drag across the first few characters of your column name and down the rows. Once you have the block selected simply press [tab] to indent.


There are a million tips and tricks in SSMS – I’ve shown you four of my current favourites. Feel free to share yours…

Posted in SQL Server | Leave a comment

Apply SP1 to SQL Server 2012 on active/active Windows Server 2012 cluster.

A while ago I blogged about my approach to service packing a SQL Server 2008R2 active/active cluster. Time to apply SP1 to SQL 2012 on top of a Windows 2012 failover cluster. The basic approach is similar but there are a couple of subtle differences.

The key to the approach with 2008 was to patch the passive node while not allowing the SQL instances the option of failing back to it. In Windows Server 2008 this was possible through the Failover Cluster Administration GUI tool. I haven’t seen any literature suggesting that this approach is no longer valid – although I have seen moves towards ‘cluster aware’ updating. But that is only available from SQL 2012 SP1, so to apply SP1 I’m going to stick to my tried and tested method.

1. The management of possible failover partners is no longer available via the GUI and needs to be done with cluster.exe. Cluster.exe is not required for configuring a Server 2012 failover cluster, so it may or may not be installed in your environment. You can add the Failover Cluster Command Interface feature using the Server Manager tool.


2. Make a note of all node names (and/or IP addresses), SQL Server virtual names along with preferred nodes. If there are more than three nodes you may need to also take note of possible owners for each SQL resource group. For my example assume that I have a cluster with node1 and node2, SQL1 normally lives on node1 and SQL2 normally lives on node2.

3. To start with a clean slate and ensure any previous updates are completed both nodes should be restarted if possible. Choose the physical node that you want to patch second and restart that node (in my example node2).

4. Restart the node you want to patch first (node1). This will mean that both active SQL instances are now running on node2. Some restarts will be essential, but you could avoid the first two restarts if you need to keep downtime to a minimum and just fail SQL1 over to node2. The main point here is to always patch a passive node.

5. Use cluster.exe to remove node1 from the possible owners lists of SQL1 and SQL2. This means that neither SQL instance can fail over to node1 while it is being patched. The relevant commands are:

  • Check the possible owners: cluster.exe resource "SQL1" /listowners
  • Remove the passive node from the possible owner list of all instances: cluster.exe resource "SQL1" /RemoveOwner:node1
  • Confirm: cluster.exe resource "SQL1" /listowners

(repeat for SQL2)

6. Run the service pack executable on node1 and apply SP1 to both SQL instances.

7. Restart node1.

8. Add node1 back into the possible owners lists of SQL1 and SQL2 using:

  • cluster.exe resource "SQL1" /AddOwner:node1

(repeat for SQL2)

9. Fail both instances over to node1. You could do this by restarting node2, or by using PowerShell to suspend node2 with the drain parameter. Once the instances have successfully moved and you have removed node2 from the possible owner lists – resume the node so it can be patched.

10. Repeat steps 5 – 8 on node2.

11. Add node2 back into the possible owners lists of SQL1 and SQL2 and fail both instances over to node2. Check that the build level is correct and review the SQL Server error logs.

12. Fail SQL1 over to node1. Check build levels and SQL Server error logs.

I want to acknowledge Sean Gallardy, who helped clarify my thinking by responding to my msdn forum question.

Posted in SQL Server | 4 Comments

Quick tip: Could not load package “xxx” because of error 0xC0014062

I struck this error in one of my SSIS packages. I found lots of tips and suggestions online for the error, but none of the solutions worked for me. So I want to add my solution to the mix so that it might give the next person searching an extra option to try.

Setup: I have a simple SSIS package that loads a couple of csv files into SQL Server tables and then calls stored procedures to clean and scrub the data. Everything works on my development server so I deploy to production by saving the package to the SQL Server and scheduling a job to run the package. (I’m using 2012 database engine and SSIS). But the job fails with the following error:

Could not load package “xxx” because of error 0xC0014062.
Description: The LoadFromSQLServer method has encountered OLE DB error code 0x80004005 (Login timeout expired).
The SQL statement that was issued has failed.

Investigation: My first thought is that this is some sort of permissions issue, and a lot of my search results seemed to suggest either that or a mistake in server name or other parameters.

- I was able to run the package manually in production, so I reasoned that the agent service account might be the issue.

- Check that the SQL Agent service account has sufficient permissions to the various folders. I wasn’t able to get onto the server and use Configuration Manager for some reason, so I used xp_regread to look up the agent service account:


--reads the agent service account (default instance) from the registry
DECLARE @sn nvarchar(128);

EXEC master.dbo.xp_regread
    'HKEY_LOCAL_MACHINE',
    'SYSTEM\CurrentControlSet\Services\SQLSERVERAGENT',
    'ObjectName',
    @sn OUTPUT;

SELECT @sn;


- Check that the agent account had sufficient permissions to execute the stored procedures.

use [Database]

grant execute on object::[procedure 1] to [agent service account]
grant execute on object::[procedure 2] to [agent service account]

- Try again. Still not working!

- After much scratching of heads and comparing the failing job configuration with that of successful jobs it turned out that using the SQL Server’s fully qualified domain name in the Package configuration tab fixed the issue!


Conclusion: I don’t really know if this is a quirk of my current AD environment, or a requirement of SSIS 2012. It worked for me, it might work for you. If anyone can shed any further light on what’s going on here – please do.

Posted in Security, SSIS | 2 Comments