Another great reason to enable remote admin connections

I recently came across another great reason to enable the remote admin connections server configuration option in SQL Server.

If you don’t know what the DAC is, I’m not going to go into detail here, so check out these posts here, here and here.

The DAC is a great tool and a light for any DBA in dark places. By default it is only available from the host that is running the SQL Server service. To enable remote access to the DAC you need to enable remote admin connections.

exec sp_configure 'remote admin connections', 1
reconfigure

There’s a bunch of compelling reasons to enable this setting, not least of which is that if a server is in such a bad way that you need to access it using the DAC, there’s a good chance you either won’t be able to or won’t want to access the box using an RDP session.

But the best reason of all is this: if you have SQL Server clustered instances the DAC will not work locally! Not from the virtual IP, not from the host node, not using IP addresses and specific ports. No how, no way.

Enabling the remote admin connections setting is dynamic, no service restart is required, and a dynamic TCP port is assigned as soon as the reconfigure is run. If you do have security or firewall restrictions it is possible to set a static port for the DAC, but this requires a registry edit and a service restart.

Edit this key:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL.X\MSSQLServer\SuperSocketNetLib\AdminConnection\Tcp

Note: The “X” in “MSSQL.X” is a number that indicates the directory where the instance is installed for SQL Server 2005, or the instance name for SQL Server 2008.




And restart the service.

Often when you use the DAC you’ll use a single-threaded command line tool like SQLCMD, but it is possible to use Management Studio.
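For example, a DAC connection through SQLCMD might look like this (server and instance names are placeholders):

```
rem connect locally to the DAC using the -A switch, with Windows authentication
sqlcmd -S MYSERVER\MYINSTANCE -A -E

rem or connect remotely by prefixing the server name with admin:
sqlcmd -S admin:MYSERVER\MYINSTANCE -E
```

The -A switch requests the dedicated admin connection; the admin: prefix achieves the same thing and also works from SSMS query windows.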

You can’t connect using the object explorer because that takes more than one connection. Remember the DAC is a single dedicated scheduler – it is only possible to have one DAC connection at a time. But you can open a query window in SSMS, right click and choose connect (or change connection if the new query window has already connected). Connect to ‘admin:[servername]\[instance name]’.

Management studio will throw this error:



But you can see at the bottom that the connection has actually worked. I suspect the error is SSMS trying to open another background connection for something.

Posted in Disaster Recovery, Security, SQL Server | Leave a comment

Tracking down CPU spikes using process explorer and DMVs

What is it about 4:45 pm on a Friday afternoon that makes CPUs spike, drives crash, databases corrupt? I wish I knew but I don’t. I do know that I have been on the receiving end a few times. Just shutting down the last applications when a call or a page comes in. Next minute you’re phoning home and dinner’s in the warmer.

On one such Friday afternoon, in the not too distant past, one of the DBA team noticed that CPU was running hot on our main production server. I could see from looking at task manager on the server that the SQL process was responsible for the CPU usage. But this is a busy OLTP production server and I needed to isolate the process or processes responsible. Looking at sys.sysprocesses it wasn’t immediately obvious which session could be responsible.
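A first pass over sys.sysprocesses might look something like this (a sketch; the column choice and session-id cut-off are mine, not from the original investigation):

```sql
-- busiest sessions first; cpu is cumulative, so compare two samples over time
select spid, status, hostname, program_name, cpu, physical_io
from sys.sysprocesses
where spid > 50  -- user sessions; most system sessions have lower ids
order by cpu desc;
```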

Next step was to run process explorer on the server to narrow down which CPUs were spiking. If you haven’t already discovered it, process explorer is a great free lightweight diagnostic tool from the Sysinternals team. The executable can be downloaded and run without any installation.

Open process explorer and click on one of the small graphs near the top to get a detailed view.



Tick the “Show one graph per CPU” option.



Hovering over the individual graphs will give the CPU id.


Note: The screenshots above are not from the actual issue. At the time I was too busy trying to fix the problem to take screenshots for later use :)

Using process explorer this way I found that CPU usage was jumping up and down but there were two CPUs that were sitting at 100% consistently. Armed with the ids I hit SQL Server and some DMVs.

Using sys.dm_os_schedulers with the ids of the two rogue CPUs gave me the scheduler addresses.

select scheduler_address from sys.dm_os_schedulers
where cpu_id in ([id1],[id2],...)

Putting the scheduler addresses into sys.dm_os_workers gave me task addresses.

select task_address from sys.dm_os_workers 
where scheduler_address in ([scheduler_address1],[scheduler_address2],...)

And finally putting the task addresses into sys.dm_os_tasks gave me the session ids.

select session_id from sys.dm_os_tasks 
where task_address in ([task_address1],[task_address2],...)

Or if you want to put it all together you get something like.

select t.session_id
from sys.dm_os_schedulers s
left join sys.dm_os_workers w
	on s.scheduler_address = w.scheduler_address
left join sys.dm_os_tasks t
	on w.task_address = t.task_address
where s.cpu_id in ([id1],[id2],...)
and w.[state] = 'RUNNING'

There’s a bunch of other columns that you might be interested in from these DMVs – but in my case I just wanted the session ids.

From there I could go back to sys.sysprocesses and see that in this case it was a system service broker process that was pegging the CPU. Restarting the service broker queue cleared the issue instantly.

ALTER QUEUE [my_queue] WITH STATUS = OFF;
ALTER QUEUE [my_queue] WITH STATUS = ON;
Posted in Service Broker, SQL Server | Tagged , , , | 3 Comments

The full text error and why I was happy to see it.

The Situation: I was recently involved in the migration of a group of core databases from three SQL Server 2005 instances to three SQL Server 2012 instances. There were quite a few moving parts that needed to be considered as part of the upgrade including replication, CLR, service broker and full text search.

The basic approach was to build the 2012 instances in a DR site while the main site continued to run on SQL 2005, then log-ship the databases from the 2005 instances to the 2012 instances. This way the upgrade of the databases from 2005 to 2012 is postponed until cut-over, when recovery is run. (As an aside, this is the reason you can’t use WITH STANDBY on the log restores; standby means the database has to go through recovery and hence would upgrade.)
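The restore side of that log-shipping can be sketched as follows (the database name and backup path are placeholders):

```sql
-- restore each log backup WITH NORECOVERY only; WITH STANDBY would force
-- the database through recovery (and hence the version upgrade) each time
RESTORE LOG [my_database]
FROM DISK = N'\\backup_share\my_database.trn'
WITH NORECOVERY;

-- at cut-over, recover the database; the 2005-to-2012 upgrade runs here
RESTORE DATABASE [my_database] WITH RECOVERY;
```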

The Error: Before cutting over the main SQL instances I was working on a single instance upgrade of a few smaller databases. Still going from 2005 to 2012, still using a bunch of technologies including full text search, and still using log-shipping to migrate the databases. The morning of the cut-over arrived and I recovered one of the databases…

Converting database ‘my_database’ from version 611 to the current version 706.
Database ‘my_database’ running the upgrade step from version 611 to version 621.
Database ‘my_database’ running the upgrade step from version 621 to version 622.
Database ‘my_database’ running the upgrade step from version 622 to version 625.
Database ‘my_database’ running the upgrade step from version 625 to version 626.
Database ‘my_database’ running the upgrade step from version 626 to version 627.
Database ‘my_database’ running the upgrade step from version 627 to version 628.
Database ‘my_database’ running the upgrade step from version 628 to version 629.
Database ‘my_database’ running the upgrade step from version 629 to version 630.
Database ‘my_database’ running the upgrade step from version 630 to version 631.
Database ‘my_database’ running the upgrade step from version 631 to version 632.
Database ‘my_database’ running the upgrade step from version 632 to version 633.
Database ‘my_database’ running the upgrade step from version 633 to version 634.
Database ‘my_database’ running the upgrade step from version 634 to version 635.
Database ‘my_database’ running the upgrade step from version 635 to version 636.
Database ‘my_database’ running the upgrade step from version 636 to version 638.
Database ‘my_database’ running the upgrade step from version 638 to version 639.
Database ‘my_database’ running the upgrade step from version 639 to version 640.
Database ‘my_database’ running the upgrade step from version 640 to version 641.
Database ‘my_database’ running the upgrade step from version 641 to version 642.
Msg 3013, Level 16, State 1, Line 1
RESTORE DATABASE is terminating abnormally.
Msg 30097, Level 23, State 100, Line 1
The fulltext catalog upgrade failed because of an inconsistency in metadata between sys.master_files and sys.fulltext_catalogs for the catalog ID xx in database ID x.
Try to reattach this database. If this fails, then the catalog will need to be dropped or recreated before attach.

During the course of setting up log-shipping and planning for the migration I had recovered the databases a number of times and had never seen this error before. The database was in a recovery pending state and the only way forward seemed to be to drop the full text catalogs at the source, take a fresh full backup and restore that backup. (Don’t be fooled by the suggestion in the message to reattach the database – I couldn’t find a way to make this work in this situation.)

That approach worked for the smaller database but it wouldn’t be an option for the main site. In that case we had several terabytes of databases; we couldn’t drop full text, backup, copy and restore within any reasonable outage window.

The Theory: As well as log-shipping from the main site to DR we also log-shipped into our data warehouse every night. The data warehouse server was also a SQL 2012 box. I was curious about why I hadn’t seen this error on the data warehouse server – after all we recovered the main database onto it every morning. Then one morning the recovery failed and we got the same error.

The theory was – if a change was made to the full text schema this would break the recovery. I needed to be able to reliably reproduce this – and then solve it before we attempted to migrate the main site.

The Reproduction: I set up a VM with a SQL 2005 instance and a SQL 2012 instance and log shipped from the 2005 to the 2012 instance. I made a change to one of the full text indexes (dropped a column) and allowed a log backup and restore to happen. Then I tried to recover the 2012 database and I got the error. I’d never been so happy to see an error in my life. I did it again, and got the error again. Now that I could reliably reproduce the error I could test ways of avoiding it.

The Solution: The first and most obvious solution was to put a change freeze on any full text schema changes. These are fairly rare so that wasn’t too much of a hardship. The other solution was to drop the full text indexes and catalogs at the source database, allow a log backup to occur (even a final tail of log backup). Allow the log backup to restore and then recover the database. Then rebuild the full text catalog and indexes.

For completeness, dropping and recreating a full text index and catalog:

-- disable and drop the index and catalog
ALTER FULLTEXT INDEX ON [my_table] DISABLE;
DROP FULLTEXT INDEX ON [my_table];
DROP FULLTEXT CATALOG [my_catalog];

-- now recreate the catalog and index
CREATE FULLTEXT CATALOG [my_catalog]
    IN PATH N'C:\some_path'; --no effect from SQL2008

CREATE FULLTEXT INDEX ON [my_table](
    [my_column], [my_other_column]
)
KEY INDEX [PK_my_table] ON [my_catalog];
Posted in Database options, Disaster Recovery, SQL Server | Tagged , , , | 2 Comments

Migrating SQL Server logins

Once upon a migration I would use sp_help_revlogin to migrate logins and remap the SQL Server logins using sp_change_users_login. During some recent migration work I realised that SQL Server Management Studio could do most of the work of sp_help_revlogin. I still needed to use sp_change_users_login, but I came across a couple of other tips that solved some issues along the way.

Note: The terminology can be a bit ambiguous and confusing here. When I talk about logins I mean all security principals that can be used to access the SQL Server instance, either Windows or SQL principals. When I talk about SQL Server logins I mean logins that use SQL authentication, as opposed to logins that use Windows authentication.

SSMS: To script out all logins from a SQL Server instance simply select the Logins node under the Security node of Object Explorer and open the Object Explorer Details window ([F7]). This window allows you to shift-click to multi-select logins and right-click to script them out as desired. One caveat here is that the script to create the SQL Server logins will generate random passwords; these need to be replaced with the correct passwords.



Mapping SQL logins: Logins based on Windows AD accounts will remap automatically across SQL instances in the same domain because the SIDs are the same, set by Windows as opposed to SQL Server. But SQL Server logins are assigned a random SID when created. Because the mapping between a login and a database user is based on SIDs, the Windows logins will remap but the SQL Server logins will not. The well known procedure sp_change_users_login can be run in a database context to identify these ‘orphaned’ users and remap them.
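That workflow can be sketched in two calls ('my_user' is a placeholder user name):

```sql
-- list 'orphaned' users in the current database
EXEC sp_change_users_login 'Report';

-- remap an orphaned user to the SQL Server login of the same name
EXEC sp_change_users_login 'Auto_Fix', 'my_user';
```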

Creating SQL logins with specific SIDs: I had a situation recently where I migrated a data warehouse server. Production databases were restored to the server each night, data scrubbed and then set to read-only. I needed a login on the SQL Server to be mapped to a database user so that queries could be run against the warehouse. In this case the login I had was a SQL Server login, so when I migrated instances the login SID did not match the SID of the user in the production databases. And sp_change_users_login won’t work on a read-only database because the metadata can’t be written.

My options were to remap the database users to the login after every restore but before setting the database to read-only, or to create the login on the server with a SID matching the SID on the production boxes. It turns out that a SQL login can be created with a specific SID. The following query can be run against the source server’s master database and will produce create scripts for all logins with explicit SIDs, which can be run as required on the destination server.

select 'create login [' + p.name + '] ' 
+ case 
	when p.type in('U','G') then 'from windows ' 
	else '' 
  end
+ 'with ' 
+ case 
	when p.type = 'S' then 'password = ' 
		+ master.sys.fn_varbintohexstr(l.password_hash) 
		+ ' hashed, ' 
		+ 'sid = ' 
		+ master.sys.fn_varbintohexstr(l.sid) 
		+ ', check_expiration = ' 
		+ case 
			when l.is_expiration_checked > 0 then 'ON, ' 
			else 'OFF, ' 
		  end
		+ 'check_policy = ' 
		+ case 
			when l.is_policy_checked > 0 then 'ON, ' 
			else 'OFF, ' 
		  end
	else '' 
  end
+ 'default_database = [' 
+ p.default_database_name 
+ case 
	when len(p.default_language_name) > 0 then '], default_language = [' 
		+ p.default_language_name 
		+ ']'
	else ']' 
  end
FROM sys.server_principals p
	LEFT JOIN sys.sql_logins l
	ON p.principal_id = l.principal_id
WHERE p.type in('S','U','G')

Forgotten passwords: Another issue I came across in a recent migration was passwords that had fallen through the cracks and been forgotten. After spending some fruitless hours trying to get hashcat to work properly – and failing – I revisited the query above. Turns out I didn’t need to recover passwords – I could recreate the logins using the password_hash. To convince myself of this I created a login to test with.

USE [master]
CREATE LOGIN [lostpassword] WITH PASSWORD = N'I_f0rg0t!'

Run the login script from above which for [lostpassword] gives:

create login [lostpassword] 
with password = 0x02002c4b16cfbc80f3d07ab3d67042262a87370fc7f0a6300badcddd999ec304c9e69137d5f90b307a230330a25e41ca3e9d3a2664d2782aafe1611608afedfabf67b44ef22e hashed
, sid = 0xed5cd87e54a8bc43b7ff9435f052e2cd
, check_expiration = OFF
, check_policy = OFF
, default_database = [master]
, default_language = [us_english]

Drop the login that I originally created.

DROP LOGIN [lostpassword]

And recreate the login using the generated script.

create login [lostpassword] 
with password = 0x02002c4b16cfbc80f3d07ab3d67042262a87370fc7f0a6300badcddd999ec304c9e69137d5f90b307a230330a25e41ca3e9d3a2664d2782aafe1611608afedfabf67b44ef22e hashed
, sid = 0xed5cd87e54a8bc43b7ff9435f052e2cd
, check_expiration = OFF
, check_policy = OFF
, default_database = [master]
, default_language = [us_english]

And confirm that the recreated login’s password matches the original.

SELECT name FROM sys.sql_logins 
WHERE PWDCOMPARE('I_f0rg0t!', password_hash) = 1 ;

Which it does:


Another tool for the migration toolkit.

Posted in Security, SQL Server, T-SQL, Uncategorized | Tagged , | 8 Comments

Rolling back transactions with cursors and local variables.

I’ve been doing some work on a credit card payment system lately. Obviously this needs to be robust and consistent so I’ve been working on error handling and “transactionalising” certain parts of the process.

I needed to answer a couple of questions that I had along the way. What happens to local variables when a transaction is rolled back? What happens to an open cursor when a transaction is rolled back?

The first question came about because I was looking for a way to log errors if my transaction had been rolled back. Any logging outside of the transaction would not have access to the error details while logging to tables inside the transaction would be rolled back as well (you laugh? … I’ve seen this done!). What about using a local variable to hold interim values? Would they persist across a rollback? This was pretty quick and easy to test.


DECLARE @test int
SET @test = 1

BEGIN TRAN
	SET @test += 10
ROLLBACK TRAN

SELECT @test

--@test = 1 means the variable assignment is rolled back
--@test = 11 means the variable assignment persists

This returns 11 for SQL 2005, 2008, 2008 R2 and 2012 (though the += syntax won’t work on 2005). Local variables are outside the scope of the transaction. Intuitively this makes sense; there’s no real requirement for variables to have ACID properties – they’re only scoped to a particular session.

The next question arose because I needed to do a couple of things within an explicit transaction. There was a business requirement to only update a single row at a time, so I chose to use a cursor within the transaction. I wrapped the logic in a try–catch block to roll back the transaction and handle any errors, and I wanted to know what would happen to the open cursor after a rollback.

Testing this I could use either CURSOR_STATUS() or sys.dm_exec_cursors() to check the status of my cursor. I stripped out all the code from my proc leaving just the control flow logic, used a simple query to populate the cursor, and faked out an error using raiserror().




DECLARE @db_name sysname

BEGIN TRY
	BEGIN TRAN
		DECLARE test_cursor CURSOR LOCAL FOR
		SELECT [name] FROM sys.databases

		OPEN test_cursor
		FETCH NEXT FROM test_cursor INTO @db_name
		WHILE @@FETCH_STATUS = 0
		BEGIN
			PRINT @db_name

			--create a fake error
			IF @db_name = 'tempdb'
				RAISERROR('fake error', 16, 1)

			FETCH NEXT FROM test_cursor 
                                          INTO @db_name
		END
		CLOSE test_cursor
		DEALLOCATE test_cursor
	COMMIT TRAN
END TRY
BEGIN CATCH
	IF @@TRANCOUNT > 0
		ROLLBACK TRAN

	--state of my cursor?
	SELECT CURSOR_STATUS('local','test_cursor')
	SELECT * FROM sys.dm_exec_cursors(@@SPID)
END CATCH


And on my system (SQL 2005, 2008, 2008 R2 & 2012) I got:


Despite the cursor being declared and opened inside the transaction, it remains open after the transaction has rolled back. To me this was a little counterintuitive, but it’s easy enough to deal with in the catch block:

IF EXISTS(SELECT 1 FROM sys.dm_exec_cursors(@@SPID))
BEGIN
	CLOSE test_cursor
	DEALLOCATE test_cursor
END

Or alternatively:

IF CURSOR_STATUS('local','test_cursor') > -1
BEGIN
	CLOSE test_cursor
	DEALLOCATE test_cursor
END

Happy transacting – but please clean up after yourself.

Posted in Database Theory, SQL Server, T-SQL | Tagged | Leave a comment

Quick Tip: Migrating a SQL Server Mirroring Witness server.

If you’re like me, you might be thinking “I want some more things to think about when I migrate between SQL Servers.” … No? Okay, fair enough. There’s a whole list of things to think about when migrating from one SQL instance to another including:

  1. Logins.
  2. Jobs/Alerts/Operators.
  3. Credentials/Proxies.
  4. Linked Servers.
  5. Encryption keys/certificates.
  6. SSIS packages.
  7. SSRS reports.
  8. Server configurations.
  9. Any jobs or sprocs that shell out to the OS, and use specific paths.
  10. Not to mention third party monitoring, application connection strings etc.
  11. And I imagine any DBA reading this list could add others of their own…

I came across another consideration today: the SQL Server that you are migrating from may be acting as a witness in a mirroring configuration. This isn’t something that is immediately obvious – you have to look for it.

Risks: If you lose the witness server you lose the ability to fail over automatically. You will still have a partner quorum, so mirroring remains synchronised, the principal remains the principal, and manual failover is possible. As soon as a witness is configured again, automatic failover is possible again. See here for details.

Checks: My first thought was to check for a started mirroring endpoint. On the source server and the potential new witness run:

select * 
from sys.database_mirroring_endpoints
where state = 1
and role = 2;

But this isn’t conclusive – it could indicate an endpoint that had been used in the past but is not currently active. A more definitive query is:

select * from sys.database_mirroring_witnesses;

This catalog view returns one row for each witness role that the server performs. No rows returned means the server is not acting as a witness in any current mirroring configuration. See here for details.

Mitigation: Script out the mirroring endpoint from the source witness server and run that script on the destination witness server.

Grant the connect privilege to the account being used for mirroring (or certificate if you’re using that method):

GRANT CONNECT ON ENDPOINT::[mirroring_endpoint] TO [principal];

On the principal server remove the witness from the mirroring configuration. This has to be done for each mirrored database:

ALTER DATABASE [database]
  SET WITNESS OFF;
Again on the principal, add in the new witness to each database mirroring configuration:

ALTER DATABASE  [database]
  SET WITNESS = 'TCP://[system-address]:[port]';

Best of luck with your migrations. I’ve got a few coming up so I probably have still more to discover.

Posted in Database options, Disaster Recovery, SQL Server, T-SQL | Tagged , , | 1 Comment

Quick tip: Querying AD with xp_logininfo

We all have blind spots in our knowledge. We don’t know they are there and when we are thinking through an issue our mind quickly rejects some plans because of these blind spots. This happened to me recently when I was tasked with doing a security audit of a SQL instance.

Aside from a small handful of SQL logins, all access was via Windows groups. I knew that I could use xp_logininfo to discover whether a Windows user account had access to SQL Server, and via which groups.

exec xp_logininfo '[domain]\[username]','all'

But I needed to know which members belonged to which group. I was starting to look at cutting and pasting from the AD dialogs into Excel, or writing LDAP queries, when a colleague pointed out another xp_logininfo option that had been in my blind spot.

exec xp_logininfo '[domain]\[group]','members'

This will return all of the members of an Active Directory global group (it doesn’t return information for universal groups).

Nothing a closer inspection of books online couldn’t have told me – but I’d just never noticed it. Not the greatest discovery in the world either but it made my day a lot easier.

Posted in Security, SQL Server, T-SQL | 1 Comment

Issue with sys.dm_db_index_usage_stats in SQL Server 2012

I’d be struggling to remember a week where I hadn’t run a query against the DMV sys.dm_db_index_usage_stats to find unused or underused indexes (or even heaps). It would have to be in the top shelf of many DBAs’ toolboxes. So I was alarmed when I came across this post from Joseph Sack over at SQLskills.

The gist of this is that if you rebuild an index in SQL Server 2012 then that index’s stats are removed from the DMV! It doesn’t seem to be the case with reorganize, just with rebuilds. So the takeaway is to treat results from sys.dm_db_index_usage_stats with suspicion, and be aware of when you last ran a rebuild against the index. The best advice I’ve seen is to snapshot the DMV before running index maintenance.
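A minimal snapshot might look like this (dbo.index_usage_snapshot is a name I’ve made up for illustration):

```sql
-- first capture: SELECT INTO creates the snapshot table
SELECT GETDATE() AS capture_time, *
INTO dbo.index_usage_snapshot
FROM sys.dm_db_index_usage_stats;

-- subsequent captures, e.g. just before index maintenance
INSERT dbo.index_usage_snapshot
SELECT GETDATE(), *
FROM sys.dm_db_index_usage_stats;
```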

I wanted to see this for myself, and add my 2 cents worth so I ran the following tests.

Create a table:

if exists(select 1 from sys.objects where name = 'table' 
          and type = 'U')
	drop table [table];

create table [table](
  [id] int identity(1,1),
  [value] varchar(20) 
    default convert(varchar(20),datepart(millisecond,getdate()))
);

Add a clustered and a non clustered index:

create clustered index ix_cls on [table](id);

create nonclustered index ix_noncls on [table](value);

Populate it with some fake data:

insert [table]([value])
values (default);
go 1000

Run a couple of query’s that use the indexes:

select * from [table] where [id] = 498;
select * from [table] where [value] = '498';

Because of the way values are populated you might need to tinker with the where clause of the second select to hit the non-clustered index, or rerun the drop and create table script until you get both indexes being used.

Check usage stats:

select
	OBJECT_NAME(us.[object_id]) as [Table],
	i.[name] as [Index],
	us.user_seeks, us.user_scans, us.user_lookups, us.user_updates
from sys.dm_db_index_usage_stats us
inner join sys.indexes i
	on us.index_id = i.index_id 
        and us.[object_id] = i.[object_id]
where database_id = db_id()
and OBJECT_NAME(us.[object_id]) = 'table'

Reorganize the indexes:

alter index ix_cls on [table] reorganize
alter index ix_noncls on [table] reorganize

Check usage stats:

select
	OBJECT_NAME(us.[object_id]) as [Table],
	i.[name] as [Index],
	us.user_seeks, us.user_scans, us.user_lookups, us.user_updates
from sys.dm_db_index_usage_stats us
inner join sys.indexes i
	on us.index_id = i.index_id 
        and us.[object_id] = i.[object_id]
where database_id = db_id()
and OBJECT_NAME(us.[object_id]) = 'table'

Rebuild the indexes:

alter index ix_cls on [table] rebuild
alter index ix_noncls on [table] rebuild

Check the stats:

select
	OBJECT_NAME(us.[object_id]) as [Table],
	i.[name] as [Index],
	us.user_seeks, us.user_scans, us.user_lookups, us.user_updates
from sys.dm_db_index_usage_stats us
inner join sys.indexes i
	on us.index_id = i.index_id 
        and us.[object_id] = i.[object_id]
where database_id = db_id()
and OBJECT_NAME(us.[object_id]) = 'table'

Results: I ran the above tests against the SQL Server Builds that I had available with the following results:

Build      | Reorganize           | Rebuild
9.0.5000   | Usage stats persist. | Usage stats persist.
10.0.5500  | Usage stats persist. | Usage stats persist.
10.50.2500 | Usage stats persist. | Usage stats persist.
11.0.3000  | Usage stats persist. | Usage stats cleared.
11.0.3401  | Usage stats persist. | Usage stats cleared.

It seems that this bug/feature was introduced in SQL Server 2012 (at the time of writing, 11.0.3401 is the latest build of SQL Server 2012: SP1, CU8). My recommendation is that you keep this in mind when reviewing index usage.

Posted in SQL Server | 6 Comments

SQL Server configuration files

Problem: You require the ability to perform a repeatable standardised installation of multiple instances of SQL Server or the ability to create as-built documentation of an installation.

Solution: I’ve made a few attempts at documenting a SQL Server install, and all the various options that were chosen at each step, with mixed success. And ultimately following documentation is pretty susceptible to human error. While investigating something else entirely I came across the SQL Server configuration file – and I was sold.

How it works: When running through the SQL Server installation wizard, behind the scenes a configuration file is being created. This file is saved into the C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\<date_time_stamp>\ folder. (YMMV – this location depends on SQL version and whether you install to the default path). The file is created just before the final step to install SQL Server. You have the option of exiting the setup program at this point and using the configuration file, letting the install proceed and use the setup file for further installs, or even copying a configuration file from another extant installation.


Here is what my config file looks like for a named instance called CONFIG (with a little data scrubbing):

;SQL Server 2012 Configuration File

; Specifies a Setup work flow, like INSTALL, UNINSTALL, or UPGRADE. This is a required parameter. 


; Detailed help for command line argument ENU has not been defined yet. 


; Parameter that controls the user interface behavior. Valid values are Normal for the full UI,AutoAdvance for a simplied UI, and EnableUIOnServerCore for bypassing Server Core setup GUI block. 


; Setup will not display any user interface. 


; Setup will display progress only, without any user interaction. 


; Specify whether SQL Server Setup should discover and include product updates. The valid values are True and False or 1 and 0. By default SQL Server Setup will include updates that are found. 


; Specifies features to install, uninstall, or upgrade. The list of top-level features include SQL, AS, RS, IS, MDS, and Tools. The SQL feature will install the Database Engine, Replication, Full-Text, and Data Quality Services (DQS) server. The Tools feature will install Management Tools, Books online components, SQL Server Data Tools, and other shared components. 


; Specify the location where SQL Server Setup will obtain product updates. The valid values are "MU" to search Microsoft Update, a valid folder path, a relative path such as .\MyUpdates or a UNC share. By default SQL Server Setup will search Microsoft Update or a Windows Update service through the Window Server Update Services. 


; Displays the command line parameters usage 


; Specifies that the detailed Setup log should be piped to the console. 


; Specifies that Setup should install into WOW64. This command line argument is not supported on an IA64 or a 32-bit system. 


; Specify the root installation directory for shared components.  This directory remains unchanged after shared components are already installed. 

INSTALLSHAREDDIR="C:\Program Files\Microsoft SQL Server"

; Specify the root installation directory for the WOW64 shared components.  This directory remains unchanged after WOW64 shared components are already installed. 

INSTALLSHAREDWOWDIR="C:\Program Files (x86)\Microsoft SQL Server"

; Specify a default or named instance. MSSQLSERVER is the default instance for non-Express editions and SQLExpress for Express editions. This parameter is required when installing the SQL Server Database Engine (SQL), Analysis Services (AS), or Reporting Services (RS). 


; Specify the Instance ID for the SQL Server features you have specified. SQL Server directory structure, registry structure, and service names will incorporate the instance ID of the SQL Server instance. 


; Specify that SQL Server feature usage data can be collected and sent to Microsoft. Specify 1 or True to enable and 0 or False to disable this feature. 


; Specify if errors can be reported to Microsoft to improve future SQL Server releases. Specify 1 or True to enable and 0 or False to disable this feature. 


; Specify the installation directory. 

INSTANCEDIR="C:\Program Files\Microsoft SQL Server"

; Agent account name 


; Auto-start service after installation.  


; CM brick TCP communication port 


; How matrix will use private networks 


; How inter brick communication will be protected 


; TCP port used by the CM brick 


; Startup type for the SQL Server service. 


; Level to enable FILESTREAM feature at (0, 1, 2 or 3). 


; Set to "1" to enable RANU for SQL Server Express. 


; Specifies a Windows collation or an SQL collation to use for the Database Engine. 


; Account for SQL Server service: Domain\User or system account. 


; Windows account(s) to provision as SQL Server system administrators. 


; The default is Windows Authentication. Use "SQL" for Mixed Mode Authentication. 


; Provision current user as a Database Engine system administrator for SQL Server 2012 Express. 


; Specify 0 to disable or 1 to enable the TCP/IP protocol. 


; Specify 0 to disable or 1 to enable the Named Pipes protocol. 


; Startup type for Browser Service. 


Essentially this is a list of the various parameters and the options chosen for the installation. I won’t go through all of the parameters here; SQL Server installations are well documented elsewhere. I encourage you to seek out the configuration files of any SQL installations you have and scan through them to get a feel for the options.

Installing SQL Server using the configuration file: To use a configuration file for a SQL Server installation you run setup.exe from the command line. One of the options for the command is /ConfigurationFile, which accepts a filepath/filename argument.

No passwords are stored in the configuration file (and you wouldn’t want that, would you?), so these will need to be passed in at the command line.

If you want an unattended install you can also pass in the /Q or /QS switch. /Q is Quiet mode and suppresses the user interface; /QS is QuietSimple and displays the user interface but does not require any user input. These parameters are also specified in the configuration file and can be set there. Any parameters passed at the command line will override parameters in the file.

You will have to accept the license terms – you can do this via the command line by supplying the /IACCEPTSQLSERVERLICENSETERMS option.

Here is a (scrubbed) example of a command line install.

Setup.exe /SQLSVCPASSWORD="password" /AGTSVCPASSWORD="password" 
/SAPWD="password" /ConfigurationFile=C:\Temp\ConfigurationFile.ini 
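Combining the switches above, a fully unattended install might look something like this (the paths and passwords are placeholders, not values from a real install):

Setup.exe /Q /IACCEPTSQLSERVERLICENSETERMS /SQLSVCPASSWORD="password" 
/AGTSVCPASSWORD="password" /SAPWD="password" 
/ConfigurationFile=C:\Temp\ConfigurationFile.ini 

With /Q there is no user interface at all, which is why the license terms must be accepted on the command line.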

Conclusion: Whether you are looking for a way to standardise multiple SQL Server instances, automate SQL Server installations, or document and audit existing installations, configuration files might be just what you are looking for.

Posted in SQL Server | 2 Comments

Conditional Aggregates

The Problem: A legacy table contains amounts and a char column indicating whether the amount is a credit or a debit. We want an aggregate (over a given range) of the amounts. We need to add the credits and subtract the debits.

The Solution(s): Two possible approaches to the problem suggest themselves. One is to sum all of the credits and all of the debits and then subtract the total debit from the total credit. The second approach is to include a case statement inside the aggregate function to add or subtract the given row’s amount based on the credit/debit column.

The Code: Implement a cut down version of the table with some contrived sample data:

create table tbl_ledger
(
	id int identity(1,1),
	credit_debit char(1),
	amount money,
	[description] varchar(128) 
)

insert tbl_ledger(credit_debit,amount,[description])
values	('C', 10, 'credit'),
	('C', 10, 'credit'),
	('C', 10, 'credit'),
	('C', 10, 'credit'),
	('C', 10, 'credit'),
	('C', 10, 'credit'),
	('C', 10, 'credit'),
	('D', 10, 'debit'),
	('D', 10, 'debit'),
	('D', 10, 'debit')

This table has 10 rows: 7 are $10 credits and 3 are $10 debits, so we would expect the sum to return $40 ($70 credit – $30 debit).

Implement option 1, subtract the aggregates (this could be done a number of ways, including using variables to hold interim results or common table expressions; I’ve chosen a sub query as I think it reads well. The query plans for sub queries and CTEs are similar, while using variables muddies the waters a bit):

select sum(amount) - (select sum(amount) as [debit]
			from tbl_ledger 
			where credit_debit = 'D') as [credit]
from tbl_ledger 
where credit_debit = 'C'

As hoped this returns 40.

Implement option 2 using a case statement inside the sum function:

select sum(case 
			when credit_debit = 'C' then amount
			when credit_debit = 'D' then -1 * amount
		end) as [total cash]
from tbl_ledger

This again returns a correct result.

The Analysis: To me option 2 is the more elegantly coded solution, but how does it stack up performance-wise? To get some measurable results, rerun the insert multiple times:

insert tbl_ledger(credit_debit,amount,[description])
values	('C', 10, 'credit'),
	('C', 10, 'credit'),
	('C', 10, 'credit'),
	('C', 10, 'credit'),
	('C', 10, 'credit'),
	('C', 10, 'credit'),
	('C', 10, 'credit'),
	('D', 10, 'debit'),
	('D', 10, 'debit'),
	('D', 10, 'debit')
go 100000

And run the two queries side by side with statistics on:

set statistics io on
set statistics time on

select sum(case 
			when credit_debit = 'C' then amount
			when credit_debit = 'D' then -1 * amount
		end) as [total cash]
from tbl_ledger

select sum(amount) - (select sum(amount) as [debit]
						from tbl_ledger 
						where credit_debit = 'D') as [credit]
from tbl_ledger 
where credit_debit = 'C' 

set statistics io off
set statistics time off

I got the following:

(1 row(s) affected)
Table ‘tbl_ledger’. Scan count 1, logical reads 4167, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times:
CPU time = 281 ms, elapsed time = 295 ms.

(1 row(s) affected)
Table ‘tbl_ledger’. Scan count 10, logical reads 8334, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times:
CPU time = 483 ms, elapsed time = 130 ms.

With the execution plans:


The sub query outperforms the conditional aggregate primarily because the query can be parallelised. The conditional aggregate does a single table scan versus two table scans for the sub query – 4167 logical reads versus 8334 logical reads – but when I ran the tests against cold caches the read-ahead mechanism pulled all the data pages into cache for both queries, so I didn’t notice a significant change in the comparative result.

Running the second query with a maxdop hint brings the results closer:
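For reference, the hinted version is the same sub query with OPTION (MAXDOP 1) appended (a degree of 1 being what I used here to suppress parallelism):

select sum(amount) - (select sum(amount) as [debit]
			from tbl_ledger 
			where credit_debit = 'D') as [credit]
from tbl_ledger 
where credit_debit = 'C'
option (maxdop 1)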

(1 row(s) affected)
Table ‘tbl_ledger’. Scan count 1, logical reads 4167, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

(1 row(s) affected)

SQL Server Execution Times:
CPU time = 328 ms, elapsed time = 347 ms.

(1 row(s) affected)
Table ‘tbl_ledger’. Scan count 2, logical reads 8334, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

(1 row(s) affected)

SQL Server Execution Times:
CPU time = 249 ms, elapsed time = 296 ms.

And drops the parallel operators:


The Conclusion: After analysis it seems that the sub query outperforms the conditional aggregate option – even if we suppress parallelism. That said, these tests were on a cut down table, and a wider table with more rows might bring IO more into play – especially if indexes are added to the mix. Adding an index on the credit_debit column with amount as an included column speeds up the sub query even more, replacing the two table scans with index seeks.
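That covering index could be created along these lines (the index name is my own):

create index ix_tbl_ledger_credit_debit 
on tbl_ledger (credit_debit) 
include (amount)

With credit_debit as the key and amount carried in the leaf level, each of the two aggregates can be satisfied by an index seek without touching the base table.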

Posted in SQL Server, T-SQL | 6 Comments