Category Archives: SQL Server

Optober – SQL sp_configure Options – Remote Admin Connection

On about day one of being a DBA I was told this was an option that should always be enabled.  It took me much longer to understand what it achieved, and over 5 years before I had to use the dedicated administrator connection ‘in anger’.  But if you use it once and save having to reboot a server, you will come to appreciate it very quickly.

Option Name:  remote admin connection

Usage:  sp_configure ‘remote admin connection’, 1

Default: 0

Maximum: 1

Requires Restart:  No

What it does:  SQL Server keeps a single scheduler in reserve, dedicated to allowing an administrator to connect.  To use it you connect as normal, but prefix your server name with ADMIN: – so even if SQL Server is completely tied up, you can still get in and work out what’s going on.  And that’s a very valuable thing to be able to do.  However, when SQL Server gets itself into the sort of knots that would normally require the dedicated admin connection, you’ll often find the server so slow to respond that trying to do anything on the instance itself is nearly impossible.  I’m assuming you’re not going to have your server hooked up to a mouse, monitor and keyboard on your desk (and if you do… another day perhaps).  So you are most likely going to need to RDP into the server.  When you RDP into a server you are actually adding even more load, as your profile gets loaded and your remote session has to be maintained.  If the server is dying you don’t want to make it work harder.  If this option is enabled, the dedicated admin connection can be used from a remote copy of Management Studio or sqlcmd.
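
As a rough sketch of what that looks like in practice (SERVER01 is just a placeholder instance name), you enable the option with sp_configure and then connect over the DAC by prefixing the server name, or by using sqlcmd’s -A switch:

-- Enable the remote dedicated admin connection (no restart required)
EXEC sp_configure 'remote admin connection', 1;
RECONFIGURE;

-- From a remote machine, connect to the DAC in Management Studio by
-- prefixing the server name with ADMIN:, or from the command line with:
--   sqlcmd -S ADMIN:SERVER01 -E
--   sqlcmd -S SERVER01 -A -E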

When should I use it:  Always.  This should be part of your standard deployment.  You may get objections about it being a security risk, but seriously – you already have to be a system administrator to use it, so there’s no danger there… or at least no additional danger.  If you have dodgy sysadmins then you have bigger problems than this setting.

What else should you know:  Be aware that you have one thread to play with if you connect to the DAC.  It’s meant as a method to get in, run a quick diagnosis on what has put the server into the state it is in, and resolve the issue.  It’s not meant for running complicated queries, there’s no chance of parallelism (because it’s just ONE thread) and it’s really only there as a last-ditch option to save you from having to restart the server.

Also be aware that when used locally the DAC is listening on the loopback address.  If you are using the DAC remotely you are going to need to make sure all the necessary firewall ports are open.

Optober – SQL sp_configure options – Backup Compression Default

I was surprised today when doing a review to find a client still under the impression that backup compression is an Enterprise-only feature.  No no no no no.  It’s been there for everyone with Standard Edition since SQL Server 2008 R2 and there’s really no reason not to be using it.  The only real question is why it isn’t a default setting.

Option Name:  backup compression default

Usage:  sp_configure ‘backup compression default’, 1

Default: 0

Maximum: 1

Requires Restart:  No

What it does:  Who wants smaller, faster backups?  Everyone!  Backup compression is very impressive and can reduce your backup sizes considerably.  On average I would say you can expect a compressed backup to be about 40% of the size of an uncompressed one, but some data is going to compress better and some not so well, so your mileage will vary.

When should I use it:  Almost always!  Not only are you going to get a smaller final backup size, but because the I/O is reduced, the backups are likely to complete faster than regular backups.  Now, it’s worth noting that there is compression going on here, and that means extra CPU.  If your CPU is already being heavily taxed then you may want to steer away from backup compression, as it will add to your processor load.

What else should you know:  There are a few other bits worth knowing about the ‘backup compression default’ option.

  • It’s only a default – it can be overridden in the actual backup command.  So if you want to leave it disabled, but find yourself short on space for an ad hoc backup, you can specify COMPRESSION as a backup option and that backup will be compressed (see the sketch after this list).
  • You cannot mix compressed and uncompressed backups within the same device.  I don’t know how common it is to append backups anymore.  I certainly don’t see it a lot.  But if you do that, you’ll need to remove or rename the old backups before switching from uncompressed backups to compressed.
  • Restore syntax is exactly the same whether a backup is compressed or not.  SQL just looks at the headers and figures out what to do.
  • All the other options you can throw at the backup command remain the same.  For example if you are doing a backup with checksum it works regardless of whether the backup is compressed or not.
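
As a rough sketch of the above (MyDatabase and the backup path are placeholders), setting the default and overriding it for a single backup look like this:

-- Make compression the default for all backups on the instance
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;

-- Or leave the default alone and compress just this one ad hoc backup
BACKUP DATABASE MyDatabase
TO DISK = 'D:\Backups\MyDatabase.bak'
WITH COMPRESSION, CHECKSUM;

-- Restore syntax is identical whether the backup was compressed or not
RESTORE DATABASE MyDatabase
FROM DISK = 'D:\Backups\MyDatabase.bak';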

 

Optober – SQL sp_configure Options: Show Advanced Options

Option Name:  Show Advanced Options

Usage:  sp_configure ‘show advanced options’, 1

Default: 0

Maximum: 1

Requires Restart:  No

What it does:  If you have just installed SQL Server and run sp_configure you are going to get 18 options.  If you enable (set to 1) the ‘show advanced options’ option you get access to all 70 options.  What’s an advanced option?  To be honest the list looks pretty arbitrary to me.  Things like ‘backup compression default’ and ‘remote admin connection’ are fine as ‘simple’ options, but why would ‘clr enabled’ or ‘nested triggers’ be considered simple?  You can do some serious damage with just the simple options.

The only official word I can find on it describes the advanced options as ‘Advanced options, which should be changed only by an experienced database administrator or a certified SQL Server technician’.  There are a few advanced options that I feel could probably be simple, and a few simple ones that could probably be advanced.  As we progress through the month I’ll talk about each of these and the damage (or not) they could do.
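
For completeness, a minimal sketch of flipping the switch looks like this:

-- Expose the full set of configuration options
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Running sp_configure with no parameters now lists every option
EXEC sp_configure;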

Optober: A Month of SQL Options:

And so it begins – Optober.  It’s a month of talking about SQL Server configuration options.  Why would I do such a thing?  Well, partly because there are a bunch of them I want to learn more about and this gives me a chance to do some fun testing.  Partly because I want to force myself to blog regularly this month, and mostly because I have a bunch of studying to do for my next exam, and this strikes me as an excellent way to procrastinate.

In SQL Server 2014 there are 70 options available under sp_configure (if you only see 18 and don’t know what I’m talking about then proceed to lesson 1 – show advanced options), so we’ll probably take a little longer than a month to get through them, but let’s see.  The options are listed below – as I address each one I’ll update this post with a hyperlink.

  • access check cache bucket count
  • access check cache quota
  • Ad Hoc Distributed Queries
  • affinity I/O mask
  • affinity mask
  • affinity64 I/O mask
  • affinity64 mask
  • Agent XPs
  • allow updates
  • backup checksum default
  • backup compression default
  • blocked process threshold (s)
  • c2 audit mode
  • clr enabled
  • common criteria compliance enabled
  • contained database authentication
  • cost threshold for parallelism
  • cross db ownership chaining
  • cursor threshold
  • Database Mail XPs
  • default full-text language
  • default language
  • default trace enabled
  • disallow results from triggers
  • EKM provider enabled
  • filestream access level
  • fill factor (%)
  • ft crawl bandwidth (max)
  • ft crawl bandwidth (min)
  • ft notify bandwidth (max)
  • ft notify bandwidth (min)
  • index create memory (KB)
  • in-doubt xact resolution
  • lightweight pooling
  • locks
  • max degree of parallelism
  • max full-text crawl range
  • max server memory (MB)
  • max text repl size (B)
  • max worker threads
  • media retention
  • min memory per query (KB)
  • min server memory (MB)
  • nested triggers
  • network packet size (B)
  • Ole Automation Procedures
  • open objects
  • optimize for ad hoc workloads
  • PH timeout (s)
  • precompute rank
  • priority boost
  • query governor cost limit
  • query wait (s)
  • recovery interval (min)
  • remote access
  • remote admin connection
  • remote login timeout (s)
  • remote proc trans
  • remote query timeout (s)
  • Replication XPs
  • scan for startup procs
  • server trigger recursion
  • set working set size
  • show advanced options
  • SMO and DMO XPs
  • transform noise words
  • two digit year cutoff
  • user connections
  • user options
  • xp_cmdshell

Stupid SQL Things To Do: Backup to NUL

NOTE:  This is an expansion of a note that was posted on our company news page over a year ago, but I’ve recently encountered the same issue a couple of times within a week, so thought it was worth repeating.

Occasionally I’ve come across a site where backups are taken to disk = ‘NUL’. Note that’s NUL with 1 L and not NULL with 2 L’s. This allows a database in the full recovery model to perform a “pretend” log backup, which lets the log file be cleared and re-used. No resulting file is placed on disk, so I’ve seen it recommended in a few places online as a quick fix where a log file has grown out of control.
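
For the record, the pattern in question looks like this (MyDatabase is a placeholder – and to be clear, this is the command I’m telling you not to run):

BACKUP LOG MyDatabase TO DISK = 'NUL';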

The important thing to know about this is that you have just broken your log backup chain and have no point-in-time recoverability until a new full backup is taken (or, if you want to be tricksy, a differential backup will do the trick).  Therefore it should NEVER be part of a regular maintenance plan (especially in conjunction with a scheduled shrink… eek!).

If point-in-time recoverability is not important to you, use the SIMPLE recovery model and your transaction log will be managed by SQL Server itself. If you do require point-in-time recoverability, then schedule regular log backups, and if your transaction log is still growing larger than expected, look at the activity that is causing it rather than resorting to a sledgehammer fix like backup to disk = ‘NUL’. If you use that, you have achieved nothing more or less than taking a regular log backup and then deleting the file.
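
A minimal sketch of those two sensible alternatives (again, MyDatabase and the path are placeholders):

-- If point-in-time recovery doesn't matter, let SQL Server manage the log
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;

-- If it does matter, schedule real log backups to a real file
BACKUP LOG MyDatabase
TO DISK = 'D:\Backups\MyDatabase_log.trn';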

Now that we’ve talked about what it does, it’s worth noting that this may automatically start appearing in your backup reports without you having changed anything on the SQL Server.  You can get a list of backups that are taken on your server by using this script, and I suggest you do that and check for any extra backups that occur.  If you start to see a bunch of log backups to the NUL device appear from nowhere, go talk to your system administrator about what new backup software they are using, and once they’ve admitted they have just put a new and shiny backup product in place, you have my full permission to bash them upside the head and tell them not to change stuff on your server without talking to you.  There are a couple of popular backup products which ‘help’ people with their SQL backups by offering to ‘prune’ or ‘truncate’ SQL logs.  Make sure you understand what that actually means before implementing it, as you may be stung by the NUL backup issue.
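
The script linked above gives you the full picture, but as a rough sketch, a query along these lines against msdb will surface any backups that have been sent to the NUL device:

SELECT bs.database_name,
       bs.backup_start_date,
       bs.type,                         -- D = full, I = differential, L = log
       bmf.physical_device_name
FROM   msdb.dbo.backupset AS bs
JOIN   msdb.dbo.backupmediafamily AS bmf
       ON bs.media_set_id = bmf.media_set_id
WHERE  bmf.physical_device_name IN ('NUL', 'NUL:')
ORDER BY bs.backup_start_date DESC;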

The most annoying bit about this is that it’s a tickbox or checkbox which doesn’t tell you what it will actually be doing.  System administrators never want extra logs filling up their disk space, so a checkbox offering to ‘prune’ them sounds like a great idea.  Of course, a database-level setting that automatically shrinks your database to stop it filling up all that disk space sounds great to a system administrator too (hint: it’s not).

 

NSSUG – Session 1: Paul Randal on Wait Stats

Thanks to all those who made it to our first user group meeting tonight.  Paul Randal gave a great session on the Waits and Queues Troubleshooting methodology and we had some really good discussion.

Paul’s code and slide deck are uploaded to the user group file store and there’s a discussion thread posted for any questions or comments on the session.

There’s also a thread in the forum for you to let us know what sessions you are interested to see in the future.  We’ve had a lot of really good offers for speakers for future sessions, but want to make sure we are tailoring things to the content the group is after.

I’d also like to thank Gigatown Nelson who supplied the venue and were announced as finalists in the Chorus Gigatown competition today.  It’s a great effort from the team and I’d like to urge everyone to get behind them as they try to win the top prize and bring fast internet to Nelson.

Finally, a reminder that Paul is doing an internals course in Australia in December.  These sessions go into great depth and are well worth the sticker price – especially if you take advantage of the discounts Paul offered last night.

Hope to see you all again next month!

Rob

Nelson SQL Server User Group

Wanted to take a moment to announce the great news that Nelson now has its own SQL Server User Group.  You can join in at the Nelson SSUG page.

Even better news is we have Paul Randal lined up as our first presenter.  The session is at 6pm on the 17th of September at the #GIGATOWNNSN office in Halifax Street.  Plenty of parking across the road, free beer and pizza, and a top-quality presenter.  What excuse would you possibly have NOT to come?

Session Description:  One of the first things you should check when investigating performance issues is wait statistics, as these can often point you in the right direction for further analysis. Unfortunately many people misinterpret what SQL Server is telling them and jump to conclusions about how to solve the problem – what is often called ‘knee-jerk performance tuning’. In this session, you will learn what waits are, how to analyze them, and potential solutions to common problem patterns.  Paul Randal will be presenting this remote session on SQL Server wait statistics, and how to make the best use of them.