
Optober – SQL sp_configure options – Optimise for Adhoc Workloads

The Optimise for Adhoc Workloads is a setting that I had never given a lot of thought to.  I credit my knowledge of it entirely to new MVP Martin Catherall, who casually dropped into conversation that it is an option he usually configures by default.  I asked a few other people about this and it seems that the option is either completely unknown or considered to be a standard deployment.  So I’ve done a bit of reading and a bit of testing, and now it’s one I consider pretty much a default deployment as well.

Option Name:  optimize for ad hoc workloads

Usage:  sp_configure 'optimize for ad hoc workloads', 1

Default: 0

Maximum: 1

Requires Restart:  No

What it does:  A certain portion of your SQL Server buffer pool is used to store cached execution plans, so they can be reused if the same query comes along again.  This saves SQL Server from going through the process of generating an execution plan for the same query over and over and over again (unless your developers are in love with OPTION (RECOMPILE), of course).  This option protects against your cache being filled up with execution plans that only ever get used once.  Effectively it stores a ‘stub’ of the execution plan, and only stores the full plan if it sees the query executed again, which saves a lot of space.

When should I use it:  Almost always!  I’ve put the question to a lot of different people at various SQL Saturday expert panels and user group presentations, and the answer is pretty much always that there is really no downside to enabling this option.  If you have a server with a lot of adhoc querying occurring, it will save space in the buffer pool that can be used for data pages = faster execution.  If you have a server with only a couple of queries executing multiple times, there is a small impact in that each query has to run twice before its full plan is cached after a restart.  But in that instance your plan cache is likely to be smaller anyway due to the limited number of plans in it.

What else should you know:  To see if you are likely to get any benefit from the setting, check what’s happening on your server right now (disclaimer: you’ll want a little while to have passed since the last restart before running this script, as the DMVs it accesses are reset on service restart).

-- Breaks the plan cache down by object type, showing how much of it is
-- taken up by plans that have only ever been used once.
SELECT objtype AS [CacheType]
, count_big(*) AS [Total Plans]
, sum(cast(size_in_bytes as decimal(18,2)))/1024/1024 AS [Total MBs]
, avg(usecounts) AS [Avg Use Count]
, sum(cast((CASE WHEN usecounts = 1 THEN size_in_bytes ELSE 0 END) as decimal(18,2)))/1024/1024 AS [Total MBs - USE Count 1]
, sum(CASE WHEN usecounts = 1 THEN 1 ELSE 0 END) AS [Total Plans - USE Count 1]
FROM sys.dm_exec_cached_plans
GROUP BY objtype
ORDER BY [Total MBs - USE Count 1] DESC
go

(Code Courtesy of Kimberly Tripp’s blog post)

That code will tell you how many single-use adhoc plans you have in the cache, together with how much space those plans are using right now.  You might be surprised!
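
And once the option is enabled you can watch the stubs themselves appear: they show up in the same DMV with their own cache object type (a quick sketch):

-- Plan stubs are small placeholders kept in place of full single-use plans:
SELECT usecounts, cacheobjtype, objtype, size_in_bytes
FROM sys.dm_exec_cached_plans
WHERE cacheobjtype = 'Compiled Plan Stub'
go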

Optober – SQL sp_configure options – Allow Updates

Option Name:  allow updates

Usage:  sp_configure 'allow updates', 1

Default: 0

Maximum: 1

Requires Restart:  No

What it does:  Once upon a time, if you wanted to update system tables in your database you could do so, and this was the magic switch you needed to throw to do it.  In the current enlightened times people realize that updating system tables is generally speaking not a good idea, and if you really need to do it to fix (for example) allocation corruption, there is now emergency mode repair which can be used.

When should I use it:  Never after SQL 2000, and even in SQL 2000 only if you are very clear on what you are doing.  Books Online is pretty clear on this: if you’ve updated system tables then your database is not in a supported state.  The option currently has no effect and will be removed in a future version of SQL Server.

What else should you know:  There are other ways to update system tables if there is corruption within your database.  But you need to be aware that if you change an underlying system table, the date of the last change is logged in the header of the database.  It will be there for everyone who cares to look (i.e. Microsoft Support) to see.  I’m not going to go into details on that except to point you over to Paul Randal’s blogs on database corruption.  Though if you are looking at this stuff, god knows how you would have found my blog before his.

Optober – SQL sp_configure Options – Remote Admin Connection

On about day one of being a DBA I was told this was an option that should always be enabled.  It took me much longer to understand what it achieved, and over 5 years before I had to use the dedicated administrator connection ‘in anger’.  But if you use it once and save having to reboot a server, you will come to appreciate it very quickly.

Option Name:  remote admin connection

Usage:  sp_configure 'remote admin connection', 1

Default: 0

Maximum: 1

Requires Restart:  No

What it does:  SQL Server keeps a single scheduler dedicated to allowing an administrator to connect.  To use it you connect as normal, but prefix your server name with ADMIN: and now, even if SQL Server is completely tied up, you can connect and work out what’s going on.  And that’s a very valuable thing to be able to do.  However, when SQL Server gets itself tied up into the sort of knots that would normally require the dedicated admin connection, you’ll often find the server so slow to respond that trying to do anything on the instance itself is nearly impossible.  I’m assuming you’re not going to have your server hooked up to a mouse, monitor and keyboard on your desk (and if you do… another day perhaps), so you are most likely going to need to RDP into the server.  When you RDP into a server you are actually adding even more load as your profile gets loaded and your remote session is maintained.  If the server is dying you don’t want to make it work harder.  If this option is enabled, the dedicated admin connection can be used from a remote copy of Management Studio or sqlcmd.
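
As a rough sketch of the workflow (the server name is a placeholder, and the diagnostic query is just one example of something suitably lightweight):

-- Turn on the remote DAC ahead of time; you don't want to be doing this mid-crisis:
EXEC sp_configure 'remote admin connection', 1
RECONFIGURE
go
-- When trouble hits, connect from a remote machine by prefixing the server name
-- with ADMIN: in Management Studio, or using sqlcmd's dedicated admin switch:
--     sqlcmd -S ADMIN:MyServer -E
-- Then run something small to see what the server is choking on:
SELECT session_id, status, command, wait_type, blocking_session_id
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0
go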

When should I use it:  Always.  This should be a standard deployment.  You may get objections about it being a security risk, but seriously: you already have to be a system administrator to use it, so there’s no danger there… or at least no additional danger.  If you have dodgy sysadmins then you have bigger problems than this setting.

What else should you know:   Be aware that you have one thread to play with if you connect to the DAC.  It’s meant as a method to get in, run quick diagnosis on what has put the server into the state it is in, and resolve the issue.  It’s not meant for running complicated queries, there’s no chance of parallelism (because it’s just ONE thread), and it’s really only there as a last-ditch option to prevent you having to restart the server.

Also be aware that when used locally the DAC is listening on the loopback address.  If you are using the DAC remotely you are going to need to make sure all the necessary firewall ports are open.

Optober – SQL sp_configure options – Backup Compression Default

I was surprised today when doing a review to find that a client was still under the impression that backup compression is an enterprise feature.  No no no no no.  It’s been there for everyone with Standard Edition since SQL Server 2008 R2 and there’s really no reason not to be using it.  The only real question is: why is it not a default setting?

Option Name:  backup compression default

Usage:  sp_configure 'backup compression default', 1

Default: 0

Maximum: 1

Requires Restart:  No

What it does:  Who wants smaller, faster backups?  Everyone!  Backup compression is very impressive and can reduce your backup sizes considerably.  On average I would say you can expect your compressed backup to be about 40% of the size of an uncompressed backup, but some data is going to compress better and some not so well, so your mileage will vary.
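
If you want to see what compression is actually buying you on your own backups, msdb’s backup history can tell you.  A minimal sketch (assuming SQL 2008 or later, where the compressed_backup_size column exists):

-- Rough compression ratio for recent backups; for uncompressed backups
-- compressed_backup_size equals backup_size, so the ratio is 1.
SELECT TOP (20) database_name
, backup_start_date
, backup_size / NULLIF(compressed_backup_size, 0) AS [Compression Ratio]
FROM msdb.dbo.backupset
ORDER BY backup_start_date DESC
go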

When should I use it:  Almost always!  Not only are you going to get a smaller final backup size, but because the IO is reduced the backups are likely to complete faster than a regular backup.  Now, it’s worth noting that there is compression going on here, and that means extra CPU.  If your CPU is already being heavily taxed then you may want to steer away from backup compression as it will add to your processor load.

What else should you know:  There are a few other bits that might be helpful with the ‘backup compression default’ option.

  • It’s only a default – it can be overridden in the actual backup command.  So if you want to leave it disabled, but find yourself short on space to take an adhoc backup, you can specify COMPRESSION as a backup option and your backup will be compressed (see the sketch after this list).
  • You cannot mix compressed and non compressed backups within the same device.  I don’t know how common it is to append backups anymore.  I certainly don’t see it a lot.  But if you do that then you’ll need to remove/rename the old backups before switching from uncompressed backups to compressed.
  • Restore syntax is exactly the same whether a backup is compressed or not.  SQL just looks at the headers and figures out what to do.
  • All the other options you can throw at the backup command remain the same.  For example if you are doing a backup with checksum it works regardless of whether the backup is compressed or not.
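
As an example of that first point, overriding the instance default for a one-off backup is just a matter of adding the option (database name and path here are hypothetical):

-- The instance default may be 0, but this backup will still be compressed:
BACKUP DATABASE MyDatabase
TO DISK = N'D:\Backups\MyDatabase.bak'
WITH COMPRESSION, CHECKSUM
go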


Optober – SQL sp_configure Options: Show Advanced Options

Option Name:  show advanced options

Usage:  sp_configure 'show advanced options', 1

Default: 0

Maximum: 1

Requires Restart:  No

What it does:  If you have just installed SQL Server and run sp_configure you are going to get 18 options.  If you enable (set to 1) the ‘show advanced options’ option you get access to all 70 options.  What’s an advanced option?  To be honest the list looks pretty arbitrary to me.  Things like ‘backup compression default’ and ‘remote admin connection’ are fine as ‘simple’ options, but why would ‘clr enabled’ or ‘nested triggers’ be considered simple?  You can do some serious damage with just the simple options.
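
Seeing it in action is about as simple as it gets; no restart needed, RECONFIGURE applies it immediately:

-- A fresh install lists only the 18 basic options:
EXEC sp_configure
go
-- Flip the switch and the full list of 70 appears:
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
go
EXEC sp_configure
go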

The only official word I can find on it describes the advanced options as ‘Advanced options, which should be changed only by an experienced database administrator or a certified SQL Server technician’.  There are a few advanced options that I feel could probably be simple, and a few simple ones that could probably be advanced.  As we progress through the month I’ll talk about each of these and the damage (or not) they could do.

Optober: A Month of SQL Options

And so it begins – Optober.  It’s a month of talking about SQL Server configuration options.  Why would I do such a thing?  Well partly because there’s a bunch of them I want to learn more about and it gives me a chance to do some fun testing.  Partly because I want to force myself to blog regularly this month and mostly because I have a bunch of study to do for my next exam, and this strikes me as an excellent way to procrastinate.

In SQL Server 2014 there are 70 SQL options available under sp_configure (if you only see 18 and don’t know what I’m talking about then proceed to lesson 1 – show advanced options), so we’ll probably take a little longer than a month to get through them, but let’s see.  The options are listed below – as I address each one I’ll update this post with a hyperlink.

  • access check cache bucket count
  • access check cache quota
  • Ad Hoc Distributed Queries
  • affinity I/O mask
  • affinity mask
  • affinity64 I/O mask
  • affinity64 mask
  • Agent XPs
  • allow updates
  • backup checksum default
  • backup compression default
  • blocked process threshold (s)
  • c2 audit mode
  • clr enabled
  • common criteria compliance enabled
  • contained database authentication
  • cost threshold for parallelism
  • cross db ownership chaining
  • cursor threshold
  • Database Mail XPs
  • default full-text language
  • default language
  • default trace enabled
  • disallow results from triggers
  • EKM provider enabled
  • filestream access level
  • fill factor (%)
  • ft crawl bandwidth (max)
  • ft crawl bandwidth (min)
  • ft notify bandwidth (max)
  • ft notify bandwidth (min)
  • index create memory (KB)
  • in-doubt xact resolution
  • lightweight pooling
  • locks
  • max degree of parallelism
  • max full-text crawl range
  • max server memory (MB)
  • max text repl size (B)
  • max worker threads
  • media retention
  • min memory per query (KB)
  • min server memory (MB)
  • nested triggers
  • network packet size (B)
  • Ole Automation Procedures
  • open objects
  • optimize for ad hoc workloads
  • PH timeout (s)
  • precompute rank
  • priority boost
  • query governor cost limit
  • query wait (s)
  • recovery interval (min)
  • remote access
  • remote admin connection
  • remote login timeout (s)
  • remote proc trans
  • remote query timeout (s)
  • Replication XPs
  • scan for startup procs
  • server trigger recursion
  • set working set size
  • show advanced options
  • SMO and DMO XPs
  • transform noise words
  • two digit year cutoff
  • user connections
  • user options
  • xp_cmdshell

Stupid SQL Things To Do: Backup to NUL

NOTE:  This is an expansion of a note that was posted on our company news page over a year ago, but I’ve recently encountered the same issue a couple of times within a week, so thought it was worth repeating.

Occasionally I’ve come across a site where backups are taken to disk = ‘NUL’. Note that’s NUL with 1 L and not NULL with 2 L’s. This allows a database in the full recovery model to perform a “pretend” log backup, allowing the log file to be cleared and re-used. No resulting file is placed on disk, so I’ve seen it recommended in a few places online as a quick fix where a log file has grown out of control.
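
For reference, the pattern in question looks like this (the database name is just a placeholder):

-- Don't do this as part of regular maintenance; the log backup is simply thrown away:
BACKUP LOG MyDatabase TO DISK = 'NUL'
go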

The important thing to know about this is that you have just broken your recoverability chain and have no point in time recoverability until a new full backup is taken (or, if you want to be tricksy, a differential backup will do the trick). Therefore it should NEVER be part of a regular maintenance plan (especially in conjunction with a scheduled shrink… eek!).

If point in time recoverability is not important to you, use the SIMPLE recovery model and your transaction log will be managed by SQL Server itself. If you do require point in time recoverability then schedule regular log backups, and if your transaction log is still growing larger than expected, look at the activity that is causing it rather than resorting to a sledgehammer fix like backup to disk = ‘NUL’. If you use that you have achieved nothing more or less than taking a regular log backup and then deleting the file.

Now that we’ve talked about what it does, it’s worth noting that this may automatically start appearing in your backup reports without you having changed anything on the SQL Server. You can get a list of backups that are taken on your server by using a script like the one below, and I suggest you do that and check for any extra backups that occur. If you start to see a bunch of log backups to the NUL device appear from nowhere, go talk to your system administrator about what new backup software they are using, and once they’ve admitted they have just put a new and shiny backup product in place you have my full permission to bash them upside the head and tell them not to change stuff on your server without talking to you. There are a couple of popular backup products which ‘help’ people with their SQL backups by offering to ‘prune’ or ‘truncate’ SQL logs. Make sure you understand what that actually does before implementing it, as you may be stung by the NUL backup issue.
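
A minimal sketch of such a query, using msdb’s standard backup history tables:

-- Recent backups on this instance, including the device they were written to.
-- A log backup to the NUL device shows up with physical_device_name = 'NUL'.
SELECT TOP (100) bs.database_name
, bs.type -- D = full, I = differential, L = log
, bs.backup_start_date
, bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf ON bmf.media_set_id = bs.media_set_id
ORDER BY bs.backup_start_date DESC
go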

The most annoying bit about this is that it is a checkbox which doesn’t give you any information on what it will actually be doing. System administrators never want extra logs filling up their disk space, so a checkbox offering to ‘prune’ them sounds like a great idea. Of course, a database level setting that automatically shrinks your database to stop it filling up all that disk space sounds great to a system administrator too (HINT: It’s not).