Optober – SQL sp_configure Options: Show Advanced Options

Option Name:  Show Advanced Options

Usage:  sp_configure 'show advanced options', 1

Default: 0

Maximum: 1

Requires Restart:  No

What it does:  If you have just installed SQL Server and run sp_configure you are going to get 18 options.  If you enable (set to 1) the ‘show advanced options’ option you get access to all 70 options.  What’s an advanced option?  To be honest the list looks pretty arbitrary to me.  Things like ‘backup compression default’ and ‘remote admin connection’ are fine as ‘simple’ options, but why would ‘clr enabled’ or ‘nested triggers’ be considered simple?  You can do some serious damage with just the simple options.
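For example, to expose the full list (RECONFIGURE applies the change immediately – no restart needed):

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
-- sp_configure with no arguments now lists all the options, not just the basic set
EXEC sp_configure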

The only official word I can find on it describes the advanced options as ‘Advanced options, which should be changed only by an experienced database administrator or a certified SQL Server technician’.  There are a few advanced options that I feel could probably be simple, and a few simple ones that could probably be advanced.  As we progress through the month I’ll talk about each of these and the damage (or not) they could do.

Optober: A Month of SQL Options:

And so it begins – Optober.  It’s a month of talking about SQL Server configuration options.  Why would I do such a thing?  Well, partly because there’s a bunch of them I want to learn more about and it gives me a chance to do some fun testing.  Partly because I want to force myself to blog regularly this month, and mostly because I have a bunch of study to do for my next exam, and this strikes me as an excellent way to procrastinate.

In SQL Server 2014 there are 70 SQL options available under sp_configure (if you only see 18 and don’t know what I’m talking about then proceed to lesson 1 – show advanced options), so we’ll probably take a little longer than a month to get through them, but let’s see.  The options are listed below – as I address each one I’ll update this post with a hyperlink.

  • access check cache bucket count
  • access check cache quota
  • Ad Hoc Distributed Queries
  • affinity I/O mask
  • affinity mask
  • affinity64 I/O mask
  • affinity64 mask
  • Agent XPs
  • allow updates
  • backup checksum default
  • backup compression default
  • blocked process threshold (s)
  • c2 audit mode
  • clr enabled
  • common criteria compliance enabled
  • contained database authentication
  • cost threshold for parallelism
  • cross db ownership chaining
  • cursor threshold
  • Database Mail XPs
  • default full-text language
  • default language
  • default trace enabled
  • disallow results from triggers
  • EKM provider enabled
  • filestream access level
  • fill factor (%)
  • ft crawl bandwidth (max)
  • ft crawl bandwidth (min)
  • ft notify bandwidth (max)
  • ft notify bandwidth (min)
  • index create memory (KB)
  • in-doubt xact resolution
  • lightweight pooling
  • locks
  • max degree of parallelism
  • max full-text crawl range
  • max server memory (MB)
  • max text repl size (B)
  • max worker threads
  • media retention
  • min memory per query (KB)
  • min server memory (MB)
  • nested triggers
  • network packet size (B)
  • Ole Automation Procedures
  • open objects
  • optimize for ad hoc workloads
  • PH timeout (s)
  • precompute rank
  • priority boost
  • query governor cost limit
  • query wait (s)
  • recovery interval (min)
  • remote access
  • remote admin connection
  • remote login timeout (s)
  • remote proc trans
  • remote query timeout (s)
  • Replication XPs
  • scan for startup procs
  • server trigger recursion
  • set working set size
  • show advanced options
  • SMO and DMO XPs
  • transform noise words
  • two digit year cutoff
  • user connections
  • user options
  • xp_cmdshell

Stupid SQL Things To Do: Backup to NUL

NOTE:  This is an expansion of a note that was posted on our company news page over a year ago, but I’ve recently encountered the same issue a couple of times within a week, so thought it was worth repeating.

Occasionally I’ve come across a site where backups are taken to disk = ‘NUL’.  Note that’s NUL with one L, not NULL with two.  This performs a “pretend” log backup on a database in the full recovery model, allowing the log file to be cleared and re-used.  No resultant file is placed on disk, so I’ve seen it recommended in a few places online as a quick fix where a log file has grown out of control.
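In case you haven’t seen it in the wild, the pattern looks like this (the database name is just a placeholder):

-- The log records are discarded instead of being written to a backup file
BACKUP LOG [MyDatabase] TO DISK = 'NUL'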

The important thing to know about this is that you have just broken your log backup chain and have no point in time recoverability until a new full backup is taken (or, if you want to be tricksy, a differential backup will do the trick).  Therefore it should NEVER be part of a regular maintenance plan (especially in conjunction with a scheduled shrink… eek!).

If point in time recoverability is not important to you – use the SIMPLE recovery model and your transaction log will be managed by SQL Server itself.  If you do require point in time recoverability then schedule regular log backups, and if your transaction log is still growing larger than expected then look at the activity that is causing it rather than resorting to a sledgehammer fix like backup to disk = ‘NUL’.  If you use that, you have achieved nothing more or less than taking a regular log backup and then deleting the file.
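Both proper alternatives are one-liners (database name and backup path are illustrative):

-- No point in time recovery needed: let SQL Server manage the log
ALTER DATABASE [MyDatabase] SET RECOVERY SIMPLE

-- Point in time recovery needed: take real, scheduled log backups to a file
BACKUP LOG [MyDatabase] TO DISK = 'D:\SQLBackup\MyDatabase_log.trn'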

Now that we’ve talked about what it does, it’s worth noting that this may automatically start appearing in your backup reports without you having changed anything on the SQL Server.  You can get a list of backups that are taken on your server by using this script, and I suggest you do that and check for any extra backups that occur.  If you start to see a bunch of log backups to the NUL device appear from nowhere, go talk to your system administrator about what new backup software they are using, and once they’ve admitted they have just put a new and shiny backup product in place you have my full permission to bash them upside the head and tell them not to change stuff on your server without talking to you.  There are a couple of popular backup products which ‘help’ people with their SQL backups by offering to ‘prune’ or ‘truncate’ SQL logs.  Make sure you understand what that actually means before implementing it, or you may be stung by the NUL backup issue.
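If you don’t have a script handy, a minimal check against the backup history in msdb looks something like this (a quick sketch, not the exact script linked above):

-- Find log backups that were taken to the NUL device
SELECT bs.database_name, bs.backup_finish_date, bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf ON bs.media_set_id = bmf.media_set_id
WHERE bs.type = 'L' -- log backups only
AND bmf.physical_device_name = 'NUL'
ORDER BY bs.backup_finish_date DESC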

The most annoying bit about this is that it’s a tickbox which doesn’t give you any information on what it will actually be doing.  System administrators never want extra logs filling up their disk space, so a checkbox offering to ‘prune’ them sounds like a great idea.  Of course, a database level setting that automatically shrinks your database to stop it filling up all that disk space sounds great to a system administrator too (HINT: it’s not).


NSSUG – Session 1: Paul Randal on Wait Stats

Thanks to all those who made it to our first user group meeting tonight.  Paul Randal gave a great session on the Waits and Queues Troubleshooting methodology and we had some really good discussion.

Paul’s code and slide deck are uploaded to the user group file store and there’s a discussion thread posted for any questions or comments on the session.

There’s also a thread in the forum for you to let us know what sessions you are interested in seeing in the future.  We’ve had a lot of really good offers from speakers for future sessions, but want to make sure we are tailoring things to the content the group is after.

I’d also like to thank Gigatown Nelson who supplied the venue and were announced as finalists in the Chorus Gigatown competition today.  It’s a great effort from the team and I’d like to urge everyone to get behind them as they try to win the top prize and bring fast internet to Nelson.

Finally a reminder that Paul is doing an internals course in Australia in December.  These sessions go into great depth and are well worth the sticker price – especially if you take advantage of the discounts Paul offered last night.

Hope to see you all again next month!

Rob

Nelson SQL Server User Group

Wanted to take a moment to announce the great news that Nelson now has its own SQL Server User Group.  You can join in at the Nelson SSUG page.

Even better news is we have Paul Randal lined up as our first presenter.  The session is at 6pm on the 17th September at the #GIGATOWNNSN office in Halifax Street.  Plenty of parking across the road, free beer and pizza, and a top quality presenter.  What excuse could you possibly have NOT to come?

Paul Randal will be presenting this remote session on SQL Server wait statistics, and how to make the best use of them.  Session description:  One of the first things you should check when investigating performance issues is wait statistics – as these can often point you in the direction for further analysis.  Unfortunately many people misinterpret what SQL Server is telling them and jump to conclusions about how to solve the problem – what is often called ‘knee-jerk performance tuning’.  In this session, you will learn what waits are, how to analyze them, and potential solutions to common problem patterns.


Database Corruption Causes C-drive to Fill Up

Ever been fighting corruption on a server you don’t usually have anything to do with and suddenly got a message that C-drive is running out of space?  This happened to me today, and not knowing the server I wasn’t sure if a very low amount of disk space on C-drive was normal or was something I was causing.  As I watched it I could see it continue to drop about 5-8MB at a time.

I couldn’t identify where it was going, and it being a client server I wasn’t keen to download some third party tool onto it to tell me, so I did it the old fashioned way and looked through a bunch of likely suspects.  Eventually I checked with a colleague who went through roughly the same process before popping into the SQL log folder to check whether huge log files were being generated.  They were reasonably big, but more importantly, every time one of my corrupt databases was accessed it was generating an 8MB dump file.  There were 5-10 a minute being generated and the directory contained some 4000 of them.  When I cleaned those up and moved off a couple of the larger error logs I was able to free up 15GB of space, and then taking the corrupted databases offline and working on them one at a time reduced the buildup of stack dumps on the server to a manageable level.

When you are up to your ears in database corruption you don’t want to waste time dealing with not being able to open Management Studio properly because there is no disk space, so hopefully this will be useful to someone.  By default your SQL error log directory is going to be:

Program Files\Microsoft SQL Server\MSSQL.n\MSSQL\LOG\
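If the instance wasn’t installed in the default location, you can ask SQL Server itself – the path of the current error log is exposed as a server property, and the dump files are written to the same directory:

-- Returns the full path of the current ERRORLOG file; the dumps live alongside it
SELECT SERVERPROPERTY('ErrorLogFileName') AS CurrentErrorLogPath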

If you see similar symptoms have a look in there, clear out what you aren’t going to need and then get back to beating that corruption!

Why are Primary Databases Smaller Than Log Shipped Secondaries?

This week I was asked a ‘simple’ question by a client – why are the files for the primary databases smaller than the corresponding files on the log shipped secondary?  It obviously had to have something to do with either growth or shrink operations, so I set up a test to find the answer.

First a recap on what log shipping does.  Basically it is the process of backing up your databases and then restoring them automatically on another server, with a few bits of monitoring thrown in via Management Studio to make it a more manageable and versatile solution.  So any question about differences between the primary and secondary databases comes down to the following: “Is it a logged operation?”  In this case the DBA who drew my attention to the question had already investigated shrinking the log file, and after the next backup, copy and restore the log duly shrank on the secondary server – so it appears that a log shrink is logged.  That leaves growth.  It’s possible that the original database had a larger autogrowth setting which was later adjusted, and this change wasn’t transferred to the secondary.  Let’s test.

Create the database with a stupid autogrowth setting:

CREATE DATABASE [Growtest]
ON PRIMARY
( NAME = N'Growtest', FILENAME = N'D:\SQLData\growtest\Growtest.mdf' , SIZE = 25600KB , FILEGROWTH = 50%)
LOG ON
( NAME = N'Growtest_log', FILENAME = N'D:\SQLData\growtest\Growtest_log.ldf' , SIZE = 18432KB , FILEGROWTH = 10%)
GO

And set it up for log shipping to Growtest_DR.  For brevity’s sake I won’t include the instructions for that, but if you need them there’s a pretty good step by step guide here.  Take a backup, copy and restore to make sure it’s all running properly, and check the file sizes:

[Screenshot: LS Sizes 1]

As we expect both files are the same size.  Now let’s reset the autogrowth on Growtest from 50% to 10%:

ALTER DATABASE [Growtest] MODIFY FILE ( NAME = N'Growtest', FILEGROWTH = 10%)
GO

And create a table and fill it with enough data to trigger an autogrowth:

CREATE TABLE [dbo].[SpamTable](
    [randomspam] [char](8000) NOT NULL
) ON [PRIMARY]
GO

BEGIN TRAN
DECLARE @counter INT
SET @counter = 1
WHILE (@counter < 3000)
BEGIN
    SET @counter = @counter + 1
    INSERT INTO dbo.SpamTable VALUES (@counter)
END
COMMIT TRAN

Then do a backup, copy, restore and check the file sizes again:

[Screenshot: LS Sizes 2]

The files on DR and production are still the same size.  What happens if we run the transaction but then roll it back instead of committing it?  Does SQL bother to apply and roll back the transaction, or does it just remove the transaction from the log file?

BEGIN TRAN
DECLARE @counter INT
SET @counter = 1
WHILE (@counter < 500)
BEGIN
    SET @counter = @counter + 1
    INSERT INTO dbo.SpamTable VALUES (@counter)
END
ROLLBACK TRAN

[Screenshot: LS Sizes 3]

After a backup, copy, restore we still have the same file sizes on DR and Prod, so there goes our growth theory.  So let’s forget about what we were told when the problem was handed to us and do some shrinking.  First the test that was already performed:

DBCC SHRINKFILE (N'Growtest_log' , 0, TRUNCATEONLY)

Sure enough the log file truncation is transferred across to the DR server:

[Screenshot: LS Sizes 4]

Running out of ideas, we’ll try a shrink of the data file:

DBCC SHRINKFILE (N'Growtest' , 0, TRUNCATEONLY)

And after a backup, copy and restore:

[Screenshot: LS Sizes 5]

Finally, we have an answer.  While autogrowth changes and log file shrinks all appear to be logged, the shrink of the data file is not.  So at some point someone has shrunk the data file on production, which is why the production data files take less space than the DR data files.  So where tables are archived in production to free space and the data file is shrunk (either with SHRINKFILE or SHRINKDATABASE), we need to be aware that this shrink is not transferred to the DR server.  If we expect to reclaim that space we need to refresh the log shipping by taking a full backup from production and restoring it over the DR database.
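A minimal sketch of that refresh, using the test databases above (paths are illustrative, and you’d pause the log shipping jobs while you do it):

-- On the primary: take a fresh full backup
BACKUP DATABASE [Growtest] TO DISK = N'D:\SQLBackup\Growtest_refresh.bak'

-- On the secondary: overwrite the DR copy, leaving it ready to restore further log backups
RESTORE DATABASE [Growtest_DR] FROM DISK = N'D:\SQLBackup\Growtest_refresh.bak'
WITH REPLACE, NORECOVERY,
MOVE N'Growtest' TO N'D:\SQLData\growtest_dr\Growtest.mdf',
MOVE N'Growtest_log' TO N'D:\SQLData\growtest_dr\Growtest_log.ldf'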