How To Get a Free Azure Database

I’ve been doing some testing on Azure SQL Database over the past few weeks. Obviously when you move to databases delivered as a service you expect to pay for that service, but I did find a nice little workaround that unlocks the “free tier” for SQL Database.

The process is outlined below, but the TLDR version is: create a “WebApp + SQL” resource in the free tier and the database comes along for free. There are some downsides to this of course – the free tier is limited to 5 DTUs and 32 MB in size, and it vanishes after a year. But for playing around those limits should be fine – and if you outgrow them, you’ve at least got a good marker for what Azure SQL Database will cost you, because don’t forget that in Azure SQL Database you pay for what you provision, not what you use.

To get your free Azure Database, click on “Create a Resource” and search for “WebApp + SQL”. Select it, click Create, and you’ll see something like this:

The elusive “Free” tier.

The free tier can be selected from within the Dev\Test toolset:

Free is good.

Then, when you fill in the details for your database, you’ll see that the super secret free Azure Database tier has been unlocked.

Oh look…a secret thing.

Please remember the limits on that free database:

It expires after 365 days.

It has only 5 DTUs.

It cannot be larger than 32 MB.

Easily enough to run my new CRM on…

How Microsoft Protect Your Azure Databases

Microsoft have made the claim that you don’t need to run consistency checks on Azure SQL Databases because they take care of it.  That’s nice – one less thing to worry about, right?  Of course, to a DBA a claim like that falls into the same category as your backups being reliable without verifying them, or… I dunno… internet privacy not being a problem for people with nothing to hide.  In short, unless I know HOW they are achieving this I am not going to consider my data safe, and even then I’m probably going to be dubious.

So it’s refreshing to see the blog post yesterday from Peter Carlin which outlines the lengths that Microsoft is going to in order to make sure Azure Database users are not affected by data corruption issues.  Check it out here:  https://azure.microsoft.com/en-us/blog/data-integrity-in-azure-sql-database/

The bit I particularly like is listed under evolving methodologies.  It says “The Azure SQL Database engineering team regularly reviews and continues to enhance the data integrity issue detection capabilities of the software and hardware used in the service.” which is a nice acknowledgement that even with all the proactive work they are doing in this space, data corruption is by definition a case of something going wrong, and there is always room to improve how it gets detected and dealt with.

Thanks Microsoft.

Robotic Workforce

Google IO and a Robotic Future

Last Friday I had the very great pleasure of attending the Google IO extended presentation organised by Dhruv Valera.  Dhruv put on a great event and there was a lot of interesting conversation afterwards (along with a slight hangover).  I found the content of what Google is doing absolutely fascinating, especially the concept of moving from a mobile-first to an AI-first approach.  But the one thing that was missing was a coherent idea of what the future will look like.  Google is making staggering progress, but it is incremental progress on where we are now, and I like to think the other way: where do we want to end up, and how do we get there?

My own role currently involves increasing the level of automation within our service offerings.  When I get to do that it’s a lot of fun.  In one recent POC I automated 40 hours of work down to ten minutes, and even that ten minutes was running time for automated processes, so the human interaction was around 30 seconds.  That’s nice for a business, because they can charge 40 hours and pay for 30 seconds.  And sure, there’s a bunch of development work that goes into that, and stuff can always go wrong with automation and all the rest of it, but one thing that is a bit scary is that 40 hours is a significant number.  It’s a work week.  Which means I can think about things in two ways.  I can kick off this process, not bother showing up to work this week, log my hours, work bills the client and everyone is happy.  Or alternatively, my employer can get someone else to kick off this process 20 times, pay them for 10 minutes’ work, and there’s me and 19 of my co-workers out of a job.

When we talk about automation we don’t tend to talk about job loss, and in my particular business I don’t think that’s an immediate concern.  I’m trying to fit everyone’s work tasks back into the 40-hour work week we actually get paid for, rather than squeeze people out of jobs.  There’s plenty of work to go around.  But taken to its natural conclusion, where do we end up?  Back when I ran the internet cafe I used to joke that I could probably write a pretty simple program to do most of what I did.  But what if that’s not a joke?  Family aside, it’s quite possible for me to go through an entire week without interacting with another human being.  I can pay at the pump with a credit card so I don’t need to go and see the service station employee, I can go through an automated checkout at the supermarket, I can order whatever I want online and have it just turn up.  Where internet cafes do exist, they have probably been fully automated by now.

How difficult would it be to fully automate a cafe, or a bar, or a restaurant?  Not too difficult.  And even if the machinery to do that cost $50,000, or $150,000 – that doesn’t actually break down to that many annual wages.  Just think if you owned a cafe and could spend $150,000 to never have to pay wages again… you wouldn’t need to think too hard.  With minimum wage for a full-time salary in NZ being a shade over $30K, that’s five employee-years and the equipment pays for itself.  And you don’t have to worry about staff being sick, taking holidays, changing jobs, needing training, wanting raises and so on.  It’s actually a pretty easy equation.  Then you can add in the advantages – you could be open 24×7, because your staff never need to go home and sleep.  You’d never need to recruit or train new employees, and you could scale your business up like crazy!

So when I see Google’s natural language processing advances, and see a phone that you can hold up to a plant to have it identified, a bunch of stuff happens.  The geek in me says “That’s so cool!”, but the philosopher in me starts to ask where it stops and how the world looks even ten years in the future.  For anyone who ponders the same questions I recommend you read Marshall Brain’s “Manna” – it’s free on his website and details two possible futures for mankind.  The first is depressingly likely, the second euphorically enticing.  Fair warning: if your brain works like mine you will also want to devour everything else on his website.  It’s all fantastic stuff.

I am, often in the face of overwhelming evidence to the contrary, an optimist.  I believe that fundamentally humans are good.  I believe that technology should make us happier, smarter and more free.  We can do amazing things if we set our minds to them, and the technology we have is going to make that easier and easier.  So thank you Dhruv for organising the Google IO extended event in Nelson, and thank you Google for continuing to push the limits of what we are doing and can do.

The Accidental DBA – First things to master

Because I work in IT I quite regularly get asked to do any number of things that I have never done before, on the basis that they involve computers (or, increasingly, phones), or the internet, or something… geeky. The idea is that because we work in one area of IT, our skills should neatly transfer to other areas.

This is an annoyance in your personal life. Of course I have no experience setting up photo sync on your iPhone – I have an Android – but actually, I can probably figure out how it’s done faster than the average user. Hook up your new PlayStation and transfer your accounts? I mean… sure… I don’t have a PlayStation, but I can read things on a screen and guess which button to press. But that’s quite low-stakes stuff. The workplace equivalent is when a sysadmin or a web developer is expected to check in on the databases. In the former case they are probably looking at the database server as a bunch of files on disk; in the latter they are thinking of it as a bunch of tables to query for data. They have become an accidental DBA, and the expectation that has been placed on them is, quite simply, not fair.

Databases are incredibly complex beasties, and they shouldn’t be treated as a bunch of tables, or a bunch of files. They are databases, residing on database servers, and the skillset required to properly administer them is very specific and quite hard to define. But in this post I’m going to carve off the first four things to master if you find yourself lumped with the DBA tasks via one of those lovely little throwaway lines like “Hey, seeing as you have sysadmin, can you just make sure the SQL Servers are okay?” If you accidentally don’t say no to that question, it will be the first of many questions that start with “Can you just…” – because everything about databases SEEMS like it should be quite simple to someone who doesn’t know how they work. Worse still, it may seem quite simple to you too, meaning you can do incredible damage very quickly if you are not fully aware of the implications of the actions you’ll be taking.

Number One: Backups

Okay… so when you said “I guess” to that, you just made yourself responsible for the availability of data within your organisation. And you have to do that around your real job? Here’s the first thing that can go wrong: someone just accidentally overwrote the database that controls all the company orders, and you are about to get one of those “Can you just?” questions. “Can you just restore that back to this morning at 10:36?”

Well, can you?

The first skill you need to master is setting up SQL backups. That’s so much more than just running a backup command. You are not just creating a rollback point prior to a deployment; you now need to understand what the business expectations are around recovering data and how to implement them. I’ve already dedicated a bunch of posts on this blog to how you go about doing this, but I’ve barely scratched the surface. The key bits:

  1. Get someone to agree to the requirements.
  2. Get the disk and archive resource available to meet the requirements.
  3. Implement a plan that makes sense and meets the requirements.
  4. Test your restores.

Which of those is the most important? The next one on the list. You need to have done ALL of these before you should feel comfortable with your ability to restore data in your environment.
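To make that concrete, here’s a minimal sketch using the SqlServer PowerShell module. The instance name, database names, logical file names and paths are all hypothetical placeholders – check the logical file names in particular with RESTORE FILELISTONLY before relying on them.

# Assumes the SqlServer module is installed (Install-Module SqlServer)
Import-Module SqlServer

# A full backup of the database
Backup-SqlDatabase -ServerInstance "MyServer" -Database "Orders" -BackupFile "D:\Backups\Orders_Full.bak"

# A transaction log backup (only valid in FULL or BULK_LOGGED recovery, after at least one full backup)
Backup-SqlDatabase -ServerInstance "MyServer" -Database "Orders" -BackupAction Log -BackupFile "D:\Backups\Orders_Log.trn"

# Step 4 is the one everyone skips: prove the backup restores. Restore to a COPY, never over production.
$relocate = @(
    (New-Object Microsoft.SqlServer.Management.Smo.RelocateFile("Orders", "D:\Data\Orders_Test.mdf"))
    (New-Object Microsoft.SqlServer.Management.Smo.RelocateFile("Orders_log", "D:\Data\Orders_Test.ldf"))
)
Restore-SqlDatabase -ServerInstance "MyServer" -Database "Orders_Test" -BackupFile "D:\Backups\Orders_Full.bak" -RelocateFile $relocate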

Number Two: Monitoring

What do you do if one of those backups – the ones you just spent three months getting everything in place for – fails? Do you even know it happened?

SQL Server has a wonderful selection of built-in tools to help you out. You want to make sure that if something goes wrong, you know about it. Here’s a selection of SQL components you may want to look into to make that happen.

  1. Database Mail – this allows you to send emails directly from the server. These can be set up on job failure, fired from alerts, set up as steps in maintenance plans or dropped directly into T-SQL code.
  2. SQL Alerts – these can be configured as triggers on certain error conditions, and then set up to perform a response action – for instance kicking off a SQL Agent job or sending a Database Mail message.
  3. Maintenance Plans – these often get a bad rap in the SQL community, because they are quite limited in the options available for the main tasks they were designed for (SQL maintenance – backups, index optimisation and database consistency checks). But they are actually incredibly powerful because of the logic flow built into them. You have the ability to trigger an action and, depending on whether it succeeds or fails, fire off a different response. A series of maintenance plans consisting of T-SQL tasks can be an incredibly powerful addition to your toolbelt.

Now, rolling your own monitoring can be a time-consuming process and somewhat prone to error. My advice: never set up alerts that come directly to yourself. Set them up to go to a generic address like DBAalerts@yourcompany.com. It’s fine to have that simply forward everything on to you, but you want whoever takes over from you to be able to simply start picking up alerts, not have to go through and manually change everything you have set up.
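As a minimal sketch of how little it takes to get started, the following creates a generic operator and wires high-severity errors up to it via the msdb stored procedures (run here through Invoke-Sqlcmd from the SqlServer module; the server name and email address are hypothetical, and this assumes Database Mail is already configured):

$sql = @"
EXEC msdb.dbo.sp_add_operator @name = N'DBA Alerts', @email_address = N'DBAalerts@yourcompany.com';
EXEC msdb.dbo.sp_add_alert @name = N'Severity 17 Errors', @severity = 17, @include_event_description_in = 1;
EXEC msdb.dbo.sp_add_notification @alert_name = N'Severity 17 Errors', @operator_name = N'DBA Alerts', @notification_method = 1;
"@
Invoke-Sqlcmd -ServerInstance "MyServer" -Database "msdb" -Query $sql

You would typically repeat the sp_add_alert/sp_add_notification pair for each of severities 17 through 25.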

Number Three: Automation

I’ve already talked about a number of processes and tools that you can set up once and then only need to revisit if something goes wrong. The old-fashioned DBA would log into each server and work through a checklist of tasks, and that just sucks. One of the things you will find very quickly as an Accidental DBA is that once people know you are the one who can make things happen on the SQL Server, you will be hit from all sides with all sorts of requests. Your time will be too scarce and valuable to waste on something you could have scripted once and set to run on an appropriate schedule.

Learn the power of SQL Agent for scheduling tasks. Add PowerShell and command-line tools to take care of the associated operating system tasks, and make your life easier by overseeing an army of robots set up to complete your tasks and make you look good.
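As a sketch of what that looks like in practice, here’s a SQL Agent job created through msdb’s stored procedures (again via Invoke-Sqlcmd; the job name, schedule and command are hypothetical, and the command assumes you’ve installed Ola Hallengren’s maintenance scripts):

$sql = @"
EXEC msdb.dbo.sp_add_job @job_name = N'Nightly - Index Maintenance';
EXEC msdb.dbo.sp_add_jobstep @job_name = N'Nightly - Index Maintenance', @step_name = N'Run IndexOptimize', @subsystem = N'TSQL', @command = N'EXEC dbo.IndexOptimize @Databases = ''USER_DATABASES''';
EXEC msdb.dbo.sp_add_jobschedule @job_name = N'Nightly - Index Maintenance', @name = N'Nightly at 2am', @freq_type = 4, @freq_interval = 1, @active_start_time = 020000;
EXEC msdb.dbo.sp_add_jobserver @job_name = N'Nightly - Index Maintenance';
"@
Invoke-Sqlcmd -ServerInstance "MyServer" -Database "msdb" -Query $sql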

Number Four: Meet the Community

SQL Server has, without doubt, the most motivated and engaged tech community I have ever been involved with. Go and meet them. Most reasonably sized cities have a SQL user group. Microsoft has regular events in large cities. But even if you can’t engage with these in person, the online community is vibrant and full of people who are willing and able to help.

Sometimes that help is in the form of answering questions (I think you are more likely to get quick answers to SQL Server questions than with almost any other technology I have been engaged with), sometimes it is detailed blog posts or instructions on how to perform more complex tasks, and sometimes it is the perfect community tool for the task you are working on. If you haven’t already, go and check out sp_whoisactive, Ola Hallengren’s awesome maintenance scripts, or dbatools. Every one of these is commercial-grade brilliant, and they are all available – along with a community that will tell you how to use them – for free!
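Getting started with them is about as low-friction as it gets. On PowerShell 5 or later, something like this should pull dbatools down from the PowerShell Gallery (the instance name is a hypothetical placeholder):

# Install the community module from the PowerShell Gallery
Install-Module -Name dbatools -Scope CurrentUser
# Then, for example, check when every database on an instance was last backed up
Get-DbaLastBackup -SqlInstance "MyServer"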

SQL Saturday South Island is going to be awesome!

It’s 8 days until SQL Saturday South Island in Christchurch, and apart from this guy who still has to write his presentation, the event is going to be awesome.  SQL Saturday South Island always punches above its weight for a SQL Saturday in a small city, and this year is no exception, with 4 streams chock full of quality speakers.  We’ve got a good mix including a bunch of MCMs and MVPs, subject matter experts and seasoned international speakers.  But with quality like that comes a dilemma or two…

Case in point… which session do you go to first?  In the Microsoft room, Craig Ryan is presenting a must-see session on “Tuning SQL Server Standard Edition”.  You wouldn’t want to miss this if you are in an enterprise that is cost-conscious – that is… pretty much all of them.  With the release of SQL 2016 SP1, Standard Edition is no longer the poor cousin, and even in SQL 2014, with the doubling of RAM limits to 128 GB, it was capable of handling some pretty impressive workloads.  Not everyone needs Enterprise Edition features, and if you can get the performance levels you need out of an edition that is a third of the price, then why wouldn’t you?  Craig is one of the smartest SQL guys I know and this session is going to be full of take-home advice that will help people tune up their environments and get immediate gains from their Saturday spent learning SQL.  You simply have to go to this session.

Unless… unless you look in the SQL Services room next door, where you’ll see Reza Rad talking about security in Power BI.  I’m excited about this session, because security is usually the last thing that BI folk like to talk about.  If I had a dollar for every time I found a linked server some BI guy had set up on a test server that gave sysadmin access to everyone on the production box, I’d be a rich man.  Or at least able to afford a couple of decent rounds of drinks.  Sure, some people will tell you that BI is just about pretty colors and pictures (I’m looking at you, Victor), but those pretty colors and pictures are increasingly leading business decisions that affect everyone!  BI is so often a race to a deliverable that it’s going to be really interesting to hear Reza’s take on how to factor in security.  Then you take into account that Reza has presented on pretty much every SQL stage in the world and is in demand wherever he goes, and you’ve got a difficult choice of session starting to form.  You’ll definitely not want to miss Reza’s session…

…unless you look behind door number 3.  We originally had Nabeel Derhem lined up for a session on “A Practical Introduction to Machine Learning”.  Nabeel is a former Microsoft MVP who now works for Microsoft and was going to take us through the skillsets needed to get going with machine learning.  Unfortunately, due to some travel issues which are probably Trump’s fault, Nabeel wasn’t able to make it, but we’ve been lucky enough to have Nick Burns pick up exactly the same topic.  I work with Nick and can say without any doubt that this session is going to be brilliant.  Why?  Machine learning is just an awesome topic anyway, and I love Nick’s take on its practical applications… and also his healthy skepticism about its use.  It’s easy to get caught up in the glossy sales pitches around what machine learning can offer, but like everything it’s also important to step back and consider the implications of what happens if it’s implemented poorly.  Nick’s going to talk about how easy it is to get started with machine learning, and what things you need to understand to apply it in a practical, useful manner.  He’s got a brain the size of a planet, and a grin that would put any Jehovah’s Witness to shame.  You simply cannot afford not to go to this session.

Unfortunately I’ll miss all 3 of those sessions, because I’ll be sitting by myself in the Jade Software room wondering whether anybody actually cares about “The things you absolutely must be doing in your SQL environment!”  I’ve been asked a few times why I usually submit beginner-level sessions for SQL Saturdays.  My answer is pretty simple: most of the people who attend a SQL Saturday are beginners.  It does pain me to hear SQL getting blamed for poor performance when 10 seconds of investigation pulls out half a dozen queries written by people who clearly don’t know how a relational database works.  It pains me when I do a healthcheck and discover a database running a critical application that has had a corrupt table for 3 years.  It pains me even more when it takes 3 months before I get approval to fix that.  It actually physically hurts when I see backup software that is sold and used around the world that not only takes a transaction log backup to the NUL device at midnight every night, but sells this as a good thing!  In short, with all the information that is out there, it makes me a little sad to see people managing SQL Servers so badly, and worse still not realizing that it is actually their oversights which are causing their applications to run badly, not any fault of the SQL product itself.

So I don’t expect a whole bunch of people to turn up to my session next Saturday.  I know it would be choice number 4 of the 4 available for me.  But a smaller group might just work for this session.  Bring your questions and we can have a discussion, rather than just listening to me drone on up the front about stuff you should be doing in your SQL environments.  And when I’m finished I’ve got 4 more tough choices between 16 more great sessions.  It’s going to be a good weekend.

SQLSAT582: Resolving the “WinRM firewall exception will not work since one of the network connection types on this machine is set to Public” Error.

At SQL Saturday Melbourne, an otherwise excellent event, my demo failed on me because the dbatools module failed to load with the following error:

Set-WSManQuickConfig : <f:WSManFault xmlns:f="http://schemas.microsoft.com/wbem/wsman/1/wsmanfault"
Code="2150859113" Machine="localhost"><f:Message><f:ProviderFault provider="Config provider"
path="%systemroot%\system32\WsmSvc.dll"><f:WSManFault
xmlns:f="http://schemas.microsoft.com/wbem/wsman/1/wsmanfault" Code="2150859113"
Machine="<machinename>"><f:Message>WinRM firewall exception will not work since one of the
network connection types on this machine is set to Public. Change the network connection type to either Domain
or Private and try again. </f:Message></f:WSManFault></f:ProviderFault></f:Message></f:WSManFault>
At line:116 char:17
+ Set-WSManQuickConfig -force
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (:) [Set-WSManQuickConfig], InvalidOperationException
+ FullyQualifiedErrorId : WsManError,Microsoft.WSMan.Management.SetWSManQuickConfigCommand

For a piece of code that had run successfully a dozen times this was, of course, a VERY annoying error – not least because I had set it to run in another window and carried on with my talk, so didn’t really have time to either troubleshoot it or re-run it once I was back.  I had initially believed the problem was due to an earlier deviation from the order I had planned to do things in, and therefore related to execution policy.  But I hadn’t even read the error message properly during the demo, because I just wanted to curl up in a corner and cry.

But running it again later I remembered the other change I had made to my laptop since every other time I had run the code: I had adjusted my network settings from a static IP address to allow me to connect to the university wireless.  That was a dumb rookie mistake, and I really had no excuse for not having re-run my code in the new environment.  My session was right after lunch, so I had all lunchtime to step through my code, and had done so with everything but the particular script that failed.  Lesson learnt the hard way, I guess.  Speakers be warned.

Anyway, the error is obviously related to the change in wireless settings.  I was quickly pointed in the correct direction by this post from Matt Wrock, who provides the PowerShell commands needed to set all your network connections to private:

$networkListManager = [Activator]::CreateInstance([Type]::GetTypeFromCLSID([Guid]"{DCB00C01-570F-4A9B-8D69-199FDBA5723B}")) 
$connections = $networkListManager.GetNetworkConnections()
# Set network location to Private for all networks 
$connections | % {$_.GetNetwork().SetCategory(1)}

As the article referenced above explains the implications and the manual process better than I could, I won’t add additional junk to the internet, except to say that the failure to import the dbatools module appears to be due to a failure of Set-WSManQuickConfig in the PS remoting setup.  Also note that making your networks “Private” actually applies less security than having them “Public”, which is a bit counter-intuitive.  A “Public” network setting means you are on a public network, so you don’t trust it and security is high, whereas a “Private” network means you trust the network and are prepared to let traffic do a few extra things – like PS remoting, for instance… ahem.
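As a footnote, on Windows 8/Server 2012 and later the same change can be made with the built-in NetConnection cmdlets – a simpler sketch, if they’re available to you (run from an elevated session):

# Flip any Public connections to Private
Get-NetConnectionProfile | Where-Object { $_.NetworkCategory -eq 'Public' } | Set-NetConnectionProfile -NetworkCategory Private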

This was the first session of mine so heavily reliant on demo code; normally I do a lot of speaking from a slide deck.  The turnout was really good, but I felt my presentation in general was pretty poor.  It’s much easier to deviate from what you planned when you are talking from a slide deck, because the slide deck brings you back to the intended presentation.  When you are just typing code into a PowerShell console and someone asks a question that takes you away on a tangent, it’s harder to come back on topic.  In general I think the presentation went a little bit all over the place, which is exactly what people don’t need when they are trying to grasp the fundamentals of using a programming language.  The structure I had in place was good, but my execution on the day was poor.  So apologies to the audience on the day, but hopefully the core message of the usefulness of PowerShell still got through.  You can run through my demo scripts by downloading them from the SQL Saturday site at:  http://www.sqlsaturday.com/582/Sessions/Schedule.aspx

Resolved – A job step received an error at line 3 in a PowerShell script. The corresponding line is ‘$space.ForegroundColor

I was working on some PowerShell automation for a weekly process for one of my colleagues, and set up the PowerShell script to run via a SQL Server 2014 Agent job.  After some initial permissions errors I ran into this one:

Executed as user: <Agent Account>. A job step received an error at line 3 in a PowerShell script. The corresponding line is '$space.ForegroundColor = $host.ui.rawui.ForegroundColor'. Correct the script and reschedule the job. The error information returned by PowerShell is: 'Exception setting "ForegroundColor": "Cannot convert null to type "System.ConsoleColor" due to enumeration values that are not valid. Specify one of the following enumeration values and try again. The possible enumeration values are "Black,DarkBlue,DarkGreen,DarkCyan,DarkRed,DarkMagenta,DarkYellow,Gray,DarkGray,Blue,Green,Cyan,Red,Magenta,Yellow,White"."  '.  Process Exit Code -1.  The step failed.

This was really confusing at first, because I was sure there was no code that manipulated the console output – I was running from an agent job.  After a little testing I found the error came from a piece of code within my script which DOES manipulate the console output – specifically the native CLS command.  As that was only there from testing the code interactively, and not anything I needed for the piece of automation I was working on, removing the “CLS” from my script resolved the problem.
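If you do want to keep console-only commands like CLS around for interactive runs, a small guard along these lines (a sketch, not tested against every host) avoids the problem:

# CLS is an alias for Clear-Host, which needs a real console to clear;
# SQL Agent job steps don't have one, so only clear the screen when a console host exists
if ($Host.Name -eq 'ConsoleHost') {
    Clear-Host
}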

The error message was really misleading here, and I hope it is addressed by Microsoft in the future.  It references a specific line of code in an agent job step, but for me that line corresponded to an echo statement.  At the least it should indicate that the error is in a called piece of code, not in the source script – and preferably it shouldn’t generate an error at all, as all the output from the job step is plain text.