Starting the year thinking about Security

Okay, so last year it happened. A client got their entire infrastructure Cryptolocked, and recovery has been a dog. This was neither a SQL Server problem nor a SQL breach, but it puts into perspective so much of what I’ve been talking about on this blog: how SQL Server is part of a wider environment, and how everyone has to be on the same page when it comes to securing and, if worst comes to worst, recovering that environment.

My focus at the start of 2025 is going to be the development of our DataAssure Security module. We’ve already got some pretty neat stuff on the security side of our managed service, but it has been developed around a minimum threshold we want every client to be at. So we capture your failed logins, we keep track of your system administrators and we highlight some glaring security holes (like weak passwords and incorrectly configured linked servers), but the security module will take that to the next level for clients who are after a more complete solution.
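
To give a rough idea of the kind of baseline checks I’m talking about, here is a minimal sketch of two of them – listing sysadmin members and spotting obviously weak SQL logins. This isn’t the DataAssure code, just the sort of query that sits behind checks like these:

-- Who is in the sysadmin fixed server role?
SELECT m.name AS login_name, m.type_desc
FROM sys.server_role_members AS rm
JOIN sys.server_principals AS r ON r.principal_id = rm.role_principal_id
JOIN sys.server_principals AS m ON m.principal_id = rm.member_principal_id
WHERE r.name = N'sysadmin';

-- SQL logins with a blank password, or a password equal to the login name
SELECT name
FROM sys.sql_logins
WHERE PWDCOMPARE(N'', password_hash) = 1
   OR PWDCOMPARE(name, password_hash) = 1;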

This is going to mean a refresher for me on many of the security features of SQL Server, and we’ll build out a bunch of the below:

  • TDE and Backup Encryption
  • Certificate Rotation
  • Monitoring Encrypted Connections
  • SQL Audit and Extended Events
  • Server- and database-level permission changes
  • Inactive and over-privileged logins

Security is hard enough when you are just thinking about your own servers, but designing solutions that are flexible enough to cover everything from hardware stores to financial institutions is especially difficult. Still, no point in starting the year off with easy tasks, is there?

Let’s go.

Optimizing Azure SQL Database Costs

Best Practices for Performance and Affordability

Losing control of your spend in Azure is ridiculously easy – like absolutely crazy easy. A few months ago, I set up a test environment in Azure to experiment with new features and test workloads. I created several Azure SQL databases with generous resource allocations, which were pretty dumb backends to some Azure App Services, which were the thing I was actually testing. Over the next few weeks, I worked through different tests, the databases continued to run, and every now and then I hit a bottleneck and bounced the resource up. One day, everything stopped. I was using a test account which had a $250/month budget, and I had spent the lot in a few days.

I had underestimated the potential costs, hadn’t kept an eye on them, and never scaled back down once I’d worked around my bottleneck. I had done, in short, what so many people do in Azure. This situation serves as a classic example of how easy it can be to overspend in Azure if you’re not actively managing and optimizing your database resources. In my case it didn’t matter too much; it just meant I had to wait until the end of the month for my budget to reset. I was also working at a pretty small scale, but I have seen, and heard of, people who hit this problem at alarmingly large scales.

Azure SQL Database is a powerful, scalable solution, but if not managed properly, the costs can quickly spiral out of control. So I thought I would share a few strategies for maintaining high performance while keeping your spending in check.

1. Choose the Right Service Tier

Azure SQL Database offers multiple service tiers and performance levels, such as Basic, Standard, and Premium, each designed for different workloads. The key to cost optimization is selecting the right tier for your workload. For test and development environments, avoid Premium or Business Critical tiers unless you truly need the high performance. Instead, opt for Basic or Standard tiers, which offer more cost-effective solutions for lower workloads.

In my example above, I had chosen a lower-tier service but then scaled it up. By understanding the needs of your application, you can select the appropriate service tier that ensures performance without inflating your costs. If you do choose to scale up, work out the new cost and be sure you are happy to keep paying it. If not, think about when you actually have higher needs and scale up only then, which brings us to our next point.
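
If you’re not sure what tier each database is currently sitting on, you can check from T-SQL rather than clicking through the portal. A minimal sketch, run against the logical server’s master database:

-- Current edition and service objective for every database on the server
SELECT d.name,
       dso.edition,
       dso.service_objective,
       dso.elastic_pool_name
FROM sys.database_service_objectives AS dso
JOIN sys.databases AS d
    ON d.database_id = dso.database_id;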

2. Scale Resources Based on Demand

Azure SQL Database allows you to scale resources up or down depending on the workload. For example, you can scale up during high-demand periods (like running large queries or processing data) and scale down when the demand is low (such as during off-hours or when the database is idle).

In the case of the test account, I didn’t scale down after tests were completed. Instead of manually resizing the databases, I could have automated the scaling process using Azure Automation or scheduled tasks to reduce resources when the system wasn’t actively being used. The Serverless Tier is actually really great for this if you don’t want to go through the manual process of doing it yourself.
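
If you do want to script the resizing yourself, the scale operation is a single T-SQL statement. A quick sketch – the database name and service objectives here are just examples:

-- Scale up before a heavy test run (the change is online and asynchronous)
ALTER DATABASE [MyTestDb] MODIFY (SERVICE_OBJECTIVE = 'S3');

-- ...and scale back down once the test run is finished
ALTER DATABASE [MyTestDb] MODIFY (SERVICE_OBJECTIVE = 'S0');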

3. Utilize Reserved Instances

For long-term projects, using Azure’s Reserved Instances can help reduce costs significantly. By committing to a one- or three-year term, you can lock in discounted rates compared to pay-as-you-go pricing. Reserved Instances are an excellent choice for production databases that run continuously, as the cost savings can be substantial over time.

4. Monitor Usage and Set Alerts

Azure provides built-in tools like Azure Cost Management and Azure Advisor that can help you monitor usage and track costs in real time. Setting up alerts when usage reaches certain thresholds can prevent you from overspending unexpectedly. I’ve now taken to setting up budget alerts on my test accounts because honestly – the chance of me getting called onto something else two seconds after I scale up a database I fully intend to scale back down is pretty high.

5. Clean Up Unused Databases – Automatically?

Lastly, it’s important to manage resources that are no longer in use. For temporary or test environments, make sure to regularly clean up unused databases. You can automate this cleanup process to prevent old, idle databases from consuming unnecessary resources and driving up costs.
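
As a starting point for deciding what is actually idle, the logical server’s master database keeps roughly two weeks of usage history in sys.resource_stats. A sketch like the one below (the thresholds are just a guess to illustrate the idea) can surface candidates for cleanup:

-- Run in the master database: databases with essentially no activity
-- across the retained history (roughly the last 14 days)
SELECT database_name,
       MAX(end_time)            AS last_sample,
       MAX(avg_cpu_percent)     AS peak_cpu_percent,
       MAX(avg_data_io_percent) AS peak_data_io_percent
FROM sys.resource_stats
GROUP BY database_name
HAVING MAX(avg_cpu_percent) < 1
   AND MAX(avg_data_io_percent) < 1
ORDER BY database_name;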

Conclusion

I know it sounds simple – but the biggest thing you can do to optimize costs is to look at the costs. It’s honestly the not paying attention that is the biggest danger. If you are looking, you are very likely to see places where you can trim expense; the real danger with cloud spend is not looking at the numbers as you go and only seeing them when the bill arrives at the end of the month.

How to Implement Row-Level Security in SQL Server

Imagine a financial firm with a single database housing information for thousands of customers. Each regional manager should only access data relevant to their territory, but instead, they see every record. This not only complicates their workflows but also poses a significant security risk. If they make a single mistake—or worse, a malicious actor compromises one account – it could expose sensitive data.

Today I want to talk about Row-Level Security, a SQL Server feature designed to enforce granular access control directly at the database level. It’s a feature I don’t normally get to play with, because this type of problem is usually handled by the application rather than at the database tier. But when I told a client about this recently they got excited. It was exactly what they were after, so we moved ahead with a prototype and… I had so much fun.

Row-Level Security allows DBAs to define policies that restrict access to rows of data based on a user’s identity or role. For the financial firm, this means regional managers can only view the customer records relevant to their assigned territory, while executives might have full access. Implementing RLS ensures data remains secure, streamlined, and compliant with regulations.

Setting up RLS involves creating security policies and predicates. First, define a predicate function—typically an inline table-valued function—that determines the access logic. For example, you could create a function that checks if the user’s region matches the row’s region.

-- Inline table-valued predicate function: returns a row (access granted)
-- only when the row's Region value matches the caller's login name.
-- In this demo the login name stands in for the region; in a real system
-- you would more likely look the user's region up in a mapping table.
CREATE FUNCTION dbo.fnFilterByPrimaryRegion (@Region AS NVARCHAR(50))
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS accessGranted
       WHERE @Region = SYSTEM_USER;

Next, bind this function to your table using a security policy:

-- Bind the predicate to the table as a filter predicate: rows the caller
-- is not entitled to see are silently filtered out of their queries
CREATE SECURITY POLICY dbo.RegionSecurityPolicy
ADD FILTER PREDICATE dbo.fnFilterByPrimaryRegion(Region)
ON dbo.SalesData
WITH (STATE = ON);

Once enabled, RLS automatically filters queries based on the predicate. Even if someone tries to bypass it with a direct SQL query, they’ll only see rows they’re authorized to view.
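
An easy way to sanity-check the policy is to impersonate a login and see what comes back. In the sketch below, NorthManager is a hypothetical SQL login; because the demo predicate compares the Region column to SYSTEM_USER, the login name has to match a value stored in Region:

-- Hypothetical regional manager login and matching database user
CREATE LOGIN NorthManager WITH PASSWORD = 'Str0ng!Passw0rd';
CREATE USER NorthManager FOR LOGIN NorthManager;
GRANT SELECT ON dbo.SalesData TO NorthManager;

-- Impersonate the login: only rows where Region = 'NorthManager' come back
EXECUTE AS LOGIN = 'NorthManager';
SELECT Region, COUNT(*) AS visible_rows
FROM dbo.SalesData
GROUP BY Region;
REVERT;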

Returning to the financial firm, implementing RLS transforms how regional managers interact with the database. Each manager now has a focused view of their data, reducing confusion and minimizing the risk of exposing unrelated customer records. Executives can still access a complete dataset, and compliance audits show that sensitive data is protected by role-based policies.

Auditing and Monitoring SQL Server: Keeping Tabs on Database Access

As a SQL Server DBA, maintaining visibility into who accessed your data, what they did, and when they did it can be crucial for security and compliance. Auditing and monitoring are your database’s security cameras, logging and alerting you to potential risks and suspicious activity. Audit is a feature I’ve not used much in my career, but we have had some interesting setups amongst our client base, so it’s worth a reminder of what we can do with this tool.

SQL Server’s Audit feature is a powerful tool for tracking activity. By creating a server audit object, you define where logs will be stored—such as the Windows Security log or a local file. Then, you set up audit specifications to record specific events, like SELECT, INSERT, or UPDATE actions on sensitive tables. For example, if your organization needs to monitor access to financial records, you can create an audit that tracks who reads or modifies that data. This provides a reliable trail for compliance or forensic investigations.
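
As a sketch of what that looks like (the audit name, file path, database and table are all made up for illustration), tracking reads and writes against a financial table might be set up something like this:

USE master;
GO
-- Server audit object: defines where the audit records are written
CREATE SERVER AUDIT FinanceAccessAudit
TO FILE (FILEPATH = N'D:\SQLAudit\', MAXSIZE = 256 MB, MAX_ROLLOVER_FILES = 10)
WITH (ON_FAILURE = CONTINUE);
ALTER SERVER AUDIT FinanceAccessAudit WITH (STATE = ON);
GO
USE FinanceDB;
GO
-- Database audit specification: which actions on which objects to record
CREATE DATABASE AUDIT SPECIFICATION FinanceTableAccess
FOR SERVER AUDIT FinanceAccessAudit
ADD (SELECT, INSERT, UPDATE ON dbo.FinancialRecords BY public)
WITH (STATE = ON);
GO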

Real-time monitoring can be equally critical if you need to go beyond a retrospective look at what happened and try to get in front of a threat. While auditing captures a historical record, monitoring tools like Extended Events or SQL Server Profiler help you identify and respond to issues as they happen. For instance, monitoring can alert you to unusual login patterns or excessive data exports, giving you the chance to intervene before serious damage occurs.
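
For example, a lightweight Extended Events session can capture failed logins as they happen (the session and target file names here are just placeholders; error 18456 is SQL Server’s “login failed” error):

-- Capture failed logins (error 18456) to an event file as they occur
CREATE EVENT SESSION FailedLogins ON SERVER
ADD EVENT sqlserver.error_reported (
    ACTION (sqlserver.client_hostname, sqlserver.client_app_name)
    WHERE error_number = 18456
)
ADD TARGET package0.event_file (SET filename = N'FailedLogins');
GO
ALTER EVENT SESSION FailedLogins ON SERVER STATE = START;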

The tools we’ve been given for audit purposes are actually really powerful, but as with all things they don’t come for free. Efficient auditing and monitoring require a balance. Logging every action can overwhelm you with data, so focus on high-risk activities—like failed logins, changes to critical tables, or permission alterations. Lower-risk logs, such as simple SELECT queries, can be archived for periodic review if they are required at all. We have one client where auditing of server access is a critical requirement, and the Windows event logs these audits are written to are many multiples of the size of the database itself.

Clients will frequently start with ‘everything’ as a requirement, but when you drill into what ‘everything’ actually is, and how much it will cost, this needs to be scaled back. It’s usually not just about catching bad actors—it’s about demonstrating that you are taking appropriate steps to protect your data and detect unauthorised attempts to access or manipulate it.

AI is coming for your job… but… you’ll be fine.

This month’s TSQL Tuesday is hosted by Pinal Dave, who asks the question “Has AI helped you with your SQL Server Job?”. It’s a fascinating question, because the marketing on numerous products gets me very excited, but then I use them and… well… underwhelming.

First of all, the term AI is a buzzword, and as such it gets picked up by a whole bunch of people who don’t really know what it is, how it works or what its current limitations are, and chucked into any presentation they are doing. I’ve heard formulas in spreadsheets, threshold-based alerting and indexed searching described as AI. Quite honestly, if you say your product has AI in it you seem more likely to get your project greenlit or funded. Sometimes even the promise that you will use AI later is enough to get investors and decision makers excited, because what they hear is “AI = computer doing work = human not doing work = don’t have to pay wages anymore”. Even big companies who should know better fall into this pattern, such as Amazon, whose Just Walk Out technology, supposedly powered by leading-edge vision technology, actually turned out to be powered by hundreds of workers in India watching your shopping session on video.

So far I’ve seen the AI-oversell pattern repeat over and over, and in my own workplace I do have to say that the integration of AI has been time-consuming and painful. I was lucky enough to attend PASS Summit last year, and some of the demos of AI in the Azure Portal and in vendor products were quite impressive. I couldn’t wait to get back home and try some of them out. But the thing about demos is that they are easy to get behaving exactly how you want – especially if you control all the inputs and record the demo. I was able to duplicate some, but not all, of the demonstrations I saw, and those imperfectly. Some of the features were in preview and just not available yet.

How code is generated and used will be another interesting area to watch. The old meme that “for AI to take over programmers’ jobs, managers will need to come up with accurate requirements” is very true. Writing code is quite a small part of a programmer’s job. But when it comes to my code, I’m at a level that AI works well at. I will ask for a small, specific thing, and I will get back a specific answer that I can plug into a wider application. For the most part that means I don’t have to remember syntax in languages I might use monthly rather than daily. Add in IntelliSense (itself a type of AI) to make sure the code includes keywords that actually exist, and I can get things working reasonably quickly.

I was also particularly keen to get my hands on CoPilot for Office 365. It’s been good for drafting emails, but really only for getting from a blank page to ‘something’ and then shaping in the information I want to include. I’ve dropped the use of CoPilot for analyzing my outgoing emails though. My emails tend to answer questions, and I try to do this quickly and concisely. CoPilot would prefer I spend more time on cuddly language. Maybe I should, but then the emails don’t sound like me when I send them.

What I really wanted to be able to do with CoPilot was take a document I’d prepared for a client, such as a site review, and quickly turn it into a PowerPoint presentation. This can sometimes take up to a day manually, depending on the volume and depth of the source document. I had hoped to cut that down to an hour or two, but alas CoPilot has not been very good at picking out key pieces of information, and the formatting has also been quite disappointing. With experimentation I’ve found that presenting the source document in a specific format assists greatly with the conversion process and the accuracy of the information presented. But I want the PowerPoint to be a summary of the Word document – the tool shouldn’t dictate the format of the source document in order to do that.

And this leads me to the title of my post today. If AI can get these things right it can shave hours off manual work processes, and obviously that means the same work can be achieved by fewer staff. AI will cost people jobs, but how many people and how many jobs? There’s no doubt that AI will just keep getting smarter and will be capable of achieving more and more tasks, so the real question is how standardized your job is. If you are doing the same thing every day and your decisions are fairly simple, you can expect someone somewhere to start looking at whether AI (or simply well-written automation) can do that more cost-effectively. If you have a reasonably complex job, and particularly if your job involves complex discussions with other actual human beings, AI is not the threat you should be worried about. The thing that will take your job is another human being who has figured out how to use AI to cut out the time-consuming bits of the role and manage the human interactions well.

Microsoft did a very smart thing when they named their AI family of products CoPilot. The not-so-subtle implication is that AI is something that supports you in your job, rather than something that comes along and takes it from you. In the short term at least, I see that as being how things will play out. For me AI has been things like CoPilot and ChatGPT getting me off a blank page. Asking a question and seeing if the answer tracks. The search functionality that provides its sources is a great addition, because I can fact-check the source before I read too much into the actual answer.

The Documentation Problem with AI

Because I’m lazy, I’m going to say that this post is a response to Brent Ozar’s TSQL Tuesday invite. It wasn’t originally, and it’s a week late, but it kinda fits, so let’s roll with it and not ask too many questions.

Brent asks “What was the last ticket you closed?” Unlike most of the respondents, I do work in a system where tickets are a real thing and they do largely drive my day. Everything gets turned into tasks, these are tracked against budgets and deadlines, and the ticketing system RULES OUR LIVES!

But the last ticket I closed wasn’t actually a ticket, it was a task in Microsoft To Do, or Planner, or whatever it’s being called this month. I have a recurring task for several pieces of our documentation that prompts me to go and review what is written, and if necessary to change it.

Why would I do that? Well, put simply, I hate documentation, but I hate documentation slightly less than I hate ‘lack-of-documentation’. I’m primarily a technical person and I like to have a list of things or a process to follow for repeatable tasks. Ideally I like to automate them, but that’s something you do after you have a reliable process, not before. I also like to make sure that any process my colleagues have to work through has a nicely documented set of steps too. The problem is that documentation goes stale as your infrastructure, SaaS products, software or a bunch of other things change, and it is seldom scoped as part of any project – and if it is, it’s rare that it’s scoped sufficiently to actually do the documentation justice.

And why is hunting down outdated documentation particularly important right at this point in time? Because the world is changing, and we are unleashing AI on our systems, and it’s pulling recommendations from documentation we wrote five years ago and never updated or removed from the system. I want to work in an organization that is AI-ready, and unfortunately that means a certain amount of legwork to work through all the different boring bits of source material that AI is going to be using to create its responses.

There’s a huge amount of buzz about Artificial Intelligence at the moment, and rightly so. But there is not enough discussion about the groundwork that’s needed for good results. Put simply, the businesses that have prioritised structure, process and documentation will reap the benefits of that investment as they integrate AI into their workflows. The businesses that haven’t are the ones that will find AI giving them the strangest answers and have the lowest level of trust in what AI is doing for them. Let’s face it, there are a lot of these businesses out there, and they are the most likely to jump feet first into the AI world looking for a miracle that they simply haven’t earned yet.

So, closed? Temporarily. I’ve completed the key bits, but this is a ticket that is never really closed. If we want to get the best value from AI we need to make sure its source data is as accurate as possible, and that’s an ongoing process. Maybe with the right prompting it’s a task AI can eventually take over itself, but until then… garbage in, garbage out. Funny how working with data just keeps coming back to the same concepts, isn’t it?

Defending Against SQL Injection Attacks: Best Practices for SQL Server

It’s 2024 – and SQL Injection is still a thing. It’s hard to believe that people aren’t aware of the possibility of a SQL injection attack in the current environment, but here’s a thought – it’s going to get worse. Why do I say that? Because instead of having a sensible developer with 30 years of dealing with idiots attacking their code and trying to make it do things it’s not meant to, you are going to have code going into production written by AI responding to a prompt like “Hey ChatGPT – how do I write a form that people can send me comments about their order through?”. **SIGH** Let’s take a step back.

Imagine leaving your front door wide open, with a sign saying, “Welcome, come in!” That’s what a poorly secured SQL Server feels like to a malicious actor exploiting SQL injection vulnerabilities. SQL injection is a technique where attackers inject malicious SQL code into input fields, tricking the database into executing commands it shouldn’t. It’s one of the most common and devastating attack methods, and as a SQL Server DBA, it’s your job to ensure the door stays locked.

SQL injection doesn’t discriminate—it targets any system that relies on improperly sanitized user inputs. Picture a login form that asks for a username and password. If the application inserts these inputs directly into a SQL query without validation, an attacker could manipulate the input to bypass authentication, exfiltrate data, or even gain administrative control. For example, entering ' OR 1=1 -- as a password might turn the SQL statement into something always true, effectively allowing the attacker in without knowing any credentials.

As I mentioned above, we are going to get an increase in the number of inexperienced people putting code into the world. We simply can’t rely on the frontend being secure. It may be… and that would be great, but if our SQL Server is being breached because someone wrote a poor app, it’s as much the DBA’s fault for not closing the door as it is the developer’s for opening it.

Defending against SQL injection requires more than locking down your SQL Server—it’s about creating multiple layers of defense, much like securing a fortress. The first line of defense is always input validation and parameterized queries. Instead of allowing raw input to dictate how your queries are constructed, parameterized queries use placeholders for inputs, ensuring the database treats them as data rather than executable code. Think of it as giving someone a safe deposit box key that only works for one specific box, rather than granting them access to the entire vault. We can look for that as DBAs, and we can talk to our developers about the queries that they are writing.
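
Here is a minimal T-SQL sketch of the difference (dbo.Users and the column names are made up). The first version splices the input straight into the statement; the second hands it to sp_executesql as a typed parameter, so it can only ever be treated as data:

DECLARE @UserName NVARCHAR(50) = N'anything'' OR 1=1 --';

-- Vulnerable: the input is concatenated into the statement and becomes code
DECLARE @UnsafeSql NVARCHAR(MAX) =
    N'SELECT * FROM dbo.Users WHERE UserName = N''' + @UserName + N''';';
-- EXEC (@UnsafeSql);   -- the injected OR 1=1 would return every row

-- Parameterized: the same input is just a value to compare against
EXEC sys.sp_executesql
    N'SELECT * FROM dbo.Users WHERE UserName = @UserName;',
    N'@UserName NVARCHAR(50)',
    @UserName = @UserName;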

Moving on from parameterized queries, we can funnel data access through stored procedures, and now the code sits in the DBA realm. By encapsulating SQL code in a controlled environment, you reduce the risk of input tampering. However, don’t fall into the trap of thinking all stored procedures are inherently safe. If you dynamically construct SQL strings within a stored procedure, the risk of injection remains. You also want to know when stored procedures are failing. If you write one in such a way that it cannot fail so long as the input is valid, then when it does fail, what does that tell you?

Another crucial defense mechanism is the principle of least privilege. Every application user doesn’t need administrative access to the database. Instead, grant permissions based on necessity. If a user only needs to read data, don’t let them modify it. This limits the potential damage of a successful SQL injection attack. What’s more, SQL Server has a wonderful feature called roles that everybody uses like this: do you need read access – I’ll give you db_datareader; do you need write access – I’ll give you db_datawriter; oh, you need both – I’ll give you db_owner. It’s crazy. SQL Server access is so much more granular than that. If you are exposing your database to a web app – what access does that app actually need? Oh, it only writes to one table? So give it access to just that table. I know it’s more work and fiddlier – but it protects your data. So do it.
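
As a sketch of what that looks like (the login, user and table names are hypothetical): instead of dropping the web app into db_datareader and db_datawriter, grant it exactly what it uses:

-- Hypothetical login and database user for the comments form
CREATE LOGIN OrderCommentsApp WITH PASSWORD = 'Str0ng!Passw0rd';
CREATE USER OrderCommentsApp FOR LOGIN OrderCommentsApp;

-- The form only inserts comments and reads them back, so that is all it gets
GRANT SELECT, INSERT ON OBJECT::dbo.OrderComments TO OrderCommentsApp;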

Encryption also plays a role. While it won’t stop an injection attack, encrypting sensitive data—like passwords—adds another barrier. Even if attackers gain access, encrypted data is far less useful without the decryption key.

Finally, don’t underestimate the power of monitoring and auditing. Tools like SQL Server Audit can track login attempts and query execution, providing early warning signs of unusual activity. Regular penetration testing is also invaluable; it’s like hiring a locksmith to find weaknesses in your security setup before a burglar does.

SQL injection is a persistent threat, and it’s probably going to get worse rather than better, but it’s not an inevitable one. By treating your SQL Server like a fortress and layering your defenses, you can ensure that even the most determined attacker finds nothing but locked doors. After all, security isn’t just about keeping bad actors out; it’s about building trust with those who rely on you to safeguard their data. And in this case, trust is the most valuable currency of all.