
Maintaining your larger databases better Part 1

Reading Time: 4 minutes

Introduction

In this post I cover part one of a series about maintaining your larger SQL Server databases better. In reality, it’s based on things that I have seen over the years.

One reason I’ve decided to do this is because I have noticed that some teams try to use one solution for everything, with the same settings for all databases.

Now, this usually works for smaller databases. However, when it comes to larger databases you have to take more into consideration.

In fact, I’ve encountered various issues where people have attempted a one-size-fits-all solution. That can be fine for a lot of environments, as long as you use some dynamic logic to change settings where needed.

With this in mind, I thought I would address how to get the most out of maintaining your larger databases here.

However, I will try to avoid the more obvious items like using multiple filegroups. I also recommend that you test any of these recommendations in a test environment before using them in production.

In fact, this post is the first in the series and covers the more general items. If you want to see the other posts in this series as well, click on the links below.

Maintenance windows

One thing in particular I must stress is that maintenance windows are critical. Database users can get upset if they discover that you are doing maintenance work against the database during the working day.

That is especially true if they are doing something like a large import at the time and discover you are trying to do index maintenance as well.

My advice is to schedule the maintenance work for evenings and weekends. In addition, actively monitor it so that it does not run during the working hours of your business.
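For example, if you use SQL Server Agent you can attach a weekend-evening schedule to your maintenance job. This is just a sketch; the job name and start time below are placeholders, and the job itself is assumed to already exist.

    USE msdb;
    GO

    -- Attach a weekend-evening schedule to an existing maintenance job
    -- (the job name and start time are placeholders for your own setup)
    EXEC dbo.sp_add_jobschedule
        @job_name = N'IndexMaintenance - LargeDatabases',
        @name = N'Weekend evenings',
        @freq_type = 8,              -- weekly
        @freq_interval = 65,         -- 1 (Sunday) + 64 (Saturday)
        @freq_recurrence_factor = 1, -- every week
        @active_start_time = 220000; -- starts at 22:00:00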

Also, avoid doing maintenance when large imports or other ETL jobs are being done against the databases.

Of course, there is another option. You can also work with the team to improve the performance of their workloads.

CheckDB

Now, if you’re using older versions of SQL Server then you will probably want to look at an older post of mine, which discusses using CheckDB with databases that have filtered indexes and indexes that use computed columns.

In fact, this can make a big difference to CheckDB running on larger databases. You can read that post in detail here.
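As a rough illustration of the sort of thing that can help, some people split the work by running the faster physical-only checks more often and a full CheckDB less frequently. The database name below is a placeholder, and this is an example rather than the approach from the post above.

    -- Faster physical-only check, suitable for more frequent runs on larger databases
    DBCC CHECKDB (N'MyBigDatabase') WITH PHYSICAL_ONLY, NO_INFOMSGS;

    -- Full logical and physical check, scheduled less often
    DBCC CHECKDB (N'MyBigDatabase') WITH NO_INFOMSGS;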

Anti-Virus exclusions for CheckDB

Now hopefully you all know you should have anti-virus exclusions in place for database files. In fact, Microsoft have their own documentation on what should be excluded, which you can read in detail here.

However, there is one in particular you may need for larger databases which is not that well known. It is an exclusion for the snapshot files that CheckDB creates for those databases.

These snapshot files are usually in the format mdf_MSSQL_DBCCxx and ndf_MSSQL_DBCCxx. I thought xx was the version number of the database, however the Microsoft documentation suggests differently.

For example, Symantec anti-virus uses a question mark as the wildcard character in its exclusions list. Therefore, you might have to put in a file-type exclusion such as ‘.mdf_MSSQL_DBCC??’.

Test your backups

Now, I did say I’d avoid the more obvious items in this post. However, because of issues in the past I’m still going to mention testing restoring your backups here.

Never take it for granted that backups are being done just because the log says so. Otherwise, you might end up being caught out and having some explaining to do.

Of course, there is another good thing about restoring your backups frequently: everybody knows what to do to restore your larger databases in the event of an emergency.
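A minimal sketch of a restore test is below. The file paths, logical file names and database names are placeholders; run RESTORE FILELISTONLY against the backup first to get the real logical names.

    -- Quick sanity check that the backup file is readable
    -- (this alone does not prove the database restores cleanly)
    RESTORE VERIFYONLY
    FROM DISK = N'X:\Backups\MyBigDatabase.bak';

    -- Restore under a different name on a test server to prove it really works
    RESTORE DATABASE MyBigDatabase_Test
    FROM DISK = N'X:\Backups\MyBigDatabase.bak'
    WITH MOVE N'MyBigDatabase' TO N'X:\TestData\MyBigDatabase_Test.mdf',
         MOVE N'MyBigDatabase_log' TO N'X:\TestLogs\MyBigDatabase_Test_log.ldf',
         STATS = 10;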

Backup timing

If you are using a traditional backup method with your SQL databases, consider the timing of your backups.

For example, if you do your full backups before reindexing has completed, those changes will be carried in each differential backup until you do your next full backup, which can make the differentials much larger than expected.
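One way to see whether this is happening is to compare the sizes of your differential backups between full backups in msdb. This query is a sketch and the database name is a placeholder.

    -- Differential sizes that keep growing after index maintenance suggest
    -- the full backup ran before the reindexing finished
    SELECT database_name,
           type,                              -- D = full, I = differential
           backup_finish_date,
           backup_size / 1048576.0 AS backup_size_mb
    FROM msdb.dbo.backupset
    WHERE database_name = N'MyBigDatabase'
      AND type IN ('D', 'I')
    ORDER BY backup_finish_date DESC;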

Partitioning

One way to improve the performance of queries on larger tables is to implement partitioning. However, if you do look to use this, make sure you define your partition function and scheme correctly.

Otherwise, you could end up with poor performance, or with SQL Server scanning your entire table every time you go to create a new partition.
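For illustration, here is a minimal monthly partitioning sketch that keeps an empty partition on the right-hand side, so that splitting in the next month’s boundary is a metadata-only change rather than a scan of existing rows. The names, dates and single filegroup are placeholders; in practice you would often spread partitions across filegroups.

    -- Monthly partitions on a date column; RANGE RIGHT keeps each boundary
    -- value in the partition to its right
    CREATE PARTITION FUNCTION pf_OrdersByMonth (date)
    AS RANGE RIGHT FOR VALUES ('2024-01-01', '2024-02-01', '2024-03-01');

    CREATE PARTITION SCHEME ps_OrdersByMonth
    AS PARTITION pf_OrdersByMonth ALL TO ([PRIMARY]);

    -- Split in next month's boundary while the rightmost partition is still
    -- empty, so it stays a metadata change instead of moving data
    ALTER PARTITION SCHEME ps_OrdersByMonth NEXT USED [PRIMARY];
    ALTER PARTITION FUNCTION pf_OrdersByMonth() SPLIT RANGE ('2024-04-01');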

Statistics

I know sp_updatestats is still heavily used in places. However, the problem with this is that it only updates using the last used sample rate, which is usually still the default sample rate for SQL Server.

If you’re encountering performance issues caused by this, it’s worth thinking about writing your own job to update statistics based on the size of your tables instead. Especially since these days a lot of people deem statistics maintenance more important than index maintenance.
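As a sketch of the size-based idea, the table names below are placeholders: full scan the smaller tables, and use a fixed sample that you persist on the very large ones. PERSIST_SAMPLE_PERCENT is available from SQL Server 2016 SP1 CU4 onwards.

    -- Smaller table: a full scan is affordable and gives the best statistics
    UPDATE STATISTICS dbo.SmallTable WITH FULLSCAN;

    -- Very large table: sample a fixed percentage and persist it, so later
    -- automatic updates reuse this rate instead of the default sample
    UPDATE STATISTICS dbo.VeryLargeTable
    WITH SAMPLE 5 PERCENT, PERSIST_SAMPLE_PERCENT = ON;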

Monitoring

At the very least, make sure the critical instances within your business that contain larger databases are being monitored all the time.

In addition, if using SQL Server 2016 or above on-premises I recommend turning on Query Store. Of course, the same goes for larger databases in Azure.
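Turning it on is straightforward; the settings below are examples rather than recommendations, and the database name is a placeholder.

    -- Enable Query Store with explicit settings instead of the defaults
    ALTER DATABASE [MyBigDatabase]
    SET QUERY_STORE = ON
        (OPERATION_MODE = READ_WRITE,
         MAX_STORAGE_SIZE_MB = 1024,   -- give it room on a larger database
         QUERY_CAPTURE_MODE = AUTO);   -- skip capturing ad hoc noise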

It is very important to take this proactive approach, because troubleshooting performance issues retrospectively on larger databases takes longer.

Final word

I hope you enjoyed part one in the series about maintaining your larger databases better.

In reality, there’s a whole load of other things to take into consideration as well when dealing with larger databases.

Part two, which you can read here, follows this post and covers indexes. Of course, feel free to comment below.
