Maintaining your larger databases better Part 2 – Indexes

In this post I cover part 2 of a series about maintaining your larger databases better if you use SQL Server. It covers indexes.

In my last post I covered some general ways you can maintain your larger databases better. This post is the next in the series and covers indexes in more detail. So, here are the next items to keep in mind.

To see the other posts in the series, click on the links below.

Reindexing rowstore indexes

Now this is an interesting one. Generally speaking, most people just put an indexing solution in place with the defaults and leave it to it.

However, you have to put a bit more thought into this for larger databases. If you don't, your reindexing jobs can run over into the working day.

In fact, what a lot of people don’t realise is that most reindexing solutions that are put in place only scan the leaf level of indexes to check how fragmented they are. They do this for performance reasons.

Otherwise, it can take a long time to scan all the levels of a reasonably sized index, which means it would take a long time before any index maintenance actually starts.

This means that if you have a large table that is gigabytes in size and you leave your reindexing solution at the default values, it is always going to reorganize the index. That's because it scans the leaf-level pages of the index and finds the fragmentation is less than the percentage required to do a rebuild instead.

It might be good for a while, especially if you are updating your statistics well. But eventually it does mean your larger indexes can become fragmented and larger than they need to be.

Additional parameters

One good thing about some of the solutions out there, like Ola Hallengren's solution, is that you can add additional parameters. For example, you can set a lower fragmentation threshold for indexes that span many pages.

Another key point about solutions like Ola's is that you can set a time limit. Doing this stops your job from running for longer than you want it to.

One thing to remember if using Ola’s solution is that when you set a time limit it does not mean it will stop at that exact time. In fact, it will stop when it has finished with the index it is currently working on.
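As a rough sketch, a call to Ola Hallengren's IndexOptimize procedure with these kinds of additional parameters might look like the following. The threshold and time-limit values here are illustrative examples, not recommendations, so adjust them to suit your own databases.

```sql
-- Example IndexOptimize call: reorganize above 5% fragmentation,
-- rebuild above 30%, only touch indexes with at least 1000 pages,
-- and stop picking up new indexes after one hour (3600 seconds).
EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @FragmentationLow = NULL,
    @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationLevel1 = 5,
    @FragmentationLevel2 = 30,
    @PageCountLevel = 1000,
    @TimeLimit = 3600;
```

Note that, as mentioned above, the time limit stops new indexes being picked up after it elapses; the index currently being processed still runs to completion.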

You can find out more about Ola’s solution in detail here.

Online vs. offline rebuilds

Now, there is one thing I will add whilst we are talking about rowstore indexes within large databases. If you want your reindexing done as fast as possible you might want to consider doing offline index rebuilds instead of online ones for larger indexes.

Of course, this can have its own implications as far as concurrency is concerned. So, I recommend only doing this if the database is reasonably unused.

It works particularly well if there are no queries running against the database at the time. For example, a database that has just completed a large import and that nothing is currently querying.
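To sketch the difference, an offline rebuild of a single large index might look like this. The index and table names here are hypothetical placeholders; substitute your own.

```sql
-- Offline rebuild: typically faster than ONLINE = ON, but it takes
-- locks that block queries against the table while it runs.
ALTER INDEX IX_Sales_OrderDate ON dbo.Sales
REBUILD WITH (ONLINE = OFF, SORT_IN_TEMPDB = ON);
```

Switching `ONLINE = OFF` to `ONLINE = ON` allows concurrent access at the cost of a slower rebuild, which is why the offline option suits quiet or post-import databases.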

Reindexing Columnstore indexes

Another key point if you are going to use a popular reindexing solution is that most of them do not cater for Columnstore indexes. Luckily there are some alternatives out there that you can use instead.

The first one, which I have seen in use, is Greg Low's script. You can read more about that in detail here.

Another version is from the Microsoft CAT team. You can read about that one in detail here.

Of course, there are other posts out there with recommendations on how to maintain them as well. Therefore, the best thing I can advise is to do your homework for your particular Columnstore indexes.
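For a simple starting point, a basic Columnstore maintenance statement looks like the one below. The index and table names are hypothetical examples.

```sql
-- Reorganize a columnstore index: compresses open delta-store
-- row groups into compressed row groups (SQL Server 2016 onwards).
ALTER INDEX CCI_SalesArchive ON dbo.SalesArchive
REORGANIZE WITH (COMPRESS_ALL_ROW_GROUPS = ON);
```

The dedicated scripts mentioned above go further than this, for example by deciding when a rebuild is worthwhile based on the number of deleted rows in the row groups.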

Potentially unused indexes

Potentially unused indexes can be a space hog in your databases. In addition, they mean that tasks like updating statistics and running CheckDB take longer than they need to.

With this in mind, it's a good idea to keep an eye on the user values in the sys.dm_db_index_usage_stats DMV.

If a non-clustered index appears to be unused for a long amount of time, then I recommend scripting out the schema for it and then dropping it.
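A query along these lines can highlight non-clustered indexes in the current database with little or no recorded usage. This is a sketch, assuming you run it in the database you want to check; an index with no row in the DMV has not been used since the stats were last cleared.

```sql
-- List non-clustered indexes with their usage counts, least-used first.
-- NULL usage columns mean no recorded use since the last restart.
SELECT  OBJECT_NAME(i.object_id) AS table_name,
        i.name                   AS index_name,
        us.user_seeks, us.user_scans, us.user_lookups, us.user_updates
FROM    sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS us
        ON  us.object_id  = i.object_id
        AND us.index_id   = i.index_id
        AND us.database_id = DB_ID()
WHERE   i.type = 2  -- non-clustered only
  AND   OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
ORDER BY ISNULL(us.user_seeks, 0) + ISNULL(us.user_scans, 0)
       + ISNULL(us.user_lookups, 0);
```

A high user_updates count alongside zero seeks, scans, and lookups is the classic sign of an index that costs write overhead without helping reads.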

Be warned not to be too eager to do this within a few days of your last server restart, because the DMV is cleared on restart. It can cause queries to slow right down if it turns out the index was still being used.

Also, if you’re concerned about user_lookups incrementing in the sys.dm_db_index_usage_stats dmv, check to see if it increases after your task to update statistics.

Usually this just means you're updating the statistics for an unused index for some reason. It's something I have had to help people identify a couple of times now when they have asked using the #SQLHelp hashtag on Twitter.

Clustered Columnstore indexes

From SQL Server 2016 onwards there’s something else you can use to your benefit.

If a large clustered rowstore index is only ever being scanned, you should consider converting it to a clustered Columnstore index instead.

For example, if you query the sys.dm_db_index_usage_stats DMV and you notice the clustered index is only being scanned it can be a potential candidate.

Databases that contain archive data are usually good candidates for this as well. Furthermore, they can usually be converted to use the COLUMNSTORE_ARCHIVE compression level to compress them even further.
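A minimal sketch of the conversion is below. The index and table names are hypothetical; note that with DROP_EXISTING the name you specify must match the table's existing clustered index.

```sql
-- Convert an existing rowstore clustered index to a clustered
-- columnstore index with archive-level compression.
-- IDX_CL_SalesArchive must be the name of the current clustered index.
CREATE CLUSTERED COLUMNSTORE INDEX IDX_CL_SalesArchive
ON dbo.SalesArchive
WITH (DROP_EXISTING = ON, DATA_COMPRESSION = COLUMNSTORE_ARCHIVE);
```

COLUMNSTORE_ARCHIVE trades higher CPU cost on reads for a smaller footprint, which is usually the right trade for data that is rarely queried.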

You can see some examples on how to do this here.

Final word

I hope you enjoyed part two in the series about maintaining your larger databases better.

Of course, there’s a whole load of other things to take into consideration as well when dealing with larger databases. For instance, backup methods, filegroup locations, etc.

Feel free to comment on this post below.

Published in SQL Performance Tuning
