
Maintaining your larger databases better Part 2 – Indexes


In my last post I covered some ways you can maintain your larger databases better. This post is the next in the series and covers indexes in more detail. So, here are the next items to keep in mind.

To see the other posts in the series, click on the links below.

Reindexing rowstore indexes

Now this is an interesting one. Most people just put an indexing solution in place with the defaults and leave it running. However, you have to put a bit more thought into this for larger databases. If you don't, you could see your reindexing jobs running into the working day.

What a lot of people don't realise is that most reindexing solutions check fragmentation using the fast LIMITED scan mode, for performance reasons, and stick to the default thresholds. If you have a large table that is gigabytes in size and you leave your reindexing solution at the default values, it will tend to reorganize the index every time rather than rebuild it. That might be fine for a while, but eventually it does mean your larger indexes can become fragmented and larger than they need to be.
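
If you want to see the sort of numbers these solutions are working from, you can query the DMV yourself. Here's a minimal sketch; the 1,000 page filter is just an example threshold I've picked.

-- Check fragmentation and size for every index in the current database.
-- 'LIMITED' is the fast scan mode; swap in 'SAMPLED' or 'DETAILED' for a
-- more thorough (and much slower) look at a specific index.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.index_type_desc,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id
   AND i.index_id = ips.index_id
WHERE ips.page_count > 1000 -- ignore small indexes (example threshold)
ORDER BY ips.page_count DESC;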

The good thing about some of the solutions out there, like Ola Hallengren's, is that you can add additional parameters. For example, you can set lower fragmentation thresholds for indexes that have a large number of pages.

Another good thing about solutions like Ola's is that you can set a time limit. Doing this stops the job from running for longer than you want it to. Note that in Ola's solution, setting a time limit does not mean it will stop straight away; it will stop once it has finished with the index it is currently working on. You can find Ola's solution here.
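
To give a rough idea of the sort of tuning I mean, an IndexOptimize call with lower thresholds and a time limit might look something like the sketch below. The values are only examples I've picked for illustration, not recommendations, so check Ola's documentation before using them.

-- Example IndexOptimize call from Ola Hallengren's maintenance solution.
-- All of the values below are illustrative only.
EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @FragmentationLevel1 = 5,    -- low/medium boundary (%)
    @FragmentationLevel2 = 20,   -- medium/high boundary (%), lower than the default of 30
    @PageCountLevel = 5000,      -- only touch indexes with at least this many pages
    @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @TimeLimit = 14400,          -- stop starting new indexes after 4 hours (in seconds)
    @LogToTable = 'Y';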

Online rebuilds

There is one thing I will add whilst we are talking about rowstore indexes for large databases. If you want your reindexing done as fast as possible, you might want to consider not doing online index rebuilds for larger indexes. Note that this works best if there are no queries running against the database at the time.

A good example of this is a database that has just had a large import and that nothing is currently querying.
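
As a rough sketch of what that looks like (the table and index names are placeholders I've made up), an offline rebuild is just:

-- Offline rebuild of a large index straight after a bulk load.
-- dbo.SalesHistory and IX_SalesHistory_OrderDate are placeholder names.
ALTER INDEX IX_SalesHistory_OrderDate
ON dbo.SalesHistory
REBUILD WITH (ONLINE = OFF, SORT_IN_TEMPDB = ON, MAXDOP = 0);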

Reindexing Columnstore indexes

Something you need to realise if you are going to use a popular reindexing solution is that most of them do not cater for Columnstore indexes. Luckily there are some alternatives out there you can use.

The first one, which I have seen in use, is Greg Low's script, which can be found here. Another is from the Microsoft CAT team and can be found here. I know there are other posts out there with recommendations on how to do it as well. The best advice I can give is to do your homework for your particular columnstore indexes.
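
Whichever script you pick, it helps to understand the sort of thing they look at. Here's a hedged sketch of a check you can run yourself against the row group DMV; the 10% figure is just an example threshold I've chosen, and the index and table names are placeholders.

-- Find columnstore row groups with lots of deleted rows, or row groups
-- still sitting open in the delta store.
SELECT OBJECT_NAME(rg.object_id) AS table_name,
       i.name AS index_name,
       rg.partition_number,
       rg.state_desc,
       rg.total_rows,
       rg.deleted_rows,
       100.0 * rg.deleted_rows / NULLIF(rg.total_rows, 0) AS deleted_pct
FROM sys.dm_db_column_store_row_group_physical_stats AS rg
JOIN sys.indexes AS i
    ON i.object_id = rg.object_id
   AND i.index_id = rg.index_id
WHERE rg.state_desc = 'OPEN'
   OR 100.0 * rg.deleted_rows / NULLIF(rg.total_rows, 0) > 10;

-- A reorganize compresses open row groups and cleans up deleted rows
-- without taking the index offline.
ALTER INDEX CCI_SalesHistory ON dbo.SalesHistory
    REORGANIZE WITH (COMPRESS_ALL_ROW_GROUPS = ON);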

Potentially unused indexes

Potentially unused indexes can be a space hogger in your databases. They also mean that tasks like updating statistics and running DBCC CHECKDB take longer than they need to. It's a good idea to keep an eye on the user_seeks, user_scans, user_lookups and user_updates values in the sys.dm_db_index_usage_stats DMV.

If a non-clustered index appears to be unused for a long period of time, then I recommend scripting out its definition and then dropping it.

Be warned not to be too eager to do this within a few days of your last server restart. The usage stats in this DMV are reset on restart, so you may not have much history to go on, and queries can slow right down if it turns out the index was still being used.

Also, if you’re concerned about user_lookups incrementing, check to see if it ties in with your update statistics task. Usually this just means you’re updating the statistics for an unused index for some reason.

Clustered columnstore indexes

From SQL Server 2016 onwards there's something else you can use to your benefit, as clustered columnstore indexes became much more capable. You can see how often an index has had an index seek or an index scan against it by querying the same sys.dm_db_index_usage_stats DMV.

If a large clustered rowstore index is only ever being scanned, you should consider converting it to a clustered columnstore index.

Databases that contain archive data are usually good candidates for this as well. Furthermore, those indexes can usually be rebuilt with the COLUMNSTORE_ARCHIVE compression level to compress them even further.
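
As a sketch of the conversion itself (dbo.SalesArchive and CIX_SalesArchive are made-up names, and CIX_SalesArchive stands in for the name of your existing clustered index), it can look like this:

-- Swap the existing rowstore clustered index for a clustered columnstore index.
-- If the clustered index backs a primary key, the constraint needs dealing
-- with first (for example by recreating it as a nonclustered index).
CREATE CLUSTERED COLUMNSTORE INDEX CIX_SalesArchive
ON dbo.SalesArchive
WITH (DROP_EXISTING = ON);

-- For cold archive data, squeeze it down further with archive compression.
ALTER INDEX CIX_SalesArchive
ON dbo.SalesArchive
REBUILD PARTITION = ALL
WITH (DATA_COMPRESSION = COLUMNSTORE_ARCHIVE);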

Final word

There’s a whole load of other things to take into consideration as well when dealing with larger databases. For instance backup methods, filegroup locations, etc. Maybe in the future I will expand on this thread more, we will see. Feel free to comment on this post below.
