In my last post I covered some ways you can maintain your larger databases better. This post is the next in the series and covers indexes in more detail. So, here are the next items to keep in mind.
To see the other posts in the series click on the below links.
- Maintaining your larger databases better Part 1
- Part 3 – Recommended DMVs
- Part 4 – Recommended FREE tools for the job
Reindexing rowstore indexes
Now this is an interesting one. Generally speaking, most people just put an indexing solution in place with the defaults and leave it to it.
However, you have to put a bit more thought into this for larger databases. In reality, if you don't, you could see your reindexing jobs running into the working day.
In fact, what a lot of people don’t realise is that most reindexing solutions that are put in place only scan the leaf level of indexes for performance reasons.
For example, if you have a large table that is gigabytes in size and you leave your reindexing solution at the default values, it is always going to reorganize the index. It might be good for a while, but eventually it does mean your larger indexes can become fragmented and larger than they need to be.
The good thing about some of the solutions out there, like Ola Hallengren's, is that you can add additional parameters. For example, you can set a lower fragmentation threshold for tables that have many pages.
Another key point about solutions like Ola's is that you can set a time limit. Doing this reduces the risk of the job running for longer than you want it to.
What must be remembered is that when you set a time limit in Ola's solution, the job does not stop straight away. Instead, it stops once it has finished with the index it is currently working on.
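As a sketch of how those settings might look, a call to Ola's IndexOptimize procedure could be parameterised like this. The thresholds and time limit here are illustrative values only, not recommendations:

```sql
-- Illustrative values only: tune the thresholds to your own environment.
EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @FragmentationMedium = 'INDEX_REORGANIZE',  -- between the two levels: reorganize
    @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationLevel1 = 5,                   -- lower fragmentation threshold (%)
    @FragmentationLevel2 = 30,                  -- upper fragmentation threshold (%)
    @PageCountLevel = 1000,                     -- only touch indexes with 1000+ pages
    @TimeLimit = 3600;                          -- stop starting new work after an hour
```

As mentioned above, @TimeLimit stops new indexes being picked up after the limit; it does not kill the index currently being processed.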
You can find out more about Ola’s solution in detail here.
Now, there is one thing I will add whilst we are talking about rowstore indexes for large databases. If you want your reindexing done as fast as possible you might want to consider doing offline index rebuilds instead of online ones for larger indexes.
Note that this works particularly well if there are no queries running against the database at the time. For example, databases that have just had a large import and that nothing is currently querying against.
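For example, an offline rebuild of a large index might look like the below. The table and index names are hypothetical:

```sql
-- Hypothetical table and index names. ONLINE = OFF blocks queries against the
-- table for the duration of the rebuild, so only use it when nothing is querying it.
ALTER INDEX IX_SalesArchive_OrderDate ON dbo.SalesArchive
REBUILD WITH (ONLINE = OFF, SORT_IN_TEMPDB = ON);
```

SORT_IN_TEMPDB moves the sort work into tempdb, which can speed things up further if tempdb sits on faster storage.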
Reindexing Columnstore indexes
Another key point if you are going to use a popular reindexing solution is that most of them do not cater for Columnstore indexes. Luckily there are some alternatives out there that you can use instead.
Of course, there are other posts out there with recommendations on how to do it as well. Therefore, the best thing I can advise is to do your homework for your particular columnstore indexes.
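If you do end up rolling your own, one built-in starting point is a simple REORGANIZE, which compresses any open delta rowgroups into columnstore format. The index and table names below are hypothetical:

```sql
-- Hypothetical index and table names. COMPRESS_ALL_ROW_GROUPS forces open
-- delta rowgroups to be compressed into columnstore segments.
ALTER INDEX CCI_FactSales ON dbo.FactSales
REORGANIZE WITH (COMPRESS_ALL_ROW_GROUPS = ON);
```

This is not a full maintenance strategy on its own, so do still test against your own columnstore workload.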
Potentially unused indexes
Potentially unused indexes can be a space hog in your databases. In addition, they mean that tasks like updating statistics and running CHECKDB take longer than they need to.
With this in mind, it’s a good idea to keep an eye on the user values in the sys.dm_db_index_usage_stats dmv.
If a non-clustered index appears to be unused for a long amount of time, then I recommend scripting out the schema for it and then dropping it.
Be warned not to be too eager to do this within a few days of your last server restart, as it can cause queries to slow right down if it turns out the index was still used.
Also, if you’re concerned about user_lookups incrementing, check whether the count increases after your update statistics task. Usually this just means you’re updating the statistics for an unused index for some reason.
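A starting-point query for spotting candidates might look like the sketch below. Bear in mind that an index with no row in the DMV at all has also not been used since the last restart, hence the LEFT JOIN:

```sql
-- Nonclustered indexes in the current database with no user reads recorded.
-- Remember: sys.dm_db_index_usage_stats is cleared on every instance restart.
SELECT OBJECT_NAME(i.object_id) AS table_name,
       i.name                   AS index_name,
       s.user_updates
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS s
  ON s.object_id   = i.object_id
 AND s.index_id    = i.index_id
 AND s.database_id = DB_ID()
WHERE i.type_desc = 'NONCLUSTERED'
  AND OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
  AND ISNULL(s.user_seeks, 0) + ISNULL(s.user_scans, 0)
    + ISNULL(s.user_lookups, 0) = 0
ORDER BY s.user_updates DESC;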
Clustered columnstore indexes
From SQL Server 2016 onwards there’s something else you can use to your benefit.
For example, you can see how often an index has had an index seek or an index scan against it by querying the sys.dm_db_index_usage_stats DMV.
If a large clustered rowstore index is only ever being scanned, you should consider converting it to a clustered columnstore index instead.
Databases that contain archive data are usually good candidates for this as well. Furthermore, they can usually be converted to use the COLUMNSTORE_ARCHIVE compression level to compress them even further.
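The conversion can be done in a single statement using DROP_EXISTING. The names below are hypothetical, and note that the new index name must match the existing clustered index; if the clustered index backs a primary key constraint, you have to drop the constraint first instead:

```sql
-- Hypothetical names. Replaces the existing rowstore clustered index in place;
-- COLUMNSTORE_ARCHIVE trades extra CPU for maximum compression.
CREATE CLUSTERED COLUMNSTORE INDEX CIX_SalesArchive
ON dbo.SalesArchive
WITH (DROP_EXISTING = ON, DATA_COMPRESSION = COLUMNSTORE_ARCHIVE);
```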
There’s a whole load of other things to take into consideration as well when dealing with larger databases. For instance, backup methods, filegroup locations, etc.
Maybe in the future I will expand on this thread more, we will see. Feel free to comment on this post below.