Welcome to the third instalment of this series about maintaining your larger databases better. Part 3 is all about recommended DMVs that come with SQL Server, which can help with exactly that.
If you’ve yet to read the other parts of this series, then you can follow the links below:
- Maintaining your larger databases better Part 1
- Part 2 – Indexes
- Part 4 – Recommended FREE tools for the job
In truth, these are DMVs that I have used and recommend. However, if you are going to do anything complex with them, I do recommend you test it in a non-production environment first. Now, let’s dive into them.
If you’ve just inherited a large database with lots of big tables, then the sys.dm_db_index_physical_stats DMV can be very useful.
In addition, if you have a set of tables that are large compared to the others, then it’s worth using this DMV on one of the smaller tables first. Once you have an idea of how long the query takes to run on a smaller table, you can then work your way up to the larger tables.
Why would you do this? Because, as I mentioned in a previous post, most reindexing solutions only do a limited scan of your indexes for performance reasons. You can read that post in detail here.
For instance, you can check what commands are running on SQL Server whilst your index maintenance is starting up on a reasonably large database.
After you get the results back, you will notice that it uses this DMV with the 'LIMITED' option as the last parameter.
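For example, a LIMITED scan issued by a typical reindexing solution looks something like the below. TABLENAME is a placeholder for your table name:

```sql
-- LIMITED is the fastest mode: it does not read the leaf pages themselves,
-- so it reports less detail than SAMPLED or DETAILED
SELECT *
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('TABLENAME'), NULL, NULL, 'LIMITED');
```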
Now, using the 'LIMITED' option can have some interesting consequences for your indexes. Something that can happen after a while is that it reports that the larger indexes have a low level of fragmentation.
So, unless the reindexing solution has been customized to cater for this, the larger indexes can end up being constantly reorganized.
You can verify this by selecting one of the larger tables and running the below SQL, replacing TABLENAME with the name of the table:
SELECT * FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('TABLENAME'), NULL, NULL, 'DETAILED');
It’s important to realize that the index_level column of the returned results shows the various levels of the index. Another key point is that the row with index_level 0 is the leaf level of the index.
Now, a common thing to happen here is that this leaf level is low in fragmentation, while the higher levels are shown as more fragmented.
If the avg_fragmentation_in_percent value is high for some of these rows, and they are not the highest index levels shown, then you should probably rebuild the indexes concerned.
In addition, another thing you might want to test is rebuilding the larger indexes using page compression, which can reduce their size even further.
In fact, some recommend this as good practice for databases in the cloud as it can improve IOPS.
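As a sketch, rebuilding an index with page compression looks like the below. IX_IndexName and TABLENAME are hypothetical names, and note that ONLINE = ON requires an edition that supports online index rebuilds:

```sql
-- Rebuild a hypothetical index with page compression enabled
ALTER INDEX IX_IndexName ON dbo.TABLENAME
REBUILD WITH (DATA_COMPRESSION = PAGE, ONLINE = ON);
```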
In fact, there is a quick way to check whether page compression is used, as I mentioned above. You can query the sys.partitions DMV and join the sys.indexes DMV to it using the object_id and index_id columns.
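A minimal sketch of that check might look like the below, with TABLENAME again as a placeholder:

```sql
-- Show the compression setting for each index partition on a table
SELECT i.name AS index_name,
       p.partition_number,
       p.data_compression_desc
FROM sys.partitions AS p
INNER JOIN sys.indexes AS i
    ON p.object_id = i.object_id
   AND p.index_id = i.index_id
WHERE p.object_id = OBJECT_ID('TABLENAME');
```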
As you can probably guess, the sys.dm_db_column_store_row_group_physical_stats DMV is the columnstore equivalent of the one above. Microsoft have their own guidelines for dealing with columnstore fragmentation, which you can view in detail here.
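As a sketch, a fragmentation check with this DMV can look like the below; Microsoft’s guidance treats the ratio of deleted_rows to total_rows per row group as the fragmentation measure:

```sql
-- Show row group state and deleted-row counts for columnstore indexes
SELECT OBJECT_NAME(object_id) AS table_name,
       row_group_id,
       state_desc,
       total_rows,
       deleted_rows
FROM sys.dm_db_column_store_row_group_physical_stats;
```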
On the topic of columnstore indexes, the sys.dm_db_index_usage_stats DMV can be useful for identifying indexes that could be converted to columnstore indexes. Now, in general, tables that are used for data warehouse loads are good candidates for this. However, sometimes you want some certainty.
For instance, you can run the below SQL after the instance has been running for a reasonable amount of time (say a few weeks, including over a month end).
SELECT * FROM sys.dm_db_index_usage_stats WHERE database_id = DB_ID() AND object_id = OBJECT_ID('TABLENAME');
If you find that all the indexes for the table are being constantly scanned, then you can conclude that the table itself might be a good candidate for conversion to a clustered columnstore index.
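If you decide to go ahead, the conversion can be sketched as below. The table name is a placeholder, and the columnstore index here reuses the name of the existing clustered index so that DROP_EXISTING = ON replaces it in place:

```sql
-- Replace the existing clustered index (here assumed to be named
-- CI_TABLENAME) with a clustered columnstore index of the same name
CREATE CLUSTERED COLUMNSTORE INDEX CI_TABLENAME
ON dbo.TABLENAME
WITH (DROP_EXISTING = ON);
```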
As I said in a previous post, you can monitor system_scans and see if they only increase after maintenance tasks. If so, it means the scans are probably caused by statistics updates, and the index can be removed if nothing else is using it.
Like a lot of other DBAs, I once wrote customized scripts for updating statistics in SQL Server 2000. In the past, I used the rowmodctr value in the sysindexes table to help decide if statistics required updating.
However, as SQL Server versions changed, we were warned not to use this column in later versions.
Now you can use the sys.dm_db_stats_properties DMV to decide which statistics should be updated if you are writing your own customized scripts. For example, there are some good samples, which you can read in detail here.
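The common pattern looks something like the below sketch, using modification_counter in place of the old rowmodctr value (TABLENAME is a placeholder):

```sql
-- Show how many modifications each statistic on a table has seen
SELECT s.name AS stats_name,
       sp.last_updated,
       sp.rows,
       sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID('TABLENAME');
```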
If you have foreign keys within your large database then you can use sys.foreign_keys to check them.
For instance, you can use this DMV to check that your foreign keys are trustworthy, which is also a way of checking the consistency of your data. Because when your foreign keys are not trustworthy, you can miss out on potential performance gains for queries against larger database tables.
This is due to the fact that the query optimizer can rely on a trusted foreign key to skip unnecessary checks in certain situations.
You can check whether foreign keys are trustworthy by checking that the is_not_trusted column is not set to 1.
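A quick check for untrusted foreign keys in the current database can be sketched like this:

```sql
-- List foreign keys that SQL Server does not currently trust
SELECT OBJECT_NAME(parent_object_id) AS table_name,
       name AS foreign_key_name
FROM sys.foreign_keys
WHERE is_not_trusted = 1;
```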
Afterwards, you can try resolving this by running the below syntax. Replace TABLENAME with the table name and FK_NAME with the name of the foreign key:
ALTER TABLE TABLENAME WITH CHECK CHECK CONSTRAINT FK_NAME
Now, if that does not work, you will have to check that the data in the tables is consistent. Otherwise, you will have to resolve the inconsistencies and then try the above again.
All the DMVs relating to missing indexes
Last but by no means least are the DMVs relating to missing indexes.
So far, I have listed the DMVs above individually. However, in this section I have decided to mention all the DMVs relating to missing indexes as a group.
Because when you use these DMVs together, they can be very powerful in helping to identify missing indexes.
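To give a rough idea, the three DMVs are typically joined like the below sketch. The column choices and ordering here are just one common pattern, not my recommended script:

```sql
-- Join the missing-index DMVs to rank suggestions by estimated impact
SELECT d.statement AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       gs.user_seeks,
       gs.avg_user_impact
FROM sys.dm_db_missing_index_details AS d
INNER JOIN sys.dm_db_missing_index_groups AS g
    ON d.index_handle = g.index_handle
INNER JOIN sys.dm_db_missing_index_group_stats AS gs
    ON g.index_group_handle = gs.group_handle
ORDER BY gs.user_seeks * gs.avg_user_impact DESC;
```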
There are multiple versions of scripts online that you can use to see which indexes are potentially missing. My favourite script has helped me make dramatic improvements on multiple systems over the years.
Which script that identifies potentially missing indexes do I recommend? Well, to leave this post with a cliffhanger, you can read my next post to find that out.