
Maintaining your larger databases better Part 3 – Recommended DMVs


Welcome to the third instalment of this series about maintaining your larger databases better. Part 3 is all about the recommended DMVs that come with SQL Server which I think help you maintain your larger databases better.

If you’ve yet to see the other parts of this series, you can follow the links below:

It’s worth noting that these are ones I have used and recommend. Ideally, if you are going to do anything complex with them you should test it in a non-production environment first. Now, let’s dive into them.

sys.dm_db_index_physical_stats

If you’ve just inherited a large database with lots of big tables, this DMV will prove to be very handy. If you have a set of tables that are large compared to the others, it’s worth using this DMV on the smallest of those tables first and then working your way up.

Why would you do this? Well, as I mentioned in a previous post, most reindexing solutions only do a limited scan of your indexes for performance reasons.

If you don’t believe that, check what commands are running whilst your index maintenance is starting on larger databases. You will notice at the start that it uses this DMV with the ‘Limited’ option as the last parameter.

This means that after a while it reports that the larger indexes have a low level of fragmentation. So, unless the reindexing solution has been customized to cater for this, the larger indexes end up being constantly reorganized.

You can verify this by selecting one of the larger tables and running the below SQL. Replace TABLENAME with the actual name of the table:

SELECT * FROM SYS.DM_DB_INDEX_PHYSICAL_STATS(DB_ID(), OBJECT_ID('TABLENAME'), NULL, NULL, 'DETAILED');

The index_level column of the returned results shows the various levels of the index, where index_level 0 is the leaf level of the index.

Now a common thing to happen is that this level is low in fragmentation, but the higher levels are shown as more fragmented. If the avg_fragmentation_in_percent value is high for some of these levels, and they are not the very highest index levels shown, then you probably want to rebuild the indexes concerned.

One thing you might want to test is rebuilding the larger indexes using page compression to reduce their size even further. In fact, some recommend this as good practice for databases in the cloud to improve IOPS.
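
If you want to experiment with that, a minimal sketch of such a rebuild is below. The index name IX_YourIndex and the dbo schema are placeholders, and the ONLINE option is only available in certain editions:

-- Rebuild a (placeholder) index with page compression.
-- sys.sp_estimate_data_compression_savings can estimate the space saving beforehand.
ALTER INDEX IX_YourIndex ON dbo.TABLENAME
REBUILD WITH (DATA_COMPRESSION = PAGE, ONLINE = ON);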

sys.partitions

There is a quick way to check if page compression is used. Query the sys.partitions DMV and join sys.indexes to it on the object_id and index_id columns, then look at the data_compression_desc column.
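
As a rough sketch, a query along these lines shows the compression setting for each index and partition of a table, with TABLENAME as a placeholder:

SELECT OBJECT_NAME(p.object_id) AS table_name,
       i.name AS index_name,
       p.partition_number,
       p.data_compression_desc
FROM sys.partitions AS p
INNER JOIN sys.indexes AS i
    ON i.object_id = p.object_id
    AND i.index_id = p.index_id
WHERE p.object_id = OBJECT_ID('TABLENAME');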

sys.dm_db_column_store_row_group_physical_stats

As you can probably guess, this DMV is the columnstore equivalent of the one above. Microsoft have their own guidelines for dealing with columnstore fragmentation which you can view here.
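
As an illustration, a query like the one below returns the sort of figures those guidelines work from, such as the number of deleted rows per rowgroup (again with TABLENAME as a placeholder):

SELECT OBJECT_NAME(object_id) AS table_name,
       row_group_id,
       state_desc,
       total_rows,
       deleted_rows,
       100.0 * deleted_rows / NULLIF(total_rows, 0) AS deleted_pct
FROM sys.dm_db_column_store_row_group_physical_stats
WHERE object_id = OBJECT_ID('TABLENAME');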

sys.dm_db_index_usage_stats

On the topic of columnstore indexes, this DMV is useful for identifying indexes that could be converted to columnstore indexes. Now, we know that tables used for data warehouse loads are good candidates for this. However, sometimes you want some certainty.

For instance, you can run the below SQL after the instance has been running for a reasonable amount of time (say a few weeks, including over a month end).

SELECT * FROM SYS.DM_DB_INDEX_USAGE_STATS WHERE DATABASE_ID = DB_ID() AND OBJECT_ID = OBJECT_ID('TABLENAME');

If you find that all the indexes for the table are constantly being scanned, then you can conclude that the table itself might be a good candidate for a clustered columnstore index.

As I said in a previous post, you should also monitor system_scans and see if they only increase after maintenance tasks. If so, the scans are probably being caused by statistics updates, and the index can be removed if nothing else is using it.
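
A rough sketch of those two checks combined, once more with TABLENAME as a placeholder, might look like this:

-- Indexes on the table with no user seeks or lookups, only scans.
-- If only system_scans and last_system_scan move over time, the index
-- is probably just being touched by statistics updates.
SELECT i.name AS index_name,
       s.user_seeks, s.user_scans, s.user_lookups,
       s.system_scans, s.last_system_scan
FROM sys.dm_db_index_usage_stats AS s
INNER JOIN sys.indexes AS i
    ON i.object_id = s.object_id
    AND i.index_id = s.index_id
WHERE s.database_id = DB_ID()
  AND s.object_id = OBJECT_ID('TABLENAME')
  AND s.user_seeks = 0
  AND s.user_lookups = 0;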

sys.dm_db_stats_properties

Like a lot of other DBAs, I wrote customized scripts for updating statistics in SQL Server 2000. At one stage I used to use the rowmodctr value in the sysindexes table to help decide if statistics required updating.

However, as SQL Server moved on, we were warned not to rely on this column in later versions.

Now you can use the above DMV to decide which statistics should be updated if you are writing your own customized scripts. There are some good samples on how you can use it here.
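
As a starting point for such a script, a sketch like the one below lists the statistics on a table together with their modification counters (TABLENAME is a placeholder):

SELECT st.name AS stats_name,
       sp.last_updated,
       sp.rows,
       sp.rows_sampled,
       sp.modification_counter
FROM sys.stats AS st
CROSS APPLY sys.dm_db_stats_properties(st.object_id, st.stats_id) AS sp
WHERE st.object_id = OBJECT_ID('TABLENAME')
ORDER BY sp.modification_counter DESC;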

sys.foreign_keys

If you have foreign keys within your large database you’re going to want to use sys.foreign_keys.

To guarantee the consistency of your data, you need to make sure the foreign keys are trusted. You will also miss out on potential performance gains for queries against larger tables when your foreign keys are not trusted.

This is because the optimizer can skip redundant checks and joins in certain situations when it knows a constraint is trusted.

This DMV gives you the opportunity to check that the foreign keys are trusted. To do so, make sure the is_not_trusted column is not set to 1.
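
For example, a quick sketch that lists every foreign key in the current database that is not currently trusted:

SELECT OBJECT_NAME(parent_object_id) AS table_name,
       name AS foreign_key_name,
       is_not_trusted,
       is_disabled
FROM sys.foreign_keys
WHERE is_not_trusted = 1;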

You can try resolving this by running the syntax below. Replace TABLENAME with the table name and FK_NAME with the name of the foreign key:

ALTER TABLE TABLENAME WITH CHECK CHECK CONSTRAINT FK_NAME;

If that does not work, you will have to check that the data in the tables is consistent, resolve any inconsistencies, and then try the above again.

All the DMVs relating to missing indexes

Last but by no means least, the DMVs relating to missing indexes. So far I have listed DMVs individually, but these ones are at their most powerful when used together.
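
I will keep my preferred script for the next post, but as a rough sketch of how these DMVs fit together, you join the details, groups and group stats views along these lines:

SELECT d.[statement] AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       gs.user_seeks,
       gs.avg_total_user_cost,
       gs.avg_user_impact
FROM sys.dm_db_missing_index_details AS d
INNER JOIN sys.dm_db_missing_index_groups AS g
    ON g.index_handle = d.index_handle
INNER JOIN sys.dm_db_missing_index_group_stats AS gs
    ON gs.group_handle = g.index_group_handle
WHERE d.database_id = DB_ID()
ORDER BY gs.avg_user_impact * gs.avg_total_user_cost * gs.user_seeks DESC;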

Final word

There are multiple versions of scripts online you can use to see which indexes are potentially missing. My favourite script has helped me make dramatic improvements on multiple systems over the years.

Which version of the potentially missing indexes script do I recommend? Well, to leave this post with a cliffhanger, you will have to wait until my next post to find that out.
