Paul van Oordt SQL Services
Freelance SQL Server specialist and troubleshooter, Utrecht
+31 627400408 - WhatsApp - info@vanoordt.nl
For a newer and more complete survey, see what to know about SQL Server performance.
Below you will find a large number of things to monitor and tips that you can use to improve the performance of Microsoft SQL Server. If you want to gain thorough knowledge and successfully deal with performance problems, have a look at the possibilities for an in-house SQL Server crash course.
In my experience, the biggest improvements in performance can usually be obtained by identifying and carefully optimizing a few problem queries. This is where improvements of a factor of 10, 100 or even more can be obtained, in CPU usage and I/O as well as in response times. Such gains occurred in at least half of the contracts I have done. Obviously the whole system benefits, not only the queries that were tuned. It should also be clear that these sorts of problems cannot be solved just by adding hardware.
It is not only important to know what can cause performance problems, it is also important to know what doesn't (or need not). For instance, an adequate backup strategy is not a performance killer. Quite often I find production databases with the simple recovery model, apparently for better performance. This can lead to hours of lost work in case of an emergency. Therefore: do not sacrifice essential functionality for performance, especially not when the system hasn't been tuned by a good specialist.
To what extent are the performance problems caused by SQL Server, and what role do other processes play?
Can the performance problems be reproduced?
Is a server side trace, extended events session and/or performance log configured to record the problems?
Is there a benchmark for the performance of the system?
Is there memory pressure? Is the page life expectancy acceptable?
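The buffer manager counters in sys.dm_os_performance_counters give a quick impression; a minimal sketch (the object_name may carry an instance prefix):
  SELECT object_name, counter_name, cntr_value
  FROM sys.dm_os_performance_counters
  WHERE counter_name = 'Page life expectancy'
    AND object_name LIKE '%Buffer Manager%';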
Is there I/O contention? What files show the most severe waits? Is this a real problem, or do these waits occur mainly during, for instance, a checkdb or reindex operation?
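Cumulative per-file stall times can be read from sys.dm_io_virtual_file_stats; a sketch, figures are totals since the last restart:
  SELECT DB_NAME(vfs.database_id) AS database_name, mf.physical_name,
         vfs.num_of_reads, vfs.io_stall_read_ms,
         vfs.num_of_writes, vfs.io_stall_write_ms
  FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
  JOIN sys.master_files AS mf
    ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
  ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;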
Is the processor load too heavy? Is the usage of the processors by the SQL Server process consistently high?
Is the tempdb load too heavy, and if so, what is the nature of the load and is the tempdb well equipped to deal with it? Should multiple files be used? On separate spindles? Or on SSD?
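If allocation contention is the issue, extra equally sized tempdb data files can help; a sketch, where the path and sizes are only placeholders:
  ALTER DATABASE tempdb
  ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb\tempdev2.ndf',
            SIZE = 4GB, FILEGROWTH = 512MB);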
Is SQL Server memory usage limited to prevent other processes from being memory starved?
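This is the 'max server memory' instance setting; for example, with an illustrative value of 16 GB:
  EXEC sp_configure 'show advanced options', 1;  RECONFIGURE;
  EXEC sp_configure 'max server memory (MB)', 16384;  RECONFIGURE;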
Is there too much parallel processing of relatively simple queries, for instance in an OLTP system? Has the maxdop server setting been adjusted to hyper-threading, NUMA architecture and workload?
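Both settings are instance-wide; the values below are only an illustration and depend on hardware and workload:
  EXEC sp_configure 'show advanced options', 1;  RECONFIGURE;
  EXEC sp_configure 'max degree of parallelism', 4;  RECONFIGURE;
  EXEC sp_configure 'cost threshold for parallelism', 50;  RECONFIGURE;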
Is there significant waiting for memory grants?
Is there significant waiting for the availability of a processor?
Have the database files been placed optimally on adequate physical disks?
Is the growth of database files during normal operation of the system limited to a minimum? Are auto grow settings optimal? Is instant file initialization configured? Is auto shrink of database files disabled?
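For example, with hypothetical database and file names, a fixed growth increment and auto shrink disabled:
  ALTER DATABASE Sales MODIFY FILE (NAME = Sales_data, FILEGROWTH = 512MB);
  ALTER DATABASE Sales SET AUTO_SHRINK OFF;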
Is the SAN read-write caching adequately configured?
Is auto create and auto update of statistics enabled? Are statistics adequately updated?
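A sketch with a hypothetical database and table; the FULLSCAN update is an option for critical tables that the automatic updates do not keep fresh enough:
  ALTER DATABASE Sales SET AUTO_CREATE_STATISTICS ON;
  ALTER DATABASE Sales SET AUTO_UPDATE_STATISTICS ON;
  UPDATE STATISTICS dbo.Orders WITH FULLSCAN;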
Could data compression be used to improve memory usage and reduce I/O?
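The expected savings can be estimated before rebuilding; a sketch with a hypothetical table:
  EXEC sp_estimate_data_compression_savings 'dbo', 'Orders', NULL, NULL, 'PAGE';
  ALTER INDEX ALL ON dbo.Orders REBUILD WITH (DATA_COMPRESSION = PAGE);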
Do any unneeded database snapshots exist?
Are the main queries adequately supported by indexes? This is particularly important for updates and deletes, because in addition to an inefficient query plan there may be large numbers of locks involved.
Are there significant missing indexes?
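The missing index DMVs give a first indication (treat the suggestions critically, as they ignore existing indexes that could be extended); a sketch:
  SELECT TOP (25)
         mid.statement AS table_name,
         mid.equality_columns, mid.inequality_columns, mid.included_columns,
         migs.user_seeks, migs.avg_total_user_cost, migs.avg_user_impact
  FROM sys.dm_db_missing_index_details AS mid
  JOIN sys.dm_db_missing_index_groups AS mig
    ON mig.index_handle = mid.index_handle
  JOIN sys.dm_db_missing_index_group_stats AS migs
    ON migs.group_handle = mig.index_group_handle
  ORDER BY migs.user_seeks * migs.avg_total_user_cost * migs.avg_user_impact DESC;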
Are there double or otherwise redundant indexes?
Are there unused indexes, or indexes that add little to other existing indexes?
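sys.dm_db_index_usage_stats shows reads versus writes per index since the last restart; a sketch for the current database:
  SELECT OBJECT_NAME(i.object_id) AS table_name, i.name AS index_name,
         us.user_seeks, us.user_scans, us.user_lookups, us.user_updates
  FROM sys.indexes AS i
  LEFT JOIN sys.dm_db_index_usage_stats AS us
    ON us.object_id = i.object_id AND us.index_id = i.index_id
   AND us.database_id = DB_ID()
  WHERE OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
  ORDER BY ISNULL(us.user_seeks + us.user_scans + us.user_lookups, 0);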
Is a clustered index defined for (almost) every table, and is it on the right columns? In particular where queries benefit from an ordered scan on the index.
Is the fill factor for all or specific indexes appropriately set and are indexes defragmented in a timely manner?
Is good use made of the possibility to create covering indexes using included columns?
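For instance (hypothetical table and columns), an index that covers a frequent query without widening the key:
  CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
  ON dbo.Orders (CustomerId, OrderDate)
  INCLUDE (Status, TotalAmount);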
Is good use made of the possibility to create filtered indexes, especially for columns whose values have very different cardinalities? Are the right recompile options set in queries that should use the filtered index?
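A sketch with hypothetical names: the filtered index only covers the small 'open' subset, and OPTION (RECOMPILE) lets the optimizer match it to the actual variable value:
  CREATE NONCLUSTERED INDEX IX_Orders_Open
  ON dbo.Orders (OrderDate)
  WHERE Status = 'open';

  DECLARE @status varchar(10) = 'open', @from date = '20240101';
  SELECT OrderId, OrderDate
  FROM dbo.Orders
  WHERE Status = @status AND OrderDate >= @from
  OPTION (RECOMPILE);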
Have columnstore indexes, indexed views, XML indexes and spatial indexes been well utilized?
Are the indexes not too big a burden on the system during modifications?
Is the code set based? Are cursors, other loops and multi statement functions only used when really necessary?
Do heavy nested loops occur? Can join clauses be rewritten so that the more efficient merge or hash joins are used?
Is the usage of temporary tables or table variables limited? Can these be replaced by common table expressions?
Are union and union all adequately used?
Are there expressions that prevent the usage of indexes? Can the query be rewritten so that the index can be used?
Do variables occur in the code with a datatype different from the columns they are compared to, thereby preventing the usage of statistics?
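A typical pattern, with illustrative names: the column is varchar, the variable nvarchar, and the implicit conversion on the column can prevent an efficient index seek and accurate estimates:
  DECLARE @code nvarchar(20) = N'ABC123';
  SELECT * FROM dbo.Products WHERE ProductCode = @code;   -- implicit convert on the column

  DECLARE @code2 varchar(20) = 'ABC123';
  SELECT * FROM dbo.Products WHERE ProductCode = @code2;  -- matches the column's datatype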
Do query plans show big differences between actual and estimated numbers of rows?
Do queries lead to significant spilling of hash or sort results to tempdb?
Do too many (re)compilations occur, resulting in high CPU usage? If so, can queries be rewritten so that they reuse plans from the procedure cache?
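Ad hoc statements built by concatenating literals compile a new plan for every value; parameterizing with sp_executesql allows plan reuse (illustrative names):
  DECLARE @custId int = 42;

  -- a new plan for every customer id
  DECLARE @sql nvarchar(max) =
    N'SELECT * FROM dbo.Orders WHERE CustomerId = ' + CAST(@custId AS nvarchar(10));
  EXEC (@sql);

  -- one parameterized plan, reused from the procedure cache
  EXEC sp_executesql N'SELECT * FROM dbo.Orders WHERE CustomerId = @cid',
                     N'@cid int', @cid = @custId;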
Is there significant blocking?
Is the usage of 'read committed snapshot' a solution for concurrency problems?
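Enabling it is a single statement (shown here for a hypothetical database), but note that the row versions live in tempdb and that the switch needs (nearly) exclusive access to the database:
  ALTER DATABASE Sales SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;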
Is the right transaction isolation level used for every connection? Some clients use the most expensive 'serializable' level by default.
Does the code use transactions in the right way? Are there no long-running transactions?
Can databases, partitions or filegroups be set to read-only, limiting the administration of locks?
Does a server side trace or extended events session run, or is traceflag 1222 used to capture possible deadlocks?
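The default system_health extended events session already records deadlock graphs; alternatively, the trace flag can be enabled instance wide:
  DBCC TRACEON (1222, -1);   -- write deadlock information to the error log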
Is the database design really clean and logical? A well-designed database is usually a well-performing database, or at least a well-tunable one.
Is there efficient usage of datatypes? So, as an extreme case, not a 72-byte nchar(36) to hold a 16-byte uniqueidentifier.
Has uniqueness been enforced where possible using constraints or indexes?
Are foreign keys used where possible?
Are computed columns, persisted or not, used where appropriate?
Are triggers, constraints and possibly indexes disabled and enabled again during data transfer?
Is there the possibility to do data transfer as a metadata only operation using table partitions?
Can the deletion of huge amounts of data possibly be done as a metadata only operation using table partitions?
Is a separation desirable between transaction processing and reporting?
Is controlled denormalization desirable?
Can the query governor be used to limit the resources for connections or queries?
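The 'query governor cost limit' option refuses queries whose estimated cost exceeds the limit (expressed in estimated seconds); a sketch with an illustrative value:
  EXEC sp_configure 'show advanced options', 1;  RECONFIGURE;
  EXEC sp_configure 'query governor cost limit', 300;  RECONFIGURE;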
(c) Paul van Oordt, vanoordt.nl