Developing Matt

My Technical Journal

SQL Server Performance


I attended an SSWUG online class taught by Stephen Wynkoop on SQL Server performance. Here are my takeaways:

Root causes of performance issues

  1. Indexing (missing indexes, over-indexing, wrong indexes)
  2. Lack of maintenance (stale statistics, fragmentation (should be automated, and is easy to automate), file growth issues)
  3. T-SQL
  4. Storage subsystems (SAN, disk, etc.), including disk contention issues
  5. Hardware (RAM, etc.)

Three DMVs to show you missing indexes

  • sys.dm_db_missing_index_details
  • sys.dm_db_missing_index_groups
  • sys.dm_db_missing_index_group_stats
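A sketch of joining these three DMVs to surface candidate missing indexes. The column names are the documented DMV output; the "improvement measure" formula is a commonly used heuristic, not something from the class notes:

```sql
-- Rank missing-index suggestions; the details DMV names the columns,
-- the group-stats DMV says how often and how much they would have helped.
SELECT TOP (25)
    d.statement AS table_name,
    d.equality_columns,
    d.inequality_columns,
    d.included_columns,
    s.user_seeks,
    s.avg_total_user_cost * s.avg_user_impact * (s.user_seeks + s.user_scans)
        AS improvement_measure   -- heuristic, not an official metric
FROM sys.dm_db_missing_index_details AS d
JOIN sys.dm_db_missing_index_groups AS g
    ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats AS s
    ON s.group_handle = g.index_group_handle
ORDER BY improvement_measure DESC;
```

Remember these DMVs reset on service restart, so give the server time to accumulate workload before trusting the numbers.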

Don’t index columns whose values aren’t largely unique (for example, a sex column, or a country column where all records are in the same country)

Don’t add indexes to 3rd-party application databases unless you have permission (and keep that documentation close)

He recommended keeping a running log of changes (index additions and removals) and watching indexes closely for fragmentation and utilization. (How do you watch utilization levels on an index?)
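One way to answer my own question: sys.dm_db_index_usage_stats tracks seeks, scans, lookups, and updates per index since the last service restart. A sketch (the join to sys.indexes is standard; the sort order is just my choice for spotting write-heavy, rarely-read indexes):

```sql
-- Indexes with high user_updates but few seeks/scans/lookups are paying
-- maintenance cost with little read benefit -- candidates for review.
SELECT
    OBJECT_NAME(u.object_id) AS table_name,
    i.name                   AS index_name,
    u.user_seeks,
    u.user_scans,
    u.user_lookups,
    u.user_updates           -- writes SQL Server did to maintain the index
FROM sys.dm_db_index_usage_stats AS u
JOIN sys.indexes AS i
    ON i.object_id = u.object_id
   AND i.index_id  = u.index_id
WHERE u.database_id = DB_ID()
ORDER BY u.user_updates DESC;
```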

Covering indexes are great and should include the columns you want to return. If you are consistently pulling back lastname, firstname, city, state, and zip, SQL Server can return the data straight from the index, saving the extra I/O of a lookup back to the table. The catch, of course, is that SQL Server has to maintain those indexes (which hurts inserts and updates).
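A minimal sketch of a covering index for that example. The table and column names are hypothetical; INCLUDE stores the extra columns only at the leaf level, so they cover the query without widening the index keys:

```sql
-- Hypothetical Customers table: the SELECT below can be answered
-- entirely from the index pages (the index "covers" the query).
CREATE NONCLUSTERED INDEX IX_Customers_LastName_Covering
ON dbo.Customers (LastName, FirstName)
INCLUDE (City, State, Zip);

SELECT LastName, FirstName, City, State, Zip
FROM dbo.Customers
WHERE LastName = 'Smith';
```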

Auto-update statistics fires when roughly 20% of the data has changed (the classic threshold is 500 rows plus 20% of the table’s rows).

Index column order should be based on selectivity, with the most selective column first.
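For instance (hypothetical table, and assuming LastName is far more selective than a two-value Sex column):

```sql
-- Better: the highly selective column leads, so a seek narrows the
-- search quickly before the less selective column is considered.
CREATE NONCLUSTERED INDEX IX_People_LastName_Sex
ON dbo.People (LastName, Sex);

-- Worse: leading on a two-value column barely narrows anything.
-- CREATE NONCLUSTERED INDEX IX_People_Sex_LastName
-- ON dbo.People (Sex, LastName);
```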

Measuring fragmentation
  • The easy way is to use DBCC SHOWCONTIG WITH TABLERESULTS, ALL_INDEXES (or DBCC SHOWCONTIG (tblName) for one table). The closer the scan density is to 100 percent, the better the health of the indexes. Also look for avg. pages per extent to be close to 8. I like to place my index stats into a temp table and query with the worst offenders on top:

   CREATE TABLE #frag (
   ObjectName CHAR (255),
   ObjectId INT,
   IndexName CHAR (255),
   IndexId INT,
   Lvl INT,
   CountPages INT,
   CountRows INT,
   MinRecSize INT,
   MaxRecSize INT,
   AvgRecSize INT,
   ForRecCount INT,
   Extents INT,
   ExtentSwitches INT,
   AvgFreeBytes INT,
   AvgPageDensity INT,
   ScanDensity DECIMAL,
   BestCount INT,
   ActualCount INT,
   LogicalFrag DECIMAL,
   ExtentFrag DECIMAL)

   INSERT INTO #frag
   EXEC ('DBCC SHOWCONTIG WITH TABLERESULTS, ALL_INDEXES')

   SELECT *, CAST(CountPages AS numeric) / CAST(Extents AS numeric) AS 'Avg. Pages per Extent'
   FROM #frag
   ORDER BY ScanDensity, CountPages / Extents DESC

  • Look at avg_fragmentation_in_percent from SELECT * FROM sys.dm_db_index_physical_stats(NULL, NULL, NULL, NULL, NULL). If it is more than 10%, you should address fragmentation. The general recommendation is to rebuild an index that is more than 30% fragmented, and to defrag (reorganize) at 30% or less. Defragging is less effective, but it doesn’t lock the table and recreate the index from scratch (which is what a rebuild does). For high-impact systems you can also defrag over and over until fragmentation reaches a level you are comfortable with.
  • Other options: rebuild with ONLINE = ON, and reorganize (to defragment)
  • Review indexes quarterly
  • Check out Brent Ozar’s discussion on identifying which databases have index fragmentation
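The decision rule above can be sketched as follows. The 10%/30% cutoffs are the ones from the class; the table and index names in the ALTER INDEX examples are hypothetical (and ONLINE = ON requires Enterprise Edition):

```sql
-- LIMITED mode reads only the upper B-tree pages, so it is cheap to run.
SELECT OBJECT_NAME(ps.object_id) AS table_name,
       i.name                    AS index_name,
       ps.avg_fragmentation_in_percent,
       ps.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i
    ON i.object_id = ps.object_id
   AND i.index_id  = ps.index_id
WHERE ps.avg_fragmentation_in_percent > 10
ORDER BY ps.avg_fragmentation_in_percent DESC;

-- 10-30% fragmented: reorganize (always online, interruptible, less thorough).
ALTER INDEX IX_Customers_LastName ON dbo.Customers REORGANIZE;

-- Over 30%: rebuild (recreates the index from scratch).
ALTER INDEX IX_Customers_LastName ON dbo.Customers REBUILD WITH (ONLINE = ON);
```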

T-SQL
  • Watch data type matching between schema and query (convert your parameter early to match the column)
  • Watch transactions. Don’t leave them wide open; that blocks other operations and creates contention
  • Avoid temp tables (tempdb works very hard to keep up). You can replace them with static working tables that you can put indexes on
  • Watch naming conventions. If a procedure name starts with sp_, SQL Server will actually check the master database first, trying to find a system procedure
  • Most of the time you don’t need hints…it is usually better to let SQL Server decide
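The data-type-matching point can be illustrated like this (hypothetical table; the behavior shown is SQL Server’s implicit-conversion rule, where the lower-precedence side of the comparison gets converted):

```sql
-- Hypothetical Customers table with a VARCHAR LastName column.
-- NVARCHAR parameter vs. VARCHAR column: the column side can get a
-- CONVERT_IMPLICIT, which can turn an index seek into a scan.
DECLARE @name_nvar NVARCHAR(50) = N'Smith';
SELECT CustomerId FROM dbo.Customers WHERE LastName = @name_nvar;

-- Convert the parameter early so it matches the column's type exactly.
DECLARE @name_var VARCHAR(50) = CONVERT(VARCHAR(50), @name_nvar);
SELECT CustomerId FROM dbo.Customers WHERE LastName = @name_var;  -- can seek
```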

Storage
  • Logical disks do not equal performance. You want physical spindles; you need more spindles to get performance because they are different access points to the drive.
  • Disk speed is still the biggest contributor to performance issues

Hardware and environment
  • Over-consolidation: more than one instance of SQL Server on a box can impact performance
  • NIC saturation (the network card is being over-utilized…because of a bad router, etc.)
  • Cabinet saturation: if your cabinet is shared with other servers that are spewing out tons of data, this can cause problems as well
  • Linked servers: performance takes a hit…there is a network between the two. Watch connection speeds and resource utilization, check indexing on the other server, etc.
  • Database compatibility levels: if you are running a 2000-compatibility database on a 2008 server, performance takes a hit (the database doesn’t get full access to the newer engine’s native tools and capabilities)
  • Virtualization: multiple virtual machines fighting for resources.

Don’t forget the Resource Governor tool in SQL Server 2008, which lets you tamp down certain applications’ access to resources (for example, you can restrict how much memory an application can use).
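A minimal Resource Governor sketch, assuming a hypothetical ‘ReportApp’ application to throttle. The pool, group, and function names are made up; the DDL is the standard Resource Governor syntax (the classifier function must live in master and return sysname):

```sql
-- Cap the reporting app at 25% of memory and 30% of CPU.
CREATE RESOURCE POOL ReportPool
    WITH (MAX_MEMORY_PERCENT = 25, MAX_CPU_PERCENT = 30);

CREATE WORKLOAD GROUP ReportGroup USING ReportPool;
GO

-- The classifier runs at login time and routes each session to a group.
CREATE FUNCTION dbo.fn_classifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    IF APP_NAME() = 'ReportApp'   -- hypothetical application name
        RETURN 'ReportGroup';
    RETURN 'default';
END;
GO

ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fn_classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```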


Written by matt

November 25, 2009 at 3:08 pm

Posted in Sql Performance
