MIT Licence
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and 
associated documentation files (the "Software"), to deal in the Software without restriction, including 
without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 
copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to 
the following conditions: The above copyright notice and this permission notice shall be included in all 
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT 
LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE 
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

The following links provide some useful resources for SQL Server performance tuning.

You can use the following links to access some great stored procedures for troubleshooting. Please see their own license agreements:

The following statements come from multiple sources, and I'm not exactly sure where each one originated. I put the collection together a few years ago and recently rediscovered it deep on my hard drive; the queries are drawn from several books and websites (see the following links).

Awesome web resources for SQL Performance Tuning are:

I can highly recommend reading the following books:

Please take all the information aggregated by these queries with a grain of salt. Use them only as hints for possible performance improvements; there is no guarantee.

-- Ratio of CPU wait time (signal_wait_time) to resource wait time, e.g. IO (resource_wait_time)
-- Can be useful to see whether there are hardware limitations that need to be tackled
select 
  SUM(signal_wait_time_ms) as total_signal_wait_times_ms,
  SUM(wait_time_ms - signal_wait_time_ms) as resource_wait_times_ms,
  SUM(signal_wait_time_ms) * 1.0 / SUM(wait_time_ms) * 100 as signal_wait_percent,
  SUM(wait_time_ms - signal_wait_time_ms) * 1.0 / SUM(wait_time_ms) * 100 as resource_wait_percent
 from sys.dm_os_wait_stats
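The totals above also include idle and background waits that are harmless. A common refinement is to exclude such wait types first; the exclusion list below is only a typical, non-exhaustive selection and should be adjusted to your environment:

-- Same ratio as above, but excluding some common benign/idle wait types
-- (the exclusion list is an example selection, not an official one - adjust it for your workload)
select
  SUM(signal_wait_time_ms) * 1.0 / NULLIF(SUM(wait_time_ms), 0) * 100 as signal_wait_percent,
  SUM(wait_time_ms - signal_wait_time_ms) * 1.0 / NULLIF(SUM(wait_time_ms), 0) * 100 as resource_wait_percent
 from sys.dm_os_wait_stats
 where wait_type not in (
   'SLEEP_TASK', 'BROKER_TASK_STOP', 'BROKER_TO_FLUSH', 'SQLTRACE_BUFFER_FLUSH',
   'LAZYWRITER_SLEEP', 'XE_TIMER_EVENT', 'XE_DISPATCHER_WAIT', 'FT_IFTS_SCHEDULER_IDLE_WAIT',
   'LOGMGR_QUEUE', 'CHECKPOINT_QUEUE', 'REQUEST_FOR_DEADLOCK_SEARCH', 'WAITFOR',
   'DIRTY_PAGE_POLL', 'SP_SERVER_DIAGNOSTICS_SLEEP', 'HADR_FILESTREAM_IOMGR_IOCOMPLETION'
 )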
-- Memory usage of the plan cache
select 
   objtype as 'Cached Object Type', COUNT(*) as 'Number of plans',
   SUM(cast(size_in_bytes as BIGINT)) / 1024 / 1024 as 'Plan Cache Size (MB)', 
   AVG(usecounts) as 'Avg Use Count'
 from sys.dm_exec_cached_plans
 group by objtype
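If a large share of that memory belongs to plans that were used only once, the plan cache may be bloated by ad hoc queries. A quick check against the same DMV (a sketch) could look like this:

-- Memory taken by single-use plans (possible ad hoc plan cache bloat)
select
   objtype as 'Cached Object Type',
   COUNT(*) as 'Number of single-use plans',
   SUM(cast(size_in_bytes as BIGINT)) / 1024 / 1024 as 'Single-use Plan Size (MB)'
 from sys.dm_exec_cached_plans
 where usecounts = 1
 group by objtype
 order by 3 desc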
-- Memory usage of the memory caches
select 
   [name], [type], (pages_kb) / 1024 as total_mb,
   entries_count
 from sys.dm_os_memory_cache_counters
 order by total_mb desc
-- RAM usage of each database
-- Can be useful to decide if some database should be moved to a dedicated server
select 
  COUNT(*) * 8 / 1024 as [cached size (MB)], 
  case database_id
  when 32767 then 'resourcedb'
  else DB_NAME(database_id)
  end as 'database'
from sys.dm_os_buffer_descriptors
group by DB_NAME(database_id), database_id
order by [cached size (MB)] desc
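To drill further down within a single database, a similar query over the buffer descriptors and allocation units (a sketch; run it in the database you are interested in, and note it covers in-row and row-overflow pages only) shows which objects occupy the buffer pool:

-- Buffer pool usage per object in the current database
-- (allocation unit types 1 and 3 = in-row and row-overflow data; LOB pages map via partition_id instead)
select top 10
  OBJECT_NAME(p.object_id) as [object],
  COUNT(*) * 8 / 1024 as [cached size (MB)]
from sys.dm_os_buffer_descriptors b
join sys.allocation_units a on a.allocation_unit_id = b.allocation_unit_id
join sys.partitions p on p.hobt_id = a.container_id and a.[type] in (1, 3)
where b.database_id = DB_ID()
group by p.object_id
order by [cached size (MB)] desc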
-- Page life expectancy should be more than 300 seconds! (Warning: the query shows only the current value)
-- Use the performance monitor (perfmon) to track the value over a longer period of time
-- Can be useful to see how long each page is kept in memory before it gets pushed out by other queries
SELECT [object_name],
[counter_name],
[cntr_value]
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Manager%'
and counter_name = 'Page life expectancy'
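If perfmon is not available, the counter can also be sampled from T-SQL into a history table for later analysis. The following is only a minimal sketch; the table dbo.ple_history is a made-up example name:

-- Minimal sketch: sample page life expectancy into a (hypothetical) history table
if object_id('dbo.ple_history') is null
  create table dbo.ple_history (sampled_at datetime2 not null, [object_name] nvarchar(128) not null, ple bigint not null);

-- Schedule this insert (e.g. every minute via a SQL Server Agent job) to build a trend
insert into dbo.ple_history (sampled_at, [object_name], ple)
select SYSDATETIME(), RTRIM([object_name]), [cntr_value]
from sys.dm_os_performance_counters
where [object_name] like '%Buffer Manager%'
  and counter_name = 'Page life expectancy';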
-- Unused indexes
SELECT i.name, u.*
FROM [sys].[indexes] i
INNER JOIN [sys].[objects] o ON (i.OBJECT_ID = o.OBJECT_ID)
LEFT JOIN [sys].[dm_db_index_usage_stats] u ON (i.OBJECT_ID = u.OBJECT_ID)
    AND i.[index_id] = u.[index_id]
    AND u.[database_id] = DB_ID() --returning the database ID of the current database
WHERE o.[type] <> 'S' --shouldn't be a system base table
    AND i.[type_desc] <> 'HEAP'
    AND ISNULL(u.[user_seeks], 0) + ISNULL(u.[user_scans], 0) + ISNULL(u.[user_lookups], 0) = 0 --also keep indexes without any usage row
ORDER BY 1 ASC
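Before dropping any of these indexes, it can be worth checking how much write work they still cause; the following sketch lists the maintenance cost (user_updates) of indexes that are never read:

-- Write overhead of indexes that are never read (in the current database)
select
  OBJECT_NAME(u.[object_id]) as 'table',
  i.name as 'index',
  u.user_updates as 'writes (index maintenance only)'
from sys.dm_db_index_usage_stats u
join sys.indexes i on i.[object_id] = u.[object_id] and i.index_id = u.index_id
where u.database_id = DB_ID()
  and i.[type_desc] <> 'HEAP'
  and u.user_seeks + u.user_scans + u.user_lookups = 0
order by u.user_updates desc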
-- Full scans per second
declare @scanns BIGINT;

SELECT @scanns =
[cntr_value]
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Access%' and counter_name = 'Full scans/sec'

waitfor delay '00:00:01';

SELECT [object_name],
[counter_name],
([cntr_value] - @scanns) as full_scans_per_sec
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Access%' and counter_name = 'Full scans/sec'
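The counter above is cumulative since server start, which is why it has to be sampled twice with a delay in between. The same sample-wait-sample technique works for other cumulative counters in sys.dm_os_performance_counters, for example (a sketch) batch requests per second:

-- Batch requests per second, measured with the same two-sample technique
declare @batches BIGINT;

SELECT @batches = [cntr_value]
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%SQL Statistics%' and counter_name = 'Batch Requests/sec'

waitfor delay '00:00:01';

SELECT ([cntr_value] - @batches) as batch_requests_per_sec
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%SQL Statistics%' and counter_name = 'Batch Requests/sec'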
-- Shows the 10 worst queries of the current database
-- Original source: https://debadattaparida.wordpress.com/2016/04/20/find-most-expensive-queries-using-dmv/

SELECT TOP 10 SUBSTRING(qt.TEXT, (qs.statement_start_offset/2)+1,
((CASE qs.statement_end_offset
WHEN -1 THEN DATALENGTH(qt.TEXT)
ELSE qs.statement_end_offset
END - qs.statement_start_offset)/2)+1),
qs.execution_count,
qs.total_logical_reads, qs.last_logical_reads,
qs.total_logical_writes, qs.last_logical_writes,
qs.total_worker_time,
qs.last_worker_time,
qs.total_elapsed_time/1000000 total_elapsed_time_in_S,
qs.last_elapsed_time/1000000 last_elapsed_time_in_S,
qs.last_execution_time,
qp.query_plan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
where qp.dbid = DB_ID()
-- ORDER BY qs.total_logical_reads DESC -- logical reads
-- ORDER BY qs.total_logical_writes DESC -- logical writes
ORDER BY qs.total_worker_time DESC -- CPU time
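Note that total_worker_time favours statements that simply run very often. Dividing by execution_count instead shows the average cost per call; a sketch based on the same DMVs:

-- Top 10 statements by average CPU per execution (current database)
SELECT TOP 10
qs.total_worker_time / qs.execution_count as avg_worker_time,
qs.execution_count,
qs.total_worker_time,
qt.TEXT,
qp.query_plan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
where qp.dbid = DB_ID()
ORDER BY avg_worker_time DESC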
-- index sizes
SELECT
OBJECT_NAME(i.OBJECT_ID) AS TableName,
i.name AS IndexName,
i.index_id AS IndexID,
8 * SUM(a.used_pages) AS 'Indexsize(KB)'
FROM sys.indexes AS i
JOIN sys.partitions AS p ON p.OBJECT_ID = i.OBJECT_ID AND p.index_id = i.index_id
JOIN sys.allocation_units AS a ON a.container_id = p.partition_id
GROUP BY i.OBJECT_ID,i.index_id,i.name
order by 4 desc
--ORDER BY OBJECT_NAME(i.OBJECT_ID),i.index_id 
 -- trigger stats
select top 10 object_name(s.object_id), qp.query_plan, * from sys.dm_exec_trigger_stats s
cross apply sys.dm_exec_query_plan (s.plan_handle) qp
where database_id = db_id()
order by s.total_worker_time desc
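The same pattern also works for stored procedures via sys.dm_exec_procedure_stats (available since SQL Server 2008); a sketch:

-- Stored procedure stats, by total CPU time (current database)
select top 10
  object_name(s.object_id, s.database_id) as procedure_name,
  s.execution_count, s.total_worker_time, s.total_elapsed_time, qp.query_plan
from sys.dm_exec_procedure_stats s
cross apply sys.dm_exec_query_plan(s.plan_handle) qp
where s.database_id = db_id()
order by s.total_worker_time desc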