@evenkiel
Created February 12, 2015 22:55
SQL Server Performance Tuning Class - Day 4

How to Measure Your Server

  • 3 Key Numbers
    • How busy is your server
    • How hard is it working
    • How much data do you have

How Busy is Your Server

  • Perfmon SQLServer:SQL Statistics - Batch Requests/sec. Trend this on an hourly basis and break it out by Weekday/Weekend
    • Do NOT look at Transactions/sec, since not all statements are transactions
    • 0 - 1,000: easy to handle with commodity hardware. MAY be possible to run this load on a VM
    • 1,000 - 5,000: be careful, because a table lock or bad query can knock the server over
    • 5,000 - 25,000: you should have several full-time DBAs performance tuning. Every index matters critically
    • 25,000+: requires lots of attention, and you need to have AlwaysOn Availability Groups
    • Record is 150,000
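The Batch Requests/sec counter above can also be sampled from T-SQL instead of Perfmon. A minimal sketch, assuming access to sys.dm_os_performance_counters (the counter is cumulative, so two samples are diffed over a window):

```sql
-- Sample Batch Requests/sec over a 10-second window.
-- The DMV value is a running total, so rate = (sample2 - sample1) / seconds.
DECLARE @v1 BIGINT, @v2 BIGINT;

SELECT @v1 = cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Batch Requests/sec';

WAITFOR DELAY '00:00:10';   -- 10-second sample window

SELECT @v2 = cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Batch Requests/sec';

SELECT (@v2 - @v1) / 10 AS batch_requests_per_sec;
```

For the hourly weekday/weekend trending the notes describe, you'd log this value to a table on a schedule rather than running it ad hoc.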

How hard is it working

  • Hours of wait time per hour
    • brentozar.com/go/getwaits - trend this on an hourly basis
    • The more the server works the more it waits
    • 0 - Server isn't doing anything
    • 1 hour of waits - still not doing much
    • 1 hour of waits X # of cores - working hard, look at tuning
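A rough sketch of the "hours of wait time" measurement using sys.dm_os_wait_stats. The brentozar.com/go/getwaits script does the hourly trending and filters many more benign wait types; the exclusion list here is illustrative only:

```sql
-- Total wait time accumulated since the last restart, in hours.
-- Compare against (uptime hours × core count) to judge how hard the server works.
SELECT TOP 10
    wait_type,
    wait_time_ms / 1000.0 / 3600 AS wait_hours,
    waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN      -- a few benign idle waits (partial list)
    ('SLEEP_TASK', 'LAZYWRITER_SLEEP', 'BROKER_TASK_STOP',
     'SQLTRACE_INCREMENTAL_FLUSH_SLEEP', 'XE_TIMER_EVENT')
ORDER BY wait_time_ms DESC;
```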

How much data do you have

  • All numbers below assume 'commodity' hardware, which is 2 CPUs and 250GB+ of RAM
  • 1 - 150GB Std Edition
  • 150 - 500 GB Enterprise
  • 500GB + OLTP vs. Analytical?
  • 1TB OLTP data - very challenging

What If It's Hardware

  • http://cpuid.com can be used to look at exactly what the CPU is doing
  • Get a faster CPU. Pay attention to the number of cores
  • ~$7k per core with Enterprise. ~$2k per core with Standard
  • PAGEIOLATCH - make storage faster. Add more memory
  • Perfmon Physical Disk: Avg Sec/Read. 20 - 100ms is good. > 100ms means you have a storage issue
  • Don't ever change storage in an existing server. Buy new hardware and test/tune there
  • WRITELOG
    • Commit is only completed when the transaction log is written to.
    • Perfmon Physical Disk: Avg Sec/Write. 3-20ms is good
    • Perfmon Physical Disk: Reads/Sec, Writes/Sec
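The Perfmon latency counters above have a DMV equivalent. A sketch using sys.dm_io_virtual_file_stats, which reports cumulative I/O stalls per database file:

```sql
-- Average read/write latency per file since the last restart:
-- the DMV equivalent of Perfmon's Avg Sec/Read and Avg Sec/Write.
SELECT
    DB_NAME(vfs.database_id) AS database_name,
    mf.type_desc,                                        -- ROWS (data) or LOG
    vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
    vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON vfs.database_id = mf.database_id
   AND vfs.file_id = mf.file_id
ORDER BY avg_read_ms DESC;
```

Since the numbers are cumulative since restart, a long-past storage problem can still skew the averages; trend snapshots over time for a fair picture.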

Reporting in Production

  • You need more RAM than data. If you have 128GB of data then you should have roughly 192GB of RAM
  • If you can fit data in memory, then most of your tuning problems go away
  • By default, SQL Server will allow one query to allocate up to 25% of available memory. 4 users with awful reports could swamp a server
  • Can use filtered indexes to get best performance for recent history
  • Indexed views are views materialized with a clustered index - can include a WHERE clause
  • DBCC TRACEON(610) minimal logging - useful for DW transfers to significantly drop logging
    • Almost impossible to run if you are running FULL recovery. Need to be running in SIMPLE recovery mode
  • Use SIMPLE recovery model for datawarehouse databases. Pair this with DBCC TRACEON(610) to get super minimal logging and better speed for writes
  • WITH INDEX(1) will force the clustered index to be used
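A hypothetical filtered-index example for the "best performance for recent history" point above. The table, columns, and date cutoff are made up for illustration:

```sql
-- Filtered index covering only recent rows: smaller, cheaper to maintain,
-- and usable by reports that filter on the same date range.
CREATE NONCLUSTERED INDEX IX_Orders_Recent
ON dbo.Orders (OrderDate, CustomerID)
INCLUDE (TotalDue)
WHERE OrderDate >= '2015-01-01';   -- only recent history is indexed
```

Queries must include a compatible predicate (e.g. `WHERE OrderDate >= '2015-02-01'`) for the optimizer to pick the filtered index.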

How to Tell When tempdb is a Problem

  • The 'Version Store' uses tempdb
  • Temp tables and temp variables live here as well
  • SQL Server can round-robin allocations amongst the available tempdb files
  • For best results, have equally sized files
  • For lots of small allocations, like you have with tempdb, having multiple files helps because you get multiple PFS (page free space) and SGAM (shared global allocation map)
  • PAGELATCH_UP - tempdb contention waits; means the page is in memory
  • PAGEIOLATCH_* - means the page is being pulled from disk
  • You can see this with the sp_WhoIsActive script
  • Look at wait stats to see if adding tempdb files can help distribute workload
  • SQL 2014 has greatly reduced IO in tempdb
  • Latency thresholds for concern
    • Read latency - 30ms for data files, 5ms for log files
    • Write latency - 30ms for data files, 2ms for log files
  • Microsoft's guidelines - start with as many tempdb files as you have physical cores, up to 8. From there you need to measure waits and test
  • They like to pre-grow out tempdb and fill up the drive
  • Simple approach - start with 4 equally sized tempdb data files and then watch
  • Don't necessarily need to put tempdb on a separate physical drive unless you're seeing physical IO waits. Kendra recommended creating a separate logical volume for tempdb so that you can control its growth. Also allows you to move it around easily
  • PREEMPTIVE_OS_* means that it's a wait on something outside of SQL Server. Could also be related to encryption.
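The "start with 4 equally sized tempdb data files" suggestion above might look like this sketch. File names, sizes, and the T:\ path are illustrative; size them to pre-grow into the drive as the notes suggest:

```sql
-- Resize the existing tempdb data file and add three more of equal size,
-- so round-robin allocation spreads PFS/SGAM contention across four files.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev,  SIZE = 4096MB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb2.ndf', SIZE = 4096MB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev3, FILENAME = 'T:\tempdb3.ndf', SIZE = 4096MB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev4, FILENAME = 'T:\tempdb4.ndf', SIZE = 4096MB);
```

Keeping the files equally sized matters because SQL Server's proportional-fill algorithm favors the file with the most free space, which defeats the round-robin if one file is larger.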

Scale Out for Success

  • Content Addressable Storage as a replacement for storing images/blobs in a SQL Server. http://www.emc.com/data-protection/centera.htm
  • ElasticSearch - full text search solution in use by Stack Overflow
  • Redis for an open source caching layer

AlwaysOn Availability Groups: Solution, Not Problem

  • Not for us

Identifying Configuration Problems That Burn Performance

  • ostress.exe - simple tool for scripting queries to hit a dev SQL Server. Similar to hammerdb
  • Make sure auto-shrink is off (check is_auto_shrink_on in sys.databases)
  • Stats updates happen synchronously when the next read comes through after writes cross the registered threshold
  • THREADPOOL waits are an example of a poison wait. Shouldn't ever really have these.
  • SQL Server memory pools:
    • Execution Plan Cache
    • Buffer Pool Cache (aka Data Cache)
    • Query Workspace Memory (aka Memory Grant)
  • RESOURCE_SEMAPHORE - Queries are asking for workspace memory and SQL server is unable to grant
    • Scanning large tables can lead to big memory grants
    • For 2012/2014 Standard you can set max server memory higher than the 128GB edition limit, since the limit applies to the buffer pool only
    • Make sure that SQL Server is not under external memory pressure
  • sp_Blitz checks for the poison wait types
  • The memory limits for SQL Server editions apply to the Data Cache only. So, add 15% or so to the max edition limit and set the max SQL Server memory to this level.
  • Leave 10% or 4GB (whichever is higher) for the OS
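Putting the last two bullets together, a sketch of setting max server memory via sp_configure. The value is illustrative for a 64GB box (64GB minus the larger of 10% or 4GB ≈ 57.6GB):

```sql
-- Cap SQL Server's memory so the OS keeps the larger of 10% or 4GB.
-- 58982MB ≈ 57.6GB; substitute a value computed for your own RAM.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 58982;
RECONFIGURE;
```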