- number of workers := 1 + #cores * 2 (see the sizing sketch after this block)
- each process can handle only one request at a time
- what is the cost of having a worker?
- an additional step is to use a load balancer to distribute requests across workers
- each worker has 2 connections to a database
- each worker can service multiple databases (but not at once)
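
  A minimal sketch of the sizing rule above. The "1 + #cores * 2" worker count
  and the "2 connections per worker" figure come from the notes; the extra
  max_connections headroom is an assumption added for illustration:

    import multiprocessing

    cores = multiprocessing.cpu_count()

    # Rule of thumb from the notes: one extra worker on top of 2 per core.
    workers = 1 + cores * 2

    # Each worker holds 2 database connections, so PostgreSQL's
    # max_connections must at least cover workers * 2; the headroom of 10
    # for other connections is an assumption, not from the notes.
    min_max_connections = workers * 2 + 10

    print("workers         =", workers)
    print("max_connections >=", min_max_connections)
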
- http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
- suggested tool for performance monitoring: Munin
- Munin has PostgreSQL plugins
- postgresql.conf:
  - log_min_duration_statement = 50 (every statement taking longer than 50ms gets logged)
  - analyze the resulting log with pgBadger or pgFouine
  - lc_messages = "C" (make sure that logging is done in English, which pgBadger/pgFouine expect)
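
  A sketch of how those two lines look in postgresql.conf (values as given
  above; everything else left at its defaults):

    # log every statement that takes longer than 50ms
    log_min_duration_statement = 50

    # keep log messages in English so pgBadger / pgFouine can parse them
    lc_messages = 'C'
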
- look at the PostgreSQL statistics tables (pg_stat_*)
- strong advice: store attachments in the filestore, not in the database
- normal values: RPC: 200ms, SQL: 100ms
- one transaction: 100-300 heavyweight locks
- Common problems:
  - stored functions (the risk is cascading triggering)
  - slow queries (anything over 500ms should be looked at; see the sketch after this list)
  - lock contention
  - custom locking mechanisms
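
  For the "slow queries" item, a quick way to see what is currently running
  longer than 500ms is to query pg_stat_activity. A minimal psycopg2 sketch;
  the connection string is a placeholder and the column names assume
  PostgreSQL 9.2+:

    import psycopg2

    # Placeholder connection string -- adjust to your own server/database.
    conn = psycopg2.connect("dbname=openerp user=openerp")
    cur = conn.cursor()

    # Active statements running for more than 500ms.
    cur.execute("""
        SELECT pid, now() - query_start AS runtime, query
          FROM pg_stat_activity
         WHERE state = 'active'
           AND now() - query_start > interval '500 milliseconds'
         ORDER BY runtime DESC
    """)
    for pid, runtime, query in cur.fetchall():
        print(pid, runtime, query)

    cur.close()
    conn.close()
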
- _auto_join may help for field definitions; this tries to construct a JOIN in the database query instead of combining id IN (...) constructs
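
  A sketch of what that looks like in an OpenERP 7 style model; the model and
  field are made up for illustration, and auto_join is (to my understanding)
  the field option that sets _auto_join:

    from openerp.osv import osv, fields

    class res_partner_example(osv.osv):
        _name = 'res.partner.example'

        _columns = {
            # auto_join=True lets the ORM emit a SQL JOIN when searching
            # through this relation, instead of a separate query that ends
            # up as an "id IN (...)" clause.
            'partner_id': fields.many2one('res.partner', 'Partner',
                                          auto_join=True),
        }
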
- rewriting in SQL may help in some situations
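
  And a sketch of the "rewrite in SQL" escape hatch, using the cursor that
  every OpenERP method already receives; the model, method and query are
  made up for illustration:

    from openerp.osv import osv

    class sale_order_stats(osv.osv):
        _inherit = 'sale.order'

        def _count_orders_per_partner(self, cr, uid, context=None):
            # Bypass the ORM for an aggregation it would otherwise do
            # record by record.  If the query took parameters, pass them
            # as the second argument of cr.execute() -- never interpolate
            # them into the SQL string.
            cr.execute("""
                SELECT partner_id, COUNT(*)
                  FROM sale_order
                 GROUP BY partner_id
            """)
            return dict(cr.fetchall())
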