@michaelkarrer81
Last active September 6, 2023 09:37
[postgresql tuning] How to analyze and tune the postgresql database #database #postgresql
# RESOURCES
# https://blog.codeship.com/tuning-postgresql-with-pgbench/
# http://www.linux-magazin.de/ausgaben/2013/07/postgresql-tuning/5/
# https://blog.timescale.com/timescaledb-vs-6a696248104e
# https://heapanalytics.com/blog/engineering/analyzing-performance-millions-sql-queries-one-special-snowflake
# https://www.citusdata.com/blog/2018/02/15/when-postgresql-blocks/
# https://amplitude.engineering/how-a-single-postgresql-config-change-improved-slow-query-performance-by-50x-85593b8991b0
# https://devcenter.heroku.com/articles/postgresql-indexes
# http://www.craigkerstiens.com/2013/01/10/more-on-postgres-performance/
# http://www.craigkerstiens.com/2012/10/01/understanding-postgres-performance/
# Check Hits from Cache vs Hits from Disk
# ---------------------------------------
# https://www.citusdata.com/blog/2017/09/29/what-performance-can-you-expect-from-postgres
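# A common way to check this from the statistics collector is the query below
# (run against a live database; heap_blks_* counters must already be populated).
# On a hot OLTP workload, a hit ratio well below ~0.99 suggests the working set
# does not fit in shared_buffers.
#
#   SELECT
#     sum(heap_blks_hit)  AS from_cache,
#     sum(heap_blks_read) AS from_disk,
#     sum(heap_blks_hit)::float
#       / nullif(sum(heap_blks_hit) + sum(heap_blks_read), 0) AS hit_ratio
#   FROM pg_statio_user_tables;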
# Check performance with pgbench
# ------------------------------
# Install
apt-get install postgresql-contrib
# Setup DB
su - postgres
psql
# CREATE DATABASE example;
# \q
# Initialize DB at scale factor 50 (~5,000,000 rows in pgbench_accounts, ~800MB)
pgbench -i -s 50 example
# Testing the database
pgbench -c 10 -j 2 -t 10000 example
# Result
# ------
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 50
query mode: simple
number of clients: 10
number of threads: 2
number of transactions per client: 10000
number of transactions actually processed: 100000/100000
tps = 972.158707 (including connections establishing)
tps = 972.357240 (excluding connections establishing)
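A few useful variations (sketches, assuming the same `example` database; `-S` runs the built-in SELECT-only script, `-T` runs for a duration instead of a fixed transaction count):

```
# Read-only workload, useful for isolating cache/index performance
pgbench -S -c 10 -j 2 -T 60 example

# Re-run the default TPC-B-like test for 60 seconds after each config change
# and compare the tps numbers
pgbench -c 10 -j 2 -T 60 example
```

Repeat the same run before and after each configuration change so the tps figures are comparable.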
# Tuning postgresql.conf
# --------------------
# ToDo
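As a starting point, a minimal sketch of commonly tuned settings for a dedicated machine with 8 GB RAM. These values are assumptions (rules of thumb), not measured results; benchmark with pgbench before and after changing them.

```
# postgresql.conf -- illustrative starting points, assuming 8 GB RAM
shared_buffers = 2GB             # ~25% of RAM is a common rule of thumb
effective_cache_size = 6GB       # planner hint: RAM available for caching (incl. OS cache)
work_mem = 16MB                  # per-sort/per-hash-node memory, per backend
maintenance_work_mem = 512MB     # used by VACUUM, CREATE INDEX
```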
# Optimizing a select
# -------------------
# The most frequently run select (1400 to 5000 ms for 100 records = 14 to 50 ms per record)
# https://stackoverflow.com/questions/13234812/improving-query-speed-simple-select-in-big-postgres-table
SELECT *
FROM sync_table
WHERE job_to_fso_can_sync = true
  AND (job_to_fso_sync_version IS NULL OR job_to_fso_sync_version <> job_last_change)
ORDER BY job_last_change DESC
LIMIT 100;
# https://dba.stackexchange.com/questions/42290/configuring-postgresql-for-read-performance
explain analyze SELECT * FROM sync_table WHERE job_to_fso_can_sync = true AND (job_to_fso_sync_version is null or job_to_fso_sync_version <> job_last_change) ORDER BY job_last_change desc LIMIT 1;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=0.42..1.24 rows=1 width=2809) (actual time=63.330..63.331 rows=1 loops=1)
-> Index Scan Backward using syncjob_sync_index3 on sync_table (cost=0.42..80985.76 rows=99060 width=2809) (actual time=63.328..63.328 rows=1 loops=1)
Filter: (job_to_fso_can_sync AND ((job_to_fso_sync_version IS NULL) OR (job_to_fso_sync_version <> job_last_change)))
Rows Removed by Filter: 52615
Total runtime: 63.378 ms
(5 rows)
demo=# explain analyze SELECT * FROM sync_table WHERE job_to_fso_can_sync = true AND (job_to_fso_sync_version is null or job_to_fso_sync_version <> job_last_change) LIMIT 1;
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------
Limit (cost=0.00..0.26 rows=1 width=2809) (actual time=7.353..7.354 rows=1 loops=1)
-> Seq Scan on sync_table (cost=0.00..25325.12 rows=99060 width=2809) (actual time=7.351..7.351 rows=1 loops=1)
Filter: (job_to_fso_can_sync AND ((job_to_fso_sync_version IS NULL) OR (job_to_fso_sync_version <> job_last_change)))
Rows Removed by Filter: 15855
Total runtime: 7.396 ms
(5 rows)
diaf=# explain analyze SELECT * FROM sync_table WHERE job_to_fso_can_sync = true AND (job_to_fso_sync_version is null or job_to_fso_sync_version <> job_last_change) ORDER BY job_last_change desc LIMIT 1;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=0.42..0.80 rows=1 width=3219) (actual time=0.034..0.034 rows=1 loops=1)
-> Index Scan Backward using syncjob_sync_index3 on sync_table (cost=0.42..147564.91 rows=392416 width=3219) (actual time=0.032..0.032 rows=1 loops=1)
Filter: (job_to_fso_can_sync AND ((job_to_fso_sync_version IS NULL) OR (job_to_fso_sync_version <> job_last_change)))
Total runtime: 0.089 ms
(4 rows)
diaf=# explain analyze SELECT * FROM sync_table WHERE job_to_fso_can_sync = true AND (job_to_fso_sync_version is null or job_to_fso_sync_version <> job_last_change) LIMIT 1;
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------
Limit (cost=0.00..0.18 rows=1 width=3219) (actual time=18.912..18.913 rows=1 loops=1)
-> Seq Scan on sync_table (cost=0.00..69653.46 rows=392416 width=3219) (actual time=18.910..18.910 rows=1 loops=1)
Filter: (job_to_fso_can_sync AND ((job_to_fso_sync_version IS NULL) OR (job_to_fso_sync_version <> job_last_change)))
Rows Removed by Filter: 26686
Total runtime: 18.952 ms
(5 rows)
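The plans above have to filter out tens of thousands of rows ("Rows Removed by Filter") before finding a match. A partial index whose predicate matches the query's WHERE clause could let the planner scan only the rows that can actually sync, already ordered for the `ORDER BY job_last_change DESC ... LIMIT`. The index name below is hypothetical; verify the effect with EXPLAIN ANALYZE afterwards.

```
CREATE INDEX sync_table_pending_idx
  ON sync_table (job_last_change DESC)
  WHERE job_to_fso_can_sync
    AND (job_to_fso_sync_version IS NULL
         OR job_to_fso_sync_version <> job_last_change);
```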