This gist summarises a way to simulate point-in-time recovery (PITR) using WAL-G. Most of the material is adapted from Creston's tutorial.
First, we initialize a database cluster:
pg_ctl init -D cluster
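To ship WAL out of this cluster, WAL-G needs storage credentials and PostgreSQL needs an archive_command that calls wal-g. A minimal sketch, assuming an S3 prefix and inline credentials (the bucket name and values below are placeholders, not taken from the tutorial):

export WALG_S3_PREFIX='s3://my-pitr-bucket/pitr-demo'
export AWS_ACCESS_KEY_ID='YOUR-ACCESS-KEY'
export AWS_SECRET_ACCESS_KEY='YOUR-SECRET-KEY'
# Ship each completed WAL segment through wal-g, then take a base backup to replay from.
cat >> cluster/postgresql.conf <<'EOF'
archive_mode = on
archive_command = 'wal-g wal-push %p'
archive_timeout = 60
EOF
pg_ctl start -D cluster
wal-g backup-push cluster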
#!/bin/bash
if [ "$EUID" -ne 0 ]
  then echo "Please run as root"
  exit
fi
warn='!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!'
echo -e "$warn\n$warn\n$warn"
echo " WARNING"
# Your account access key - must have read access to your S3 Bucket
$accessKey = "YOUR-ACCESS-KEY"
# Your account secret access key
$secretKey = "YOUR-SECRET-KEY"
# The region associated with your bucket e.g. eu-west-1, us-east-1 etc. (see http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-regions)
$region = "eu-west-1"
# The name of your S3 Bucket
$bucket = "my-test-bucket"
# The folder in your bucket to copy, including trailing slash. Leave blank to copy the entire bucket
$keyPrefix = "my-folder/"
--- PSQL queries, also duplicated from https://github.com/anvk/AwesomePSQLList/blob/master/README.md
--- some of them taken from https://www.slideshare.net/alexeylesovsky/deep-dive-into-postgresql-statistics-54594192
-- I'm not an expert in PSQL. Just a developer who is trying to accumulate useful stat queries which could potentially explain problems in your Postgres DB.
------------
-- Basics --
------------
-- Get indexes of tables
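One common way to list a table's indexes is to query the built-in pg_indexes view; a hedged psql one-liner (the database name mydb is a placeholder):

psql -d mydb -c "SELECT tablename, indexname, indexdef FROM pg_indexes WHERE schemaname = 'public' ORDER BY tablename;"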
#!/bin/bash
echo ' (pre) script declarations'
IP6TABLES='/sbin/ip6tables'
IP4TABLES='/sbin/iptables'
LAN_IF='ens+'
TUN_IF='tun+'
INNER_GLOBAL_UNICAST='2001:0db8:ffff:ffff::/48'
INNER_IPV4_UNICAST='10.8.0.0/24'
IPV4_LINK_LOCAL='169.254.0.0/16' # RFC 3927
IPV6_LINK_LOCAL='fe80::/10' # RFC 4291
fs.file-max = 1000000
net.ipv4.tcp_max_syn_backlog = 3240000
net.core.somaxconn = 3240000
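These kernel parameters raise the open-file limit and the TCP connection-queue sizes. A minimal sketch of persisting and applying them, assuming root and a drop-in file name chosen purely for illustration:

# Write the settings to a sysctl drop-in and reload (the file name is an assumption).
cat > /etc/sysctl.d/99-tuning.conf <<'EOF'
fs.file-max = 1000000
net.ipv4.tcp_max_syn_backlog = 3240000
net.core.somaxconn = 3240000
EOF
sysctl --system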
package main
import (
	"fmt"
	"reflect"
	"github.com/coreos/go-iptables/iptables"
)
func contains(list []string, value string) bool {
	for _, item := range list {
		if item == value {
			return true
		}
	}
	return false
}
This is a simple little Python script that lets you query EC2 metadata from consul-template. Its only requirement is boto. It uses the EC2 internal metadata service, so it does not require any API keys or even a region. The only caveat is that it can only be run on a machine inside EC2.
You can give no arguments for full dictionary output, or one or more arguments to get specific key(s). Put it somewhere on your machine, chmod +x it, and give the full path to consul-template.
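A hedged usage sketch, assuming the script is saved as /usr/local/bin/ec2-metadata.py (the path and the example key are illustrative):

chmod +x /usr/local/bin/ec2-metadata.py
# No arguments: dump the full metadata dictionary.
/usr/local/bin/ec2-metadata.py
# One or more arguments: print only the requested key(s), e.g. the instance ID.
/usr/local/bin/ec2-metadata.py instance-id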
### Single hop tunnelling:
ssh -f -N -L 9906:127.0.0.1:3306 [email protected]
where:
-f puts ssh in the background
-N makes it not execute a remote command
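With the tunnel up, a local client can connect through the forwarded port; for example, for the MySQL service above (the username is a placeholder):

mysql -h 127.0.0.1 -P 9906 -u dbuser -p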
WAL-E needs to be installed on all machines, masters and slaves. Only one machine, the master, writes WAL segments via continuous archiving. The configuration for the master's postgresql.conf is:
archive_mode = on
archive_command = 'envdir /etc/wal-e.d/env wal-e wal-push %p'
archive_timeout = 60
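On a slave, WAL-E runs in the opposite direction, fetching archived segments during recovery. A minimal sketch of the corresponding restore_command, assuming the same /etc/wal-e.d/env credentials directory:

restore_command = 'envdir /etc/wal-e.d/env wal-e wal-fetch "%f" "%p"'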