Cloud Native Days Bologna - Workshop

Links

CNPG Playground

GitHub

CloudNativePG

Website
GitHub
Docs
Main Features
Release 1.26
Kubectl Plugin
Quick Start
Services
Backups
Backup Plugin
Failure
Failover
Reading Logs
Logs Plugin
Rolling Upgrades
PostgreSQL Upgrades
Recovery
Replica Cluster
Distributed Topology

Connect with us

GitHub Discussions
Blog
Slack
LinkedIn

bash_aliases.sh

kc(){
        ## Run kubectl against the kind cluster of the given region
        if [[ $# -lt 1 ]]
        then
                echo "Usage: kc <REGION> <command>"
                return 1
        fi
        local REGION=${1}
        shift

        kubectl --context=kind-k8s-${REGION} "$@"
}

kcnpg(){
        ## Run kubectl cnpg against the kind cluster of the given region
        if [[ $# -lt 1 ]]
        then
                echo "Usage: kcnpg <REGION> <command>"
                return 1
        fi
        local REGION=${1}
        shift

        kubectl cnpg --context=kind-k8s-${REGION} "$@"
}

for region in eu us
do
    alias "k${region}=kc ${region}"
    alias "kcnpg${region}=kcnpg ${region}"
done
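
Once sourced, the aliases give one-liners per region, for example:

keu get pods                      # kubectl against the kind-k8s-eu cluster
kus get nodes                     # kubectl against the kind-k8s-us cluster
kcnpgeu status cluster-example    # kubectl cnpg against kind-k8s-eu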

Commands

CNPG Workshop for Cloud Native Days Italy 2025

Clone CNPG Playground repository

git clone git@github.com:cloudnative-pg/cnpg-playground.git

Set kernel limits

sudo sysctl fs.inotify.max_user_watches=524288 fs.inotify.max_user_instances=512
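
These settings do not survive a reboot; to persist them (a sketch assuming a distribution that reads /etc/sysctl.d, with a file name chosen here for illustration):

# Hypothetical file name; any *.conf under /etc/sysctl.d works
sudo tee /etc/sysctl.d/99-kind-inotify.conf <<EOF
fs.inotify.max_user_watches=524288
fs.inotify.max_user_instances=512
EOF
sudo sysctl --system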

Create Kubernetes Clusters

cd cnpg-playground
./scripts/setup.sh

Export the kubeconfig file and use the EU region context

export KUBECONFIG=<path-to>/cnpg-playground/k8s/kube-config.yaml

kubectl config use-context kind-k8s-eu
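
Optionally verify the contexts the playground created and the one in use:

kubectl config get-contexts
kubectl config current-context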

Create the kubectl aliases

. ./bash_aliases.sh

Install the Kubectl Plugin for CNPG

curl -sSfL \
  https://github.com/cloudnative-pg/cloudnative-pg/raw/main/hack/install-cnpg-plugin.sh | \
  sudo sh -s -- -b /usr/local/bin
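
Optionally confirm the plugin is on the PATH:

kubectl cnpg version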

Install the CNPG Operator using the CNPG Plugin

kubectl cnpg install generate --control-plane --version 1.25.1 \
  | kubectl apply -f - --server-side

Watch resources in a different terminal

kubectl get pods -w

Create a Cluster YAML file

cat <<EOF > ./cluster-example.yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3

  storage:
    size: 1Gi
EOF

Deploy the first CNPG cluster

kubectl apply -f ./cluster-example.yaml

Analyze resources

kubectl get clusters,pods,pvc,svc,ep,secrets
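
Among the secrets is the application user's credential; as a sketch (assuming the default app-user secret name cluster-example-app generated by CNPG), its password can be read with:

kubectl get secret cluster-example-app \
  -o jsonpath='{.data.password}' | base64 -d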

Create a Cluster YAML file with backup

cat <<EOF > ./cluster-example-backup.yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example-backup
spec:
  instances: 3
  storage:
    size: 1Gi
  backup:
    barmanObjectStore:
      destinationPath: s3://backups/
      endpointURL: http://minio-eu:9000
      s3Credentials:
        accessKeyId:
          name: minio-eu
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: minio-eu
          key: ACCESS_SECRET_KEY
      wal:
        compression: gzip
EOF

Deploy cluster with backup

kubectl apply -f ./cluster-example-backup.yaml

Check status of the Cluster

kubectl cnpg status cluster-example-backup

Connect to the app database

kubectl cnpg psql cluster-example-backup -- app

Create table and insert data

CREATE TABLE numbers(x int);
INSERT INTO numbers SELECT generate_series(1, 1000000);
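-- Sanity check before quitting: should return 1000000
SELECT count(*) FROM numbers;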
\q

Create first backup with CNPG plugin

kubectl cnpg backup cluster-example-backup
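
Backups can also run on a schedule via a ScheduledBackup resource; a minimal sketch, assuming CNPG's six-field cron format (here, daily at midnight) and a hypothetical name daily-backup:

cat <<EOF | kubectl apply -f -
apiVersion: postgresql.cnpg.io/v1
kind: ScheduledBackup
metadata:
  name: daily-backup
spec:
  schedule: "0 0 0 * * *"
  cluster:
    name: cluster-example-backup
EOF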

Verify backup and status

kubectl get backup

kubectl get backup -o yaml

kubectl cnpg status cluster-example-backup

Incident simulation #1

# find primary
kubectl get cluster cluster-example-backup
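
Alternatively, select the primary pod by label (assuming the cnpg.io/instanceRole label that CNPG puts on instance pods):

kubectl get pods \
  -l cnpg.io/cluster=cluster-example-backup,cnpg.io/instanceRole=primary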

Watch the pods in a separate terminal

kubectl get pods -w

Delete the primary Pod (replace # with its instance number)

kubectl delete pod cluster-example-backup-#

Check the cluster status

kubectl cnpg status cluster-example-backup

Incident simulation #2

# find primary
kubectl get cluster cluster-example-backup

Watch the pods in a separate terminal

kubectl get pods -w

Delete the primary Pod and its PVC (replace # with the instance number)

kubectl delete pod,pvc cluster-example-backup-#

Check the cluster status

kubectl cnpg status cluster-example-backup

Read the logs

kubectl logs cluster-example-backup-#

Read the logs using CNPG plugin

kubectl cnpg logs cluster cluster-example-backup \
  | kubectl cnpg logs pretty
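
To tail logs live during the simulations (assuming the --follow flag of kubectl cnpg logs cluster):

kubectl cnpg logs cluster cluster-example-backup -f \
  | kubectl cnpg logs pretty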

Operator Upgrade

# Monitor resources in a separate terminal
kubectl get pod -n cnpg-system -w

Install the new CNPG Operator v1.26.0

kubectl cnpg install generate --control-plane --version 1.26.0 \
  | kubectl apply -f - --server-side
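
Confirm the new operator image after the rollout (assuming the default deployment name cnpg-controller-manager):

kubectl get deployment -n cnpg-system cnpg-controller-manager \
  -o jsonpath='{.spec.template.spec.containers[0].image}'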

PostgreSQL Upgrade

# Create cluster example YAML file
cat <<EOF > ./cluster-example.yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  imageName: ghcr.io/cloudnative-pg/postgresql:16.3
  instances: 3
  storage:
    size: 1Gi
EOF

Monitor the pods in a separate terminal

kubectl get pods -w

Deploy cluster

kubectl apply -f ./cluster-example.yaml

Edit Cluster manifest file and verify

sed -i 's/16\.3/16\.9/' cluster-example.yaml

cat cluster-example.yaml

Apply manifest again and verify with plugin

kubectl apply -f ./cluster-example.yaml \
&& kubectl cnpg status cluster-example

PostgreSQL Major offline in-place upgrade

# Edit Cluster manifest file and verify
sed -i 's/16\.9/17\.5/' cluster-example.yaml

cat cluster-example.yaml

Apply manifest again and verify with plugin

kubectl apply -f ./cluster-example.yaml \
&& kubectl cnpg status cluster-example
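
Once the offline upgrade finishes, verify the server version directly (arguments after -- are forwarded to psql, as with the app connection earlier):

kubectl cnpg psql cluster-example -- -c 'SELECT version();'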

PostgreSQL Recovery with CNPG

# Create a cluster manifest with bootstrap method: recovery
cat <<EOF > ./cluster-recovery.yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-recovery
spec:
  instances: 3
  storage:
    size: 1Gi

  bootstrap:
    recovery:
      source: origin

  externalClusters:
  - name: origin
    barmanObjectStore:
      serverName: cluster-example-backup
      destinationPath: s3://backups/
      endpointURL: http://minio-eu:9000
      s3Credentials:
        accessKeyId:
          name: minio-eu
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: minio-eu
          key: ACCESS_SECRET_KEY
      wal:
        compression: gzip
EOF
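
The same bootstrap also supports point-in-time recovery via a recoveryTarget; a sketch with a hypothetical cluster name and a placeholder timestamp (it must fall inside the backed-up WAL window):

cat <<EOF > ./cluster-recovery-pitr.yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-recovery-pitr
spec:
  instances: 3
  storage:
    size: 1Gi
  bootstrap:
    recovery:
      source: origin
      recoveryTarget:
        # Placeholder: pick a time covered by the backup and archived WALs
        targetTime: "2025-06-05 10:00:00+00"
  externalClusters:
  - name: origin
    barmanObjectStore:
      serverName: cluster-example-backup
      destinationPath: s3://backups/
      endpointURL: http://minio-eu:9000
      s3Credentials:
        accessKeyId:
          name: minio-eu
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: minio-eu
          key: ACCESS_SECRET_KEY
      wal:
        compression: gzip
EOF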

Monitor resources

kubectl get pods -w

Deploy recovery

kubectl apply -f ./cluster-recovery.yaml

Check the cluster status

kubectl cnpg status cluster-recovery

Check the data in the app DB

kubectl cnpg psql cluster-recovery -- app
SELECT COUNT(*) FROM numbers;

Replica Cluster

# Make sure you are connected to the k8s-eu cluster
kubectl config use-context kind-k8s-eu

Deploy the pg-eu CNPG cluster in k8s-eu

kubectl apply -f <path-to>/cnpg-playground/demo/yaml/eu/pg-eu-legacy.yaml

Check the cluster status

kubectl cnpg status pg-eu

Take the first backup

kubectl cnpg backup pg-eu

Set up CNPG in the k8s-us context

# Get context and set it
./scripts/info.sh

kubectl config use-context kind-k8s-us

Verify this is the correct, empty cluster

kubectl get pods -n cnpg-system

kubectl config current-context

Install CNPG Operator in k8s-us cluster

kubectl cnpg install generate \
  --control-plane | \
  kubectl apply -f - --server-side

Verify CNPG deployment and resources

kubectl get deployment -n cnpg-system

kubectl get crd | grep cnpg

Monitor resources

kubectl get pods -w

Deploy the pg-us CNPG cluster

kubectl apply -f <path-to>/cnpg-playground/demo/yaml/us/pg-us-legacy.yaml

Check the cluster status

kubectl cnpg status pg-us

Analyze the pg-us cluster manifest and compare it with the pg-eu one

kubectl get cluster pg-us -o yaml

kubectl --context kind-k8s-eu get cluster pg-eu -o yaml

Switchover to Replica Cluster

# Monitor pods in both clusters
kubectl --context kind-k8s-eu get pods -w

kubectl --context kind-k8s-us get pods -w

Edit cluster pg-eu (demote it by pointing .spec.replica.primary at pg-us)

kubectl --context kind-k8s-eu edit cluster pg-eu

Get the demotion token

kubectl --context kind-k8s-eu get cluster pg-eu -o jsonpath='{.status.demotionToken}'

Edit the cluster pg-us (promote it with the demotion token, as sketched below)

kubectl --context kind-k8s-us edit cluster pg-us
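
As an alternative to editing by hand, a sketch of the whole promotion with kubectl patch (assuming the distributed topology fields .spec.replica.primary and .spec.replica.promotionToken):

# Capture the token from the demoted pg-eu cluster
TOKEN=$(kubectl --context kind-k8s-eu get cluster pg-eu \
  -o jsonpath='{.status.demotionToken}')
# Point pg-us at itself and hand over the token to complete the switchover
kubectl --context kind-k8s-us patch cluster pg-us --type merge \
  -p "{\"spec\":{\"replica\":{\"primary\":\"pg-us\",\"promotionToken\":\"${TOKEN}\"}}}"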
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment