
@dkeightley
dkeightley / userdata.sh
Last active December 4, 2023 13:33
RKE2 AWS cloud controller manager
#!/bin/sh
PUBLIC_IP=$(curl -s ifconfig.io)
# export INSTALL_RKE2_VERSION="v1.20.5+rke2r1"
curl -sfL https://get.rke2.io | sh -
provider_id="$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)/$(curl -s http://169.254.169.254/latest/meta-data/instance-id)"
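The snippet is cut off here. A hypothetical continuation (not recovered from the gist; the config path and `kubelet-arg` key are standard RKE2 conventions, but this exact usage is my assumption) would hand the computed provider ID to the kubelet:

```shell
# Hypothetical continuation (not from the gist): pass the AWS provider ID
# to the kubelet via the RKE2 config. The AWS format is aws:///<az>/<instance-id>.
mkdir -p /etc/rancher/rke2
cat <<EOF >> /etc/rancher/rke2/config.yaml
kubelet-arg:
  - "provider-id=aws:///${provider_id}"
EOF
```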
dkeightley / recover-control-plane.md
Created May 12, 2022 04:53
How to recover a cluster when all control plane nodes have failed

Task

In a disaster recovery scenario, the control plane and etcd nodes managed by Rancher in a downstream cluster may no longer be available or functioning. The cluster can be rebuilt by adding control plane and etcd nodes again, followed by restoring from an available snapshot.

Prerequisites

  • A cluster built by Rancher v2.x or the Rancher Kubernetes Engine CLI (RKE)
  • Nodes to add to the cluster with control plane and etcd roles with adequate resources
  • An offline copy of a snapshot to be used as the recovery point, often stored in S3 or copied off node filesystems to a backup location
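For a cluster built with the RKE CLI, the restore step itself can be sketched as follows (the snapshot name and cluster.yml path are placeholders; snapshots stored in S3 need the additional --s3 flags):

```shell
# Sketch: restore etcd from a named snapshot, then reconcile the cluster.
rke etcd snapshot-restore --config cluster.yml --name <snapshot-name>
# Bring the cluster back up against the restored state
rke up --config cluster.yml
```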
dkeightley / istio-overlay.yml
Last active May 10, 2022 03:38
Istio overlay for HPA minReplicas
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
      - enabled: true
        name: istio-ingressgateway
        k8s:
          hpaSpec:
            maxReplicas: 10 # default 5
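A brief usage sketch (assuming the overlay is saved as istio-overlay.yml, per the gist filename): istioctl merges the file over the in-use IstioOperator profile.

```shell
# Apply the HPA overlay to the running Istio installation.
istioctl install -f istio-overlay.yml
```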
dkeightley / readme.md
Last active December 14, 2021 12:29
Rancher with ALB Controller
dkeightley / rke2-single-install.md
Created September 16, 2021 03:41
rke2-single-install

Install

curl -sfL https://get.rke2.io | sh -
systemctl enable rke2-server.service
systemctl start rke2-server.service

Env setup
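The env setup content is truncated in this capture; a common setup, assuming the default RKE2 paths, is:

```shell
# Default RKE2 locations for the kubeconfig and the bundled binaries
# (kubectl, crictl); adjust if your installation relocates them.
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
export PATH="$PATH:/var/lib/rancher/rke2/bin"
echo "$KUBECONFIG"   # prints /etc/rancher/rke2/rke2.yaml
```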

dkeightley / rke-calicoctl.yaml
Created September 13, 2021 00:41
Run calicoctl container in RKE
# Calico Version v3.20.0
# https://docs.projectcalico.org/releases#v3.20.0
# This manifest includes the following component versions:
# calico/ctl:v3.20.0
apiVersion: v1
kind: ServiceAccount
metadata:
  name: calicoctl
  namespace: kube-system
dkeightley / ingress-to-pod.sh
Created July 30, 2021 02:52
Test pods in a service from every ingress-nginx pod
SERVICE=my-nginx
NAMESPACE=default
PORT=80
for ingresspod in $(kubectl -n ingress-nginx get pods -l app=ingress-nginx --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
do
  echo "$ingresspod"
  for svcep in $(kubectl -n "$NAMESPACE" get ep "$SERVICE" -o json | jq -r '.subsets[].addresses[].ip')
  do
    echo "=> ${svcep}"
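The gist is truncated at this point; a plausible inner step (hypothetical, not recovered from the gist) would request each endpoint from inside the ingress pod:

```shell
# Hypothetical inner loop body: curl each service endpoint from inside
# the ingress-nginx pod and print the HTTP status code.
kubectl -n ingress-nginx exec "$ingresspod" -- \
  curl -s -o /dev/null -w "%{http_code}\n" "http://${svcep}:${PORT}/"
```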
dkeightley / prometheus-migrate.md
Last active July 27, 2021 21:56
Migrate prometheus data between cluster monitoring v1/v2

Using pv-migrate, Prometheus monitoring data can be migrated between PVs/PVCs when moving from cluster monitoring v1 to v2.

This assumes monitoring uses persistent storage (i.e., a PV/PVC exists) and is intended only for cluster monitoring (not project monitoring).

Pre-work

  • Monitoring v1 apps (in Cluster Manager) should be disabled (Tools > Monitoring)
  • Ensure the monitoring v1 apps are fully uninstalled
  • Monitoring v2 (in Cluster Explorer) should be installed
  • Install pv-migrate (installation steps are in its README)
  • Configure a kubeconfig for the cluster
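With the pre-work done, the copy itself can be sketched as follows. The PVC names are placeholders to look up with `kubectl get pvc -A`; cattle-prometheus and cattle-monitoring-system are the usual Rancher namespaces for monitoring v1 and v2, and the flag names should be verified against the pv-migrate README:

```shell
# Sketch: copy Prometheus data from the v1 PVC to the v2 PVC.
# --ignore-mounted lets the copy proceed while a pod still mounts the source.
pv-migrate migrate \
  --source-namespace cattle-prometheus \
  --dest-namespace cattle-monitoring-system \
  --ignore-mounted \
  <v1-prometheus-pvc> <v2-prometheus-pvc>
```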
dkeightley / object-count-size.md
Last active July 27, 2024 06:00
etcd object counts and sizes

Exec into the etcd container

RKE1

docker exec -it etcd sh

RKE2

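Once inside the etcd container, one common way to count objects per resource type is a sketch like the following (it assumes etcdctl is already configured with the endpoints and certificates available in the container environment):

```shell
# Group etcd keys under /registry by their resource prefix and count them,
# largest first.
etcdctl get /registry --prefix --keys-only \
  | grep -v '^$' \
  | awk -F'/' '{ counts[$3]++ } END { for (k in counts) print counts[k], k }' \
  | sort -rn
```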
dkeightley / k3s-rancher-userdata.sh
Last active June 3, 2022 04:47
k3s-rancher-userdata
#!/bin/sh
PUBLIC_IP=$(curl -s ifconfig.io)
echo "Installing K3S"
# export INSTALL_K3S_VERSION="v1.19.5+k3s2"
curl -sfL https://get.k3s.io | sh -s - --tls-san ${PUBLIC_IP}
echo "Downloading cert-manager CRDs"
wget -q -P /var/lib/rancher/k3s/server/manifests/ https://github.com/jetstack/cert-manager/releases/download/v1.5.1/cert-manager.crds.yaml