With containerd (favoured)

Notes for containerd on Ubuntu 22.04 (works with 20.04 too): https://www.linuxtechi.com/install-kubernetes-on-ubuntu-22-04/. Note that IP forwarding must be enabled (see below).

Disabling swap

sudo swapoff -a
sudo nano /etc/fstab
# Comment out swap line
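
If you prefer a non-interactive edit over nano, the swap entry can be commented out with sed. This is just a sketch; fstab entries vary between machines, so verify the file afterwards.

sudo sed -i.bak '/\sswap\s/ s/^#*/#/' /etc/fstab
swapon --show   # should print nothing once swap is disabled
free -h         # the Swap line should show 0B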

Install Docker Engine

Install following the official docs: https://docs.docker.com/engine/install/ubuntu/
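
For reference, the apt-based install from that page looked roughly like this at the time of writing. Treat it as a sketch and follow the linked docs for the current commands.

sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin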

Install Docker CRI

Container Runtimes: https://kubernetes.io/docs/setup/production-environment/container-runtimes/#docker

Install CRI: https://github.com/Mirantis/cri-dockerd

# Install Go
wget https://storage.googleapis.com/golang/getgo/installer_linux
chmod +x ./installer_linux
./installer_linux
source ~/.bash_profile

git clone https://github.com/Mirantis/cri-dockerd.git
cd cri-dockerd
mkdir bin
go build -o bin/cri-dockerd
sudo mkdir -p /usr/local/bin
sudo install -o root -g root -m 0755 bin/cri-dockerd /usr/local/bin/cri-dockerd
sudo cp -a packaging/systemd/* /etc/systemd/system
sudo sed -i -e 's,/usr/bin/cri-dockerd,/usr/local/bin/cri-dockerd,' /etc/systemd/system/cri-docker.service
sudo systemctl daemon-reload
sudo systemctl enable cri-docker.service
sudo systemctl enable --now cri-docker.socket

Check that cri-docker.socket and cri-docker.service are active and running, and that the cri-dockerd binary ended up as the file /usr/local/bin/cri-dockerd rather than inside a directory at that path.
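
A quick way to verify, as a sketch:

systemctl status cri-docker.socket cri-docker.service
ls -l /usr/local/bin/cri-dockerd   # must be an executable file, not a directory

Then install kubeadm, kubelet and kubectl: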

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
sudo mkdir -p /etc/apt/keyrings   # the keyrings directory does not exist by default on Ubuntu 20.04
sudo curl -fsSLo /etc/apt/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
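
To confirm the tools are installed and pinned, a quick sketch:

kubeadm version
kubectl version --client
kubelet --version
apt-mark showhold   # should list kubelet, kubeadm and kubectl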

Note: you may want to configure the cgroup driver explicitly rather than rely on the defaults, which differ between Docker and kubelet versions.

It's critical that the kubelet and the container runtime use the same cgroup driver and are configured consistently.

If you configure systemd (recommended) as the cgroup driver for the kubelet, you must also configure systemd as the cgroup driver for the container runtime.

sudo nano /etc/docker/daemon.json
# add:
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

Then restart Docker: sudo systemctl restart docker.
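
To confirm Docker picked up the change, a quick sketch:

docker info | grep -i 'cgroup driver'   # should print: Cgroup Driver: systemd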

Explicitly set the cgroup driver for the kubelet in the kubeadm init config as well (see kubeadm-config.yaml below).

Configure IPv4 forwarding and let iptables see bridged traffic, following https://kubernetes.io/docs/setup/production-environment/container-runtimes/#forwarding-ipv4-and-letting-iptables-see-bridged-traffic:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply sysctl params without reboot
sudo sysctl --system
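
Verify that the modules are loaded and the sysctls took effect, as a sketch:

lsmod | grep -e overlay -e br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward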

Note that if we want multiple control-plane nodes (HA), we should set --control-plane-endpoint (done here via controlPlaneEndpoint in the config below).

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network

sudo kubeadm init --config kubeadm-config.yaml

These settings can be passed in the config file referenced above:

# kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  criSocket: "unix:///var/run/cri-dockerd.sock"
---
kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
kubernetesVersion: v1.26.1
controlPlaneEndpoint: "10.10.70.2:6443"
networking:
  podSubnet: "10.10.70.0/24"
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
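
The configuration can be sanity-checked with a dry run before the real init (a sketch; the output is verbose but should finish without errors):

sudo kubeadm init --config kubeadm-config.yaml --dry-run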

Now get the kubeconfig:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
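
At this point kubectl should reach the API server; the node will show NotReady until a CNI is installed. A quick sketch:

kubectl get nodes
kubectl get pods -n kube-system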

To join the worker nodes (we will not need this with Metal3 and CAPI; use for testing only):

sudo kubeadm join 10.10.70.2:6443 --token ryobhu.ea5juagcnrkrxhem --discovery-token-ca-cert-hash sha256:316b82590a1aa694a1331338c8329dd60af28fa7585ca62d31f52ac19d75fc5d --cri-socket unix:///var/run/cri-dockerd.sock
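
The bootstrap token above expires (by default after 24 hours). A fresh join command can be printed on the control plane with the sketch below; remember to append the --cri-socket flag when joining with cri-dockerd.

kubeadm token create --print-join-command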

Now install the Calico CNI. Note this manifest is suitable for fewer than 50 nodes; for larger clusters see https://docs.tigera.io/calico/3.25/getting-started/kubernetes/self-managed-onprem/onpremises#install-calico-with-kubernetes-api-datastore-more-than-50-nodes

curl https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml -O
kubectl apply -f calico.yaml
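
Wait until the Calico pods are Running and the node turns Ready, for example:

kubectl get pods -n kube-system -w
kubectl get nodes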

Untaint the control-plane node so workloads can be scheduled on it:

kubectl taint nodes --all node-role.kubernetes.io/control-plane-
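
To confirm the taint is gone, a quick sketch:

kubectl describe node | grep -i taints   # should show <none> for the control-plane node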