UPDATED FOR UBUNTU 22.04
Update all packages and install CRI-O container runtime
sudo apt update
sudo apt upgrade -y
# Configure persistent loading of modules
sudo tee /etc/modules-load.d/k8s.conf <<EOF
overlay
br_netfilter
EOF
# Load the modules now, without waiting for a reboot
sudo modprobe overlay
sudo modprobe br_netfilter
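You can confirm both modules are loaded with lsmod:
# Each should print a matching line
lsmod | grep -E 'overlay|br_netfilter'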
# Set up required sysctl params
sudo tee /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# Reload sysctl
sudo sysctl --system
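It's worth verifying that the settings took effect; all three should come back as 1:
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward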
# Add the CRI-O repo
OS="xUbuntu_22.04"
VERSION=1.26
echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /" | sudo tee -a /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
echo "deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /" | sudo tee -a /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/Release.key | sudo apt-key add -
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | sudo apt-key add -
# Install CRI-O
sudo apt update
sudo apt install -y cri-o cri-o-runc
# Start and enable Service
sudo systemctl daemon-reload
sudo systemctl restart crio
sudo systemctl enable crio
systemctl status crio
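As an optional sanity check, you can talk to CRI-O directly with crictl. It isn't installed by the steps above, but the same repos provide it as the cri-tools package; the endpoint below assumes CRI-O's default socket path:
sudo apt install -y cri-tools
sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version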
Disable swap (the kubelet won't start with swap enabled):
sudo swapoff -a
sudo sed -i 's|.*swap.*||' /etc/fstab
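You can confirm swap is really off: swapon should print nothing and free should report 0B of swap.
swapon --show
free -h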
Next, install the Kubernetes tools (kubelet, kubeadm and kubectl) from the upstream apt repository:
sudo apt-get install -y apt-transport-https ca-certificates curl
sudo curl -fsSLo /etc/apt/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
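A quick check confirms what got installed and that the packages are held; the exact versions you see depend on what the repo currently ships:
kubeadm version
kubectl version --client
apt-mark showhold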
sudo kubeadm config images pull --cri-socket unix:///var/run/crio/crio.sock
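If you installed crictl earlier, you can list the control-plane images that were just pulled:
sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock images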
On the master node, we want to run:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
The --pod-network-cidr=10.244.0.0/16
option matches Flannel's default configuration - don't change that network address unless you also change Flannel's manifest to match (see the snippet below).
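For reference, the CIDR lives in a ConfigMap inside the kube-flannel.yml we apply later; roughly, the relevant piece looks like this (the exact layout can vary between Flannel releases):
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }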
Save the command it prints for joining nodes to the cluster, but we don't want to run it just yet. You should see a message like:
You can now join any number of machines by running the following on each node as root:
kubeadm join <IP>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
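If you lose that command later, you can generate a fresh one on the master at any time:
sudo kubeadm token create --print-join-command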
Set up cluster access as a normal user. This part, I realized, was pretty important, as kubectl doesn't like to play well when you do everything as root.
mkdir -p ~/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
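At this point kubectl should be able to reach the API server as your user:
kubectl cluster-info
# The master will show NotReady until the pod network is installed
kubectl get nodes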
We need to install the pod network before the cluster can come up, so we apply the latest YAML file that Flannel provides. Most installations will use the following:
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
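You can watch the Flannel daemonset come up; recent manifests create their own kube-flannel namespace (older ones put the pods in kube-system):
kubectl get pods -n kube-flannel -w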
At this point, give it about a minute, then have a look at the status of the cluster. Run kubectl get pods --all-namespaces
and see what it comes back with. If everything shows Running, then you're in business!
Up to this point, we haven't really touched the worker nodes (other than installing the prerequisites), but now you can join them by running the command that was given to us when we created the cluster:
sudo kubeadm join <ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
We'll see more pods spinning up for the new nodes:
kubectl get pods --all-namespaces
NAMESPACE      NAME                    READY   STATUS    RESTARTS   AGE
...
kube-flannel   kube-flannel-ds-fldtn   0/1     Pending   0          3s
...
kube-system    kube-proxy-c8s32        0/1     Pending   0          3s
And to confirm, when we do a kubectl get nodes, we should see something like:
NAME      STATUS   ROLES           AGE   VERSION
server1   Ready    control-plane   46m   v1.26.0
server2   Ready    <none>          3m    v1.26.0
server3   Ready    <none>          2m    v1.26.0
By default, no workloads will run on the master (control-plane) node, and you usually want that in a production environment. In my case, since I'm using it for development and testing, I want to allow containers to run on the master node as well. This is done by removing the control-plane taint from the host.
On the master, we can run the command kubectl taint nodes --all node-role.kubernetes.io/control-plane-
and allow the master to run workloads as well. (On clusters older than v1.25 the taint was named node-role.kubernetes.io/master instead.)
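To confirm the taint is gone (server1 is just the example master name from the output above):
kubectl describe node server1 | grep -i taints
# Should report: Taints: <none>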