- You have three nodes running Ubuntu 24.04 LTS made available to you.
- You have an account on all your nodes with sudo access.
- You have the IP addresses for your nodes.
Here’s a quick 4-step guide to adding your node entries to /etc/hosts with the Nano editor:
- Open the hosts file

sudo nano /etc/hosts

(pico or vim work just as well.) This opens /etc/hosts in Nano with root privileges.

- Add your node lines. Use the arrow keys to move the cursor to the bottom (or wherever you like) and type, for example (the letter at the end of each address is not a valid IP; it is a placeholder flagging you to substitute your real addresses):

10.20.30.a node1
10.20.30.b node2
10.20.30.c node3
10.20.30.d node4
- Save your changes
  - Press Ctrl + O (to “Write Out” the file)
  - Press Enter (to confirm the filename /etc/hosts)
- Exit Nano
  - Press Ctrl + X (to quit Nano)
You can now test name resolution with:

ping -c1 node1

…and you should see replies from 10.20.30.a (or whatever IP address you put in).
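If you prefer not to type the entries by hand, the same lines can be generated with a small loop. This is a sketch: the 10.20.30.x addresses below are placeholders standing in for your real IPs, just like the a/b/c/d letters above.

```shell
#!/usr/bin/env bash
# Placeholder addresses - replace these with your nodes' real IPs first.
IPS=(10.20.30.1 10.20.30.2 10.20.30.3 10.20.30.4)
NAMES=(node1 node2 node3 node4)

# Print one "IP hostname" pair per line for review.
for i in "${!IPS[@]}"; do
    printf '%s %s\n' "${IPS[$i]}" "${NAMES[$i]}"
done
```

Once the output looks right, append it to the hosts file with `sudo tee -a /etc/hosts` rather than editing by hand.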
To get passwordless SSH and passwordless sudo on all three of your Ubuntu nodes (replace test with whatever username you have):
# If you already have ~/.ssh/id_ed25519 and ~/.ssh/id_ed25519.pub you can skip this.
ssh-keygen -t ed25519 -C "deploy key" -f ~/.ssh/id_ed25519
Just hit Enter at each prompt to accept defaults (and leave the passphrase blank).
for N in node1 node2 node3; do
ssh-copy-id -i ~/.ssh/id_ed25519.pub test@"$N"
done
You’ll be prompted for your test user’s password once per host; afterwards you can ssh test@node1
(etc.) without a password.
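As an optional convenience (an addition of mine, not required by the steps above), a ~/.ssh/config stanza lets a plain `ssh node1` pick the right user and key automatically:

```shell
#!/usr/bin/env bash
# Optional convenience: make `ssh node1` use the test user and the
# ed25519 key from the step above without typing them each time.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
cat <<'EOF' >> ~/.ssh/config
Host node1 node2 node3
    User test
    IdentityFile ~/.ssh/id_ed25519
EOF
chmod 600 ~/.ssh/config
```

Note this appends to the file, so run it only once (or check for duplicate stanzas).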
On each node, run the command below. (The sudo belongs on tee, not on echo: echo needs no privileges, and a sudo in front of echo would not carry root through the pipe to the file write.)

echo "test ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/test
- From your laptop:

ssh test@node2 uptime # should connect without asking for a password

- On the node:

sudo whoami # should print “root” without asking for a sudo password
That’s it—now your test user can SSH into any node and run sudo without ever typing a password.
First things first: we are building a distributed system, and your nodes need to agree on one crucial value: TIME.
So do the following tasks on each node:
- Set the system timezone to Pretoria (Africa/Johannesburg).
- Install and enable NTP time-sync with Chrony.
- Verify that they’re all synchronized.
On all three of your nodes and the control node, run:
sudo apt update
sudo apt install -y chrony
sudo timedatectl set-timezone Africa/Johannesburg
This makes the OS run on South African Standard Time (UTC+2) and drops in the Chrony NTP daemon.
sudo systemctl enable --now chrony
This ensures Chrony will start at boot and begin polling NTP servers right away.
# Should show “yes” and a healthy offset
timedatectl status
# More detail on NTP peers and offsets
chronyc sources -v
You should see something like
Local time: Sat 2025-06-21 18:47:13 SAST
Universal time: Sat 2025-06-21 16:47:13 UTC
RTC time: Sat 2025-06-21 16:47:13
Time zone: Africa/Johannesburg (SAST, +0200)
System clock synchronized: yes
NTP service: active
RTC in local TZ: no
In the chronyc sources -v output, look for a “*” next to one of the NTP servers and small offsets (<50 ms).
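That manual check can also be scripted. Here is a small sketch, assuming chrony’s usual sources layout in which the currently selected server’s line begins with ^*:

```shell
#!/usr/bin/env bash
# check_sync: read `chronyc sources` output on stdin and succeed only if
# a currently-selected (^*) NTP source is present.
check_sync() {
    awk '$1 == "^*" { found = 1 } END { exit !found }'
}

# Typical use on a node:
#   chronyc sources | check_sync && echo "synchronized"
```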
Before installing RKE2, check that AppArmor is installed (on hosts with AppArmor enabled in the kernel, RKE2 needs the apparmor_parser tool):

apt list --installed apparmor

If you see apparmor in the output, the package is installed; if not, you’re missing it.
This will drop the parser binary in the correct location and register it with dpkg:
sudo apt update
sudo apt install --reinstall apparmor
which apparmor_parser
You should now see something like /usr/sbin/apparmor_parser.
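The two checks above can be combined into one guarded snippet. This is a sketch; the reinstall command is only printed, not executed, so you can review it first:

```shell
#!/usr/bin/env bash
# Report whether the AppArmor parser is on the PATH, and suggest the
# reinstall from above if it is not.
if command -v apparmor_parser >/dev/null 2>&1; then
    msg="apparmor_parser present at $(command -v apparmor_parser)"
else
    msg="apparmor_parser missing - run: sudo apt install --reinstall apparmor"
fi
echo "$msg"
```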
On Ubuntu 24.04 (or any Linux host) you need to do a few one-time prep steps on each of your three nodes before you install RKE2:
- Update & install core tooling

sudo apt update
sudo apt install -y \
  curl \
  apt-transport-https \
  ca-certificates \
  conntrack \
  iptables \
  ebtables \
  socat
- Disable swap

sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
Kubernetes (and thus RKE2) requires swap to be off so the kubelet can correctly track memory.
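To see exactly what the sed command above does before pointing it at the real /etc/fstab, you can rehearse it on a throwaway file (the sample fstab contents below are made up for illustration):

```shell
#!/usr/bin/env bash
# Rehearse the swap-commenting edit on a temp file instead of /etc/fstab.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
UUID=1111-2222 / ext4 defaults 0 1
/swap.img none swap sw 0 0
EOF

# Same expression as above: prefix '#' to any line containing " swap ".
sed -i '/ swap / s/^/#/' "$tmp"
cat "$tmp"
```

Only the swap line comes back commented out; the root filesystem entry is untouched.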
- Load required kernel modules

sudo modprobe overlay
sudo modprobe br_netfilter
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
These modules are needed for container networking (including Cilium’s eBPF datapath).
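A quick way to confirm the two modprobe calls above took effect is to read /proc/modules (note that a driver compiled into the kernel rather than built as a module will not appear there even though it works):

```shell
#!/usr/bin/env bash
# Check each required module against /proc/modules.
status=""
for m in overlay br_netfilter; do
    if grep -q "^$m " /proc/modules 2>/dev/null; then
        status="$status $m=loaded"
    else
        status="$status $m=missing"
    fi
done
echo "status:$status"
```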
- Configure sysctl for networking

cat <<EOF | sudo tee /etc/sysctl.d/99-k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
This ensures that bridged pod traffic is visible to iptables and enables IPv4 forwarding.
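After sysctl --system you can read the values straight back out of /proc to confirm they took effect (the two bridge entries only exist once br_netfilter is loaded):

```shell
#!/usr/bin/env bash
# Print each expected setting, or flag it if the /proc entry is absent.
for f in net/ipv4/ip_forward \
         net/bridge/bridge-nf-call-iptables \
         net/bridge/bridge-nf-call-ip6tables; do
    if [ -r "/proc/sys/$f" ]; then
        printf '%s = %s\n' "$f" "$(cat "/proc/sys/$f")"
    else
        printf '%s: not present (is br_netfilter loaded?)\n' "$f"
    fi
done
```

All three should print 1 on a correctly prepared node.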
- [OPTIONAL] Tell NetworkManager to ignore CNI interfaces. Ubuntu 24.04 does not normally ship with NetworkManager, and you can confirm with the command below:

systemctl is-active NetworkManager && echo "NM is running" || echo "No NM"

If you get the lines below, then you are good to go:

inactive
No NM
However, if your Ubuntu image does ship with NetworkManager enabled, then consider this step:

cat <<EOF | sudo tee /etc/NetworkManager/conf.d/99-rke2-cni.conf
[keyfile]
unmanaged-devices=interface-name:cni*;interface-name:flannel*;interface-name:calico*
EOF
sudo systemctl reload NetworkManager
This prevents NM from “helpfully” reconfiguring the veth/tun interfaces used by your CNI.
- Open (or disable) your firewall for the required ports. Either disable UFW (sudo ufw disable) for NOW (recommended while you bootstrap), or allow at least:
- TCP 6443 – Kubernetes API
- TCP 9345 – RKE2 server registration
- UDP 8472 – VXLAN (Cilium may use other ports)
- TCP 10250 – kubelet metrics
- TCP 2379-2381 – etcd client/peer/metrics
- TCP 30000-32767 – NodePort range
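If you would rather keep UFW enabled, the port list above translates to the following rules (a sketch; note that UFW writes port ranges with a colon):

```shell
sudo ufw allow 6443/tcp        # Kubernetes API
sudo ufw allow 9345/tcp        # RKE2 server registration
sudo ufw allow 8472/udp        # VXLAN overlay
sudo ufw allow 10250/tcp       # kubelet metrics
sudo ufw allow 2379:2381/tcp   # etcd client/peer/metrics
sudo ufw allow 30000:32767/tcp # NodePort range
```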