@boniface
Last active June 28, 2025 21:41
Creating a 3-node K8s cluster with RKE2 and Cilium CNI

Assumptions

  • You have three nodes running Ubuntu 24.04 LTS.
  • You have an account with sudo access on each node.
  • You have the IP addresses of your nodes.

Here’s a quick 4-step guide to adding your node entries to /etc/hosts with a terminal editor:

  1. Open the hosts file

    sudo nano /etc/hosts

    (Substitute pico or vim if you prefer.) This opens /etc/hosts with root privileges.

  2. Add your node lines. Use the arrow keys to move the cursor to the bottom (or wherever you like) and type your entries, for example the ones below. Note that the letter at the end of each address is not a valid octet; it is a placeholder flagging you to substitute a correct IP:

    10.20.30.a   node1
    10.20.30.b   node2
    10.20.30.c   node3
    10.20.30.d   node4
    
  3. Save your changes

    • Press Ctrl + O (to “Write Out” the file)
    • Press Enter (to confirm the filename /etc/hosts)
  4. Exit Nano

    • Press Ctrl + X (to quit Nano)

You can now test name resolution with:

ping -c1 node1

…and you should see replies from 10.20.30.a, or whatever IP address you put there.
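If you prefer to script this step instead of editing by hand, the entries can be printed for review and then appended in one go. The IPs below are the same placeholders as above, so substitute your real addresses first:

```shell
# Sketch: print the hosts entries (placeholder IPs -- substitute your own).
for ENTRY in "10.20.30.a node1" "10.20.30.b node2" "10.20.30.c node3" "10.20.30.d node4"; do
  printf '%s\n' "$ENTRY"
done
# Once the output looks right, pipe the loop into:  sudo tee -a /etc/hosts
```

Printing first and appending second lets you catch a typo before it lands in /etc/hosts.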

Passwordless account

To set up passwordless SSH and passwordless sudo on all three of your Ubuntu nodes (replace test with whatever username you have):


1. Generate an SSH keypair on your laptop or node 4 (if you don’t already have one)

# If you already have ~/.ssh/id_ed25519 and ~/.ssh/id_ed25519.pub you can skip this.
ssh-keygen -t ed25519 -C "deploy key" -f ~/.ssh/id_ed25519

Just press Enter at each prompt to accept the defaults (and leave the passphrase blank).


2. Copy your public key to each node

for N in node1 node2 node3; do
  ssh-copy-id -i ~/.ssh/id_ed25519.pub test@"$N"
done

You’ll be prompted for your test user’s password once per host; afterwards you can ssh test@node1 (etc.) without a password.


3. Enable passwordless sudo on each node

On each node, run:

echo "test ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/test
sudo chmod 440 /etc/sudoers.d/test   # sudoers.d files should be mode 0440

4. Verify

  1. From your laptop:

    ssh test@node2 uptime     # should connect without asking password
  2. On the node:

    sudo whoami               # should print “root” without asking for sudo password

That’s it. Your test user can now SSH into any node and run sudo without ever typing a password.
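The whole setup can also be verified from your workstation in one loop. BatchMode makes ssh fail instead of prompting, and sudo -n fails instead of asking for a password, so any node that is not yet fully passwordless gets flagged. This sketch assumes the node1–node3 names and test user from above:

```shell
# Check every node: ssh must not prompt (BatchMode=yes) and sudo must not
# ask for a password (-n). A node that still prompts is reported as failing.
for N in node1 node2 node3; do
  if ssh -o BatchMode=yes -o ConnectTimeout=5 test@"$N" 'sudo -n whoami' >/dev/null 2>&1; then
    echo "$N: passwordless OK"
  else
    echo "$N: NOT passwordless yet"
  fi
done
```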


Time Synchronisation

First things first: we are building a distributed system, and your nodes need to agree on one crucial value: TIME.

So do the following tasks on each node:

  1. Set the system timezone to Pretoria (Africa/Johannesburg).
  2. Install and enable NTP time-sync with Chrony.
  3. Verify that they’re all synchronized.

1. Install Chrony and set the timezone

On all three of your nodes and the control node, run

sudo apt update
sudo apt install -y chrony
sudo timedatectl set-timezone Africa/Johannesburg

This makes the OS run on South African Standard Time (UTC+2) and drops in the Chrony NTP daemon.


2. Enable and start Chrony

sudo systemctl enable --now chrony

This ensures Chrony will start at boot and begin polling NTP servers right away.


3. Verify synchronization

# Should show “yes” and a healthy offset
timedatectl status

# More detail on NTP peers and offsets
chronyc sources -v

You should see something like

                 Local time: Sat 2025-06-21 18:47:13 SAST
             Universal time: Sat 2025-06-21 16:47:13 UTC
                   RTC time: Sat 2025-06-21 16:47:13
                  Time zone: Africa/Johannesburg (SAST, +0200)
  System clock synchronized: yes
                NTP service: active
            RTC in local TZ: no

In the chronyc sources -v output, look for a “*” next to one of the NTP servers and small offsets (<50 ms).


AppArmor

1. Check whether the AppArmor package is installed

apt list --installed apparmor

If you see apparmor in the output, the package is installed; if not, you’re missing it.


2. Re-install (or install) the AppArmor package

This will drop in the parser binary in the correct location and register it with dpkg:

sudo apt update
sudo apt install --reinstall apparmor

3. Verify it is installed

which apparmor_parser   

You should now see /usr/sbin/apparmor_parser printed.


Needed Tooling

Install tooling

On Ubuntu 24.04 (or any Linux host) you need to do a few one-time prep steps on each of your three nodes before you install RKE2:

  1. Update & install core tooling

    sudo apt update
    sudo apt install -y \
      curl \
      apt-transport-https \
      ca-certificates \
      conntrack \
      iptables \
      ebtables \
      socat 
  2. Disable swap

    sudo swapoff -a
    sudo sed -i '/ swap / s/^/#/' /etc/fstab

    Kubernetes (and thus RKE2) requires swap to be off so the kubelet can correctly track memory.

  3. Load required kernel modules

    sudo modprobe overlay
    sudo modprobe br_netfilter
    
    cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
    overlay
    br_netfilter
    EOF

    These modules are needed for container networking (including Cilium’s eBPF datapath).

  4. Configure sysctl for networking

    cat <<EOF | sudo tee /etc/sysctl.d/99-k8s.conf
    net.bridge.bridge-nf-call-iptables  = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.ipv4.ip_forward                 = 1
    EOF
    sudo sysctl --system

    This ensures that bridged pod traffic is visible to iptables and enables IPv4 forwarding.

  5. [OPTIONAL] Tell NetworkManager to ignore CNI interfaces. Ubuntu 24.04 server images usually do not ship with NetworkManager; you can check whether it is running with this command:

    systemctl is-active NetworkManager && echo "NM is running" || echo "No NM"

    If you get the lines below, you are good to go and can skip this step:

    inactive
    No NM

    However, if your Ubuntu image came with NetworkManager enabled, then apply this step:

    cat <<EOF | sudo tee /etc/NetworkManager/conf.d/99-rke2-cni.conf
    [keyfile]
    unmanaged-devices=interface-name:cni*;interface-name:flannel*;interface-name:calico*
    EOF
    sudo systemctl reload NetworkManager

    This prevents NM from “helpfully” reconfiguring the veth/tun interfaces used by your CNI.

  6. Open (or disable) your firewall for the required ports. Either disable UFW for now (sudo ufw disable; recommended while bootstrapping) or allow at least:

    • TCP 6443 – Kubernetes API
    • TCP 9345 – RKE2 server registration
    • UDP 8472 – VXLAN (Cilium may use other ports)
    • TCP 10250 – kubelet metrics
    • TCP 2379-2381 – etcd client/peer/metrics
    • TCP 30000-32767 – NodePort range
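If you keep UFW enabled, the list above translates to one allow rule per port or range. This loop only prints the commands so you can review them first (the port/protocol pairs are taken from the list above; 2379:2381 and 30000:32767 use UFW's colon syntax for ranges):

```shell
# Print (not run) the ufw commands for the required ports; run them one by
# one, or pipe the output to "sudo sh" once you have reviewed the list.
for RULE in 6443/tcp 9345/tcp 8472/udp 10250/tcp 2379:2381/tcp 30000:32767/tcp; do
  echo "ufw allow $RULE"
done
```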

Create script for RKE2 Installation
