I purchased three 8GB Raspberry Pi 5s with 128GB 96MB/s microSD cards to build my first Kubernetes cluster, and as expected, I ran into a lot of problems with the newness of the Pi 5. I have been working with Kubernetes professionally for exactly one year now, so for me this was about learning how to create my own cluster from the ground up and filling in the gaps in my knowledge of the infrastructure that has been provided to me.
I have since upgraded this cluster to 7 nodes with 512GB storage each.
Many tutorials and examples exist on building a Raspberry Pi K8S cluster. The problems I ran into were mostly in two categories:
- Lack of support for Raspberry Pi 5
- Examples with deprecated features
A lot of what I did here was taken from many different tutorials and guides, and then I used the documentation for each component to figure out how the deprecated features could be made to work in the same or a similar way. Keep in mind that most of these steps should still work for any ARM64 Raspberry Pi, and there's no reason why you can't mix and match. However, this guide is designed to leverage the current support for Raspberry Pi 5.
This tutorial uses the following components:
- Ubuntu Server 24.04.2 LTS for the OS (64-bit required)
- K3s (v1.31) for the Kubernetes distribution
- MetalLB for load balancer
- nginx for ingress
- cert-manager for certificate issuer
- Rancher for management
K3s requires only one instance at minimum; however, I recommend at least three Raspberry Pis to provide a base level of resiliency. The primary Pi will be the control plane running the K3s server, and the secondary Pis will be the worker nodes running the K3s agent. I highly recommend setting up each Pi with an SSH connection that you can authenticate with through your local network. However, that will not be covered in this guide, and using a monitor and keyboard is still valid.
- Flash the microSD card for each Raspberry Pi with Ubuntu 24.04.2 Server 64-bit
- Assign a static or reserved IP address for the control plane
- Use Snap to install helm on the control plane
- Give each node a unique hostname. For this example I labeled them ubuntu-k8s-01, ubuntu-k8s-02, and ubuntu-k8s-03 (see the example commands after this list)
- Add the prerequisite Helm chart repositories on the control plane:
helm repo add metallb https://metallb.github.io/metallb
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
helm repo update
Once these steps are complete, ensure that you are able to connect to the control plane from the worker nodes by pinging it.
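As an example, here is a minimal sketch of the Helm install, hostname, and connectivity checks above. The hostnames and the control plane IP are placeholders from this guide, so substitute your own:
# On the control plane only: install Helm via Snap
sudo snap install helm --classic
# On every node: set a unique hostname (ubuntu-k8s-01, -02, -03 in this example)
sudo hostnamectl set-hostname ubuntu-k8s-01
# From each worker node: confirm the control plane is reachable
ping -c 3 <CONTROL_NODE_IP_ADDRESS>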
On the control plane run the following commands:
export K3S_KUBECONFIG_MODE="644"
export INSTALL_K3S_EXEC="--disable traefik --disable servicelb"
export INSTALL_K3S_VERSION="v1.31.2+k3s1"
export K3S_NODE_NAME="<HOSTNAME>" # ie. ubuntu-k8s-01
curl -sfL https://get.k3s.io | sh -s -
# Get the token from the control node and save this for the next steps
sudo cat /var/lib/rancher/k3s/server/node-token
On the worker nodes run the following commands:
export K3S_TOKEN="<TOKEN>"
export K3S_URL="https://<CONTROL_NODE_IP_ADDRESS>:6443"
export K3S_NODE_NAME="<HOSTNAME>" # ie. ubuntu-k8s-02, ubuntu-k8s-03 ...
export INSTALL_K3S_VERSION="v1.31.2+k3s1"
curl -sfL https://get.k3s.io | sh -s -
Go back to the control plane and check on the progress. It may take a minute or two for the worker nodes to register. kubectl is installed automatically by the K3s script.
kubectl get nodes
NAME STATUS ROLES AGE VERSION
ubuntu-k8s-01 Ready control-plane,master 2m53s v1.31.2+k3s1
ubuntu-k8s-02 Ready <none> 26s v1.31.2+k3s1
ubuntu-k8s-03 Ready <none> 11s v1.31.2+k3s1
Once all of your nodes have the ready status, use the following command for each worker node to label them:
kubectl label nodes <HOSTNAME> kubernetes.io/role=worker
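If you have more than a couple of workers, a small loop saves some typing (the hostnames here are just this guide's examples):
for node in ubuntu-k8s-02 ubuntu-k8s-03; do
  kubectl label nodes "$node" kubernetes.io/role=worker
done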
Run kubectl get nodes again to verify the roles:
NAME STATUS ROLES AGE VERSION
ubuntu-k8s-01 Ready control-plane,master 5m31s v1.31.2+k3s1
ubuntu-k8s-02 Ready worker 3m4s v1.31.2+k3s1
ubuntu-k8s-03 Ready worker 2m49s v1.31.2+k3s1
At this point you can log out of the worker nodes. Everything from here will be done on the control plane.
You will need to reserve a range of IPs that are outside of the DHCP pool for this step. For my simple home router setup I chose the range 192.168.0.200-192.168.0.250, which gives me a load balancer pool of 51 IP addresses.
Create a file named metallb-ip-range.yaml and populate it with the following YAML:
# metallb-ip-range.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
  - <IP_RANGE_LOWER>-<IP_RANGE_UPPER> # ie. 192.168.0.200-192.168.0.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
  - default-pool
Install MetalLB with the following command:
helm upgrade metallb metallb/metallb --install --create-namespace --namespace metallb-system --wait
Deploy the IP Address Pool:
kubectl apply -f metallb-ip-range.yaml
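To sanity-check the install, you can confirm that the MetalLB pods are running and that the pool and advertisement from the manifest above were created:
kubectl get pods -n metallb-system
kubectl get ipaddresspools.metallb.io -n metallb-system
kubectl get l2advertisements.metallb.io -n metallb-system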
Install the NGINX ingress controller with the following command:
helm upgrade ingress-nginx ingress-nginx/ingress-nginx --install --create-namespace --namespace ingress-nginx --wait
Once this is finished, your LoadBalancer service should be populated with an external IP from the IP range in the last section. Run kubectl get svc -n ingress-nginx:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller-admission ClusterIP 10.43.211.147 <none> 443/TCP 28s
ingress-nginx-controller LoadBalancer 10.43.50.249 192.168.0.200 80:31799/TCP,443:32238/TCP 28s
Now you will want to set up port forwarding on your router so that traffic arriving on your WAN IP is forwarded to the LoadBalancer EXTERNAL-IP from the output of the last step. In this case, my LoadBalancer IP is 192.168.0.200.
- Port 80 -> 192.168.0.200:80
- Port 443 -> 192.168.0.200:443
Create a DNS record that points to your public WAN IP address. This is usually an A record (pointing directly to the IP address) or a CNAME record (pointing to an existing A record for your public IP address).
- A rancher.example.com <PUBLIC_IP_ADDRESS>
- CNAME rancher.example.com wan.example.com
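Before moving on to certificates, it's worth verifying that the record resolves and that the port forward reaches the ingress controller. A rough check, assuming dig is available and using the placeholder hostname from this guide; a 404 from nginx is expected at this point since nothing is routed yet, and testing from inside your own network may require hairpin NAT support on your router:
dig +short rancher.example.com
curl -I http://rancher.example.com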
Install the cert-manager CRDs and controller in one step:
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.17.0/cert-manager.yaml
Deploy the production and staging ClusterIssuers. Create two YAML files, certmanager-clusterissuer-staging.yaml and certmanager-clusterissuer-production.yaml:
# certmanager-clusterissuer-staging.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
  namespace: cert-manager
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: [email protected]
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
    - http01:
        ingress:
          ingressClassName: nginx
# certmanager-clusterissuer-production.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-production
  namespace: cert-manager
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: [email protected]
    privateKeySecretRef:
      name: letsencrypt-production
    solvers:
    - http01:
        ingress:
          ingressClassName: nginx
Deploy the ClusterIssuers:
kubectl apply -f certmanager-clusterissuer-staging.yaml
kubectl apply -f certmanager-clusterissuer-production.yaml
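Both issuers should register with Let's Encrypt and report Ready within a minute or so; you can check with:
kubectl get clusterissuer letsencrypt-staging letsencrypt-production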
Create a file called rancher-values.yaml and populate it with the following code:
# rancher-values.yaml
hostname: rancher.example.com # From the DNS record created in previous steps
ingress:
  tls:
    source: secret
  extraAnnotations:
    cert-manager.io/cluster-issuer: letsencrypt-production
  ingressClassName: nginx
letsEncrypt:
  email: [email protected] # Use the email address you wish to receive certificate expiration alerts
  ingress:
    class: nginx
Install Rancher with the following command:
helm upgrade rancher rancher-latest/rancher --install --create-namespace --namespace cattle-system --values rancher-values.yaml
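The Rancher rollout can take several minutes on Raspberry Pi hardware; you can watch its progress with:
kubectl -n cattle-system rollout status deploy/rancher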
Here are some commands to use to check on the status of your certificate request:
kubectl -n cattle-system get certificate
kubectl -n cattle-system get issuer
kubectl -n cattle-system get certificaterequest
kubectl -n cattle-system describe certificaterequest tls-rancher-ingress-1 # from the output of the last command
This may take a while. Eventually the certificate will be in the ready state:
kubectl -n cattle-system get certificate
NAME READY SECRET AGE
ingress-nginx-tls True ingress-nginx-tls 7m23s
Now you can generate a URL to authenticate with the Rancher Dashboard:
echo https://rancher.example.com/dashboard/?setup=$(kubectl get secret --namespace cattle-system bootstrap-secret -o go-template='{{.data.bootstrapPassword|base64decode}}')
This will output the full URL. Paste it into your browser and complete the initial Rancher setup.
Once you are set up with Rancher, go to Cluster Tools and install some helpful utilities:
- Monitoring - Get valuable insights on resource usage and allocation, network activity, and more with Prometheus and Grafana.
- Longhorn - Creates a storage class for managed persistent volumes. Out of the box, this will create three replicas for every persistent volume claim, so if you lose a worker node, Longhorn will automatically restore the persistent volumes from a replica (see the example claim after this list).
- Rancher Backups - Save backups of your Rancher installation that can be restored after a catastrophic failure of the control plane.
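As a rough illustration of using Longhorn once it is installed, a PersistentVolumeClaim against the longhorn storage class might look like the sketch below; the claim name and size are hypothetical placeholders:
# example-longhorn-pvc.yaml (hypothetical example)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 1Gi
Any pod that mounts this claim gets a Longhorn-backed volume replicated across your worker nodes.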