Note: the setup below currently does not work on Raspbian Buster Lite (Debian 10). The kernel is compiled without CONFIG_CFS_BANDWIDTH, so pods fail to spawn because runc tries to write to cpu.cfs_period_us in the pod's cgroup. The file does not exist, and attempting to create it yields a permission denied error.
Example:
open /sys/fs/cgroup/cpu,cpuacct/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a1fe9eafc113856b2d4d409800ef99f.slice/crio-211c0bcc45f43e085415cff3736e38a552ee92657d879d4235f02a7d4dee097f.scope/cpu.cfs_period_us: permission denied
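You can check up front whether your kernel was built with CFS bandwidth control (a quick sketch; it assumes the configs module that exposes /proc/config.gz is available on your kernel):

modprobe configs
zcat /proc/config.gz | grep CFS_BANDWIDTH

On an affected kernel this prints # CONFIG_CFS_BANDWIDTH is not set.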
This gist describes a possible IoT/Edge computing setup using Kubernetes-style declarative management. It utilizes a standalone kubelet + CRI-O + CNI on a Raspberry Pi running Raspbian 10 (Debian Buster). The goal is to place a Kubernetes pod manifest on a single node and access the application from the network.
apt-get update && apt-get -y upgrade && reboot
apt-get install -y software-properties-common
Add the Project Atomic PPA manually, since there is no distribution template for Debian Buster yet:
cat > /etc/apt/sources.list.d/projectatomics.list <<EOF
deb http://ppa.launchpad.net/projectatomic/ppa/ubuntu xenial main
deb-src http://ppa.launchpad.net/projectatomic/ppa/ubuntu xenial main
EOF
Add the PPA's public key from the Ubuntu keyserver and refresh the package index:
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 8BECF1637AD8C79D
apt-get update
Load the required kernel modules for CRI-O:
modprobe overlay
modprobe br_netfilter
Enable auto-loading of the required kernel modules on boot, and configure kernel tunables to allow IP forwarding and iptables processing of bridged traffic:
cat > /etc/modules-load.d/crio.conf <<EOF
overlay
br_netfilter
EOF
cat > /etc/sysctl.d/99-kubernetes-cri.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
Finally, apply the kernel tunables:
sysctl --system
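To verify that the settings are active, query the keys directly; each should report a value of 1:

sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables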
Install CRI-O:
apt-get -y install cri-o-1.15
Update crio.conf to point to conmon in the correct location:
@@ -88,7 +88,7 @@
no_pivot = false
# Path to the conmon binary, used for monitoring the OCI runtime.
-conmon = "/usr/libexec/crio/conmon"
+conmon = "/usr/bin/conmon"
# Environment variable list for the conmon process, used for passing necessary
# environment variables to conmon or the runtime.
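The same change can be made non-interactively, e.g. with sed (a sketch; it assumes the config lives at the default /etc/crio/crio.conf and still contains the stock path):

sed -i 's|conmon = "/usr/libexec/crio/conmon"|conmon = "/usr/bin/conmon"|' /etc/crio/crio.conf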
Add registries to the CRI-O configuration:
@@ -255,11 +255,10 @@
# compatibility reasons. Depending on your workload and usecase you may add more
# registries (e.g., "quay.io", "registry.fedoraproject.org",
# "registry.opensuse.org", etc.).
-registries = [
- "quay.io",
- "docker.io",
- "registry.access.redhat.com"
-]
+#registries = [
+# "quay.io",
+# "docker.io",
+#]
Install crictl from GitHub releases:
curl -Ls https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.17.0/crictl-v1.17.0-linux-arm.tar.gz | tar xvz
mv crictl /usr/sbin/
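Optionally, point crictl at the CRI-O socket so you do not have to pass --runtime-endpoint on every invocation (a sketch; the socket path matches the kubelet configuration used further below):

cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///var/run/crio/crio.sock
EOF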
Delete the default CNI config:
rm /etc/cni/net.d/*.conf
Place the following file into /etc/cni/net.d/100-crio-bridge.conflist:
{
"cniVersion": "0.3.1",
"name": "bridge-firewalld",
"plugins": [
{
"type": "bridge",
"bridge": "cni0",
"isDefaultGateway": true,
"isGateway": true,
"ipMasq": true,
"ipam": {
"type": "host-local",
"subnet": "10.88.0.0/16",
"routes": [
{
"dst": "0.0.0.0/0"
}
]
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
The above configuration causes CNI to create the Linux bridge cni0 and attach veth pairs between the host and the containers. The containers receive IPs from 10.88.0.0/16 in the process. The bridge acts as the gateway, and IP masquerading is configured to allow containers to reach networks external to the host (e.g. the internet). Port mapping and firewalld rule manipulation are performed as well.
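Once a pod is running, you can inspect the bridge and the NAT rules the plugins set up:

ip addr show cni0
iptables -t nat -L POSTROUTING -n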
Install the signing keys for the Kubernetes repository:
apt install curl -y && curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
Enable the Kubernetes repository as a source in /etc/apt/sources.list.d/kubernetes.list:
deb https://apt.kubernetes.io/ kubernetes-xenial main
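For example, using the same heredoc pattern as above:

cat > /etc/apt/sources.list.d/kubernetes.list <<EOF
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF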
Finally, install the kubelet:
apt-get update && apt-get -y install kubelet
The kubelet systemd service will start immediately but fail, since it has no default configuration yet; we will create one in the next step.
Create the following /etc/default/kubelet in order to:
- use systemd to manage cgroups
- not fail on swap space enabled
- enable static pod manifests stored on disk
- enable the use of runc through CRI-O as the container runtime
KUBELET_EXTRA_ARGS=--cgroup-driver=systemd --fail-swap-on=false --pod-manifest-path=/etc/kubernetes/manifests --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --runtime-request-timeout=10m
Create the pod manifest directory:
mkdir -p /etc/kubernetes/manifests
Copy the default systemd unit file for the kubelet to the designated override location:
cp /lib/systemd/system/kubelet.service /etc/systemd/system/kubelet.service
Modify /etc/systemd/system/kubelet.service as follows:
@@ -1,9 +1,12 @@
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/home/
+After=crio.service
+Requires=crio.service
[Service]
-ExecStart=/usr/bin/kubelet
+EnvironmentFile=/etc/default/kubelet
+ExecStart=/usr/bin/kubelet $KUBELET_EXTRA_ARGS
Restart=always
StartLimitInterval=0
RestartSec=10
This adds CRI-O as a start-up dependency for the kubelet and reads the KUBELET_EXTRA_ARGS environment variable from /etc/default/kubelet.
Reload systemd:
systemctl daemon-reload
And start the kubelet:
systemctl start kubelet
Verify the kubelet status:
systemctl status kubelet
Verify the status of CRI-O:
systemctl status crio
Verify that both the runtime and CNI are ready:
crictl info
You should see the following:
{
"status": {
"conditions": [
{
"type": "RuntimeReady",
"status": true,
"reason": "",
"message": ""
},
{
"type": "NetworkReady",
"status": true,
"reason": "",
"message": ""
}
]
}
}
When successful, place the following example pod manifest in /etc/kubernetes/manifests/echoserver.yaml:
apiVersion: v1
kind: Pod
metadata:
name: echoserver
spec:
containers:
- name: echoserver
image: gcr.io/google-containers/echoserver:1.10
ports:
- name: web
containerPort: 8080
hostPort: 9091
protocol: TCP
resources:
limits:
cpu: "100m"
memory: "50Mi"
Verify that the pod is running:
crictl ps -o table
You should see the container running:
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
99cb17ca96800 365ec60129c5426b4cf160257c06f6ad062c709e0576c8b3d9a5dcc488f5252d 11 minutes ago Running echoserver 2 adfbfd4a31754
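The pod sandbox itself can be listed as well:

crictl pods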
You should see the IP address of the container/pod in the source list of the trusted zone.
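Assuming firewalld is running, query the zone with the following command; the output below is an example:

firewall-cmd --zone=trusted --list-all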
trusted (active)
target: ACCEPT
icmp-block-inversion: no
interfaces:
sources: 10.88.0.18/32
services:
ports:
protocols:
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:
You should be able to curl the container on its container port:
curl http://10.88.0.18:8080
You should also be able to curl the container on its host port from another system:
curl http://<host-ip>:9091
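If you are unsure which address to use for <host-ip>, you can print the host's configured IPs on the Pi itself:

hostname -I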