Single (soon to be multi) node kubernetes backed by ceph storage on Raspberry Pi 4

Hardware

4 * Raspberry Pi 4B 4GB
1 * Sandisk Extreme A1 32GB
3 * Sandisk Ultra A1 32GB
4 * TOPK Type C Cable QC 3.0 Fast Charge
4 * CSL - Flat Ethernet Cable Cat6 0.25m
1 * Anker PowerPort 60 W 6-Port USB Charger with PowerIQ
1 * Edimax ES-5500G V3 Gigabit Ethernet 5 Port
3 * Sabrent M.2 SSD USB 3.0 Enclosure UASP Support (EC-M2MC)
3 * Kingston SSD A400 M.2 Solid State Drive, 120 GB

All from Amazon.co.uk, apart from the Pi 4s. Roughly £380 in total.

Voltages
Unscientific, but the Pi logs under-voltage warnings in dmesg if the supply can't keep up.
Under 100% CPU load while writing to the SSDs: 0 under-voltage warnings.
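A rough way to check is the firmware's throttle flags plus the kernel log (a sketch; assumes the stock Raspbian vcgencmd tool):

vcgencmd get_throttled   # throttled=0x0 means no under-voltage or throttling events so far
dmesg | grep -i voltage  # prints nothing on a healthy supply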

System Prep

Kernel

Enable the kernel options needed for rbd (ceph) + docker

sudo apt-get install git bc bison flex libssl-dev
git clone --depth=1 https://github.com/raspberrypi/linux
cd linux
sudo modprobe configs
gunzip -c /proc/config.gz > .config
KERNEL=kernel7l
echo "CONFIG_BLK_DEV_RBD=y" >> .config
echo "CONFIG_CGROUP_NET_PRIO=y" >> .config
echo "CONFIG_CEPH_LIB=y" >> .config
make -j4 zImage modules dtbs
sudo make modules_install
sudo cp arch/arm/boot/dts/*.dtb /boot/
sudo cp arch/arm/boot/dts/overlays/*.dtb* /boot/overlays/
sudo cp arch/arm/boot/dts/overlays/README /boot/overlays/
sudo cp arch/arm/boot/zImage /boot/$KERNEL.img
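
After rebooting into the new kernel, a quick sanity check that the options made it in (a sketch; the configs module must be loaded for /proc/config.gz to exist):

uname -r                                  # should show the freshly built kernel version
sudo modprobe configs
zgrep CONFIG_BLK_DEV_RBD /proc/config.gz  # expect CONFIG_BLK_DEV_RBD=y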

networking

Create a second, static IP on eth0; hostname/IP changes break everything. The 1st node is 10.0.0.2.
Eventually we'll have a controller host running DHCP and PXE, once that's supported on the Pi.

cat<<EOF | sudo tee /etc/network/interfaces.d/management
auto eth0:0
iface eth0:0 inet static
address 10.0.0.2
netmask 255.255.0.0
EOF
sudo ifup eth0:0
echo -e "\n10.0.0.2\t node1" | sudo tee -a /etc/hosts
sudo hostname node1
echo node1 | sudo tee /etc/hostname
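
A quick check that the alias and hostname stuck (sketch):

ip addr show eth0 | grep 10.0.0.2   # the eth0:0 alias should be listed
getent hosts node1                  # should resolve to 10.0.0.2
hostname                            # node1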

Disable Swap

sudo dphys-swapfile swapoff
sudo systemctl --now disable dphys-swapfile

enable cgroups

sudo sed -i 's/$/ cgroup_enable=cpuset cgroup_enable=memory/' /boot/cmdline.txt

reboot

sudo reboot
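
Once it's back, a rough check that the cgroups are actually enabled (sketch):

cat /boot/cmdline.txt                  # should end with the two cgroup_enable flags
grep -E 'cpuset|memory' /proc/cgroups  # the 'enabled' column should be 1 for both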

Ceph

Raspbian Buster ships with Ceph Luminous.

single node test

create hokey loopback block devices for the OSDs (NOT NEEDED WITH SSDS)

sudo mkdir -p /srv/ceph/osd
for i in {0..2};do 
  sudo dd if=/dev/zero of=/srv/ceph/osd/$i bs=$((1024*1024)) count=2048 # 2 GB backing file
  sudo losetup /dev/loop$i /srv/ceph/osd/$i
  sudo pvcreate /dev/loop$i
  sudo vgcreate vg_osd$i /dev/loop$i
  sudo lvcreate -l 100%FREE -n lv_osd$i vg_osd$i
  sudo sed -i "/exit 0/i losetup /dev/loop$i /srv/ceph/osd/$i" /etc/rc.local
done
sudo lvdisplay

setup ceph

Canonical docs: http://docs.ceph.com/docs/luminous/start/quick-ceph-deploy/

errata
This just makes deploying to the same node easier

ssh-keygen -b 2048 -t rsa -f $HOME/.ssh/id_rsa -q -N ""
sudo groupadd -g 10030 ceph
sudo useradd -u 10030 -g ceph -m -d /var/lib/ceph -s /bin/bash ceph
sudo mkdir -p /var/lib/ceph/.ssh
cat $HOME/.ssh/id_rsa.pub | sudo tee -a /var/lib/ceph/.ssh/authorized_keys
sudo chown -R ceph:ceph /var/lib/ceph
sudo chmod -R g-rwx,o-rwx /var/lib/ceph/.ssh

install

sudo apt install ceph-deploy lvm2 -y
ceph-deploy new node1
echo "osd pool default size = 1" >> ceph.conf
echo "osd crush chooseleaf type = 0" >>ceph.conf
echo "public network = 10.0.0.0/16" >>ceph.conf
ceph-deploy install node1
ceph-deploy mon create-initial
ceph-deploy admin node1
ceph-deploy mgr create node1
#sudo ceph-volume lvm zap /dev/ceph/ceph
sudo pvcreate /dev/sda2
sudo vgcreate ceph /dev/sda2
sudo lvcreate -l 100%FREE -n ceph ceph
ceph-deploy osd create --data /dev/ceph/ceph node1
ceph-deploy mds create node1
ceph-deploy rgw create node1
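
Before moving on, it's worth confirming the cluster is healthy (sketch; on a single node with pool size 1, don't be surprised by warnings about reduced redundancy):

sudo ceph -s          # overall status; mon, mgr, mds, rgw and 1 osd should all be up
sudo ceph osd tree    # the OSD should be 'up' and 'in'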

Kubernetes

Errata

inspiration : https://medium.com/nycdev/k8s-on-pi-9cc14843d43

Install tools

# Add repo list and install kubeadm
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - && \
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update -q
sudo apt-get install -qy kubeadm
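
Optionally hold the packages so a routine apt upgrade doesn't bump kubeadm/kubelet out from under the cluster (a common precaution, not required):

sudo apt-mark hold kubelet kubeadm kubectl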

init

sudo apt install docker.io -y
sudo kubeadm init
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Remove the master taint if you want to run workloads on the controller(s)

kubectl taint nodes --all node-role.kubernetes.io/master-

Run this on the master to get the command for joining other machines

echo sudo kubeadm join 10.0.0.2:6443 \
--token $(kubeadm token create) \
--discovery-token-ca-cert-hash $(openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //')
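
Alternatively, recent kubeadm versions can print the whole join command in one go (if yours supports it):

kubeadm token create --print-join-command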

Networking

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

AT THIS POINT k8s should be ready to use

kubectl get nodes

Ingress Nginx (contour doesn't ship arm containers :( )

I like my services available on regular ports, with meaningful URLs.
Assuming your LAN uses the domain app.local and you control DNS, the plan is to point *.k8s.app.local at the ingress controller IP addresses; then you can just go to https://gitlab.k8s.app.local (for example).
We deploy nginx as a DaemonSet and use host networking to achieve this.
The usual caveats of exposing the controller via the host network apply.
This is just the mandatory.yaml altered to use a DaemonSet and host networking.

cat <<'EOF' | kubectl apply -f - 
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller-arm:dev
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --annotations-prefix=nginx.ingress.kubernetes.io
            - --report-node-internal-ip-address
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
              hostPort: 80
            - name: https
              containerPort: 443
              hostPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
---
EOF
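
To make *.k8s.app.local resolve to the ingress nodes, one option is a wildcard record in dnsmasq on whatever box serves your local DNS (a sketch; app.local and 10.0.0.2 are just the example domain and node address used above):

# /etc/dnsmasq.d/k8s.conf
address=/k8s.app.local/10.0.0.2

sudo systemctl restart dnsmasq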

Getting kubernetes to talk to ceph

Build arm version of rbd provisioner

Assumes you have set up a golang env.

sudo apt install golang -y
bash < <(curl -s -S -L https://raw.githubusercontent.com/moovweb/gvm/master/binscripts/gvm-installer)
source $HOME/.gvm/scripts/gvm
gvm install go1.12.6 # the binary install gives an x86 build, so compile from source
gvm use go1.12.6 --default

Then

mkdir -p $GOPATH/src/github.com/kubernetes-incubator
git clone https://github.com/kubernetes-incubator/external-storage $GOPATH/src/github.com/kubernetes-incubator/external-storage
cd $GOPATH/src/github.com/kubernetes-incubator/external-storage/ceph/rbd
git checkout v5.2.0 # later versions have a flag collision, which I CBA to figure out atm.
make build
./rbd-provisioner  # should not crash
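
A quick check that it really is an ARM build (sketch):

file ./rbd-provisioner   # expect something like 'ELF 32-bit LSB executable, ARM'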

Store the docker image locally (TODO - figure out where to make this accessible to other nodes; we'll probably need a docker registry)

cat<<EOF>Dockerfile
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

FROM debian:buster-slim

RUN apt update && \
  apt install -y ceph-common

COPY rbd-provisioner /usr/local/bin/rbd-provisioner
ENTRYPOINT ["/usr/local/bin/rbd-provisioner"]
EOF

docker build -t local/rbd-provisioner:latest .

deploy provisioner

Inspiration : https://akomljen.com/using-existing-ceph-cluster-for-kubernetes-persistent-storage/
Note the container image is the one we just built; imagePullPolicy is Never (to stop it looking on t'internet)

cat <<EOF | kubectl -n kube-system apply -f -
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns","coredns"]
    verbs: ["list", "get"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: rbd-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: rbd-provisioner
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rbd-provisioner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rbd-provisioner
subjects:
- kind: ServiceAccount
  name: rbd-provisioner
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-provisioner
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: rbd-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      containers:
      - name: rbd-provisioner
        imagePullPolicy: Never
        image: "local/rbd-provisioner:latest"
        env:
        - name: PROVISIONER_NAME
          value: ceph.com/rbd
      serviceAccount: rbd-provisioner
EOF
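
Check the provisioner actually came up (sketch):

kubectl -n kube-system get pods -l app=rbd-provisioner   # should be Running, not ImagePullBackOff
kubectl -n kube-system logs deploy/rbd-provisioner       # should sit waiting for claims, not crash-loop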

inject ceph secrets into kubernetes

kubectl create secret generic ceph-secret \
    --type="kubernetes.io/rbd" \
    --from-literal=key="$(sudo ceph --cluster ceph auth get-key client.admin)" \
    --namespace=kube-system

create kube pool

sudo ceph --cluster ceph osd pool create kube 128 128 # use 1024 PGs if there are more than 5 OSDs
sudo ceph --cluster ceph auth get-or-create client.kube mon 'allow r' osd 'allow rwx pool=kube'
sudo ceph --cluster ceph auth get-key client.kube
sudo ceph osd pool application enable kube rbd
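
Sanity check the pool and the new client (sketch):

sudo ceph osd pool ls detail    # the kube pool should show the rbd application
sudo ceph auth get client.kube  # should show the mon/osd caps from above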

add new secret to k8s

kubectl create secret generic ceph-secret-kube \
    --type="kubernetes.io/rbd" \
    --from-literal=key="$(sudo ceph --cluster ceph auth get-key client.kube)" \
    --namespace=kube-system

add storage class

cat <<EOF | kubectl create -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph
provisioner: ceph.com/rbd
parameters:
  monitors: 10.0.0.2:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: kube-system
  pool: kube
  userId: kube
  userSecretName: ceph-secret-kube
  userSecretNamespace: kube-system
  imageFormat: "2"
  imageFeatures: layering
EOF

test

cat <<EOF | kubectl create -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 128Mi
  storageClassName: ceph
EOF

sudo rados -p kube ls  # you should see the rbd objects backing the new PVC
kubectl delete pvc test

Pudding - test that the ingress + persistent storage actually work

Simple app

cat<<EOF > test.yaml
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 128Mi
  storageClassName: ceph
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      volumes:
        - name: persistent
          persistentVolumeClaim:
           claimName: test
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: persistent
---
apiVersion: v1
kind: Service
metadata:
  name: test
spec:
  type: ClusterIP
  clusterIP: None
  ports:
  - name: http
    port: 80
  selector:
    app: test
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test
spec:
  rules:
  - host: test.k8s.app.local
    http:
      paths:
      - path: /
        backend:
          serviceName: test
          servicePort: 80
EOF

kubectl apply -f test.yaml

Check it works

curl localhost:80 -H 'Host: test.k8s.app.local' # will return 503, then 403 once it's serving

create an index file, which will persist across pod destruction

echo "Its alive!!" > index.html
kubectl cp ./index.html default/$(kubectl get po -l app=test -o custom-columns=:.metadata.name | grep test):/usr/share/nginx/html/index.html

curl localhost:80 -H 'Host: test.k8s.app.local' # returns 'Its alive!!'

recreate the deployment

kubectl delete deploy/test
kubectl apply -f ./test.yaml

curl localhost:80 -H 'Host: test.k8s.app.local' # still returns 'Its alive!!'

cleanup

kubectl delete -f ./test.yaml

This will also destroy the persistent storage since we're removing the persistent volume claim

FIN

Stuff needed to tinker with the Raspbian image: apt-get install qemu qemu-user-static binfmt-support qemu-user-binfmt systemd-container
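
For reference, a minimal sketch of using those to chroot into a Raspbian image (raspbian.img is a placeholder filename):

LOOP=$(sudo losetup -fP --show raspbian.img)   # e.g. /dev/loop0, with p1 (boot) and p2 (root)
sudo mount ${LOOP}p2 /mnt && sudo mount ${LOOP}p1 /mnt/boot
sudo systemd-nspawn -D /mnt /bin/bash          # qemu-user-static + binfmt run the ARM binaries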
