pacman -S curl docker ebtables ethtool wget unzip
cfssl is also needed; it is available on the AUR and can be installed with pacaur:
pacaur -S cfssl
Add the --iptables=false and --ip-masq=false parameters to the dockerd command in the Docker systemd service (/usr/lib/systemd/system/docker.service).
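As an illustration only, the edited ExecStart line could look like the one below; the other flags are whatever your packaged unit already contains, so just append the two options rather than copying this verbatim:
ExecStart=/usr/bin/dockerd -H fd:// --iptables=false --ip-masq=false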
Allow bridged IPv4 traffic to reach iptables' chains using:
sysctl net.bridge.bridge-nf-call-iptables=1
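To keep this setting across reboots, you can also write it to a sysctl configuration file (the file name here is only an example):
echo "net.bridge.bridge-nf-call-iptables = 1" > /etc/sysctl.d/99-kubernetes.conf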
If Docker was previously used, clean the iptables rules using:
iptables -F
iptables -t nat -F
Start or restart Docker:
systemctl enable docker && systemctl restart docker
export CNI_VERSION="v0.6.0"
mkdir -p /opt/cni/bin
curl -L "https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-amd64-${CNI_VERSION}.tgz" | tar -C /opt/cni/bin -xz
export CRICTL_VERSION="v1.11.1"
mkdir -p /opt/bin
curl -L "https://github.com/kubernetes-incubator/cri-tools/releases/download/${CRICTL_VERSION}/crictl-${CRICTL_VERSION}-linux-amd64.tar.gz" | tar -C /opt/bin -xz
RELEASE="$(curl -sSL https://dl.k8s.io/release/stable.txt)"
mkdir -p /opt/bin
cd /opt/bin
curl -L --remote-name-all https://storage.googleapis.com/kubernetes-release/release/${RELEASE}/bin/linux/amd64/{kubeadm,kubelet,kubectl}
chmod +x {kubeadm,kubelet,kubectl}
curl -sSL "https://raw.githubusercontent.com/kubernetes/kubernetes/${RELEASE}/build/debs/kubelet.service" | sed "s:/usr/bin:/opt/bin:g" > /etc/systemd/system/kubelet.service
mkdir -p /etc/systemd/system/kubelet.service.d
curl -sSL "https://raw.githubusercontent.com/kubernetes/kubernetes/${RELEASE}/build/debs/10-kubeadm.conf" | sed "s:/usr/bin:/opt/bin:g" > /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl enable kubelet && systemctl start kubelet
kubeadm init --pod-network-cidr=10.244.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
At this point you should be able to use kubectl:
$ kubectl get no
NAME                 STATUS   ROLES    AGE   VERSION
stephen-arch-linux   Ready    master   31m   v1.12.1
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
Once Flannel is up, your cluster is up and running. Since this is a single-node cluster, remove the master taint so that workloads can be scheduled on this node:
kubectl taint nodes --all node-role.kubernetes.io/master-
Install an Ingress controller, for instance the NGINX Ingress Controller:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
Don't forget the Kubernetes Service:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/baremetal/service-nodeport.yaml
Using the Ingress object, you will be able to access your services.
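As a minimal sketch, an Ingress routing a hypothetical hostname to an existing Service named my-service on port 80 could look like this (the host, Service name, and port are assumptions; the apiVersion matches the Kubernetes version used here):
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service
          servicePort: 80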
Create a StorageClass (this object is not namespaced):
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
Make it the default one:
kubectl annotate storageclass local-storage storageclass.kubernetes.io/is-default-class=true
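You can verify that the annotation took effect; the default class is marked with (default):
kubectl get storageclass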
For each PersistentVolumeClaim, you will need to manually create a PersistentVolume:
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /etc/kubernetes/local
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: beta.kubernetes.io/os
          operator: In
          values:
          - linux
Be sure that the spec.local.path exists on the host.
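For illustration, a claim like the following (the name is hypothetical) would bind to the PersistentVolume above once a consuming Pod is scheduled, thanks to the WaitForFirstConsumer binding mode:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-claim
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: local-storage
  resources:
    requests:
      storage: 10Gi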