Installing K3s with the external ("out-of-tree") AWS Cloud Provider
Refer to the upstream project's official documentation for the various prerequisites. You must attach an IAM role with the appropriate permissions to your K3s instances, and you must also tag those instances with a cluster ID (a kubernetes.io/cluster/<clusterid> tag). Refer to the Rancher documentation for how to do this.
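For example, assuming an instance ID of i-0abcdef1234567890 and a cluster ID of k3s-demo (both placeholders), the tag can be applied with the AWS CLI:
aws ec2 create-tags \
  --resources i-0abcdef1234567890 \
  --tags Key=kubernetes.io/cluster/k3s-demo,Value=owned
Use the value "owned" if the cluster has exclusive use of the instances, or "shared" if other clusters use them too.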
Install K3s with the following options:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server \
  --disable-cloud-controller \
  --disable servicelb \
  --disable traefik \
  --node-name=$(hostname -f) \
  --kubelet-arg=cloud-provider=external \
  --write-kubeconfig-mode=644" sh -
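Because the kubelet is started with cloud-provider=external, the node registers with the node.cloudprovider.kubernetes.io/uninitialized taint and stays tainted until a cloud controller manager initialises it (this is also why the HelmChart manifest below sets bootstrap: true). You can confirm the taint is present before moving on:
kubectl describe node "$(hostname -f)" | grep Taints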
Next, download and extract the AWS Cloud Provider repository, which contains the Helm chart we'll deploy:
wget https://github.com/kubernetes/cloud-provider-aws/archive/master.zip
unzip master.zip
As there's no official Helm repository for the AWS Cloud Provider chart, we need to package the chart into a tarball ourselves and drop it into the directory K3s serves static content from (creating the directory first, in case it doesn't already exist):
mkdir -p /var/lib/rancher/k3s/server/static/charts
tar czvf /var/lib/rancher/k3s/server/static/charts/aws-ccm.tgz -C cloud-provider-aws-master/charts/aws-cloud-controller-manager .
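Before moving on, it's worth listing the tarball to confirm the chart files (Chart.yaml, values.yaml, templates/) actually made it in:
tar tzf /var/lib/rancher/k3s/server/static/charts/aws-ccm.tgz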
Now we need to create a HelmChart resource manifest for the AWS Cloud Provider:
cat > aws-ccm.yaml << EOF
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: aws-cloud-controller-manager
  namespace: kube-system
spec:
  chart: https://%{KUBERNETES_API}%/static/charts/aws-ccm.tgz
  targetNamespace: kube-system
  bootstrap: true
  valuesContent: |-
    hostNetworking: true
    nodeSelector:
      node-role.kubernetes.io/master: "true"
EOF
Copy this into place:
cp aws-ccm.yaml /var/lib/rancher/k3s/server/manifests/
After a few seconds, K3s' Helm Controller should deploy the AWS CCM:
kubectl get addon aws-ccm -n kube-system
NAME AGE
aws-ccm 30s
kubectl get ds -n kube-system
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
aws-cloud-controller-manager 1 1 1 1 1 node-role.kubernetes.io/master=true 38s
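If the Addon or DaemonSet doesn't appear, K3s' Helm Controller runs each chart install as a short-lived job in kube-system, and that job's logs are the first place to look. Assuming the job follows the usual helm-install-<chart name> naming convention, something like this will surface any chart errors:
kubectl get jobs -n kube-system
kubectl logs -n kube-system -l job-name=helm-install-aws-cloud-controller-manager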
Keep an eye on the CCM's logs to ensure it starts up and synchronises successfully:
kubectl logs -f -l k8s-app=aws-cloud-controller-manager -n kube-system
I0320 11:05:40.704438 1 node_controller.go:390] Initializing node ip-172-31-14-0.eu-west-2.compute.internal with cloud provider
I0320 11:05:40.704612 1 shared_informer.go:247] Caches are synced for service
I0320 11:05:40.837102 1 node_controller.go:492] Adding node label from cloud provider: beta.kubernetes.io/instance-type=t3a.medium
I0320 11:05:40.837137 1 node_controller.go:493] Adding node label from cloud provider: node.kubernetes.io/instance-type=t3a.medium
I0320 11:05:40.837145 1 node_controller.go:504] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=eu-west-2c
I0320 11:05:40.837152 1 node_controller.go:505] Adding node label from cloud provider: topology.kubernetes.io/zone=eu-west-2c
I0320 11:05:40.837157 1 node_controller.go:515] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=eu-west-2
I0320 11:05:40.837162 1 node_controller.go:516] Adding node label from cloud provider: topology.kubernetes.io/region=eu-west-2
I0320 11:05:40.880828 1 node_controller.go:454] Successfully initialized node ip-172-31-14-0.eu-west-2.compute.internal with cloud provider
I0320 11:05:40.881141 1 event.go:291] "Event occurred" object="ip-172-31-14-0.eu-west-2.compute.internal" kind="Node" apiVersion="v1" type="Normal" reason="Synced" message="Node synced successfully"
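As a final check, the CCM should now have stamped an AWS provider ID onto the node. The availability zone and instance ID below are illustrative, but the aws:/// format is what you should see:
kubectl get node "$(hostname -f)" -o jsonpath='{.spec.providerID}'
aws:///eu-west-2c/i-0abcdef1234567890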