The cheapest Kubernetes deployment either side of the Mississippi
BEWARE This guide is devoid of binary verification
Generate an ssh key with 4096 bits (if you don't already have one you want to use). I would recommend putting a passphrase on it and using it only for this VM.
ssh-keygen -b 4096
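If you do make a dedicated key, here's a minimal sketch of wiring it up; the key filename, Host alias, and IP address are hypothetical placeholders, and core is the default user on Container Linux.
ssh-keygen -b 4096 -f ~/.ssh/id_rsa_k3s_vm
cat >> ~/.ssh/config <<'EOF'
Host k3s-vm
    HostName 203.0.113.10
    User core
    IdentityFile ~/.ssh/id_rsa_k3s_vm
    IdentitiesOnly yes
EOF
ssh k3s-vm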
TODO Add info on storing keys in TPM via tpm2-pkcs11
We'll be using a CoreOS Container Linux VM, only because I'm personally inclined to believe that's the most secure option at the moment.
Our VM has 1GB of memory, so we'll most assuredly need some swap.
https://coreos.com/os/docs/latest/adding-swap.html
sudo mkdir -p /var/vm
sudo fallocate -l 5G /var/vm/swapfile1
sudo chmod 600 /var/vm/swapfile1
sudo mkswap /var/vm/swapfile1
sudo tee /etc/systemd/system/var-vm-swapfile1.swap > /dev/null <<EOF
[Unit]
Description=Turn on swap
[Swap]
What=/var/vm/swapfile1
[Install]
WantedBy=multi-user.target
EOF
sudo systemctl enable --now var-vm-swapfile1.swap
echo 'vm.swappiness=30' | sudo tee /etc/sysctl.d/80-swappiness.conf
sudo systemctl restart systemd-sysctl
sudo swapon
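As a quick check, free should now report the 5G of swap and the swappiness value should read 30.
free -h
cat /proc/sys/vm/swappiness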
Create a ~/.local/bin directory and a ~/.profile which will add that directory to your PATH when you source it.
mkdir -p "${HOME}/.local/bin"
cat >> ~/.profile <<'EOF'
export PATH="${PATH}:${HOME}/.local/bin"
EOF
Whenever you ssh in (and now) you'll want to run
. .profile
to add ~/.local/bin to your PATH, which is where we'll install the binaries.
- k3d is k3s in Docker
- kubectl is the binary we'll use to interact with our Kubernetes cluster
curl -L -o k3d https://github.com/rancher/k3d/releases/download/v1.3.0-dev.0/k3d-linux-amd64
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod 700 k3d kubectl
mv k3d kubectl ~/.local/bin/
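A quick sanity check that both binaries are on your PATH (source ~/.profile first if you haven't):
command -v k3d kubectl
kubectl version --client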
Create a cluster with 3 workers, exposing ports 80 and 443 (HTTP and HTTPS).
At the time of writing, the Rancher devs have just recently fixed bugs related to Knative deployment. As such, we need to specify the k3s image that includes those fixes.
k3d create --workers 3 --publish 80:80 --publish 443:443 --image docker.io/rancher/k3s:v0.7.0-rc6
The k3d get-kubeconfig command may take a second or two before it works; just re-run it a few times until it does.
export KUBECONFIG="$(k3d get-kubeconfig --name='k3s-default')"
kubectl cluster-info
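Once the kubeconfig resolves, kubectl should list the server node plus the three workers:
kubectl get nodes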
Grab the latest release from https://github.com/helm/helm/releases and install it.
curl -sSL https://get.helm.sh/helm-v2.14.2-linux-amd64.tar.gz | tar xvz
mv linux-amd64/{helm,tiller} ~/.local/bin/
rm -rf linux-amd64/
Now we need to configure Role Based Access Control (RBAC) and create a Certificate Authority (CA) which will secure our helm/tiller installation.
You should read these guides, but I'll summarize the CLI commands.
- https://helm.sh/docs/using_helm/#securing-your-helm-installation
- https://helm.sh/docs/using_helm/#role-based-access-control
- https://helm.sh/docs/using_helm/#generate-a-certificate-authority
We're going to deploy Helm in one namespace, talking to Tiller in another namespace.
kubectl create namespace helm-world
kubectl create namespace tiller-world
kubectl create serviceaccount tiller --namespace tiller-world
kubectl create -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: helm
  namespace: helm-world
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tiller-user
  namespace: tiller-world
rules:
- apiGroups:
  - ""
  resources:
  - pods/portforward
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tiller-user-binding
  namespace: tiller-world
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: tiller-user
subjects:
- kind: ServiceAccount
  name: helm
  namespace: helm-world
EOF
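A quick way to double-check that the service accounts and RBAC objects landed in the right namespaces:
kubectl get serviceaccounts --namespace helm-world
kubectl get serviceaccount,role,rolebinding --namespace tiller-world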
We're going to use terraform to generate all the certificates.
curl -LO https://releases.hashicorp.com/terraform/0.12.5/terraform_0.12.5_linux_amd64.zip
unzip terraform_0.12.5_linux_amd64.zip
rm terraform_0.12.5_linux_amd64.zip
mv terraform ~/.local/bin/
Now we download the terraform file and run it to create the certs. More info on the terraform file can be found here: https://github.com/jbussdieker/tiller-ssl-terraform
Warning: Terraform will not regenerate the certs when you re-run the apply command if they already exist on disk.
curl -LO https://github.com/jbussdieker/tiller-ssl-terraform/raw/master/tiller_certs.tf
terraform init
terraform apply -auto-approve
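Assuming the terraform file wrote the certs into the working directory with the filenames the helm init flags below reference, you can confirm they exist and inspect them with openssl:
ls ./*.pem
openssl x509 -in tiller.cert.pem -noout -subject -issuer -dates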
helm init \
--tiller-tls \
--tiller-tls-cert ./tiller.cert.pem \
--tiller-tls-key ./tiller.key.pem \
--tiller-tls-verify \
--tls-ca-cert ca.cert.pem \
--tiller-namespace=tiller-world \
--service-account=tiller
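Tiller should come up in the tiller-world namespace after a minute or so. Since we enabled --tiller-tls-verify, helm commands need the TLS flags plus the namespace; the helm client cert filenames here are an assumption about what the terraform module emitted, so adjust them to whatever it actually wrote.
kubectl get pods --namespace tiller-world
helm version --tiller-namespace tiller-world \
--tls --tls-ca-cert ca.cert.pem \
--tls-cert helm.cert.pem --tls-key helm.key.pem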
Work In Progress When deploying Istio on our cheap VM you'll notice that Kubernetes leaves some of the Istio pods in the Pending state. This is due to our 1GB of RAM. Yes, we gave the VM 5GB of swap, but Kubernetes doesn't count swap toward the memory available for scheduling.
Important --set gateways.custom-gateway.type='ClusterIP' needs to be --set gateways.custom-gateway.type='NodePort'.
TODO Enable global.mtls.enabled and global.controlPlaneSecurityEnabled
export ISTIO_VERSION=1.1.7
curl -L https://git.io/getLatestIstio | sh -
for i in istio-?.?.?/install/kubernetes/helm/istio-init/files/crd*yaml; do kubectl apply -f $i; done
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: istio-system
  labels:
    istio-injection: disabled
EOF
helm template --namespace=istio-system \
--set sidecarInjectorWebhook.enabled=true \
--set sidecarInjectorWebhook.enableNamespacesByDefault=true \
--set global.proxy.autoInject=disabled \
--set prometheus.enabled=false \
`# Disable mixer prometheus adapter to remove istio default metrics.` \
--set mixer.adapters.prometheus.enabled=false \
`# Disable mixer policy check, since in our template we set no policy.` \
--set global.disablePolicyChecks=true \
`# Set gateway pods to 1 to sidestep eventual consistency / readiness problems.` \
--set gateways.istio-ingressgateway.autoscaleMin=1 \
--set gateways.istio-ingressgateway.autoscaleMax=1 \
--set gateways.istio-ingressgateway.resources.requests.cpu=500m \
--set gateways.istio-ingressgateway.resources.requests.memory=256Mi \
`# Enable SDS in the gateway to allow dynamically configuring TLS of gateway.` \
--set gateways.istio-ingressgateway.sds.enabled=true \
`# More pilot replicas for better scale` \
--set pilot.autoscaleMin=2 \
`# Set pilot trace sampling to 100%` \
--set pilot.traceSampling=100 \
istio-?.?.?/install/kubernetes/helm/istio \
> ./istio.yaml
kubectl apply -f istio.yaml
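It takes a few minutes for Istio to come up; as noted above, on 1GB of RAM some pods may stay Pending. Check progress with:
kubectl get pods --namespace istio-system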
helm template --namespace=istio-system \
--set gateways.custom-gateway.autoscaleMin=1 \
--set gateways.custom-gateway.autoscaleMax=1 \
--set gateways.custom-gateway.cpu.targetAverageUtilization=60 \
--set gateways.custom-gateway.labels.app='cluster-local-gateway' \
--set gateways.custom-gateway.labels.istio='cluster-local-gateway' \
--set gateways.custom-gateway.type='NodePort' \
--set gateways.istio-ingressgateway.enabled=false \
--set gateways.istio-egressgateway.enabled=false \
--set gateways.istio-ilbgateway.enabled=false \
istio-?.?.?/install/kubernetes/helm/istio \
-f istio-?.?.?/install/kubernetes/helm/istio/example-values/values-istio-gateways.yaml \
| sed -e "s/custom-gateway/cluster-local-gateway/g" -e "s/customgateway/clusterlocalgateway/g" \
> ./istio-local-gateway.yaml
kubectl apply -f istio-local-gateway.yaml
https://knative.dev/docs/serving/using-a-custom-domain/
Note: the config-domain ConfigMap lives in the knative-serving namespace, so apply this section after installing Knative Serving below, otherwise the namespace won't exist yet.
export DOMAIN=example.com
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-domain
  namespace: knative-serving
data:
  # Default value for domain, for routes that do not have app=prod labels.
  # Although it will match all routes, it is the least-specific rule so it
  # will only be used if no other domain matches.
  ${DOMAIN}: ""
EOF
Auto-TLS means our Knative applications will get HTTPS certs from Let's Encrypt without us doing anything (other than setting it up)! Awesome!
First we need to install cert-manager, which is what talks to Let's Encrypt to get us certificates. We'll need to combine a few guides for this.
https://knative.dev/docs/serving/installing-cert-manager/
export CERT_MANAGER_VERSION=0.6.1
curl -sSL https://github.com/jetstack/cert-manager/archive/v${CERT_MANAGER_VERSION}.tar.gz | tar xz
kubectl apply -f cert-manager-?.?.?/deploy/manifests/00-crds.yaml
kubectl apply -f cert-manager-?.?.?/deploy/manifests/cert-manager.yaml
Now that cert-manager is installed, we need to set up a way to answer the ACME DNS challenge. Since we're on DigitalOcean we'll use the cert-manager plugin for them.
- https://knative.dev/docs/serving/using-cert-manager-on-gcp/#adding-your-service-account-to-cert-manager
- https://docs.cert-manager.io/en/latest/tasks/issuers/setup-acme/dns01/digitalocean.html
Create your DigitalOcean personal access token and export it as an environment variable, so that the DNS TXT records can be updated for the ACME challenge.
export DO_PA_TOKEN=9a978d78fe57a9f6760ea
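Per the cert-manager DigitalOcean DNS01 docs linked above, the issuer reads the token from a Kubernetes Secret rather than inline, so create that Secret first. The digitalocean-dns name matches what the ClusterIssuer below references; access-token is the key name the cert-manager docs use.
kubectl create secret generic digitalocean-dns \
--namespace cert-manager \
--from-literal=access-token="${DO_PA_TOKEN}"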
kubectl apply -f - <<EOF
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-issuer
  namespace: cert-manager
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    # This will register an issuer with LetsEncrypt. Replace
    # with your admin email address.
    email: [email protected]
    privateKeySecretRef:
      # Set privateKeySecretRef to any unused secret name.
      name: letsencrypt-issuer
    dns01:
      providers:
      - name: digitalocean
        digitalocean:
          tokenSecretRef:
            # References the Secret created above; key is the key name inside that Secret.
            name: digitalocean-dns
            key: access-token
EOF
kubectl get clusterissuer --namespace cert-manager letsencrypt-issuer --output yaml
https://knative.dev/docs/install/knative-with-iks/#installing-knative
There's an open issue about a race condition in these apply commands. I can't find it right now, but just wait a bit and re-run them if they complain.
curl -LO https://github.com/knative/serving/releases/download/v0.7.0/serving.yaml
curl -LO https://github.com/knative/build/releases/download/v0.7.0/build.yaml
curl -LO https://github.com/knative/eventing/releases/download/v0.7.0/release.yaml
curl -LO https://github.com/knative/serving/releases/download/v0.7.0/monitoring.yaml
kubectl apply \
--selector knative.dev/crd-install=true \
--filename serving.yaml \
--filename build.yaml \
--filename release.yaml \
--filename monitoring.yaml
# TODO Auto TLS
kubectl apply \
--filename serving.yaml \
--selector networking.knative.dev/certificate-provider!=cert-manager \
--filename build.yaml \
--filename release.yaml \
--filename monitoring.yaml
kubectl get pods --namespace knative-serving
kubectl get pods --namespace knative-build
kubectl get pods --namespace knative-eventing
kubectl get pods --namespace knative-monitoring
kubectl get deploy -n knative-serving --label-columns=serving.knative.dev/release
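Once everything reports Ready, a quick end-to-end smoke test is to deploy the helloworld-go sample image from the Knative docs; the service name and TARGET value below are arbitrary, and the route should come up as helloworld-go.default.${DOMAIN}.
kubectl apply -f - <<EOF
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld-go
  namespace: default
spec:
  template:
    spec:
      containers:
      - image: gcr.io/knative-samples/helloworld-go
        env:
        - name: TARGET
          value: "cheap Kubernetes"
EOF
kubectl get ksvc helloworld-go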