
I started with the cluster-creation command from /docs (on 01.31.21).

Note: the most recent docs at https://coder.com/docs/setup/kubernetes/google have clearer install instructions.

PROJECT_ID="MY_PROJECT_ID"
CLUSTER_NAME="MY_CLUSTER_NAME"

gcloud beta container --project "$PROJECT_ID" \
  clusters create "$CLUSTER_NAME" \
    --zone "us-central1-a" \
    --no-enable-basic-auth \
    --cluster-version "1.14.7-gke.14" \
    --machine-type "n1-standard-4" \
    --image-type "COS" \
    --disk-type "pd-standard" \
    --disk-size "100" \
    --metadata disable-legacy-endpoints=true \
    --scopes "https://www.googleapis.com/auth/cloud-platform" \
    --num-nodes "2" \
    --enable-stackdriver-kubernetes \
    --enable-ip-alias \
    --network "projects/${PROJECT_ID}/global/networks/default" \
    --subnetwork "projects/${PROJECT_ID}/regions/us-central1/subnetworks/default" \
    --default-max-pods-per-node "110" \
    --addons HorizontalPodAutoscaling,HttpLoadBalancing \
    --enable-autoupgrade \
    --enable-autorepair \
    --enable-network-policy \
    --enable-autoscaling \
    --min-nodes "2" \
    --max-nodes "8"

Installing Coder

I modified:

  • node-version (CVMs)
  • cluster-version (CVMs)
  • image-type (CVMs)
  • disk-size (save $)
  • min-nodes (save $)
  • environment vars didn't work as a one-line prefix (the shell expands "$PROJECT_ID" before prefix assignments apply), so I entered the GCP project and cluster names manually

Based on the GCP CVM config on 01.31.21.

gcloud beta container --project "kubernetes-cluster-302420" \
  clusters create "ben-gcp-coder" \
    --zone "us-central1-a" \
    --no-enable-basic-auth \
    --node-version "latest" \
    --cluster-version "latest" \
    --machine-type "n1-standard-4" \
    --image-type "UBUNTU" \
    --disk-type "pd-standard" \
    --disk-size "50" \
    --metadata disable-legacy-endpoints=true \
    --scopes "https://www.googleapis.com/auth/cloud-platform" \
    --num-nodes "2" \
    --enable-stackdriver-kubernetes \
    --enable-ip-alias \
    --network "projects/kubernetes-cluster-302420/global/networks/default" \
    --subnetwork \
    "projects/kubernetes-cluster-302420/regions/us-central1/subnetworks/default" \
    --default-max-pods-per-node "110" \
    --addons HorizontalPodAutoscaling,HttpLoadBalancing \
    --enable-autoupgrade \
    --enable-autorepair \
    --enable-network-policy \
    --enable-autoscaling \
    --min-nodes "1" \
    --max-nodes "8"

Then I decided I wanted a smaller node pool. DON'T USE THIS MACHINE SIZE: I later discovered it is too small for Coder. I had nothing deployed yet, so I just followed this tutorial.

I also added the --preemptible flag to save some $$. Thanks for the tip, @mterhar!

gcloud container node-pools create smaller-pool \
  --preemptible \
  --zone "us-central1-a" \
  --cluster "ben-gcp-coder" \
  --num-nodes "2" \
  --enable-autoscaling \
  --node-version "latest" \
  --min-nodes "1" \
  --max-nodes "8" \
  --machine-type "n1-standard-2" \
  --image-type "UBUNTU" \
  --disk-type "pd-standard" \
  --disk-size "50"

gcloud container node-pools delete default-pool --cluster ben-gcp-coder --zone us-central1-a
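
Something like this confirms only the smaller pool is left:

gcloud container node-pools list --cluster ben-gcp-coder --zone us-central1-a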

I created a coder namespace:

kubectl create namespace coder

Then I installed cert-manager:

kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.1.0/cert-manager.yaml
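
cert-manager installs into its own cert-manager namespace; it's worth waiting until those pods are Running before creating issuers:

kubectl get pods --namespace cert-manager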

I use CloudFlare for my domains, so I configured cert-manager for CloudFlare following this tutorial. I used the API Token method (the token needs Zone:Zone:Read and Zone:DNS:Edit permissions on the zone).

# SecretAndIssuer.yaml

apiVersion: v1
kind: Secret
metadata:
  name: ben-cloudflare-secret
  namespace: coder # must be in the same namespace as the Issuer that references it
type: Opaque
stringData:
  api-token: <API Token>
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: bpmctnet-coder-issuer
  namespace: coder
spec:
  acme:
    email: [email protected]
    server: "https://acme-v02.api.letsencrypt.org/directory"
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
    - dns01:
        cloudflare:
          email: [email protected]
          apiTokenSecretRef:
            name: ben-cloudflare-secret
            key: api-token
      selector:
        dnsZones:
        - 'bpmct.net'

Then I applied it:

kubectl apply -f SecretAndIssuer.yaml
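
Describing the Issuer should show a Ready condition once it has registered with Let's Encrypt:

kubectl describe issuer bpmctnet-coder-issuer --namespace coder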

Next, I deployed Coder with the domain settings (partially clipped from this video):

helm install coder coder/coder --namespace coder \
  --version 1.15.2 \
  --set devurls.host="*.coder.bpmct.net" \
  --set ingress.host="coder.bpmct.net" \
  --set ingress.tls.enable=true \
  --set ingress.tls.hostSecretName=coder-ben-hostcertificate \
  --set ingress.tls.devurlsHostSecretName=coder-ben-devurlcertificate \
  --wait
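
This assumes the coder Helm repo was already added; if not, it should just be something like:

helm repo add coder https://helm.coder.com
helm repo update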

Then I created the certificates:

# Certificates.yaml

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: coder-ben-root
  namespace: coder # Your Coder deployment namespace
spec:
  secretName: coder-ben-hostcertificate
  duration: 2160h # 90d
  renewBefore: 360h # 15d
  dnsNames:
    - "coder.bpmct.net" # Your base domain for Coder
  issuerRef:
    name: bpmctnet-coder-issuer
    kind: Issuer
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: coder-ben-devurls
  namespace: coder # Your Coder deployment namespace
spec:
  secretName: coder-ben-devurlcertificate
  duration: 2160h # 90d
  renewBefore: 360h # 15d
  dnsNames:
    - "*.coder.bpmct.net" # Your dev URLs wildcard subdomain
  issuerRef:
    name: bpmctnet-coder-issuer
    kind: Issuer

And applied them:

kubectl apply -f Certificates.yaml
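
Issuance can take a few minutes; the certificates can be watched until READY shows True:

kubectl get certificates --namespace coder --watch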

To get the Coder IP, I did:

kubectl describe ingress --namespace coder

And then I pointed my CloudFlare DNS entries:

coder.bpmct.net -> [ingress IP]
*.coder.bpmct.net -> [ingress IP]
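
Once the records propagate (and with CloudFlare proxying off, so the real IP comes back), a quick sanity check:

dig +short coder.bpmct.net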

To get the Coder username and password, I used:

kubectl logs -n coder -l coder.deployment=cemanager -c cemanager \
  --tail=-1 | grep -A1 -B2 Password

Finally, I could access my Coder install at https://coder.bpmct.net

I tried the optimize-utilization autoscaling profile:

gcloud beta container clusters update ben-gcp-coder \
  --autoscaling-profile optimize-utilization \
  --zone us-central1-a
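
To double-check the profile took effect, describe plus a grep should surface it:

gcloud container clusters describe ben-gcp-coder --zone us-central1-a | grep -i autoscalingprofile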

Updating to the latest version:

helm search repo coder -l --devel


helm upgrade --namespace coder --force --install --atomic --wait \
  --version 1.16.0-rc.1 \
  --set devurls.host="*.coder.bpmct.net" \
  --set ingress.host="coder.bpmct.net" \
  --set ingress.tls.enable=true \
  --set ingress.tls.hostSecretName=coder-ben-hostcertificate \
  --set ingress.tls.devurlsHostSecretName=coder-ben-devurlcertificate \
  coder coder/coder
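
Afterwards, the deployed chart version can be confirmed with:

helm list --namespace coder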

I had to increase inotify file watchers, using a DaemonSet that raises fs.inotify.max_user_watches on every node:

# increase-watchers.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: more-fs-watchers
  namespace: kube-system
  labels:
    app: more-fs-watchers
    k8s-app: more-fs-watchers
spec:
  selector:
    matchLabels:
      k8s-app: more-fs-watchers
  template:
    metadata:
      labels:
        name: more-fs-watchers
        k8s-app: more-fs-watchers
    spec:
      hostNetwork: true
      hostPID: true
      hostIPC: true
      # Privileged init container bumps the inotify watch limit on the host
      initContainers:
        - command:
            - sh
            - -c
            - sysctl -w fs.inotify.max_user_watches=524288;
          image: alpine:3.6
          imagePullPolicy: IfNotPresent
          name: sysctl
          resources: {}
          securityContext:
            privileged: true
          volumeMounts:
            - name: sys
              mountPath: /sys
      # Pause container keeps the DaemonSet pod alive after the init runs
      containers:
        - resources:
            requests:
              cpu: 0.01
          image: alpine:3.6
          name: sleepforever
          command: ["tail"]
          args: ["-f", "/dev/null"]
      volumes:
        - name: sys
          hostPath:
            path: /sys

And applied it:

kubectl apply -f increase-watchers.yaml
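
To confirm it rolled out on every node:

kubectl rollout status daemonset/more-fs-watchers --namespace kube-system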

To look into:

  • Do I need more CPUs per node?
    • How do I even go about figuring this out?
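
One starting point might be comparing actual usage against requests (this assumes the metrics server GKE ships by default is available):

kubectl top nodes
kubectl top pods --namespace coder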