Kubernetes (k3s) deployment using Helm to deploy MetalLB, Nginx-Ingress, SSL (cert-manager), and Rook Ceph, depicted in YAML format.
vms:
  OS: Ubuntu 18.04
  prep:
    - update, upgrade
    - static IP
    - SSH keys
runtime:
  - k3s
installCommands:
  - Server
    note: no traefik or servicelb is deployed with the command below
    note: the user needs to be a sudoer
    note: some Raspberry Pis may need "cgroup_memory=1 cgroup_enable=memory" added to /boot/cmdline.txt (or /boot/firmware/cmdline.txt for arm64 variants)
    cmd: curl -sfL https://get.k3s.io | sh -s - server --no-deploy traefik --no-deploy servicelb --bind-address IPADDRESSHERE; sudo chown $USER:$USER /etc/rancher/k3s/k3s.yaml
    verify: sudo k3s kubectl get po --all-namespaces --watch
  - Agent
    note: get the token from the master node with sudo cat /var/lib/rancher/k3s/server/node-token
    cmd: curl -sfL https://get.k3s.io | K3S_URL=https://K3S-MASTER-NODE-IP-HERE:6443 K3S_TOKEN=PUTYOURTOKENHERE sh -
    verify: sudo k3s kubectl get nodes
  - Uninstall
    cmd: /usr/local/bin/k3s-uninstall.sh
  - Copy/TakeOwnership/Update Config
    cmd: sudo chown kubeadmin:kubeadmin /etc/rancher/k3s/k3s.yaml
    cmd: sudo cp /etc/rancher/k3s/k3s.yaml k3s.yaml
    cmd: sudo nano k3s.yaml and change 127.0.0.1 (localhost) to the IP of the master node
  - Pull config onto management box
    cmd: sudo scp [email protected]:/home/kubeadmin/k3s.yaml $HOME/.kube/config; chown mgmtadmin:mgmtadmin $HOME/.kube/config
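  - Verify from management box (optional)
    note: a quick sanity check, assuming the kubeconfig landed at $HOME/.kube/config as above
    cmd: kubectl get nodes -o wide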
infrastructureDeployments:
  - DNS
    package: CoreDNS
    note: already installed by k3s.
  - Networking/CNI
    package: Flannel
    note: already installed by k3s.
  - Deployment Manager
    package: Helm/Tiller
    reference: https://helm.sh/docs/
    note: read about RBAC first (https://helm.sh/docs/using_helm/#special-note-for-rbac-users); not needed for a simple install
    install:
      - Install Client
        cmd: sudo snap install helm --classic
      - Create service account
        cmd: kubectl -n kube-system create serviceaccount tiller
      - Create RBAC roles
        cmd: kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
      - Deploy
        cmd: helm init --service-account tiller
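      - Verify (optional)
        note: a quick check, assuming the default tiller deployment name tiller-deploy
        cmd: kubectl -n kube-system rollout status deploy/tiller-deploy
        cmd: helm version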
  - LoadBalancer
    package: MetalLB
    reference: https://metallb.universe.tf/
    install:
      - Deploy
        cmd: kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.7.3/manifests/metallb.yaml
      - Configure
        note: replace the IP address range in the config below with a free range on your LAN
        apply:
          - Yaml
            apiVersion: v1
            kind: ConfigMap
            metadata:
              namespace: metallb-system
              name: config
            data:
              config: |
                address-pools:
                - name: default
                  protocol: layer2
                  addresses:
                  - 192.168.1.240-192.168.1.250
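      - Test (optional)
        note: a minimal smoke test, not part of the original steps; the deployment name lb-test is arbitrary
        cmd: kubectl create deployment lb-test --image=nginx
        cmd: kubectl expose deployment lb-test --type=LoadBalancer --port=80
        verify: kubectl get svc lb-test (EXTERNAL-IP should come from the pool above)
        cmd: kubectl delete svc,deployment lb-test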
  - Ingress
    package: Nginx-Ingress
    install:
      - Deploy
        cmd: helm install stable/nginx-ingress --name funnel
        note: you probably want to patch the created service so client source IPs are passed through
        note: kubectl patch svc -n kube-system ingress-nginx -p '{"spec":{"externalTrafficPolicy":"Local"}}'
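      - Verify (optional)
        note: the controller service name depends on the release name; with --name funnel it is typically funnel-nginx-ingress-controller
        verify: kubectl get svc funnel-nginx-ingress-controller (EXTERNAL-IP should be a MetalLB address)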
  - Certificates
    package: Cert-manager
    install:
      - Deploy
        note: add the chart repo first (helm repo add jetstack https://charts.jetstack.io; helm repo update); cert-manager's CRDs must also be installed, see the version-matched manifest in the cert-manager docs
        cmd: helm install --name cert-manager --namespace cert-manager --version v0.9.0-alpha.0 jetstack/cert-manager
      - Configure
        note: replace the email in the config below
        apply:
          - Yaml
            apiVersion: certmanager.k8s.io/v1alpha1
            kind: ClusterIssuer
            metadata:
              name: letsencrypt-staging
            spec:
              acme:
                # The ACME server URL
                server: https://acme-staging-v02.api.letsencrypt.org/directory
                # Email address used for ACME registration
                email: <[email protected]>
                # Name of a secret used to store the ACME account private key
                privateKeySecretRef:
                  name: letsencrypt-sec-staging
                # Enable HTTP01 validations
                http01: {}
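      - Configure production issuer (optional)
        note: a sketch mirroring the staging issuer above; the prometheus values further down reference a cluster issuer named letsencrypt-prod, which this provides
        apply:
          - Yaml
            apiVersion: certmanager.k8s.io/v1alpha1
            kind: ClusterIssuer
            metadata:
              name: letsencrypt-prod
            spec:
              acme:
                server: https://acme-v02.api.letsencrypt.org/directory
                email: <[email protected]>
                privateKeySecretRef:
                  name: letsencrypt-sec-prod
                http01: {}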
  - Storage
    package: Rook Ceph
    note: each node must have a separate, empty volume attached as /dev/sdb, otherwise changes to the cluster config below will be necessary
    note: 1) add a new volume in the VM manager
          2) reboot the VM
          3) sudo lshw -C disk to confirm the device name
    install:
      - Deploy on master node
        cmd: git clone https://github.com/rook/rook.git
        cmd: cd rook/cluster/examples/kubernetes/ceph
        cmd: kubectl create -f operator.yaml
        note: WAIT A WHILE FOR ALL THE PODS TO START
      - Configure a cluster
        apply:
          - Yaml
            apiVersion: ceph.rook.io/v1
            kind: CephCluster
            metadata:
              name: rook-ceph
              namespace: rook-ceph
            spec:
              cephVersion:
                # For the latest ceph images, see https://hub.docker.com/r/ceph/ceph/tags
                image: ceph/ceph:v14.2.1-20190430
              dataDirHostPath: /var/lib/rook
              mon:
                count: 3
              dashboard:
                enabled: true
              storage:
                useAllNodes: true
                useAllDevices: false
                deviceFilter: sdb # this will use all sdb-mounted volumes (the secondary drives on the hosts)
                config:
                  # The default and recommended storeType is dynamically set to bluestore for devices and filestore for directories.
                  # Set the storeType explicitly only if it is required not to use the default.
                  # storeType: bluestore
                  storeType: filestore
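      - Verify cluster health (optional)
        note: a hedged sketch using the toolbox manifest shipped in the rook directory cloned above; the tools pod is labelled app=rook-ceph-tools
        cmd: kubectl create -f toolbox.yaml
        cmd: kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l app=rook-ceph-tools -o jsonpath='{.items[0].metadata.name}') -- ceph status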
      - Configure Block storage class
        apply:
          - Yaml
            apiVersion: ceph.rook.io/v1
            kind: CephBlockPool
            metadata:
              name: replicapool
              namespace: rook-ceph
            spec:
              replicated:
                size: 3
            ---
            apiVersion: storage.k8s.io/v1
            kind: StorageClass
            metadata:
              name: rook-ceph-block
            provisioner: ceph.rook.io/block
            parameters:
              blockPool: replicapool
              # Specify the namespace of the rook cluster from which to create volumes.
              # If not specified, it will use `rook` as the default namespace of the cluster.
              # This is also the namespace where the cluster will be.
              clusterNamespace: rook-ceph
              # Specify the filesystem type of the volume. If not specified, it will use `ext4`.
              fstype: xfs
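      - Test a PVC (optional)
        note: a minimal sketch, not part of the original steps; claims a volume from the rook-ceph-block class defined above (the claim name is arbitrary)
        apply:
          - Yaml
            apiVersion: v1
            kind: PersistentVolumeClaim
            metadata:
              name: rook-ceph-block-test
            spec:
              storageClassName: rook-ceph-block
              accessModes:
                - ReadWriteOnce
              resources:
                requests:
                  storage: 1Gi
        verify: kubectl get pvc rook-ceph-block-test (should reach Bound)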
      note: the dashboard can be accessed via proxy, ingress, or by setting the service type to LoadBalancer to expose it externally.
      - Enable dashboard
        apply:
          - Yaml
            apiVersion: extensions/v1beta1
            kind: Ingress
            metadata:
              name: rook-ceph-mgr-dashboard
              namespace: rook-ceph
              annotations:
                kubernetes.io/ingress.class: "nginx"
                kubernetes.io/tls-acme: "true"
                nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
                nginx.ingress.kubernetes.io/server-snippet: |
                  proxy_ssl_verify off;
            spec:
              tls:
                - hosts:
                    - rook-ceph.example.com
                  secretName: rook-ceph.example.com
              rules:
                - host: rook-ceph.example.com
                  http:
                    paths:
                      - path: /
                        backend:
                          serviceName: rook-ceph-mgr-dashboard
                          servicePort: https-dashboard
      - Fix dashboard 500 (exec into the operator pod and run the commands below)
        cmd: kubectl exec -it -n rook-ceph rook-ceph-operator-775cf575c5-cmv44 /bin/sh
        cmd: |
          ceph dashboard ac-role-create admin-no-iscsi
          for scope in dashboard-settings log rgw prometheus grafana nfs-ganesha manager hosts rbd-image config-opt rbd-mirroring cephfs user osd pool monitor; do
            ceph dashboard ac-role-add-scope-perms admin-no-iscsi ${scope} create delete read update;
          done
          ceph dashboard ac-user-set-roles admin admin-no-iscsi
      - Get admin credentials (username 'admin')
        cmd: kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}" | base64 --decode && echo
  - CI/CD
    package: Keel
    install:
      - Deploy
        download: https://sunstone.dev/keel?namespace=keel&username=admin&password=admin&tag=latest
        note: change the username, password, and Slack token, then apply.
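      - Configure a deployment for auto-updates (optional)
        note: a hedged sketch of keel's label-based policies (see https://keel.sh/docs/); the deployment name reuses the docker-node-app example from the autoscaling section below
        cmd: kubectl label deployment docker-node-app -n docker-node-app keel.sh/policy=force keel.sh/trigger=poll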
  - Dashboard
    package: kubernetes/dashboard
    install:
      - Deploy
        cmd: kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
      - Configure
        cmd: kubectl edit deploy -n kube-system kubernetes-dashboard
        note: delete --auto-generate-certificates
        note: add --enable-insecure-login
        note: set livenessProbe:httpGet:port to 9090 (away from 8443)
        note: set livenessProbe:httpGet:scheme to HTTP (away from HTTPS)
        note: set ports:containerPort to 9090 (away from 8443); the edited container section should look roughly like the sketch below
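      - Example container spec
        note: a sketch of the edited container section after the changes above; surrounding deployment fields omitted, image tag assumed from the v1.10.1 manifest
        yaml:
          containers:
            - name: kubernetes-dashboard
              image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
              args:
                - --enable-insecure-login
              ports:
                - containerPort: 9090
                  protocol: TCP
              livenessProbe:
                httpGet:
                  scheme: HTTP
                  path: /
                  port: 9090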
      - Setup Ingress
        apply:
          - Yaml
            apiVersion: extensions/v1beta1
            kind: Ingress
            metadata:
              name: kubernetes-dashboard-ingress
              namespace: kube-system
              annotations:
                kubernetes.io/ingress.class: nginx
                kubernetes.io/tls-acme: "true"
                ingress.kubernetes.io/ssl-redirect: "true"
                ingress.kubernetes.io/use-port-in-redirects: "true"
            spec:
              rules:
                - host: kube.bsord.dev
                  http:
                    paths:
                      - backend:
                          serviceName: kubernetes-dashboard
                          servicePort: 9090
                        path: /
              tls:
                - hosts:
                    - kube.bsord.dev
                  secretName: kube-bsord-dev-cert
      - Setup Access Token
        cmd: kubectl create serviceaccount k8sadmin -n kube-system
        cmd: kubectl create clusterrolebinding k8sadmin --clusterrole=cluster-admin --serviceaccount=kube-system:k8sadmin
        cmd: kubectl get secret -n kube-system | grep k8sadmin | cut -d " " -f1 | xargs -n 1 | xargs kubectl get secret -o 'jsonpath={.data.token}' -n kube-system | base64 --decode
        note: don't copy the trailing '(base)' shell-prompt characters printed after the token
  - Monitoring
    package: Prometheus
    install:
      - Deploy
        note: define prometheus-custom-values.yml with the values below first
        cmd: helm install --name prom --namespace monitoring -f prometheus-custom-values.yml stable/prometheus-operator
      - Yaml (prometheus-custom-values.yml)
        coreDns:
          enabled: false
        kubeDns:
          enabled: true
        alertmanager:
          ingress:
            enabled: true
            annotations:
              kubernetes.io/ingress.class: nginx
              certmanager.k8s.io/cluster-issuer: letsencrypt-prod
              kubernetes.io/tls-acme: "true"
            hosts:
              - alerts.bsord.dev
            tls:
              - secretName: alerts-bsord-dev-cert
                hosts:
                  - alerts.bsord.dev
          persistence:
            enabled: true
            accessModes: ["ReadWriteOnce"]
            size: 10Gi
            storageClassName: rook-ceph-block
        prometheus:
          ingress:
            enabled: true
            annotations:
              kubernetes.io/ingress.class: nginx
              certmanager.k8s.io/cluster-issuer: letsencrypt-prod
              kubernetes.io/tls-acme: "true"
            hosts:
              - monitor.bsord.dev
            tls:
              - secretName: monitor-bsord-dev-cert
                hosts:
                  - monitor.bsord.dev
          persistence:
            enabled: true
            accessModes: ["ReadWriteOnce"]
            size: 10Gi
            storageClassName: rook-ceph-block
        grafana:
          adminPassword: "Klust3r!"
          ingress:
            enabled: true
            annotations:
              kubernetes.io/ingress.class: nginx
              certmanager.k8s.io/cluster-issuer: letsencrypt-prod
              kubernetes.io/tls-acme: "true"
            hosts:
              - metrics.bsord.dev
            tls:
              - secretName: metrics-bsord-dev-cert
                hosts:
                  - metrics.bsord.dev
          persistence:
            enabled: true
            accessModes: ["ReadWriteOnce"]
            size: 10Gi
            storageClassName: rook-ceph-block
  - AutoScaling
    package: metrics-server
    install:
      - Deploy
        cmd: git clone https://github.com/kubernetes-incubator/metrics-server.git
        cmd: cd metrics-server/
        cmd: kubectl create -f deploy/1.8+/
        note: the target deployment must declare CPU requests for the HPA to compute utilization; patch the example app to add them
        cmd: kubectl patch deployment docker-node-app -n docker-node-app -p='{"spec":{"template":{"spec":{"containers":[{"name":"docker-node-app","resources":{"requests":{"cpu":"50m"}}}]}}}}'
        cmd: kubectl autoscale deployment -n docker-node-app docker-node-app --cpu-percent=25 --min=1 --max=10
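      - Verify (optional)
        note: quick checks once metrics start flowing (this can take a minute or two)
        verify: kubectl top nodes
        verify: kubectl get hpa -n docker-node-app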
additionalNotes:
  - Moving the config file to a management box
  - Cert-manager, staging vs prod
  - Could improve by putting infra into kube-system or their own respective namespaces as a better practice
  - Should include example YAML that covers an entire app deployment inclusive of Deployment, Service, and Ingress; a sketch follows below.
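  - Example app deployment (a sketch for the note above; the image, names, and host are placeholders, and the Ingress assumes the nginx ingress and staging issuer configured earlier)
    apply:
      - Yaml
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: example-app
        spec:
          replicas: 2
          selector:
            matchLabels:
              app: example-app
          template:
            metadata:
              labels:
                app: example-app
            spec:
              containers:
                - name: example-app
                  image: nginx:1.17
                  ports:
                    - containerPort: 80
        ---
        apiVersion: v1
        kind: Service
        metadata:
          name: example-app
        spec:
          selector:
            app: example-app
          ports:
            - port: 80
              targetPort: 80
        ---
        apiVersion: extensions/v1beta1
        kind: Ingress
        metadata:
          name: example-app
          annotations:
            kubernetes.io/ingress.class: nginx
            certmanager.k8s.io/cluster-issuer: letsencrypt-staging
            kubernetes.io/tls-acme: "true"
        spec:
          rules:
            - host: app.example.com
              http:
                paths:
                  - path: /
                    backend:
                      serviceName: example-app
                      servicePort: 80
          tls:
            - hosts:
                - app.example.com
              secretName: app-example-com-cert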