To test the issue with /mnt in the kubelet in Lokomotive (kinvolk-archives/lokomotive-kubernetes#160), add the following flag to the apiserver, kube-controller-manager, and kube-scheduler:

```
- --feature-gates=BlockVolume=true
```
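Once the gate is on, a raw block volume claim exercises it; a minimal sketch, assuming a provisioner that supports `volumeMode: Block` (the claim name and size are illustrative):

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-pvc         # illustrative name
spec:
  accessModes: ["ReadWriteOnce"]
  volumeMode: Block       # the mode gated by BlockVolume=true
  resources:
    requests:
      storage: 1Gi
EOF
```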
A throwaway bash Deployment for testing:

```bash
# (the notes truncate after "selector:"; the remainder is completed minimally)
echo '
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: bash
  name: bash
spec:
  replicas: 1
  selector:
    matchLabels: {run: bash}
  template:
    metadata:
      labels: {run: bash}
    spec:
      containers:
      - {name: bash, image: bash, command: ["sleep", "infinity"]}
' | kubectl apply -f -
```
Set up a VirtualBox/Vagrant host on Ubuntu (a quick sanity check follows the block):

```bash
apt-get update && \
apt-get -y upgrade && \
apt-get install -y make git byobu linux-generic
systemctl reboot

# after the reboot, build against the freshly booted kernel
apt-get install -y linux-headers-$(uname -r) && \
apt-get install -y virtualbox && \
apt-get install -y vagrant && \
dpkg-reconfigure virtualbox-dkms
```
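A quick check that the tools landed and the kernel module built (nothing project-specific assumed):

```bash
VBoxManage --version    # prints the VirtualBox version if the install worked
vagrant --version
lsmod | grep -i vbox    # vboxdrv et al. should show up once the dkms module is loaded
```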
Running the kubelet by hand surfaces the flag-deprecation warnings:

```console
$ ./hyperkube kubelet --node-ip=10.88.81.5 --anonymous-auth=false --authentication-token-webhook --authorization-mode=Webhook --client-ca-file=/etc/kubernetes/ca.crt --cluster_dns=10.3.0.10 --cluster_domain=cluster.local --cni-conf-dir=/etc/kubernetes/cni/net.d --config=/etc/kubernetes/kubelet.config --kubeconfig=/etc/kubernetes/kubeconfig --lock-file=/var/run/lock/kubelet.lock --network-plugin=cni --pod-manifest-path=/etc/kubernetes/manifests --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --node-labels=node.kubernetes.io/node,metallb.universe.tf/my-asn=65000,metallb.universe.tf/peer-asn=65530 --register-with-taints= --address=10.88.81.5
Flag --anonymous-auth has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --authentication-token-webhook has been deprecated, This parameter should be set via the config file specified by
```
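Per the warnings, these settings belong in the file passed via --config; a minimal sketch of the equivalent KubeletConfiguration (field names from kubelet.config.k8s.io/v1beta1, values copied from the flags above):

```bash
cat > /etc/kubernetes/kubelet.config <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false                         # --anonymous-auth=false
  webhook:
    enabled: true                          # --authentication-token-webhook
  x509:
    clientCAFile: /etc/kubernetes/ca.crt   # --client-ca-file
authorization:
  mode: Webhook                            # --authorization-mode=Webhook
clusterDNS:
  - 10.3.0.10                              # --cluster_dns
clusterDomain: cluster.local               # --cluster_domain
readOnlyPort: 0                            # --read-only-port=0
EOF
```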
The cluster was in HEALTH_WARN state with backfill errors, so I followed the advice from https://centosquestions.com/how-to-resolve-ceph-pool-getting-activeremappedbackfill_toofull/.
See the health:
```
# ceph health detail
HEALTH_WARN 1 backfillfull osd(s); 1 pool(s) backfillfull
OSD_BACKFILLFULL 1 backfillfull osd(s)
    osd.8 is backfill full
POOL_BACKFILLFULL 1 pool(s) backfillfull
```
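The advice there boils down to giving backfill some headroom and rebalancing the full OSD; a hedged sketch (the ratio and weight values are illustrative, check your own thresholds first):

```bash
# inspect utilization and the current full/backfillfull/nearfull thresholds
ceph osd df
ceph osd dump | grep ratio

# temporarily raise the backfillfull threshold so recovery can proceed
ceph osd set-backfillfull-ratio 0.91

# and/or shift data off the full OSD
ceph osd reweight osd.8 0.85
```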
A minimal Helm Chart.yaml:

```yaml
apiVersion: v2
name: foo
version: 1.0.0
description: foo
keywords:
  - foo
```
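With this saved as foo/Chart.yaml (plus an empty templates/ directory), the chart skeleton can be sanity-checked:

```bash
helm lint ./foo
```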
Verified container_memory_working_set_bytes series for the linkerd pods:

```
✔container_memory_working_set_bytes{cluster="bc-sjc1-2020724212341",endpoint="https-metrics",id="/kubepods/burstable/podfe81ffd4-075e-4b3f-b71c-d6c5d6f3b4e8",instance="10.88.79.145:10250",job="kubelet",namespace="linkerd",node="bc-sjc1-2020724212341-workload-worker-2",pod="linkerd-grafana-5b758ccfdf-5g9db",prometheus="monitoring/prometheus-operator-prometheus",prometheus_replica="prometheus-prometheus-operator-prometheus-0",service="prometheus-operator-kubelet"}
✔container_memory_working_set_bytes{cluster="bc-sjc1-2020724212341",endpoint="https-metrics",id="/kubepods/burstable/podfcde4b00-606c-4ea3-8cc8-43f60ec496d7",instance="10.88.79.131:10250",job="kubelet",namespace="linkerd",node="bc-sjc1-2020724212341-workload-worker-5",pod="linkerd-prometheus-6599d898db-plwdl",prometheus="monitoring/prometheus-operator-prometheus",prometheus_replica="prometheus-prometheus-operator-prometheus-0",service="prometheus-operator-kubelet"}
✔container_memory_working_set_bytes{cluster="bc-sjc1-2020724212341",endpoint="https-met
```
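Judging by the label sets, something like the following query produced this output (the in-cluster Prometheus address is an assumption):

```bash
# query the Prometheus HTTP API for the working-set series in the linkerd namespace
curl -sG 'http://prometheus-operator-prometheus.monitoring.svc:9090/api/v1/query' \
  --data-urlencode 'query=container_memory_working_set_bytes{namespace="linkerd"}'
```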
```bash
#!/bin/bash
set -euo pipefail
set -x

# Source: https://docs.docker.com/engine/install/ubuntu/
apt-get update
apt-get -y remove docker docker-engine docker.io containerd runc || true
# (the notes cut off mid-command below; the package list is completed from the linked docs)
apt-get install -y \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common
```
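The linked page continues by adding Docker's apt repository and installing the engine; roughly as follows (commands from the Docker docs of that era, verify against the current page):

```bash
# trust Docker's signing key and add the stable repository
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt-get update
apt-get install -y docker-ce docker-ce-cli containerd.io
```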
```bash
#!/bin/bash
# Delete all unused (status=available) EBS volumes across every AWS region.
for region in $(aws ec2 describe-regions --region us-east-1 --output text | cut -f4); do
    echo "Region: $region"
    for vol in $(aws ec2 describe-volumes --region "$region" --filters "Name=status,Values=available" | jq -r '.Volumes[].VolumeId'); do
        echo "Volume: $vol"
        aws ec2 delete-volume --region "$region" --volume-id "$vol"
    done
done
```
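aws ec2 delete-volume accepts the standard EC2 --dry-run flag, so the loop can be rehearsed before it deletes anything:

```bash
aws ec2 delete-volume --region "$region" --volume-id "$vol" --dry-run
```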
The start of the Cluster API core components manifest:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  labels:
    cluster.x-k8s.io/provider: cluster-api
    control-plane: controller-manager
  name: capi-system
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
```
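This looks like the manifest clusterctl installs as the core provider; if so, it is usually pulled in via clusterctl rather than applied by hand (a sketch, assuming clusterctl is installed and kubeconfig points at the target cluster):

```bash
# installs the core provider plus the default kubeadm bootstrap/control-plane providers
clusterctl init
```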