@LucasVanHaaren
Last active August 23, 2024 12:30
Kubernetes-related CLI cheatsheets

helm

List charts from a custom repository / Artifact Hub

helm search repo $REPO_NAME
helm search hub $CHART_NAME

List installed releases on a cluster / specific namespace

helm list -A
helm list -n $NAMESPACE

Get values of a remote chart

helm show values $REPO/$CHART --version $VERSION

Get user-supplied values of a deployed chart

helm get values -n $NAMESPACE $RELEASE
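To see the fully computed values (chart defaults merged with user overrides) instead of only the user-supplied ones, `helm get values` accepts an `--all` flag:

```shell
# show all computed values for a deployed release, not just user overrides
helm get values -n $NAMESPACE $RELEASE --all
```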

kubectl

Merge multiple kubeconfig files into one

KUBECONFIG=<KUBECONFIG_1>:<KUBECONFIG_2> kubectl config view --flatten > <MERGED_KUBECONFIG>
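A concrete sketch of the command above (the file paths are examples). Writing through a temp file matters: redirecting straight onto `~/.kube/config` while `kubectl config view` is still reading it would truncate the file.

```shell
# back up the current kubeconfig before touching it
cp ~/.kube/config ~/.kube/config.bak

# merge the existing config with a freshly downloaded one (example path)
KUBECONFIG=~/.kube/config:./new-cluster.yaml kubectl config view --flatten > /tmp/merged-kubeconfig
mv /tmp/merged-kubeconfig ~/.kube/config
```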

Get image name and tag from a pod name

kubectl get pod $POD_NAME -o jsonpath="{.spec.containers[*].image}"
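The same jsonpath approach can be extended with a `range` expression to list the images of every pod in a namespace, one pod per line:

```shell
# one line per pod: pod name followed by its container images
kubectl get pods -n $NAMESPACE -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.containers[*].image}{"\n"}{end}'
```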

Run shell on a Node (using debug pod)

kubectl debug node/$NODE_NAME -it --image=alpine -- sh
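`kubectl debug node/...` mounts the node's root filesystem at `/host` inside the debug pod, so you can chroot into it to run host binaries directly:

```shell
# chroot into the node's filesystem to use host tools (journalctl, crictl, ...)
kubectl debug node/$NODE_NAME -it --image=alpine -- chroot /host sh
```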

Get cluster node IPs

# Internal IP
kubectl get nodes -o jsonpath='{range .items[*]}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}'

# External IP
kubectl get nodes -o jsonpath='{range .items[*]}{.status.addresses[?(@.type=="ExternalIP")].address}{"\n"}{end}'

# Format for nginx-ingress whitelist with trailing /32 CIDR
kubectl get nodes -o jsonpath='{range .items[*]}{.status.addresses[?(@.type=="ExternalIP")].address}{"/32, "}{end}'

Clean failed jobs

kubectl delete job $(kubectl get job -o=jsonpath='{.items[?(@.status.failed==1)].metadata.name}')
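Note that the jsonpath filter above only matches jobs whose `status.failed` is exactly 1; a job retried several times (non-zero `backoffLimit`) will have a higher count. A sketch using jq (assuming it is installed) that catches any non-zero failure count:

```shell
# delete every job with at least one failed pod
# (-r on xargs is GNU-specific: do nothing if the list is empty)
kubectl get jobs -o json \
  | jq -r '.items[] | select((.status.failed // 0) > 0) | .metadata.name' \
  | xargs -r kubectl delete job
```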

Copy files without a tar executable in the container

Workarounds using cat and tee when kubectl cp is not possible (kubectl cp requires tar inside the container)

# copy single file from pod to host 
kubectl exec -i $POD_NAME -c $CONTAINER_NAME -- cat /etc/passwd > ./container_passwd

# copy single file from host to pod
cat /etc/passwd | kubectl exec -i $POD_NAME -c $CONTAINER_NAME -- tee /tmp/host_passwd >/dev/null
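For binary files, piping through base64 avoids any risk of the stream being mangled in transit. This assumes a `base64` binary exists in the container image (busybox and coreutils both ship one); the paths are examples:

```shell
# pod -> host, binary-safe (requires base64 in the container)
kubectl exec -i $POD_NAME -c $CONTAINER_NAME -- base64 /path/in/pod | base64 -d > ./file

# host -> pod, binary-safe
base64 ./file | kubectl exec -i $POD_NAME -c $CONTAINER_NAME -- sh -c 'base64 -d > /tmp/file'
```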

velero

Filter backups by storage-location (using velero labels)

velero get backups -l velero.io/storage-location=scaleway-keycloak

Copy a storage-location to another cluster in read-only mode (using yq, prints to stdout)

kubectl get backupstoragelocations.velero.io -n velero $LOCATION_NAME -o yaml | yq 'del(.status) | del(.metadata.annotations.kubectl*,.metadata.creationTimestamp,.metadata.generation,.metadata.resourceVersion,.metadata.uid) | .spec.accessMode = "ReadOnly"' -
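Instead of printing to stdout, the cleaned manifest can be piped straight into the second cluster via a kube context (the context name here is an example):

```shell
# copy the storage-location onto the target cluster in one pipeline
kubectl get backupstoragelocations.velero.io -n velero $LOCATION_NAME -o yaml \
  | yq 'del(.status) | del(.metadata.annotations.kubectl*,.metadata.creationTimestamp,.metadata.generation,.metadata.resourceVersion,.metadata.uid) | .spec.accessMode = "ReadOnly"' - \
  | kubectl --context $TARGET_CONTEXT apply -n velero -f -
```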

Restoring on a second cluster

Excluding some resources which won't work on another cluster

velero create restore --from-backup=$BACKUP \
  --include-namespaces=$NS \
  --exclude-resources=CustomResourceDefinition,Certificate,Challenge,CertificateRequest,Order,Ingress

# alternative exclude list, keeping cert-manager Certificates and Challenges
velero create restore --from-backup=$BACKUP \
  --include-namespaces=$NS \
  --exclude-resources=CustomResourceDefinition,CertificateRequest,Order,Ingress

Restoring only persisted data

velero create restore --from-backup=keycloak-daily-20230224033035 \
  --include-namespaces=keycloak \
  --include-resources=pvc,pv

Restart velero

Sometimes velero restores get stuck in Phase: New for a long time, so velero needs to be restarted: vmware-tanzu/velero#3216

kubectl rollout restart deployment/velero -n velero