- Create a pod and mount a secret (the container, mount path, and secret name below are examples; the original snippet was truncated):
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: httpd
  name: httpd
spec:
  containers:
  - name: httpd
    image: httpd
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secret      # example mount path
  volumes:
  - name: secret-vol
    secret:
      secretName: mysecret        # example name, replace with your secret
- Debug a namespace stuck in Terminating. The right way is to find out why it is stuck. A very common reason is an unavailable API service, which prevents the cluster from finalizing namespaces:
kubectl get apiservice | grep False
You can also dump the namespace and check the JSON for obvious indications of what is blocking it:
kubectl get ns <your_namespace> -o json > stuck_ns.json
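Once you have the dump, the finalizers list and any condition messages in the JSON usually point at the culprit. A quick way to pull them out with plain grep (no jq required):

```shell
# show the finalizers that are holding the namespace open
grep -o '"finalizers":[^]]*]' stuck_ns.json
# show any condition messages explaining why finalization fails
grep -o '"message":"[^"]*"' stuck_ns.json
```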
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2012 Matt Martz
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: privileged-debug
spec:
  hostNetwork: true
  containers:
  - name: privileged-debug
    image: ubuntu
    command: ["sleep", "infinity"]   # keep the pod running for debugging
    securityContext:
      privileged: true               # actually grant privileged access
EOF
timestamp=$(date +%d-%m-%Y_%H-%M-%S)
echo "############## ${timestamp} ##############"
declare -a ignore_namespaces=("kube-system" "kube-node-lease" "kube-public")
namespaces=($(/usr/local/bin/kubectl get ns -o name))
for ns in "${namespaces[@]}"; do
  ns=${ns##*/}
  echo -e "\nProcessing Namespace: ${ns} "
  status=$(/usr/local/bin/kubectl get ns "${ns}" -o jsonpath='{.status.phase}')
  if [ "${status}" == "Active" ]; then
    if grep -q "${ns}" <<< "${ignore_namespaces[@]}"; then
      continue   # skip system namespaces
    fi
  fi
done
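The `${ns##*/}` expansion above strips everything up to the last `/`, turning the `namespace/<name>` form that `kubectl get ns -o name` prints into a bare name:

```shell
ns="namespace/default"
echo "${ns##*/}"   # prints: default
```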
[toplevel]
whoami = sts get-caller-identity
create-assume-role =
  !f() {
    aws iam create-role --role-name "${1}" \
      --assume-role-policy-document \
      "{\"Statement\":[{\
      \"Action\":\"sts:AssumeRole\",\
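These definitions belong in the AWS CLI alias file at `~/.aws/cli/alias` (the `[toplevel]` section header is required, and a leading `!` makes an alias run as a shell function). Once the file is in place, the aliases are invoked like built-in subcommands:

```shell
# create the alias file's directory if it does not exist yet
mkdir -p ~/.aws/cli
# then save the [toplevel] block above as ~/.aws/cli/alias;
# "aws whoami" now runs "aws sts get-caller-identity"
```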
curl -s https://docs.aws.amazon.com/eks/latest/userguide/doc-history.rss | grep "<title>Kubernetes version"
eksctl create cluster --version=1.14 --name suhas-eks-test --region us-east-1 --zones us-east-1a,us-east-1b --node-type t2.medium --nodes 2 --ssh-access=true --ssh-public-key basarkod-test
eksctl create cluster --without-nodegroup --version=1.14 --name delete-me --vpc-public-subnets=subnet-123,subnet-456
I use this script to check for any throttling issues for EKS or Kubernetes running on AWS. Feel free to customize it depending on your needs.
ls -1 /sys/class/net
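Each entry that `ls -1 /sys/class/net` prints is a sysfs directory exposing interface attributes, so the same path can report link state as well (standard Linux sysfs layout):

```shell
# print each interface name with its operational state
for dev in /sys/class/net/*; do
  echo "${dev##*/}: $(cat "$dev/operstate")"
done
```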
This outlines how to use an S3 bucket as a repository for your Helm charts.
Sources: