Created November 3, 2019 14:07
This file has been truncated.
wrapper.sh] [INFO] Wrapping Test Command: `bash -c gsutil cp -P gs://bentheelder-kind-ci-builds/latest/kind-linux-amd64 "${PATH%%:*}/kind" && gsutil cat gs://bentheelder-kind-ci-builds/latest/e2e-k8s.sh | sh`
wrapper.sh] [INFO] Running in: gcr.io/k8s-testimages/krte:v20191020-6567e5c-master
wrapper.sh] [INFO] See: https://github.com/kubernetes/test-infra/blob/master/images/krte/wrapper.sh
================================================================================
wrapper.sh] [SETUP] Performing pre-test setup ...
wrapper.sh] [SETUP] Bazel remote cache is enabled, generating .bazelrcs ...
create_bazel_cache_rcs.sh: Configuring '/root/.bazelrc' and '/etc/bazel.bazelrc' with
# ------------------------------------------------------------------------------
startup --host_jvm_args=-Dbazel.DigestFunction=sha256
build --experimental_remote_spawn_cache
build --remote_local_fallback
build --remote_http_cache=http://bazel-cache.default.svc.cluster.local.:8080/kubernetes/kubernetes,7f7656b63c121afcda83188b05b5fd13
# ------------------------------------------------------------------------------
wrapper.sh] [SETUP] Done setting up .bazelrcs
wrapper.sh] [SETUP] Docker in Docker enabled, initializing ...
Starting Docker: docker.
wrapper.sh] [SETUP] Waiting for Docker to be ready, sleeping for 1 seconds ...
wrapper.sh] [SETUP] Done setting up Docker in Docker.
wrapper.sh] [SETUP] Setting SOURCE_DATE_EPOCH for build reproducibility ...
wrapper.sh] [SETUP] exported SOURCE_DATE_EPOCH=1572763301
================================================================================
wrapper.sh] [TEST] Running Test Command: `bash -c gsutil cp -P gs://bentheelder-kind-ci-builds/latest/kind-linux-amd64 "${PATH%%:*}/kind" && gsutil cat gs://bentheelder-kind-ci-builds/latest/e2e-k8s.sh | sh` ...
Copying gs://bentheelder-kind-ci-builds/latest/kind-linux-amd64...
/ [0 files][ 0.0 B/ 9.4 MiB]
/ [1 files][ 9.4 MiB/ 9.4 MiB]
Operation completed over 1 objects/9.4 MiB.
+ main
+ mktemp -d
+ TMP_DIR=/tmp/tmp.0uZ1no4LVH
+ trap cleanup EXIT
+ export ARTIFACTS=/logs/artifacts
+ mkdir -p /logs/artifacts
+ KUBECONFIG=/root/.kube/kind-test-config
+ export KUBECONFIG
+ echo exported KUBECONFIG=/root/.kube/kind-test-config
exported KUBECONFIG=/root/.kube/kind-test-config
+ kind version
kind v0.6.0-alpha+0ff0546bc81543 go1.13.3 linux/amd64
+ BUILD_TYPE=bazel
+ [ bazel = bazel ]
+ build_with_bazel
+ [ true = true ]
+ create_bazel_cache_rcs.sh
create_bazel_cache_rcs.sh: Configuring '/root/.bazelrc' and '/etc/bazel.bazelrc' with
# ------------------------------------------------------------------------------
startup --host_jvm_args=-Dbazel.DigestFunction=sha256
build --experimental_remote_spawn_cache
build --remote_local_fallback
build --remote_http_cache=http://bazel-cache.default.svc.cluster.local.:8080/kubernetes/kubernetes,7f7656b63c121afcda83188b05b5fd13
# ------------------------------------------------------------------------------
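`SOURCE_DATE_EPOCH` is a Unix timestamp that build tools embed in place of the current time, so rebuilds of the same source are byte-identical. As a sanity check, the value exported above decodes to this job's start time (sketch assumes GNU `date`):

```shell
# Decode the exported SOURCE_DATE_EPOCH into a human-readable UTC time.
date -u -d @1572763301 +'%Y-%m-%d %H:%M:%S'
# → 2019-11-03 06:41:41
```

which matches the `I1103 06:54:...` kubeadm timestamps later in the log.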
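The `${PATH%%:*}` in the wrapped command is POSIX parameter expansion: `%%:*` deletes the longest suffix matching `:*`, leaving only the first `PATH` entry, so the `kind` binary lands in a directory that is already on the search path. A minimal illustration with a made-up value:

```shell
# ${var%%pattern} strips the longest matching suffix; with the pattern ':*'
# only the first colon-separated entry survives.
demo_path="/usr/local/bin:/usr/bin:/bin"
echo "${demo_path%%:*}"
# → /usr/local/bin
```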
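The config block above appears twice because both the wrapper and `e2e-k8s.sh` invoke `create_bazel_cache_rcs.sh`. Its effect is essentially to append those four lines to the bazelrc files; a rough sketch of that step (the target path here is a temp file, not the real `/root/.bazelrc`):

```shell
# Sketch: write the remote-cache settings shown above into a bazelrc file.
rc="$(mktemp)"
cat >"$rc" <<'EOF'
startup --host_jvm_args=-Dbazel.DigestFunction=sha256
build --experimental_remote_spawn_cache
build --remote_local_fallback
build --remote_http_cache=http://bazel-cache.default.svc.cluster.local.:8080/kubernetes/kubernetes,7f7656b63c121afcda83188b05b5fd13
EOF
grep -c '^build ' "$rc"
# → 3
```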
+ kind build node-image --type=bazel
Starting to build Kubernetes
$TEST_TMPDIR defined: output root default is '/bazel-scratch/.cache/bazel' and max_idle_secs default is '15'.
Extracting Bazel installation...
Starting local Bazel server and connecting to it...
WARNING: Option 'experimental_remote_spawn_cache' is deprecated
WARNING: Option 'experimental_remote_spawn_cache' is deprecated
WARNING: Option 'experimental_remote_spawn_cache' is deprecated
WARNING: Option 'experimental_remote_spawn_cache' is deprecated
INFO: Invocation ID: ca63bdfb-2099-49a8-a989-f344b0b3a22d
Loading:
Loading: 0 packages loaded
Loading: 0 packages loaded
Loading: 0 packages loaded
Loading: 0 packages loaded
Loading: 0 packages loaded
Loading: 3 packages loaded
currently loading: build
Loading: 3 packages loaded
currently loading: build
Analyzing: 4 targets (4 packages loaded, 0 targets configured)
Analyzing: 4 targets (15 packages loaded, 31 targets configured)
Analyzing: 4 targets (16 packages loaded, 31 targets configured)
Analyzing: 4 targets (16 packages loaded, 31 targets configured)
Analyzing: 4 targets (936 packages loaded, 8838 targets configured)
Analyzing: 4 targets (2184 packages loaded, 16012 targets configured)
INFO: Analysed 4 targets (2184 packages loaded, 17236 targets configured).
Building: checking cached actions
INFO: Found 4 targets...
[0 / 20] [-----] Expanding template external/bazel_tools/tools/build_defs/hash/sha256 [for host]
[61 / 1,547] GoStdlib external/io_bazel_rules_go/linux_amd64_pure_stripped/stdlib%/pkg; 3s remote-cache ... (8 actions, 7 running)
[1,007 / 2,680] GoCompilePkg staging/src/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/linux_amd64_pure_stripped/go_default_library%/k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions.a; 0s remote-cache ... (7 actions running)
[1,910 / 2,701] GoCompilePkg cmd/kubeadm/app/cmd/phases/join/linux_amd64_pure_stripped/go_default_library%/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/join.a; 0s remote-cache ... (7 actions, 6 running)
[2,484 / 3,070] GoLink cmd/kubectl/linux_amd64_pure_stripped/kubectl; 7s linux-sandbox ... (8 actions, 7 running)
[2,921 / 3,139] GoLink cmd/kube-apiserver/linux_amd64_pure_stripped/kube-apiserver; 13s linux-sandbox ... (8 actions, 7 running)
[3,115 / 3,144] GoLink cmd/kube-apiserver/linux_amd64_pure_stripped/kube-apiserver; 23s linux-sandbox ... (4 actions, 3 running)
[3,124 / 3,144] GoLink cmd/kubelet/kubelet; 19s linux-sandbox ... (3 actions, 2 running)
[3,134 / 3,144] ImageLayer build/kube-apiserver-internal-layer.tar; 4s linux-sandbox ... (2 actions, 1 running)
[3,143 / 3,144] Executing genrule //build:gen_kube-apiserver.tar; 1s linux-sandbox
INFO: Elapsed time: 132.036s, Critical Path: 76.59s
INFO: 3095 processes: 3037 remote cache hit, 58 linux-sandbox.
INFO: Build completed successfully, 3144 total actions
INFO: Build completed successfully, 3144 total actions
Finished building Kubernetes
Building node image in: /tmp/kind-node-image098734066
Starting image build ...
Building in kind-build-4073b6e5-ecf9-4c78-a22c-7afa63e4592b
fixed: k8s.gcr.io/kube-apiserver-amd64 -> k8s.gcr.io/kube-apiserver
fixed: k8s.gcr.io/kube-controller-manager-amd64 -> k8s.gcr.io/kube-controller-manager
fixed: k8s.gcr.io/kube-proxy-amd64 -> k8s.gcr.io/kube-proxy
fixed: k8s.gcr.io/kube-scheduler-amd64 -> k8s.gcr.io/kube-scheduler
Detected built images: k8s.gcr.io/kube-apiserver:v1.18.0-alpha.0.178_0c66e64b140011, k8s.gcr.io/kube-controller-manager:v1.18.0-alpha.0.178_0c66e64b140011, k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.178_0c66e64b140011, k8s.gcr.io/kube-scheduler:v1.18.0-alpha.0.178_0c66e64b140011
Pulling: kindest/kindnetd:0.5.3@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555
Pulling: k8s.gcr.io/pause:3.1
Pulling: k8s.gcr.io/etcd:3.4.3-0
Pulling: k8s.gcr.io/coredns:1.6.2
fixed: k8s.gcr.io/kube-proxy-amd64 -> k8s.gcr.io/kube-proxy
fixed: k8s.gcr.io/kube-proxy-amd64 -> k8s.gcr.io/kube-proxy
fixed: k8s.gcr.io/kube-scheduler-amd64 -> k8s.gcr.io/kube-scheduler
fixed: k8s.gcr.io/kube-scheduler-amd64 -> k8s.gcr.io/kube-scheduler
fixed: k8s.gcr.io/kube-controller-manager-amd64 -> k8s.gcr.io/kube-controller-manager
fixed: k8s.gcr.io/kube-controller-manager-amd64 -> k8s.gcr.io/kube-controller-manager
fixed: k8s.gcr.io/kube-apiserver-amd64 -> k8s.gcr.io/kube-apiserver
fixed: k8s.gcr.io/kube-apiserver-amd64 -> k8s.gcr.io/kube-apiserver
sha256:a7e030da540ce82163d6b8fd1a997193f89ac0620573fd52a257f577f6b14ba2
Image build completed.
+ bazel build //cmd/kubectl //test/e2e:e2e.test //vendor/github.com/onsi/ginkgo/ginkgo
$TEST_TMPDIR defined: output root default is '/bazel-scratch/.cache/bazel' and max_idle_secs default is '15'.
Starting local Bazel server and connecting to it...
WARNING: Option 'experimental_remote_spawn_cache' is deprecated
WARNING: Option 'experimental_remote_spawn_cache' is deprecated
WARNING: Option 'experimental_remote_spawn_cache' is deprecated
WARNING: Option 'experimental_remote_spawn_cache' is deprecated
INFO: Invocation ID: 4302c021-5dab-4f55-ba83-1ab76d1bb27a
Loading:
Loading: 0 packages loaded
Analyzing: 3 targets (3 packages loaded)
Analyzing: 3 targets (3 packages loaded, 0 targets configured)
Analyzing: 3 targets (12 packages loaded, 19 targets configured)
Analyzing: 3 targets (172 packages loaded, 3807 targets configured)
Analyzing: 3 targets (495 packages loaded, 7877 targets configured)
Analyzing: 3 targets (653 packages loaded, 8573 targets configured)
Analyzing: 3 targets (878 packages loaded, 9627 targets configured)
Analyzing: 3 targets (1132 packages loaded, 11836 targets configured)
Analyzing: 3 targets (1460 packages loaded, 13305 targets configured)
Analyzing: 3 targets (1850 packages loaded, 15902 targets configured)
Analyzing: 3 targets (1857 packages loaded, 16775 targets configured)
Analyzing: 3 targets (1858 packages loaded, 16895 targets configured)
INFO: Analysed 3 targets (1858 packages loaded, 16899 targets configured).
INFO: Found 3 targets...
[1 / 16] [-----] BazelWorkspaceStatusAction stable-status.txt
[7 / 1,145] checking cached actions
[487 / 2,208] GoCompilePkg vendor/github.com/onsi/gomega/matchers/linux_amd64_stripped/go_default_library%/k8s.io/kubernetes/vendor/github.com/onsi/gomega/matchers.a; 0s remote-cache
[1,324 / 2,208] GoCompilePkg staging/src/k8s.io/apiextensions-apiserver/pkg/generated/openapi/linux_amd64_stripped/go_default_library%/k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/generated/openapi.a; 0s remote-cache ... (7 actions running)
[2,092 / 2,208] GoCompilePkg test/utils/linux_amd64_stripped/go_default_library%/k8s.io/kubernetes/test/utils.a; 0s remote-cache ... (3 actions running)
[2,206 / 2,208] GoLink test/e2e/linux_amd64_stripped/_go_default_test-cgo; 1s linux-sandbox
[2,206 / 2,208] GoLink test/e2e/linux_amd64_stripped/_go_default_test-cgo; 11s linux-sandbox
[2,207 / 2,208] [-----] Executing genrule //test/e2e:gen_e2e.test
INFO: Elapsed time: 63.810s, Critical Path: 35.27s
INFO: 528 processes: 524 remote cache hit, 4 linux-sandbox.
INFO: Build completed successfully, 533 total actions
INFO: Build completed successfully, 533 total actions
+ mkdir -p _output/bin/
+ cp bazel-bin/test/e2e/e2e.test _output/bin/
+ find /home/prow/go/src/k8s.io/kubernetes/bazel-bin/ -name kubectl -type f
+ dirname /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl
+ PATH=/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped:/home/prow/go/bin:/google-cloud-sdk/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+ export PATH
+ [ -n ]
+ create_cluster
+ cat
+ NUM_NODES=2
+ KIND_CREATE_ATTEMPTED=true
+ kind create cluster --image=kindest/node:latest --retain --wait=1m -v=3 --config=/logs/artifacts/kind-config.yaml
DEBUG: exec/local.go:116] Running: "docker ps -q -a --no-trunc --filter label=io.k8s.sigs.kind.cluster=kind --format '{{.Names}}'"
Creating cluster "kind" ...
• Ensuring node image (kindest/node:latest) 🖼 ...
DEBUG: exec/local.go:116] Running: "docker inspect --type=image kindest/node:latest"
DEBUG: docker/images.go:58] Image: kindest/node:latest present locally
✓ Ensuring node image (kindest/node:latest) 🖼
• Preparing nodes 📦 ...
DEBUG: exec/local.go:116] Running: "docker info --format ''"'"'{{json .SecurityOptions}}'"'"''"
DEBUG: exec/local.go:116] Running: "docker run --hostname kind-worker2 --name kind-worker2 --label io.k8s.sigs.kind.role=worker --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run --volume /var --volume /lib/modules:/lib/modules:ro --detach --tty --label io.k8s.sigs.kind.cluster=kind kindest/node:latest"
DEBUG: exec/local.go:116] Running: "docker run --hostname kind-worker --name kind-worker --label io.k8s.sigs.kind.role=worker --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run --volume /var --volume /lib/modules:/lib/modules:ro --detach --tty --label io.k8s.sigs.kind.cluster=kind kindest/node:latest"
DEBUG: exec/local.go:116] Running: "docker run --hostname kind-control-plane --name kind-control-plane --label io.k8s.sigs.kind.role=control-plane --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run --volume /var --volume /lib/modules:/lib/modules:ro --detach --tty --label io.k8s.sigs.kind.cluster=kind --publish=127.0.0.1:33605:6443/TCP kindest/node:latest"
✓ Preparing nodes 📦
DEBUG: exec/local.go:116] Running: "docker ps -q -a --no-trunc --filter label=io.k8s.sigs.kind.cluster=kind --format '{{.Names}}'"
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker2"
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-control-plane"
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker"
• Creating kubeadm config 📜 ...
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker2"
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-control-plane"
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker"
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker2"
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-control-plane"
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker"
DEBUG: exec/local.go:116] Running: "docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}' kind-control-plane"
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker2"
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-control-plane"
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker"
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker2"
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-control-plane"
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker"
DEBUG: exec/local.go:116] Running: "docker exec --privileged kind-worker cat /kind/version"
DEBUG: exec/local.go:116] Running: "docker exec --privileged kind-control-plane cat /kind/version"
DEBUG: exec/local.go:116] Running: "docker exec --privileged kind-worker2 cat /kind/version"
DEBUG: exec/local.go:116] Running: "docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}' kind-worker2"
DEBUG: exec/local.go:116] Running: "docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}' kind-worker"
DEBUG: exec/local.go:116] Running: "docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}' kind-control-plane"
DEBUG: kubeadm/config.go:445] Configuration Input data: {kind v1.18.0-alpha.0.178+0c66e64b140011 172.17.0.3:6443 6443 127.0.0.1 false 172.17.0.2 abcdef.0123456789abcdef 10.244.0.0/16 10.96.0.0/12 false {}}
DEBUG: kubeadm/config.go:445] Configuration Input data: {kind v1.18.0-alpha.0.178+0c66e64b140011 172.17.0.3:6443 6443 127.0.0.1 false 172.17.0.4 abcdef.0123456789abcdef 10.244.0.0/16 10.96.0.0/12 false {}}
DEBUG: config/config.go:209] Using kubeadm config:
apiServer:
  certSANs:
  - localhost
  - 127.0.0.1
apiVersion: kubeadm.k8s.io/v1beta2
clusterName: kind
controlPlaneEndpoint: 172.17.0.3:6443
controllerManager:
  extraArgs:
    enable-hostpath-provisioner: "true"
kind: ClusterConfiguration
kubernetesVersion: v1.18.0-alpha.0.178+0c66e64b140011
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler:
  extraArgs: null
---
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- token: abcdef.0123456789abcdef
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.17.0.2
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.2
---
apiVersion: kubeadm.k8s.io/v1beta2
discovery:
  bootstrapToken:
    apiServerEndpoint: 172.17.0.3:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.2
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
  nodefs.inodesFree: 0%
imageGCHighThresholdPercent: 100
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
DEBUG: exec/local.go:116] Running: "docker exec --privileged kind-worker2 mkdir -p /kind"
DEBUG: config/config.go:209] Using kubeadm config:
apiServer:
  certSANs:
  - localhost
  - 127.0.0.1
apiVersion: kubeadm.k8s.io/v1beta2
clusterName: kind
controlPlaneEndpoint: 172.17.0.3:6443
controllerManager:
  extraArgs:
    enable-hostpath-provisioner: "true"
kind: ClusterConfiguration
kubernetesVersion: v1.18.0-alpha.0.178+0c66e64b140011
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler:
  extraArgs: null
---
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- token: abcdef.0123456789abcdef
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.17.0.4
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.4
---
apiVersion: kubeadm.k8s.io/v1beta2
discovery:
  bootstrapToken:
    apiServerEndpoint: 172.17.0.3:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.4
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
  nodefs.inodesFree: 0%
imageGCHighThresholdPercent: 100
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
DEBUG: exec/local.go:116] Running: "docker exec --privileged kind-worker mkdir -p /kind"
DEBUG: kubeadm/config.go:445] Configuration Input data: {kind v1.18.0-alpha.0.178+0c66e64b140011 172.17.0.3:6443 6443 127.0.0.1 true 172.17.0.3 abcdef.0123456789abcdef 10.244.0.0/16 10.96.0.0/12 false {}}
DEBUG: config/config.go:209] Using kubeadm config:
apiServer:
  certSANs:
  - localhost
  - 127.0.0.1
apiVersion: kubeadm.k8s.io/v1beta2
clusterName: kind
controlPlaneEndpoint: 172.17.0.3:6443
controllerManager:
  extraArgs:
    enable-hostpath-provisioner: "true"
kind: ClusterConfiguration
kubernetesVersion: v1.18.0-alpha.0.178+0c66e64b140011
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler:
  extraArgs: null
---
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- token: abcdef.0123456789abcdef
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.17.0.3
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.3
---
apiVersion: kubeadm.k8s.io/v1beta2
controlPlane:
  localAPIEndpoint:
    advertiseAddress: 172.17.0.3
    bindPort: 6443
discovery:
  bootstrapToken:
    apiServerEndpoint: 172.17.0.3:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.3
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
  nodefs.inodesFree: 0%
imageGCHighThresholdPercent: 100
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
DEBUG: exec/local.go:116] Running: "docker exec --privileged kind-control-plane mkdir -p /kind"
DEBUG: exec/local.go:116] Running: "docker exec --privileged -i kind-worker2 cp /dev/stdin /kind/kubeadm.conf"
DEBUG: exec/local.go:116] Running: "docker exec --privileged -i kind-worker cp /dev/stdin /kind/kubeadm.conf"
DEBUG: exec/local.go:116] Running: "docker exec --privileged -i kind-control-plane cp /dev/stdin /kind/kubeadm.conf"
✓ Creating kubeadm config 📜
• Starting control-plane 🕹️ ...
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker2"
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-control-plane"
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker"
DEBUG: exec/local.go:116] Running: "docker exec --privileged kind-control-plane kubeadm init --ignore-preflight-errors=all --config=/kind/kubeadm.conf --skip-token-print --v=6"
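kind writes one multi-document kubeadm config per node; the three dumps above differ only in `advertiseAddress`/`node-ip`, plus the extra `controlPlane` stanza on the control-plane node. The document kinds in such a `---`-separated stream can be listed with a simple grep; a self-contained sketch over a trimmed stand-in file:

```shell
# List the YAML document kinds in a kind-generated kubeadm.conf.
# The heredoc is a trimmed stand-in for the real configs shown above.
conf="$(mktemp)"
cat >"$conf" <<'EOF'
kind: ClusterConfiguration
---
kind: InitConfiguration
---
kind: JoinConfiguration
---
kind: KubeletConfiguration
---
kind: KubeProxyConfiguration
EOF
grep '^kind: ' "$conf"
```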
DEBUG: kubeadminit/init.go:74] I1103 06:54:42.437440 138 initconfiguration.go:207] loading configuration from "/kind/kubeadm.conf"
[config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta2, Kind=JoinConfiguration
I1103 06:54:42.447314 138 feature_gate.go:216] feature gates: &{map[]}
[init] Using Kubernetes version: v1.18.0-alpha.0.178+0c66e64b140011
[preflight] Running pre-flight checks
I1103 06:54:42.447992 138 checks.go:577] validating Kubernetes and kubeadm version
I1103 06:54:42.448219 138 checks.go:166] validating if the firewall is enabled and active
I1103 06:54:42.461829 138 checks.go:201] validating availability of port 6443
I1103 06:54:42.462183 138 checks.go:201] validating availability of port 10251
I1103 06:54:42.462226 138 checks.go:201] validating availability of port 10252
I1103 06:54:42.462274 138 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I1103 06:54:42.462322 138 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I1103 06:54:42.462340 138 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I1103 06:54:42.462353 138 checks.go:286] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I1103 06:54:42.462369 138 checks.go:432] validating if the connectivity type is via proxy or direct
I1103 06:54:42.462426 138 checks.go:471] validating http connectivity to first IP address in the CIDR
I1103 06:54:42.462447 138 checks.go:471] validating http connectivity to first IP address in the CIDR
I1103 06:54:42.462459 138 checks.go:102] validating the container runtime
I1103 06:54:42.486634 138 checks.go:376] validating the presence of executable crictl
I1103 06:54:42.486709 138 checks.go:335] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
I1103 06:54:42.486895 138 checks.go:335] validating the contents of file /proc/sys/net/ipv4/ip_forward
I1103 06:54:42.486995 138 checks.go:649] validating whether swap is enabled or not
I1103 06:54:42.487034 138 checks.go:376] validating the presence of executable ip
I1103 06:54:42.487127 138 checks.go:376] validating the presence of executable iptables
I1103 06:54:42.487279 138 checks.go:376] validating the presence of executable mount
I1103 06:54:42.487322 138 checks.go:376] validating the presence of executable nsenter
I1103 06:54:42.487375 138 checks.go:376] validating the presence of executable ebtables
I1103 06:54:42.487411 138 checks.go:376] validating the presence of executable ethtool
I1103 06:54:42.487439 138 checks.go:376] validating the presence of executable socat
I1103 06:54:42.487500 138 checks.go:376] validating the presence of executable tc
I1103 06:54:42.487524 138 checks.go:376] validating the presence of executable touch
I1103 06:54:42.487563 138 checks.go:520] running all checks
I1103 06:54:42.493529 138 checks.go:406] checking whether the given node name is reachable using net.LookupHost
I1103 06:54:42.493881 138 checks.go:618] validating kubelet version
I1103 06:54:42.600742 138 checks.go:128] validating if the service is enabled and active
I1103 06:54:42.625103 138 checks.go:201] validating availability of port 10250
I1103 06:54:42.625585 138 checks.go:201] validating availability of port 2379
I1103 06:54:42.625755 138 checks.go:201] validating availability of port 2380
I1103 06:54:42.625934 138 checks.go:249] validating the existence and emptiness of directory /var/lib/etcd
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I1103 06:54:42.642072 138 checks.go:838] image exists: k8s.gcr.io/kube-apiserver:v1.18.0-alpha.0.178_0c66e64b140011
I1103 06:54:42.655767 138 checks.go:838] image exists: k8s.gcr.io/kube-controller-manager:v1.18.0-alpha.0.178_0c66e64b140011
I1103 06:54:42.669250 138 checks.go:838] image exists: k8s.gcr.io/kube-scheduler:v1.18.0-alpha.0.178_0c66e64b140011
I1103 06:54:42.683247 138 checks.go:838] image exists: k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.178_0c66e64b140011
I1103 06:54:42.704181 138 checks.go:838] image exists: k8s.gcr.io/pause:3.1
I1103 06:54:42.717674 138 checks.go:838] image exists: k8s.gcr.io/etcd:3.4.3-0
I1103 06:54:42.729600 138 checks.go:838] image exists: k8s.gcr.io/coredns:1.6.2
I1103 06:54:42.729659 138 kubelet.go:61] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1103 06:54:42.772283 138 kubelet.go:79] Starting the kubelet
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I1103 06:54:42.890977 138 certs.go:104] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kind-control-plane kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 172.17.0.3 172.17.0.3 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
I1103 06:54:43.549128 138 certs.go:104] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
I1103 06:54:43.944482 138 certs.go:104] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kind-control-plane localhost] and IPs [172.17.0.3 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kind-control-plane localhost] and IPs [172.17.0.3 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I1103 06:54:45.487973 138 certs.go:70] creating a new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1103 06:54:45.669426 138 kubeconfig.go:79] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I1103 06:54:45.861455 138 kubeconfig.go:79] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I1103 06:54:46.332341 138 kubeconfig.go:79] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1103 06:54:46.816266 138 kubeconfig.go:79] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
I1103 06:54:47.474436 138 manifests.go:90] [control-plane] getting StaticPodSpecs
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I1103 06:54:47.485277 138 manifests.go:115] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I1103 06:54:47.485562 138 manifests.go:90] [control-plane] getting StaticPodSpecs
W1103 06:54:47.485828 138 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I1103 06:54:47.487648 138 manifests.go:115] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I1103 06:54:47.487763 138 manifests.go:90] [control-plane] getting StaticPodSpecs
W1103 06:54:47.487902 138 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I1103 06:54:47.488992 138 manifests.go:115] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1103 06:54:47.490104 138 local.go:69] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I1103 06:54:47.490127 138 waitcontrolplane.go:80] [wait-control-plane] Waiting for the API server to be healthy
I1103 06:54:47.491386 138 loader.go:375] Config loaded from file: /etc/kubernetes/admin.conf
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I1103 06:54:47.492863 138 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s in 0 milliseconds
I1103 06:54:47.993705 138 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s in 0 milliseconds
I1103 06:54:48.493773 138 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s in 0 milliseconds
I1103 06:54:48.994619 138 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s in 0 milliseconds
I1103 06:54:49.494222 138 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s in 0 milliseconds
I1103 06:54:49.994282 138 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s in 0 milliseconds
I1103 06:54:50.494015 138 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s in 0 milliseconds
I1103 06:54:50.993973 138 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s in 0 milliseconds
I1103 06:54:51.493732 138 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s in 0 milliseconds
I1103 06:54:51.993846 138 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s in 0 milliseconds
I1103 06:54:52.494183 138 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s in 0 milliseconds
I1103 06:54:57.368027 138 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s 500 Internal Server Error in 4374 milliseconds
I1103 06:54:57.495956 138 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s 500 Internal Server Error in 2 milliseconds
I1103 06:54:57.996715 138 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s 500 Internal Server Error in 3 milliseconds
I1103 06:54:58.498211 138 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s 500 Internal Server Error in 4 milliseconds | |
[apiclient] All control plane components are healthy after 11.503138 seconds | |
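The wait-control-plane phase above is a plain retry loop: kubeadm GETs `/healthz` every half second, first getting connection failures, then 500s while components start, and finally a 200. A minimal stand-alone sketch of that pattern (the `probe` function is a stub standing in for the real HTTPS health check; it fails twice, then succeeds, so the loop can run by itself):

```shell
# Retry loop in the spirit of kubeadm's wait-control-plane phase.
# `probe` is a stand-in for an HTTPS GET of /healthz; here it fails
# twice and then succeeds so the example is self-contained.
attempts=0
probe() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]
}
max_tries=20   # bail out eventually; kubeadm itself allows up to 4m0s
i=0
until probe; do
  i=$((i + 1))
  if [ "$i" -ge "$max_tries" ]; then
    echo "health check did not pass in time" >&2
    exit 1
  fi
  sleep 0.5
done
echo "healthy after $attempts attempts"
```

With the stub above this prints `healthy after 3 attempts`; against a real apiserver the probe would be the `/healthz` request and the loop would run until the endpoint returns 200 OK.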
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace | |
I1103 06:54:58.995448 138 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s 200 OK in 2 milliseconds | |
I1103 06:54:58.995555 138 uploadconfig.go:108] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap | |
I1103 06:54:59.002852 138 round_trippers.go:443] POST https://172.17.0.3:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 4 milliseconds | |
I1103 06:54:59.009583 138 round_trippers.go:443] POST https://172.17.0.3:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles 201 Created in 5 milliseconds | |
I1103 06:54:59.015371 138 round_trippers.go:443] POST https://172.17.0.3:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings 201 Created in 4 milliseconds | |
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster | |
I1103 06:54:59.016484 138 uploadconfig.go:122] [upload-config] Uploading the kubelet component config to a ConfigMap | |
I1103 06:54:59.023519 138 round_trippers.go:443] POST https://172.17.0.3:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 5 milliseconds | |
I1103 06:54:59.027330 138 round_trippers.go:443] POST https://172.17.0.3:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles 201 Created in 3 milliseconds | |
I1103 06:54:59.031695 138 round_trippers.go:443] POST https://172.17.0.3:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings 201 Created in 3 milliseconds | |
I1103 06:54:59.031902 138 uploadconfig.go:127] [upload-config] Preserving the CRISocket information for the control-plane node | |
I1103 06:54:59.031928 138 patchnode.go:30] [patchnode] Uploading the CRI Socket information "/run/containerd/containerd.sock" to the Node API object "kind-control-plane" as an annotation | |
I1103 06:54:59.535391 138 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-control-plane 200 OK in 3 milliseconds | |
I1103 06:54:59.543352 138 round_trippers.go:443] PATCH https://172.17.0.3:6443/api/v1/nodes/kind-control-plane 200 OK in 5 milliseconds | |
[upload-certs] Skipping phase. Please see --upload-certs | |
[mark-control-plane] Marking the node kind-control-plane as control-plane by adding the label "node-role.kubernetes.io/master=''" | |
[mark-control-plane] Marking the node kind-control-plane as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule] | |
I1103 06:55:00.047499 138 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-control-plane 200 OK in 3 milliseconds | |
I1103 06:55:00.052782 138 round_trippers.go:443] PATCH https://172.17.0.3:6443/api/v1/nodes/kind-control-plane 200 OK in 4 milliseconds | |
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles | |
I1103 06:55:00.056855 138 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-system/secrets/bootstrap-token-abcdef 404 Not Found in 2 milliseconds | |
I1103 06:55:00.063542 138 round_trippers.go:443] POST https://172.17.0.3:6443/api/v1/namespaces/kube-system/secrets 201 Created in 5 milliseconds | |
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials | |
I1103 06:55:00.074851 138 round_trippers.go:443] POST https://172.17.0.3:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 10 milliseconds | |
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token | |
I1103 06:55:00.078725 138 round_trippers.go:443] POST https://172.17.0.3:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 2 milliseconds | |
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster | |
I1103 06:55:00.082191 138 round_trippers.go:443] POST https://172.17.0.3:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 3 milliseconds | |
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace | |
I1103 06:55:00.082366 138 clusterinfo.go:45] [bootstrap-token] loading admin kubeconfig | |
I1103 06:55:00.083079 138 loader.go:375] Config loaded from file: /etc/kubernetes/admin.conf | |
I1103 06:55:00.083275 138 clusterinfo.go:53] [bootstrap-token] copying the cluster from admin.conf to the bootstrap kubeconfig | |
I1103 06:55:00.083849 138 clusterinfo.go:65] [bootstrap-token] creating/updating ConfigMap in kube-public namespace | |
I1103 06:55:00.087183 138 round_trippers.go:443] POST https://172.17.0.3:6443/api/v1/namespaces/kube-public/configmaps 201 Created in 2 milliseconds | |
I1103 06:55:00.087481 138 clusterinfo.go:79] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespace | |
I1103 06:55:00.091263 138 round_trippers.go:443] POST https://172.17.0.3:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles 201 Created in 3 milliseconds | |
I1103 06:55:00.095529 138 round_trippers.go:443] POST https://172.17.0.3:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings 201 Created in 3 milliseconds | |
I1103 06:55:00.098081 138 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-system/configmaps/kube-dns 404 Not Found in 2 milliseconds | |
I1103 06:55:00.101013 138 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-system/configmaps/coredns 404 Not Found in 2 milliseconds | |
I1103 06:55:00.104440 138 round_trippers.go:443] POST https://172.17.0.3:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 2 milliseconds | |
I1103 06:55:00.111298 138 round_trippers.go:443] POST https://172.17.0.3:6443/apis/rbac.authorization.k8s.io/v1/clusterroles 201 Created in 5 milliseconds | |
I1103 06:55:00.117340 138 round_trippers.go:443] POST https://172.17.0.3:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 5 milliseconds | |
I1103 06:55:00.129526 138 round_trippers.go:443] POST https://172.17.0.3:6443/api/v1/namespaces/kube-system/serviceaccounts 201 Created in 3 milliseconds | |
I1103 06:55:00.160752 138 round_trippers.go:443] POST https://172.17.0.3:6443/apis/apps/v1/namespaces/kube-system/deployments 201 Created in 16 milliseconds | |
I1103 06:55:00.178331 138 round_trippers.go:443] POST https://172.17.0.3:6443/api/v1/namespaces/kube-system/services 201 Created in 15 milliseconds | |
[addons] Applied essential addon: CoreDNS | |
I1103 06:55:00.248951 138 request.go:573] Throttling request took 69.910732ms, request: POST:https://172.17.0.3:6443/api/v1/namespaces/kube-system/serviceaccounts | |
I1103 06:55:00.253577 138 round_trippers.go:443] POST https://172.17.0.3:6443/api/v1/namespaces/kube-system/serviceaccounts 201 Created in 4 milliseconds | |
I1103 06:55:00.444260 138 request.go:573] Throttling request took 188.357153ms, request: POST:https://172.17.0.3:6443/api/v1/namespaces/kube-system/configmaps | |
I1103 06:55:00.449354 138 round_trippers.go:443] POST https://172.17.0.3:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 4 milliseconds | |
I1103 06:55:00.467168 138 round_trippers.go:443] POST https://172.17.0.3:6443/apis/apps/v1/namespaces/kube-system/daemonsets 201 Created in 11 milliseconds | |
I1103 06:55:00.470814 138 round_trippers.go:443] POST https://172.17.0.3:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 3 milliseconds | |
I1103 06:55:00.474684 138 round_trippers.go:443] POST https://172.17.0.3:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles 201 Created in 3 milliseconds | |
I1103 06:55:00.481900 138 round_trippers.go:443] POST https://172.17.0.3:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings 201 Created in 6 milliseconds | |
I1103 06:55:00.483053 138 loader.go:375] Config loaded from file: /etc/kubernetes/admin.conf | |
I1103 06:55:00.483977 138 loader.go:375] Config loaded from file: /etc/kubernetes/admin.conf | |
[addons] Applied essential addon: kube-proxy | |
Your Kubernetes control-plane has initialized successfully! | |
To start using your cluster, you need to run the following as a regular user: | |
mkdir -p $HOME/.kube | |
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config | |
sudo chown $(id -u):$(id -g) $HOME/.kube/config | |
You should now deploy a pod network to the cluster. | |
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: | |
https://kubernetes.io/docs/concepts/cluster-administration/addons/ | |
You can now join any number of control-plane nodes by copying certificate authorities | |
and service account keys on each node and then running the following as root: | |
kubeadm join 172.17.0.3:6443 --token <value withheld> \ | |
--discovery-token-ca-cert-hash sha256:fa13192441ccaa921333b63599081f417d2326651c1f39a45a302f072024ac70 \ | |
--control-plane | |
Then you can join any number of worker nodes by running the following on each as root: | |
kubeadm join 172.17.0.3:6443 --token <value withheld> \ | |
--discovery-token-ca-cert-hash sha256:fa13192441ccaa921333b63599081f417d2326651c1f39a45a302f072024ac70 | |
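The `sha256:...` value in the join commands above is not arbitrary: per the kubeadm documentation it is the SHA-256 digest of the cluster CA certificate's DER-encoded public key, so it can be recomputed from `/etc/kubernetes/pki/ca.crt` (and a fresh join line can be printed with `kubeadm token create --print-join-command` if the token expires). A sketch of the hash computation; a throwaway self-signed certificate is generated here so the pipeline runs stand-alone, whereas on a real cluster you would point it at the CA cert:

```shell
# Derive a --discovery-token-ca-cert-hash value from a CA certificate.
# A temporary self-signed cert substitutes for /etc/kubernetes/pki/ca.crt.
workdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=example-ca" \
  -keyout "$workdir/ca.key" -out "$workdir/ca.crt" 2>/dev/null
# Extract the public key, convert it to DER, and hash it with SHA-256.
hash=$(openssl x509 -pubkey -noout -in "$workdir/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 \
  | awk '{print $NF}')
echo "sha256:$hash"
rm -rf "$workdir"
```

Run against the real CA cert, the printed value would match the `sha256:fa1319...` digest kubeadm embedded in the join command.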
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker2" | |
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-control-plane" | |
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker" | |
✓ Starting control-plane 🕹️ | |
• Installing CNI 🔌 ... | |
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker2" | |
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-control-plane" | |
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker" | |
DEBUG: exec/local.go:116] Running: "docker exec --privileged kind-control-plane cat /kind/manifests/default-cni.yaml" | |
DEBUG: exec/local.go:116] Running: "docker exec --privileged -i kind-control-plane kubectl create --kubeconfig=/etc/kubernetes/admin.conf -f -" | |
✓ Installing CNI 🔌 | |
• Installing StorageClass 💾 ... | |
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker2" | |
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-control-plane" | |
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker" | |
DEBUG: exec/local.go:116] Running: "docker exec --privileged -i kind-control-plane kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f -" | |
✓ Installing StorageClass 💾 | |
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker2" | |
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-control-plane" | |
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker" | |
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker2" | |
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-control-plane" | |
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker" | |
• Joining worker nodes 🚜 ... | |
DEBUG: exec/local.go:116] Running: "docker exec --privileged kind-worker2 kubeadm join --config /kind/kubeadm.conf --ignore-preflight-errors=all --v=6" | |
DEBUG: exec/local.go:116] Running: "docker exec --privileged kind-worker kubeadm join --config /kind/kubeadm.conf --ignore-preflight-errors=all --v=6" | |
DEBUG: kubeadmjoin/join.go:133] W1103 06:55:04.316958 352 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set. | |
I1103 06:55:04.317041 352 join.go:371] [preflight] found NodeName empty; using OS hostname as NodeName | |
I1103 06:55:04.317060 352 joinconfiguration.go:75] loading configuration from "/kind/kubeadm.conf" | |
I1103 06:55:04.319574 352 preflight.go:90] [preflight] Running general checks | |
I1103 06:55:04.319682 352 checks.go:249] validating the existence and emptiness of directory /etc/kubernetes/manifests | |
[preflight] Running pre-flight checks | |
I1103 06:55:04.319780 352 checks.go:286] validating the existence of file /etc/kubernetes/kubelet.conf | |
I1103 06:55:04.319823 352 checks.go:286] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf | |
I1103 06:55:04.319837 352 checks.go:102] validating the container runtime | |
I1103 06:55:04.335633 352 checks.go:376] validating the presence of executable crictl | |
I1103 06:55:04.335696 352 checks.go:335] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables | |
[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist | |
I1103 06:55:04.335784 352 checks.go:335] validating the contents of file /proc/sys/net/ipv4/ip_forward | |
I1103 06:55:04.336038 352 checks.go:649] validating whether swap is enabled or not | |
I1103 06:55:04.336202 352 checks.go:376] validating the presence of executable ip | |
I1103 06:55:04.336336 352 checks.go:376] validating the presence of executable iptables | |
I1103 06:55:04.336438 352 checks.go:376] validating the presence of executable mount | |
I1103 06:55:04.336527 352 checks.go:376] validating the presence of executable nsenter | |
I1103 06:55:04.337129 352 checks.go:376] validating the presence of executable ebtables | |
I1103 06:55:04.337197 352 checks.go:376] validating the presence of executable ethtool | |
I1103 06:55:04.337217 352 checks.go:376] validating the presence of executable socat | |
I1103 06:55:04.337251 352 checks.go:376] validating the presence of executable tc | |
I1103 06:55:04.337273 352 checks.go:376] validating the presence of executable touch | |
I1103 06:55:04.337311 352 checks.go:520] running all checks | |
I1103 06:55:04.345846 352 checks.go:406] checking whether the given node name is reachable using net.LookupHost | |
I1103 06:55:04.346193 352 checks.go:618] validating kubelet version | |
I1103 06:55:04.445195 352 checks.go:128] validating if the service is enabled and active | |
I1103 06:55:04.462996 352 checks.go:201] validating availability of port 10250 | |
I1103 06:55:04.463397 352 checks.go:286] validating the existence of file /etc/kubernetes/pki/ca.crt | |
I1103 06:55:04.463499 352 checks.go:432] validating if the connectivity type is via proxy or direct | |
I1103 06:55:04.463542 352 join.go:441] [preflight] Discovering cluster-info | |
I1103 06:55:04.463640 352 token.go:199] [discovery] Trying to connect to API Server "172.17.0.3:6443" | |
I1103 06:55:04.464557 352 token.go:74] [discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.3:6443" | |
I1103 06:55:04.473464 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 8 milliseconds | |
I1103 06:55:04.474431 352 token.go:202] [discovery] Failed to connect to API Server "172.17.0.3:6443": token id "abcdef" is invalid for this cluster or it has expired. Use "kubeadm token create" on the control-plane node to create a new valid token | |
I1103 06:55:09.474681 352 token.go:199] [discovery] Trying to connect to API Server "172.17.0.3:6443" | |
I1103 06:55:09.475650 352 token.go:74] [discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.3:6443" | |
I1103 06:55:09.478144 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 2 milliseconds | |
I1103 06:55:09.479085 352 token.go:202] [discovery] Failed to connect to API Server "172.17.0.3:6443": token id "abcdef" is invalid for this cluster or it has expired. Use "kubeadm token create" on the control-plane node to create a new valid token | |
I1103 06:55:14.480646 352 token.go:199] [discovery] Trying to connect to API Server "172.17.0.3:6443" | |
I1103 06:55:14.481773 352 token.go:74] [discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.3:6443" | |
I1103 06:55:14.484289 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 2 milliseconds | |
I1103 06:55:14.484528 352 token.go:202] [discovery] Failed to connect to API Server "172.17.0.3:6443": token id "abcdef" is invalid for this cluster or it has expired. Use "kubeadm token create" on the control-plane node to create a new valid token | |
I1103 06:55:19.485085 352 token.go:199] [discovery] Trying to connect to API Server "172.17.0.3:6443" | |
I1103 06:55:19.485928 352 token.go:74] [discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.3:6443" | |
I1103 06:55:19.489633 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 3 milliseconds | |
I1103 06:55:19.491696 352 token.go:109] [discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "172.17.0.3:6443" | |
I1103 06:55:19.491726 352 token.go:205] [discovery] Successfully established connection with API Server "172.17.0.3:6443" | |
I1103 06:55:19.491760 352 discovery.go:51] [discovery] Using provided TLSBootstrapToken as authentication credentials for the join process | |
I1103 06:55:19.491776 352 join.go:455] [preflight] Fetching init configuration | |
I1103 06:55:19.491784 352 join.go:493] [preflight] Retrieving KubeConfig objects | |
[preflight] Reading configuration from the cluster... | |
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml' | |
I1103 06:55:19.503739 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config 200 OK in 10 milliseconds | |
I1103 06:55:19.509708 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-system/configmaps/kube-proxy 200 OK in 2 milliseconds | |
I1103 06:55:19.513289 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.18 200 OK in 2 milliseconds | |
I1103 06:55:19.516342 352 interface.go:389] Looking for default routes with IPv4 addresses | |
I1103 06:55:19.516366 352 interface.go:394] Default route transits interface "eth0" | |
I1103 06:55:19.516708 352 interface.go:201] Interface eth0 is up | |
I1103 06:55:19.517107 352 interface.go:249] Interface "eth0" has 1 addresses :[172.17.0.2/16]. | |
I1103 06:55:19.517142 352 interface.go:216] Checking addr 172.17.0.2/16. | |
I1103 06:55:19.517153 352 interface.go:223] IP found 172.17.0.2 | |
I1103 06:55:19.517300 352 interface.go:255] Found valid IPv4 address 172.17.0.2 for interface "eth0". | |
I1103 06:55:19.517320 352 interface.go:400] Found active IP 172.17.0.2 | |
I1103 06:55:19.517731 352 preflight.go:101] [preflight] Running configuration dependant checks | |
I1103 06:55:19.517774 352 controlplaneprepare.go:211] [download-certs] Skipping certs download | |
I1103 06:55:19.519454 352 kubelet.go:107] [kubelet-start] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf | |
I1103 06:55:19.521043 352 kubelet.go:115] [kubelet-start] writing CA certificate at /etc/kubernetes/pki/ca.crt | |
I1103 06:55:19.521443 352 loader.go:375] Config loaded from file: /etc/kubernetes/bootstrap-kubelet.conf | |
I1103 06:55:19.522088 352 kubelet.go:133] [kubelet-start] Stopping the kubelet | |
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace | |
I1103 06:55:19.542316 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.18 200 OK in 3 milliseconds | |
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" | |
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" | |
I1103 06:55:19.557291 352 kubelet.go:150] [kubelet-start] Starting the kubelet | |
[kubelet-start] Activating the kubelet service | |
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap... | |
I1103 06:55:20.708928 352 loader.go:375] Config loaded from file: /etc/kubernetes/kubelet.conf | |
I1103 06:55:20.721361 352 loader.go:375] Config loaded from file: /etc/kubernetes/kubelet.conf | |
I1103 06:55:20.722924 352 kubelet.go:168] [kubelet-start] preserving the crisocket information for the node | |
I1103 06:55:20.722955 352 patchnode.go:30] [patchnode] Uploading the CRI Socket information "/run/containerd/containerd.sock" to the Node API object "kind-worker2" as an annotation | |
I1103 06:55:21.233007 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 9 milliseconds | |
I1103 06:55:21.726448 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds | |
I1103 06:55:22.226928 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds | |
I1103 06:55:22.726100 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds | |
I1103 06:55:23.226671 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds | |
I1103 06:55:23.726013 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds | |
I1103 06:55:24.226802 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds | |
I1103 06:55:24.726906 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds | |
I1103 06:55:25.226942 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds | |
I1103 06:55:25.727734 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 4 milliseconds | |
I1103 06:55:26.227361 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds | |
I1103 06:55:26.726955 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds | |
I1103 06:55:27.226678 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds | |
I1103 06:55:27.726100 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds | |
I1103 06:55:28.226715 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds | |
I1103 06:55:28.726539 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds | |
I1103 06:55:29.226721 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds | |
I1103 06:55:29.727526 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 4 milliseconds | |
I1103 06:55:30.226393 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds | |
I1103 06:55:30.726674 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds | |
I1103 06:55:31.226217 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds | |
I1103 06:55:31.727372 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds | |
I1103 06:55:32.226922 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds | |
I1103 06:55:32.727912 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds | |
I1103 06:55:33.234981 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 200 OK in 8 milliseconds | |
I1103 06:55:33.251898 352 round_trippers.go:443] PATCH https://172.17.0.3:6443/api/v1/nodes/kind-worker2 200 OK in 11 milliseconds | |
This node has joined the cluster: | |
* Certificate signing request was sent to apiserver and a response was received. | |
* The Kubelet was informed of the new secure connection details. | |
Run 'kubectl get nodes' on the control-plane to see this node join the cluster. | |
DEBUG: kubeadmjoin/join.go:133] W1103 06:55:04.304914 353 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set. | |
I1103 06:55:04.305005 353 join.go:371] [preflight] found NodeName empty; using OS hostname as NodeName | |
I1103 06:55:04.305024 353 joinconfiguration.go:75] loading configuration from "/kind/kubeadm.conf" | |
[preflight] Running pre-flight checks | |
I1103 06:55:04.307200 353 preflight.go:90] [preflight] Running general checks | |
I1103 06:55:04.307309 353 checks.go:249] validating the existence and emptiness of directory /etc/kubernetes/manifests | |
I1103 06:55:04.307429 353 checks.go:286] validating the existence of file /etc/kubernetes/kubelet.conf | |
I1103 06:55:04.307450 353 checks.go:286] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf | |
I1103 06:55:04.307463 353 checks.go:102] validating the container runtime | |
I1103 06:55:04.327111 353 checks.go:376] validating the presence of executable crictl | |
I1103 06:55:04.327317 353 checks.go:335] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables | |
[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist | |
I1103 06:55:04.327432 353 checks.go:335] validating the contents of file /proc/sys/net/ipv4/ip_forward | |
I1103 06:55:04.327506 353 checks.go:649] validating whether swap is enabled or not | |
I1103 06:55:04.327621 353 checks.go:376] validating the presence of executable ip | |
I1103 06:55:04.327724 353 checks.go:376] validating the presence of executable iptables | |
I1103 06:55:04.327857 353 checks.go:376] validating the presence of executable mount | |
I1103 06:55:04.327884 353 checks.go:376] validating the presence of executable nsenter | |
I1103 06:55:04.327927 353 checks.go:376] validating the presence of executable ebtables | |
I1103 06:55:04.327976 353 checks.go:376] validating the presence of executable ethtool | |
I1103 06:55:04.328009 353 checks.go:376] validating the presence of executable socat | |
I1103 06:55:04.328046 353 checks.go:376] validating the presence of executable tc | |
I1103 06:55:04.328072 353 checks.go:376] validating the presence of executable touch | |
I1103 06:55:04.328119 353 checks.go:520] running all checks | |
I1103 06:55:04.336370 353 checks.go:406] checking whether the given node name is reachable using net.LookupHost | |
I1103 06:55:04.336654 353 checks.go:618] validating kubelet version | |
I1103 06:55:04.444767 353 checks.go:128] validating if the service is enabled and active | |
I1103 06:55:04.459597 353 checks.go:201] validating availability of port 10250 | |
I1103 06:55:04.460126 353 checks.go:286] validating the existence of file /etc/kubernetes/pki/ca.crt | |
I1103 06:55:04.460168 353 checks.go:432] validating if the connectivity type is via proxy or direct | |
I1103 06:55:04.460235 353 join.go:441] [preflight] Discovering cluster-info | |
I1103 06:55:04.460361 353 token.go:199] [discovery] Trying to connect to API Server "172.17.0.3:6443" | |
I1103 06:55:04.461222 353 token.go:74] [discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.3:6443" | |
I1103 06:55:04.470433 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 9 milliseconds | |
I1103 06:55:04.472256 353 token.go:202] [discovery] Failed to connect to API Server "172.17.0.3:6443": token id "abcdef" is invalid for this cluster or it has expired. Use "kubeadm token create" on the control-plane node to create a new valid token | |
I1103 06:55:09.473166 353 token.go:199] [discovery] Trying to connect to API Server "172.17.0.3:6443" | |
I1103 06:55:09.473698 353 token.go:74] [discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.3:6443" | |
I1103 06:55:09.478145 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 4 milliseconds | |
I1103 06:55:09.478854 353 token.go:202] [discovery] Failed to connect to API Server "172.17.0.3:6443": token id "abcdef" is invalid for this cluster or it has expired. Use "kubeadm token create" on the control-plane node to create a new valid token | |
I1103 06:55:14.479246 353 token.go:199] [discovery] Trying to connect to API Server "172.17.0.3:6443" | |
I1103 06:55:14.480052 353 token.go:74] [discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.3:6443" | |
I1103 06:55:14.483768 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 3 milliseconds | |
I1103 06:55:14.484187 353 token.go:202] [discovery] Failed to connect to API Server "172.17.0.3:6443": token id "abcdef" is invalid for this cluster or it has expired. Use "kubeadm token create" on the control-plane node to create a new valid token | |
I1103 06:55:19.484529 353 token.go:199] [discovery] Trying to connect to API Server "172.17.0.3:6443" | |
I1103 06:55:19.485407 353 token.go:74] [discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.3:6443" | |
I1103 06:55:19.489481 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 3 milliseconds | |
I1103 06:55:19.491065 353 token.go:109] [discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "172.17.0.3:6443" | |
I1103 06:55:19.491134 353 token.go:205] [discovery] Successfully established connection with API Server "172.17.0.3:6443" | |
I1103 06:55:19.491183 353 discovery.go:51] [discovery] Using provided TLSBootstrapToken as authentication credentials for the join process | |
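The retries above fail because the token id "abcdef" was not yet valid on the cluster; once the token exists, discovery succeeds. Bootstrap tokens have a documented fixed shape (a 6-character id, a dot, a 16-character secret, both lowercase alphanumeric), which can be checked client-side before attempting discovery. A minimal sketch of that format check (the token value here is the well-known kind test token, not a secret; `is_bootstrap_token` is a hypothetical helper, not a kubeadm command):

```shell
#!/bin/sh
# Bootstrap tokens follow the documented format [a-z0-9]{6}.[a-z0-9]{16}:
# a 6-char token id, a dot, and a 16-char token secret.
is_bootstrap_token() {
  echo "$1" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$'
}

is_bootstrap_token "abcdef.0123456789abcdef" && echo "well-formed"
is_bootstrap_token "abcdef" || echo "malformed: missing secret part"
```

Only the token id (the part before the dot) is used for lookup in the log above, which is why the error message names the id alone.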
I1103 06:55:19.491237 353 join.go:455] [preflight] Fetching init configuration | |
I1103 06:55:19.491291 353 join.go:493] [preflight] Retrieving KubeConfig objects | |
[preflight] Reading configuration from the cluster... | |
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml' | |
I1103 06:55:19.501485 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config 200 OK in 9 milliseconds | |
I1103 06:55:19.505939 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-system/configmaps/kube-proxy 200 OK in 2 milliseconds | |
I1103 06:55:19.509476 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.18 200 OK in 2 milliseconds | |
I1103 06:55:19.512104 353 interface.go:389] Looking for default routes with IPv4 addresses | |
I1103 06:55:19.512125 353 interface.go:394] Default route transits interface "eth0" | |
I1103 06:55:19.512253 353 interface.go:201] Interface eth0 is up | |
I1103 06:55:19.512312 353 interface.go:249] Interface "eth0" has 1 addresses :[172.17.0.4/16]. | |
I1103 06:55:19.512333 353 interface.go:216] Checking addr 172.17.0.4/16. | |
I1103 06:55:19.512343 353 interface.go:223] IP found 172.17.0.4 | |
I1103 06:55:19.512353 353 interface.go:255] Found valid IPv4 address 172.17.0.4 for interface "eth0". | |
I1103 06:55:19.512361 353 interface.go:400] Found active IP 172.17.0.4 | |
I1103 06:55:19.512443 353 preflight.go:101] [preflight] Running configuration dependant checks | |
I1103 06:55:19.512459 353 controlplaneprepare.go:211] [download-certs] Skipping certs download | |
I1103 06:55:19.512472 353 kubelet.go:107] [kubelet-start] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf | |
I1103 06:55:19.517347 353 kubelet.go:115] [kubelet-start] writing CA certificate at /etc/kubernetes/pki/ca.crt | |
I1103 06:55:19.517938 353 loader.go:375] Config loaded from file: /etc/kubernetes/bootstrap-kubelet.conf | |
I1103 06:55:19.518546 353 kubelet.go:133] [kubelet-start] Stopping the kubelet | |
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace | |
I1103 06:55:19.535398 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.18 200 OK in 3 milliseconds | |
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" | |
I1103 06:55:19.550702 353 kubelet.go:150] [kubelet-start] Starting the kubelet | |
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" | |
[kubelet-start] Activating the kubelet service | |
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap... | |
I1103 06:55:20.685305 353 loader.go:375] Config loaded from file: /etc/kubernetes/kubelet.conf | |
I1103 06:55:20.702567 353 loader.go:375] Config loaded from file: /etc/kubernetes/kubelet.conf | |
I1103 06:55:20.704749 353 kubelet.go:168] [kubelet-start] preserving the crisocket information for the node | |
I1103 06:55:20.704832 353 patchnode.go:30] [patchnode] Uploading the CRI Socket information "/run/containerd/containerd.sock" to the Node API object "kind-worker" as an annotation | |
I1103 06:55:21.222512 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 17 milliseconds | |
I1103 06:55:21.707461 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds | |
I1103 06:55:22.208952 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds | |
I1103 06:55:22.708677 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds | |
I1103 06:55:23.209261 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 4 milliseconds | |
I1103 06:55:23.709885 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 4 milliseconds | |
I1103 06:55:24.209557 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 4 milliseconds | |
I1103 06:55:24.709016 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds | |
I1103 06:55:25.209471 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 4 milliseconds | |
I1103 06:55:25.709602 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 4 milliseconds | |
I1103 06:55:26.208431 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds | |
I1103 06:55:26.708281 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds | |
I1103 06:55:27.209094 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds | |
I1103 06:55:27.708227 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds | |
I1103 06:55:28.208625 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds | |
I1103 06:55:28.710235 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 5 milliseconds | |
I1103 06:55:29.208632 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds | |
I1103 06:55:29.708624 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds | |
I1103 06:55:30.208306 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds | |
I1103 06:55:30.708657 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds | |
I1103 06:55:31.208287 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds | |
I1103 06:55:31.708459 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds | |
I1103 06:55:32.208096 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds | |
I1103 06:55:32.708928 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds | |
I1103 06:55:33.208940 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds | |
I1103 06:55:33.709704 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 200 OK in 4 milliseconds | |
I1103 06:55:33.724501 353 round_trippers.go:443] PATCH https://172.17.0.3:6443/api/v1/nodes/kind-worker 200 OK in 9 milliseconds | |
This node has joined the cluster: | |
* Certificate signing request was sent to apiserver and a response was received. | |
* The Kubelet was informed of the new secure connection details. | |
Run 'kubectl get nodes' on the control-plane to see this node join the cluster. | |
✓ Joining worker nodes 🚜 | |
• Waiting ≤ 1m0s for control-plane = Ready ⏳ ... | |
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker2" | |
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-control-plane" | |
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker" | |
DEBUG: exec/local.go:116] Running: "docker exec --privileged kind-control-plane kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes --selector=node-role.kubernetes.io/master '-o=jsonpath='"'"'{.items..status.conditions[-1:].status}'"'"''" | |
✓ Waiting ≤ 1m0s for control-plane = Ready ⏳ | |
• Ready after 24s 💚 | |
DEBUG: exec/local.go:116] Running: "docker ps -q -a --no-trunc --filter label=io.k8s.sigs.kind.cluster=kind --format '{{.Names}}'" | |
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker2" | |
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-control-plane" | |
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker" | |
DEBUG: exec/local.go:116] Running: "docker exec --privileged kind-control-plane cat /etc/kubernetes/admin.conf" | |
DEBUG: exec/local.go:116] Running: "docker ps -q -a --no-trunc --filter label=io.k8s.sigs.kind.cluster=kind --format '{{.Names}}'" | |
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker2" | |
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-control-plane" | |
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker" | |
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker2" | |
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-control-plane" | |
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker" | |
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ with (index (index .NetworkSettings.Ports "6443/tcp") 0) }}{{ printf "%s %s" .HostIp .HostPort }}{{ end }}' kind-control-plane" | |
Set kubectl context to "kind-kind" | |
You can now use your cluster with: | |
kubectl cluster-info --context kind-kind | |
+ run_tests | |
+ [ ipv4 = ipv6 ] | |
+ SKIP=\[Slow\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|PodSecurityPolicy|LoadBalancer|load.balancer|In-tree.Volumes.\[Driver:.nfs\]|PersistentVolumes.NFS|Dynamic.PV|Network.should.set.TCP.CLOSE_WAIT.timeout|Simple.pod.should.support.exec.through.an.HTTP.proxy|subPath.should.support.existing|ReplicationController.light.Should.scale.from.1.pod.to.2.pods|should.provide.basic.identity|\[NodeFeature:PodReadinessGate\] | |
+ FOCUS=. | |
+ [ true = true ] | |
+ export GINKGO_PARALLEL=y | |
+ [ -z \[Slow\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|PodSecurityPolicy|LoadBalancer|load.balancer|In-tree.Volumes.\[Driver:.nfs\]|PersistentVolumes.NFS|Dynamic.PV|Network.should.set.TCP.CLOSE_WAIT.timeout|Simple.pod.should.support.exec.through.an.HTTP.proxy|subPath.should.support.existing|ReplicationController.light.Should.scale.from.1.pod.to.2.pods|should.provide.basic.identity|\[NodeFeature:PodReadinessGate\] ] | |
+ SKIP=\[Serial\]|\[Slow\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|PodSecurityPolicy|LoadBalancer|load.balancer|In-tree.Volumes.\[Driver:.nfs\]|PersistentVolumes.NFS|Dynamic.PV|Network.should.set.TCP.CLOSE_WAIT.timeout|Simple.pod.should.support.exec.through.an.HTTP.proxy|subPath.should.support.existing|ReplicationController.light.Should.scale.from.1.pod.to.2.pods|should.provide.basic.identity|\[NodeFeature:PodReadinessGate\] | |
+ export KUBERNETES_CONFORMANCE_TEST=y | |
+ export KUBE_CONTAINER_RUNTIME=remote | |
+ export KUBE_CONTAINER_RUNTIME_ENDPOINT=unix:///run/containerd/containerd.sock | |
+ export KUBE_CONTAINER_RUNTIME_NAME=containerd | |
+ export GINKGO_TOLERATE_FLAKES=y | |
+ ./hack/ginkgo-e2e.sh --provider=skeleton --num-nodes=2 --ginkgo.focus=. --ginkgo.skip=\[Serial\]|\[Slow\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|PodSecurityPolicy|LoadBalancer|load.balancer|In-tree.Volumes.\[Driver:.nfs\]|PersistentVolumes.NFS|Dynamic.PV|Network.should.set.TCP.CLOSE_WAIT.timeout|Simple.pod.should.support.exec.through.an.HTTP.proxy|subPath.should.support.existing|ReplicationController.light.Should.scale.from.1.pod.to.2.pods|should.provide.basic.identity|\[NodeFeature:PodReadinessGate\] --report-dir=/logs/artifacts --disable-log-dump=true | |
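The `--ginkgo.skip` value passed above is an extended regular expression matched against each full spec name; any spec matching one of the alternatives is skipped. A small sketch of how such a filter behaves, using a shortened version of the skip regex and hypothetical spec names (`matches_skip` is an illustrative helper, not part of the harness):

```shell
#!/bin/sh
# Shortened form of the skip regex used above; ginkgo evaluates it as an
# extended regular expression over the full spec name.
SKIP='\[Serial\]|\[Slow\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|PodSecurityPolicy'

matches_skip() {
  echo "$1" | grep -Eq "$SKIP"
}

# Hypothetical spec names, for illustration only:
matches_skip "[sig-apps] Deployment [Slow] should scale" && echo "skipped"
matches_skip "[sig-storage] In-tree Volumes should mount" || echo "would run"
```

Note the brackets in tags like `[Slow]` are escaped in the pattern, since `[...]` is a character class in ERE syntax.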
Conformance test: not doing test setup. | |
Running Suite: Kubernetes e2e suite | |
=================================== | |
Random Seed: 1572764161 - Will randomize all specs | |
Will run 4979 specs | |
Running in parallel across 25 nodes | |
Nov 3 06:56:10.100: INFO: >>> kubeConfig: /root/.kube/kind-test-config | |
Nov 3 06:56:10.103: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable | |
Nov 3 06:56:10.184: INFO: Condition Ready of node kind-worker2 is false instead of true. Reason: KubeletNotReady, message: runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized | |
Nov 3 06:56:10.184: INFO: Condition Ready of node kind-worker is false instead of true. Reason: KubeletNotReady, message: runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized | |
Nov 3 06:56:10.184: INFO: Unschedulable nodes: | |
Nov 3 06:56:10.184: INFO: -> kind-worker2 Ready=false Network=false Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>}] NonblockingTaints:node-role.kubernetes.io/master | |
Nov 3 06:56:10.184: INFO: -> kind-worker Ready=false Network=false Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>}] NonblockingTaints:node-role.kubernetes.io/master | |
Nov 3 06:56:10.184: INFO: ================================ | |
Nov 3 06:56:40.188: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready | |
Nov 3 06:56:40.246: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) | |
Nov 3 06:56:40.246: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. | |
Nov 3 06:56:40.246: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start | |
Nov 3 06:56:40.264: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed) | |
Nov 3 06:56:40.264: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) | |
Nov 3 06:56:40.264: INFO: e2e test version: v0.0.0-master+$Format:%h$ | |
Nov 3 06:56:40.266: INFO: kube-apiserver version: v1.18.0-alpha.0.178+0c66e64b140011 | |
Nov 3 06:56:40.267: INFO: >>> kubeConfig: /root/.kube/kind-test-config | |
Nov 3 06:56:40.273: INFO: Cluster IP family: ipv4 | |
SSSS | |
------------------------------ | |
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes | |
test/e2e/storage/testsuites/base.go:99 | |
Nov 3 06:56:40.282: INFO: Driver vsphere doesn't support ext3 -- skipping | |
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:56:40.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] | |
[sig-storage] In-tree Volumes | |
test/e2e/storage/utils/framework.go:23 | |
[Driver: vsphere] | |
test/e2e/storage/in_tree_volumes.go:70 | |
[Testpattern: Pre-provisioned PV (ext3)] volumes | |
test/e2e/storage/testsuites/base.go:98 | |
should allow exec of files on the volume [BeforeEach] | |
test/e2e/storage/testsuites/volumes.go:191 | |
Driver vsphere doesn't support ext3 -- skipping | |
test/e2e/storage/testsuites/base.go:157 | |
------------------------------ | |
S | |
------------------------------ | |
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath | |
test/e2e/storage/testsuites/base.go:99 | |
Nov 3 06:56:40.285: INFO: Driver local doesn't support InlineVolume -- skipping | |
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:56:40.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] | |
[sig-storage] In-tree Volumes | |
test/e2e/storage/utils/framework.go:23 | |
[Driver: local][LocalVolumeType: tmpfs] | |
test/e2e/storage/in_tree_volumes.go:70 | |
[Testpattern: Inline-volume (default fs)] subPath | |
test/e2e/storage/testsuites/base.go:98 | |
should support readOnly directory specified in the volumeMount [BeforeEach] | |
test/e2e/storage/testsuites/subpath.go:359 | |
Driver local doesn't support InlineVolume -- skipping | |
test/e2e/storage/testsuites/base.go:152 | |
------------------------------ | |
SS | |
------------------------------ | |
Nov 3 06:56:40.312: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:40.386: INFO: Cluster IP family: ipv4
Nov 3 06:56:40.310: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:40.389: INFO: Cluster IP family: ipv4
Nov 3 06:56:40.303: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:40.389: INFO: Cluster IP family: ipv4
Nov 3 06:56:40.304: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:40.389: INFO: Cluster IP family: ipv4
Nov 3 06:56:40.303: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:40.390: INFO: Cluster IP family: ipv4
Nov 3 06:56:40.319: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:40.392: INFO: Cluster IP family: ipv4
Nov 3 06:56:40.315: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:40.390: INFO: Cluster IP family: ipv4
Nov 3 06:56:40.320: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:40.391: INFO: Cluster IP family: ipv4
Nov 3 06:56:40.303: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:40.391: INFO: Cluster IP family: ipv4
Nov 3 06:56:40.303: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:40.394: INFO: Cluster IP family: ipv4
Nov 3 06:56:40.309: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:40.395: INFO: Cluster IP family: ipv4
Nov 3 06:56:40.308: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:40.395: INFO: Cluster IP family: ipv4
Nov 3 06:56:40.307: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:40.395: INFO: Cluster IP family: ipv4
Nov 3 06:56:40.305: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:40.395: INFO: Cluster IP family: ipv4
SS
------------------------------
Nov 3 06:56:40.308: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:40.396: INFO: Cluster IP family: ipv4
Nov 3 06:56:40.308: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:40.402: INFO: Cluster IP family: ipv4
Nov 3 06:56:40.317: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:40.400: INFO: Cluster IP family: ipv4
S
------------------------------
Nov 3 06:56:40.307: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:40.397: INFO: Cluster IP family: ipv4
Nov 3 06:56:40.303: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:40.397: INFO: Cluster IP family: ipv4
SSSSS
------------------------------
Nov 3 06:56:40.316: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:40.405: INFO: Cluster IP family: ipv4
S
------------------------------
Nov 3 06:56:40.308: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:40.409: INFO: Cluster IP family: ipv4
Nov 3 06:56:40.314: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:40.416: INFO: Cluster IP family: ipv4
SS
------------------------------
Nov 3 06:56:40.316: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:40.412: INFO: Cluster IP family: ipv4
SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:40.426: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/framework/framework.go:148
Nov 3 06:56:40.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: dir-link]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/storage/testsuites/base.go:98
should allow exec of files on the volume [BeforeEach]
test/e2e/storage/testsuites/volumes.go:191
Driver local doesn't support ext3 -- skipping
test/e2e/storage/testsuites/base.go:157
------------------------------
Nov 3 06:56:40.316: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:40.430: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:40.448: INFO: Only supported for providers [gce gke] (not skeleton)
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
test/e2e/framework/framework.go:148
Nov 3 06:56:40.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: gcepd]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Inline-volume (ext4)] volumes
test/e2e/storage/testsuites/base.go:98
should store data [BeforeEach]
test/e2e/storage/testsuites/volumes.go:150
Only supported for providers [gce gke] (not skeleton)
test/e2e/storage/drivers/in_tree.go:1194
------------------------------
SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:40.452: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
test/e2e/framework/framework.go:148
Nov 3 06:56:40.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.004 seconds]
[sig-storage] CSI Volumes
test/e2e/storage/utils/framework.go:23
[Driver: csi-hostpath]
test/e2e/storage/csi_volumes.go:56
[Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
test/e2e/storage/testsuites/base.go:98
should not mount / map unused volumes in a pod [BeforeEach]
test/e2e/storage/testsuites/volumemode.go:334
Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
test/e2e/storage/testsuites/base.go:152
------------------------------
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:40.452: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/framework/framework.go:148
Nov 3 06:56:40.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] CSI Volumes
test/e2e/storage/utils/framework.go:23
[Driver: csi-hostpath]
test/e2e/storage/csi_volumes.go:56
[Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/storage/testsuites/base.go:98
should store data [BeforeEach]
test/e2e/storage/testsuites/volumes.go:150
Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
test/e2e/storage/testsuites/base.go:152
------------------------------
SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:40.454: INFO: Driver local doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/framework/framework.go:148
Nov 3 06:56:40.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.005 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: dir-bindmounted]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/storage/testsuites/base.go:98
should store data [BeforeEach]
test/e2e/storage/testsuites/volumes.go:150
Driver local doesn't support ext4 -- skipping
test/e2e/storage/testsuites/base.go:157
------------------------------
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:40.448: INFO: Driver local doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/framework/framework.go:148
Nov 3 06:56:40.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: dir-bindmounted]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/storage/testsuites/base.go:98
should allow exec of files on the volume [BeforeEach]
test/e2e/storage/testsuites/volumes.go:191
Driver local doesn't support ext4 -- skipping
test/e2e/storage/testsuites/base.go:157
------------------------------
SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:40.457: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/framework/framework.go:148
Nov 3 06:56:40.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: hostPath]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/testsuites/base.go:98
should support non-existent path [BeforeEach]
test/e2e/storage/testsuites/subpath.go:189
Driver hostPath doesn't support PreprovisionedPV -- skipping
test/e2e/storage/testsuites/base.go:152
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:40.456: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/framework/framework.go:148
Nov 3 06:56:40.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: hostPathSymlink]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/storage/testsuites/base.go:98
should allow exec of files on the volume [BeforeEach]
test/e2e/storage/testsuites/volumes.go:191
Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
test/e2e/storage/testsuites/base.go:152
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:40.446: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/framework/framework.go:148
Nov 3 06:56:40.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.004 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: hostPath]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/storage/testsuites/base.go:98
should allow exec of files on the volume [BeforeEach]
test/e2e/storage/testsuites/volumes.go:191
Driver hostPath doesn't support PreprovisionedPV -- skipping
test/e2e/storage/testsuites/base.go:152
------------------------------
SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:40.464: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/framework/framework.go:148
Nov 3 06:56:40.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: block]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/testsuites/base.go:98
should support file as subpath [LinuxOnly] [BeforeEach]
test/e2e/storage/testsuites/subpath.go:225
Driver local doesn't support InlineVolume -- skipping
test/e2e/storage/testsuites/base.go:152
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:40.464: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/framework/framework.go:148
Nov 3 06:56:40.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: vsphere]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/testsuites/base.go:98
should support readOnly directory specified in the volumeMount [BeforeEach]
test/e2e/storage/testsuites/subpath.go:359
Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
test/e2e/storage/testsuites/base.go:685
------------------------------
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:40.463: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/framework/framework.go:148
Nov 3 06:56:40.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.004 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: hostPathSymlink]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/testsuites/base.go:98
should support file as subpath [LinuxOnly] [BeforeEach]
test/e2e/storage/testsuites/subpath.go:225
Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
test/e2e/storage/testsuites/base.go:152
------------------------------
SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:40.473: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/framework/framework.go:148
Nov 3 06:56:40.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: gluster]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/testsuites/base.go:98
should be able to unmount after the subpath directory is deleted [BeforeEach]
test/e2e/storage/testsuites/subpath.go:437
Only supported for node OS distro [gci ubuntu custom] (not debian)
test/e2e/storage/drivers/in_tree.go:258
------------------------------
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:40.475: INFO: Driver local doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/framework/framework.go:148
Nov 3 06:56:40.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: blockfs]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/storage/testsuites/base.go:98
should store data [BeforeEach]
test/e2e/storage/testsuites/volumes.go:150
Driver local doesn't support ext4 -- skipping
test/e2e/storage/testsuites/base.go:157
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:40.476: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/framework/framework.go:148
Nov 3 06:56:40.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: gcepd]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/testsuites/base.go:98
should be able to unmount after the subpath directory is deleted [BeforeEach]
test/e2e/storage/testsuites/subpath.go:437
Driver supports dynamic provisioning, skipping InlineVolume pattern
test/e2e/storage/testsuites/base.go:685
------------------------------
SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:40.483: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/framework/framework.go:148
Nov 3 06:56:40.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: dir-link-bindmounted]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/testsuites/base.go:98
should support readOnly file specified in the volumeMount [LinuxOnly] [BeforeEach]
test/e2e/storage/testsuites/subpath.go:374
Driver local doesn't support InlineVolume -- skipping
test/e2e/storage/testsuites/base.go:152
------------------------------
SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:40.486: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
test/e2e/framework/framework.go:148
Nov 3 06:56:40.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: blockfs]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Inline-volume (ntfs)][sig-windows] volumes
test/e2e/storage/testsuites/base.go:98
should store data [BeforeEach]
test/e2e/storage/testsuites/volumes.go:150
Driver local doesn't support InlineVolume -- skipping
test/e2e/storage/testsuites/base.go:152
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:40.487: INFO: Driver csi-hostpath-v0 doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
test/e2e/framework/framework.go:148
Nov 3 06:56:40.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] CSI Volumes
test/e2e/storage/utils/framework.go:23
[Driver: csi-hostpath-v0]
test/e2e/storage/csi_volumes.go:56
[Testpattern: Inline-volume (ext3)] volumes
test/e2e/storage/testsuites/base.go:98
should allow exec of files on the volume [BeforeEach]
test/e2e/storage/testsuites/volumes.go:191
Driver csi-hostpath-v0 doesn't support InlineVolume -- skipping
test/e2e/storage/testsuites/base.go:152
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:40.487: INFO: Driver cinder doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
test/e2e/framework/framework.go:148
Nov 3 06:56:40.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: cinder]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Inline-volume (ext4)] volumes
test/e2e/storage/testsuites/base.go:98
should store data [BeforeEach]
test/e2e/storage/testsuites/volumes.go:150
Driver cinder doesn't support ext4 -- skipping
test/e2e/storage/testsuites/base.go:157
------------------------------
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:40.489: INFO: Driver csi-hostpath-v0 doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/framework/framework.go:148
Nov 3 06:56:40.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] CSI Volumes
test/e2e/storage/utils/framework.go:23
[Driver: csi-hostpath-v0]
test/e2e/storage/csi_volumes.go:56
[Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/storage/testsuites/base.go:98
should store data [BeforeEach]
test/e2e/storage/testsuites/volumes.go:150
Driver csi-hostpath-v0 doesn't support PreprovisionedPV -- skipping
test/e2e/storage/testsuites/base.go:152
------------------------------
SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:40.491: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/framework/framework.go:148
Nov 3 06:56:40.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] CSI Volumes
test/e2e/storage/utils/framework.go:23
[Driver: csi-hostpath-v0]
test/e2e/storage/csi_volumes.go:56
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/testsuites/base.go:98
should support readOnly directory specified in the volumeMount [BeforeEach]
test/e2e/storage/testsuites/subpath.go:359
Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
test/e2e/storage/testsuites/base.go:685
------------------------------
SSSSSSS
------------------------------
[BeforeEach] [sig-windows] DNS
test/e2e/windows/framework.go:28
Nov 3 06:56:40.496: INFO: Only supported for node OS distro [windows] (not debian)
[AfterEach] [sig-windows] DNS
test/e2e/framework/framework.go:148
Nov 3 06:56:40.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-windows] DNS
test/e2e/windows/framework.go:27
should support configurable pod DNS servers [BeforeEach]
test/e2e/windows/dns.go:42
Only supported for node OS distro [windows] (not debian)
test/e2e/windows/framework.go:30
------------------------------
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:40.498: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/framework/framework.go:148
Nov 3 06:56:40.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: emptydir]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/storage/testsuites/base.go:98
should store data [BeforeEach]
test/e2e/storage/testsuites/volumes.go:150
Driver emptydir doesn't support PreprovisionedPV -- skipping
test/e2e/storage/testsuites/base.go:152
------------------------------
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:40.511: INFO: Driver local doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/framework/framework.go:148
Nov 3 06:56:40.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: blockfs]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/storage/testsuites/base.go:98
should allow exec of files on the volume [BeforeEach]
test/e2e/storage/testsuites/volumes.go:191
Driver local doesn't support ext4 -- skipping
test/e2e/storage/testsuites/base.go:157
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:40.508: INFO: Driver gluster doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
test/e2e/framework/framework.go:148
Nov 3 06:56:40.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: gluster]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Inline-volume (ext4)] volumes
test/e2e/storage/testsuites/base.go:98
should allow exec of files on the volume [BeforeEach]
test/e2e/storage/testsuites/volumes.go:191
Driver gluster doesn't support ext4 -- skipping
test/e2e/storage/testsuites/base.go:157
------------------------------
S
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
test/e2e/framework/framework.go:147
STEP: Creating a kubernetes client
Nov 3 06:56:40.395: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename resourcequota
Nov 3 06:56:40.611: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Nov 3 06:56:40.630: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-8261
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
test/e2e/framework/framework.go:688
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
test/e2e/framework/framework.go:148
Nov 3 06:56:40.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8261" for this suite.
•SSSSSSS
------------------------------
[BeforeEach] [sig-storage] Volume Placement
test/e2e/framework/framework.go:147
STEP: Creating a kubernetes client
Nov 3 06:56:40.507: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename volume-placement
Nov 3 06:56:42.603: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Nov 3 06:56:42.724: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in volume-placement-9859
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume Placement
test/e2e/storage/vsphere/vsphere_volume_placement.go:52
Nov 3 06:56:42.850: INFO: Only supported for providers [vsphere] (not skeleton)
[AfterEach] [sig-storage] Volume Placement
test/e2e/framework/framework.go:148
Nov 3 06:56:42.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-placement-9859" for this suite.
[AfterEach] [sig-storage] Volume Placement
test/e2e/storage/vsphere/vsphere_volume_placement.go:70
S [SKIPPING] in Spec Setup (BeforeEach) [2.411 seconds]
[sig-storage] Volume Placement
test/e2e/storage/utils/framework.go:23
test back to back pod creation and deletion with different volume sources on the same worker node [BeforeEach]
test/e2e/storage/vsphere/vsphere_volume_placement.go:276
Only supported for providers [vsphere] (not skeleton)
test/e2e/storage/vsphere/vsphere_volume_placement.go:53
------------------------------
S
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes | |
test/e2e/framework/framework.go:147 | |
[1mSTEP[0m: Creating a kubernetes client | |
Nov 3 06:56:40.402: INFO: >>> kubeConfig: /root/.kube/kind-test-config | |
[1mSTEP[0m: Building a namespace api object, basename emptydir | |
Nov 3 06:56:40.577: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. | |
Nov 3 06:56:40.598: INFO: Found ClusterRoles; assuming RBAC is enabled. | |
[1mSTEP[0m: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-4492 | |
[1mSTEP[0m: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] | |
test/e2e/common/empty_dir.go:45 | |
[It] nonexistent volume subPath should have the correct mode and owner using FSGroup | |
test/e2e/common/empty_dir.go:58 | |
[1mSTEP[0m: Creating a pod to test emptydir subpath on tmpfs | |
Nov 3 06:56:40.772: INFO: Waiting up to 5m0s for pod "pod-9208840f-63c4-43bc-af9c-1271ef1e87f8" in namespace "emptydir-4492" to be "success or failure" | |
Nov 3 06:56:40.791: INFO: Pod "pod-9208840f-63c4-43bc-af9c-1271ef1e87f8": Phase="Pending", Reason="", readiness=false. Elapsed: 18.627572ms | |
Nov 3 06:56:42.839: INFO: Pod "pod-9208840f-63c4-43bc-af9c-1271ef1e87f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066567371s | |
Nov 3 06:56:44.845: INFO: Pod "pod-9208840f-63c4-43bc-af9c-1271ef1e87f8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072062332s | |
Nov 3 06:56:46.890: INFO: Pod "pod-9208840f-63c4-43bc-af9c-1271ef1e87f8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117658859s | |
Nov 3 06:56:48.912: INFO: Pod "pod-9208840f-63c4-43bc-af9c-1271ef1e87f8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.139959904s | |
Nov 3 06:56:50.923: INFO: Pod "pod-9208840f-63c4-43bc-af9c-1271ef1e87f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.150555575s | |
[1mSTEP[0m: Saw pod success | |
Nov 3 06:56:50.923: INFO: Pod "pod-9208840f-63c4-43bc-af9c-1271ef1e87f8" satisfied condition "success or failure" | |
Nov 3 06:56:50.927: INFO: Trying to get logs from node kind-worker2 pod pod-9208840f-63c4-43bc-af9c-1271ef1e87f8 container test-container: <nil> | |
[1mSTEP[0m: delete the pod | |
Nov 3 06:56:51.200: INFO: Waiting for pod pod-9208840f-63c4-43bc-af9c-1271ef1e87f8 to disappear | |
Nov 3 06:56:51.204: INFO: Pod pod-9208840f-63c4-43bc-af9c-1271ef1e87f8 no longer exists | |
[AfterEach] [sig-storage] EmptyDir volumes | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:56:51.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
[1mSTEP[0m: Destroying namespace "emptydir-4492" for this suite. | |
[32m• [SLOW TEST:10.816 seconds][0m | |
[sig-storage] EmptyDir volumes | |
[90mtest/e2e/common/empty_dir.go:40[0m | |
when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] | |
[90mtest/e2e/common/empty_dir.go:43[0m | |
nonexistent volume subPath should have the correct mode and owner using FSGroup | |
[90mtest/e2e/common/empty_dir.go:58[0m | |
[90m------------------------------[0m | |
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:51.222: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/framework/framework.go:148
Nov 3 06:56:51.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] CSI Volumes
test/e2e/storage/utils/framework.go:23
[Driver: csi-hostpath]
test/e2e/storage/csi_volumes.go:56
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/testsuites/base.go:98
should be able to unmount after the subpath directory is deleted [BeforeEach]
test/e2e/storage/testsuites/subpath.go:437
Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
test/e2e/storage/testsuites/base.go:685
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:147
STEP: Creating a kubernetes client
Nov 3 06:56:40.458: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
Nov 3 06:56:40.677: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Nov 3 06:56:40.745: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-8834
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/empty_dir.go:45
[It] files with FSGroup ownership should support (root,0644,tmpfs)
test/e2e/common/empty_dir.go:62
STEP: Creating a pod to test emptydir 0644 on tmpfs
Nov 3 06:56:40.917: INFO: Waiting up to 5m0s for pod "pod-732c67a7-a2f7-450b-af90-fc5f6c72e12e" in namespace "emptydir-8834" to be "success or failure"
Nov 3 06:56:40.943: INFO: Pod "pod-732c67a7-a2f7-450b-af90-fc5f6c72e12e": Phase="Pending", Reason="", readiness=false. Elapsed: 26.096838ms
Nov 3 06:56:43.249: INFO: Pod "pod-732c67a7-a2f7-450b-af90-fc5f6c72e12e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.331974528s
Nov 3 06:56:45.269: INFO: Pod "pod-732c67a7-a2f7-450b-af90-fc5f6c72e12e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.352546294s
Nov 3 06:56:47.582: INFO: Pod "pod-732c67a7-a2f7-450b-af90-fc5f6c72e12e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.664816887s
Nov 3 06:56:49.587: INFO: Pod "pod-732c67a7-a2f7-450b-af90-fc5f6c72e12e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.670013683s
Nov 3 06:56:51.590: INFO: Pod "pod-732c67a7-a2f7-450b-af90-fc5f6c72e12e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.673383668s
STEP: Saw pod success
Nov 3 06:56:51.590: INFO: Pod "pod-732c67a7-a2f7-450b-af90-fc5f6c72e12e" satisfied condition "success or failure"
Nov 3 06:56:51.596: INFO: Trying to get logs from node kind-worker2 pod pod-732c67a7-a2f7-450b-af90-fc5f6c72e12e container test-container: <nil>
STEP: delete the pod
Nov 3 06:56:51.631: INFO: Waiting for pod pod-732c67a7-a2f7-450b-af90-fc5f6c72e12e to disappear
Nov 3 06:56:51.634: INFO: Pod pod-732c67a7-a2f7-450b-af90-fc5f6c72e12e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:148
Nov 3 06:56:51.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8834" for this suite.
• [SLOW TEST:11.186 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/empty_dir.go:40
when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/empty_dir.go:43
files with FSGroup ownership should support (root,0644,tmpfs)
test/e2e/common/empty_dir.go:62
------------------------------
SSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
test/e2e/framework/framework.go:147
STEP: Creating a kubernetes client
Nov 3 06:56:40.960: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-584
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
test/e2e/common/projected_downwardapi.go:40
[It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/projected_downwardapi.go:90
STEP: Creating a pod to test downward API volume plugin
Nov 3 06:56:43.266: INFO: Waiting up to 5m0s for pod "metadata-volume-88205ea3-563d-478d-93dd-33c1769ae219" in namespace "projected-584" to be "success or failure"
Nov 3 06:56:43.298: INFO: Pod "metadata-volume-88205ea3-563d-478d-93dd-33c1769ae219": Phase="Pending", Reason="", readiness=false. Elapsed: 31.790449ms
Nov 3 06:56:45.368: INFO: Pod "metadata-volume-88205ea3-563d-478d-93dd-33c1769ae219": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101721691s
Nov 3 06:56:47.582: INFO: Pod "metadata-volume-88205ea3-563d-478d-93dd-33c1769ae219": Phase="Pending", Reason="", readiness=false. Elapsed: 4.315861952s
Nov 3 06:56:49.587: INFO: Pod "metadata-volume-88205ea3-563d-478d-93dd-33c1769ae219": Phase="Pending", Reason="", readiness=false. Elapsed: 6.320769327s
Nov 3 06:56:51.592: INFO: Pod "metadata-volume-88205ea3-563d-478d-93dd-33c1769ae219": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.325773467s
STEP: Saw pod success
Nov 3 06:56:51.592: INFO: Pod "metadata-volume-88205ea3-563d-478d-93dd-33c1769ae219" satisfied condition "success or failure"
Nov 3 06:56:51.596: INFO: Trying to get logs from node kind-worker2 pod metadata-volume-88205ea3-563d-478d-93dd-33c1769ae219 container client-container: <nil>
STEP: delete the pod
Nov 3 06:56:51.631: INFO: Waiting for pod metadata-volume-88205ea3-563d-478d-93dd-33c1769ae219 to disappear
Nov 3 06:56:51.634: INFO: Pod metadata-volume-88205ea3-563d-478d-93dd-33c1769ae219 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
test/e2e/framework/framework.go:148
Nov 3 06:56:51.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-584" for this suite.
• [SLOW TEST:10.690 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/projected_downwardapi.go:34
should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/projected_downwardapi.go:90
------------------------------
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:51.657: INFO: Only supported for providers [azure] (not skeleton)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
test/e2e/framework/framework.go:148
Nov 3 06:56:51.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: azure]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Pre-provisioned PV (default fs)] volumes
test/e2e/storage/testsuites/base.go:98
should store data [BeforeEach]
test/e2e/storage/testsuites/volumes.go:150
Only supported for providers [azure] (not skeleton)
test/e2e/storage/drivers/in_tree.go:1449
------------------------------
SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:51.693: INFO: Driver local doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
test/e2e/framework/framework.go:148
Nov 3 06:56:51.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: block]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
test/e2e/storage/testsuites/base.go:98
should store data [BeforeEach]
test/e2e/storage/testsuites/volumes.go:150
Driver local doesn't support ntfs -- skipping
test/e2e/storage/testsuites/base.go:157
------------------------------
SSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
test/e2e/framework/framework.go:147
STEP: Creating a kubernetes client
Nov 3 06:56:40.291: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename gc
Nov 3 06:56:40.421: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Nov 3 06:56:40.533: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-9197
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support cascading deletion of custom resources
test/e2e/apimachinery/garbage_collector.go:869
Nov 3 06:56:40.678: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:41.536: INFO: created owner resource "owner4wcl2"
Nov 3 06:56:41.557: INFO: created dependent resource "dependentrvxzj"
[AfterEach] [sig-api-machinery] Garbage collector
test/e2e/framework/framework.go:148
Nov 3 06:56:52.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9197" for this suite.
• [SLOW TEST:11.809 seconds]
[sig-api-machinery] Garbage collector
test/e2e/apimachinery/framework.go:23
should support cascading deletion of custom resources
test/e2e/apimachinery/garbage_collector.go:869
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
test/e2e/framework/framework.go:147
STEP: Creating a kubernetes client
Nov 3 06:56:51.704: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename svcaccounts
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svcaccounts-3966
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
test/e2e/framework/framework.go:688
STEP: getting the auto-created API token
Nov 3 06:56:52.456: INFO: created pod pod-service-account-defaultsa
Nov 3 06:56:52.456: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Nov 3 06:56:52.463: INFO: created pod pod-service-account-mountsa
Nov 3 06:56:52.463: INFO: pod pod-service-account-mountsa service account token volume mount: true
Nov 3 06:56:52.474: INFO: created pod pod-service-account-nomountsa
Nov 3 06:56:52.474: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Nov 3 06:56:52.504: INFO: created pod pod-service-account-defaultsa-mountspec
Nov 3 06:56:52.504: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Nov 3 06:56:52.518: INFO: created pod pod-service-account-mountsa-mountspec
Nov 3 06:56:52.518: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Nov 3 06:56:52.531: INFO: created pod pod-service-account-nomountsa-mountspec
Nov 3 06:56:52.531: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Nov 3 06:56:52.551: INFO: created pod pod-service-account-defaultsa-nomountspec
Nov 3 06:56:52.551: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Nov 3 06:56:52.593: INFO: created pod pod-service-account-mountsa-nomountspec
Nov 3 06:56:52.593: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Nov 3 06:56:52.617: INFO: created pod pod-service-account-nomountsa-nomountspec
Nov 3 06:56:52.617: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
test/e2e/framework/framework.go:148
Nov 3 06:56:52.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3966" for this suite.
•
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:52.742: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
test/e2e/framework/framework.go:148
Nov 3 06:56:52.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] CSI Volumes
test/e2e/storage/utils/framework.go:23
[Driver: csi-hostpath]
test/e2e/storage/csi_volumes.go:56
[Testpattern: Pre-provisioned PV (default fs)] volumes
test/e2e/storage/testsuites/base.go:98
should store data [BeforeEach]
test/e2e/storage/testsuites/volumes.go:150
Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
test/e2e/storage/testsuites/base.go:152
------------------------------
[BeforeEach] [sig-windows] Windows volume mounts
test/e2e/windows/framework.go:28
Nov 3 06:56:52.745: INFO: Only supported for node OS distro [windows] (not debian)
[AfterEach] [sig-windows] Windows volume mounts
test/e2e/framework/framework.go:148
Nov 3 06:56:52.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds]
[sig-windows] Windows volume mounts
test/e2e/windows/framework.go:27
check volume mount permissions [BeforeEach]
test/e2e/windows/volumes.go:62
container should have readOnly permissions on emptyDir
test/e2e/windows/volumes.go:64
Only supported for node OS distro [windows] (not debian)
test/e2e/windows/framework.go:30
------------------------------
SSS
------------------------------
[BeforeEach] [sig-storage] Secrets
test/e2e/framework/framework.go:147
STEP: Creating a kubernetes client
Nov 3 06:56:40.453: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename secrets
Nov 3 06:56:40.654: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Nov 3 06:56:40.698: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-7025
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:688
STEP: Creating secret with name secret-test-07c122a4-c615-4468-8240-b02d1b4a84a9
STEP: Creating a pod to test consume secrets
Nov 3 06:56:40.934: INFO: Waiting up to 5m0s for pod "pod-secrets-9a17c54c-c946-48dd-801d-db80c105ed14" in namespace "secrets-7025" to be "success or failure"
Nov 3 06:56:40.973: INFO: Pod "pod-secrets-9a17c54c-c946-48dd-801d-db80c105ed14": Phase="Pending", Reason="", readiness=false. Elapsed: 38.485149ms
Nov 3 06:56:43.240: INFO: Pod "pod-secrets-9a17c54c-c946-48dd-801d-db80c105ed14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.305760134s
Nov 3 06:56:45.270: INFO: Pod "pod-secrets-9a17c54c-c946-48dd-801d-db80c105ed14": Phase="Pending", Reason="", readiness=false. Elapsed: 4.335869855s
Nov 3 06:56:47.556: INFO: Pod "pod-secrets-9a17c54c-c946-48dd-801d-db80c105ed14": Phase="Pending", Reason="", readiness=false. Elapsed: 6.621851772s
Nov 3 06:56:49.560: INFO: Pod "pod-secrets-9a17c54c-c946-48dd-801d-db80c105ed14": Phase="Pending", Reason="", readiness=false. Elapsed: 8.626340278s
Nov 3 06:56:51.565: INFO: Pod "pod-secrets-9a17c54c-c946-48dd-801d-db80c105ed14": Phase="Pending", Reason="", readiness=false. Elapsed: 10.630405697s
Nov 3 06:56:53.590: INFO: Pod "pod-secrets-9a17c54c-c946-48dd-801d-db80c105ed14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.656001781s
STEP: Saw pod success
Nov 3 06:56:53.590: INFO: Pod "pod-secrets-9a17c54c-c946-48dd-801d-db80c105ed14" satisfied condition "success or failure"
Nov 3 06:56:53.619: INFO: Trying to get logs from node kind-worker2 pod pod-secrets-9a17c54c-c946-48dd-801d-db80c105ed14 container secret-volume-test: <nil>
STEP: delete the pod
Nov 3 06:56:53.790: INFO: Waiting for pod pod-secrets-9a17c54c-c946-48dd-801d-db80c105ed14 to disappear
Nov 3 06:56:53.821: INFO: Pod pod-secrets-9a17c54c-c946-48dd-801d-db80c105ed14 no longer exists
[AfterEach] [sig-storage] Secrets
test/e2e/framework/framework.go:148
Nov 3 06:56:53.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7025" for this suite.
• [SLOW TEST:13.463 seconds]
[sig-storage] Secrets
test/e2e/common/secrets_volume.go:34
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:688
------------------------------
SSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:147
STEP: Creating a kubernetes client
Nov 3 06:56:40.496: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
Nov 3 06:56:41.622: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Nov 3 06:56:41.693: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-1334
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:688
STEP: Creating a pod to test emptydir 0777 on node default medium
Nov 3 06:56:41.933: INFO: Waiting up to 5m0s for pod "pod-57880e8f-988b-41d9-acd1-1e93dda8679c" in namespace "emptydir-1334" to be "success or failure"
Nov 3 06:56:41.958: INFO: Pod "pod-57880e8f-988b-41d9-acd1-1e93dda8679c": Phase="Pending", Reason="", readiness=false. Elapsed: 24.830321ms
Nov 3 06:56:43.964: INFO: Pod "pod-57880e8f-988b-41d9-acd1-1e93dda8679c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031633337s
Nov 3 06:56:45.979: INFO: Pod "pod-57880e8f-988b-41d9-acd1-1e93dda8679c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04599743s
Nov 3 06:56:48.234: INFO: Pod "pod-57880e8f-988b-41d9-acd1-1e93dda8679c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.301239804s
Nov 3 06:56:50.240: INFO: Pod "pod-57880e8f-988b-41d9-acd1-1e93dda8679c": Phase="Running", Reason="", readiness=true. Elapsed: 8.306953068s
Nov 3 06:56:52.246: INFO: Pod "pod-57880e8f-988b-41d9-acd1-1e93dda8679c": Phase="Running", Reason="", readiness=true. Elapsed: 10.313579641s
Nov 3 06:56:54.252: INFO: Pod "pod-57880e8f-988b-41d9-acd1-1e93dda8679c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.319042007s
STEP: Saw pod success
Nov 3 06:56:54.252: INFO: Pod "pod-57880e8f-988b-41d9-acd1-1e93dda8679c" satisfied condition "success or failure"
Nov 3 06:56:54.255: INFO: Trying to get logs from node kind-worker2 pod pod-57880e8f-988b-41d9-acd1-1e93dda8679c container test-container: <nil>
STEP: delete the pod
Nov 3 06:56:54.302: INFO: Waiting for pod pod-57880e8f-988b-41d9-acd1-1e93dda8679c to disappear
Nov 3 06:56:54.317: INFO: Pod pod-57880e8f-988b-41d9-acd1-1e93dda8679c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:148
Nov 3 06:56:54.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1334" for this suite.
• [SLOW TEST:13.853 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/empty_dir.go:40
should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:688
------------------------------
[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m | |
[90m------------------------------[0m | |
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes | |
test/e2e/storage/testsuites/base.go:99 | |
Nov 3 06:56:54.363: INFO: Driver hostPathSymlink doesn't support ext3 -- skipping | |
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:56:54.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
[36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m | |
[sig-storage] In-tree Volumes | |
[90mtest/e2e/storage/utils/framework.go:23[0m | |
[Driver: hostPathSymlink] | |
[90mtest/e2e/storage/in_tree_volumes.go:70[0m | |
[Testpattern: Inline-volume (ext3)] volumes | |
[90mtest/e2e/storage/testsuites/base.go:98[0m | |
[36m[1mshould allow exec of files on the volume [BeforeEach][0m | |
[90mtest/e2e/storage/testsuites/volumes.go:191[0m | |
[36mDriver hostPathSymlink doesn't support ext3 -- skipping[0m | |
test/e2e/storage/testsuites/base.go:157 | |
[90m------------------------------[0m | |
[36mS[0m[36mS[0m[36mS[0m | |
[90m------------------------------[0m | |
[BeforeEach] [k8s.io] Security Context | |
test/e2e/framework/framework.go:147 | |
[1mSTEP[0m: Creating a kubernetes client | |
Nov 3 06:56:40.461: INFO: >>> kubeConfig: /root/.kube/kind-test-config | |
[1mSTEP[0m: Building a namespace api object, basename security-context-test | |
Nov 3 06:56:40.752: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. | |
Nov 3 06:56:40.816: INFO: Found ClusterRoles; assuming RBAC is enabled. | |
[1mSTEP[0m: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-3748 | |
[1mSTEP[0m: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [k8s.io] Security Context | |
test/e2e/common/security_context.go:39 | |
[It] should not run with an explicit root user ID [LinuxOnly] | |
test/e2e/common/security_context.go:132 | |
[AfterEach] [k8s.io] Security Context | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:56:54.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
[1mSTEP[0m: Destroying namespace "security-context-test-3748" for this suite. | |
[32m• [SLOW TEST:14.549 seconds][0m | |
[k8s.io] Security Context | |
[90mtest/e2e/framework/framework.go:683[0m | |
When creating a container with runAsNonRoot | |
[90mtest/e2e/common/security_context.go:97[0m | |
should not run with an explicit root user ID [LinuxOnly] | |
[90mtest/e2e/common/security_context.go:132[0m | |
[90m------------------------------[0m | |
[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m | |
[90m------------------------------[0m | |
[BeforeEach] [sig-storage] PersistentVolumes-local | |
test/e2e/framework/framework.go:147 | |
[1mSTEP[0m: Creating a kubernetes client | |
Nov 3 06:56:40.479: INFO: >>> kubeConfig: /root/.kube/kind-test-config | |
[1mSTEP[0m: Building a namespace api object, basename persistent-local-volumes-test | |
Nov 3 06:56:42.485: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. | |
Nov 3 06:56:42.581: INFO: Found ClusterRoles; assuming RBAC is enabled. | |
[1mSTEP[0m: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-3239 | |
[1mSTEP[0m: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [sig-storage] PersistentVolumes-local | |
test/e2e/storage/persistent_volumes-local.go:153 | |
[BeforeEach] [Volume type: dir-link-bindmounted] | |
test/e2e/storage/persistent_volumes-local.go:189 | |
[1mSTEP[0m: Initializing test volumes | |
Nov 3 06:56:53.277: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-122bbfce-84e6-4e44-a32a-e2ea29b163bd-backend && mount --bind /tmp/local-volume-test-122bbfce-84e6-4e44-a32a-e2ea29b163bd-backend /tmp/local-volume-test-122bbfce-84e6-4e44-a32a-e2ea29b163bd-backend && ln -s /tmp/local-volume-test-122bbfce-84e6-4e44-a32a-e2ea29b163bd-backend /tmp/local-volume-test-122bbfce-84e6-4e44-a32a-e2ea29b163bd] Namespace:persistent-local-volumes-test-3239 PodName:hostexec-kind-worker-ftnbr ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} | |
Nov 3 06:56:53.277: INFO: >>> kubeConfig: /root/.kube/kind-test-config | |
[1mSTEP[0m: Creating local PVCs and PVs | |
Nov 3 06:56:53.801: INFO: Creating a PV followed by a PVC | |
Nov 3 06:56:53.906: INFO: Waiting for PV local-pvbr9zr to bind to PVC pvc-tjqdc | |
Nov 3 06:56:53.906: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-tjqdc] to have phase Bound | |
Nov 3 06:56:53.926: INFO: PersistentVolumeClaim pvc-tjqdc found but phase is Pending instead of Bound. | |
Nov 3 06:56:55.930: INFO: PersistentVolumeClaim pvc-tjqdc found and phase=Bound (2.023506165s) | |
Nov 3 06:56:55.930: INFO: Waiting up to 3m0s for PersistentVolume local-pvbr9zr to have phase Bound | |
Nov 3 06:56:55.933: INFO: PersistentVolume local-pvbr9zr found and phase=Bound (3.815805ms) | |
[BeforeEach] Set fsGroup for local volume | |
test/e2e/storage/persistent_volumes-local.go:255 | |
[It] should set different fsGroup for second pod if first pod is deleted | |
test/e2e/storage/persistent_volumes-local.go:280 | |
Nov 3 06:56:55.940: INFO: Disabled temporarily, reopen after #73168 is fixed | |
[AfterEach] [Volume type: dir-link-bindmounted] | |
test/e2e/storage/persistent_volumes-local.go:198 | |
STEP: Cleaning up PVC and PV
Nov 3 06:56:55.941: INFO: Deleting PersistentVolumeClaim "pvc-tjqdc" | |
Nov 3 06:56:55.955: INFO: Deleting PersistentVolume "local-pvbr9zr" | |
STEP: Removing the test directory
Nov 3 06:56:55.971: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-122bbfce-84e6-4e44-a32a-e2ea29b163bd && umount /tmp/local-volume-test-122bbfce-84e6-4e44-a32a-e2ea29b163bd-backend && rm -r /tmp/local-volume-test-122bbfce-84e6-4e44-a32a-e2ea29b163bd-backend] Namespace:persistent-local-volumes-test-3239 PodName:hostexec-kind-worker-ftnbr ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} | |
Nov 3 06:56:55.971: INFO: >>> kubeConfig: /root/.kube/kind-test-config | |
[AfterEach] [sig-storage] PersistentVolumes-local | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:56:56.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "persistent-local-volumes-test-3239" for this suite.
S [SKIPPING] [15.847 seconds]
[sig-storage] PersistentVolumes-local
test/e2e/storage/utils/framework.go:23
[Volume type: dir-link-bindmounted]
test/e2e/storage/persistent_volumes-local.go:186
Set fsGroup for local volume
test/e2e/storage/persistent_volumes-local.go:254
should set different fsGroup for second pod if first pod is deleted [It]
test/e2e/storage/persistent_volumes-local.go:280
Disabled temporarily, reopen after #73168 is fixed
test/e2e/storage/persistent_volumes-local.go:281
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes | |
test/e2e/storage/testsuites/base.go:99 | |
Nov 3 06:56:56.328: INFO: Only supported for providers [azure] (not skeleton) | |
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:56:56.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: azure]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/storage/testsuites/base.go:98
should allow exec of files on the volume [BeforeEach]
test/e2e/storage/testsuites/volumes.go:191
Only supported for providers [azure] (not skeleton)
test/e2e/storage/drivers/in_tree.go:1449
------------------------------
SSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl alpha client | |
test/e2e/framework/framework.go:147 | |
STEP: Creating a kubernetes client
Nov 3 06:56:56.336: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-96
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl alpha client | |
test/e2e/kubectl/kubectl.go:208 | |
[BeforeEach] Kubectl run CronJob | |
test/e2e/kubectl/kubectl.go:217 | |
[It] should create a CronJob | |
test/e2e/kubectl/kubectl.go:226 | |
Nov 3 06:56:56.547: INFO: Could not find batch/v2alpha1, Resource=cronjobs resource, skipping test: &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"Status", APIVersion:"v1"}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"the server could not find the requested resource", Reason:"NotFound", Details:(*v1.StatusDetails)(0xc002845560), Code:404}} | |
[AfterEach] Kubectl run CronJob | |
test/e2e/kubectl/kubectl.go:222 | |
Nov 3 06:56:56.548: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:33605 --kubeconfig=/root/.kube/kind-test-config delete cronjobs e2e-test-echo-cronjob-alpha --namespace=kubectl-96' | |
Nov 3 06:56:56.902: INFO: rc: 1 | |
Nov 3 06:56:56.902: FAIL: Unexpected error: | |
<exec.CodeExitError>: { | |
Err: { | |
s: "error running &{/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl [kubectl --server=https://127.0.0.1:33605 --kubeconfig=/root/.kube/kind-test-config delete cronjobs e2e-test-echo-cronjob-alpha --namespace=kubectl-96] [] <nil> Error from server (NotFound): cronjobs.batch \"e2e-test-echo-cronjob-alpha\" not found\n [] <nil> 0xc0028b0210 exit status 1 <nil> <nil> true [0xc001cdfa00 0xc001cdfa18 0xc001cdfa30] [0xc001cdfa00 0xc001cdfa18 0xc001cdfa30] [0xc001cdfa10 0xc001cdfa28] [0x1109c80 0x1109c80] 0xc0028458c0 <nil>}:\nCommand stdout:\n\nstderr:\nError from server (NotFound): cronjobs.batch \"e2e-test-echo-cronjob-alpha\" not found\n\nerror:\nexit status 1", | |
}, | |
Code: 1, | |
} | |
error running &{/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl [kubectl --server=https://127.0.0.1:33605 --kubeconfig=/root/.kube/kind-test-config delete cronjobs e2e-test-echo-cronjob-alpha --namespace=kubectl-96] [] <nil> Error from server (NotFound): cronjobs.batch "e2e-test-echo-cronjob-alpha" not found | |
[] <nil> 0xc0028b0210 exit status 1 <nil> <nil> true [0xc001cdfa00 0xc001cdfa18 0xc001cdfa30] [0xc001cdfa00 0xc001cdfa18 0xc001cdfa30] [0xc001cdfa10 0xc001cdfa28] [0x1109c80 0x1109c80] 0xc0028458c0 <nil>}: | |
Command stdout: | |
stderr: | |
Error from server (NotFound): cronjobs.batch "e2e-test-echo-cronjob-alpha" not found | |
error: | |
exit status 1 | |
occurred | |
[AfterEach] [sig-cli] Kubectl alpha client | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:56:56.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "kubectl-96" for this suite.
S [SKIPPING] [0.596 seconds]
[sig-cli] Kubectl alpha client
test/e2e/kubectl/framework.go:23
Kubectl run CronJob
test/e2e/kubectl/kubectl.go:213
should create a CronJob [It]
test/e2e/kubectl/kubectl.go:226
Could not find batch/v2alpha1, Resource=cronjobs resource, skipping test: &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"Status", APIVersion:"v1"}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"the server could not find the requested resource", Reason:"NotFound", Details:(*v1.StatusDetails)(0xc002845560), Code:404}}
test/e2e/kubectl/kubectl.go:227
------------------------------
SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local | |
test/e2e/framework/framework.go:147 | |
STEP: Creating a kubernetes client
Nov 3 06:56:42.921: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-2540
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local | |
test/e2e/storage/persistent_volumes-local.go:153 | |
[BeforeEach] [Volume type: dir-link] | |
test/e2e/storage/persistent_volumes-local.go:189 | |
STEP: Initializing test volumes
Nov 3 06:56:55.768: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-025ec729-47b7-419d-b07a-726ffdb641d3-backend && ln -s /tmp/local-volume-test-025ec729-47b7-419d-b07a-726ffdb641d3-backend /tmp/local-volume-test-025ec729-47b7-419d-b07a-726ffdb641d3] Namespace:persistent-local-volumes-test-2540 PodName:hostexec-kind-worker-wrt76 ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} | |
Nov 3 06:56:55.768: INFO: >>> kubeConfig: /root/.kube/kind-test-config | |
STEP: Creating local PVCs and PVs
Nov 3 06:56:56.015: INFO: Creating a PV followed by a PVC | |
Nov 3 06:56:56.045: INFO: Waiting for PV local-pvpd8qk to bind to PVC pvc-gfcqn | |
Nov 3 06:56:56.046: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-gfcqn] to have phase Bound | |
Nov 3 06:56:56.069: INFO: PersistentVolumeClaim pvc-gfcqn found but phase is Pending instead of Bound. | |
Nov 3 06:56:58.073: INFO: PersistentVolumeClaim pvc-gfcqn found but phase is Pending instead of Bound. | |
Nov 3 06:57:00.078: INFO: PersistentVolumeClaim pvc-gfcqn found but phase is Pending instead of Bound. | |
Nov 3 06:57:02.086: INFO: PersistentVolumeClaim pvc-gfcqn found and phase=Bound (6.040929945s) | |
Nov 3 06:57:02.087: INFO: Waiting up to 3m0s for PersistentVolume local-pvpd8qk to have phase Bound | |
Nov 3 06:57:02.090: INFO: PersistentVolume local-pvpd8qk found and phase=Bound (3.304692ms) | |
[BeforeEach] Set fsGroup for local volume | |
test/e2e/storage/persistent_volumes-local.go:255 | |
[It] should set different fsGroup for second pod if first pod is deleted | |
test/e2e/storage/persistent_volumes-local.go:280 | |
Nov 3 06:57:02.102: INFO: Disabled temporarily, reopen after #73168 is fixed | |
[AfterEach] [Volume type: dir-link] | |
test/e2e/storage/persistent_volumes-local.go:198 | |
STEP: Cleaning up PVC and PV
Nov 3 06:57:02.103: INFO: Deleting PersistentVolumeClaim "pvc-gfcqn" | |
Nov 3 06:57:02.123: INFO: Deleting PersistentVolume "local-pvpd8qk" | |
STEP: Removing the test directory
Nov 3 06:57:02.136: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-025ec729-47b7-419d-b07a-726ffdb641d3 && rm -r /tmp/local-volume-test-025ec729-47b7-419d-b07a-726ffdb641d3-backend] Namespace:persistent-local-volumes-test-2540 PodName:hostexec-kind-worker-wrt76 ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} | |
Nov 3 06:57:02.136: INFO: >>> kubeConfig: /root/.kube/kind-test-config | |
[AfterEach] [sig-storage] PersistentVolumes-local | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:02.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "persistent-local-volumes-test-2540" for this suite.
S [SKIPPING] [19.467 seconds]
[sig-storage] PersistentVolumes-local
test/e2e/storage/utils/framework.go:23
[Volume type: dir-link]
test/e2e/storage/persistent_volumes-local.go:186
Set fsGroup for local volume
test/e2e/storage/persistent_volumes-local.go:254
should set different fsGroup for second pod if first pod is deleted [It]
test/e2e/storage/persistent_volumes-local.go:280
Disabled temporarily, reopen after #73168 is fixed
test/e2e/storage/persistent_volumes-local.go:281
------------------------------
S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes | |
test/e2e/storage/testsuites/base.go:99 | |
Nov 3 06:57:02.394: INFO: Driver azure doesn't support ext3 -- skipping | |
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:02.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: azure]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Inline-volume (ext3)] volumes
test/e2e/storage/testsuites/base.go:98
should allow exec of files on the volume [BeforeEach]
test/e2e/storage/testsuites/volumes.go:191
Driver azure doesn't support ext3 -- skipping
test/e2e/storage/testsuites/base.go:157
------------------------------
SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath | |
test/e2e/storage/testsuites/base.go:99 | |
Nov 3 06:57:02.404: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern | |
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:02.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: cinder]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/testsuites/base.go:98
should support non-existent path [BeforeEach]
test/e2e/storage/testsuites/subpath.go:189
Driver supports dynamic provisioning, skipping InlineVolume pattern
test/e2e/storage/testsuites/base.go:685
------------------------------
SSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local | |
test/e2e/framework/framework.go:147 | |
STEP: Creating a kubernetes client
Nov 3 06:56:51.660: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-9640
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local | |
test/e2e/storage/persistent_volumes-local.go:153 | |
[BeforeEach] [Volume type: tmpfs] | |
test/e2e/storage/persistent_volumes-local.go:189 | |
STEP: Initializing test volumes
STEP: Creating tmpfs mount point on node "kind-worker" at path "/tmp/local-volume-test-919996c0-1e5b-4112-bd40-3744fbff3aef"
Nov 3 06:56:59.926: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-919996c0-1e5b-4112-bd40-3744fbff3aef" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-919996c0-1e5b-4112-bd40-3744fbff3aef" "/tmp/local-volume-test-919996c0-1e5b-4112-bd40-3744fbff3aef"] Namespace:persistent-local-volumes-test-9640 PodName:hostexec-kind-worker-dmz5t ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} | |
Nov 3 06:56:59.927: INFO: >>> kubeConfig: /root/.kube/kind-test-config | |
STEP: Creating local PVCs and PVs
Nov 3 06:57:00.106: INFO: Creating a PV followed by a PVC | |
Nov 3 06:57:00.126: INFO: Waiting for PV local-pvzjpt2 to bind to PVC pvc-vxjdc | |
Nov 3 06:57:00.126: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-vxjdc] to have phase Bound | |
Nov 3 06:57:00.145: INFO: PersistentVolumeClaim pvc-vxjdc found but phase is Pending instead of Bound. | |
Nov 3 06:57:02.158: INFO: PersistentVolumeClaim pvc-vxjdc found and phase=Bound (2.032078507s) | |
Nov 3 06:57:02.158: INFO: Waiting up to 3m0s for PersistentVolume local-pvzjpt2 to have phase Bound | |
Nov 3 06:57:02.169: INFO: PersistentVolume local-pvzjpt2 found and phase=Bound (10.8475ms) | |
[BeforeEach] Set fsGroup for local volume | |
test/e2e/storage/persistent_volumes-local.go:255 | |
[It] should set different fsGroup for second pod if first pod is deleted | |
test/e2e/storage/persistent_volumes-local.go:280 | |
Nov 3 06:57:02.185: INFO: Disabled temporarily, reopen after #73168 is fixed | |
[AfterEach] [Volume type: tmpfs] | |
test/e2e/storage/persistent_volumes-local.go:198 | |
STEP: Cleaning up PVC and PV
Nov 3 06:57:02.186: INFO: Deleting PersistentVolumeClaim "pvc-vxjdc" | |
Nov 3 06:57:02.212: INFO: Deleting PersistentVolume "local-pvzjpt2" | |
STEP: Unmount tmpfs mount point on node "kind-worker" at path "/tmp/local-volume-test-919996c0-1e5b-4112-bd40-3744fbff3aef"
Nov 3 06:57:02.238: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-919996c0-1e5b-4112-bd40-3744fbff3aef"] Namespace:persistent-local-volumes-test-9640 PodName:hostexec-kind-worker-dmz5t ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} | |
Nov 3 06:57:02.238: INFO: >>> kubeConfig: /root/.kube/kind-test-config | |
STEP: Removing the test directory
Nov 3 06:57:02.486: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-919996c0-1e5b-4112-bd40-3744fbff3aef] Namespace:persistent-local-volumes-test-9640 PodName:hostexec-kind-worker-dmz5t ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} | |
Nov 3 06:57:02.486: INFO: >>> kubeConfig: /root/.kube/kind-test-config | |
[AfterEach] [sig-storage] PersistentVolumes-local | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:02.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "persistent-local-volumes-test-9640" for this suite.
S [SKIPPING] [11.042 seconds]
[sig-storage] PersistentVolumes-local
test/e2e/storage/utils/framework.go:23
[Volume type: tmpfs]
test/e2e/storage/persistent_volumes-local.go:186
Set fsGroup for local volume
test/e2e/storage/persistent_volumes-local.go:254
should set different fsGroup for second pod if first pod is deleted [It]
test/e2e/storage/persistent_volumes-local.go:280
Disabled temporarily, reopen after #73168 is fixed
test/e2e/storage/persistent_volumes-local.go:281
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath | |
test/e2e/storage/testsuites/base.go:99 | |
Nov 3 06:57:02.705: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern | |
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:02.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
S [SKIPPING] in Spec Setup (BeforeEach) [0.003 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: aws]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/testsuites/base.go:98
should support file as subpath [LinuxOnly] [BeforeEach]
test/e2e/storage/testsuites/subpath.go:225
Driver supports dynamic provisioning, skipping InlineVolume pattern
test/e2e/storage/testsuites/base.go:685
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes | |
test/e2e/storage/testsuites/base.go:99 | |
Nov 3 06:57:02.710: INFO: Driver hostPath doesn't support ext3 -- skipping | |
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:02.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: hostPath]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Inline-volume (ext3)] volumes
test/e2e/storage/testsuites/base.go:98
should store data [BeforeEach]
test/e2e/storage/testsuites/volumes.go:150
Driver hostPath doesn't support ext3 -- skipping
test/e2e/storage/testsuites/base.go:157
------------------------------
S
------------------------------
[BeforeEach] [sig-storage] Downward API volume | |
test/e2e/framework/framework.go:147 | |
STEP: Creating a kubernetes client
Nov 3 06:56:40.498: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename downward-api
Nov 3 06:56:41.638: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Nov 3 06:56:41.655: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-2755
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume | |
test/e2e/common/downwardapi_volume.go:40 | |
[It] should provide container's cpu limit [NodeConformance] [Conformance] | |
test/e2e/framework/framework.go:688 | |
STEP: Creating a pod to test downward API volume plugin
Nov 3 06:56:41.887: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f0891c63-2747-44c8-a22c-9f7605603b7a" in namespace "downward-api-2755" to be "success or failure" | |
Nov 3 06:56:41.925: INFO: Pod "downwardapi-volume-f0891c63-2747-44c8-a22c-9f7605603b7a": Phase="Pending", Reason="", readiness=false. Elapsed: 38.187873ms | |
Nov 3 06:56:43.934: INFO: Pod "downwardapi-volume-f0891c63-2747-44c8-a22c-9f7605603b7a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047378418s | |
Nov 3 06:56:45.939: INFO: Pod "downwardapi-volume-f0891c63-2747-44c8-a22c-9f7605603b7a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051867026s | |
Nov 3 06:56:48.234: INFO: Pod "downwardapi-volume-f0891c63-2747-44c8-a22c-9f7605603b7a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.346924228s | |
Nov 3 06:56:50.239: INFO: Pod "downwardapi-volume-f0891c63-2747-44c8-a22c-9f7605603b7a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.35201566s | |
Nov 3 06:56:52.246: INFO: Pod "downwardapi-volume-f0891c63-2747-44c8-a22c-9f7605603b7a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.359204056s | |
Nov 3 06:56:54.251: INFO: Pod "downwardapi-volume-f0891c63-2747-44c8-a22c-9f7605603b7a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.364465546s | |
Nov 3 06:56:56.265: INFO: Pod "downwardapi-volume-f0891c63-2747-44c8-a22c-9f7605603b7a": Phase="Pending", Reason="", readiness=false. Elapsed: 14.378567859s | |
Nov 3 06:56:58.269: INFO: Pod "downwardapi-volume-f0891c63-2747-44c8-a22c-9f7605603b7a": Phase="Pending", Reason="", readiness=false. Elapsed: 16.382305572s | |
Nov 3 06:57:00.277: INFO: Pod "downwardapi-volume-f0891c63-2747-44c8-a22c-9f7605603b7a": Phase="Pending", Reason="", readiness=false. Elapsed: 18.390268677s | |
Nov 3 06:57:02.303: INFO: Pod "downwardapi-volume-f0891c63-2747-44c8-a22c-9f7605603b7a": Phase="Pending", Reason="", readiness=false. Elapsed: 20.416660282s | |
Nov 3 06:57:04.309: INFO: Pod "downwardapi-volume-f0891c63-2747-44c8-a22c-9f7605603b7a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.421717977s | |
STEP: Saw pod success
Nov 3 06:57:04.309: INFO: Pod "downwardapi-volume-f0891c63-2747-44c8-a22c-9f7605603b7a" satisfied condition "success or failure" | |
Nov 3 06:57:04.311: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-f0891c63-2747-44c8-a22c-9f7605603b7a container client-container: <nil> | |
STEP: delete the pod
Nov 3 06:57:04.821: INFO: Waiting for pod downwardapi-volume-f0891c63-2747-44c8-a22c-9f7605603b7a to disappear | |
Nov 3 06:57:04.824: INFO: Pod downwardapi-volume-f0891c63-2747-44c8-a22c-9f7605603b7a no longer exists | |
[AfterEach] [sig-storage] Downward API volume | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:04.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "downward-api-2755" for this suite.
• [SLOW TEST:24.335 seconds]
[sig-storage] Downward API volume
test/e2e/common/downwardapi_volume.go:35
should provide container's cpu limit [NodeConformance] [Conformance]
test/e2e/framework/framework.go:688
------------------------------
SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes | |
test/e2e/storage/testsuites/base.go:99 | |
Nov 3 06:57:04.852: INFO: Driver local doesn't support ext3 -- skipping | |
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:04.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: dir-bindmounted]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/storage/testsuites/base.go:98
should allow exec of files on the volume [BeforeEach]
test/e2e/storage/testsuites/volumes.go:191
Driver local doesn't support ext3 -- skipping
test/e2e/storage/testsuites/base.go:157
------------------------------
SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath | |
test/e2e/storage/testsuites/base.go:99 | |
Nov 3 06:57:04.859: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern | |
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:04.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: azure]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/testsuites/base.go:98
should support file as subpath [LinuxOnly] [BeforeEach]
test/e2e/storage/testsuites/subpath.go:225
Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
test/e2e/storage/testsuites/base.go:685
------------------------------
[BeforeEach] [Testpattern: inline ephemeral CSI volume] ephemeral | |
test/e2e/storage/testsuites/base.go:99 | |
Nov 3 06:57:04.864: INFO: csi-hostpath-v0 has no volume attributes defined, doesn't support ephemeral inline volumes | |
[AfterEach] [Testpattern: inline ephemeral CSI volume] ephemeral | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:04.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] CSI Volumes
test/e2e/storage/utils/framework.go:23
[Driver: csi-hostpath-v0]
test/e2e/storage/csi_volumes.go:56
[Testpattern: inline ephemeral CSI volume] ephemeral
test/e2e/storage/testsuites/base.go:98
should support two pods which share the same volume [BeforeEach]
test/e2e/storage/testsuites/ephemeral.go:140
csi-hostpath-v0 has no volume attributes defined, doesn't support ephemeral inline volumes
test/e2e/storage/drivers/csi.go:136
------------------------------
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] | |
test/e2e/common/sysctl.go:34 | |
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] | |
test/e2e/framework/framework.go:147 | |
STEP: Creating a kubernetes client
Nov 3 06:56:40.519: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename sysctl
Nov 3 06:56:42.795: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Nov 3 06:56:42.866: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sysctl-2435
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] | |
test/e2e/common/sysctl.go:63 | |
[It] should support unsafe sysctls which are actually whitelisted | |
test/e2e/common/sysctl.go:110 | |
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:05.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "sysctl-2435" for this suite.
• [SLOW TEST:24.858 seconds]
[k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
test/e2e/framework/framework.go:683
should support unsafe sysctls which are actually whitelisted
test/e2e/common/sysctl.go:110
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] | |
test/e2e/framework/framework.go:147 | |
STEP: Creating a kubernetes client
Nov 3 06:56:40.482: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename webhook
Nov 3 06:56:42.605: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Nov 3 06:56:42.677: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-8732
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] | |
test/e2e/apimachinery/webhook.go:87 | |
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Nov  3 06:56:43.588: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Nov  3 06:56:45.648: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361003, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361003, loc:(*time.Location)(0x83e1840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361003, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361003, loc:(*time.Location)(0x83e1840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov  3 06:56:48.229: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361003, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361003, loc:(*time.Location)(0x83e1840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361003, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361003, loc:(*time.Location)(0x83e1840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov  3 06:56:49.652: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361003, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361003, loc:(*time.Location)(0x83e1840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361003, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361003, loc:(*time.Location)(0x83e1840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov  3 06:56:51.653: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361003, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361003, loc:(*time.Location)(0x83e1840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361003, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361003, loc:(*time.Location)(0x83e1840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Nov  3 06:56:54.776: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
test/e2e/framework/framework.go:688
STEP: Registering the webhook via the AdmissionRegistration API
Nov  3 06:56:54.822: INFO: Waiting for webhook configuration to be ready...
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/framework/framework.go:148
Nov  3 06:57:05.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8732" for this suite.
STEP: Destroying namespace "webhook-8732-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:25.136 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
should be able to deny pod and configmap creation [Conformance]
test/e2e/framework/framework.go:688
------------------------------
SS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/framework/framework.go:147
STEP: Creating a kubernetes client
Nov  3 06:56:51.248: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-2475
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
Nov  3 06:56:52.162: INFO: role binding webhook-auth-reader already exists
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Nov  3 06:56:52.186: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Nov  3 06:56:54.199: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361012, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361012, loc:(*time.Location)(0x83e1840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361012, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361012, loc:(*time.Location)(0x83e1840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov  3 06:56:56.210: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361012, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361012, loc:(*time.Location)(0x83e1840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361012, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361012, loc:(*time.Location)(0x83e1840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov  3 06:56:58.244: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361012, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361012, loc:(*time.Location)(0x83e1840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361012, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361012, loc:(*time.Location)(0x83e1840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov  3 06:57:00.206: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361012, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361012, loc:(*time.Location)(0x83e1840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361012, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361012, loc:(*time.Location)(0x83e1840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov  3 06:57:02.213: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361012, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361012, loc:(*time.Location)(0x83e1840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361012, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361012, loc:(*time.Location)(0x83e1840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Nov  3 06:57:05.242: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
test/e2e/framework/framework.go:688
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/framework/framework.go:148
Nov  3 06:57:05.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2475" for this suite.
STEP: Destroying namespace "webhook-2475-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:14.630 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
should mutate pod and apply defaults after mutation [Conformance]
test/e2e/framework/framework.go:688
------------------------------
SS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
test/e2e/framework/framework.go:147
STEP: Creating a kubernetes client
Nov  3 06:57:05.626: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-3847
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
test/e2e/framework/framework.go:688
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
test/e2e/framework/framework.go:148
Nov  3 06:57:05.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3847" for this suite.
•S
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/framework/framework.go:147
STEP: Creating a kubernetes client
Nov  3 06:56:52.103: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-6882
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
Nov  3 06:56:53.301: INFO: role binding webhook-auth-reader already exists
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Nov  3 06:56:53.483: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Nov  3 06:56:55.500: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361013, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361013, loc:(*time.Location)(0x83e1840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361013, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361013, loc:(*time.Location)(0x83e1840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov  3 06:56:57.504: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361013, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361013, loc:(*time.Location)(0x83e1840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361013, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361013, loc:(*time.Location)(0x83e1840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov  3 06:56:59.503: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361013, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361013, loc:(*time.Location)(0x83e1840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361013, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361013, loc:(*time.Location)(0x83e1840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov  3 06:57:01.528: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361013, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361013, loc:(*time.Location)(0x83e1840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361013, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361013, loc:(*time.Location)(0x83e1840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Nov  3 06:57:04.531: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
test/e2e/framework/framework.go:688
Nov  3 06:57:04.538: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9779-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/framework/framework.go:148
Nov  3 06:57:06.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6882" for this suite.
STEP: Destroying namespace "webhook-6882-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:14.151 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
should mutate custom resource with different stored version [Conformance]
test/e2e/framework/framework.go:688
------------------------------
SSSSSSSS
------------------------------
[BeforeEach] [sig-node] ConfigMap
test/e2e/framework/framework.go:147
STEP: Creating a kubernetes client
Nov  3 06:57:06.271: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename configmap
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-5652
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
test/e2e/framework/framework.go:688
STEP: Creating configMap that has name configmap-test-emptyKey-d04fa7fe-1503-462c-a779-9cd7d80db7e7
[AfterEach] [sig-node] ConfigMap
test/e2e/framework/framework.go:148
Nov  3 06:57:06.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5652" for this suite.
•SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/testsuites/base.go:99
Nov  3 06:57:06.560: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/framework/framework.go:148
Nov  3 06:57:06.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: blockfs]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/testsuites/base.go:98
should support readOnly file specified in the volumeMount [LinuxOnly] [BeforeEach]
test/e2e/storage/testsuites/subpath.go:374
Driver local doesn't support InlineVolume -- skipping
test/e2e/storage/testsuites/base.go:152
------------------------------
SSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Docker Containers
test/e2e/framework/framework.go:147
STEP: Creating a kubernetes client
Nov  3 06:56:54.372: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename containers
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in containers-6399
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
test/e2e/framework/framework.go:688
STEP: Creating a pod to test override all
Nov  3 06:56:54.549: INFO: Waiting up to 5m0s for pod "client-containers-e3d7f8b2-8162-4ec5-991d-7473bf00afcf" in namespace "containers-6399" to be "success or failure"
Nov  3 06:56:54.574: INFO: Pod "client-containers-e3d7f8b2-8162-4ec5-991d-7473bf00afcf": Phase="Pending", Reason="", readiness=false. Elapsed: 24.818809ms
Nov  3 06:56:56.583: INFO: Pod "client-containers-e3d7f8b2-8162-4ec5-991d-7473bf00afcf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034281714s
Nov  3 06:56:58.588: INFO: Pod "client-containers-e3d7f8b2-8162-4ec5-991d-7473bf00afcf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03874951s
Nov  3 06:57:00.591: INFO: Pod "client-containers-e3d7f8b2-8162-4ec5-991d-7473bf00afcf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042615832s
Nov  3 06:57:02.596: INFO: Pod "client-containers-e3d7f8b2-8162-4ec5-991d-7473bf00afcf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.046723138s
Nov  3 06:57:04.600: INFO: Pod "client-containers-e3d7f8b2-8162-4ec5-991d-7473bf00afcf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.050996206s
Nov  3 06:57:06.611: INFO: Pod "client-containers-e3d7f8b2-8162-4ec5-991d-7473bf00afcf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.061882432s
STEP: Saw pod success
Nov  3 06:57:06.611: INFO: Pod "client-containers-e3d7f8b2-8162-4ec5-991d-7473bf00afcf" satisfied condition "success or failure"
Nov  3 06:57:06.617: INFO: Trying to get logs from node kind-worker2 pod client-containers-e3d7f8b2-8162-4ec5-991d-7473bf00afcf container test-container: <nil>
STEP: delete the pod
Nov  3 06:57:06.667: INFO: Waiting for pod client-containers-e3d7f8b2-8162-4ec5-991d-7473bf00afcf to disappear
Nov  3 06:57:06.678: INFO: Pod client-containers-e3d7f8b2-8162-4ec5-991d-7473bf00afcf no longer exists
[AfterEach] [k8s.io] Docker Containers
test/e2e/framework/framework.go:148
Nov  3 06:57:06.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6399" for this suite.
• [SLOW TEST:12.355 seconds]
[k8s.io] Docker Containers
test/e2e/framework/framework.go:683
should be able to override the image's default command and arguments [NodeConformance] [Conformance]
test/e2e/framework/framework.go:688
------------------------------
SS
------------------------------
[BeforeEach] [sig-storage] Zone Support
test/e2e/framework/framework.go:147
STEP: Creating a kubernetes client
Nov  3 06:57:06.749: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename zone-support
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in zone-support-5395
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Zone Support
test/e2e/storage/vsphere/vsphere_zone_support.go:101
Nov  3 06:57:07.033: INFO: Only supported for providers [vsphere] (not skeleton)
[AfterEach] [sig-storage] Zone Support
test/e2e/framework/framework.go:148
Nov  3 06:57:07.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "zone-support-5395" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [0.402 seconds]
[sig-storage] Zone Support
test/e2e/storage/utils/framework.go:23
Verify a pod is created and attached to a dynamically created PV, based on multiple zones specified in the storage class. (No shared datastores exist among both zones) [BeforeEach]
test/e2e/storage/vsphere/vsphere_zone_support.go:282
Only supported for providers [vsphere] (not skeleton)
test/e2e/storage/vsphere/vsphere_zone_support.go:102
------------------------------
SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
test/e2e/storage/testsuites/base.go:99
Nov  3 06:57:07.159: INFO: Distro debian doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
test/e2e/framework/framework.go:148
Nov  3 06:57:07.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: aws]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
test/e2e/storage/testsuites/base.go:98
should allow exec of files on the volume [BeforeEach]
test/e2e/storage/testsuites/volumes.go:191
Distro debian doesn't support ntfs -- skipping
test/e2e/storage/testsuites/base.go:163
------------------------------
SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
test/e2e/storage/testsuites/base.go:99
Nov  3 06:57:07.165: INFO: Driver azure doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
test/e2e/framework/framework.go:148
Nov  3 06:57:07.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: azure]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
test/e2e/storage/testsuites/base.go:98
should allow exec of files on the volume [BeforeEach]
test/e2e/storage/testsuites/volumes.go:191
Driver azure doesn't support ntfs -- skipping
test/e2e/storage/testsuites/base.go:157
------------------------------
S
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
test/e2e/framework/framework.go:147
STEP: Creating a kubernetes client
Nov  3 06:56:56.937: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename resourcequota
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-3610
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
test/e2e/framework/framework.go:688
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
test/e2e/framework/framework.go:148
Nov  3 06:57:10.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3610" for this suite.
• [SLOW TEST:13.718 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a pod. [Conformance]
test/e2e/framework/framework.go:688
------------------------------
SSSS
------------------------------
[BeforeEach] [sig-storage] Secrets | |
test/e2e/framework/framework.go:147 | |
[1mSTEP[0m: Creating a kubernetes client | |
Nov 3 06:56:53.930: INFO: >>> kubeConfig: /root/.kube/kind-test-config | |
[1mSTEP[0m: Building a namespace api object, basename secrets | |
[1mSTEP[0m: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-915 | |
[1mSTEP[0m: Waiting for a default service account to be provisioned in namespace | |
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] | |
test/e2e/framework/framework.go:688 | |
[1mSTEP[0m: Creating secret with name secret-test-240ea91a-1459-4ee4-84d8-25e8094b5a18 | |
[1mSTEP[0m: Creating a pod to test consume secrets | |
Nov 3 06:56:54.167: INFO: Waiting up to 5m0s for pod "pod-secrets-7b4da097-273a-48d1-b231-c09bce11088f" in namespace "secrets-915" to be "success or failure" | |
Nov 3 06:56:54.174: INFO: Pod "pod-secrets-7b4da097-273a-48d1-b231-c09bce11088f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.343212ms | |
Nov 3 06:56:56.196: INFO: Pod "pod-secrets-7b4da097-273a-48d1-b231-c09bce11088f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029071933s | |
Nov 3 06:56:58.243: INFO: Pod "pod-secrets-7b4da097-273a-48d1-b231-c09bce11088f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075794817s | |
Nov 3 06:57:00.251: INFO: Pod "pod-secrets-7b4da097-273a-48d1-b231-c09bce11088f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083931922s | |
Nov 3 06:57:02.263: INFO: Pod "pod-secrets-7b4da097-273a-48d1-b231-c09bce11088f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.095160433s | |
Nov 3 06:57:04.267: INFO: Pod "pod-secrets-7b4da097-273a-48d1-b231-c09bce11088f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.099672949s | |
Nov 3 06:57:06.289: INFO: Pod "pod-secrets-7b4da097-273a-48d1-b231-c09bce11088f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.121433664s | |
Nov 3 06:57:08.365: INFO: Pod "pod-secrets-7b4da097-273a-48d1-b231-c09bce11088f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.197980395s | |
Nov 3 06:57:10.377: INFO: Pod "pod-secrets-7b4da097-273a-48d1-b231-c09bce11088f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.209511934s | |
STEP: Saw pod success
Nov 3 06:57:10.377: INFO: Pod "pod-secrets-7b4da097-273a-48d1-b231-c09bce11088f" satisfied condition "success or failure" | |
Nov 3 06:57:10.428: INFO: Trying to get logs from node kind-worker2 pod pod-secrets-7b4da097-273a-48d1-b231-c09bce11088f container secret-volume-test: <nil> | |
STEP: delete the pod
Nov 3 06:57:10.576: INFO: Waiting for pod pod-secrets-7b4da097-273a-48d1-b231-c09bce11088f to disappear | |
Nov 3 06:57:10.600: INFO: Pod pod-secrets-7b4da097-273a-48d1-b231-c09bce11088f no longer exists | |
[AfterEach] [sig-storage] Secrets | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:10.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "secrets-915" for this suite.
• [SLOW TEST:16.733 seconds]
[sig-storage] Secrets
test/e2e/common/secrets_volume.go:34
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:688
------------------------------
SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode | |
test/e2e/storage/testsuites/base.go:99 | |
Nov 3 06:57:10.676: INFO: Only supported for providers [openstack] (not skeleton) | |
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:10.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: cinder]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Pre-provisioned PV (block volmode)] volumeMode
test/e2e/storage/testsuites/base.go:98
should not mount / map unused volumes in a pod [BeforeEach]
test/e2e/storage/testsuites/volumemode.go:334
Only supported for providers [openstack] (not skeleton)
test/e2e/storage/drivers/in_tree.go:1019
------------------------------
SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes | |
test/e2e/storage/testsuites/base.go:99 | |
Nov 3 06:57:10.684: INFO: Driver local doesn't support ext3 -- skipping | |
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:10.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: dir-link-bindmounted]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/storage/testsuites/base.go:98
should allow exec of files on the volume [BeforeEach]
test/e2e/storage/testsuites/volumes.go:191
Driver local doesn't support ext3 -- skipping
test/e2e/storage/testsuites/base.go:157
------------------------------
SSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client | |
test/e2e/framework/framework.go:147 | |
STEP: Creating a kubernetes client
Nov 3 06:57:10.675: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-1706
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client | |
test/e2e/kubectl/kubectl.go:269 | |
[It] should reject quota with invalid scopes | |
test/e2e/kubectl/kubectl.go:2086 | |
STEP: calling kubectl quota
Nov 3 06:57:11.032: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:33605 --kubeconfig=/root/.kube/kind-test-config create quota scopes --hard=hard=pods=1000000 --scopes=Foo --namespace=kubectl-1706' | |
Nov 3 06:57:11.179: INFO: rc: 1 | |
[AfterEach] [sig-cli] Kubectl client | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:11.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "kubectl-1706" for this suite.
•SS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI | |
test/e2e/framework/framework.go:147 | |
STEP: Creating a kubernetes client
Nov 3 06:56:40.508: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename projected
Nov 3 06:56:42.604: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Nov 3 06:56:42.717: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-9259
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI | |
test/e2e/common/projected_downwardapi.go:40 | |
[It] should update labels on modification [NodeConformance] [Conformance] | |
test/e2e/framework/framework.go:688 | |
STEP: Creating the pod
Nov 3 06:57:09.550: INFO: Successfully updated pod "labelsupdatea76f124b-e5f7-4d1a-8b3a-fe720a612f7a" | |
[AfterEach] [sig-storage] Projected downwardAPI | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:11.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "projected-9259" for this suite.
• [SLOW TEST:31.215 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/projected_downwardapi.go:34
should update labels on modification [NodeConformance] [Conformance]
test/e2e/framework/framework.go:688
------------------------------
SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode | |
test/e2e/storage/testsuites/base.go:99 | |
Nov 3 06:57:11.733: INFO: Driver csi-hostpath-v0 doesn't support PreprovisionedPV -- skipping | |
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:11.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] CSI Volumes
test/e2e/storage/utils/framework.go:23
[Driver: csi-hostpath-v0]
test/e2e/storage/csi_volumes.go:56
[Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
test/e2e/storage/testsuites/base.go:98
should not mount / map unused volumes in a pod [BeforeEach]
test/e2e/storage/testsuites/volumemode.go:334
Driver csi-hostpath-v0 doesn't support PreprovisionedPV -- skipping
test/e2e/storage/testsuites/base.go:152
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes | |
test/e2e/framework/framework.go:147 | |
STEP: Creating a kubernetes client
Nov 3 06:57:04.867: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-8329
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] | |
test/e2e/framework/framework.go:688 | |
STEP: Creating a pod to test emptydir 0644 on node default medium
Nov 3 06:57:05.061: INFO: Waiting up to 5m0s for pod "pod-f0f795ad-d0af-4bf2-8d1c-6d2dc574f465" in namespace "emptydir-8329" to be "success or failure" | |
Nov 3 06:57:05.080: INFO: Pod "pod-f0f795ad-d0af-4bf2-8d1c-6d2dc574f465": Phase="Pending", Reason="", readiness=false. Elapsed: 18.178861ms | |
Nov 3 06:57:07.110: INFO: Pod "pod-f0f795ad-d0af-4bf2-8d1c-6d2dc574f465": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048458602s | |
Nov 3 06:57:09.153: INFO: Pod "pod-f0f795ad-d0af-4bf2-8d1c-6d2dc574f465": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091147509s | |
Nov 3 06:57:11.222: INFO: Pod "pod-f0f795ad-d0af-4bf2-8d1c-6d2dc574f465": Phase="Pending", Reason="", readiness=false. Elapsed: 6.160366127s | |
Nov 3 06:57:13.234: INFO: Pod "pod-f0f795ad-d0af-4bf2-8d1c-6d2dc574f465": Phase="Pending", Reason="", readiness=false. Elapsed: 8.172244526s | |
Nov 3 06:57:15.238: INFO: Pod "pod-f0f795ad-d0af-4bf2-8d1c-6d2dc574f465": Phase="Pending", Reason="", readiness=false. Elapsed: 10.176483701s | |
Nov 3 06:57:17.242: INFO: Pod "pod-f0f795ad-d0af-4bf2-8d1c-6d2dc574f465": Phase="Pending", Reason="", readiness=false. Elapsed: 12.18084505s | |
Nov 3 06:57:19.248: INFO: Pod "pod-f0f795ad-d0af-4bf2-8d1c-6d2dc574f465": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.186666182s | |
STEP: Saw pod success
Nov 3 06:57:19.248: INFO: Pod "pod-f0f795ad-d0af-4bf2-8d1c-6d2dc574f465" satisfied condition "success or failure" | |
Nov 3 06:57:19.252: INFO: Trying to get logs from node kind-worker2 pod pod-f0f795ad-d0af-4bf2-8d1c-6d2dc574f465 container test-container: <nil> | |
STEP: delete the pod
Nov 3 06:57:19.274: INFO: Waiting for pod pod-f0f795ad-d0af-4bf2-8d1c-6d2dc574f465 to disappear | |
Nov 3 06:57:19.285: INFO: Pod pod-f0f795ad-d0af-4bf2-8d1c-6d2dc574f465 no longer exists | |
[AfterEach] [sig-storage] EmptyDir volumes | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:19.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "emptydir-8329" for this suite.
• [SLOW TEST:14.441 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/empty_dir.go:40
should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:688
------------------------------
SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode | |
test/e2e/storage/testsuites/base.go:99 | |
Nov 3 06:57:19.317: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) | |
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:19.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: gluster]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Pre-provisioned PV (block volmode)] volumeMode
test/e2e/storage/testsuites/base.go:98
should not mount / map unused volumes in a pod [BeforeEach]
test/e2e/storage/testsuites/volumemode.go:334
Only supported for node OS distro [gci ubuntu custom] (not debian)
test/e2e/storage/drivers/in_tree.go:258
------------------------------
SSSS
------------------------------
[BeforeEach] [sig-storage] Subpath | |
test/e2e/framework/framework.go:147 | |
STEP: Creating a kubernetes client
Nov 3 06:56:40.460: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename subpath
Nov 3 06:56:40.753: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Nov 3 06:56:40.795: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-245
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes | |
test/e2e/storage/subpath.go:37 | |
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] | |
test/e2e/framework/framework.go:688 | |
STEP: Creating pod pod-subpath-test-configmap-4695
STEP: Creating a pod to test atomic-volume-subpath
Nov 3 06:56:40.999: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-4695" in namespace "subpath-245" to be "success or failure" | |
Nov 3 06:56:41.010: INFO: Pod "pod-subpath-test-configmap-4695": Phase="Pending", Reason="", readiness=false. Elapsed: 11.685853ms | |
Nov 3 06:56:43.240: INFO: Pod "pod-subpath-test-configmap-4695": Phase="Pending", Reason="", readiness=false. Elapsed: 2.241434747s | |
Nov 3 06:56:45.269: INFO: Pod "pod-subpath-test-configmap-4695": Phase="Pending", Reason="", readiness=false. Elapsed: 4.270183632s | |
Nov 3 06:56:47.540: INFO: Pod "pod-subpath-test-configmap-4695": Phase="Pending", Reason="", readiness=false. Elapsed: 6.541001185s | |
Nov 3 06:56:49.544: INFO: Pod "pod-subpath-test-configmap-4695": Phase="Pending", Reason="", readiness=false. Elapsed: 8.544988374s | |
Nov 3 06:56:51.549: INFO: Pod "pod-subpath-test-configmap-4695": Phase="Running", Reason="", readiness=true. Elapsed: 10.549805924s | |
Nov 3 06:56:53.591: INFO: Pod "pod-subpath-test-configmap-4695": Phase="Running", Reason="", readiness=true. Elapsed: 12.59210983s | |
Nov 3 06:56:55.594: INFO: Pod "pod-subpath-test-configmap-4695": Phase="Running", Reason="", readiness=true. Elapsed: 14.594785507s | |
Nov 3 06:56:57.598: INFO: Pod "pod-subpath-test-configmap-4695": Phase="Running", Reason="", readiness=true. Elapsed: 16.599279288s | |
Nov 3 06:56:59.607: INFO: Pod "pod-subpath-test-configmap-4695": Phase="Running", Reason="", readiness=true. Elapsed: 18.60785328s | |
Nov 3 06:57:01.643: INFO: Pod "pod-subpath-test-configmap-4695": Phase="Running", Reason="", readiness=true. Elapsed: 20.644056475s | |
Nov 3 06:57:03.647: INFO: Pod "pod-subpath-test-configmap-4695": Phase="Running", Reason="", readiness=true. Elapsed: 22.647895065s | |
Nov 3 06:57:05.682: INFO: Pod "pod-subpath-test-configmap-4695": Phase="Running", Reason="", readiness=true. Elapsed: 24.683390987s | |
Nov 3 06:57:07.754: INFO: Pod "pod-subpath-test-configmap-4695": Phase="Running", Reason="", readiness=true. Elapsed: 26.755200537s | |
Nov 3 06:57:09.770: INFO: Pod "pod-subpath-test-configmap-4695": Phase="Running", Reason="", readiness=true. Elapsed: 28.770884607s | |
Nov 3 06:57:11.793: INFO: Pod "pod-subpath-test-configmap-4695": Phase="Running", Reason="", readiness=true. Elapsed: 30.794156215s | |
Nov 3 06:57:13.834: INFO: Pod "pod-subpath-test-configmap-4695": Phase="Running", Reason="", readiness=true. Elapsed: 32.835114781s | |
Nov 3 06:57:15.839: INFO: Pod "pod-subpath-test-configmap-4695": Phase="Running", Reason="", readiness=true. Elapsed: 34.840620764s | |
Nov 3 06:57:17.844: INFO: Pod "pod-subpath-test-configmap-4695": Phase="Running", Reason="", readiness=true. Elapsed: 36.845079426s | |
Nov 3 06:57:19.848: INFO: Pod "pod-subpath-test-configmap-4695": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.848920725s | |
STEP: Saw pod success
Nov 3 06:57:19.848: INFO: Pod "pod-subpath-test-configmap-4695" satisfied condition "success or failure" | |
Nov 3 06:57:19.853: INFO: Trying to get logs from node kind-worker2 pod pod-subpath-test-configmap-4695 container test-container-subpath-configmap-4695: <nil> | |
STEP: delete the pod
Nov 3 06:57:19.871: INFO: Waiting for pod pod-subpath-test-configmap-4695 to disappear | |
Nov 3 06:57:19.873: INFO: Pod pod-subpath-test-configmap-4695 no longer exists | |
STEP: Deleting pod pod-subpath-test-configmap-4695
Nov 3 06:57:19.873: INFO: Deleting pod "pod-subpath-test-configmap-4695" in namespace "subpath-245" | |
[AfterEach] [sig-storage] Subpath | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:19.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "subpath-245" for this suite.
• [SLOW TEST:39.425 seconds]
[sig-storage] Subpath
test/e2e/storage/utils/framework.go:23
Atomic writer volumes
test/e2e/storage/subpath.go:33
should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
test/e2e/framework/framework.go:688
------------------------------
S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes | |
test/e2e/storage/testsuites/base.go:99 | |
Nov 3 06:57:19.890: INFO: Only supported for providers [aws] (not skeleton) | |
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:19.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
S [SKIPPING] in Spec Setup (BeforeEach) [0.003 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: aws]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Inline-volume (default fs)] volumes
test/e2e/storage/testsuites/base.go:98
should allow exec of files on the volume [BeforeEach]
test/e2e/storage/testsuites/volumes.go:191
Only supported for providers [aws] (not skeleton)
test/e2e/storage/drivers/in_tree.go:1590
------------------------------
[BeforeEach] [sig-cli] Kubectl client | |
test/e2e/framework/framework.go:147 | |
STEP: Creating a kubernetes client
Nov 3 06:57:19.899: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-9109
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client | |
test/e2e/kubectl/kubectl.go:269 | |
[It] should check if cluster-info dump succeeds | |
test/e2e/kubectl/kubectl.go:1038 | |
STEP: running cluster-info dump
Nov 3 06:57:20.044: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:33605 --kubeconfig=/root/.kube/kind-test-config cluster-info dump' | |
Nov 3 06:57:20.903: INFO: stderr: "" | |
Nov 3 06:57:20.904: INFO: stdout: "{\n \"kind\": \"NodeList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"selfLink\": \"/api/v1/nodes\",\n \"resourceVersion\": \"2533\"\n },\n \"items\": [\n {\n \"metadata\": {\n \"name\": \"kind-control-plane\",\n \"selfLink\": \"/api/v1/nodes/kind-control-plane\",\n \"uid\": \"23a11f0f-cb4a-4387-9c1c-7a5ccad4b305\",\n \"resourceVersion\": \"1549\",\n \"creationTimestamp\": \"2019-11-03T06:54:57Z\",\n \"labels\": {\n \"beta.kubernetes.io/arch\": \"amd64\",\n \"beta.kubernetes.io/os\": \"linux\",\n \"kubernetes.io/arch\": \"amd64\",\n \"kubernetes.io/hostname\": \"kind-control-plane\",\n \"kubernetes.io/os\": \"linux\",\n \"node-role.kubernetes.io/master\": \"\"\n },\n \"annotations\": {\n \"kubeadm.alpha.kubernetes.io/cri-socket\": \"/run/containerd/containerd.sock\",\n \"node.alpha.kubernetes.io/ttl\": \"0\",\n \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n }\n },\n \"spec\": {\n \"podCIDR\": \"10.244.0.0/24\",\n \"podCIDRs\": [\n \"10.244.0.0/24\"\n ],\n \"taints\": [\n {\n \"key\": \"node-role.kubernetes.io/master\",\n \"effect\": \"NoSchedule\"\n }\n ]\n },\n \"status\": {\n \"capacity\": {\n \"cpu\": \"8\",\n \"ephemeral-storage\": \"253696108Ki\",\n \"hugepages-2Mi\": \"0\",\n \"memory\": \"53588956Ki\",\n \"pods\": \"110\"\n },\n \"allocatable\": {\n \"cpu\": \"8\",\n \"ephemeral-storage\": \"253696108Ki\",\n \"hugepages-2Mi\": \"0\",\n \"memory\": \"53588956Ki\",\n \"pods\": \"110\"\n },\n \"conditions\": [\n {\n \"type\": \"MemoryPressure\",\n \"status\": \"False\",\n \"lastHeartbeatTime\": \"2019-11-03T06:56:57Z\",\n \"lastTransitionTime\": \"2019-11-03T06:54:53Z\",\n \"reason\": \"KubeletHasSufficientMemory\",\n \"message\": \"kubelet has sufficient memory available\"\n },\n {\n \"type\": \"DiskPressure\",\n \"status\": \"False\",\n \"lastHeartbeatTime\": \"2019-11-03T06:56:57Z\",\n \"lastTransitionTime\": \"2019-11-03T06:54:53Z\",\n \"reason\": \"KubeletHasNoDiskPressure\",\n \"message\": 
\"kubelet has no disk pressure\"\n },\n {\n \"type\": \"PIDPressure\",\n \"status\": \"False\",\n \"lastHeartbeatTime\": \"2019-11-03T06:56:57Z\",\n \"lastTransitionTime\": \"2019-11-03T06:54:53Z\",\n \"reason\": \"KubeletHasSufficientPID\",\n \"message\": \"kubelet has sufficient PID available\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastHeartbeatTime\": \"2019-11-03T06:56:57Z\",\n \"lastTransitionTime\": \"2019-11-03T06:55:57Z\",\n \"reason\": \"KubeletReady\",\n \"message\": \"kubelet is posting ready status\"\n }\n ],\n \"addresses\": [\n {\n \"type\": \"InternalIP\",\n \"address\": \"172.17.0.3\"\n },\n {\n \"type\": \"Hostname\",\n \"address\": \"kind-control-plane\"\n }\n ],\n \"daemonEndpoints\": {\n \"kubeletEndpoint\": {\n \"Port\": 10250\n }\n },\n \"nodeInfo\": {\n \"machineID\": \"925ae34057a64693bb5cf785faccd5d6\",\n \"systemUUID\": \"89f994e4-fb3a-40b0-bbfd-11daf7e98946\",\n \"bootID\": \"afad25ed-e88e-4711-8228-088061582e5c\",\n \"kernelVersion\": \"4.14.137+\",\n \"osImage\": \"Ubuntu Eoan Ermine (development branch)\",\n \"containerRuntimeVersion\": \"containerd://1.3.0-20-g7af311b4\",\n \"kubeletVersion\": \"v1.18.0-alpha.0.178+0c66e64b140011\",\n \"kubeProxyVersion\": \"v1.18.0-alpha.0.178+0c66e64b140011\",\n \"operatingSystem\": \"linux\",\n \"architecture\": \"amd64\"\n },\n \"images\": [\n {\n \"names\": [\n \"k8s.gcr.io/kube-apiserver:v1.18.0-alpha.0.178_0c66e64b140011\"\n ],\n \"sizeBytes\": 300389019\n },\n {\n \"names\": [\n \"k8s.gcr.io/etcd:3.4.3-0\"\n ],\n \"sizeBytes\": 289997247\n },\n {\n \"names\": [\n \"k8s.gcr.io/kube-controller-manager:v1.18.0-alpha.0.178_0c66e64b140011\"\n ],\n \"sizeBytes\": 185496228\n },\n {\n \"names\": [\n \"k8s.gcr.io/kube-scheduler:v1.18.0-alpha.0.178_0c66e64b140011\"\n ],\n \"sizeBytes\": 96285338\n },\n {\n \"names\": [\n \"k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.178_0c66e64b140011\"\n ],\n \"sizeBytes\": 94115360\n },\n {\n \"names\": [\n \"k8s.gcr.io/coredns:1.6.2\"\n ],\n 
\"sizeBytes\": 44229087\n },\n {\n \"names\": [\n \"docker.io/kindest/kindnetd@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555\"\n ],\n \"sizeBytes\": 32397572\n },\n {\n \"names\": [\n \"k8s.gcr.io/pause:3.1\"\n ],\n \"sizeBytes\": 746479\n }\n ]\n }\n },\n {\n \"metadata\": {\n \"name\": \"kind-worker\",\n \"selfLink\": \"/api/v1/nodes/kind-worker\",\n \"uid\": \"c5cf28bd-4520-4c92-95da-3a6d0e344375\",\n \"resourceVersion\": \"2531\",\n \"creationTimestamp\": \"2019-11-03T06:55:33Z\",\n \"labels\": {\n \"beta.kubernetes.io/arch\": \"amd64\",\n \"beta.kubernetes.io/os\": \"linux\",\n \"kubelet_cleanup\": \"true\",\n \"kubernetes.io/arch\": \"amd64\",\n \"kubernetes.io/hostname\": \"kind-worker\",\n \"kubernetes.io/os\": \"linux\"\n },\n \"annotations\": {\n \"csi.volume.kubernetes.io/nodeid\": \"{\\\"csi-mock-csi-mock-volumes-2275\\\":\\\"csi-mock-csi-mock-volumes-2275\\\"}\",\n \"kubeadm.alpha.kubernetes.io/cri-socket\": \"/run/containerd/containerd.sock\",\n \"node.alpha.kubernetes.io/ttl\": \"0\",\n \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n }\n },\n \"spec\": {\n \"podCIDR\": \"10.244.2.0/24\",\n \"podCIDRs\": [\n \"10.244.2.0/24\"\n ]\n },\n \"status\": {\n \"capacity\": {\n \"cpu\": \"8\",\n \"ephemeral-storage\": \"253696108Ki\",\n \"hugepages-2Mi\": \"0\",\n \"memory\": \"53588956Ki\",\n \"pods\": \"110\"\n },\n \"allocatable\": {\n \"cpu\": \"8\",\n \"ephemeral-storage\": \"253696108Ki\",\n \"hugepages-2Mi\": \"0\",\n \"memory\": \"53588956Ki\",\n \"pods\": \"110\"\n },\n \"conditions\": [\n {\n \"type\": \"MemoryPressure\",\n \"status\": \"False\",\n \"lastHeartbeatTime\": \"2019-11-03T06:56:53Z\",\n \"lastTransitionTime\": \"2019-11-03T06:55:33Z\",\n \"reason\": \"KubeletHasSufficientMemory\",\n \"message\": \"kubelet has sufficient memory available\"\n },\n {\n \"type\": \"DiskPressure\",\n \"status\": \"False\",\n \"lastHeartbeatTime\": \"2019-11-03T06:56:53Z\",\n \"lastTransitionTime\": 
\"2019-11-03T06:55:33Z\",\n \"reason\": \"KubeletHasNoDiskPressure\",\n \"message\": \"kubelet has no disk pressure\"\n },\n {\n \"type\": \"PIDPressure\",\n \"status\": \"False\",\n \"lastHeartbeatTime\": \"2019-11-03T06:56:53Z\",\n \"lastTransitionTime\": \"2019-11-03T06:55:33Z\",\n \"reason\": \"KubeletHasSufficientPID\",\n \"message\": \"kubelet has sufficient PID available\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastHeartbeatTime\": \"2019-11-03T06:56:53Z\",\n \"lastTransitionTime\": \"2019-11-03T06:56:13Z\",\n \"reason\": \"KubeletReady\",\n \"message\": \"kubelet is posting ready status\"\n }\n ],\n \"addresses\": [\n {\n \"type\": \"InternalIP\",\n \"address\": \"172.17.0.4\"\n },\n {\n \"type\": \"Hostname\",\n \"address\": \"kind-worker\"\n }\n ],\n \"daemonEndpoints\": {\n \"kubeletEndpoint\": {\n \"Port\": 10250\n }\n },\n \"nodeInfo\": {\n \"machineID\": \"c8e64b38ef974ce9b0565f552bf843ad\",\n \"systemUUID\": \"91f4f4d7-fd98-43c4-bfd6-78a23240d301\",\n \"bootID\": \"afad25ed-e88e-4711-8228-088061582e5c\",\n \"kernelVersion\": \"4.14.137+\",\n \"osImage\": \"Ubuntu Eoan Ermine (development branch)\",\n \"containerRuntimeVersion\": \"containerd://1.3.0-20-g7af311b4\",\n \"kubeletVersion\": \"v1.18.0-alpha.0.178+0c66e64b140011\",\n \"kubeProxyVersion\": \"v1.18.0-alpha.0.178+0c66e64b140011\",\n \"operatingSystem\": \"linux\",\n \"architecture\": \"amd64\"\n },\n \"images\": [\n {\n \"names\": [\n \"k8s.gcr.io/kube-apiserver:v1.18.0-alpha.0.178_0c66e64b140011\"\n ],\n \"sizeBytes\": 300389019\n },\n {\n \"names\": [\n \"k8s.gcr.io/etcd:3.4.3-0\"\n ],\n \"sizeBytes\": 289997247\n },\n {\n \"names\": [\n \"k8s.gcr.io/kube-controller-manager:v1.18.0-alpha.0.178_0c66e64b140011\"\n ],\n \"sizeBytes\": 185496228\n },\n {\n \"names\": [\n \"k8s.gcr.io/kube-scheduler:v1.18.0-alpha.0.178_0c66e64b140011\"\n ],\n \"sizeBytes\": 96285338\n },\n {\n \"names\": [\n \"k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.178_0c66e64b140011\"\n ],\n \"sizeBytes\": 
94115360\n },\n {\n \"names\": [\n \"k8s.gcr.io/coredns:1.6.2\"\n ],\n \"sizeBytes\": 44229087\n },\n {\n \"names\": [\n \"docker.io/library/httpd:2.4.38-alpine\"\n ],\n \"sizeBytes\": 40765017\n },\n {\n \"names\": [\n \"docker.io/kindest/kindnetd@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555\"\n ],\n \"sizeBytes\": 32397572\n },\n {\n \"names\": [\n \"gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727\",\n \"gcr.io/kubernetes-e2e-test-images/agnhost:2.6\"\n ],\n \"sizeBytes\": 18352698\n },\n {\n \"names\": [\n \"k8s.gcr.io/pause:3.1\"\n ],\n \"sizeBytes\": 746479\n }\n ]\n }\n },\n {\n \"metadata\": {\n \"name\": \"kind-worker2\",\n \"selfLink\": \"/api/v1/nodes/kind-worker2\",\n \"uid\": \"322cad9e-96cc-464c-9003-9b9c8297934c\",\n \"resourceVersion\": \"1997\",\n \"creationTimestamp\": \"2019-11-03T06:55:33Z\",\n \"labels\": {\n \"beta.kubernetes.io/arch\": \"amd64\",\n \"beta.kubernetes.io/os\": \"linux\",\n \"kubelet_cleanup\": \"true\",\n \"kubernetes.io/arch\": \"amd64\",\n \"kubernetes.io/hostname\": \"kind-worker2\",\n \"kubernetes.io/os\": \"linux\"\n },\n \"annotations\": {\n \"kubeadm.alpha.kubernetes.io/cri-socket\": \"/run/containerd/containerd.sock\",\n \"node.alpha.kubernetes.io/ttl\": \"0\",\n \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n }\n },\n \"spec\": {\n \"podCIDR\": \"10.244.1.0/24\",\n \"podCIDRs\": [\n \"10.244.1.0/24\"\n ]\n },\n \"status\": {\n \"capacity\": {\n \"cpu\": \"8\",\n \"ephemeral-storage\": \"253696108Ki\",\n \"hugepages-2Mi\": \"0\",\n \"memory\": \"53588956Ki\",\n \"pods\": \"110\"\n },\n \"allocatable\": {\n \"cpu\": \"8\",\n \"ephemeral-storage\": \"253696108Ki\",\n \"hugepages-2Mi\": \"0\",\n \"memory\": \"53588956Ki\",\n \"pods\": \"110\"\n },\n \"conditions\": [\n {\n \"type\": \"MemoryPressure\",\n \"status\": \"False\",\n \"lastHeartbeatTime\": \"2019-11-03T06:56:53Z\",\n \"lastTransitionTime\": 
\"2019-11-03T06:55:33Z\",\n \"reason\": \"KubeletHasSufficientMemory\",\n \"message\": \"kubelet has sufficient memory available\"\n },\n {\n \"type\": \"DiskPressure\",\n \"status\": \"False\",\n \"lastHeartbeatTime\": \"2019-11-03T06:56:53Z\",\n \"lastTransitionTime\": \"2019-11-03T06:55:33Z\",\n \"reason\": \"KubeletHasNoDiskPressure\",\n \"message\": \"kubelet has no disk pressure\"\n },\n {\n \"type\": \"PIDPressure\",\n \"status\": \"False\",\n \"lastHeartbeatTime\": \"2019-11-03T06:56:53Z\",\n \"lastTransitionTime\": \"2019-11-03T06:55:33Z\",\n \"reason\": \"KubeletHasSufficientPID\",\n \"message\": \"kubelet has sufficient PID available\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastHeartbeatTime\": \"2019-11-03T06:56:53Z\",\n \"lastTransitionTime\": \"2019-11-03T06:56:13Z\",\n \"reason\": \"KubeletReady\",\n \"message\": \"kubelet is posting ready status\"\n }\n ],\n \"addresses\": [\n {\n \"type\": \"InternalIP\",\n \"address\": \"172.17.0.2\"\n },\n {\n \"type\": \"Hostname\",\n \"address\": \"kind-worker2\"\n }\n ],\n \"daemonEndpoints\": {\n \"kubeletEndpoint\": {\n \"Port\": 10250\n }\n },\n \"nodeInfo\": {\n \"machineID\": \"167878653d264e699a398d3def849668\",\n \"systemUUID\": \"56565a32-14a2-4281-9621-3fbf5342e132\",\n \"bootID\": \"afad25ed-e88e-4711-8228-088061582e5c\",\n \"kernelVersion\": \"4.14.137+\",\n \"osImage\": \"Ubuntu Eoan Ermine (development branch)\",\n \"containerRuntimeVersion\": \"containerd://1.3.0-20-g7af311b4\",\n \"kubeletVersion\": \"v1.18.0-alpha.0.178+0c66e64b140011\",\n \"kubeProxyVersion\": \"v1.18.0-alpha.0.178+0c66e64b140011\",\n \"operatingSystem\": \"linux\",\n \"architecture\": \"amd64\"\n },\n \"images\": [\n {\n \"names\": [\n \"k8s.gcr.io/kube-apiserver:v1.18.0-alpha.0.178_0c66e64b140011\"\n ],\n \"sizeBytes\": 300389019\n },\n {\n \"names\": [\n \"k8s.gcr.io/etcd:3.4.3-0\"\n ],\n \"sizeBytes\": 289997247\n },\n {\n \"names\": [\n 
\"k8s.gcr.io/kube-controller-manager:v1.18.0-alpha.0.178_0c66e64b140011\"\n ],\n \"sizeBytes\": 185496228\n },\n {\n \"names\": [\n \"k8s.gcr.io/kube-scheduler:v1.18.0-alpha.0.178_0c66e64b140011\"\n ],\n \"sizeBytes\": 96285338\n },\n {\n \"names\": [\n \"k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.178_0c66e64b140011\"\n ],\n \"sizeBytes\": 94115360\n },\n {\n \"names\": [\n \"k8s.gcr.io/coredns:1.6.2\"\n ],\n \"sizeBytes\": 44229087\n },\n {\n \"names\": [\n \"docker.io/kindest/kindnetd@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555\"\n ],\n \"sizeBytes\": 32397572\n },\n {\n \"names\": [\n \"gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727\",\n \"gcr.io/kubernetes-e2e-test-images/agnhost:2.6\"\n ],\n \"sizeBytes\": 18352698\n },\n {\n \"names\": [\n \"docker.io/library/nginx:1.14-alpine\"\n ],\n \"sizeBytes\": 6978806\n },\n {\n \"names\": [\n \"k8s.gcr.io/pause:3.1\"\n ],\n \"sizeBytes\": 746479\n },\n {\n \"names\": [\n \"gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2\",\n \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\"\n ],\n \"sizeBytes\": 599341\n },\n {\n \"names\": [\n \"gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d\",\n \"gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0\"\n ],\n \"sizeBytes\": 539309\n }\n ]\n }\n }\n ]\n}\n{\n \"kind\": \"EventList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"selfLink\": \"/api/v1/namespaces/kube-system/events\",\n \"resourceVersion\": \"2533\"\n },\n \"items\": [\n {\n \"metadata\": {\n \"name\": \"coredns-5644d7b6d9-58dzr.15d394a18136dd89\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-5644d7b6d9-58dzr.15d394a18136dd89\",\n \"uid\": \"95489f0c-0e64-4297-8ddd-a755114bc591\",\n \"resourceVersion\": \"395\",\n \"creationTimestamp\": 
\"2019-11-03T06:55:16Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"coredns-5644d7b6d9-58dzr\",\n \"uid\": \"cf78ce08-e950-4fdc-b782-47f893532d2c\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"370\"\n },\n \"reason\": \"FailedScheduling\",\n \"message\": \"0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.\",\n \"source\": {\n \"component\": \"default-scheduler\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:16Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:16Z\",\n \"count\": 2,\n \"type\": \"Warning\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"coredns-5644d7b6d9-58dzr.15d394a574d26488\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-5644d7b6d9-58dzr.15d394a574d26488\",\n \"uid\": \"e1bf233a-4a6e-454b-8809-d006f06cc12c\",\n \"resourceVersion\": \"461\",\n \"creationTimestamp\": \"2019-11-03T06:55:33Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"coredns-5644d7b6d9-58dzr\",\n \"uid\": \"cf78ce08-e950-4fdc-b782-47f893532d2c\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"387\"\n },\n \"reason\": \"FailedScheduling\",\n \"message\": \"0/2 nodes are available: 2 node(s) had taints that the pod didn't tolerate.\",\n \"source\": {\n \"component\": \"default-scheduler\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:33Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:33Z\",\n \"count\": 1,\n \"type\": \"Warning\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"coredns-5644d7b6d9-58dzr.15d394a576af7282\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-5644d7b6d9-58dzr.15d394a576af7282\",\n \"uid\": \"f36b08f4-571a-467a-a085-de11d7509901\",\n \"resourceVersion\": \"583\",\n 
\"creationTimestamp\": \"2019-11-03T06:55:33Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"coredns-5644d7b6d9-58dzr\",\n \"uid\": \"cf78ce08-e950-4fdc-b782-47f893532d2c\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"462\"\n },\n \"reason\": \"FailedScheduling\",\n \"message\": \"0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate.\",\n \"source\": {\n \"component\": \"default-scheduler\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:33Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:57Z\",\n \"count\": 3,\n \"type\": \"Warning\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"coredns-5644d7b6d9-58dzr.15d394ab8d7d3807\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-5644d7b6d9-58dzr.15d394ab8d7d3807\",\n \"uid\": \"7ac56864-3696-4d18-b5ba-69399afa302e\",\n \"resourceVersion\": \"598\",\n \"creationTimestamp\": \"2019-11-03T06:55:59Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"coredns-5644d7b6d9-58dzr\",\n \"uid\": \"cf78ce08-e950-4fdc-b782-47f893532d2c\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"478\"\n },\n \"reason\": \"Scheduled\",\n \"message\": \"Successfully assigned kube-system/coredns-5644d7b6d9-58dzr to kind-control-plane\",\n \"source\": {\n \"component\": \"default-scheduler\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:59Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:59Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"coredns-5644d7b6d9-58dzr.15d394abb1b3b35b\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-5644d7b6d9-58dzr.15d394abb1b3b35b\",\n \"uid\": \"1ca3fc5e-68ce-4a27-b7c7-94f50a8c9d08\",\n 
\"resourceVersion\": \"602\",\n \"creationTimestamp\": \"2019-11-03T06:56:00Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"coredns-5644d7b6d9-58dzr\",\n \"uid\": \"cf78ce08-e950-4fdc-b782-47f893532d2c\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"597\",\n \"fieldPath\": \"spec.containers{coredns}\"\n },\n \"reason\": \"Pulled\",\n \"message\": \"Container image \\\"k8s.gcr.io/coredns:1.6.2\\\" already present on machine\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"kind-control-plane\"\n },\n \"firstTimestamp\": \"2019-11-03T06:56:00Z\",\n \"lastTimestamp\": \"2019-11-03T06:56:00Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"coredns-5644d7b6d9-58dzr.15d394abb603da10\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-5644d7b6d9-58dzr.15d394abb603da10\",\n \"uid\": \"4c4d8ae7-a44b-4756-ad73-aaf3d6c1e987\",\n \"resourceVersion\": \"605\",\n \"creationTimestamp\": \"2019-11-03T06:56:00Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"coredns-5644d7b6d9-58dzr\",\n \"uid\": \"cf78ce08-e950-4fdc-b782-47f893532d2c\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"597\",\n \"fieldPath\": \"spec.containers{coredns}\"\n },\n \"reason\": \"Created\",\n \"message\": \"Created container coredns\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"kind-control-plane\"\n },\n \"firstTimestamp\": \"2019-11-03T06:56:00Z\",\n \"lastTimestamp\": \"2019-11-03T06:56:00Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"coredns-5644d7b6d9-58dzr.15d394abc0c33dcb\",\n \"namespace\": \"kube-system\",\n \"selfLink\": 
\"/api/v1/namespaces/kube-system/events/coredns-5644d7b6d9-58dzr.15d394abc0c33dcb\",\n \"uid\": \"f78fad12-97ef-4b47-8aae-d10862d76f15\",\n \"resourceVersion\": \"608\",\n \"creationTimestamp\": \"2019-11-03T06:56:00Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"coredns-5644d7b6d9-58dzr\",\n \"uid\": \"cf78ce08-e950-4fdc-b782-47f893532d2c\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"597\",\n \"fieldPath\": \"spec.containers{coredns}\"\n },\n \"reason\": \"Started\",\n \"message\": \"Started container coredns\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"kind-control-plane\"\n },\n \"firstTimestamp\": \"2019-11-03T06:56:00Z\",\n \"lastTimestamp\": \"2019-11-03T06:56:00Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"coredns-5644d7b6d9-j9fqd.15d394a17e36be12\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-5644d7b6d9-j9fqd.15d394a17e36be12\",\n \"uid\": \"d9342f4b-166c-4f4a-a527-70bc2edab4cb\",\n \"resourceVersion\": \"388\",\n \"creationTimestamp\": \"2019-11-03T06:55:16Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"coredns-5644d7b6d9-j9fqd\",\n \"uid\": \"aa121b8c-53ce-4baa-bfe4-44f08dec5504\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"360\"\n },\n \"reason\": \"FailedScheduling\",\n \"message\": \"0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.\",\n \"source\": {\n \"component\": \"default-scheduler\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:16Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:16Z\",\n \"count\": 2,\n \"type\": \"Warning\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"coredns-5644d7b6d9-j9fqd.15d394a573bf02fb\",\n \"namespace\": 
\"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-5644d7b6d9-j9fqd.15d394a573bf02fb\",\n \"uid\": \"21448870-4515-4f6c-8d70-93fe4421187d\",\n \"resourceVersion\": \"453\",\n \"creationTimestamp\": \"2019-11-03T06:55:33Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"coredns-5644d7b6d9-j9fqd\",\n \"uid\": \"aa121b8c-53ce-4baa-bfe4-44f08dec5504\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"369\"\n },\n \"reason\": \"FailedScheduling\",\n \"message\": \"0/2 nodes are available: 2 node(s) had taints that the pod didn't tolerate.\",\n \"source\": {\n \"component\": \"default-scheduler\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:33Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:33Z\",\n \"count\": 1,\n \"type\": \"Warning\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"coredns-5644d7b6d9-j9fqd.15d394a5ba90601a\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-5644d7b6d9-j9fqd.15d394a5ba90601a\",\n \"uid\": \"faf55c2c-21b1-4060-a4e3-168db9bea065\",\n \"resourceVersion\": \"503\",\n \"creationTimestamp\": \"2019-11-03T06:55:34Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"coredns-5644d7b6d9-j9fqd\",\n \"uid\": \"aa121b8c-53ce-4baa-bfe4-44f08dec5504\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"452\"\n },\n \"reason\": \"FailedScheduling\",\n \"message\": \"0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate.\",\n \"source\": {\n \"component\": \"default-scheduler\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:34Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:34Z\",\n \"count\": 2,\n \"type\": \"Warning\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": 
\"coredns-5644d7b6d9-j9fqd.15d394ab1d9c3210\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-5644d7b6d9-j9fqd.15d394ab1d9c3210\",\n \"uid\": \"385f410e-215d-48b0-b945-f649492783f5\",\n \"resourceVersion\": \"585\",\n \"creationTimestamp\": \"2019-11-03T06:55:57Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"coredns-5644d7b6d9-j9fqd\",\n \"uid\": \"aa121b8c-53ce-4baa-bfe4-44f08dec5504\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"501\"\n },\n \"reason\": \"Scheduled\",\n \"message\": \"Successfully assigned kube-system/coredns-5644d7b6d9-j9fqd to kind-control-plane\",\n \"source\": {\n \"component\": \"default-scheduler\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:57Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:57Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"coredns-5644d7b6d9-j9fqd.15d394ab44b83b69\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-5644d7b6d9-j9fqd.15d394ab44b83b69\",\n \"uid\": \"3a56addc-3937-48ff-b49b-67e442f9062a\",\n \"resourceVersion\": \"591\",\n \"creationTimestamp\": \"2019-11-03T06:55:58Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"coredns-5644d7b6d9-j9fqd\",\n \"uid\": \"aa121b8c-53ce-4baa-bfe4-44f08dec5504\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"584\",\n \"fieldPath\": \"spec.containers{coredns}\"\n },\n \"reason\": \"Pulled\",\n \"message\": \"Container image \\\"k8s.gcr.io/coredns:1.6.2\\\" already present on machine\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"kind-control-plane\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:58Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:58Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n 
\"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"coredns-5644d7b6d9-j9fqd.15d394ab673573bd\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-5644d7b6d9-j9fqd.15d394ab673573bd\",\n \"uid\": \"881a7a01-b5f9-4487-b909-c4a41a87477b\",\n \"resourceVersion\": \"593\",\n \"creationTimestamp\": \"2019-11-03T06:55:58Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"coredns-5644d7b6d9-j9fqd\",\n \"uid\": \"aa121b8c-53ce-4baa-bfe4-44f08dec5504\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"584\",\n \"fieldPath\": \"spec.containers{coredns}\"\n },\n \"reason\": \"Created\",\n \"message\": \"Created container coredns\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"kind-control-plane\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:58Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:58Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"coredns-5644d7b6d9-j9fqd.15d394ab7195f259\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-5644d7b6d9-j9fqd.15d394ab7195f259\",\n \"uid\": \"cc110a80-b57f-42aa-a2b8-ae442d9e1f53\",\n \"resourceVersion\": \"594\",\n \"creationTimestamp\": \"2019-11-03T06:55:58Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"coredns-5644d7b6d9-j9fqd\",\n \"uid\": \"aa121b8c-53ce-4baa-bfe4-44f08dec5504\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"584\",\n \"fieldPath\": \"spec.containers{coredns}\"\n },\n \"reason\": \"Started\",\n \"message\": \"Started container coredns\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"kind-control-plane\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:58Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:58Z\",\n \"count\": 1,\n 
\"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"coredns-5644d7b6d9.15d394a17dfd7e3e\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-5644d7b6d9.15d394a17dfd7e3e\",\n \"uid\": \"0bba165c-3b3c-4376-8799-fa209a01238c\",\n \"resourceVersion\": \"364\",\n \"creationTimestamp\": \"2019-11-03T06:55:16Z\"\n },\n \"involvedObject\": {\n \"kind\": \"ReplicaSet\",\n \"namespace\": \"kube-system\",\n \"name\": \"coredns-5644d7b6d9\",\n \"uid\": \"00f25722-b999-4b82-a73f-0c99be5b1e9d\",\n \"apiVersion\": \"apps/v1\",\n \"resourceVersion\": \"351\"\n },\n \"reason\": \"SuccessfulCreate\",\n \"message\": \"Created pod: coredns-5644d7b6d9-j9fqd\",\n \"source\": {\n \"component\": \"replicaset-controller\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:16Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:16Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"coredns-5644d7b6d9.15d394a17fc33954\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-5644d7b6d9.15d394a17fc33954\",\n \"uid\": \"5cfac185-c00a-4ed1-acb0-c27899a9bcde\",\n \"resourceVersion\": \"380\",\n \"creationTimestamp\": \"2019-11-03T06:55:16Z\"\n },\n \"involvedObject\": {\n \"kind\": \"ReplicaSet\",\n \"namespace\": \"kube-system\",\n \"name\": \"coredns-5644d7b6d9\",\n \"uid\": \"00f25722-b999-4b82-a73f-0c99be5b1e9d\",\n \"apiVersion\": \"apps/v1\",\n \"resourceVersion\": \"351\"\n },\n \"reason\": \"SuccessfulCreate\",\n \"message\": \"Created pod: coredns-5644d7b6d9-58dzr\",\n \"source\": {\n \"component\": \"replicaset-controller\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:16Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:16Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n 
\"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"coredns.15d394a17c898ed3\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns.15d394a17c898ed3\",\n \"uid\": \"316521dd-35cf-44e8-881f-86e30c4c8c1b\",\n \"resourceVersion\": \"359\",\n \"creationTimestamp\": \"2019-11-03T06:55:16Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Deployment\",\n \"namespace\": \"kube-system\",\n \"name\": \"coredns\",\n \"uid\": \"af23a4c9-64e5-408e-8737-ca144be79102\",\n \"apiVersion\": \"apps/v1\",\n \"resourceVersion\": \"192\"\n },\n \"reason\": \"ScalingReplicaSet\",\n \"message\": \"Scaled up replica set coredns-5644d7b6d9 to 2\",\n \"source\": {\n \"component\": \"deployment-controller\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:16Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:16Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"etcd-kind-control-plane.15d3949aeecaa9cc\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/etcd-kind-control-plane.15d3949aeecaa9cc\",\n \"uid\": \"0339732f-ef2a-40d9-a643-812a640b44af\",\n \"resourceVersion\": \"219\",\n \"creationTimestamp\": \"2019-11-03T06:55:02Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"etcd-kind-control-plane\",\n \"uid\": \"28ba3ba0264772641c791ff01a5eecff\",\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"spec.containers{etcd}\"\n },\n \"reason\": \"Pulled\",\n \"message\": \"Container image \\\"k8s.gcr.io/etcd:3.4.3-0\\\" already present on machine\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"kind-control-plane\"\n },\n \"firstTimestamp\": \"2019-11-03T06:54:48Z\",\n \"lastTimestamp\": \"2019-11-03T06:54:48Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": 
\"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"etcd-kind-control-plane.15d3949bfe545e34\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/etcd-kind-control-plane.15d3949bfe545e34\",\n \"uid\": \"8011cac4-1bcc-4ce7-9115-82bb709f7e7e\",\n \"resourceVersion\": \"240\",\n \"creationTimestamp\": \"2019-11-03T06:55:04Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"etcd-kind-control-plane\",\n \"uid\": \"28ba3ba0264772641c791ff01a5eecff\",\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"spec.containers{etcd}\"\n },\n \"reason\": \"Created\",\n \"message\": \"Created container etcd\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"kind-control-plane\"\n },\n \"firstTimestamp\": \"2019-11-03T06:54:52Z\",\n \"lastTimestamp\": \"2019-11-03T06:54:52Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"etcd-kind-control-plane.15d3949c0bf75050\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/etcd-kind-control-plane.15d3949c0bf75050\",\n \"uid\": \"b44e2546-8632-4441-b67c-003dccb94aea\",\n \"resourceVersion\": \"241\",\n \"creationTimestamp\": \"2019-11-03T06:55:04Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"etcd-kind-control-plane\",\n \"uid\": \"28ba3ba0264772641c791ff01a5eecff\",\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"spec.containers{etcd}\"\n },\n \"reason\": \"Started\",\n \"message\": \"Started container etcd\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"kind-control-plane\"\n },\n \"firstTimestamp\": \"2019-11-03T06:54:52Z\",\n \"lastTimestamp\": \"2019-11-03T06:54:52Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n 
{\n \"metadata\": {\n \"name\": \"kindnet-9g7zl.15d394a578cec82d\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-9g7zl.15d394a578cec82d\",\n \"uid\": \"ba343bea-bd30-4ab0-9ec1-b2a60c009361\",\n \"resourceVersion\": \"487\",\n \"creationTimestamp\": \"2019-11-03T06:55:33Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"kindnet-9g7zl\",\n \"uid\": \"ff1980f2-2dbd-4e95-b6ac-b6f3e4922a4e\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"457\"\n },\n \"reason\": \"Scheduled\",\n \"message\": \"Successfully assigned kube-system/kindnet-9g7zl to kind-worker2\",\n \"source\": {\n \"component\": \"default-scheduler\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:33Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:33Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kindnet-9g7zl.15d394a5a5e394f5\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-9g7zl.15d394a5a5e394f5\",\n \"uid\": \"69142c97-5313-40ed-8729-bedfeed378b0\",\n \"resourceVersion\": \"499\",\n \"creationTimestamp\": \"2019-11-03T06:55:34Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"kindnet-9g7zl\",\n \"uid\": \"ff1980f2-2dbd-4e95-b6ac-b6f3e4922a4e\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"485\",\n \"fieldPath\": \"spec.containers{kindnet-cni}\"\n },\n \"reason\": \"Pulling\",\n \"message\": \"Pulling image \\\"kindest/kindnetd:0.5.3@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555\\\"\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"kind-worker2\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:34Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:34Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": 
\"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kindnet-9g7zl.15d394a6506cc4b1\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-9g7zl.15d394a6506cc4b1\",\n \"uid\": \"bb9da24e-6057-4be9-9230-669edef48d18\",\n \"resourceVersion\": \"518\",\n \"creationTimestamp\": \"2019-11-03T06:55:36Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"kindnet-9g7zl\",\n \"uid\": \"ff1980f2-2dbd-4e95-b6ac-b6f3e4922a4e\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"485\",\n \"fieldPath\": \"spec.containers{kindnet-cni}\"\n },\n \"reason\": \"Pulled\",\n \"message\": \"Successfully pulled image \\\"kindest/kindnetd:0.5.3@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555\\\"\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"kind-worker2\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:36Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:36Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kindnet-9g7zl.15d394a654ef85c9\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-9g7zl.15d394a654ef85c9\",\n \"uid\": \"9aa6eba6-a9ba-4d51-8018-193c9ced0196\",\n \"resourceVersion\": \"521\",\n \"creationTimestamp\": \"2019-11-03T06:55:36Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"kindnet-9g7zl\",\n \"uid\": \"ff1980f2-2dbd-4e95-b6ac-b6f3e4922a4e\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"485\",\n \"fieldPath\": \"spec.containers{kindnet-cni}\"\n },\n \"reason\": \"Created\",\n \"message\": \"Created container kindnet-cni\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"kind-worker2\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:36Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:36Z\",\n 
\"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kindnet-9g7zl.15d394a66b9dae08\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-9g7zl.15d394a66b9dae08\",\n \"uid\": \"0ca78416-11cf-4eb8-bbea-95c52e6d558c\",\n \"resourceVersion\": \"522\",\n \"creationTimestamp\": \"2019-11-03T06:55:37Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"kindnet-9g7zl\",\n \"uid\": \"ff1980f2-2dbd-4e95-b6ac-b6f3e4922a4e\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"485\",\n \"fieldPath\": \"spec.containers{kindnet-cni}\"\n },\n \"reason\": \"Started\",\n \"message\": \"Started container kindnet-cni\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"kind-worker2\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:37Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:37Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kindnet-c744w.15d394a17ca85137\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-c744w.15d394a17ca85137\",\n \"uid\": \"b73edcc1-c71a-48da-9fb4-5b9c395dd557\",\n \"resourceVersion\": \"361\",\n \"creationTimestamp\": \"2019-11-03T06:55:16Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"kindnet-c744w\",\n \"uid\": \"35ef4026-034e-4ef5-883d-43692c07aeb9\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"344\"\n },\n \"reason\": \"Scheduled\",\n \"message\": \"Successfully assigned kube-system/kindnet-c744w to kind-control-plane\",\n \"source\": {\n \"component\": \"default-scheduler\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:16Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:16Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n 
\"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kindnet-c744w.15d394a1adb52e82\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-c744w.15d394a1adb52e82\",\n \"uid\": \"e2fdcef6-23a4-47a0-a9fe-e9a520af8e21\",\n \"resourceVersion\": \"397\",\n \"creationTimestamp\": \"2019-11-03T06:55:17Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"kindnet-c744w\",\n \"uid\": \"35ef4026-034e-4ef5-883d-43692c07aeb9\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"349\",\n \"fieldPath\": \"spec.containers{kindnet-cni}\"\n },\n \"reason\": \"Pulling\",\n \"message\": \"Pulling image \\\"kindest/kindnetd:0.5.3@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555\\\"\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"kind-control-plane\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:17Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:17Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kindnet-c744w.15d394a226150207\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-c744w.15d394a226150207\",\n \"uid\": \"b94e714f-10e0-47a7-b995-fd7892b10407\",\n \"resourceVersion\": \"407\",\n \"creationTimestamp\": \"2019-11-03T06:55:19Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"kindnet-c744w\",\n \"uid\": \"35ef4026-034e-4ef5-883d-43692c07aeb9\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"349\",\n \"fieldPath\": \"spec.containers{kindnet-cni}\"\n },\n \"reason\": \"Pulled\",\n \"message\": \"Successfully pulled image \\\"kindest/kindnetd:0.5.3@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555\\\"\",\n \"source\": {\n \"component\": 
\"kubelet\",\n \"host\": \"kind-control-plane\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:19Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:19Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kindnet-c744w.15d394a22f8ff4c5\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-c744w.15d394a22f8ff4c5\",\n \"uid\": \"cba98a4a-de58-4a9d-892e-fb6b3b98d5d1\",\n \"resourceVersion\": \"409\",\n \"creationTimestamp\": \"2019-11-03T06:55:19Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"kindnet-c744w\",\n \"uid\": \"35ef4026-034e-4ef5-883d-43692c07aeb9\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"349\",\n \"fieldPath\": \"spec.containers{kindnet-cni}\"\n },\n \"reason\": \"Created\",\n \"message\": \"Created container kindnet-cni\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"kind-control-plane\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:19Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:19Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kindnet-c744w.15d394a25030a38f\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-c744w.15d394a25030a38f\",\n \"uid\": \"492f6149-0dc9-4e21-a6ac-4e072111a698\",\n \"resourceVersion\": \"414\",\n \"creationTimestamp\": \"2019-11-03T06:55:19Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"kindnet-c744w\",\n \"uid\": \"35ef4026-034e-4ef5-883d-43692c07aeb9\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"349\",\n \"fieldPath\": \"spec.containers{kindnet-cni}\"\n },\n \"reason\": \"Started\",\n \"message\": \"Started container kindnet-cni\",\n \"source\": {\n \"component\": 
\"kubelet\",\n \"host\": \"kind-control-plane\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:19Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:19Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kindnet-zlgk8.15d394a5791ccd5f\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-zlgk8.15d394a5791ccd5f\",\n \"uid\": \"e3cf2620-e2e5-4d22-ad1f-50d3ca672019\",\n \"resourceVersion\": \"488\",\n \"creationTimestamp\": \"2019-11-03T06:55:33Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"kindnet-zlgk8\",\n \"uid\": \"89e0dc23-f3fd-4a81-8bf9-e661b598dce9\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"470\"\n },\n \"reason\": \"Scheduled\",\n \"message\": \"Successfully assigned kube-system/kindnet-zlgk8 to kind-worker\",\n \"source\": {\n \"component\": \"default-scheduler\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:33Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:33Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kindnet-zlgk8.15d394a5a3eaa791\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-zlgk8.15d394a5a3eaa791\",\n \"uid\": \"96444b7a-c33f-4452-9c5f-ae4f9b6f1697\",\n \"resourceVersion\": \"498\",\n \"creationTimestamp\": \"2019-11-03T06:55:34Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"kindnet-zlgk8\",\n \"uid\": \"89e0dc23-f3fd-4a81-8bf9-e661b598dce9\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"486\",\n \"fieldPath\": \"spec.containers{kindnet-cni}\"\n },\n \"reason\": \"Pulling\",\n \"message\": \"Pulling image 
\\\"kindest/kindnetd:0.5.3@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555\\\"\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"kind-worker\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:34Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:34Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kindnet-zlgk8.15d394a64243ab44\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-zlgk8.15d394a64243ab44\",\n \"uid\": \"476634b7-342f-4443-8731-a2f10c6f8bcb\",\n \"resourceVersion\": \"514\",\n \"creationTimestamp\": \"2019-11-03T06:55:36Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"kindnet-zlgk8\",\n \"uid\": \"89e0dc23-f3fd-4a81-8bf9-e661b598dce9\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"486\",\n \"fieldPath\": \"spec.containers{kindnet-cni}\"\n },\n \"reason\": \"Pulled\",\n \"message\": \"Successfully pulled image \\\"kindest/kindnetd:0.5.3@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555\\\"\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"kind-worker\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:36Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:36Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kindnet-zlgk8.15d394a651ae9f78\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-zlgk8.15d394a651ae9f78\",\n \"uid\": \"96e1de38-8221-4cdf-991b-7bebb052d592\",\n \"resourceVersion\": \"520\",\n \"creationTimestamp\": \"2019-11-03T06:55:36Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"kindnet-zlgk8\",\n \"uid\": \"89e0dc23-f3fd-4a81-8bf9-e661b598dce9\",\n 
\"apiVersion\": \"v1\",\n \"resourceVersion\": \"486\",\n \"fieldPath\": \"spec.containers{kindnet-cni}\"\n },\n \"reason\": \"Created\",\n \"message\": \"Created container kindnet-cni\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"kind-worker\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:36Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:36Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kindnet-zlgk8.15d394a66de0a9ba\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-zlgk8.15d394a66de0a9ba\",\n \"uid\": \"38362101-851f-4f23-9112-7c2a6b4974ae\",\n \"resourceVersion\": \"523\",\n \"creationTimestamp\": \"2019-11-03T06:55:37Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"kindnet-zlgk8\",\n \"uid\": \"89e0dc23-f3fd-4a81-8bf9-e661b598dce9\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"486\",\n \"fieldPath\": \"spec.containers{kindnet-cni}\"\n },\n \"reason\": \"Started\",\n \"message\": \"Started container kindnet-cni\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"kind-worker\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:37Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:37Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kindnet.15d394a17af1c88a\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet.15d394a17af1c88a\",\n \"uid\": \"d28da908-f3d6-4263-8ace-da810c5b14f4\",\n \"resourceVersion\": \"348\",\n \"creationTimestamp\": \"2019-11-03T06:55:16Z\"\n },\n \"involvedObject\": {\n \"kind\": \"DaemonSet\",\n \"namespace\": \"kube-system\",\n \"name\": \"kindnet\",\n \"uid\": \"6205a1e4-b14b-4964-91f6-b11c04209bbb\",\n \"apiVersion\": 
\"apps/v1\",\n \"resourceVersion\": \"223\"\n },\n \"reason\": \"SuccessfulCreate\",\n \"message\": \"Created pod: kindnet-c744w\",\n \"source\": {\n \"component\": \"daemonset-controller\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:16Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:16Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kindnet.15d394a574cf0f4d\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet.15d394a574cf0f4d\",\n \"uid\": \"c3be3ae4-aadb-4fa2-8074-46c53e9e890f\",\n \"resourceVersion\": \"460\",\n \"creationTimestamp\": \"2019-11-03T06:55:33Z\"\n },\n \"involvedObject\": {\n \"kind\": \"DaemonSet\",\n \"namespace\": \"kube-system\",\n \"name\": \"kindnet\",\n \"uid\": \"6205a1e4-b14b-4964-91f6-b11c04209bbb\",\n \"apiVersion\": \"apps/v1\",\n \"resourceVersion\": \"416\"\n },\n \"reason\": \"SuccessfulCreate\",\n \"message\": \"Created pod: kindnet-9g7zl\",\n \"source\": {\n \"component\": \"daemonset-controller\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:33Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:33Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kindnet.15d394a576855d39\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet.15d394a576855d39\",\n \"uid\": \"526cb965-8b94-4852-b065-99a841c2cea5\",\n \"resourceVersion\": \"481\",\n \"creationTimestamp\": \"2019-11-03T06:55:33Z\"\n },\n \"involvedObject\": {\n \"kind\": \"DaemonSet\",\n \"namespace\": \"kube-system\",\n \"name\": \"kindnet\",\n \"uid\": \"6205a1e4-b14b-4964-91f6-b11c04209bbb\",\n \"apiVersion\": \"apps/v1\",\n \"resourceVersion\": \"416\"\n },\n \"reason\": \"SuccessfulCreate\",\n \"message\": \"Created pod: kindnet-zlgk8\",\n \"source\": {\n 
\"component\": \"daemonset-controller\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:33Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:33Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kube-apiserver-kind-control-plane.15d3949af1536670\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-apiserver-kind-control-plane.15d3949af1536670\",\n \"uid\": \"88abb54b-a26a-4d8a-a96a-2f336bfd0149\",\n \"resourceVersion\": \"228\",\n \"creationTimestamp\": \"2019-11-03T06:55:02Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"kube-apiserver-kind-control-plane\",\n \"uid\": \"1f4a1fd45c079aadc57e77a085cbbea0\",\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"spec.containers{kube-apiserver}\"\n },\n \"reason\": \"Pulled\",\n \"message\": \"Container image \\\"k8s.gcr.io/kube-apiserver:v1.18.0-alpha.0.178_0c66e64b140011\\\" already present on machine\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"kind-control-plane\"\n },\n \"firstTimestamp\": \"2019-11-03T06:54:48Z\",\n \"lastTimestamp\": \"2019-11-03T06:54:48Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kube-apiserver-kind-control-plane.15d3949beeab2cf8\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-apiserver-kind-control-plane.15d3949beeab2cf8\",\n \"uid\": \"537e501f-6ab9-4343-8eb6-b45b26d949e2\",\n \"resourceVersion\": \"234\",\n \"creationTimestamp\": \"2019-11-03T06:55:03Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"kube-apiserver-kind-control-plane\",\n \"uid\": \"1f4a1fd45c079aadc57e77a085cbbea0\",\n \"apiVersion\": \"v1\",\n \"fieldPath\": 
\"spec.containers{kube-apiserver}\"\n },\n \"reason\": \"Created\",\n \"message\": \"Created container kube-apiserver\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"kind-control-plane\"\n },\n \"firstTimestamp\": \"2019-11-03T06:54:52Z\",\n \"lastTimestamp\": \"2019-11-03T06:54:52Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kube-apiserver-kind-control-plane.15d3949bfd5f2807\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-apiserver-kind-control-plane.15d3949bfd5f2807\",\n \"uid\": \"302ecde1-ec36-4de3-8d83-f709cfb2c720\",\n \"resourceVersion\": \"239\",\n \"creationTimestamp\": \"2019-11-03T06:55:04Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"kube-apiserver-kind-control-plane\",\n \"uid\": \"1f4a1fd45c079aadc57e77a085cbbea0\",\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"spec.containers{kube-apiserver}\"\n },\n \"reason\": \"Started\",\n \"message\": \"Started container kube-apiserver\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"kind-control-plane\"\n },\n \"firstTimestamp\": \"2019-11-03T06:54:52Z\",\n \"lastTimestamp\": \"2019-11-03T06:54:52Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kube-controller-manager-kind-control-plane.15d3949af0dd9c63\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-controller-manager-kind-control-plane.15d3949af0dd9c63\",\n \"uid\": \"82b2ef21-febd-425a-83f7-2df74145f33b\",\n \"resourceVersion\": \"227\",\n \"creationTimestamp\": \"2019-11-03T06:55:02Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"kube-controller-manager-kind-control-plane\",\n 
\"uid\": \"e8347a9972165bc92f6de0a5bf784ce4\",\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"spec.containers{kube-controller-manager}\"\n },\n \"reason\": \"Pulled\",\n \"message\": \"Container image \\\"k8s.gcr.io/kube-controller-manager:v1.18.0-alpha.0.178_0c66e64b140011\\\" already present on machine\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"kind-control-plane\"\n },\n \"firstTimestamp\": \"2019-11-03T06:54:48Z\",\n \"lastTimestamp\": \"2019-11-03T06:54:48Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kube-controller-manager-kind-control-plane.15d3949bed346b2c\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-controller-manager-kind-control-plane.15d3949bed346b2c\",\n \"uid\": \"14a15b97-ff21-4eba-a2c1-89cb5f11f81f\",\n \"resourceVersion\": \"231\",\n \"creationTimestamp\": \"2019-11-03T06:55:03Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"kube-controller-manager-kind-control-plane\",\n \"uid\": \"e8347a9972165bc92f6de0a5bf784ce4\",\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"spec.containers{kube-controller-manager}\"\n },\n \"reason\": \"Created\",\n \"message\": \"Created container kube-controller-manager\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"kind-control-plane\"\n },\n \"firstTimestamp\": \"2019-11-03T06:54:52Z\",\n \"lastTimestamp\": \"2019-11-03T06:54:52Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kube-controller-manager-kind-control-plane.15d3949bf9088b23\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-controller-manager-kind-control-plane.15d3949bf9088b23\",\n \"uid\": \"e986c954-d7ec-4765-a91b-d0b1f83f3a61\",\n 
\"resourceVersion\": \"237\",\n \"creationTimestamp\": \"2019-11-03T06:55:03Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"kube-controller-manager-kind-control-plane\",\n \"uid\": \"e8347a9972165bc92f6de0a5bf784ce4\",\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"spec.containers{kube-controller-manager}\"\n },\n \"reason\": \"Started\",\n \"message\": \"Started container kube-controller-manager\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"kind-control-plane\"\n },\n \"firstTimestamp\": \"2019-11-03T06:54:52Z\",\n \"lastTimestamp\": \"2019-11-03T06:54:52Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kube-controller-manager.15d3949d95b1f16f\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-controller-manager.15d3949d95b1f16f\",\n \"uid\": \"732d1fae-02ff-4f40-ac20-10d94c427766\",\n \"resourceVersion\": \"158\",\n \"creationTimestamp\": \"2019-11-03T06:54:59Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Endpoints\",\n \"namespace\": \"kube-system\",\n \"name\": \"kube-controller-manager\",\n \"uid\": \"2fc77524-998a-4908-ad5c-b35d6eb77ea0\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"156\"\n },\n \"reason\": \"LeaderElection\",\n \"message\": \"kind-control-plane_628d308b-676a-4b24-89d4-a22b72af91ad became leader\",\n \"source\": {\n \"component\": \"kube-controller-manager\"\n },\n \"firstTimestamp\": \"2019-11-03T06:54:59Z\",\n \"lastTimestamp\": \"2019-11-03T06:54:59Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kube-controller-manager.15d3949d95b243ff\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-controller-manager.15d3949d95b243ff\",\n 
\"uid\": \"492d3119-ef88-4bff-ac20-c0612028eaa3\",\n \"resourceVersion\": \"159\",\n \"creationTimestamp\": \"2019-11-03T06:54:59Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Lease\",\n \"namespace\": \"kube-system\",\n \"name\": \"kube-controller-manager\",\n \"uid\": \"2e734402-ef1d-49c9-a2ec-ccd698d810fc\",\n \"apiVersion\": \"coordination.k8s.io/v1\",\n \"resourceVersion\": \"157\"\n },\n \"reason\": \"LeaderElection\",\n \"message\": \"kind-control-plane_628d308b-676a-4b24-89d4-a22b72af91ad became leader\",\n \"source\": {\n \"component\": \"kube-controller-manager\"\n },\n \"firstTimestamp\": \"2019-11-03T06:54:59Z\",\n \"lastTimestamp\": \"2019-11-03T06:54:59Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kube-proxy-5qht6.15d394a57749b3b6\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-5qht6.15d394a57749b3b6\",\n \"uid\": \"c2523d16-0824-4d9c-93f0-4dfa9c8c3851\",\n \"resourceVersion\": \"480\",\n \"creationTimestamp\": \"2019-11-03T06:55:33Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"kube-proxy-5qht6\",\n \"uid\": \"e85e54c8-3e80-43a3-8553-a4ee90da8581\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"471\"\n },\n \"reason\": \"Scheduled\",\n \"message\": \"Successfully assigned kube-system/kube-proxy-5qht6 to kind-worker\",\n \"source\": {\n \"component\": \"default-scheduler\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:33Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:33Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kube-proxy-5qht6.15d394a596108bcb\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-5qht6.15d394a596108bcb\",\n \"uid\": 
\"a99f5f0a-a7c0-453f-9170-8e4eae614256\",\n \"resourceVersion\": \"492\",\n \"creationTimestamp\": \"2019-11-03T06:55:33Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"kube-proxy-5qht6\",\n \"uid\": \"e85e54c8-3e80-43a3-8553-a4ee90da8581\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"475\",\n \"fieldPath\": \"spec.containers{kube-proxy}\"\n },\n \"reason\": \"Pulled\",\n \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.178_0c66e64b140011\\\" already present on machine\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"kind-worker\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:33Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:33Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kube-proxy-5qht6.15d394a609fff575\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-5qht6.15d394a609fff575\",\n \"uid\": \"34cb5c1c-859f-4e40-9ed1-972ea54d462d\",\n \"resourceVersion\": \"505\",\n \"creationTimestamp\": \"2019-11-03T06:55:35Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"kube-proxy-5qht6\",\n \"uid\": \"e85e54c8-3e80-43a3-8553-a4ee90da8581\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"475\",\n \"fieldPath\": \"spec.containers{kube-proxy}\"\n },\n \"reason\": \"Created\",\n \"message\": \"Created container kube-proxy\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"kind-worker\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:35Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:35Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kube-proxy-5qht6.15d394a618bfdfee\",\n \"namespace\": \"kube-system\",\n \"selfLink\": 
\"/api/v1/namespaces/kube-system/events/kube-proxy-5qht6.15d394a618bfdfee\",\n \"uid\": \"b792940b-dc8a-4cfc-b14c-5d36b9d69e06\",\n \"resourceVersion\": \"511\",\n \"creationTimestamp\": \"2019-11-03T06:55:35Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"kube-proxy-5qht6\",\n \"uid\": \"e85e54c8-3e80-43a3-8553-a4ee90da8581\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"475\",\n \"fieldPath\": \"spec.containers{kube-proxy}\"\n },\n \"reason\": \"Started\",\n \"message\": \"Started container kube-proxy\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"kind-worker\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:35Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:35Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kube-proxy-5zhtl.15d394a17caf26b0\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-5zhtl.15d394a17caf26b0\",\n \"uid\": \"b5e0dbfc-210e-42ea-a77e-dc71633b3542\",\n \"resourceVersion\": \"365\",\n \"creationTimestamp\": \"2019-11-03T06:55:16Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"kube-proxy-5zhtl\",\n \"uid\": \"4e6d32c5-8ec5-41eb-98df-f2c07cac0f92\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"345\"\n },\n \"reason\": \"Scheduled\",\n \"message\": \"Successfully assigned kube-system/kube-proxy-5zhtl to kind-control-plane\",\n \"source\": {\n \"component\": \"default-scheduler\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:16Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:16Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kube-proxy-5zhtl.15d394a19c92d298\",\n \"namespace\": \"kube-system\",\n \"selfLink\": 
\"/api/v1/namespaces/kube-system/events/kube-proxy-5zhtl.15d394a19c92d298\",\n \"uid\": \"46b4c6fe-43d5-4ad1-8e91-d2ac7d3adae1\",\n \"resourceVersion\": \"396\",\n \"creationTimestamp\": \"2019-11-03T06:55:16Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"kube-proxy-5zhtl\",\n \"uid\": \"4e6d32c5-8ec5-41eb-98df-f2c07cac0f92\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"350\",\n \"fieldPath\": \"spec.containers{kube-proxy}\"\n },\n \"reason\": \"Pulled\",\n \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.178_0c66e64b140011\\\" already present on machine\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"kind-control-plane\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:16Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:16Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kube-proxy-5zhtl.15d394a1c26f8637\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-5zhtl.15d394a1c26f8637\",\n \"uid\": \"d6f4ac94-5681-438e-ad89-b8ec771e165a\",\n \"resourceVersion\": \"398\",\n \"creationTimestamp\": \"2019-11-03T06:55:17Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"kube-proxy-5zhtl\",\n \"uid\": \"4e6d32c5-8ec5-41eb-98df-f2c07cac0f92\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"350\",\n \"fieldPath\": \"spec.containers{kube-proxy}\"\n },\n \"reason\": \"Created\",\n \"message\": \"Created container kube-proxy\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"kind-control-plane\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:17Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:17Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": 
\"kube-proxy-5zhtl.15d394a1c9e71105\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-5zhtl.15d394a1c9e71105\",\n \"uid\": \"ab7ed40f-72fd-4a20-b85e-8a7af052aec0\",\n \"resourceVersion\": \"399\",\n \"creationTimestamp\": \"2019-11-03T06:55:17Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"kube-proxy-5zhtl\",\n \"uid\": \"4e6d32c5-8ec5-41eb-98df-f2c07cac0f92\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"350\",\n \"fieldPath\": \"spec.containers{kube-proxy}\"\n },\n \"reason\": \"Started\",\n \"message\": \"Started container kube-proxy\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"kind-control-plane\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:17Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:17Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kube-proxy-xzk56.15d394a575a1b6cd\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-xzk56.15d394a575a1b6cd\",\n \"uid\": \"10e6fc0d-1c93-4097-b08b-db564a0008e0\",\n \"resourceVersion\": \"467\",\n \"creationTimestamp\": \"2019-11-03T06:55:33Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"kube-proxy-xzk56\",\n \"uid\": \"3f611db6-3278-409a-9cc3-d9d82f6f804f\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"458\"\n },\n \"reason\": \"Scheduled\",\n \"message\": \"Successfully assigned kube-system/kube-proxy-xzk56 to kind-worker2\",\n \"source\": {\n \"component\": \"default-scheduler\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:33Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:33Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": 
\"kube-proxy-xzk56.15d394a5982288fb\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-xzk56.15d394a5982288fb\",\n \"uid\": \"da598b88-696d-4f55-8694-027d4a3b3726\",\n \"resourceVersion\": \"495\",\n \"creationTimestamp\": \"2019-11-03T06:55:33Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"kube-proxy-xzk56\",\n \"uid\": \"3f611db6-3278-409a-9cc3-d9d82f6f804f\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"465\",\n \"fieldPath\": \"spec.containers{kube-proxy}\"\n },\n \"reason\": \"Pulled\",\n \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.178_0c66e64b140011\\\" already present on machine\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"kind-worker2\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:33Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:33Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kube-proxy-xzk56.15d394a609fd666e\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-xzk56.15d394a609fd666e\",\n \"uid\": \"466a06e4-3813-4481-a1f6-8a3e21466af5\",\n \"resourceVersion\": \"504\",\n \"creationTimestamp\": \"2019-11-03T06:55:35Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"kube-proxy-xzk56\",\n \"uid\": \"3f611db6-3278-409a-9cc3-d9d82f6f804f\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"465\",\n \"fieldPath\": \"spec.containers{kube-proxy}\"\n },\n \"reason\": \"Created\",\n \"message\": \"Created container kube-proxy\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"kind-worker2\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:35Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:35Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n 
\"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kube-proxy-xzk56.15d394a613c0d7c4\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-xzk56.15d394a613c0d7c4\",\n \"uid\": \"e16a1658-0efd-4305-a197-ddf4b34c6b47\",\n \"resourceVersion\": \"508\",\n \"creationTimestamp\": \"2019-11-03T06:55:35Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"kube-proxy-xzk56\",\n \"uid\": \"3f611db6-3278-409a-9cc3-d9d82f6f804f\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"465\",\n \"fieldPath\": \"spec.containers{kube-proxy}\"\n },\n \"reason\": \"Started\",\n \"message\": \"Started container kube-proxy\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"kind-worker2\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:35Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:35Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kube-proxy.15d394a17b524827\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy.15d394a17b524827\",\n \"uid\": \"d781ab83-7239-4fc9-955c-915f3ac59393\",\n \"resourceVersion\": \"358\",\n \"creationTimestamp\": \"2019-11-03T06:55:16Z\"\n },\n \"involvedObject\": {\n \"kind\": \"DaemonSet\",\n \"namespace\": \"kube-system\",\n \"name\": \"kube-proxy\",\n \"uid\": \"84867c7a-1a39-453d-9710-49aa83ccb389\",\n \"apiVersion\": \"apps/v1\",\n \"resourceVersion\": \"200\"\n },\n \"reason\": \"SuccessfulCreate\",\n \"message\": \"Created pod: kube-proxy-5zhtl\",\n \"source\": {\n \"component\": \"daemonset-controller\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:16Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:16Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n 
},\n {\n \"metadata\": {\n \"name\": \"kube-proxy.15d394a574d86f69\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy.15d394a574d86f69\",\n \"uid\": \"3fa4c755-9ea4-4192-8a42-cf59e6dcbab7\",\n \"resourceVersion\": \"469\",\n \"creationTimestamp\": \"2019-11-03T06:55:33Z\"\n },\n \"involvedObject\": {\n \"kind\": \"DaemonSet\",\n \"namespace\": \"kube-system\",\n \"name\": \"kube-proxy\",\n \"uid\": \"84867c7a-1a39-453d-9710-49aa83ccb389\",\n \"apiVersion\": \"apps/v1\",\n \"resourceVersion\": \"405\"\n },\n \"reason\": \"SuccessfulCreate\",\n \"message\": \"Created pod: kube-proxy-xzk56\",\n \"source\": {\n \"component\": \"daemonset-controller\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:33Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:33Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kube-proxy.15d394a576746dea\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy.15d394a576746dea\",\n \"uid\": \"58f97f74-04ad-4957-ab56-bb1e6376a668\",\n \"resourceVersion\": \"473\",\n \"creationTimestamp\": \"2019-11-03T06:55:33Z\"\n },\n \"involvedObject\": {\n \"kind\": \"DaemonSet\",\n \"namespace\": \"kube-system\",\n \"name\": \"kube-proxy\",\n \"uid\": \"84867c7a-1a39-453d-9710-49aa83ccb389\",\n \"apiVersion\": \"apps/v1\",\n \"resourceVersion\": \"464\"\n },\n \"reason\": \"SuccessfulCreate\",\n \"message\": \"Created pod: kube-proxy-5qht6\",\n \"source\": {\n \"component\": \"daemonset-controller\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:33Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:33Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kube-scheduler-kind-control-plane.15d3949aef8318f5\",\n \"namespace\": \"kube-system\",\n 
\"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-scheduler-kind-control-plane.15d3949aef8318f5\",\n \"uid\": \"2cdcd1cc-8c5c-4015-a501-7126032dd95d\",\n \"resourceVersion\": \"226\",\n \"creationTimestamp\": \"2019-11-03T06:55:02Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"kube-scheduler-kind-control-plane\",\n \"uid\": \"9170df3c54089a31fddf64a59f662b80\",\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"spec.containers{kube-scheduler}\"\n },\n \"reason\": \"Pulled\",\n \"message\": \"Container image \\\"k8s.gcr.io/kube-scheduler:v1.18.0-alpha.0.178_0c66e64b140011\\\" already present on machine\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"kind-control-plane\"\n },\n \"firstTimestamp\": \"2019-11-03T06:54:48Z\",\n \"lastTimestamp\": \"2019-11-03T06:54:48Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kube-scheduler-kind-control-plane.15d3949beccb1b64\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-scheduler-kind-control-plane.15d3949beccb1b64\",\n \"uid\": \"296ff9f2-8e38-4f69-9e75-6ff4edae8c92\",\n \"resourceVersion\": \"229\",\n \"creationTimestamp\": \"2019-11-03T06:55:03Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"kube-scheduler-kind-control-plane\",\n \"uid\": \"9170df3c54089a31fddf64a59f662b80\",\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"spec.containers{kube-scheduler}\"\n },\n \"reason\": \"Created\",\n \"message\": \"Created container kube-scheduler\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"kind-control-plane\"\n },\n \"firstTimestamp\": \"2019-11-03T06:54:52Z\",\n \"lastTimestamp\": \"2019-11-03T06:54:52Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": 
\"\"\n },\n {\n \"metadata\": {\n \"name\": \"kube-scheduler-kind-control-plane.15d3949bfa486d26\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-scheduler-kind-control-plane.15d3949bfa486d26\",\n \"uid\": \"4ac770d1-a3d4-409a-9900-2e0dd54fe264\",\n \"resourceVersion\": \"238\",\n \"creationTimestamp\": \"2019-11-03T06:55:03Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"kube-scheduler-kind-control-plane\",\n \"uid\": \"9170df3c54089a31fddf64a59f662b80\",\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"spec.containers{kube-scheduler}\"\n },\n \"reason\": \"Started\",\n \"message\": \"Started container kube-scheduler\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"kind-control-plane\"\n },\n \"firstTimestamp\": \"2019-11-03T06:54:52Z\",\n \"lastTimestamp\": \"2019-11-03T06:54:52Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kube-scheduler.15d3949d9aa59935\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-scheduler.15d3949d9aa59935\",\n \"uid\": \"f09c82a2-3b23-48d1-85de-18e941987c57\",\n \"resourceVersion\": \"163\",\n \"creationTimestamp\": \"2019-11-03T06:54:59Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Endpoints\",\n \"namespace\": \"kube-system\",\n \"name\": \"kube-scheduler\",\n \"uid\": \"8facb795-f557-4c31-90dd-c5c8d6eff8fc\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"161\"\n },\n \"reason\": \"LeaderElection\",\n \"message\": \"kind-control-plane_efc2b310-d371-4c6d-9bd4-80e612d239a9 became leader\",\n \"source\": {\n \"component\": \"default-scheduler\"\n },\n \"firstTimestamp\": \"2019-11-03T06:54:59Z\",\n \"lastTimestamp\": \"2019-11-03T06:54:59Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n 
\"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kube-scheduler.15d3949d9aa5d82a\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-scheduler.15d3949d9aa5d82a\",\n \"uid\": \"76b39021-1cae-4a0b-929e-b21d7fdc8953\",\n \"resourceVersion\": \"164\",\n \"creationTimestamp\": \"2019-11-03T06:54:59Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Lease\",\n \"namespace\": \"kube-system\",\n \"name\": \"kube-scheduler\",\n \"uid\": \"6f28bc0f-96d1-4eb0-a0b0-0d78c6989aae\",\n \"apiVersion\": \"coordination.k8s.io/v1\",\n \"resourceVersion\": \"162\"\n },\n \"reason\": \"LeaderElection\",\n \"message\": \"kind-control-plane_efc2b310-d371-4c6d-9bd4-80e612d239a9 became leader\",\n \"source\": {\n \"component\": \"default-scheduler\"\n },\n \"firstTimestamp\": \"2019-11-03T06:54:59Z\",\n \"lastTimestamp\": \"2019-11-03T06:54:59Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n }\n ]\n}\n{\n \"kind\": \"ReplicationControllerList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"selfLink\": \"/api/v1/namespaces/kube-system/replicationcontrollers\",\n \"resourceVersion\": \"2534\"\n },\n \"items\": []\n}\n{\n \"kind\": \"ServiceList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"selfLink\": \"/api/v1/namespaces/kube-system/services\",\n \"resourceVersion\": \"2534\"\n },\n \"items\": [\n {\n \"metadata\": {\n \"name\": \"kube-dns\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/services/kube-dns\",\n \"uid\": \"169aa2ce-e8f5-4f8b-8188-386a4c19de83\",\n \"resourceVersion\": \"194\",\n \"creationTimestamp\": \"2019-11-03T06:55:00Z\",\n \"labels\": {\n \"k8s-app\": \"kube-dns\",\n \"kubernetes.io/cluster-service\": \"true\",\n \"kubernetes.io/name\": \"KubeDNS\"\n },\n \"annotations\": {\n \"prometheus.io/port\": \"9153\",\n \"prometheus.io/scrape\": \"true\"\n }\n },\n \"spec\": {\n \"ports\": [\n 
{\n \"name\": \"dns\",\n \"protocol\": \"UDP\",\n \"port\": 53,\n \"targetPort\": 53\n },\n {\n \"name\": \"dns-tcp\",\n \"protocol\": \"TCP\",\n \"port\": 53,\n \"targetPort\": 53\n },\n {\n \"name\": \"metrics\",\n \"protocol\": \"TCP\",\n \"port\": 9153,\n \"targetPort\": 9153\n }\n ],\n \"selector\": {\n \"k8s-app\": \"kube-dns\"\n },\n \"clusterIP\": \"10.96.0.10\",\n \"type\": \"ClusterIP\",\n \"sessionAffinity\": \"None\"\n },\n \"status\": {\n \"loadBalancer\": {}\n }\n }\n ]\n}\n{\n \"kind\": \"DaemonSetList\",\n \"apiVersion\": \"apps/v1\",\n \"metadata\": {\n \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/daemonsets\",\n \"resourceVersion\": \"2534\"\n },\n \"items\": [\n {\n \"metadata\": {\n \"name\": \"kindnet\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet\",\n \"uid\": \"6205a1e4-b14b-4964-91f6-b11c04209bbb\",\n \"resourceVersion\": \"529\",\n \"generation\": 1,\n \"creationTimestamp\": \"2019-11-03T06:55:02Z\",\n \"labels\": {\n \"app\": \"kindnet\",\n \"k8s-app\": \"kindnet\",\n \"tier\": \"node\"\n },\n \"annotations\": {\n \"deprecated.daemonset.template.generation\": \"1\"\n }\n },\n \"spec\": {\n \"selector\": {\n \"matchLabels\": {\n \"app\": \"kindnet\"\n }\n },\n \"template\": {\n \"metadata\": {\n \"creationTimestamp\": null,\n \"labels\": {\n \"app\": \"kindnet\",\n \"k8s-app\": \"kindnet\",\n \"tier\": \"node\"\n }\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"cni-cfg\",\n \"hostPath\": {\n \"path\": \"/etc/cni/net.d\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"xtables-lock\",\n \"hostPath\": {\n \"path\": \"/run/xtables.lock\",\n \"type\": \"FileOrCreate\"\n }\n },\n {\n \"name\": \"lib-modules\",\n \"hostPath\": {\n \"path\": \"/lib/modules\",\n \"type\": \"\"\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"kindnet-cni\",\n \"image\": \"kindest/kindnetd:0.5.3@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555\",\n \"env\": [\n {\n 
\"name\": \"HOST_IP\",\n \"valueFrom\": {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"status.hostIP\"\n }\n }\n },\n {\n \"name\": \"POD_IP\",\n \"valueFrom\": {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"status.podIP\"\n }\n }\n },\n {\n \"name\": \"POD_SUBNET\",\n \"value\": \"10.244.0.0/16\"\n }\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"100m\",\n \"memory\": \"50Mi\"\n },\n \"requests\": {\n \"cpu\": \"100m\",\n \"memory\": \"50Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"cni-cfg\",\n \"mountPath\": \"/etc/cni/net.d\"\n },\n {\n \"name\": \"xtables-lock\",\n \"mountPath\": \"/run/xtables.lock\"\n },\n {\n \"name\": \"lib-modules\",\n \"readOnly\": true,\n \"mountPath\": \"/lib/modules\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"capabilities\": {\n \"add\": [\n \"NET_RAW\",\n \"NET_ADMIN\"\n ]\n },\n \"privileged\": false\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"kindnet\",\n \"serviceAccount\": \"kindnet\",\n \"hostNetwork\": true,\n \"securityContext\": {},\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ]\n }\n },\n \"updateStrategy\": {\n \"type\": \"RollingUpdate\",\n \"rollingUpdate\": {\n \"maxUnavailable\": 1\n }\n },\n \"revisionHistoryLimit\": 10\n },\n \"status\": {\n \"currentNumberScheduled\": 3,\n \"numberMisscheduled\": 0,\n \"desiredNumberScheduled\": 3,\n \"numberReady\": 3,\n \"observedGeneration\": 1,\n \"updatedNumberScheduled\": 3,\n \"numberAvailable\": 3\n }\n },\n {\n \"metadata\": {\n \"name\": \"kube-proxy\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy\",\n \"uid\": 
\"84867c7a-1a39-453d-9710-49aa83ccb389\",\n \"resourceVersion\": \"519\",\n \"generation\": 1,\n \"creationTimestamp\": \"2019-11-03T06:55:00Z\",\n \"labels\": {\n \"k8s-app\": \"kube-proxy\"\n },\n \"annotations\": {\n \"deprecated.daemonset.template.generation\": \"1\"\n }\n },\n \"spec\": {\n \"selector\": {\n \"matchLabels\": {\n \"k8s-app\": \"kube-proxy\"\n }\n },\n \"template\": {\n \"metadata\": {\n \"creationTimestamp\": null,\n \"labels\": {\n \"k8s-app\": \"kube-proxy\"\n }\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"kube-proxy\",\n \"configMap\": {\n \"name\": \"kube-proxy\",\n \"defaultMode\": 420\n }\n },\n {\n \"name\": \"xtables-lock\",\n \"hostPath\": {\n \"path\": \"/run/xtables.lock\",\n \"type\": \"FileOrCreate\"\n }\n },\n {\n \"name\": \"lib-modules\",\n \"hostPath\": {\n \"path\": \"/lib/modules\",\n \"type\": \"\"\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"kube-proxy\",\n \"image\": \"k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.178_0c66e64b140011\",\n \"command\": [\n \"/usr/local/bin/kube-proxy\",\n \"--config=/var/lib/kube-proxy/config.conf\",\n \"--hostname-override=$(NODE_NAME)\"\n ],\n \"resources\": {},\n \"volumeMounts\": [\n {\n \"name\": \"kube-proxy\",\n \"mountPath\": \"/var/lib/kube-proxy\"\n },\n {\n \"name\": \"xtables-lock\",\n \"mountPath\": \"/run/xtables.lock\"\n },\n {\n \"name\": \"lib-modules\",\n \"readOnly\": true,\n \"mountPath\": \"/lib/modules\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"privileged\": true\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"nodeSelector\": {\n \"beta.kubernetes.io/os\": \"linux\"\n },\n \"serviceAccountName\": \"kube-proxy\",\n \"serviceAccount\": \"kube-proxy\",\n \"hostNetwork\": true,\n \"securityContext\": {},\n \"schedulerName\": \"default-scheduler\",\n 
\"tolerations\": [\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n },\n {\n \"operator\": \"Exists\"\n }\n ],\n \"priorityClassName\": \"system-node-critical\"\n }\n },\n \"updateStrategy\": {\n \"type\": \"RollingUpdate\",\n \"rollingUpdate\": {\n \"maxUnavailable\": 1\n }\n },\n \"revisionHistoryLimit\": 10\n },\n \"status\": {\n \"currentNumberScheduled\": 3,\n \"numberMisscheduled\": 0,\n \"desiredNumberScheduled\": 3,\n \"numberReady\": 3,\n \"observedGeneration\": 1,\n \"updatedNumberScheduled\": 3,\n \"numberAvailable\": 3\n }\n }\n ]\n}\n{\n \"kind\": \"DeploymentList\",\n \"apiVersion\": \"apps/v1\",\n \"metadata\": {\n \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/deployments\",\n \"resourceVersion\": \"2534\"\n },\n \"items\": [\n {\n \"metadata\": {\n \"name\": \"coredns\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/deployments/coredns\",\n \"uid\": \"af23a4c9-64e5-408e-8737-ca144be79102\",\n \"resourceVersion\": \"634\",\n \"generation\": 1,\n \"creationTimestamp\": \"2019-11-03T06:55:00Z\",\n \"labels\": {\n \"k8s-app\": \"kube-dns\"\n },\n \"annotations\": {\n \"deployment.kubernetes.io/revision\": \"1\"\n }\n },\n \"spec\": {\n \"replicas\": 2,\n \"selector\": {\n \"matchLabels\": {\n \"k8s-app\": \"kube-dns\"\n }\n },\n \"template\": {\n \"metadata\": {\n \"creationTimestamp\": null,\n \"labels\": {\n \"k8s-app\": \"kube-dns\"\n }\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"config-volume\",\n \"configMap\": {\n \"name\": \"coredns\",\n \"items\": [\n {\n \"key\": \"Corefile\",\n \"path\": \"Corefile\"\n }\n ],\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"coredns\",\n \"image\": \"k8s.gcr.io/coredns:1.6.2\",\n \"args\": [\n \"-conf\",\n \"/etc/coredns/Corefile\"\n ],\n \"ports\": [\n {\n \"name\": \"dns\",\n \"containerPort\": 53,\n \"protocol\": \"UDP\"\n },\n {\n \"name\": \"dns-tcp\",\n \"containerPort\": 53,\n \"protocol\": \"TCP\"\n 
},\n {\n \"name\": \"metrics\",\n \"containerPort\": 9153,\n \"protocol\": \"TCP\"\n }\n ],\n \"resources\": {\n \"limits\": {\n \"memory\": \"170Mi\"\n },\n \"requests\": {\n \"cpu\": \"100m\",\n \"memory\": \"70Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"config-volume\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/coredns\"\n }\n ],\n \"livenessProbe\": {\n \"httpGet\": {\n \"path\": \"/health\",\n \"port\": 8080,\n \"scheme\": \"HTTP\"\n },\n \"initialDelaySeconds\": 60,\n \"timeoutSeconds\": 5,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 5\n },\n \"readinessProbe\": {\n \"httpGet\": {\n \"path\": \"/ready\",\n \"port\": 8181,\n \"scheme\": \"HTTP\"\n },\n \"timeoutSeconds\": 1,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 3\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"capabilities\": {\n \"add\": [\n \"NET_BIND_SERVICE\"\n ],\n \"drop\": [\n \"all\"\n ]\n },\n \"readOnlyRootFilesystem\": true,\n \"allowPrivilegeEscalation\": false\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"Default\",\n \"nodeSelector\": {\n \"beta.kubernetes.io/os\": \"linux\"\n },\n \"serviceAccountName\": \"coredns\",\n \"serviceAccount\": \"coredns\",\n \"securityContext\": {},\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n },\n {\n \"key\": \"node-role.kubernetes.io/master\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priorityClassName\": \"system-cluster-critical\"\n }\n },\n \"strategy\": {\n \"type\": \"RollingUpdate\",\n \"rollingUpdate\": {\n \"maxUnavailable\": 1,\n \"maxSurge\": \"25%\"\n }\n },\n \"revisionHistoryLimit\": 10,\n \"progressDeadlineSeconds\": 600\n },\n \"status\": {\n \"observedGeneration\": 1,\n \"replicas\": 2,\n 
\"updatedReplicas\": 2,\n \"readyReplicas\": 2,\n \"availableReplicas\": 2,\n \"conditions\": [\n {\n \"type\": \"Available\",\n \"status\": \"True\",\n \"lastUpdateTime\": \"2019-11-03T06:56:10Z\",\n \"lastTransitionTime\": \"2019-11-03T06:56:10Z\",\n \"reason\": \"MinimumReplicasAvailable\",\n \"message\": \"Deployment has minimum availability.\"\n },\n {\n \"type\": \"Progressing\",\n \"status\": \"True\",\n \"lastUpdateTime\": \"2019-11-03T06:56:10Z\",\n \"lastTransitionTime\": \"2019-11-03T06:55:16Z\",\n \"reason\": \"NewReplicaSetAvailable\",\n \"message\": \"ReplicaSet \\\"coredns-5644d7b6d9\\\" has successfully progressed.\"\n }\n ]\n }\n }\n ]\n}\n{\n \"kind\": \"ReplicaSetList\",\n \"apiVersion\": \"apps/v1\",\n \"metadata\": {\n \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/replicasets\",\n \"resourceVersion\": \"2534\"\n },\n \"items\": [\n {\n \"metadata\": {\n \"name\": \"coredns-5644d7b6d9\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/replicasets/coredns-5644d7b6d9\",\n \"uid\": \"00f25722-b999-4b82-a73f-0c99be5b1e9d\",\n \"resourceVersion\": \"633\",\n \"generation\": 1,\n \"creationTimestamp\": \"2019-11-03T06:55:16Z\",\n \"labels\": {\n \"k8s-app\": \"kube-dns\",\n \"pod-template-hash\": \"5644d7b6d9\"\n },\n \"annotations\": {\n \"deployment.kubernetes.io/desired-replicas\": \"2\",\n \"deployment.kubernetes.io/max-replicas\": \"3\",\n \"deployment.kubernetes.io/revision\": \"1\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"Deployment\",\n \"name\": \"coredns\",\n \"uid\": \"af23a4c9-64e5-408e-8737-ca144be79102\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"replicas\": 2,\n \"selector\": {\n \"matchLabels\": {\n \"k8s-app\": \"kube-dns\",\n \"pod-template-hash\": \"5644d7b6d9\"\n }\n },\n \"template\": {\n \"metadata\": {\n \"creationTimestamp\": null,\n \"labels\": {\n \"k8s-app\": \"kube-dns\",\n \"pod-template-hash\": 
\"5644d7b6d9\"\n }\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"config-volume\",\n \"configMap\": {\n \"name\": \"coredns\",\n \"items\": [\n {\n \"key\": \"Corefile\",\n \"path\": \"Corefile\"\n }\n ],\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"coredns\",\n \"image\": \"k8s.gcr.io/coredns:1.6.2\",\n \"args\": [\n \"-conf\",\n \"/etc/coredns/Corefile\"\n ],\n \"ports\": [\n {\n \"name\": \"dns\",\n \"containerPort\": 53,\n \"protocol\": \"UDP\"\n },\n {\n \"name\": \"dns-tcp\",\n \"containerPort\": 53,\n \"protocol\": \"TCP\"\n },\n {\n \"name\": \"metrics\",\n \"containerPort\": 9153,\n \"protocol\": \"TCP\"\n }\n ],\n \"resources\": {\n \"limits\": {\n \"memory\": \"170Mi\"\n },\n \"requests\": {\n \"cpu\": \"100m\",\n \"memory\": \"70Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"config-volume\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/coredns\"\n }\n ],\n \"livenessProbe\": {\n \"httpGet\": {\n \"path\": \"/health\",\n \"port\": 8080,\n \"scheme\": \"HTTP\"\n },\n \"initialDelaySeconds\": 60,\n \"timeoutSeconds\": 5,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 5\n },\n \"readinessProbe\": {\n \"httpGet\": {\n \"path\": \"/ready\",\n \"port\": 8181,\n \"scheme\": \"HTTP\"\n },\n \"timeoutSeconds\": 1,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 3\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"capabilities\": {\n \"add\": [\n \"NET_BIND_SERVICE\"\n ],\n \"drop\": [\n \"all\"\n ]\n },\n \"readOnlyRootFilesystem\": true,\n \"allowPrivilegeEscalation\": false\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"Default\",\n \"nodeSelector\": {\n \"beta.kubernetes.io/os\": \"linux\"\n },\n \"serviceAccountName\": \"coredns\",\n \"serviceAccount\": \"coredns\",\n 
\"securityContext\": {},\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n },\n {\n \"key\": \"node-role.kubernetes.io/master\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priorityClassName\": \"system-cluster-critical\"\n }\n }\n },\n \"status\": {\n \"replicas\": 2,\n \"fullyLabeledReplicas\": 2,\n \"readyReplicas\": 2,\n \"availableReplicas\": 2,\n \"observedGeneration\": 1\n }\n }\n ]\n}\n{\n \"kind\": \"PodList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods\",\n \"resourceVersion\": \"2534\"\n },\n \"items\": [\n {\n \"metadata\": {\n \"name\": \"coredns-5644d7b6d9-58dzr\",\n \"generateName\": \"coredns-5644d7b6d9-\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/coredns-5644d7b6d9-58dzr\",\n \"uid\": \"cf78ce08-e950-4fdc-b782-47f893532d2c\",\n \"resourceVersion\": \"631\",\n \"creationTimestamp\": \"2019-11-03T06:55:16Z\",\n \"labels\": {\n \"k8s-app\": \"kube-dns\",\n \"pod-template-hash\": \"5644d7b6d9\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"ReplicaSet\",\n \"name\": \"coredns-5644d7b6d9\",\n \"uid\": \"00f25722-b999-4b82-a73f-0c99be5b1e9d\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"config-volume\",\n \"configMap\": {\n \"name\": \"coredns\",\n \"items\": [\n {\n \"key\": \"Corefile\",\n \"path\": \"Corefile\"\n }\n ],\n \"defaultMode\": 420\n }\n },\n {\n \"name\": \"coredns-token-fgs9w\",\n \"secret\": {\n \"secretName\": \"coredns-token-fgs9w\",\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"coredns\",\n \"image\": \"k8s.gcr.io/coredns:1.6.2\",\n \"args\": [\n \"-conf\",\n \"/etc/coredns/Corefile\"\n ],\n \"ports\": [\n {\n \"name\": \"dns\",\n \"containerPort\": 53,\n \"protocol\": \"UDP\"\n },\n {\n \"name\": \"dns-tcp\",\n 
\"containerPort\": 53,\n \"protocol\": \"TCP\"\n },\n {\n \"name\": \"metrics\",\n \"containerPort\": 9153,\n \"protocol\": \"TCP\"\n }\n ],\n \"resources\": {\n \"limits\": {\n \"memory\": \"170Mi\"\n },\n \"requests\": {\n \"cpu\": \"100m\",\n \"memory\": \"70Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"config-volume\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/coredns\"\n },\n {\n \"name\": \"coredns-token-fgs9w\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"livenessProbe\": {\n \"httpGet\": {\n \"path\": \"/health\",\n \"port\": 8080,\n \"scheme\": \"HTTP\"\n },\n \"initialDelaySeconds\": 60,\n \"timeoutSeconds\": 5,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 5\n },\n \"readinessProbe\": {\n \"httpGet\": {\n \"path\": \"/ready\",\n \"port\": 8181,\n \"scheme\": \"HTTP\"\n },\n \"timeoutSeconds\": 1,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 3\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"capabilities\": {\n \"add\": [\n \"NET_BIND_SERVICE\"\n ],\n \"drop\": [\n \"all\"\n ]\n },\n \"readOnlyRootFilesystem\": true,\n \"allowPrivilegeEscalation\": false\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"Default\",\n \"nodeSelector\": {\n \"beta.kubernetes.io/os\": \"linux\"\n },\n \"serviceAccountName\": \"coredns\",\n \"serviceAccount\": \"coredns\",\n \"nodeName\": \"kind-control-plane\",\n \"securityContext\": {},\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n },\n {\n \"key\": \"node-role.kubernetes.io/master\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\",\n 
\"tolerationSeconds\": 300\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\",\n \"tolerationSeconds\": 300\n }\n ],\n \"priorityClassName\": \"system-cluster-critical\",\n \"priority\": 2000000000,\n \"enableServiceLinks\": true\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-11-03T06:55:59Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-11-03T06:56:08Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-11-03T06:56:08Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-11-03T06:55:59Z\"\n }\n ],\n \"hostIP\": \"172.17.0.3\",\n \"podIP\": \"10.244.0.3\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.0.3\"\n }\n ],\n \"startTime\": \"2019-11-03T06:55:59Z\",\n \"containerStatuses\": [\n {\n \"name\": \"coredns\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2019-11-03T06:56:00Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"k8s.gcr.io/coredns:1.6.2\",\n \"imageID\": \"sha256:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b\",\n \"containerID\": \"containerd://552a62a4e4a7f826c515acf5037e165e217273b676481e75787e8d841a6dea9b\",\n \"started\": true\n }\n ],\n \"qosClass\": \"Burstable\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"coredns-5644d7b6d9-j9fqd\",\n \"generateName\": \"coredns-5644d7b6d9-\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/coredns-5644d7b6d9-j9fqd\",\n \"uid\": \"aa121b8c-53ce-4baa-bfe4-44f08dec5504\",\n \"resourceVersion\": \"626\",\n \"creationTimestamp\": \"2019-11-03T06:55:16Z\",\n \"labels\": {\n \"k8s-app\": \"kube-dns\",\n 
\"pod-template-hash\": \"5644d7b6d9\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"ReplicaSet\",\n \"name\": \"coredns-5644d7b6d9\",\n \"uid\": \"00f25722-b999-4b82-a73f-0c99be5b1e9d\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"config-volume\",\n \"configMap\": {\n \"name\": \"coredns\",\n \"items\": [\n {\n \"key\": \"Corefile\",\n \"path\": \"Corefile\"\n }\n ],\n \"defaultMode\": 420\n }\n },\n {\n \"name\": \"coredns-token-fgs9w\",\n \"secret\": {\n \"secretName\": \"coredns-token-fgs9w\",\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"coredns\",\n \"image\": \"k8s.gcr.io/coredns:1.6.2\",\n \"args\": [\n \"-conf\",\n \"/etc/coredns/Corefile\"\n ],\n \"ports\": [\n {\n \"name\": \"dns\",\n \"containerPort\": 53,\n \"protocol\": \"UDP\"\n },\n {\n \"name\": \"dns-tcp\",\n \"containerPort\": 53,\n \"protocol\": \"TCP\"\n },\n {\n \"name\": \"metrics\",\n \"containerPort\": 9153,\n \"protocol\": \"TCP\"\n }\n ],\n \"resources\": {\n \"limits\": {\n \"memory\": \"170Mi\"\n },\n \"requests\": {\n \"cpu\": \"100m\",\n \"memory\": \"70Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"config-volume\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/coredns\"\n },\n {\n \"name\": \"coredns-token-fgs9w\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"livenessProbe\": {\n \"httpGet\": {\n \"path\": \"/health\",\n \"port\": 8080,\n \"scheme\": \"HTTP\"\n },\n \"initialDelaySeconds\": 60,\n \"timeoutSeconds\": 5,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 5\n },\n \"readinessProbe\": {\n \"httpGet\": {\n \"path\": \"/ready\",\n \"port\": 8181,\n \"scheme\": \"HTTP\"\n },\n \"timeoutSeconds\": 1,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 3\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n 
\"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"capabilities\": {\n \"add\": [\n \"NET_BIND_SERVICE\"\n ],\n \"drop\": [\n \"all\"\n ]\n },\n \"readOnlyRootFilesystem\": true,\n \"allowPrivilegeEscalation\": false\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"Default\",\n \"nodeSelector\": {\n \"beta.kubernetes.io/os\": \"linux\"\n },\n \"serviceAccountName\": \"coredns\",\n \"serviceAccount\": \"coredns\",\n \"nodeName\": \"kind-control-plane\",\n \"securityContext\": {},\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n },\n {\n \"key\": \"node-role.kubernetes.io/master\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\",\n \"tolerationSeconds\": 300\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\",\n \"tolerationSeconds\": 300\n }\n ],\n \"priorityClassName\": \"system-cluster-critical\",\n \"priority\": 2000000000,\n \"enableServiceLinks\": true\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-11-03T06:55:57Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-11-03T06:56:06Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-11-03T06:56:06Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-11-03T06:55:57Z\"\n }\n ],\n \"hostIP\": \"172.17.0.3\",\n \"podIP\": \"10.244.0.2\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.0.2\"\n }\n ],\n \"startTime\": 
\"2019-11-03T06:55:57Z\",\n \"containerStatuses\": [\n {\n \"name\": \"coredns\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2019-11-03T06:55:58Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"k8s.gcr.io/coredns:1.6.2\",\n \"imageID\": \"sha256:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b\",\n \"containerID\": \"containerd://3435e09aefa0de4695378f1f85fed31379c56aac3d100dbe69b6697c608ebb1b\",\n \"started\": true\n }\n ],\n \"qosClass\": \"Burstable\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"etcd-kind-control-plane\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/etcd-kind-control-plane\",\n \"uid\": \"b4d9a6b7-cf5c-443e-a65c-28697efdc11b\",\n \"resourceVersion\": \"624\",\n \"creationTimestamp\": \"2019-11-03T06:55:58Z\",\n \"labels\": {\n \"component\": \"etcd\",\n \"tier\": \"control-plane\"\n },\n \"annotations\": {\n \"kubernetes.io/config.hash\": \"28ba3ba0264772641c791ff01a5eecff\",\n \"kubernetes.io/config.mirror\": \"28ba3ba0264772641c791ff01a5eecff\",\n \"kubernetes.io/config.seen\": \"2019-11-03T06:54:47.502108056Z\",\n \"kubernetes.io/config.source\": \"file\"\n }\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"etcd-certs\",\n \"hostPath\": {\n \"path\": \"/etc/kubernetes/pki/etcd\",\n \"type\": \"DirectoryOrCreate\"\n }\n },\n {\n \"name\": \"etcd-data\",\n \"hostPath\": {\n \"path\": \"/var/lib/etcd\",\n \"type\": \"DirectoryOrCreate\"\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"etcd\",\n \"image\": \"k8s.gcr.io/etcd:3.4.3-0\",\n \"command\": [\n \"etcd\",\n \"--advertise-client-urls=https://172.17.0.3:2379\",\n \"--cert-file=/etc/kubernetes/pki/etcd/server.crt\",\n \"--client-cert-auth=true\",\n \"--data-dir=/var/lib/etcd\",\n \"--initial-advertise-peer-urls=https://172.17.0.3:2380\",\n \"--initial-cluster=kind-control-plane=https://172.17.0.3:2380\",\n \"--key-file=/etc/kubernetes/pki/etcd/server.key\",\n 
\"--listen-client-urls=https://127.0.0.1:2379,https://172.17.0.3:2379\",\n \"--listen-metrics-urls=http://127.0.0.1:2381\",\n \"--listen-peer-urls=https://172.17.0.3:2380\",\n \"--name=kind-control-plane\",\n \"--peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt\",\n \"--peer-client-cert-auth=true\",\n \"--peer-key-file=/etc/kubernetes/pki/etcd/peer.key\",\n \"--peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt\",\n \"--snapshot-count=10000\",\n \"--trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt\"\n ],\n \"resources\": {},\n \"volumeMounts\": [\n {\n \"name\": \"etcd-data\",\n \"mountPath\": \"/var/lib/etcd\"\n },\n {\n \"name\": \"etcd-certs\",\n \"mountPath\": \"/etc/kubernetes/pki/etcd\"\n }\n ],\n \"livenessProbe\": {\n \"httpGet\": {\n \"path\": \"/health\",\n \"port\": 2381,\n \"host\": \"127.0.0.1\",\n \"scheme\": \"HTTP\"\n },\n \"initialDelaySeconds\": 15,\n \"timeoutSeconds\": 15,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 8\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\"\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"nodeName\": \"kind-control-plane\",\n \"hostNetwork\": true,\n \"securityContext\": {},\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n }\n ],\n \"priorityClassName\": \"system-cluster-critical\",\n \"priority\": 2000000000,\n \"enableServiceLinks\": true\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-11-03T06:54:47Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-11-03T06:54:53Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n 
\"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-11-03T06:54:53Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-11-03T06:54:47Z\"\n }\n ],\n \"hostIP\": \"172.17.0.3\",\n \"podIP\": \"172.17.0.3\",\n \"podIPs\": [\n {\n \"ip\": \"172.17.0.3\"\n }\n ],\n \"startTime\": \"2019-11-03T06:54:47Z\",\n \"containerStatuses\": [\n {\n \"name\": \"etcd\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2019-11-03T06:54:52Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"k8s.gcr.io/etcd:3.4.3-0\",\n \"imageID\": \"sha256:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f\",\n \"containerID\": \"containerd://148073813f67a618fee8557d78f6bd1194d7b2ab84f8d4135188f1e1d62ee1f8\",\n \"started\": true\n }\n ],\n \"qosClass\": \"BestEffort\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"kindnet-9g7zl\",\n \"generateName\": \"kindnet-\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/kindnet-9g7zl\",\n \"uid\": \"ff1980f2-2dbd-4e95-b6ac-b6f3e4922a4e\",\n \"resourceVersion\": \"524\",\n \"creationTimestamp\": \"2019-11-03T06:55:33Z\",\n \"labels\": {\n \"app\": \"kindnet\",\n \"controller-revision-hash\": \"775d694485\",\n \"k8s-app\": \"kindnet\",\n \"pod-template-generation\": \"1\",\n \"tier\": \"node\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"DaemonSet\",\n \"name\": \"kindnet\",\n \"uid\": \"6205a1e4-b14b-4964-91f6-b11c04209bbb\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"cni-cfg\",\n \"hostPath\": {\n \"path\": \"/etc/cni/net.d\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"xtables-lock\",\n \"hostPath\": {\n \"path\": \"/run/xtables.lock\",\n \"type\": \"FileOrCreate\"\n }\n },\n {\n \"name\": \"lib-modules\",\n \"hostPath\": {\n \"path\": \"/lib/modules\",\n \"type\": \"\"\n }\n },\n 
{\n \"name\": \"kindnet-token-lsqm7\",\n \"secret\": {\n \"secretName\": \"kindnet-token-lsqm7\",\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"kindnet-cni\",\n \"image\": \"kindest/kindnetd:0.5.3@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555\",\n \"env\": [\n {\n \"name\": \"HOST_IP\",\n \"valueFrom\": {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"status.hostIP\"\n }\n }\n },\n {\n \"name\": \"POD_IP\",\n \"valueFrom\": {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"status.podIP\"\n }\n }\n },\n {\n \"name\": \"POD_SUBNET\",\n \"value\": \"10.244.0.0/16\"\n }\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"100m\",\n \"memory\": \"50Mi\"\n },\n \"requests\": {\n \"cpu\": \"100m\",\n \"memory\": \"50Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"cni-cfg\",\n \"mountPath\": \"/etc/cni/net.d\"\n },\n {\n \"name\": \"xtables-lock\",\n \"mountPath\": \"/run/xtables.lock\"\n },\n {\n \"name\": \"lib-modules\",\n \"readOnly\": true,\n \"mountPath\": \"/lib/modules\"\n },\n {\n \"name\": \"kindnet-token-lsqm7\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"capabilities\": {\n \"add\": [\n \"NET_RAW\",\n \"NET_ADMIN\"\n ]\n },\n \"privileged\": false\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"kindnet\",\n \"serviceAccount\": \"kindnet\",\n \"nodeName\": \"kind-worker2\",\n \"hostNetwork\": true,\n \"securityContext\": {},\n \"affinity\": {\n \"nodeAffinity\": {\n \"requiredDuringSchedulingIgnoredDuringExecution\": {\n \"nodeSelectorTerms\": [\n {\n \"matchFields\": [\n {\n \"key\": \"metadata.name\",\n \"operator\": \"In\",\n \"values\": [\n 
\"kind-worker2\"\n ]\n }\n ]\n }\n ]\n }\n }\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/disk-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/memory-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/pid-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/unschedulable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/network-unavailable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priority\": 0,\n \"enableServiceLinks\": true\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-11-03T06:55:33Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-11-03T06:55:37Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-11-03T06:55:37Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-11-03T06:55:33Z\"\n }\n ],\n \"hostIP\": \"172.17.0.2\",\n \"podIP\": \"172.17.0.2\",\n \"podIPs\": [\n {\n \"ip\": \"172.17.0.2\"\n }\n ],\n \"startTime\": \"2019-11-03T06:55:33Z\",\n \"containerStatuses\": [\n {\n \"name\": \"kindnet-cni\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2019-11-03T06:55:37Z\"\n }\n },\n \"lastState\": {},\n 
\"ready\": true,\n \"restartCount\": 0,\n \"image\": \"sha256:aa67fec7d7ef71445da9a84e9bc88afca2538e9a0aebcba6ef9509b7cf313d17\",\n \"imageID\": \"docker.io/kindest/kindnetd@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555\",\n \"containerID\": \"containerd://def5fc2327f5a64a200c56e7a2f5fa5953086299edd80556bf8c8d926055cfe7\",\n \"started\": true\n }\n ],\n \"qosClass\": \"Guaranteed\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"kindnet-c744w\",\n \"generateName\": \"kindnet-\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/kindnet-c744w\",\n \"uid\": \"35ef4026-034e-4ef5-883d-43692c07aeb9\",\n \"resourceVersion\": \"415\",\n \"creationTimestamp\": \"2019-11-03T06:55:16Z\",\n \"labels\": {\n \"app\": \"kindnet\",\n \"controller-revision-hash\": \"775d694485\",\n \"k8s-app\": \"kindnet\",\n \"pod-template-generation\": \"1\",\n \"tier\": \"node\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"DaemonSet\",\n \"name\": \"kindnet\",\n \"uid\": \"6205a1e4-b14b-4964-91f6-b11c04209bbb\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"cni-cfg\",\n \"hostPath\": {\n \"path\": \"/etc/cni/net.d\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"xtables-lock\",\n \"hostPath\": {\n \"path\": \"/run/xtables.lock\",\n \"type\": \"FileOrCreate\"\n }\n },\n {\n \"name\": \"lib-modules\",\n \"hostPath\": {\n \"path\": \"/lib/modules\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"kindnet-token-lsqm7\",\n \"secret\": {\n \"secretName\": \"kindnet-token-lsqm7\",\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"kindnet-cni\",\n \"image\": \"kindest/kindnetd:0.5.3@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555\",\n \"env\": [\n {\n \"name\": \"HOST_IP\",\n \"valueFrom\": {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"status.hostIP\"\n }\n }\n },\n {\n \"name\": 
\"POD_IP\",\n \"valueFrom\": {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"status.podIP\"\n }\n }\n },\n {\n \"name\": \"POD_SUBNET\",\n \"value\": \"10.244.0.0/16\"\n }\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"100m\",\n \"memory\": \"50Mi\"\n },\n \"requests\": {\n \"cpu\": \"100m\",\n \"memory\": \"50Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"cni-cfg\",\n \"mountPath\": \"/etc/cni/net.d\"\n },\n {\n \"name\": \"xtables-lock\",\n \"mountPath\": \"/run/xtables.lock\"\n },\n {\n \"name\": \"lib-modules\",\n \"readOnly\": true,\n \"mountPath\": \"/lib/modules\"\n },\n {\n \"name\": \"kindnet-token-lsqm7\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"capabilities\": {\n \"add\": [\n \"NET_RAW\",\n \"NET_ADMIN\"\n ]\n },\n \"privileged\": false\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"kindnet\",\n \"serviceAccount\": \"kindnet\",\n \"nodeName\": \"kind-control-plane\",\n \"hostNetwork\": true,\n \"securityContext\": {},\n \"affinity\": {\n \"nodeAffinity\": {\n \"requiredDuringSchedulingIgnoredDuringExecution\": {\n \"nodeSelectorTerms\": [\n {\n \"matchFields\": [\n {\n \"key\": \"metadata.name\",\n \"operator\": \"In\",\n \"values\": [\n \"kind-control-plane\"\n ]\n }\n ]\n }\n ]\n }\n }\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/disk-pressure\",\n 
\"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/memory-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/pid-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/unschedulable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/network-unavailable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priority\": 0,\n \"enableServiceLinks\": true\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-11-03T06:55:16Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-11-03T06:55:19Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-11-03T06:55:19Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-11-03T06:55:16Z\"\n }\n ],\n \"hostIP\": \"172.17.0.3\",\n \"podIP\": \"172.17.0.3\",\n \"podIPs\": [\n {\n \"ip\": \"172.17.0.3\"\n }\n ],\n \"startTime\": \"2019-11-03T06:55:16Z\",\n \"containerStatuses\": [\n {\n \"name\": \"kindnet-cni\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2019-11-03T06:55:19Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"sha256:aa67fec7d7ef71445da9a84e9bc88afca2538e9a0aebcba6ef9509b7cf313d17\",\n \"imageID\": \"docker.io/kindest/kindnetd@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555\",\n \"containerID\": \"containerd://f70b75f70a6e0ff45fceff9eb63dc11acb01a38301d755aac0afe55f920b6ea1\",\n \"started\": true\n }\n ],\n \"qosClass\": \"Guaranteed\"\n }\n },\n {\n \"metadata\": {\n \"name\": 
\"kindnet-zlgk8\",\n \"generateName\": \"kindnet-\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/kindnet-zlgk8\",\n \"uid\": \"89e0dc23-f3fd-4a81-8bf9-e661b598dce9\",\n \"resourceVersion\": \"528\",\n \"creationTimestamp\": \"2019-11-03T06:55:33Z\",\n \"labels\": {\n \"app\": \"kindnet\",\n \"controller-revision-hash\": \"775d694485\",\n \"k8s-app\": \"kindnet\",\n \"pod-template-generation\": \"1\",\n \"tier\": \"node\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"DaemonSet\",\n \"name\": \"kindnet\",\n \"uid\": \"6205a1e4-b14b-4964-91f6-b11c04209bbb\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"cni-cfg\",\n \"hostPath\": {\n \"path\": \"/etc/cni/net.d\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"xtables-lock\",\n \"hostPath\": {\n \"path\": \"/run/xtables.lock\",\n \"type\": \"FileOrCreate\"\n }\n },\n {\n \"name\": \"lib-modules\",\n \"hostPath\": {\n \"path\": \"/lib/modules\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"kindnet-token-lsqm7\",\n \"secret\": {\n \"secretName\": \"kindnet-token-lsqm7\",\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"kindnet-cni\",\n \"image\": \"kindest/kindnetd:0.5.3@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555\",\n \"env\": [\n {\n \"name\": \"HOST_IP\",\n \"valueFrom\": {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"status.hostIP\"\n }\n }\n },\n {\n \"name\": \"POD_IP\",\n \"valueFrom\": {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"status.podIP\"\n }\n }\n },\n {\n \"name\": \"POD_SUBNET\",\n \"value\": \"10.244.0.0/16\"\n }\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"100m\",\n \"memory\": \"50Mi\"\n },\n \"requests\": {\n \"cpu\": \"100m\",\n \"memory\": \"50Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"cni-cfg\",\n \"mountPath\": \"/etc/cni/net.d\"\n },\n {\n 
\"name\": \"xtables-lock\",\n \"mountPath\": \"/run/xtables.lock\"\n },\n {\n \"name\": \"lib-modules\",\n \"readOnly\": true,\n \"mountPath\": \"/lib/modules\"\n },\n {\n \"name\": \"kindnet-token-lsqm7\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"capabilities\": {\n \"add\": [\n \"NET_RAW\",\n \"NET_ADMIN\"\n ]\n },\n \"privileged\": false\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"kindnet\",\n \"serviceAccount\": \"kindnet\",\n \"nodeName\": \"kind-worker\",\n \"hostNetwork\": true,\n \"securityContext\": {},\n \"affinity\": {\n \"nodeAffinity\": {\n \"requiredDuringSchedulingIgnoredDuringExecution\": {\n \"nodeSelectorTerms\": [\n {\n \"matchFields\": [\n {\n \"key\": \"metadata.name\",\n \"operator\": \"In\",\n \"values\": [\n \"kind-worker\"\n ]\n }\n ]\n }\n ]\n }\n }\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/disk-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/memory-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/pid-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/unschedulable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/network-unavailable\",\n 
\"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priority\": 0,\n \"enableServiceLinks\": true\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-11-03T06:55:33Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-11-03T06:55:37Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-11-03T06:55:37Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-11-03T06:55:33Z\"\n }\n ],\n \"hostIP\": \"172.17.0.4\",\n \"podIP\": \"172.17.0.4\",\n \"podIPs\": [\n {\n \"ip\": \"172.17.0.4\"\n }\n ],\n \"startTime\": \"2019-11-03T06:55:33Z\",\n \"containerStatuses\": [\n {\n \"name\": \"kindnet-cni\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2019-11-03T06:55:37Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"sha256:aa67fec7d7ef71445da9a84e9bc88afca2538e9a0aebcba6ef9509b7cf313d17\",\n \"imageID\": \"docker.io/kindest/kindnetd@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555\",\n \"containerID\": \"containerd://5370feae146ed7f40accdfac956b9d5c16a86f8850d98fdaebd145c1eb3d84d4\",\n \"started\": true\n }\n ],\n \"qosClass\": \"Guaranteed\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"kube-apiserver-kind-control-plane\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/kube-apiserver-kind-control-plane\",\n \"uid\": \"997ced57-20c3-450e-b922-f26216c1589a\",\n \"resourceVersion\": \"681\",\n \"creationTimestamp\": \"2019-11-03T06:56:23Z\",\n \"labels\": {\n \"component\": \"kube-apiserver\",\n \"tier\": \"control-plane\"\n },\n \"annotations\": {\n \"kubernetes.io/config.hash\": 
\"1f4a1fd45c079aadc57e77a085cbbea0\",\n \"kubernetes.io/config.mirror\": \"1f4a1fd45c079aadc57e77a085cbbea0\",\n \"kubernetes.io/config.seen\": \"2019-11-03T06:54:47.494648257Z\",\n \"kubernetes.io/config.source\": \"file\"\n }\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"ca-certs\",\n \"hostPath\": {\n \"path\": \"/etc/ssl/certs\",\n \"type\": \"DirectoryOrCreate\"\n }\n },\n {\n \"name\": \"etc-ca-certificates\",\n \"hostPath\": {\n \"path\": \"/etc/ca-certificates\",\n \"type\": \"DirectoryOrCreate\"\n }\n },\n {\n \"name\": \"k8s-certs\",\n \"hostPath\": {\n \"path\": \"/etc/kubernetes/pki\",\n \"type\": \"DirectoryOrCreate\"\n }\n },\n {\n \"name\": \"usr-local-share-ca-certificates\",\n \"hostPath\": {\n \"path\": \"/usr/local/share/ca-certificates\",\n \"type\": \"DirectoryOrCreate\"\n }\n },\n {\n \"name\": \"usr-share-ca-certificates\",\n \"hostPath\": {\n \"path\": \"/usr/share/ca-certificates\",\n \"type\": \"DirectoryOrCreate\"\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"kube-apiserver\",\n \"image\": \"k8s.gcr.io/kube-apiserver:v1.18.0-alpha.0.178_0c66e64b140011\",\n \"command\": [\n \"kube-apiserver\",\n \"--advertise-address=172.17.0.3\",\n \"--allow-privileged=true\",\n \"--authorization-mode=Node,RBAC\",\n \"--client-ca-file=/etc/kubernetes/pki/ca.crt\",\n \"--enable-admission-plugins=NodeRestriction\",\n \"--enable-bootstrap-token-auth=true\",\n \"--etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt\",\n \"--etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt\",\n \"--etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key\",\n \"--etcd-servers=https://127.0.0.1:2379\",\n \"--insecure-port=0\",\n \"--kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt\",\n \"--kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key\",\n \"--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname\",\n \"--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt\",\n 
\"--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key\",\n \"--requestheader-allowed-names=front-proxy-client\",\n \"--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt\",\n \"--requestheader-extra-headers-prefix=X-Remote-Extra-\",\n \"--requestheader-group-headers=X-Remote-Group\",\n \"--requestheader-username-headers=X-Remote-User\",\n \"--secure-port=6443\",\n \"--service-account-key-file=/etc/kubernetes/pki/sa.pub\",\n \"--service-cluster-ip-range=10.96.0.0/12\",\n \"--tls-cert-file=/etc/kubernetes/pki/apiserver.crt\",\n \"--tls-private-key-file=/etc/kubernetes/pki/apiserver.key\"\n ],\n \"resources\": {\n \"requests\": {\n \"cpu\": \"250m\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"ca-certs\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/ssl/certs\"\n },\n {\n \"name\": \"etc-ca-certificates\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/ca-certificates\"\n },\n {\n \"name\": \"k8s-certs\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/kubernetes/pki\"\n },\n {\n \"name\": \"usr-local-share-ca-certificates\",\n \"readOnly\": true,\n \"mountPath\": \"/usr/local/share/ca-certificates\"\n },\n {\n \"name\": \"usr-share-ca-certificates\",\n \"readOnly\": true,\n \"mountPath\": \"/usr/share/ca-certificates\"\n }\n ],\n \"livenessProbe\": {\n \"httpGet\": {\n \"path\": \"/healthz\",\n \"port\": 6443,\n \"host\": \"172.17.0.3\",\n \"scheme\": \"HTTPS\"\n },\n \"initialDelaySeconds\": 15,\n \"timeoutSeconds\": 15,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 8\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\"\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"nodeName\": \"kind-control-plane\",\n \"hostNetwork\": true,\n \"securityContext\": {},\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": 
\"Exists\",\n \"effect\": \"NoExecute\"\n }\n ],\n \"priorityClassName\": \"system-cluster-critical\",\n \"priority\": 2000000000,\n \"enableServiceLinks\": true\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-11-03T06:54:47Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-11-03T06:54:52Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-11-03T06:54:52Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-11-03T06:54:47Z\"\n }\n ],\n \"hostIP\": \"172.17.0.3\",\n \"podIP\": \"172.17.0.3\",\n \"podIPs\": [\n {\n \"ip\": \"172.17.0.3\"\n }\n ],\n \"startTime\": \"2019-11-03T06:54:47Z\",\n \"containerStatuses\": [\n {\n \"name\": \"kube-apiserver\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2019-11-03T06:54:52Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"k8s.gcr.io/kube-apiserver:v1.18.0-alpha.0.178_0c66e64b140011\",\n \"imageID\": \"sha256:2426a7b6d11daa4e4905286a4da3ef60ef4deb2ec9582656a6b93a4fea920802\",\n \"containerID\": \"containerd://6dfb53af0d2ebeef97113c89e213f18044de6dc73dbdd981da8f61e53637ae7c\",\n \"started\": true\n }\n ],\n \"qosClass\": \"Burstable\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"kube-controller-manager-kind-control-plane\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/kube-controller-manager-kind-control-plane\",\n \"uid\": \"94e319b7-415d-4f0e-b8c7-bcdb4b59f373\",\n \"resourceVersion\": \"619\",\n \"creationTimestamp\": \"2019-11-03T06:56:04Z\",\n \"labels\": {\n \"component\": \"kube-controller-manager\",\n \"tier\": \"control-plane\"\n },\n \"annotations\": {\n 
\"kubernetes.io/config.hash\": \"e8347a9972165bc92f6de0a5bf784ce4\",\n \"kubernetes.io/config.mirror\": \"e8347a9972165bc92f6de0a5bf784ce4\",\n \"kubernetes.io/config.seen\": \"2019-11-03T06:54:47.497720149Z\",\n \"kubernetes.io/config.source\": \"file\"\n }\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"ca-certs\",\n \"hostPath\": {\n \"path\": \"/etc/ssl/certs\",\n \"type\": \"DirectoryOrCreate\"\n }\n },\n {\n \"name\": \"etc-ca-certificates\",\n \"hostPath\": {\n \"path\": \"/etc/ca-certificates\",\n \"type\": \"DirectoryOrCreate\"\n }\n },\n {\n \"name\": \"flexvolume-dir\",\n \"hostPath\": {\n \"path\": \"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\n \"type\": \"DirectoryOrCreate\"\n }\n },\n {\n \"name\": \"k8s-certs\",\n \"hostPath\": {\n \"path\": \"/etc/kubernetes/pki\",\n \"type\": \"DirectoryOrCreate\"\n }\n },\n {\n \"name\": \"kubeconfig\",\n \"hostPath\": {\n \"path\": \"/etc/kubernetes/controller-manager.conf\",\n \"type\": \"FileOrCreate\"\n }\n },\n {\n \"name\": \"usr-local-share-ca-certificates\",\n \"hostPath\": {\n \"path\": \"/usr/local/share/ca-certificates\",\n \"type\": \"DirectoryOrCreate\"\n }\n },\n {\n \"name\": \"usr-share-ca-certificates\",\n \"hostPath\": {\n \"path\": \"/usr/share/ca-certificates\",\n \"type\": \"DirectoryOrCreate\"\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"kube-controller-manager\",\n \"image\": \"k8s.gcr.io/kube-controller-manager:v1.18.0-alpha.0.178_0c66e64b140011\",\n \"command\": [\n \"kube-controller-manager\",\n \"--allocate-node-cidrs=true\",\n \"--authentication-kubeconfig=/etc/kubernetes/controller-manager.conf\",\n \"--authorization-kubeconfig=/etc/kubernetes/controller-manager.conf\",\n \"--bind-address=127.0.0.1\",\n \"--client-ca-file=/etc/kubernetes/pki/ca.crt\",\n \"--cluster-cidr=10.244.0.0/16\",\n \"--cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt\",\n \"--cluster-signing-key-file=/etc/kubernetes/pki/ca.key\",\n \"--controllers=*,bootstrapsigner,tokencleaner\",\n 
\"--enable-hostpath-provisioner=true\",\n \"--kubeconfig=/etc/kubernetes/controller-manager.conf\",\n \"--leader-elect=true\",\n \"--node-cidr-mask-size=24\",\n \"--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt\",\n \"--root-ca-file=/etc/kubernetes/pki/ca.crt\",\n \"--service-account-private-key-file=/etc/kubernetes/pki/sa.key\",\n \"--service-cluster-ip-range=10.96.0.0/12\",\n \"--use-service-account-credentials=true\"\n ],\n \"resources\": {\n \"requests\": {\n \"cpu\": \"200m\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"ca-certs\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/ssl/certs\"\n },\n {\n \"name\": \"etc-ca-certificates\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/ca-certificates\"\n },\n {\n \"name\": \"flexvolume-dir\",\n \"mountPath\": \"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\"\n },\n {\n \"name\": \"k8s-certs\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/kubernetes/pki\"\n },\n {\n \"name\": \"kubeconfig\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/kubernetes/controller-manager.conf\"\n },\n {\n \"name\": \"usr-local-share-ca-certificates\",\n \"readOnly\": true,\n \"mountPath\": \"/usr/local/share/ca-certificates\"\n },\n {\n \"name\": \"usr-share-ca-certificates\",\n \"readOnly\": true,\n \"mountPath\": \"/usr/share/ca-certificates\"\n }\n ],\n \"livenessProbe\": {\n \"httpGet\": {\n \"path\": \"/healthz\",\n \"port\": 10252,\n \"host\": \"127.0.0.1\",\n \"scheme\": \"HTTP\"\n },\n \"initialDelaySeconds\": 15,\n \"timeoutSeconds\": 15,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 8\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\"\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"nodeName\": \"kind-control-plane\",\n \"hostNetwork\": true,\n \"securityContext\": {},\n \"schedulerName\": 
\"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n }\n ],\n \"priorityClassName\": \"system-cluster-critical\",\n \"priority\": 2000000000,\n \"enableServiceLinks\": true\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-11-03T06:54:47Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-11-03T06:54:52Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-11-03T06:54:52Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-11-03T06:54:47Z\"\n }\n ],\n \"hostIP\": \"172.17.0.3\",\n \"podIP\": \"172.17.0.3\",\n \"podIPs\": [\n {\n \"ip\": \"172.17.0.3\"\n }\n ],\n \"startTime\": \"2019-11-03T06:54:47Z\",\n \"containerStatuses\": [\n {\n \"name\": \"kube-controller-manager\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2019-11-03T06:54:52Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"k8s.gcr.io/kube-controller-manager:v1.18.0-alpha.0.178_0c66e64b140011\",\n \"imageID\": \"sha256:02815e475f23ef6eb644ebb032649ca65143344d7b6d877868eefb5d98a82315\",\n \"containerID\": \"containerd://43f4a210ae8429366655eeef0fd3ec2421b574ca9c180a26fcdb26be20b1bb80\",\n \"started\": true\n }\n ],\n \"qosClass\": \"Burstable\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"kube-proxy-5qht6\",\n \"generateName\": \"kube-proxy-\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/kube-proxy-5qht6\",\n \"uid\": \"e85e54c8-3e80-43a3-8553-a4ee90da8581\",\n \"resourceVersion\": \"517\",\n \"creationTimestamp\": \"2019-11-03T06:55:33Z\",\n \"labels\": {\n \"controller-revision-hash\": \"7db5c74b55\",\n \"k8s-app\": 
\"kube-proxy\",\n \"pod-template-generation\": \"1\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"DaemonSet\",\n \"name\": \"kube-proxy\",\n \"uid\": \"84867c7a-1a39-453d-9710-49aa83ccb389\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"kube-proxy\",\n \"configMap\": {\n \"name\": \"kube-proxy\",\n \"defaultMode\": 420\n }\n },\n {\n \"name\": \"xtables-lock\",\n \"hostPath\": {\n \"path\": \"/run/xtables.lock\",\n \"type\": \"FileOrCreate\"\n }\n },\n {\n \"name\": \"lib-modules\",\n \"hostPath\": {\n \"path\": \"/lib/modules\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"kube-proxy-token-thc2w\",\n \"secret\": {\n \"secretName\": \"kube-proxy-token-thc2w\",\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"kube-proxy\",\n \"image\": \"k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.178_0c66e64b140011\",\n \"command\": [\n \"/usr/local/bin/kube-proxy\",\n \"--config=/var/lib/kube-proxy/config.conf\",\n \"--hostname-override=$(NODE_NAME)\"\n ],\n \"resources\": {},\n \"volumeMounts\": [\n {\n \"name\": \"kube-proxy\",\n \"mountPath\": \"/var/lib/kube-proxy\"\n },\n {\n \"name\": \"xtables-lock\",\n \"mountPath\": \"/run/xtables.lock\"\n },\n {\n \"name\": \"lib-modules\",\n \"readOnly\": true,\n \"mountPath\": \"/lib/modules\"\n },\n {\n \"name\": \"kube-proxy-token-thc2w\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"privileged\": true\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"nodeSelector\": {\n \"beta.kubernetes.io/os\": \"linux\"\n },\n \"serviceAccountName\": \"kube-proxy\",\n \"serviceAccount\": \"kube-proxy\",\n \"nodeName\": 
\"kind-worker\",\n \"hostNetwork\": true,\n \"securityContext\": {},\n \"affinity\": {\n \"nodeAffinity\": {\n \"requiredDuringSchedulingIgnoredDuringExecution\": {\n \"nodeSelectorTerms\": [\n {\n \"matchFields\": [\n {\n \"key\": \"metadata.name\",\n \"operator\": \"In\",\n \"values\": [\n \"kind-worker\"\n ]\n }\n ]\n }\n ]\n }\n }\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n },\n {\n \"operator\": \"Exists\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/disk-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/memory-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/pid-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/unschedulable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/network-unavailable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priorityClassName\": \"system-node-critical\",\n \"priority\": 2000001000,\n \"enableServiceLinks\": true\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-11-03T06:55:33Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-11-03T06:55:36Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-11-03T06:55:36Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": 
null,\n \"lastTransitionTime\": \"2019-11-03T06:55:33Z\"\n }\n ],\n \"hostIP\": \"172.17.0.4\",\n \"podIP\": \"172.17.0.4\",\n \"podIPs\": [\n {\n \"ip\": \"172.17.0.4\"\n }\n ],\n \"startTime\": \"2019-11-03T06:55:33Z\",\n \"containerStatuses\": [\n {\n \"name\": \"kube-proxy\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2019-11-03T06:55:35Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.178_0c66e64b140011\",\n \"imageID\": \"sha256:9c3ec6d89e95ce27f5698e86a10587dcf6dbba5083c1332fb0dc8833d71499b5\",\n \"containerID\": \"containerd://6bfb24ee98d4ee61b7e865a969e0d8c7f0a1529b8b5bb0c2247d0e15d5e6fde0\",\n \"started\": true\n }\n ],\n \"qosClass\": \"BestEffort\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"kube-proxy-5zhtl\",\n \"generateName\": \"kube-proxy-\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/kube-proxy-5zhtl\",\n \"uid\": \"4e6d32c5-8ec5-41eb-98df-f2c07cac0f92\",\n \"resourceVersion\": \"404\",\n \"creationTimestamp\": \"2019-11-03T06:55:16Z\",\n \"labels\": {\n \"controller-revision-hash\": \"7db5c74b55\",\n \"k8s-app\": \"kube-proxy\",\n \"pod-template-generation\": \"1\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"DaemonSet\",\n \"name\": \"kube-proxy\",\n \"uid\": \"84867c7a-1a39-453d-9710-49aa83ccb389\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"kube-proxy\",\n \"configMap\": {\n \"name\": \"kube-proxy\",\n \"defaultMode\": 420\n }\n },\n {\n \"name\": \"xtables-lock\",\n \"hostPath\": {\n \"path\": \"/run/xtables.lock\",\n \"type\": \"FileOrCreate\"\n }\n },\n {\n \"name\": \"lib-modules\",\n \"hostPath\": {\n \"path\": \"/lib/modules\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"kube-proxy-token-thc2w\",\n \"secret\": {\n \"secretName\": \"kube-proxy-token-thc2w\",\n \"defaultMode\": 420\n }\n }\n ],\n 
\"containers\": [\n {\n \"name\": \"kube-proxy\",\n \"image\": \"k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.178_0c66e64b140011\",\n \"command\": [\n \"/usr/local/bin/kube-proxy\",\n \"--config=/var/lib/kube-proxy/config.conf\",\n \"--hostname-override=$(NODE_NAME)\"\n ],\n \"resources\": {},\n \"volumeMounts\": [\n {\n \"name\": \"kube-proxy\",\n \"mountPath\": \"/var/lib/kube-proxy\"\n },\n {\n \"name\": \"xtables-lock\",\n \"mountPath\": \"/run/xtables.lock\"\n },\n {\n \"name\": \"lib-modules\",\n \"readOnly\": true,\n \"mountPath\": \"/lib/modules\"\n },\n {\n \"name\": \"kube-proxy-token-thc2w\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"privileged\": true\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"nodeSelector\": {\n \"beta.kubernetes.io/os\": \"linux\"\n },\n \"serviceAccountName\": \"kube-proxy\",\n \"serviceAccount\": \"kube-proxy\",\n \"nodeName\": \"kind-control-plane\",\n \"hostNetwork\": true,\n \"securityContext\": {},\n \"affinity\": {\n \"nodeAffinity\": {\n \"requiredDuringSchedulingIgnoredDuringExecution\": {\n \"nodeSelectorTerms\": [\n {\n \"matchFields\": [\n {\n \"key\": \"metadata.name\",\n \"operator\": \"In\",\n \"values\": [\n \"kind-control-plane\"\n ]\n }\n ]\n }\n ]\n }\n }\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n },\n {\n \"operator\": \"Exists\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/disk-pressure\",\n \"operator\": 
\"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/memory-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/pid-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/unschedulable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/network-unavailable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priorityClassName\": \"system-node-critical\",\n \"priority\": 2000001000,\n \"enableServiceLinks\": true\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-11-03T06:55:16Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-11-03T06:55:17Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-11-03T06:55:17Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-11-03T06:55:16Z\"\n }\n ],\n \"hostIP\": \"172.17.0.3\",\n \"podIP\": \"172.17.0.3\",\n \"podIPs\": [\n {\n \"ip\": \"172.17.0.3\"\n }\n ],\n \"startTime\": \"2019-11-03T06:55:16Z\",\n \"containerStatuses\": [\n {\n \"name\": \"kube-proxy\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2019-11-03T06:55:17Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.178_0c66e64b140011\",\n \"imageID\": \"sha256:9c3ec6d89e95ce27f5698e86a10587dcf6dbba5083c1332fb0dc8833d71499b5\",\n \"containerID\": \"containerd://f1384dac83e07cc46ccf6844c94b0ba038b2f8a97c3f3f0977c52eb53faddd98\",\n \"started\": true\n }\n ],\n \"qosClass\": \"BestEffort\"\n }\n },\n {\n \"metadata\": {\n \"name\": 
\"kube-proxy-xzk56\",\n \"generateName\": \"kube-proxy-\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/kube-proxy-xzk56\",\n \"uid\": \"3f611db6-3278-409a-9cc3-d9d82f6f804f\",\n \"resourceVersion\": \"515\",\n \"creationTimestamp\": \"2019-11-03T06:55:33Z\",\n \"labels\": {\n \"controller-revision-hash\": \"7db5c74b55\",\n \"k8s-app\": \"kube-proxy\",\n \"pod-template-generation\": \"1\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"DaemonSet\",\n \"name\": \"kube-proxy\",\n \"uid\": \"84867c7a-1a39-453d-9710-49aa83ccb389\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"kube-proxy\",\n \"configMap\": {\n \"name\": \"kube-proxy\",\n \"defaultMode\": 420\n }\n },\n {\n \"name\": \"xtables-lock\",\n \"hostPath\": {\n \"path\": \"/run/xtables.lock\",\n \"type\": \"FileOrCreate\"\n }\n },\n {\n \"name\": \"lib-modules\",\n \"hostPath\": {\n \"path\": \"/lib/modules\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"kube-proxy-token-thc2w\",\n \"secret\": {\n \"secretName\": \"kube-proxy-token-thc2w\",\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"kube-proxy\",\n \"image\": \"k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.178_0c66e64b140011\",\n \"command\": [\n \"/usr/local/bin/kube-proxy\",\n \"--config=/var/lib/kube-proxy/config.conf\",\n \"--hostname-override=$(NODE_NAME)\"\n ],\n \"resources\": {},\n \"volumeMounts\": [\n {\n \"name\": \"kube-proxy\",\n \"mountPath\": \"/var/lib/kube-proxy\"\n },\n {\n \"name\": \"xtables-lock\",\n \"mountPath\": \"/run/xtables.lock\"\n },\n {\n \"name\": \"lib-modules\",\n \"readOnly\": true,\n \"mountPath\": \"/lib/modules\"\n },\n {\n \"name\": \"kube-proxy-token-thc2w\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n 
\"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"privileged\": true\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"nodeSelector\": {\n \"beta.kubernetes.io/os\": \"linux\"\n },\n \"serviceAccountName\": \"kube-proxy\",\n \"serviceAccount\": \"kube-proxy\",\n \"nodeName\": \"kind-worker2\",\n \"hostNetwork\": true,\n \"securityContext\": {},\n \"affinity\": {\n \"nodeAffinity\": {\n \"requiredDuringSchedulingIgnoredDuringExecution\": {\n \"nodeSelectorTerms\": [\n {\n \"matchFields\": [\n {\n \"key\": \"metadata.name\",\n \"operator\": \"In\",\n \"values\": [\n \"kind-worker2\"\n ]\n }\n ]\n }\n ]\n }\n }\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n },\n {\n \"operator\": \"Exists\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/disk-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/memory-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/pid-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/unschedulable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/network-unavailable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priorityClassName\": \"system-node-critical\",\n \"priority\": 2000001000,\n \"enableServiceLinks\": true\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": 
\"2019-11-03T06:55:33Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-11-03T06:55:36Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-11-03T06:55:36Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-11-03T06:55:33Z\"\n }\n ],\n \"hostIP\": \"172.17.0.2\",\n \"podIP\": \"172.17.0.2\",\n \"podIPs\": [\n {\n \"ip\": \"172.17.0.2\"\n }\n ],\n \"startTime\": \"2019-11-03T06:55:33Z\",\n \"containerStatuses\": [\n {\n \"name\": \"kube-proxy\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2019-11-03T06:55:35Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.178_0c66e64b140011\",\n \"imageID\": \"sha256:9c3ec6d89e95ce27f5698e86a10587dcf6dbba5083c1332fb0dc8833d71499b5\",\n \"containerID\": \"containerd://200a15b1ca5f761d14299ca2a418ec997417e49bd4f8594f370e7c831ad720c1\",\n \"started\": true\n }\n ],\n \"qosClass\": \"BestEffort\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"kube-scheduler-kind-control-plane\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/kube-scheduler-kind-control-plane\",\n \"uid\": \"a3d7edba-dd05-4eb1-a60e-89aa7f662992\",\n \"resourceVersion\": \"655\",\n \"creationTimestamp\": \"2019-11-03T06:56:08Z\",\n \"labels\": {\n \"component\": \"kube-scheduler\",\n \"tier\": \"control-plane\"\n },\n \"annotations\": {\n \"kubernetes.io/config.hash\": \"9170df3c54089a31fddf64a59f662b80\",\n \"kubernetes.io/config.mirror\": \"9170df3c54089a31fddf64a59f662b80\",\n \"kubernetes.io/config.seen\": \"2019-11-03T06:54:47.499594199Z\",\n \"kubernetes.io/config.source\": \"file\"\n }\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"kubeconfig\",\n \"hostPath\": {\n \"path\": \"/etc/kubernetes/scheduler.conf\",\n 
\"type\": \"FileOrCreate\"\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"kube-scheduler\",\n \"image\": \"k8s.gcr.io/kube-scheduler:v1.18.0-alpha.0.178_0c66e64b140011\",\n \"command\": [\n \"kube-scheduler\",\n \"--authentication-kubeconfig=/etc/kubernetes/scheduler.conf\",\n \"--authorization-kubeconfig=/etc/kubernetes/scheduler.conf\",\n \"--bind-address=127.0.0.1\",\n \"--kubeconfig=/etc/kubernetes/scheduler.conf\",\n \"--leader-elect=true\"\n ],\n \"resources\": {\n \"requests\": {\n \"cpu\": \"100m\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"kubeconfig\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/kubernetes/scheduler.conf\"\n }\n ],\n \"livenessProbe\": {\n \"httpGet\": {\n \"path\": \"/healthz\",\n \"port\": 10251,\n \"host\": \"127.0.0.1\",\n \"scheme\": \"HTTP\"\n },\n \"initialDelaySeconds\": 15,\n \"timeoutSeconds\": 15,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 8\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\"\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"nodeName\": \"kind-control-plane\",\n \"hostNetwork\": true,\n \"securityContext\": {},\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n }\n ],\n \"priorityClassName\": \"system-cluster-critical\",\n \"priority\": 2000000000,\n \"enableServiceLinks\": true\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-11-03T06:54:47Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-11-03T06:54:52Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": 
\"2019-11-03T06:54:52Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-11-03T06:54:47Z\"\n }\n ],\n \"hostIP\": \"172.17.0.3\",\n \"podIP\": \"172.17.0.3\",\n \"podIPs\": [\n {\n \"ip\": \"172.17.0.3\"\n }\n ],\n \"startTime\": \"2019-11-03T06:54:47Z\",\n \"containerStatuses\": [\n {\n \"name\": \"kube-scheduler\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2019-11-03T06:54:52Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"k8s.gcr.io/kube-scheduler:v1.18.0-alpha.0.178_0c66e64b140011\",\n \"imageID\": \"sha256:2a08f1cbd3bb32ade0f548031ffcae8b1b79f61c42390f5e0616366bef8cd2e6\",\n \"containerID\": \"containerd://92769dfa4dcf301c44329768efd03ba392d17a85281a560dbf1262965ebd40cc\",\n \"started\": true\n }\n ],\n \"qosClass\": \"Burstable\"\n }\n }\n ]\n}\n==== START logs for container coredns of pod kube-system/coredns-5644d7b6d9-58dzr ====\n.:53\n2019-11-03T06:56:00.372Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76\n2019-11-03T06:56:00.372Z [INFO] CoreDNS-1.6.2\n2019-11-03T06:56:00.372Z [INFO] linux/amd64, go1.12.8, 795a3eb\nCoreDNS-1.6.2\nlinux/amd64, go1.12.8, 795a3eb\n==== END logs for container coredns of pod kube-system/coredns-5644d7b6d9-58dzr ====\n==== START logs for container coredns of pod kube-system/coredns-5644d7b6d9-j9fqd ====\n.:53\n2019-11-03T06:55:59.070Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76\n2019-11-03T06:55:59.071Z [INFO] CoreDNS-1.6.2\n2019-11-03T06:55:59.071Z [INFO] linux/amd64, go1.12.8, 795a3eb\nCoreDNS-1.6.2\nlinux/amd64, go1.12.8, 795a3eb\n==== END logs for container coredns of pod kube-system/coredns-5644d7b6d9-j9fqd ====\n==== START logs for container etcd of pod kube-system/etcd-kind-control-plane ====\n[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead\n2019-11-03 06:54:52.805629 I | etcdmain: etcd 
Version: 3.4.3\n2019-11-03 06:54:52.805695 I | etcdmain: Git SHA: 3cf2f69b5\n2019-11-03 06:54:52.805700 I | etcdmain: Go Version: go1.12.12\n2019-11-03 06:54:52.805704 I | etcdmain: Go OS/Arch: linux/amd64\n2019-11-03 06:54:52.805710 I | etcdmain: setting maximum number of CPUs to 8, total number of available CPUs is 8\n[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead\n2019-11-03 06:54:52.806326 I | embed: peerTLS: cert = /etc/kubernetes/pki/etcd/peer.crt, key = /etc/kubernetes/pki/etcd/peer.key, trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file = \n2019-11-03 06:54:52.809210 I | embed: name = kind-control-plane\n2019-11-03 06:54:52.809227 I | embed: data dir = /var/lib/etcd\n2019-11-03 06:54:52.809232 I | embed: member dir = /var/lib/etcd/member\n2019-11-03 06:54:52.809237 I | embed: heartbeat = 100ms\n2019-11-03 06:54:52.809242 I | embed: election = 1000ms\n2019-11-03 06:54:52.809246 I | embed: snapshot count = 10000\n2019-11-03 06:54:52.809269 I | embed: advertise client URLs = https://172.17.0.3:2379\n2019-11-03 06:54:52.827903 I | etcdserver: starting member b273bc7741bcb020 in cluster 86482fea2286a1d2\nraft2019/11/03 06:54:52 INFO: b273bc7741bcb020 switched to configuration voters=()\nraft2019/11/03 06:54:52 INFO: b273bc7741bcb020 became follower at term 0\nraft2019/11/03 06:54:52 INFO: newRaft b273bc7741bcb020 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]\nraft2019/11/03 06:54:52 INFO: b273bc7741bcb020 became follower at term 1\nraft2019/11/03 06:54:52 INFO: b273bc7741bcb020 switched to configuration voters=(12858828581462913056)\n2019-11-03 06:54:52.838034 W | auth: simple token is not cryptographically signed\n2019-11-03 06:54:52.846438 I | etcdserver: starting server... 
[version: 3.4.3, cluster version: to_be_decided]\n2019-11-03 06:54:52.850283 I | etcdserver: b273bc7741bcb020 as single-node; fast-forwarding 9 ticks (election ticks 10)\nraft2019/11/03 06:54:52 INFO: b273bc7741bcb020 switched to configuration voters=(12858828581462913056)\n2019-11-03 06:54:52.853296 I | etcdserver/membership: added member b273bc7741bcb020 [https://172.17.0.3:2380] to cluster 86482fea2286a1d2\n2019-11-03 06:54:52.853839 I | embed: ClientTLS: cert = /etc/kubernetes/pki/etcd/server.crt, key = /etc/kubernetes/pki/etcd/server.key, trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file = \n2019-11-03 06:54:52.854137 I | embed: listening for metrics on http://127.0.0.1:2381\n2019-11-03 06:54:52.854189 I | embed: listening for peers on 172.17.0.3:2380\nraft2019/11/03 06:54:53 INFO: b273bc7741bcb020 is starting a new election at term 1\nraft2019/11/03 06:54:53 INFO: b273bc7741bcb020 became candidate at term 2\nraft2019/11/03 06:54:53 INFO: b273bc7741bcb020 received MsgVoteResp from b273bc7741bcb020 at term 2\nraft2019/11/03 06:54:53 INFO: b273bc7741bcb020 became leader at term 2\nraft2019/11/03 06:54:53 INFO: raft.node: b273bc7741bcb020 elected leader b273bc7741bcb020 at term 2\n2019-11-03 06:54:53.530715 I | etcdserver: published {Name:kind-control-plane ClientURLs:[https://172.17.0.3:2379]} to cluster 86482fea2286a1d2\n2019-11-03 06:54:53.530905 I | embed: ready to serve client requests\n2019-11-03 06:54:53.531566 I | embed: ready to serve client requests\n2019-11-03 06:54:53.532773 I | etcdserver: setting up the initial cluster version to 3.4\n2019-11-03 06:54:53.534415 I | embed: serving client requests on 172.17.0.3:2379\n2019-11-03 06:54:53.535886 N | etcdserver/membership: set the initial cluster version to 3.4\n2019-11-03 06:54:53.536075 I | etcdserver/api: enabled capabilities for version 3.4\n2019-11-03 06:54:53.537856 I | embed: serving client requests on 127.0.0.1:2379\n2019-11-03 06:56:03.619390 W | etcdserver: 
read-only range request \"key:\\\"/registry/volumeattachments\\\" range_end:\\\"/registry/volumeattachmentt\\\" count_only:true \" with result \"range_response_count:0 size:5\" took too long (296.866635ms) to execute\n2019-11-03 06:56:04.281197 W | etcdserver: request \"header:<ID:12691264902561112603 username:\\\"kube-apiserver-etcd-client\\\" auth_revision:1 > txn:<compare:<target:MOD key:\\\"/registry/leases/kube-system/kube-scheduler\\\" mod_revision:607 > success:<request_put:<key:\\\"/registry/leases/kube-system/kube-scheduler\\\" value_size:224 >> failure:<request_range:<key:\\\"/registry/leases/kube-system/kube-scheduler\\\" > >>\" with result \"size:16\" took too long (116.798096ms) to execute\n2019-11-03 06:56:04.791500 W | etcdserver: read-only range request \"key:\\\"/registry/services/endpoints/kube-system/kube-controller-manager\\\" \" with result \"range_response_count:1 size:458\" took too long (640.896371ms) to execute\n2019-11-03 06:56:05.137386 W | etcdserver: read-only range request \"key:\\\"/registry/resourcequotas/kube-system/\\\" range_end:\\\"/registry/resourcequotas/kube-system0\\\" \" with result \"range_response_count:0 size:5\" took too long (235.609755ms) to execute\n2019-11-03 06:56:05.137655 W | etcdserver: read-only range request \"key:\\\"/registry/clusterrolebindings\\\" range_end:\\\"/registry/clusterrolebindingt\\\" count_only:true \" with result \"range_response_count:0 size:7\" took too long (544.086475ms) to execute\n2019-11-03 06:56:06.365134 W | etcdserver: read-only range request \"key:\\\"/registry/cronjobs\\\" range_end:\\\"/registry/cronjobt\\\" count_only:true \" with result \"range_response_count:0 size:5\" took too long (374.305365ms) to execute\n2019-11-03 06:56:06.501303 W | etcdserver: read-only range request \"key:\\\"/registry/leases/kube-system/kube-controller-manager\\\" \" with result \"range_response_count:1 size:308\" took too long (451.633448ms) to execute\n2019-11-03 06:56:07.841505 W | etcdserver: 
request \"header:<ID:12691264902561112619 username:\\\"kube-apiserver-etcd-client\\\" auth_revision:1 > txn:<compare:<target:MOD key:\\\"/registry/leases/kube-system/kube-controller-manager\\\" mod_revision:612 > success:<request_put:<key:\\\"/registry/leases/kube-system/kube-controller-manager\\\" value_size:233 >> failure:<request_range:<key:\\\"/registry/leases/kube-system/kube-controller-manager\\\" > >>\" with result \"size:16\" took too long (137.886646ms) to execute\n2019-11-03 06:56:08.951570 W | etcdserver: read-only range request \"key:\\\"/registry/validatingwebhookconfigurations\\\" range_end:\\\"/registry/validatingwebhookconfigurationt\\\" count_only:true \" with result \"range_response_count:0 size:5\" took too long (1.721819407s) to execute\n2019-11-03 06:56:09.005191 W | etcdserver: read-only range request \"key:\\\"/registry/podtemplates\\\" range_end:\\\"/registry/podtemplatet\\\" count_only:true \" with result \"range_response_count:0 size:5\" took too long (1.71500298s) to execute\n2019-11-03 06:56:09.150626 W | etcdserver: read-only range request \"key:\\\"/registry/leases/kube-system/kube-scheduler\\\" \" with result \"range_response_count:1 size:290\" took too long (1.070590082s) to execute\n2019-11-03 06:56:09.170236 W | etcdserver: read-only range request \"key:\\\"/registry/pods/kube-system/etcd-kind-control-plane\\\" \" with result \"range_response_count:1 size:1746\" took too long (882.330974ms) to execute\n2019-11-03 06:56:09.170650 W | etcdserver: read-only range request \"key:\\\"/registry/minions/\\\" range_end:\\\"/registry/minions0\\\" \" with result \"range_response_count:3 size:6337\" took too long (1.072831504s) to execute\n2019-11-03 06:56:09.187521 W | etcdserver: read-only range request \"key:\\\"/registry/services/specs\\\" range_end:\\\"/registry/services/spect\\\" count_only:true \" with result \"range_response_count:0 size:7\" took too long (1.335036351s) to execute\n2019-11-03 06:56:09.208551 W | etcdserver: request 
\"header:<ID:12691264902561112624 username:\\\"kube-apiserver-etcd-client\\\" auth_revision:1 > txn:<compare:<target:MOD key:\\\"/registry/pods/kube-system/kube-scheduler-kind-control-plane\\\" mod_revision:0 > success:<request_put:<key:\\\"/registry/pods/kube-system/kube-scheduler-kind-control-plane\\\" value_size:1135 >> failure:<>>\" with result \"size:16\" took too long (289.958858ms) to execute\n2019-11-03 06:56:09.235180 W | etcdserver: read-only range request \"key:\\\"/registry/minions/\\\" range_end:\\\"/registry/minions0\\\" \" with result \"range_response_count:3 size:6337\" took too long (1.065304193s) to execute\n2019-11-03 06:56:09.468325 W | etcdserver: read-only range request \"key:\\\"/registry/rolebindings\\\" range_end:\\\"/registry/rolebindingt\\\" count_only:true \" with result \"range_response_count:0 size:7\" took too long (593.997557ms) to execute\n2019-11-03 06:56:09.621013 W | etcdserver: read-only range request \"key:\\\"/registry/namespaces/default\\\" \" with result \"range_response_count:1 size:172\" took too long (572.558459ms) to execute\n2019-11-03 06:56:09.659063 W | etcdserver: read-only range request \"key:\\\"/registry/events\\\" range_end:\\\"/registry/eventt\\\" count_only:true \" with result \"range_response_count:0 size:7\" took too long (301.942698ms) to execute\n2019-11-03 06:56:10.131457 W | etcdserver: read-only range request \"key:\\\"/registry/horizontalpodautoscalers\\\" range_end:\\\"/registry/horizontalpodautoscalert\\\" count_only:true \" with result \"range_response_count:0 size:5\" took too long (761.186929ms) to execute\n2019-11-03 06:56:10.186991 W | etcdserver: read-only range request \"key:\\\"/registry/services/specs/default/kubernetes\\\" \" with result \"range_response_count:1 size:293\" took too long (349.300215ms) to execute\n2019-11-03 06:56:41.932356 W | etcdserver: read-only range request \"key:\\\"/registry/serviceaccounts/projected-9259/default\\\" \" with result \"range_response_count:1 size:187\" 
took too long (101.059123ms) to execute\n2019-11-03 06:56:43.170686 W | etcdserver: request \"header:<ID:12691264902561113368 username:\\\"kube-apiserver-etcd-client\\\" auth_revision:1 > txn:<compare:<target:MOD key:\\\"/registry/events/projected-9259/labelsupdatea76f124b-e5f7-4d1a-8b3a-fe720a612f7a.15d394b5ae47f8fb\\\" mod_revision:0 > success:<request_put:<key:\\\"/registry/events/projected-9259/labelsupdatea76f124b-e5f7-4d1a-8b3a-fe720a612f7a.15d394b5ae47f8fb\\\" value_size:464 lease:3467892865706336753 >> failure:<>>\" with result \"size:16\" took too long (118.096715ms) to execute\n2019-11-03 06:56:43.210896 W | etcdserver: read-only range request \"key:\\\"/registry/pods/provisioning-4701/hostexec-kind-worker-rlg8k\\\" \" with result \"range_response_count:1 size:1179\" took too long (161.330929ms) to execute\n2019-11-03 06:56:43.211624 W | etcdserver: read-only range request \"key:\\\"/registry/events/security-context-test-3748/\\\" range_end:\\\"/registry/events/security-context-test-37480\\\" \" with result \"range_response_count:2 size:1041\" took too long (159.943901ms) to execute\n2019-11-03 06:56:43.212091 W | etcdserver: read-only range request \"key:\\\"/registry/pods/subpath-245/pod-subpath-test-configmap-4695\\\" \" with result \"range_response_count:1 size:1457\" took too long (162.462305ms) to execute\n2019-11-03 06:56:43.212666 W | etcdserver: read-only range request \"key:\\\"/registry/pods/secrets-7025/pod-secrets-9a17c54c-c946-48dd-801d-db80c105ed14\\\" \" with result \"range_response_count:1 size:1372\" took too long (162.006996ms) to execute\n2019-11-03 06:56:43.213281 W | etcdserver: read-only range request \"key:\\\"/registry/minions/\\\" range_end:\\\"/registry/minions0\\\" \" with result \"range_response_count:3 size:6017\" took too long (164.985179ms) to execute\n2019-11-03 06:56:43.215666 W | etcdserver: read-only range request \"key:\\\"/registry/limitranges/projected-584/\\\" range_end:\\\"/registry/limitranges/projected-5840\\\" 
\" with result \"range_response_count:0 size:5\" took too long (167.442626ms) to execute\n2019-11-03 06:56:43.216099 W | etcdserver: read-only range request \"key:\\\"/registry/pods/emptydir-8834/pod-732c67a7-a2f7-450b-af90-fc5f6c72e12e\\\" \" with result \"range_response_count:1 size:1314\" took too long (165.866873ms) to execute\n2019-11-03 06:56:43.217061 W | etcdserver: read-only range request \"key:\\\"/registry/limitranges/sysctl-2435/\\\" range_end:\\\"/registry/limitranges/sysctl-24350\\\" \" with result \"range_response_count:0 size:5\" took too long (167.02374ms) to execute\n2019-11-03 06:56:43.218225 W | etcdserver: read-only range request \"key:\\\"/registry/services/endpoints/kube-system/kube-controller-manager\\\" \" with result \"range_response_count:1 size:458\" took too long (133.565474ms) to execute\n2019-11-03 06:56:43.218775 W | etcdserver: read-only range request \"key:\\\"/registry/pods/volumemode-7875/hostexec-kind-worker2-2c9lb\\\" \" with result \"range_response_count:1 size:769\" took too long (101.228358ms) to execute\n2019-11-03 06:56:46.546340 W | etcdserver: read-only range request \"key:\\\"/registry/secrets/resourcequota-8261/default-token-9dg84\\\" \" with result \"range_response_count:1 size:2379\" took too long (116.811295ms) to execute\n2019-11-03 06:56:47.280300 W | etcdserver: request \"header:<ID:12691264902561113599 username:\\\"kube-apiserver-etcd-client\\\" auth_revision:1 > txn:<compare:<target:MOD key:\\\"/registry/leases/kube-system/kube-scheduler\\\" mod_revision:1062 > success:<request_put:<key:\\\"/registry/leases/kube-system/kube-scheduler\\\" value_size:225 >> failure:<request_range:<key:\\\"/registry/leases/kube-system/kube-scheduler\\\" > >>\" with result \"size:16\" took too long (279.455588ms) to execute\n2019-11-03 06:56:47.280466 W | etcdserver: read-only range request \"key:\\\"/registry/podtemplates/resourcequota-8261/\\\" range_end:\\\"/registry/podtemplates/resourcequota-82610\\\" \" with result 
\"range_response_count:0 size:5\" took too long (305.818006ms) to execute\n2019-11-03 06:56:47.281482 W | etcdserver: read-only range request \"key:\\\"/registry/pods/provisioning-4701/hostexec-kind-worker-rlg8k\\\" \" with result \"range_response_count:1 size:1179\" took too long (306.813663ms) to execute\n2019-11-03 06:56:47.433815 W | etcdserver: read-only range request \"key:\\\"/registry/pods/nettest-9766/netserver-0\\\" \" with result \"range_response_count:1 size:967\" took too long (146.895497ms) to execute\n2019-11-03 06:56:47.434304 W | etcdserver: read-only range request \"key:\\\"/registry/replicasets/resourcequota-8261/\\\" range_end:\\\"/registry/replicasets/resourcequota-82610\\\" \" with result \"range_response_count:0 size:5\" took too long (147.855817ms) to execute\n2019-11-03 06:56:47.489011 W | etcdserver: read-only range request \"key:\\\"/registry/pods/volumemode-7875/hostexec-kind-worker2-2c9lb\\\" \" with result \"range_response_count:1 size:1179\" took too long (311.579317ms) to execute\n2019-11-03 06:56:47.504081 W | etcdserver: read-only range request \"key:\\\"/registry/pods/subpath-245/pod-subpath-test-configmap-4695\\\" \" with result \"range_response_count:1 size:1457\" took too long (161.535991ms) to execute\n2019-11-03 06:56:47.508515 W | etcdserver: read-only range request \"key:\\\"/registry/pods/secrets-7025/pod-secrets-9a17c54c-c946-48dd-801d-db80c105ed14\\\" \" with result \"range_response_count:1 size:1372\" took too long (161.27312ms) to execute\n2019-11-03 06:56:47.548158 W | etcdserver: read-only range request \"key:\\\"/registry/pods/dns-6138/dns-test-19bb20a7-9ca3-455a-b873-75aa75d43cd9\\\" \" with result \"range_response_count:1 size:1675\" took too long (433.540429ms) to execute\n2019-11-03 06:56:47.548448 W | etcdserver: read-only range request \"key:\\\"/registry/pods/provisioning-9812/hostexec-kind-worker2-74fgs\\\" \" with result \"range_response_count:1 size:1184\" took too long (249.343281ms) to 
execute\n2019-11-03 06:56:47.549312 W | etcdserver: read-only range request \"key:\\\"/registry/pods/persistent-local-volumes-test-3239/hostexec-kind-worker-ftnbr\\\" \" with result \"range_response_count:1 size:804\" took too long (184.35822ms) to execute\n2019-11-03 06:56:47.549640 W | etcdserver: read-only range request \"key:\\\"/registry/events/sysctl-2435/\\\" range_end:\\\"/registry/events/sysctl-24350\\\" \" with result \"range_response_count:2 size:1106\" took too long (133.304815ms) to execute\n2019-11-03 06:56:47.549864 W | etcdserver: read-only range request \"key:\\\"/registry/pods/emptydir-8834/pod-732c67a7-a2f7-450b-af90-fc5f6c72e12e\\\" \" with result \"range_response_count:1 size:1314\" took too long (162.494078ms) to execute\n2019-11-03 06:56:47.957819 W | etcdserver: request \"header:<ID:12691264902561113616 username:\\\"kube-apiserver-etcd-client\\\" auth_revision:1 > txn:<compare:<target:MOD key:\\\"/registry/pods/nettest-9766/netserver-1\\\" mod_revision:948 > success:<request_put:<key:\\\"/registry/pods/nettest-9766/netserver-1\\\" value_size:1258 >> failure:<request_range:<key:\\\"/registry/pods/nettest-9766/netserver-1\\\" > >>\" with result \"size:16\" took too long (207.01761ms) to execute\n2019-11-03 06:56:47.958507 W | etcdserver: read-only range request \"key:\\\"/registry/leases/kube-system/kube-controller-manager\\\" \" with result \"range_response_count:1 size:309\" took too long (310.706715ms) to execute\n2019-11-03 06:56:48.224387 W | etcdserver: read-only range request \"key:\\\"/registry/deployments/webhook-8732/sample-webhook-deployment\\\" \" with result \"range_response_count:1 size:1248\" took too long (563.567025ms) to execute\n2019-11-03 06:56:48.224583 W | etcdserver: read-only range request \"key:\\\"/registry/namespaces/volume-placement-9859\\\" \" with result \"range_response_count:1 size:302\" took too long (300.065972ms) to execute\n2019-11-03 06:56:48.225204 W | etcdserver: read-only range request 
\"key:\\\"/registry/pods/downward-api-2755/downwardapi-volume-f0891c63-2747-44c8-a22c-9f7605603b7a\\\" \" with result \"range_response_count:1 size:1606\" took too long (283.782108ms) to execute\n2019-11-03 06:56:48.226154 W | etcdserver: read-only range request \"key:\\\"/registry/pods/persistent-local-volumes-test-2540/hostexec-kind-worker-wrt76\\\" \" with result \"range_response_count:1 size:804\" took too long (456.953139ms) to execute\n2019-11-03 06:56:48.226400 W | etcdserver: read-only range request \"key:\\\"/registry/persistentvolumeclaims/resourcequota-8261/\\\" range_end:\\\"/registry/persistentvolumeclaims/resourcequota-82610\\\" \" with result \"range_response_count:0 size:5\" took too long (565.063423ms) to execute\n2019-11-03 06:56:48.226644 W | etcdserver: request \"header:<ID:12691264902561113619 username:\\\"kube-apiserver-etcd-client\\\" auth_revision:1 > txn:<compare:<target:MOD key:\\\"/registry/leases/kube-system/kube-controller-manager\\\" mod_revision:1080 > success:<request_put:<key:\\\"/registry/leases/kube-system/kube-controller-manager\\\" value_size:234 >> failure:<request_range:<key:\\\"/registry/leases/kube-system/kube-controller-manager\\\" > >>\" with result \"size:16\" took too long (114.320096ms) to execute\n2019-11-03 06:56:48.226870 W | etcdserver: read-only range request \"key:\\\"/registry/pods/projected-584/metadata-volume-88205ea3-563d-478d-93dd-33c1769ae219\\\" \" with result \"range_response_count:1 size:1156\" took too long (258.533846ms) to execute\n2019-11-03 06:56:48.227414 W | etcdserver: read-only range request \"key:\\\"/registry/pods/nettest-9766/netserver-0\\\" \" with result \"range_response_count:1 size:1318\" took too long (142.269524ms) to execute\n2019-11-03 06:56:48.227755 W | etcdserver: read-only range request \"key:\\\"/registry/pods/dns-6138/dns-test-19bb20a7-9ca3-455a-b873-75aa75d43cd9\\\" \" with result \"range_response_count:1 size:2285\" took too long (221.391068ms) to execute\n2019-11-03 
06:56:48.227989 W | etcdserver: read-only range request \"key:\\\"/registry/pods/persistent-local-volumes-test-3239/hostexec-kind-worker-ftnbr\\\" \" with result \"range_response_count:1 size:804\" took too long (221.667068ms) to execute\n2019-11-03 06:56:48.228174 W | etcdserver: read-only range request \"key:\\\"/registry/pods/emptydir-1334/pod-57880e8f-988b-41d9-acd1-1e93dda8679c\\\" \" with result \"range_response_count:1 size:1316\" took too long (244.069218ms) to execute\n2019-11-03 06:56:52.722487 W | etcdserver: read-only range request \"key:\\\"/registry/minions/\\\" range_end:\\\"/registry/minions0\\\" \" with result \"range_response_count:3 size:6017\" took too long (102.667364ms) to execute\n2019-11-03 06:57:08.192340 W | etcdserver: request \"header:<ID:12691264902561116460 username:\\\"kube-apiserver-etcd-client\\\" auth_revision:1 > txn:<compare:<target:MOD key:\\\"/registry/serviceaccounts/persistent-local-volumes-test-9640/default\\\" mod_revision:1225 > success:<request_put:<key:\\\"/registry/serviceaccounts/persistent-local-volumes-test-9640/default\\\" value_size:137 >> failure:<request_range:<key:\\\"/registry/serviceaccounts/persistent-local-volumes-test-9640/default\\\" > >>\" with result \"size:16\" took too long (120.732474ms) to execute\n2019-11-03 06:57:08.195442 W | etcdserver: read-only range request \"key:\\\"/registry/persistentvolumeclaims/csi-mock-volumes-2275/pvc-hrtld\\\" \" with result \"range_response_count:1 size:425\" took too long (190.7146ms) to execute\n2019-11-03 06:57:08.195660 W | etcdserver: read-only range request \"key:\\\"/registry/events/kubelet-9511/cleanup20-d9990ddd-00cb-499c-a521-caf29622374f.15d394bb7a6b0944\\\" \" with result \"range_response_count:1 size:611\" took too long (200.163579ms) to execute\n2019-11-03 06:57:08.196082 W | etcdserver: read-only range request \"key:\\\"/registry/secrets/persistent-local-volumes-test-9640/\\\" range_end:\\\"/registry/secrets/persistent-local-volumes-test-96400\\\" \" 
with result \"range_response_count:0 size:5\" took too long (214.529803ms) to execute\n2019-11-03 06:57:08.245212 W | etcdserver: read-only range request \"key:\\\"/registry/secrets/persistent-local-volumes-test-2540/default-token-fl4gl\\\" \" with result \"range_response_count:1 size:2470\" took too long (263.691183ms) to execute\n2019-11-03 06:57:08.247268 W | etcdserver: read-only range request \"key:\\\"/registry/pods/dns-6138/dns-test-19bb20a7-9ca3-455a-b873-75aa75d43cd9\\\" \" with result \"range_response_count:1 size:2285\" took too long (174.374856ms) to execute\n2019-11-03 06:57:08.249998 W | etcdserver: read-only range request \"key:\\\"/registry/poddisruptionbudgets/disruption-6726/foo\\\" \" with result \"range_response_count:1 size:240\" took too long (141.984371ms) to execute\n2019-11-03 06:57:08.252369 W | etcdserver: read-only range request \"key:\\\"/registry/serviceaccounts/exempted-namesapce/default\\\" \" with result \"range_response_count:1 size:195\" took too long (169.83863ms) to execute\n2019-11-03 06:57:08.253525 W | etcdserver: read-only range request \"key:\\\"/registry/pods/nettest-9766/netserver-0\\\" \" with result \"range_response_count:1 size:1524\" took too long (143.321956ms) to execute\n2019-11-03 06:57:08.359488 W | etcdserver: read-only range request \"key:\\\"/registry/daemonsets/persistent-local-volumes-test-9640/\\\" range_end:\\\"/registry/daemonsets/persistent-local-volumes-test-96400\\\" \" with result \"range_response_count:0 size:5\" took too long (102.296997ms) to execute\n2019-11-03 06:57:08.945462 W | etcdserver: read-only range request \"key:\\\"/registry/poddisruptionbudgets/persistent-local-volumes-test-2540/\\\" range_end:\\\"/registry/poddisruptionbudgets/persistent-local-volumes-test-25400\\\" \" with result \"range_response_count:0 size:5\" took too long (125.303926ms) to execute\n2019-11-03 06:57:08.954660 W | etcdserver: read-only range request \"key:\\\"/registry/pods/persistent-local-volumes-test-9640/\\\" 
range_end:\\\"/registry/pods/persistent-local-volumes-test-96400\\\" \" with result \"range_response_count:0 size:5\" took too long (124.64033ms) to execute\n2019-11-03 06:57:11.163495 W | etcdserver: read-only range request \"key:\\\"/registry/controllerrevisions/webhook-8732/\\\" range_end:\\\"/registry/controllerrevisions/webhook-87320\\\" \" with result \"range_response_count:0 size:5\" took too long (108.572314ms) to execute\n2019-11-03 06:57:11.177702 W | etcdserver: read-only range request \"key:\\\"/registry/daemonsets/webhook-2475/\\\" range_end:\\\"/registry/daemonsets/webhook-24750\\\" \" with result \"range_response_count:0 size:5\" took too long (111.415552ms) to execute\n2019-11-03 06:57:11.183740 W | etcdserver: read-only range request \"key:\\\"/registry/roles/custom-resource-definition-3847/\\\" range_end:\\\"/registry/roles/custom-resource-definition-38470\\\" \" with result \"range_response_count:0 size:5\" took too long (105.968503ms) to execute\n2019-11-03 06:57:11.185249 W | etcdserver: read-only range request \"key:\\\"/registry/limitranges/pods-975/\\\" range_end:\\\"/registry/limitranges/pods-9750\\\" \" with result \"range_response_count:0 size:5\" took too long (110.514482ms) to execute\n2019-11-03 06:57:11.185488 W | etcdserver: read-only range request \"key:\\\"/registry/events/sysctl-2435/sysctl-d8873cbb-bd33-403f-acbe-f91a96b847ba.15d394b61c8322f2\\\" \" with result \"range_response_count:1 size:549\" took too long (110.988309ms) to execute\n2019-11-03 06:57:11.185696 W | etcdserver: read-only range request \"key:\\\"/registry/secrets/webhook-8732-markers/\\\" range_end:\\\"/registry/secrets/webhook-8732-markers0\\\" \" with result \"range_response_count:0 size:5\" took too long (111.379694ms) to execute\n2019-11-03 06:57:11.188618 W | etcdserver: read-only range request \"key:\\\"/registry/ingress/webhook-2475-markers/\\\" range_end:\\\"/registry/ingress/webhook-2475-markers0\\\" \" with result \"range_response_count:0 size:5\" took 
too long (122.169596ms) to execute\n2019-11-03 06:57:11.191589 W | etcdserver: read-only range request \"key:\\\"/registry/serviceaccounts/webhook-8732-markers/default\\\" \" with result \"range_response_count:1 size:235\" took too long (112.747676ms) to execute\n2019-11-03 06:57:11.192155 W | etcdserver: read-only range request \"key:\\\"/registry/resourcequotas/exempted-namesapce/\\\" range_end:\\\"/registry/resourcequotas/exempted-namesapce0\\\" \" with result \"range_response_count:0 size:5\" took too long (118.075205ms) to execute\n2019-11-03 06:57:12.709618 W | etcdserver: read-only range request \"key:\\\"/registry/services/specs/containers-6399/\\\" range_end:\\\"/registry/services/specs/containers-63990\\\" \" with result \"range_response_count:0 size:5\" took too long (109.797624ms) to execute\n2019-11-03 06:57:12.710491 W | etcdserver: read-only range request \"key:\\\"/registry/minions/kind-control-plane\\\" \" with result \"range_response_count:1 size:2083\" took too long (106.487404ms) to execute\n2019-11-03 06:57:12.711195 W | etcdserver: read-only range request \"key:\\\"/registry/pods/webhook-2475/\\\" range_end:\\\"/registry/pods/webhook-24750\\\" \" with result \"range_response_count:1 size:1372\" took too long (162.717533ms) to execute\n2019-11-03 06:57:12.712641 W | etcdserver: read-only range request \"key:\\\"/registry/rolebindings/configmap-5652/\\\" range_end:\\\"/registry/rolebindings/configmap-56520\\\" \" with result \"range_response_count:0 size:5\" took too long (110.298488ms) to execute\n==== END logs for container etcd of pod kube-system/etcd-kind-control-plane ====\n==== START logs for container kindnet-cni of pod kube-system/kindnet-9g7zl ====\nI1103 06:55:37.536879 1 main.go:64] hostIP = 172.17.0.2\npodIP = 172.17.0.2\nI1103 06:56:07.594234 1 main.go:104] Failed to get nodes, retrying after error: Get https://10.96.0.1:443/api/v1/nodes: dial tcp 10.96.0.1:443: i/o timeout\nI1103 06:56:10.350428 1 main.go:161] Handling node with 
IP: 172.17.0.3\nI1103 06:56:10.350470 1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI1103 06:56:10.350624 1 routes.go:47] Adding route {Ifindex: 0 Dst: 10.244.0.0/24 Src: <nil> Gw: 172.17.0.3 Flags: [] Table: 0} \nI1103 06:56:10.350681 1 main.go:161] Handling node with IP: 172.17.0.4\nI1103 06:56:10.350687 1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI1103 06:56:10.350807 1 routes.go:47] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 172.17.0.4 Flags: [] Table: 0} \nI1103 06:56:10.350915 1 main.go:150] handling current node\nI1103 06:56:20.360335 1 main.go:161] Handling node with IP: 172.17.0.3\nI1103 06:56:20.360373 1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI1103 06:56:20.360490 1 main.go:161] Handling node with IP: 172.17.0.4\nI1103 06:56:20.360496 1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI1103 06:56:20.360544 1 main.go:150] handling current node\nI1103 06:56:30.436874 1 main.go:161] Handling node with IP: 172.17.0.3\nI1103 06:56:30.436904 1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI1103 06:56:30.437017 1 main.go:161] Handling node with IP: 172.17.0.4\nI1103 06:56:30.437025 1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI1103 06:56:30.437084 1 main.go:150] handling current node\nI1103 06:56:40.490020 1 main.go:161] Handling node with IP: 172.17.0.3\nI1103 06:56:40.490085 1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI1103 06:56:40.490285 1 main.go:161] Handling node with IP: 172.17.0.4\nI1103 06:56:40.490313 1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI1103 06:56:40.490396 1 main.go:150] handling current node\nI1103 06:56:50.537207 1 main.go:161] Handling node with IP: 172.17.0.3\nI1103 06:56:50.537353 1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI1103 06:56:50.537622 1 main.go:161] Handling node with IP: 172.17.0.4\nI1103 06:56:50.537633 1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI1103 
06:56:50.537735 1 main.go:150] handling current node\nI1103 06:57:00.544743 1 main.go:161] Handling node with IP: 172.17.0.3\nI1103 06:57:00.544772 1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI1103 06:57:00.544952 1 main.go:161] Handling node with IP: 172.17.0.4\nI1103 06:57:00.544959 1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI1103 06:57:00.545034 1 main.go:150] handling current node\nI1103 06:57:10.610142 1 main.go:161] Handling node with IP: 172.17.0.3\nI1103 06:57:10.610186 1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI1103 06:57:10.610723 1 main.go:161] Handling node with IP: 172.17.0.4\nI1103 06:57:10.610739 1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI1103 06:57:10.636209 1 main.go:150] handling current node\n==== END logs for container kindnet-cni of pod kube-system/kindnet-9g7zl ====\n==== START logs for container kindnet-cni of pod kube-system/kindnet-c744w ====\nI1103 06:55:19.937535 1 main.go:64] hostIP = 172.17.0.3\npodIP = 172.17.0.3\nI1103 06:55:49.943195 1 main.go:104] Failed to get nodes, retrying after error: Get https://10.96.0.1:443/api/v1/nodes: dial tcp 10.96.0.1:443: i/o timeout\nI1103 06:55:50.042258 1 main.go:150] handling current node\nI1103 06:55:50.138649 1 main.go:161] Handling node with IP: 172.17.0.4\nI1103 06:55:50.139013 1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI1103 06:55:50.139603 1 routes.go:47] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 172.17.0.4 Flags: [] Table: 0} \nI1103 06:55:50.140147 1 main.go:161] Handling node with IP: 172.17.0.2\nI1103 06:55:50.140250 1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI1103 06:55:50.140467 1 routes.go:47] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.17.0.2 Flags: [] Table: 0} \nI1103 06:56:00.147236 1 main.go:150] handling current node\nI1103 06:56:00.147284 1 main.go:161] Handling node with IP: 172.17.0.4\nI1103 06:56:00.147293 1 main.go:162] Node kind-worker has 
CIDR 10.244.2.0/24 \nI1103 06:56:00.147645 1 main.go:161] Handling node with IP: 172.17.0.2\nI1103 06:56:00.147655 1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI1103 06:56:10.367012 1 main.go:150] handling current node\nI1103 06:56:10.367056 1 main.go:161] Handling node with IP: 172.17.0.4\nI1103 06:56:10.367066 1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI1103 06:56:10.367225 1 main.go:161] Handling node with IP: 172.17.0.2\nI1103 06:56:10.367233 1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI1103 06:56:20.443046 1 main.go:150] handling current node\nI1103 06:56:20.443081 1 main.go:161] Handling node with IP: 172.17.0.4\nI1103 06:56:20.443087 1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI1103 06:56:20.443426 1 main.go:161] Handling node with IP: 172.17.0.2\nI1103 06:56:20.443452 1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI1103 06:56:30.448059 1 main.go:150] handling current node\nI1103 06:56:30.448098 1 main.go:161] Handling node with IP: 172.17.0.4\nI1103 06:56:30.448103 1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI1103 06:56:30.448209 1 main.go:161] Handling node with IP: 172.17.0.2\nI1103 06:56:30.448214 1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI1103 06:56:40.537408 1 main.go:150] handling current node\nI1103 06:56:40.537889 1 main.go:161] Handling node with IP: 172.17.0.4\nI1103 06:56:40.538057 1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI1103 06:56:40.538349 1 main.go:161] Handling node with IP: 172.17.0.2\nI1103 06:56:40.539291 1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI1103 06:56:50.546877 1 main.go:150] handling current node\nI1103 06:56:50.547027 1 main.go:161] Handling node with IP: 172.17.0.4\nI1103 06:56:50.547059 1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI1103 06:56:50.547244 1 main.go:161] Handling node with IP: 172.17.0.2\nI1103 06:56:50.547272 1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI1103 
06:57:00.552494 1 main.go:150] handling current node\nI1103 06:57:00.552535 1 main.go:161] Handling node with IP: 172.17.0.4\nI1103 06:57:00.552541 1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI1103 06:57:00.552694 1 main.go:161] Handling node with IP: 172.17.0.2\nI1103 06:57:00.552710 1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI1103 06:57:10.598647 1 main.go:150] handling current node\nI1103 06:57:10.635886 1 main.go:161] Handling node with IP: 172.17.0.4\nI1103 06:57:10.636263 1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI1103 06:57:10.636603 1 main.go:161] Handling node with IP: 172.17.0.2\nI1103 06:57:10.636766 1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \n==== END logs for container kindnet-cni of pod kube-system/kindnet-c744w ====\n==== START logs for container kindnet-cni of pod kube-system/kindnet-zlgk8 ====\nI1103 06:55:37.540665 1 main.go:64] hostIP = 172.17.0.4\npodIP = 172.17.0.4\nI1103 06:56:07.644679 1 main.go:104] Failed to get nodes, retrying after error: Get https://10.96.0.1:443/api/v1/nodes: dial tcp 10.96.0.1:443: i/o timeout\nI1103 06:56:10.320314 1 main.go:161] Handling node with IP: 172.17.0.3\nI1103 06:56:10.320440 1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI1103 06:56:10.320881 1 routes.go:47] Adding route {Ifindex: 0 Dst: 10.244.0.0/24 Src: <nil> Gw: 172.17.0.3 Flags: [] Table: 0} \nI1103 06:56:10.320988 1 main.go:150] handling current node\nI1103 06:56:10.348377 1 main.go:161] Handling node with IP: 172.17.0.2\nI1103 06:56:10.348775 1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI1103 06:56:10.350623 1 routes.go:47] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.17.0.2 Flags: [] Table: 0} \nI1103 06:56:20.436915 1 main.go:161] Handling node with IP: 172.17.0.3\nI1103 06:56:20.436949 1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI1103 06:56:20.437074 1 main.go:150] handling current node\nI1103 06:56:20.437087 1 main.go:161] 
Handling node with IP: 172.17.0.2\nI1103 06:56:20.437093 1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI1103 06:56:30.538606 1 main.go:161] Handling node with IP: 172.17.0.3\nI1103 06:56:30.538654 1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI1103 06:56:30.538829 1 main.go:150] handling current node\nI1103 06:56:30.538848 1 main.go:161] Handling node with IP: 172.17.0.2\nI1103 06:56:30.538854 1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI1103 06:56:40.637327 1 main.go:161] Handling node with IP: 172.17.0.3\nI1103 06:56:40.637367 1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI1103 06:56:40.637507 1 main.go:150] handling current node\nI1103 06:56:40.637524 1 main.go:161] Handling node with IP: 172.17.0.2\nI1103 06:56:40.637529 1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI1103 06:56:50.737244 1 main.go:161] Handling node with IP: 172.17.0.3\nI1103 06:56:50.737298 1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI1103 06:56:50.737579 1 main.go:150] handling current node\nI1103 06:56:50.737597 1 main.go:161] Handling node with IP: 172.17.0.2\nI1103 06:56:50.737604 1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI1103 06:57:00.837628 1 main.go:161] Handling node with IP: 172.17.0.3\nI1103 06:57:00.837673 1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI1103 06:57:00.837933 1 main.go:150] handling current node\nI1103 06:57:00.837949 1 main.go:161] Handling node with IP: 172.17.0.2\nI1103 06:57:00.837954 1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI1103 06:57:10.881409 1 main.go:161] Handling node with IP: 172.17.0.3\nI1103 06:57:10.881442 1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI1103 06:57:10.881817 1 main.go:150] handling current node\nI1103 06:57:10.881837 1 main.go:161] Handling node with IP: 172.17.0.2\nI1103 06:57:10.881842 1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \n==== END logs for container 
kindnet-cni of pod kube-system/kindnet-zlgk8 ====\n==== START logs for container kube-apiserver of pod kube-system/kube-apiserver-kind-control-plane ====\nFlag --insecure-port has been deprecated, This flag will be removed in a future version.\nI1103 06:54:52.666415 1 server.go:622] external host was not specified, using 172.17.0.3\nI1103 06:54:52.666784 1 server.go:149] Version: v1.18.0-alpha.0.178+0c66e64b140011\nI1103 06:54:53.211248 1 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.\nI1103 06:54:53.211398 1 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.\nI1103 06:54:53.212298 1 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.\nI1103 06:54:53.212322 1 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.\nI1103 06:54:53.215239 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:53.215308 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:53.555317 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:53.555441 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:53.570290 1 client.go:361] parsed scheme: 
\"endpoint\"\nI1103 06:54:53.570656 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:53.631866 1 master.go:261] Using reconciler: lease\nI1103 06:54:53.632545 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:53.632586 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:53.650006 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:53.650055 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:53.663041 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:53.663314 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:53.674482 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:53.674542 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:53.688905 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:53.688947 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:53.706200 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:53.706382 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:53.721174 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:53.721228 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:53.739058 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:53.739122 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:53.750401 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:53.750721 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:53.761065 1 client.go:361] parsed scheme: 
\"endpoint\"\nI1103 06:54:53.761109 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:53.775062 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:53.775122 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:53.791290 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:53.791569 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:53.803054 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:53.803250 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:53.823178 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:53.823243 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:53.839707 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:53.839759 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:53.851916 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:53.852210 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:53.868618 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:53.868661 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:53.880366 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:53.880409 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:53.892023 1 rest.go:115] the default service ipfamily for this cluster is: IPv4\nI1103 06:54:54.018906 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:54.018954 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:54.030410 1 client.go:361] 
parsed scheme: \"endpoint\"\nI1103 06:54:54.030748 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:54.045561 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:54.045647 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:54.058874 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:54.058913 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:54.081082 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:54.081473 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:54.094161 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:54.094204 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:54.104250 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:54.104302 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:54.116382 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:54.116692 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:54.128949 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:54.129300 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:54.140299 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:54.140341 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:54.152265 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:54.152337 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:54.164890 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:54.164939 1 endpoint.go:68] 
ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:54.176474 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:54.176530 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:54.190502 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:54.190561 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:54.202116 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:54.202170 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:54.210816 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:54.210870 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:54.212515 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:54.212640 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:54.225591 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:54.225645 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:54.235091 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:54.235339 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:54.248131 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:54.248169 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:54.259122 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:54.259177 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:54.267920 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:54.267958 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 
<nil>}]\nI1103 06:54:54.279559 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:54.279697 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:54.292084 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:54.292153 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:54.302849 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:54.302913 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:54.314718 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:54.314775 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:54.325330 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:54.325379 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:54.345292 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:54.345341 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:54.356829 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:54.356889 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:54.366615 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:54.366725 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:54.376612 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:54.376834 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:54.388931 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:54.388998 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:54.399851 1 client.go:361] parsed scheme: 
\"endpoint\"\nI1103 06:54:54.399968 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:54.410354 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:54.410398 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:54.431204 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:54.431316 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:54.443547 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:54.443615 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:54.455915 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:54.455982 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:54.480243 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:54.480459 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:54.490642 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:54.490822 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:54.503492 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:54.503542 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:54.516003 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:54.516045 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nW1103 06:54:54.691764 1 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.\nW1103 06:54:54.739840 1 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.\nW1103 06:54:54.766497 1 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because 
it has no resources.\nW1103 06:54:54.770811 1 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.\nW1103 06:54:54.784549 1 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.\nW1103 06:54:54.806075 1 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.\nW1103 06:54:54.806125 1 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.\nI1103 06:54:54.817651 1 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.\nI1103 06:54:54.817691 1 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.\nI1103 06:54:54.819697 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:54.819747 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:54.830394 1 client.go:361] parsed scheme: \"endpoint\"\nI1103 06:54:54.830839 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]\nI1103 06:54:57.320703 1 dynamic_cafile_content.go:166] Starting request-header::/etc/kubernetes/pki/front-proxy-ca.crt\nI1103 06:54:57.320920 1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt\nI1103 06:54:57.321239 1 dynamic_serving_content.go:129] Starting serving-cert::/etc/kubernetes/pki/apiserver.crt::/etc/kubernetes/pki/apiserver.key\nI1103 06:54:57.321473 1 secure_serving.go:174] Serving securely on [::]:6443\nI1103 06:54:57.321589 1 autoregister_controller.go:140] Starting autoregister controller\nI1103 06:54:57.321598 1 
cache.go:32] Waiting for caches to sync for autoregister controller
I1103 06:54:57.321620 1 tlsconfig.go:220] Starting DynamicServingCertificateController
I1103 06:54:57.321949 1 crd_finalizer.go:263] Starting CRDFinalizer
I1103 06:54:57.322131 1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
I1103 06:54:57.321958 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I1103 06:54:57.322311 1 shared_informer.go:197] Waiting for caches to sync for crd-autoregister
I1103 06:54:57.321991 1 naming_controller.go:288] Starting NamingConditionController
I1103 06:54:57.322002 1 customresource_discovery_controller.go:208] Starting DiscoveryController
I1103 06:54:57.322019 1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
I1103 06:54:57.322032 1 controller.go:85] Starting OpenAPI controller
I1103 06:54:57.322038 1 establishing_controller.go:73] Starting EstablishingController
E1103 06:54:57.327856 1 controller.go:156] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.17.0.3, ResourceVersion: 0, AdditionalErrorMsg: 
I1103 06:54:57.328447 1 controller.go:81] Starting OpenAPI AggregationController
I1103 06:54:57.328644 1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I1103 06:54:57.328674 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I1103 06:54:57.329469 1 available_controller.go:386] Starting AvailableConditionController
I1103 06:54:57.329497 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I1103 06:54:57.421898 1 cache.go:39] Caches are synced for autoregister controller
I1103 06:54:57.422576 1 shared_informer.go:204] Caches are synced for crd-autoregister 
I1103 06:54:57.429305 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1103 06:54:57.429679 1 cache.go:39] Caches are synced for AvailableConditionController controller
I1103 06:54:58.320495 1 controller.go:107] OpenAPI AggregationController: Processing item 
I1103 06:54:58.320545 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I1103 06:54:58.320561 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1103 06:54:58.337672 1 storage_scheduling.go:133] created PriorityClass system-node-critical with value 2000001000
I1103 06:54:58.344138 1 storage_scheduling.go:133] created PriorityClass system-cluster-critical with value 2000000000
I1103 06:54:58.344456 1 storage_scheduling.go:142] all system priority classes are created successfully or already exist.
I1103 06:54:58.811688 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1103 06:54:58.863087 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W1103 06:54:59.016938 1 lease.go:222] Resetting endpoints for master service "kubernetes" to [172.17.0.3]
I1103 06:54:59.018139 1 controller.go:606] quota admission added evaluator for: endpoints
I1103 06:54:59.056120 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I1103 06:54:59.651971 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I1103 06:55:00.154832 1 controller.go:606] quota admission added evaluator for: deployments.apps
I1103 06:55:00.464343 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I1103 06:55:16.126041 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I1103 06:55:16.177129 1 controller.go:606] quota admission added evaluator for: replicasets.apps
I1103 06:56:04.002997 1 trace.go:116] Trace[788679680]: "GuaranteedUpdate etcd3" type:*coordination.Lease (started: 2019-11-03 06:56:03.271415619 +0000 UTC m=+70.741116273) (total time: 731.50551ms):
Trace[788679680]: [731.432837ms] [731.099632ms] Transaction committed
I1103 06:56:04.006541 1 trace.go:116] Trace[904619808]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kind-worker2,user-agent:kubelet/v1.18.0 (linux/amd64) kubernetes/0c66e64,client:172.17.0.2 (started: 2019-11-03 06:56:03.271161395 +0000 UTC m=+70.740862188) (total time: 735.322465ms):
Trace[904619808]: [735.208405ms] [735.037878ms] Object stored in database
I1103 06:56:04.005631 1 trace.go:116] Trace[890076088]: "GuaranteedUpdate etcd3" type:*coordination.Lease (started: 2019-11-03 06:56:03.292110543 +0000 UTC m=+70.761811316) (total time: 713.466985ms):
Trace[890076088]: [713.447113ms] [713.092906ms] Transaction committed
I1103 06:56:04.013329 1 trace.go:116] Trace[1756529954]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kind-worker,user-agent:kubelet/v1.18.0 (linux/amd64) kubernetes/0c66e64,client:172.17.0.4 (started: 2019-11-03 06:56:03.274623935 +0000 UTC m=+70.744324572) (total time: 738.627303ms):
Trace[1756529954]: [738.409687ms] [738.279436ms] Object stored in database
I1103 06:56:04.400458 1 trace.go:116] Trace[55977623]: "GuaranteedUpdate etcd3" type:*coordination.Lease (started: 2019-11-03 06:56:03.499897951 +0000 UTC m=+70.969598619) (total time: 900.492572ms):
Trace[55977623]: [900.415547ms] [900.114ms] Transaction committed
I1103 06:56:04.400893 1 trace.go:116] Trace[1433746017]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler,user-agent:kube-scheduler/v1.18.0 (linux/amd64) kubernetes/0c66e64/leader-election,client:172.17.0.3 (started: 2019-11-03 06:56:03.49964301 +0000 UTC m=+70.969343696) (total time: 901.203721ms):
Trace[1433746017]: [901.018733ms] [900.846212ms] Object stored in database
I1103 06:56:04.873657 1 trace.go:116] Trace[2003891656]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-controller-manager,user-agent:kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/0c66e64/leader-election,client:172.17.0.3 (started: 2019-11-03 06:56:04.149401144 +0000 UTC m=+71.619101867) (total time: 724.195053ms):
Trace[2003891656]: [724.12416ms] [724.089407ms] About to write a response
I1103 06:56:05.614286 1 trace.go:116] Trace[694256808]: "List etcd3" key:/resourcequotas/kube-system,resourceVersion:,limit:0,continue: (started: 2019-11-03 06:56:04.862366708 +0000 UTC m=+72.332067366) (total time: 751.868844ms):
Trace[694256808]: [751.868844ms] [751.868844ms] END
I1103 06:56:05.614440 1 trace.go:116] Trace[107551381]: "List" url:/api/v1/namespaces/kube-system/resourcequotas,user-agent:kube-apiserver/v1.18.0 (linux/amd64) kubernetes/0c66e64,client:127.0.0.1 (started: 2019-11-03 06:56:04.86232238 +0000 UTC m=+72.332023112) (total time: 752.087701ms):
Trace[107551381]: [751.992882ms] [751.964316ms] Listing from storage done
I1103 06:56:05.629840 1 trace.go:116] Trace[1294060770]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager,user-agent:kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/0c66e64/leader-election,client:172.17.0.3 (started: 2019-11-03 06:56:05.053265037 +0000 UTC m=+72.522965729) (total time: 576.495167ms):
Trace[1294060770]: [576.414451ms] [576.378126ms] About to write a response
I1103 06:56:06.186606 1 trace.go:116] Trace[1537966706]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.18.0 (linux/amd64) kubernetes/0c66e64,client:172.17.0.3 (started: 2019-11-03 06:56:04.852408732 +0000 UTC m=+72.322109472) (total time: 1.334116177s):
Trace[1537966706]: [1.333935907s] [1.333466884s] Object stored in database
I1103 06:56:06.813639 1 trace.go:116] Trace[736706684]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager,user-agent:kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/0c66e64/leader-election,client:172.17.0.3 (started: 2019-11-03 06:56:05.985651074 +0000 UTC m=+73.455351718) (total time: 827.915566ms):
Trace[736706684]: [827.816641ms] [827.778768ms] About to write a response
I1103 06:56:07.707088 1 trace.go:116] Trace[260140383]: "GuaranteedUpdate etcd3" type:*core.Endpoints (started: 2019-11-03 06:56:06.824060049 +0000 UTC m=+74.293760702) (total time: 882.975321ms):
Trace[260140383]: [882.912313ms] [882.568859ms] Transaction committed
I1103 06:56:07.707250 1 trace.go:116] Trace[131915681]: "Update" url:/api/v1/namespaces/kube-system/endpoints/kube-scheduler,user-agent:kube-scheduler/v1.18.0 (linux/amd64) kubernetes/0c66e64/leader-election,client:172.17.0.3 (started: 2019-11-03 06:56:06.823818681 +0000 UTC m=+74.293519321) (total time: 883.398202ms):
Trace[131915681]: [883.3126ms] [883.164217ms] Object stored in database
I1103 06:56:08.157153 1 trace.go:116] Trace[1967419825]: "GuaranteedUpdate etcd3" type:*core.Pod (started: 2019-11-03 06:56:07.127255167 +0000 UTC m=+74.596955824) (total time: 1.029839343s):
Trace[1967419825]: [1.029747532s] [1.002585809s] Transaction committed
I1103 06:56:08.170125 1 trace.go:116] Trace[944693875]: "Patch" url:/api/v1/namespaces/kube-system/pods/kube-controller-manager-kind-control-plane/status,user-agent:kubelet/v1.18.0 (linux/amd64) kubernetes/0c66e64,client:172.17.0.3 (started: 2019-11-03 06:56:07.103197906 +0000 UTC m=+74.572898703) (total time: 1.066861374s):
Trace[944693875]: [1.066481433s] [1.041169306s] Object stored in database
I1103 06:56:08.159912 1 trace.go:116] Trace[1386374450]: "GuaranteedUpdate etcd3" type:*coordination.Lease (started: 2019-11-03 06:56:06.949228173 +0000 UTC m=+74.418928828) (total time: 1.21064732s):
Trace[1386374450]: [1.210606434s] [1.210305032s] Transaction committed
I1103 06:56:08.170521 1 trace.go:116] Trace[58210338]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager,user-agent:kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/0c66e64/leader-election,client:172.17.0.3 (started: 2019-11-03 06:56:06.949001696 +0000 UTC m=+74.418702665) (total time: 1.221488034s):
Trace[58210338]: [1.221407273s] [1.221257616s] Object stored in database
I1103 06:56:09.211261 1 trace.go:116] Trace[2074149153]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler,user-agent:kube-scheduler/v1.18.0 (linux/amd64) kubernetes/0c66e64/leader-election,client:172.17.0.3 (started: 2019-11-03 06:56:07.74747099 +0000 UTC m=+75.217171635) (total time: 1.463713526s):
Trace[2074149153]: [1.463608334s] [1.463558619s] About to write a response
I1103 06:56:09.219659 1 trace.go:116] Trace[978272135]: "Get" url:/api/v1/namespaces/kube-system/pods/etcd-kind-control-plane,user-agent:kubelet/v1.18.0 (linux/amd64) kubernetes/0c66e64,client:172.17.0.3 (started: 2019-11-03 06:56:08.173132504 +0000 UTC m=+75.642833141) (total time: 1.046464097s):
Trace[978272135]: [1.046286076s] [1.046274985s] About to write a response
I1103 06:56:09.342567 1 trace.go:116] Trace[235998267]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.18.0 (linux/amd64) kubernetes/0c66e64,client:172.17.0.3 (started: 2019-11-03 06:56:08.6825314 +0000 UTC m=+76.152232050) (total time: 659.961857ms):
Trace[235998267]: [659.798779ms] [648.416574ms] Object stored in database
I1103 06:56:09.633472 1 trace.go:116] Trace[284098946]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.18.0 (linux/amd64) kubernetes/0c66e64,client:127.0.0.1 (started: 2019-11-03 06:56:09.03162754 +0000 UTC m=+76.501328371) (total time: 601.770368ms):
Trace[284098946]: [601.66781ms] [601.638752ms] About to write a response
I1103 06:56:09.839332 1 trace.go:116] Trace[968840527]: "GuaranteedUpdate etcd3" type:*coordination.Lease (started: 2019-11-03 06:56:09.220030286 +0000 UTC m=+76.689730941) (total time: 619.227762ms):
Trace[968840527]: [619.145599ms] [618.838416ms] Transaction committed
I1103 06:56:09.839534 1 trace.go:116] Trace[1135026595]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler,user-agent:kube-scheduler/v1.18.0 (linux/amd64) kubernetes/0c66e64/leader-election,client:172.17.0.3 (started: 2019-11-03 06:56:09.219838158 +0000 UTC m=+76.689538805) (total time: 619.66077ms):
Trace[1135026595]: [619.539621ms] [619.42008ms] Object stored in database
I1103 06:56:10.138337 1 trace.go:116] Trace[2015494194]: "GuaranteedUpdate etcd3" type:*core.Pod (started: 2019-11-03 06:56:09.317094236 +0000 UTC m=+76.786794896) (total time: 821.184144ms):
Trace[2015494194]: [821.061667ms] [819.433308ms] Transaction committed
I1103 06:56:10.140427 1 trace.go:116] Trace[2123518142]: "Patch" url:/api/v1/namespaces/kube-system/pods/etcd-kind-control-plane/status,user-agent:kubelet/v1.18.0 (linux/amd64) kubernetes/0c66e64,client:172.17.0.3 (started: 2019-11-03 06:56:09.27196486 +0000 UTC m=+76.741665653) (total time: 868.419098ms):
Trace[2123518142]: [868.187157ms] [821.920487ms] Object stored in database
I1103 06:56:10.138915 1 trace.go:116] Trace[834733523]: "GuaranteedUpdate etcd3" type:*coordination.Lease (started: 2019-11-03 06:56:09.227636323 +0000 UTC m=+76.697336977) (total time: 911.247931ms):
Trace[834733523]: [911.223295ms] [905.919113ms] Transaction committed
I1103 06:56:10.145082 1 trace.go:116] Trace[62518438]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kind-control-plane,user-agent:kubelet/v1.18.0 (linux/amd64) kubernetes/0c66e64,client:172.17.0.3 (started: 2019-11-03 06:56:09.227395388 +0000 UTC m=+76.697096172) (total time: 917.636036ms):
Trace[62518438]: [917.513444ms] [917.352267ms] Object stored in database
I1103 06:56:10.200174 1 trace.go:116] Trace[675762936]: "Get" url:/api/v1/namespaces/default/services/kubernetes,user-agent:kube-apiserver/v1.18.0 (linux/amd64) kubernetes/0c66e64,client:127.0.0.1 (started: 2019-11-03 06:56:09.64370218 +0000 UTC m=+77.113402831) (total time: 556.402348ms):
Trace[675762936]: [556.062917ms] [556.047715ms] About to write a response
I1103 06:56:10.288090 1 trace.go:116] Trace[1070033231]: "List etcd3" key:/minions,resourceVersion:,limit:0,continue: (started: 2019-11-03 06:56:07.688161216 +0000 UTC m=+75.157862166) (total time: 2.599878225s):
Trace[1070033231]: [2.599878225s] [2.599878225s] END
I1103 06:56:10.294053 1 trace.go:116] Trace[412443707]: "List" url:/api/v1/nodes,user-agent:kindnetd/v0.0.0 (linux/amd64) kubernetes/$Format,client:172.17.0.2 (started: 2019-11-03 06:56:07.688121063 +0000 UTC m=+75.157821811) (total time: 2.605836873s):
Trace[412443707]: [2.602871962s] [2.602848577s] Listing from storage done
I1103 06:56:10.306558 1 trace.go:116] Trace[1248014841]: "List etcd3" key:/minions,resourceVersion:,limit:0,continue: (started: 2019-11-03 06:56:07.720269722 +0000 UTC m=+75.189970372) (total time: 2.564981445s):
Trace[1248014841]: [2.564981445s] [2.564981445s] END
I1103 06:56:10.310107 1 trace.go:116] Trace[1682619875]: "List" url:/api/v1/nodes,user-agent:kindnetd/v0.0.0 (linux/amd64) kubernetes/$Format,client:172.17.0.4 (started: 2019-11-03 06:56:07.720243871 +0000 UTC m=+75.189944507) (total time: 2.589799555s):
Trace[1682619875]: [2.586347608s] [2.586335005s] Listing from storage done
I1103 06:56:40.763401 1 controller.go:606] quota admission added evaluator for: statefulsets.apps
I1103 06:56:40.866146 1 client.go:361] parsed scheme: "endpoint"
I1103 06:56:40.866525 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]
I1103 06:56:42.853578 1 controller.go:606] quota admission added evaluator for: namespaces
I1103 06:56:47.555331 1 trace.go:116] Trace[973422888]: "Get" url:/api/v1/namespaces/dns-6138/pods/dns-test-19bb20a7-9ca3-455a-b873-75aa75d43cd9,user-agent:kubelet/v1.18.0 (linux/amd64) kubernetes/0c66e64,client:172.17.0.2 (started: 2019-11-03 06:56:47.000928433 +0000 UTC m=+114.470629110) (total time: 554.338429ms):
Trace[973422888]: [554.222167ms] [554.207635ms] About to write a response
I1103 06:56:48.227611 1 trace.go:116] Trace[1234200196]: "Get" url:/apis/apps/v1/namespaces/webhook-8732/deployments/sample-webhook-deployment,user-agent:e2e.test/v0.0.0 (linux/amd64) kubernetes/$Format -- [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance],client:172.17.0.1 (started: 2019-11-03 06:56:47.659551723 +0000 UTC m=+115.129252373) (total time: 568.002731ms):
Trace[1234200196]: [567.879977ms] [567.862289ms] About to write a response
I1103 06:56:48.229245 1 trace.go:116] Trace[2132686272]: "List etcd3" key:/persistentvolumeclaims/resourcequota-8261,resourceVersion:,limit:0,continue: (started: 2019-11-03 06:56:47.645905157 +0000 UTC m=+115.115605813) (total time: 583.28119ms):
Trace[2132686272]: [583.28119ms] [583.28119ms] END
I1103 06:56:48.229391 1 trace.go:116] Trace[1777873431]: "Delete" url:/api/v1/namespaces/resourcequota-8261/persistentvolumeclaims (started: 2019-11-03 06:56:47.645263282 +0000 UTC m=+115.114963938) (total time: 584.105752ms):
Trace[1777873431]: [584.105752ms] [584.105752ms] END
W1103 06:56:52.611161 1 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
E1103 06:56:54.103543 1 upgradeaware.go:357] Error proxying data from client to backend: write tcp 172.17.0.3:46488->172.17.0.2:10250: write: broken pipe
W1103 06:56:54.950261 1 dispatcher.go:141] rejected by webhook "validating-is-webhook-configuration-ready.k8s.io": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"admission webhook \"validating-is-webhook-configuration-ready.k8s.io\" denied the request: this webhook denies all requests", Reason:"", Details:(*v1.StatusDetails)(nil), Code:400}}
W1103 06:56:54.989294 1 dispatcher.go:141] rejected by webhook "deny-unwanted-pod-container-name-and-label.k8s.io": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"admission webhook \"deny-unwanted-pod-container-name-and-label.k8s.io\" denied the request: the pod contains unwanted label; the pod contains unwanted container name;", Reason:"", Details:(*v1.StatusDetails)(nil), Code:400}}
E1103 06:56:56.020128 1 upgradeaware.go:357] Error proxying data from client to backend: write tcp 172.17.0.3:42290->172.17.0.4:10250: write: broken pipe
E1103 06:56:56.020655 1 upgradeaware.go:371] Error proxying data from backend to client: tls: use of closed connection
E1103 06:57:02.357386 1 upgradeaware.go:357] Error proxying data from client to backend: write tcp 172.17.0.3:42484->172.17.0.4:10250: write: broken pipe
I1103 06:57:05.002451 1 trace.go:116] Trace[208775033]: "Call validating webhook" configuration:webhook-8732,webhook:deny-unwanted-pod-container-name-and-label.k8s.io,resource:/v1, Resource=pods,subresource:,operation:CREATE,UID:cac18d9e-d8f9-4a41-9d15-42a71d3a5fb1 (started: 2019-11-03 06:56:55.001702767 +0000 UTC m=+122.471403422) (total time: 10.000676268s):
Trace[208775033]: [10.000676268s] [10.000676268s] END
W1103 06:57:05.002510 1 dispatcher.go:133] Failed calling webhook, failing closed deny-unwanted-pod-container-name-and-label.k8s.io: failed calling webhook "deny-unwanted-pod-container-name-and-label.k8s.io": Post https://e2e-test-webhook.webhook-8732.svc:8443/pods?timeout=10s: context deadline exceeded
I1103 06:57:05.002907 1 trace.go:116] Trace[144160233]: "Create" url:/api/v1/namespaces/webhook-8732/pods,user-agent:e2e.test/v0.0.0 (linux/amd64) kubernetes/$Format -- [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance],client:172.17.0.1 (started: 2019-11-03 06:56:55.001117587 +0000 UTC m=+122.470818245) (total time: 10.001755538s):
Trace[144160233]: [10.001755538s] [10.001431152s] END
W1103 06:57:05.028961 1 dispatcher.go:141] rejected by webhook "deny-unwanted-configmap-data.k8s.io": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"admission webhook \"deny-unwanted-configmap-data.k8s.io\" denied the request: the configmap contains unwanted key and value", Reason:"the configmap contains unwanted key and value", Details:(*v1.StatusDetails)(nil), Code:400}}
W1103 06:57:05.098991 1 dispatcher.go:128] Failed calling webhook, failing open fail-open.k8s.io: failed calling webhook "fail-open.k8s.io": Post https://e2e-test-webhook.webhook-8732.svc:8443/configmaps?timeout=10s: x509: certificate signed by unknown authority
E1103 06:57:05.099253 1 dispatcher.go:129] failed calling webhook "fail-open.k8s.io": Post https://e2e-test-webhook.webhook-8732.svc:8443/configmaps?timeout=10s: x509: certificate signed by unknown authority
W1103 06:57:05.109367 1 dispatcher.go:128] Failed calling webhook, failing open fail-open.k8s.io: failed calling webhook "fail-open.k8s.io": Post https://e2e-test-webhook.webhook-8732.svc:8443/configmaps?timeout=10s: x509: certificate signed by unknown authority
E1103 06:57:05.109413 1 dispatcher.go:129] failed calling webhook "fail-open.k8s.io": Post https://e2e-test-webhook.webhook-8732.svc:8443/configmaps?timeout=10s: x509: certificate signed by unknown authority
W1103 06:57:05.122261 1 dispatcher.go:141] rejected by webhook "deny-unwanted-configmap-data.k8s.io": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"admission webhook \"deny-unwanted-configmap-data.k8s.io\" denied the request: the configmap contains unwanted key and value", Reason:"the configmap contains unwanted key and value", Details:(*v1.StatusDetails)(nil), Code:400}}
W1103 06:57:05.127148 1 dispatcher.go:141] rejected by webhook "deny-unwanted-configmap-data.k8s.io": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"admission webhook \"deny-unwanted-configmap-data.k8s.io\" denied the request: the configmap contains unwanted key and value", Reason:"the configmap contains unwanted key and value", Details:(*v1.StatusDetails)(nil), Code:400}}
W1103 06:57:05.132516 1 dispatcher.go:141] rejected by webhook "deny-unwanted-configmap-data.k8s.io": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"admission webhook \"deny-unwanted-configmap-data.k8s.io\" denied the request: the configmap contains unwanted key and value", Reason:"the configmap contains unwanted key and value", Details:(*v1.StatusDetails)(nil), Code:400}}
W1103 06:57:05.135583 1 dispatcher.go:141] rejected by webhook "deny-unwanted-configmap-data.k8s.io": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"admission webhook \"deny-unwanted-configmap-data.k8s.io\" denied the request: the configmap contains unwanted key and value", Reason:"the configmap contains unwanted key and value", Details:(*v1.StatusDetails)(nil), Code:400}}
W1103 06:57:05.167470 1 dispatcher.go:128] Failed calling webhook, failing open fail-open.k8s.io: failed calling webhook "fail-open.k8s.io": Post https://e2e-test-webhook.webhook-8732.svc:8443/configmaps?timeout=10s: x509: certificate signed by unknown authority
E1103 06:57:05.167532 1 dispatcher.go:129] failed calling webhook "fail-open.k8s.io": Post https://e2e-test-webhook.webhook-8732.svc:8443/configmaps?timeout=10s: x509: certificate signed by unknown authority
I1103 06:57:05.266046 1 client.go:361] parsed scheme: "endpoint"
I1103 06:57:05.266616 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]
I1103 06:57:05.298143 1 client.go:361] parsed scheme: "endpoint"
I1103 06:57:05.298927 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]
I1103 06:57:05.331849 1 controller.go:606] quota admission added evaluator for: e2e-test-webhook-9779-crds.webhook.example.com
I1103 06:57:05.457026 1 client.go:361] parsed scheme: "endpoint"
I1103 06:57:05.457169 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]
I1103 06:57:05.473423 1 client.go:361] parsed scheme: "endpoint"
I1103 06:57:05.474312 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]
I1103 06:57:06.084760 1 controller.go:606] quota admission added evaluator for: poddisruptionbudgets.policy
I1103 06:57:11.515293 1 trace.go:116] Trace[1090175573]: "Delete" url:/api/v1/namespaces/sysctl-2435/events (started: 2019-11-03 06:57:10.942767861 +0000 UTC m=+138.412468491) (total time: 572.48776ms):
Trace[1090175573]: [572.48776ms] [572.48776ms] END
I1103 06:57:12.151681 1 trace.go:116] Trace[796183036]: "Delete" url:/apis/events.k8s.io/v1beta1/namespaces/webhook-2475/events (started: 2019-11-03 06:57:11.443930198 +0000 UTC m=+138.913630847) (total time: 707.700901ms):
Trace[796183036]: [707.700901ms] [707.700901ms] END
I1103 06:57:12.375708 1 trace.go:116] Trace[600410248]: "Delete" url:/apis/events.k8s.io/v1beta1/namespaces/webhook-8732/events (started: 2019-11-03 06:57:11.868958417 +0000 UTC m=+139.338659047) (total time: 506.71153ms):
Trace[600410248]: [506.71153ms] [506.71153ms] END
==== END logs for container kube-apiserver of pod kube-system/kube-apiserver-kind-control-plane ====
==== START logs for container kube-controller-manager of pod kube-system/kube-controller-manager-kind-control-plane ====
I1103 06:54:53.093422 1 serving.go:312] Generated self-signed cert in-memory
I1103 06:54:53.891897 1 controllermanager.go:161] Version: v1.18.0-alpha.0.178+0c66e64b140011
I1103 06:54:53.892745 1 dynamic_cafile_content.go:166] Starting request-header::/etc/kubernetes/pki/front-proxy-ca.crt
I1103 06:54:53.893292 1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt
I1103 06:54:53.893356 1 secure_serving.go:174] Serving securely on 127.0.0.1:10257
I1103 06:54:53.894044 1 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
I1103 06:54:53.894277 1 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-controller-manager...
I1103 06:54:53.893376 1 tlsconfig.go:220] Starting DynamicServingCertificateController
E1103 06:54:57.358259 1 leaderelection.go:331] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "endpoints" in API group "" in the namespace "kube-system"
I1103 06:54:59.424720 1 leaderelection.go:252] successfully acquired lease kube-system/kube-controller-manager
I1103 06:54:59.424846 1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"2fc77524-998a-4908-ad5c-b35d6eb77ea0", APIVersion:"v1", ResourceVersion:"156", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kind-control-plane_628d308b-676a-4b24-89d4-a22b72af91ad became leader
I1103 06:54:59.425113 1 event.go:281] Event(v1.ObjectReference{Kind:"Lease", Namespace:"kube-system", Name:"kube-controller-manager", UID:"2e734402-ef1d-49c9-a2ec-ccd698d810fc", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"157", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kind-control-plane_628d308b-676a-4b24-89d4-a22b72af91ad became leader
I1103 06:54:59.641396 1 plugins.go:100] No cloud provider specified.
I1103 06:54:59.643197 1 shared_informer.go:197] Waiting for caches to sync for tokens
I1103 06:54:59.743658 1 shared_informer.go:204] Caches are synced for tokens 
I1103 06:54:59.964496 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.apps
I1103 06:54:59.964562 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for statefulsets.apps
W1103 06:54:59.964637 1 shared_informer.go:415] resyncPeriod 46266807907987 is smaller than resyncCheckPeriod 83226957317372 and the informer has already started. Changing it to 83226957317372
I1103 06:54:59.964693 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for serviceaccounts
I1103 06:54:59.964723 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.apps
I1103 06:54:59.964777 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for limitranges
I1103 06:54:59.964930 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for cronjobs.batch
I1103 06:54:59.964988 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
I1103 06:54:59.965024 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for jobs.batch
I1103 06:54:59.965061 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
I1103 06:54:59.965093 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
I1103 06:54:59.965124 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
I1103 06:54:59.965174 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.apps
I1103 06:54:59.965200 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for controllerrevisions.apps
I1103 06:54:59.965229 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
I1103 06:54:59.965264 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for podtemplates
I1103 06:54:59.965401 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpoints
I1103 06:54:59.965465 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
I1103 06:54:59.965501 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
I1103 06:54:59.965595 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for events.events.k8s.io
W1103 06:54:59.965612 1 shared_informer.go:415] resyncPeriod 49385717565036 is smaller than resyncCheckPeriod 83226957317372 and the informer has already started. Changing it to 83226957317372
I1103 06:54:59.965710 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.extensions
I1103 06:54:59.965764 1 controllermanager.go:534] Started "resourcequota"
I1103 06:54:59.966063 1 resource_quota_controller.go:271] Starting resource quota controller
I1103 06:54:59.966161 1 shared_informer.go:197] Waiting for caches to sync for resource quota
I1103 06:54:59.966243 1 resource_quota_monitor.go:303] QuotaMonitor running
I1103 06:54:59.990614 1 controllermanager.go:534] Started "csrcleaner"
I1103 06:54:59.990724 1 cleaner.go:81] Starting CSR cleaner controller
I1103 06:55:00.008892 1 node_ipam_controller.go:94] Sending events to api server.
I1103 06:55:10.014276 1 range_allocator.go:82] Sending events to api server.
I1103 06:55:10.014503 1 range_allocator.go:116] No Secondary Service CIDR provided. Skipping filtering out secondary service addresses.
I1103 06:55:10.014602 1 controllermanager.go:534] Started "nodeipam"
I1103 06:55:10.014703 1 node_ipam_controller.go:154] Starting ipam controller
I1103 06:55:10.014749 1 shared_informer.go:197] Waiting for caches to sync for node
I1103 06:55:10.028193 1 node_lifecycle_controller.go:329] Sending events to api server.
I1103 06:55:10.028715 1 node_lifecycle_controller.go:361] Controller is using taint based evictions.
I1103 06:55:10.028997 1 taint_manager.go:162] Sending events to api server.
I1103 06:55:10.029237 1 node_lifecycle_controller.go:455] Controller will reconcile labels.
I1103 06:55:10.030350 1 controllermanager.go:534] Started "nodelifecycle"
W1103 06:55:10.030616 1 core.go:216] configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes.
W1103 06:55:10.030665 1 controllermanager.go:526] Skipping "route"
I1103 06:55:10.030955 1 node_lifecycle_controller.go:488] Starting node controller
I1103 06:55:10.031004 1 shared_informer.go:197] Waiting for caches to sync for taint
I1103 06:55:10.058452 1 controllermanager.go:534] Started "attachdetach"
I1103 06:55:10.058489 1 attach_detach_controller.go:323] Starting attach detach controller
I1103 06:55:10.059033 1 shared_informer.go:197] Waiting for caches to sync for attach detach
I1103 06:55:10.083934 1 controllermanager.go:534] Started "statefulset"
I1103 06:55:10.084109 1 stateful_set.go:145] Starting stateful set controller
I1103 06:55:10.084150 1 shared_informer.go:197] Waiting for caches to sync for stateful set
I1103 06:55:10.095123 1 controllermanager.go:534] Started "csrsigning"
I1103 06:55:10.095293 1 certificate_controller.go:118] Starting certificate controller "csrsigning"
I1103 06:55:10.095350 1 shared_informer.go:197] Waiting for caches to sync for certificate-csrsigning
I1103 06:55:10.117422 1 controllermanager.go:534] Started "ttl"
I1103 06:55:10.117599 1 ttl_controller.go:116] Starting TTL controller
I1103 06:55:10.118550 1 shared_informer.go:197] Waiting for caches to sync for TTL
I1103 06:55:10.146740 1 controllermanager.go:534] Started "tokencleaner"
I1103 06:55:10.147174 1 tokencleaner.go:117] Starting token cleaner controller
I1103 06:55:10.147391 1 shared_informer.go:197] Waiting for caches to sync for token_cleaner
I1103 06:55:10.147567 1 shared_informer.go:204] Caches are synced for token_cleaner 
I1103 06:55:10.168095 1 node_lifecycle_controller.go:77] Sending events to api server
E1103 06:55:10.168159 1 core.go:202] failed to start cloud node lifecycle controller: no cloud provider provided
W1103 06:55:10.168175 1 controllermanager.go:526] Skipping "cloud-node-lifecycle"
I1103 06:55:10.218378 1 controllermanager.go:534] Started "persistentvolume-binder"
W1103 06:55:10.219271 1 controllermanager.go:526] Skipping "ttl-after-finished"
W1103 06:55:10.219474 1 controllermanager.go:513] "endpointslice" is disabled
I1103 06:55:10.218578 1 pv_controller_base.go:289] Starting persistent volume controller
I1103 06:55:10.219950 1 shared_informer.go:197] Waiting for caches to sync for persistent volume
I1103 06:55:10.482960 1 controllermanager.go:534] Started "namespace"
I1103 06:55:10.483485 1 namespace_controller.go:200] Starting namespace controller
I1103 06:55:10.483665 1 shared_informer.go:197] Waiting for caches to sync for namespace
I1103 06:55:10.718954 1 controllermanager.go:534] Started "daemonset"
I1103 06:55:10.719018 1 daemon_controller.go:255] Starting daemon sets controller
I1103 06:55:10.719038 1 shared_informer.go:197] Waiting for caches to sync for daemon sets
I1103 06:55:10.867269 1 controllermanager.go:534] Started "csrapproving"
I1103 06:55:10.867311 1 certificate_controller.go:118] Starting certificate controller "csrapproving"
I1103 06:55:10.867458 1 shared_informer.go:197] Waiting for caches to sync for certificate-csrapproving
I1103 06:55:11.121681 1 controllermanager.go:534] Started "persistentvolume-expander"
I1103 06:55:11.121985 1 expand_controller.go:308] Starting expand controller
I1103 06:55:11.122321 1 shared_informer.go:197] Waiting for caches to sync for expand
I1103 06:55:11.368326 1 controllermanager.go:534] Started "clusterrole-aggregation"
I1103 06:55:11.368474 1 clusterroleaggregation_controller.go:148] Starting ClusterRoleAggregator
I1103 06:55:11.369121 1 shared_informer.go:197] Waiting for caches to sync for ClusterRoleAggregator
I1103 06:55:11.619461 1 controllermanager.go:534] Started "replicationcontroller"
I1103 06:55:11.619712 1 replica_set.go:183] Starting replicationcontroller controller
I1103 06:55:11.620086 1 shared_informer.go:197] Waiting for caches to sync for ReplicationController
I1103 06:55:11.868324 1 controllermanager.go:534] Started "podgc"
I1103 06:55:11.868386 1 gc_controller.go:88] Starting GC controller
I1103 06:55:11.868406 1 shared_informer.go:197] Waiting for caches to sync for GC
I1103 06:55:12.118408 1 controllermanager.go:534] Started "cronjob"
I1103 06:55:12.118454 1 cronjob_controller.go:96] Starting CronJob Manager
I1103 06:55:12.368890 1 controllermanager.go:534] Started "serviceaccount"
I1103 06:55:12.368911 1 serviceaccounts_controller.go:116] Starting service account controller
I1103 06:55:12.369602 1 shared_informer.go:197] Waiting for caches to sync for service account
I1103 06:55:12.618871 1 controllermanager.go:534] Started "bootstrapsigner"
I1103 06:55:12.618966 1 shared_informer.go:197] Waiting for caches to sync for bootstrap_signer
I1103 06:55:12.868760 1 controllermanager.go:534] Started "pvc-protection"
W1103 06:55:12.868923 1 controllermanager.go:526] Skipping "root-ca-cert-publisher"
I1103 06:55:12.868979 1 pvc_protection_controller.go:100] Starting PVC protection controller
I1103 06:55:12.869007 1 shared_informer.go:197] Waiting for caches to sync for PVC protection
E1103 06:55:13.118624 1 core.go:80] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W1103 06:55:13.118659 1 controllermanager.go:526] Skipping "service"
I1103 06:55:13.369141 1 controllermanager.go:534] Started "endpoint"
I1103 06:55:13.369193 1 endpoints_controller.go:175] Starting endpoint controller
I1103 06:55:13.369741 1 shared_informer.go:197] Waiting for caches to sync for endpoint
I1103 06:55:13.617689 1 controllermanager.go:534] Started "job"
I1103 06:55:13.617755 1 job_controller.go:143] Starting job controller
I1103 06:55:13.617774 1 shared_informer.go:197] Waiting for caches to sync for job
I1103 06:55:13.868626 1 controllermanager.go:534] Started "deployment"
I1103 06:55:13.868710 1 deployment_controller.go:152] Starting deployment controller
I1103 06:55:13.869302 1 shared_informer.go:197] Waiting for caches to sync for deployment
I1103 06:55:14.119376 1 controllermanager.go:534] Started "replicaset"
I1103 06:55:14.119431 1 replica_set.go:183] Starting replicaset controller
I1103 06:55:14.119468 1 shared_informer.go:197] Waiting for caches to sync for ReplicaSet
I1103 06:55:14.823173 1 controllermanager.go:534] Started "horizontalpodautoscaling"
I1103 06:55:14.823324 1 horizontal.go:156] Starting HPA controller
I1103 06:55:14.823366 1 shared_informer.go:197] Waiting for caches to sync for HPA
I1103 06:55:15.218402 1 controllermanager.go:534] Started "disruption"
I1103 06:55:15.218530 1 disruption.go:330] Starting disruption controller
I1103 06:55:15.218565 1 shared_informer.go:197] Waiting for caches to sync for disruption
I1103 06:55:16.025683 1 controllermanager.go:534] Started "garbagecollector"
I1103 06:55:16.025748 1 garbagecollector.go:130] Starting garbage collector controller
I1103 06:55:16.025943 1 shared_informer.go:197] Waiting for caches to sync for garbage collector
I1103 06:55:16.026011 1 graph_builder.go:282] GraphBuilder running
I1103 06:55:16.051661 1 
controllermanager.go:534] Started \"pv-protection\"\nI1103 06:55:16.051704 1 pv_protection_controller.go:81] Starting PV protection controller\nI1103 06:55:16.051746 1 shared_informer.go:197] Waiting for caches to sync for PV protection\nI1103 06:55:16.052457 1 shared_informer.go:197] Waiting for caches to sync for resource quota\nW1103 06:55:16.062059 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName=\"kind-control-plane\" does not exist\nI1103 06:55:16.084338 1 shared_informer.go:204] Caches are synced for namespace \nI1103 06:55:16.095661 1 shared_informer.go:204] Caches are synced for certificate-csrsigning \nI1103 06:55:16.115173 1 shared_informer.go:204] Caches are synced for node \nI1103 06:55:16.115347 1 range_allocator.go:172] Starting range CIDR allocator\nI1103 06:55:16.115431 1 shared_informer.go:197] Waiting for caches to sync for cidrallocator\nI1103 06:55:16.115479 1 shared_informer.go:204] Caches are synced for cidrallocator \nI1103 06:55:16.118895 1 shared_informer.go:204] Caches are synced for TTL \nI1103 06:55:16.119257 1 shared_informer.go:204] Caches are synced for daemon sets \nI1103 06:55:16.120320 1 shared_informer.go:204] Caches are synced for ReplicationController \nI1103 06:55:16.121163 1 shared_informer.go:204] Caches are synced for persistent volume \nI1103 06:55:16.122233 1 shared_informer.go:204] Caches are synced for ReplicaSet \nI1103 06:55:16.122651 1 shared_informer.go:204] Caches are synced for expand \nI1103 06:55:16.124921 1 range_allocator.go:359] Set node kind-control-plane PodCIDR to [10.244.0.0/24]\nI1103 06:55:16.151921 1 shared_informer.go:204] Caches are synced for PV protection \nI1103 06:55:16.156537 1 event.go:281] Event(v1.ObjectReference{Kind:\"DaemonSet\", Namespace:\"kube-system\", Name:\"kindnet\", UID:\"6205a1e4-b14b-4964-91f6-b11c04209bbb\", APIVersion:\"apps/v1\", ResourceVersion:\"223\", 
FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-c744w\nI1103 06:55:16.159307 1 shared_informer.go:204] Caches are synced for attach detach \nI1103 06:55:16.162135 1 event.go:281] Event(v1.ObjectReference{Kind:\"DaemonSet\", Namespace:\"kube-system\", Name:\"kube-proxy\", UID:\"84867c7a-1a39-453d-9710-49aa83ccb389\", APIVersion:\"apps/v1\", ResourceVersion:\"200\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-5zhtl\nI1103 06:55:16.168562 1 shared_informer.go:204] Caches are synced for GC \nI1103 06:55:16.169171 1 shared_informer.go:204] Caches are synced for PVC protection \nI1103 06:55:16.169249 1 shared_informer.go:204] Caches are synced for certificate-csrapproving \nI1103 06:55:16.170021 1 shared_informer.go:204] Caches are synced for service account \nI1103 06:55:16.170508 1 shared_informer.go:204] Caches are synced for deployment \nI1103 06:55:16.170805 1 shared_informer.go:204] Caches are synced for endpoint \nI1103 06:55:16.170623 1 shared_informer.go:204] Caches are synced for ClusterRoleAggregator \nI1103 06:55:16.183011 1 event.go:281] Event(v1.ObjectReference{Kind:\"Deployment\", Namespace:\"kube-system\", Name:\"coredns\", UID:\"af23a4c9-64e5-408e-8737-ca144be79102\", APIVersion:\"apps/v1\", ResourceVersion:\"192\", FieldPath:\"\"}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-5644d7b6d9 to 2\nE1103 06:55:16.192219 1 daemon_controller.go:290] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kube-proxy\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy\", UID:\"84867c7a-1a39-453d-9710-49aa83ccb389\", ResourceVersion:\"200\", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708360900, loc:(*time.Location)(0x772f340)}}, 
DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00165fca0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"kube-proxy\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001763900), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), 
AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00165fcc0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00165fce0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), 
NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kube-proxy\", Image:\"k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.178_0c66e64b140011\", Command:[]string{\"/usr/local/bin/kube-proxy\", \"--config=/var/lib/kube-proxy/config.conf\", \"--hostname-override=$(NODE_NAME)\"}, Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"kube-proxy\", ReadOnly:false, MountPath:\"/var/lib/kube-proxy\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), 
SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc001798dc0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc001636ce8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string{\"beta.kubernetes.io/os\":\"linux\"}, ServiceAccountName:\"kube-proxy\", DeprecatedServiceAccount:\"kube-proxy\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001771380), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"CriticalAddonsOnly\", Operator:\"Exists\", Value:\"\", Effect:\"\", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"\", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"system-node-critical\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0017861b0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001636d28)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, 
DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"kube-proxy\": the object has been modified; please apply your changes to the latest version and try again\nI1103 06:55:16.207148 1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"kube-system\", Name:\"coredns-5644d7b6d9\", UID:\"00f25722-b999-4b82-a73f-0c99be5b1e9d\", APIVersion:\"apps/v1\", ResourceVersion:\"351\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5644d7b6d9-j9fqd\nE1103 06:55:16.218585 1 clusterroleaggregation_controller.go:180] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io \"view\": the object has been modified; please apply your changes to the latest version and try again\nI1103 06:55:16.222217 1 shared_informer.go:204] Caches are synced for bootstrap_signer \nI1103 06:55:16.222735 1 log.go:172] [INFO] signed certificate with serial number 176375283172015308389181634311007707037442175852\nI1103 06:55:16.231909 1 shared_informer.go:204] Caches are synced for taint \nI1103 06:55:16.232301 1 node_lifecycle_controller.go:1282] Initializing eviction metric for zone: \nI1103 06:55:16.232505 1 taint_manager.go:186] Starting NoExecuteTaintManager\nW1103 06:55:16.232541 1 node_lifecycle_controller.go:978] Missing timestamp for Node kind-control-plane. Assuming now as a timestamp.\nI1103 06:55:16.232658 1 node_lifecycle_controller.go:1132] Controller detected that all Nodes are not-Ready. 
Entering master disruption mode.\nI1103 06:55:16.232773 1 event.go:281] Event(v1.ObjectReference{Kind:\"Node\", Namespace:\"\", Name:\"kind-control-plane\", UID:\"23a11f0f-cb4a-4387-9c1c-7a5ccad4b305\", APIVersion:\"\", ResourceVersion:\"\", FieldPath:\"\"}): type: 'Normal' reason: 'RegisteredNode' Node kind-control-plane event: Registered Node kind-control-plane in Controller\nI1103 06:55:16.237522 1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"kube-system\", Name:\"coredns-5644d7b6d9\", UID:\"00f25722-b999-4b82-a73f-0c99be5b1e9d\", APIVersion:\"apps/v1\", ResourceVersion:\"351\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5644d7b6d9-58dzr\nE1103 06:55:16.249732 1 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io \"edit\": the object has been modified; please apply your changes to the latest version and try again\nI1103 06:55:16.323625 1 shared_informer.go:204] Caches are synced for HPA \nI1103 06:55:16.484965 1 shared_informer.go:204] Caches are synced for stateful set \nI1103 06:55:16.519484 1 shared_informer.go:204] Caches are synced for disruption \nI1103 06:55:16.519517 1 disruption.go:338] Sending events to api server.\nI1103 06:55:16.568214 1 shared_informer.go:204] Caches are synced for resource quota \nI1103 06:55:16.618010 1 shared_informer.go:204] Caches are synced for job \nI1103 06:55:16.626393 1 shared_informer.go:204] Caches are synced for garbage collector \nI1103 06:55:16.626443 1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. 
Proceeding to collect garbage\nI1103 06:55:16.652939 1 shared_informer.go:204] Caches are synced for resource quota \nI1103 06:55:17.521271 1 shared_informer.go:197] Waiting for caches to sync for garbage collector\nI1103 06:55:17.521382 1 shared_informer.go:204] Caches are synced for garbage collector \nI1103 06:55:20.469074 1 log.go:172] [INFO] signed certificate with serial number 462727349527519373582352682231189738549768792359\nI1103 06:55:20.523273 1 log.go:172] [INFO] signed certificate with serial number 6884556846898883449466248193697248392254926066\nW1103 06:55:33.215106 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName=\"kind-worker2\" does not exist\nI1103 06:55:33.225363 1 range_allocator.go:359] Set node kind-worker2 PodCIDR to [10.244.1.0/24]\nI1103 06:55:33.233686 1 event.go:281] Event(v1.ObjectReference{Kind:\"DaemonSet\", Namespace:\"kube-system\", Name:\"kindnet\", UID:\"6205a1e4-b14b-4964-91f6-b11c04209bbb\", APIVersion:\"apps/v1\", ResourceVersion:\"416\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-9g7zl\nI1103 06:55:33.233760 1 event.go:281] Event(v1.ObjectReference{Kind:\"DaemonSet\", Namespace:\"kube-system\", Name:\"kube-proxy\", UID:\"84867c7a-1a39-453d-9710-49aa83ccb389\", APIVersion:\"apps/v1\", ResourceVersion:\"405\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-xzk56\nW1103 06:55:33.245339 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName=\"kind-worker\" does not exist\nI1103 06:55:33.259495 1 range_allocator.go:359] Set node kind-worker PodCIDR to [10.244.2.0/24]\nI1103 06:55:33.260948 1 event.go:281] Event(v1.ObjectReference{Kind:\"DaemonSet\", Namespace:\"kube-system\", Name:\"kube-proxy\", 
UID:\"84867c7a-1a39-453d-9710-49aa83ccb389\", APIVersion:\"apps/v1\", ResourceVersion:\"464\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-5qht6\nI1103 06:55:33.261886 1 event.go:281] Event(v1.ObjectReference{Kind:\"DaemonSet\", Namespace:\"kube-system\", Name:\"kindnet\", UID:\"6205a1e4-b14b-4964-91f6-b11c04209bbb\", APIVersion:\"apps/v1\", ResourceVersion:\"416\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-zlgk8\nE1103 06:55:33.276294 1 daemon_controller.go:290] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kindnet\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet\", UID:\"6205a1e4-b14b-4964-91f6-b11c04209bbb\", ResourceVersion:\"416\", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708360902, loc:(*time.Location)(0x772f340)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001704580), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"cni-cfg\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0017045a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0017045c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), 
FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0017045e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), 
StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kindnet-cni\", Image:\"kindest/kindnetd:0.5.3@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555\", Command:[]string(nil), Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"HOST_IP\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc001704600)}, v1.EnvVar{Name:\"POD_IP\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc001704640)}, v1.EnvVar{Name:\"POD_SUBNET\", Value:\"10.244.0.0/16\", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Requests:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"cni-cfg\", ReadOnly:false, MountPath:\"/etc/cni/net.d\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), 
ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc000cd6b90), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc0017ca5f8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string(nil), ServiceAccountName:\"kindnet\", DeprecatedServiceAccount:\"kindnet\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001a13980), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"NoSchedule\", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00000e058)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0017ca640)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:1, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:1, ObservedGeneration:1, UpdatedNumberScheduled:1, NumberAvailable:1, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"kindnet\": the 
object has been modified; please apply your changes to the latest version and try again\nW1103 06:55:36.234663 1 node_lifecycle_controller.go:978] Missing timestamp for Node kind-worker2. Assuming now as a timestamp.\nW1103 06:55:36.235238 1 node_lifecycle_controller.go:978] Missing timestamp for Node kind-worker. Assuming now as a timestamp.\nI1103 06:55:36.234663 1 event.go:281] Event(v1.ObjectReference{Kind:\"Node\", Namespace:\"\", Name:\"kind-worker2\", UID:\"322cad9e-96cc-464c-9003-9b9c8297934c\", APIVersion:\"\", ResourceVersion:\"\", FieldPath:\"\"}): type: 'Normal' reason: 'RegisteredNode' Node kind-worker2 event: Registered Node kind-worker2 in Controller\nI1103 06:55:36.235424 1 event.go:281] Event(v1.ObjectReference{Kind:\"Node\", Namespace:\"\", Name:\"kind-worker\", UID:\"c5cf28bd-4520-4c92-95da-3a6d0e344375\", APIVersion:\"\", ResourceVersion:\"\", FieldPath:\"\"}): type: 'Normal' reason: 'RegisteredNode' Node kind-worker event: Registered Node kind-worker in Controller\nI1103 06:56:01.238282 1 node_lifecycle_controller.go:1159] Controller detected that some Nodes are Ready. 
Exiting master disruption mode.\nI1103 06:56:40.826041 1 event.go:281] Event(v1.ObjectReference{Kind:\"StatefulSet\", Namespace:\"statefulset-4439\", Name:\"ss2\", UID:\"6255c3e0-193d-4a84-869a-70ba4b29e500\", APIVersion:\"apps/v1\", ResourceVersion:\"791\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' create Pod ss2-0 in StatefulSet ss2 successful\nI1103 06:56:40.879270 1 resource_quota_controller.go:305] Resource quota has been deleted resourcequota-8261/test-quota\nI1103 06:56:41.077698 1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"gc-872\", Name:\"simpletest.rc\", UID:\"1f9b245c-344e-4604-9f26-42f5d109194e\", APIVersion:\"v1\", ResourceVersion:\"853\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: simpletest.rc-nj5zc\nI1103 06:56:41.115639 1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"gc-872\", Name:\"simpletest.rc\", UID:\"1f9b245c-344e-4604-9f26-42f5d109194e\", APIVersion:\"v1\", ResourceVersion:\"853\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: simpletest.rc-6qvbm\nI1103 06:56:41.116252 1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"gc-872\", Name:\"simpletest.rc\", UID:\"1f9b245c-344e-4604-9f26-42f5d109194e\", APIVersion:\"v1\", ResourceVersion:\"853\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: simpletest.rc-mwplf\nI1103 06:56:41.127351 1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"gc-872\", Name:\"simpletest.rc\", UID:\"1f9b245c-344e-4604-9f26-42f5d109194e\", APIVersion:\"v1\", ResourceVersion:\"853\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: simpletest.rc-92pkc\nI1103 06:56:41.136283 1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"gc-872\", Name:\"simpletest.rc\", UID:\"1f9b245c-344e-4604-9f26-42f5d109194e\", APIVersion:\"v1\", 
ResourceVersion:\"853\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: simpletest.rc-pw54n\nI1103 06:56:41.137302 1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"gc-872\", Name:\"simpletest.rc\", UID:\"1f9b245c-344e-4604-9f26-42f5d109194e\", APIVersion:\"v1\", ResourceVersion:\"853\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: simpletest.rc-72kcb\nI1103 06:56:41.138407 1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"gc-872\", Name:\"simpletest.rc\", UID:\"1f9b245c-344e-4604-9f26-42f5d109194e\", APIVersion:\"v1\", ResourceVersion:\"853\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: simpletest.rc-mv7b4\nI1103 06:56:41.160955 1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"gc-872\", Name:\"simpletest.rc\", UID:\"1f9b245c-344e-4604-9f26-42f5d109194e\", APIVersion:\"v1\", ResourceVersion:\"853\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: simpletest.rc-l6hhk\nI1103 06:56:41.161626 1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"gc-872\", Name:\"simpletest.rc\", UID:\"1f9b245c-344e-4604-9f26-42f5d109194e\", APIVersion:\"v1\", ResourceVersion:\"853\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: simpletest.rc-khfvn\nI1103 06:56:41.163398 1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"gc-872\", Name:\"simpletest.rc\", UID:\"1f9b245c-344e-4604-9f26-42f5d109194e\", APIVersion:\"v1\", ResourceVersion:\"853\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: simpletest.rc-jd28r\nI1103 06:56:43.571441 1 event.go:281] Event(v1.ObjectReference{Kind:\"Deployment\", Namespace:\"webhook-8732\", Name:\"sample-webhook-deployment\", UID:\"ca4830fe-b0a8-44df-90ad-00b9da95c7e9\", APIVersion:\"apps/v1\", ResourceVersion:\"1029\", 
FieldPath:\"\"}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set sample-webhook-deployment-86d95b659d to 1\nI1103 06:56:43.591261 1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"webhook-8732\", Name:\"sample-webhook-deployment-86d95b659d\", UID:\"c2136106-55dc-4fd7-9752-5422a6b9fc12\", APIVersion:\"apps/v1\", ResourceVersion:\"1030\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: sample-webhook-deployment-86d95b659d-vhqpl\nI1103 06:56:48.438246 1 shared_informer.go:197] Waiting for caches to sync for garbage collector\nE1103 06:56:48.456826 1 tokens_controller.go:260] error synchronizing serviceaccount volume-placement-9859/default: secrets \"default-token-9dg8t\" is forbidden: unable to create new content in namespace volume-placement-9859 because it is being terminated\nE1103 06:56:48.494596 1 tokens_controller.go:260] error synchronizing serviceaccount volume-placement-9859/default: serviceaccounts \"default\" not found\nI1103 06:56:48.538694 1 shared_informer.go:204] Caches are synced for garbage collector \nE1103 06:56:50.816122 1 pv_controller.go:1329] error finding provisioning plugin for claim persistent-local-volumes-test-1732/pvc-nmcrl: no volume plugin matched\nI1103 06:56:50.816520 1 event.go:281] Event(v1.ObjectReference{Kind:\"PersistentVolumeClaim\", Namespace:\"persistent-local-volumes-test-1732\", Name:\"pvc-nmcrl\", UID:\"67c5e36c-6bf7-4098-89d6-e0cd33ca7eda\", APIVersion:\"v1\", ResourceVersion:\"1178\", FieldPath:\"\"}): type: 'Warning' reason: 'ProvisioningFailed' no volume plugin matched\nE1103 06:56:51.140468 1 pv_controller.go:1329] error finding provisioning plugin for claim provisioning-4701/pvc-trfwl: storageclass.storage.k8s.io \"provisioning-4701\" not found\nI1103 06:56:51.140858 1 event.go:281] Event(v1.ObjectReference{Kind:\"PersistentVolumeClaim\", Namespace:\"provisioning-4701\", Name:\"pvc-trfwl\", UID:\"8ed845ce-1102-4f95-987a-d0dd4f8adb44\", 
APIVersion:\"v1\", ResourceVersion:\"1183\", FieldPath:\"\"}): type: 'Warning' reason: 'ProvisioningFailed' storageclass.storage.k8s.io \"provisioning-4701\" not found\nI1103 06:56:52.181524 1 event.go:281] Event(v1.ObjectReference{Kind:\"Deployment\", Namespace:\"webhook-2475\", Name:\"sample-webhook-deployment\", UID:\"41dfd0f0-6e39-41b4-b776-e42c3fba9395\", APIVersion:\"apps/v1\", ResourceVersion:\"1248\", FieldPath:\"\"}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set sample-webhook-deployment-86d95b659d to 1\nI1103 06:56:52.195692 1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"webhook-2475\", Name:\"sample-webhook-deployment-86d95b659d\", UID:\"d427f69f-2970-47d1-8352-9186570de376\", APIVersion:\"apps/v1\", ResourceVersion:\"1249\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: sample-webhook-deployment-86d95b659d-6p6lp\nE1103 06:56:52.614376 1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource\nI1103 06:56:53.384734 1 event.go:281] Event(v1.ObjectReference{Kind:\"Deployment\", Namespace:\"webhook-6882\", Name:\"sample-webhook-deployment\", UID:\"3b77c544-9901-4708-909a-648c476c1243\", APIVersion:\"apps/v1\", ResourceVersion:\"1328\", FieldPath:\"\"}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set sample-webhook-deployment-86d95b659d to 1\nI1103 06:56:53.525065 1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"webhook-6882\", Name:\"sample-webhook-deployment-86d95b659d\", UID:\"bcf4f18d-112e-4514-86ac-b2c0d9c33b43\", APIVersion:\"apps/v1\", ResourceVersion:\"1330\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: sample-webhook-deployment-86d95b659d-lbxtd\nI1103 06:56:53.566286 1 namespace_controller.go:185] Namespace has been deleted resourcequota-8261\nE1103 06:56:53.618566 1 reflector.go:153] 
k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1103 06:56:53.741172 1 namespace_controller.go:185] Namespace has been deleted volume-placement-9859\nI1103 06:56:53.792516 1 event.go:281] Event(v1.ObjectReference{Kind:\"StatefulSet\", Namespace:\"csi-mock-volumes-2275\", Name:\"csi-mockplugin-resizer\", UID:\"af8fc1bf-63b1-476d-a5df-f8322f1187e7\", APIVersion:\"apps/v1\", ResourceVersion:\"1367\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' create Pod csi-mockplugin-resizer-0 in StatefulSet csi-mockplugin-resizer successful\nI1103 06:56:53.792575 1 event.go:281] Event(v1.ObjectReference{Kind:\"StatefulSet\", Namespace:\"csi-mock-volumes-2275\", Name:\"csi-mockplugin\", UID:\"0630bb8d-28be-40d9-8532-efc7a4ffb6c2\", APIVersion:\"apps/v1\", ResourceVersion:\"1362\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\nI1103 06:56:53.792685 1 event.go:281] Event(v1.ObjectReference{Kind:\"StatefulSet\", Namespace:\"csi-mock-volumes-2275\", Name:\"csi-mockplugin-attacher\", UID:\"a5268496-8d55-45c7-a039-df3f7d615a13\", APIVersion:\"apps/v1\", ResourceVersion:\"1365\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful\nI1103 06:56:53.901317 1 event.go:281] Event(v1.ObjectReference{Kind:\"PersistentVolumeClaim\", Namespace:\"csi-mock-volumes-2275\", Name:\"pvc-hrtld\", UID:\"77f3c439-f5ae-4c7f-bc50-91ad16d4606d\", APIVersion:\"v1\", ResourceVersion:\"1387\", FieldPath:\"\"}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-2275\" or manually created by system administrator\nI1103 06:56:53.905197 1 event.go:281] Event(v1.ObjectReference{Kind:\"PersistentVolumeClaim\", 
Namespace:\"csi-mock-volumes-2275\", Name:\"pvc-hrtld\", UID:\"77f3c439-f5ae-4c7f-bc50-91ad16d4606d\", APIVersion:\"v1\", ResourceVersion:\"1387\", FieldPath:\"\"}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-2275\" or manually created by system administrator\nE1103 06:56:54.110063 1 pv_controller.go:1329] error finding provisioning plugin for claim provisioning-9812/pvc-fr2db: storageclass.storage.k8s.io \"provisioning-9812\" not found\nI1103 06:56:54.110449 1 event.go:281] Event(v1.ObjectReference{Kind:\"PersistentVolumeClaim\", Namespace:\"provisioning-9812\", Name:\"pvc-fr2db\", UID:\"51a4f948-7a73-4b80-8c21-dcf28cbb3b26\", APIVersion:\"v1\", ResourceVersion:\"1405\", FieldPath:\"\"}): type: 'Warning' reason: 'ProvisioningFailed' storageclass.storage.k8s.io \"provisioning-9812\" not found\nE1103 06:56:54.621585 1 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1103 06:56:55.266724 1 pv_controller.go:1329] error finding provisioning plugin for claim volumemode-9001/pvc-85pxq: storageclass.storage.k8s.io \"volumemode-9001\" not found\nI1103 06:56:55.267216 1 event.go:281] Event(v1.ObjectReference{Kind:\"PersistentVolumeClaim\", Namespace:\"volumemode-9001\", Name:\"pvc-85pxq\", UID:\"af75bde5-413a-462f-98a8-c0ba964f9f38\", APIVersion:\"v1\", ResourceVersion:\"1451\", FieldPath:\"\"}): type: 'Warning' reason: 'ProvisioningFailed' storageclass.storage.k8s.io \"volumemode-9001\" not found\nE1103 06:56:55.626430 1 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1103 06:56:56.042452 1 pv_controller.go:1329] error finding provisioning plugin for claim persistent-local-volumes-test-2540/pvc-gfcqn: no volume plugin matched\nI1103 
06:56:56.042832 1 event.go:281] Event(v1.ObjectReference{Kind:\"PersistentVolumeClaim\", Namespace:\"persistent-local-volumes-test-2540\", Name:\"pvc-gfcqn\", UID:\"3a6ae72b-be1f-4157-ae8a-67832b0ad94b\", APIVersion:\"v1\", ResourceVersion:\"1473\", FieldPath:\"\"}): type: 'Warning' reason: 'ProvisioningFailed' no volume plugin matched\nE1103 06:56:56.305363 1 pv_controller.go:1329] error finding provisioning plugin for claim volumemode-7875/pvc-rhbb8: storageclass.storage.k8s.io \"volumemode-7875\" not found\nI1103 06:56:56.305821 1 event.go:281] Event(v1.ObjectReference{Kind:\"PersistentVolumeClaim\", Namespace:\"volumemode-7875\", Name:\"pvc-rhbb8\", UID:\"154a8c15-2f1c-4796-9ec3-7736bb059842\", APIVersion:\"v1\", ResourceVersion:\"1478\", FieldPath:\"\"}): type: 'Warning' reason: 'ProvisioningFailed' storageclass.storage.k8s.io \"volumemode-7875\" not found\nE1103 06:56:56.629516 1 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1103 06:56:56.748320 1 tokens_controller.go:260] error synchronizing serviceaccount projected-584/default: secrets \"default-token-vwdvj\" is forbidden: unable to create new content in namespace projected-584 because it is being terminated\nE1103 06:56:56.885072 1 tokens_controller.go:260] error synchronizing serviceaccount emptydir-8834/default: secrets \"default-token-rm7nr\" is forbidden: unable to create new content in namespace emptydir-8834 because it is being terminated\nE1103 06:56:56.951210 1 tokens_controller.go:260] error synchronizing serviceaccount emptydir-8834/default: secrets \"default-token-nsghr\" is forbidden: unable to create new content in namespace emptydir-8834 because it is being terminated\nE1103 06:56:56.973683 1 tokens_controller.go:260] error synchronizing serviceaccount emptydir-8834/default: secrets \"default-token-nbk97\" is forbidden: unable to create new content in namespace 
emptydir-8834 because it is being terminated\nE1103 06:56:57.006495 1 tokens_controller.go:260] error synchronizing serviceaccount emptydir-8834/default: secrets \"default-token-nnd7x\" is forbidden: unable to create new content in namespace emptydir-8834 because it is being terminated\nE1103 06:56:57.154405 1 tokens_controller.go:260] error synchronizing serviceaccount gc-9197/default: secrets \"default-token-8wkqc\" is forbidden: unable to create new content in namespace gc-9197 because it is being terminated\nE1103 06:56:57.169194 1 tokens_controller.go:260] error synchronizing serviceaccount gc-9197/default: secrets \"default-token-8s58t\" is forbidden: unable to create new content in namespace gc-9197 because it is being terminated\nE1103 06:56:57.191458 1 tokens_controller.go:260] error synchronizing serviceaccount gc-9197/default: secrets \"default-token-8sq4z\" is forbidden: unable to create new content in namespace gc-9197 because it is being terminated\nE1103 06:56:57.234405 1 tokens_controller.go:260] error synchronizing serviceaccount gc-9197/default: secrets \"default-token-qjkjq\" is forbidden: unable to create new content in namespace gc-9197 because it is being terminated\nE1103 06:56:57.631834 1 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1103 06:56:58.634119 1 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1103 06:56:59.636702 1 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1103 06:57:00.313079 1 tokens_controller.go:260] error synchronizing serviceaccount security-context-test-3748/default: secrets \"default-token-fq5fv\" is forbidden: unable to create new content in namespace 
security-context-test-3748 because it is being terminated\nE1103 06:57:00.410150 1 tokens_controller.go:260] error synchronizing serviceaccount security-context-test-3748/default: secrets \"default-token-fnhkz\" is forbidden: unable to create new content in namespace security-context-test-3748 because it is being terminated\nE1103 06:57:00.429604 1 tokens_controller.go:260] error synchronizing serviceaccount security-context-test-3748/default: secrets \"default-token-nh5v8\" is forbidden: unable to create new content in namespace security-context-test-3748 because it is being terminated\nE1103 06:57:00.639726 1 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1103 06:57:01.238736 1 event.go:281] Event(v1.ObjectReference{Kind:\"StatefulSet\", Namespace:\"statefulset-4439\", Name:\"ss2\", UID:\"6255c3e0-193d-4a84-869a-70ba4b29e500\", APIVersion:\"apps/v1\", ResourceVersion:\"807\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' create Pod ss2-1 in StatefulSet ss2 successful\nI1103 06:57:01.327906 1 event.go:281] Event(v1.ObjectReference{Kind:\"PersistentVolumeClaim\", Namespace:\"csi-mock-volumes-2275\", Name:\"pvc-hrtld\", UID:\"77f3c439-f5ae-4c7f-bc50-91ad16d4606d\", APIVersion:\"v1\", ResourceVersion:\"1387\", FieldPath:\"\"}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-2275\" or manually created by system administrator\nI1103 06:57:01.576655 1 namespace_controller.go:185] Namespace has been deleted emptydir-4492\nE1103 06:57:01.649553 1 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1103 06:57:02.007022 1 tokens_controller.go:260] error synchronizing serviceaccount kubectl-96/default: secrets 
\"default-token-ggcdf\" is forbidden: unable to create new content in namespace kubectl-96 because it is being terminated\nE1103 06:57:02.025441 1 tokens_controller.go:260] error synchronizing serviceaccount kubectl-96/default: secrets \"default-token-pslcq\" is forbidden: unable to create new content in namespace kubectl-96 because it is being terminated\nE1103 06:57:02.052175 1 tokens_controller.go:260] error synchronizing serviceaccount kubectl-96/default: secrets \"default-token-kqsxk\" is forbidden: unable to create new content in namespace kubectl-96 because it is being terminated\nE1103 06:57:02.084564 1 tokens_controller.go:260] error synchronizing serviceaccount kubectl-96/default: secrets \"default-token-wwwz9\" is forbidden: unable to create new content in namespace kubectl-96 because it is being terminated\nI1103 06:57:02.088972 1 namespace_controller.go:185] Namespace has been deleted emptydir-8834\nE1103 06:57:02.137015 1 tokens_controller.go:260] error synchronizing serviceaccount kubectl-96/default: secrets \"default-token-gnrwb\" is forbidden: unable to create new content in namespace kubectl-96 because it is being terminated\nI1103 06:57:02.145166 1 namespace_controller.go:185] Namespace has been deleted projected-584\nE1103 06:57:02.230951 1 tokens_controller.go:260] error synchronizing serviceaccount kubectl-96/default: secrets \"default-token-bxd4q\" is forbidden: unable to create new content in namespace kubectl-96 because it is being terminated\nI1103 06:57:02.393151 1 namespace_controller.go:185] Namespace has been deleted gc-9197\nE1103 06:57:02.651470 1 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1103 06:57:03.653247 1 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1103 06:57:04.209224 1 
namespace_controller.go:185] Namespace has been deleted secrets-7025\nI1103 06:57:04.595441 1 namespace_controller.go:185] Namespace has been deleted emptydir-1334\nE1103 06:57:04.655417 1 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1103 06:57:05.318693 1 tokens_controller.go:260] error synchronizing serviceaccount exempted-namesapce/default: secrets \"default-token-r4mgr\" is forbidden: unable to create new content in namespace exempted-namesapce because it is being terminated\nE1103 06:57:05.340157 1 tokens_controller.go:260] error synchronizing serviceaccount exempted-namesapce/default: secrets \"default-token-jclhr\" is forbidden: unable to create new content in namespace exempted-namesapce because it is being terminated\nE1103 06:57:05.364420 1 tokens_controller.go:260] error synchronizing serviceaccount exempted-namesapce/default: secrets \"default-token-2lpnq\" is forbidden: unable to create new content in namespace exempted-namesapce because it is being terminated\nE1103 06:57:05.411479 1 tokens_controller.go:260] error synchronizing serviceaccount exempted-namesapce/default: secrets \"default-token-x987n\" is forbidden: unable to create new content in namespace exempted-namesapce because it is being terminated\nE1103 06:57:05.482700 1 tokens_controller.go:260] error synchronizing serviceaccount exempted-namesapce/default: secrets \"default-token-6ppbr\" is forbidden: unable to create new content in namespace exempted-namesapce because it is being terminated\nE1103 06:57:05.585082 1 tokens_controller.go:260] error synchronizing serviceaccount exempted-namesapce/default: secrets \"default-token-7r9z2\" is forbidden: unable to create new content in namespace exempted-namesapce because it is being terminated\nE1103 06:57:05.687119 1 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list 
*v1.PartialObjectMetadata: the server could not find the requested resource\nI1103 06:57:05.706284 1 event.go:281] Event(v1.ObjectReference{Kind:\"Deployment\", Namespace:\"deployment-5531\", Name:\"test-new-deployment\", UID:\"115a5014-5154-4e56-9d85-45681fe9e214\", APIVersion:\"apps/v1\", ResourceVersion:\"1881\", FieldPath:\"\"}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test-new-deployment-595b5b9587 to 1\nI1103 06:57:05.752087 1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"deployment-5531\", Name:\"test-new-deployment-595b5b9587\", UID:\"3553951a-04bf-4f9b-8ef9-5ebe57d737b2\", APIVersion:\"apps/v1\", ResourceVersion:\"1884\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-new-deployment-595b5b9587-hc286\nE1103 06:57:05.798426 1 tokens_controller.go:260] error synchronizing serviceaccount exempted-namesapce/default: secrets \"default-token-t7nfz\" is forbidden: unable to create new content in namespace exempted-namesapce because it is being terminated\nE1103 06:57:06.131616 1 tokens_controller.go:260] error synchronizing serviceaccount exempted-namesapce/default: secrets \"default-token-bwwx6\" is forbidden: unable to create new content in namespace exempted-namesapce because it is being terminated\nE1103 06:57:06.688926 1 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1103 06:57:06.700119 1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-3239\nE1103 06:57:06.791513 1 tokens_controller.go:260] error synchronizing serviceaccount exempted-namesapce/default: secrets \"default-token-2czr2\" is forbidden: unable to create new content in namespace exempted-namesapce because it is being terminated\nI1103 06:57:07.318914 1 namespace_controller.go:185] Namespace has been deleted kubectl-96\nI1103 06:57:07.502603 1 event.go:281] 
Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"kubelet-9511\", Name:\"cleanup20-d9990ddd-00cb-499c-a521-caf29622374f\", UID:\"d533b438-40c1-40a0-9179-8cc0afe3dd0b\", APIVersion:\"v1\", ResourceVersion:\"2000\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cleanup20-d9990ddd-00cb-499c-a521-caf29622374f-tzjdh\nI1103 06:57:07.522855 1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"kubelet-9511\", Name:\"cleanup20-d9990ddd-00cb-499c-a521-caf29622374f\", UID:\"d533b438-40c1-40a0-9179-8cc0afe3dd0b\", APIVersion:\"v1\", ResourceVersion:\"2000\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cleanup20-d9990ddd-00cb-499c-a521-caf29622374f-9rfvw\nI1103 06:57:07.523544 1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"kubelet-9511\", Name:\"cleanup20-d9990ddd-00cb-499c-a521-caf29622374f\", UID:\"d533b438-40c1-40a0-9179-8cc0afe3dd0b\", APIVersion:\"v1\", ResourceVersion:\"2000\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cleanup20-d9990ddd-00cb-499c-a521-caf29622374f-jqrcb\nI1103 06:57:07.543274 1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"kubelet-9511\", Name:\"cleanup20-d9990ddd-00cb-499c-a521-caf29622374f\", UID:\"d533b438-40c1-40a0-9179-8cc0afe3dd0b\", APIVersion:\"v1\", ResourceVersion:\"2000\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cleanup20-d9990ddd-00cb-499c-a521-caf29622374f-gcttm\nI1103 06:57:07.554079 1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"kubelet-9511\", Name:\"cleanup20-d9990ddd-00cb-499c-a521-caf29622374f\", UID:\"d533b438-40c1-40a0-9179-8cc0afe3dd0b\", APIVersion:\"v1\", ResourceVersion:\"2000\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cleanup20-d9990ddd-00cb-499c-a521-caf29622374f-djl8k\nI1103 06:57:07.554192 1 event.go:281] 
Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"kubelet-9511\", Name:\"cleanup20-d9990ddd-00cb-499c-a521-caf29622374f\", UID:\"d533b438-40c1-40a0-9179-8cc0afe3dd0b\", APIVersion:\"v1\", ResourceVersion:\"2000\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cleanup20-d9990ddd-00cb-499c-a521-caf29622374f-lt5qk\nI1103 06:57:07.554814 1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"kubelet-9511\", Name:\"cleanup20-d9990ddd-00cb-499c-a521-caf29622374f\", UID:\"d533b438-40c1-40a0-9179-8cc0afe3dd0b\", APIVersion:\"v1\", ResourceVersion:\"2000\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cleanup20-d9990ddd-00cb-499c-a521-caf29622374f-g6p6j\nI1103 06:57:07.582875 1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"kubelet-9511\", Name:\"cleanup20-d9990ddd-00cb-499c-a521-caf29622374f\", UID:\"d533b438-40c1-40a0-9179-8cc0afe3dd0b\", APIVersion:\"v1\", ResourceVersion:\"2000\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cleanup20-d9990ddd-00cb-499c-a521-caf29622374f-ww2nc\nI1103 06:57:07.583205 1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"kubelet-9511\", Name:\"cleanup20-d9990ddd-00cb-499c-a521-caf29622374f\", UID:\"d533b438-40c1-40a0-9179-8cc0afe3dd0b\", APIVersion:\"v1\", ResourceVersion:\"2000\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cleanup20-d9990ddd-00cb-499c-a521-caf29622374f-fskwj\nI1103 06:57:07.583911 1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"kubelet-9511\", Name:\"cleanup20-d9990ddd-00cb-499c-a521-caf29622374f\", UID:\"d533b438-40c1-40a0-9179-8cc0afe3dd0b\", APIVersion:\"v1\", ResourceVersion:\"2000\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cleanup20-d9990ddd-00cb-499c-a521-caf29622374f-9nckk\nI1103 06:57:07.584114 1 event.go:281] 
Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"kubelet-9511\", Name:\"cleanup20-d9990ddd-00cb-499c-a521-caf29622374f\", UID:\"d533b438-40c1-40a0-9179-8cc0afe3dd0b\", APIVersion:\"v1\", ResourceVersion:\"2000\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cleanup20-d9990ddd-00cb-499c-a521-caf29622374f-zj5ds\nI1103 06:57:07.585959 1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"kubelet-9511\", Name:\"cleanup20-d9990ddd-00cb-499c-a521-caf29622374f\", UID:\"d533b438-40c1-40a0-9179-8cc0afe3dd0b\", APIVersion:\"v1\", ResourceVersion:\"2000\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cleanup20-d9990ddd-00cb-499c-a521-caf29622374f-6wj82\nI1103 06:57:07.591256 1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"kubelet-9511\", Name:\"cleanup20-d9990ddd-00cb-499c-a521-caf29622374f\", UID:\"d533b438-40c1-40a0-9179-8cc0afe3dd0b\", APIVersion:\"v1\", ResourceVersion:\"2000\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cleanup20-d9990ddd-00cb-499c-a521-caf29622374f-mcdtr\nI1103 06:57:07.591508 1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"kubelet-9511\", Name:\"cleanup20-d9990ddd-00cb-499c-a521-caf29622374f\", UID:\"d533b438-40c1-40a0-9179-8cc0afe3dd0b\", APIVersion:\"v1\", ResourceVersion:\"2000\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cleanup20-d9990ddd-00cb-499c-a521-caf29622374f-pkps4\nI1103 06:57:07.615440 1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"kubelet-9511\", Name:\"cleanup20-d9990ddd-00cb-499c-a521-caf29622374f\", UID:\"d533b438-40c1-40a0-9179-8cc0afe3dd0b\", APIVersion:\"v1\", ResourceVersion:\"2000\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cleanup20-d9990ddd-00cb-499c-a521-caf29622374f-mqkcp\nI1103 06:57:07.689050 1 event.go:281] 
Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"kubelet-9511\", Name:\"cleanup20-d9990ddd-00cb-499c-a521-caf29622374f\", UID:\"d533b438-40c1-40a0-9179-8cc0afe3dd0b\", APIVersion:\"v1\", ResourceVersion:\"2000\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cleanup20-d9990ddd-00cb-499c-a521-caf29622374f-rrpkn\nI1103 06:57:07.689876 1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"kubelet-9511\", Name:\"cleanup20-d9990ddd-00cb-499c-a521-caf29622374f\", UID:\"d533b438-40c1-40a0-9179-8cc0afe3dd0b\", APIVersion:\"v1\", ResourceVersion:\"2000\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cleanup20-d9990ddd-00cb-499c-a521-caf29622374f-wjb2w\nI1103 06:57:07.690100 1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"kubelet-9511\", Name:\"cleanup20-d9990ddd-00cb-499c-a521-caf29622374f\", UID:\"d533b438-40c1-40a0-9179-8cc0afe3dd0b\", APIVersion:\"v1\", ResourceVersion:\"2000\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cleanup20-d9990ddd-00cb-499c-a521-caf29622374f-cj99l\nI1103 06:57:07.690281 1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"kubelet-9511\", Name:\"cleanup20-d9990ddd-00cb-499c-a521-caf29622374f\", UID:\"d533b438-40c1-40a0-9179-8cc0afe3dd0b\", APIVersion:\"v1\", ResourceVersion:\"2000\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cleanup20-d9990ddd-00cb-499c-a521-caf29622374f-ql6k5\nI1103 06:57:07.690305 1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"kubelet-9511\", Name:\"cleanup20-d9990ddd-00cb-499c-a521-caf29622374f\", UID:\"d533b438-40c1-40a0-9179-8cc0afe3dd0b\", APIVersion:\"v1\", ResourceVersion:\"2000\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cleanup20-d9990ddd-00cb-499c-a521-caf29622374f-js6vc\nE1103 06:57:07.701135 1 reflector.go:153] 
k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1103 06:57:08.263094 1 tokens_controller.go:260] error synchronizing serviceaccount exempted-namesapce/default: secrets \"default-token-6s4kt\" is forbidden: unable to create new content in namespace exempted-namesapce because it is being terminated\nE1103 06:57:08.356447 1 tokens_controller.go:260] error synchronizing serviceaccount persistent-local-volumes-test-9640/default: secrets \"default-token-tmllb\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-9640 because it is being terminated\nE1103 06:57:08.429356 1 tokens_controller.go:260] error synchronizing serviceaccount persistent-local-volumes-test-9640/default: secrets \"default-token-fp5m6\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-9640 because it is being terminated\nE1103 06:57:08.468036 1 tokens_controller.go:260] error synchronizing serviceaccount persistent-local-volumes-test-2540/default: secrets \"default-token-wvdph\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-2540 because it is being terminated\nE1103 06:57:08.533197 1 tokens_controller.go:260] error synchronizing serviceaccount persistent-local-volumes-test-9640/default: secrets \"default-token-qzsxq\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-9640 because it is being terminated\nE1103 06:57:08.537305 1 tokens_controller.go:260] error synchronizing serviceaccount persistent-local-volumes-test-2540/default: secrets \"default-token-hvvv9\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-2540 because it is being terminated\nE1103 06:57:08.559327 1 tokens_controller.go:260] error synchronizing serviceaccount persistent-local-volumes-test-2540/default: secrets \"default-token-j4sp2\" is forbidden: 
unable to create new content in namespace persistent-local-volumes-test-2540 because it is being terminated\nE1103 06:57:08.594154 1 tokens_controller.go:260] error synchronizing serviceaccount persistent-local-volumes-test-9640/default: secrets \"default-token-tnqkd\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-9640 because it is being terminated\nE1103 06:57:08.599768 1 tokens_controller.go:260] error synchronizing serviceaccount persistent-local-volumes-test-2540/default: secrets \"default-token-2flm2\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-2540 because it is being terminated\nE1103 06:57:08.707876 1 tokens_controller.go:260] error synchronizing serviceaccount persistent-local-volumes-test-2540/default: secrets \"default-token-679ss\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-2540 because it is being terminated\nE1103 06:57:08.708387 1 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1103 06:57:08.708706 1 tokens_controller.go:260] error synchronizing serviceaccount persistent-local-volumes-test-9640/default: secrets \"default-token-cm9sl\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-9640 because it is being terminated\nE1103 06:57:08.817631 1 tokens_controller.go:260] error synchronizing serviceaccount persistent-local-volumes-test-2540/default: secrets \"default-token-xsrtw\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-2540 because it is being terminated\nE1103 06:57:08.819607 1 tokens_controller.go:260] error synchronizing serviceaccount persistent-local-volumes-test-9640/default: secrets \"default-token-ggdp7\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-9640 because it is being 
terminated\nE1103 06:57:09.002466 1 tokens_controller.go:260] error synchronizing serviceaccount persistent-local-volumes-test-9640/default: secrets \"default-token-5wbl5\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-9640 because it is being terminated\nE1103 06:57:09.003452 1 tokens_controller.go:260] error synchronizing serviceaccount persistent-local-volumes-test-2540/default: secrets \"default-token-qdfg4\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-2540 because it is being terminated\nE1103 06:57:09.339997 1 tokens_controller.go:260] error synchronizing serviceaccount persistent-local-volumes-test-2540/default: secrets \"default-token-8h269\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-2540 because it is being terminated\nE1103 06:57:09.710813 1 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1103 06:57:10.719477 1 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1103 06:57:10.850750 1 tokens_controller.go:260] error synchronizing serviceaccount exempted-namesapce/default: secrets \"default-token-69hr9\" is forbidden: unable to create new content in namespace exempted-namesapce because it is being terminated\nE1103 06:57:11.371511 1 tokens_controller.go:260] error synchronizing serviceaccount webhook-8732-markers/default: secrets \"default-token-bhlz4\" is forbidden: unable to create new content in namespace webhook-8732-markers because it is being terminated\nE1103 06:57:11.428460 1 tokens_controller.go:260] error synchronizing serviceaccount webhook-8732-markers/default: secrets \"default-token-jxjgx\" is forbidden: unable to create new content in namespace webhook-8732-markers because it is 
being terminated\nE1103 06:57:11.505594 1 tokens_controller.go:260] error synchronizing serviceaccount webhook-6882-markers/default: secrets \"default-token-wjdh9\" is forbidden: unable to create new content in namespace webhook-6882-markers because it is being terminated\nE1103 06:57:11.508369 1 tokens_controller.go:260] error synchronizing serviceaccount webhook-8732-markers/default: secrets \"default-token-dm7kd\" is forbidden: unable to create new content in namespace webhook-8732-markers because it is being terminated\nE1103 06:57:11.572063 1 tokens_controller.go:260] error synchronizing serviceaccount webhook-6882-markers/default: secrets \"default-token-4f67k\" is forbidden: unable to create new content in namespace webhook-6882-markers because it is being terminated\nE1103 06:57:11.572633 1 tokens_controller.go:260] error synchronizing serviceaccount webhook-8732-markers/default: secrets \"default-token-m5xbn\" is forbidden: unable to create new content in namespace webhook-8732-markers because it is being terminated\nE1103 06:57:11.651308 1 tokens_controller.go:260] error synchronizing serviceaccount webhook-6882-markers/default: secrets \"default-token-bx8gw\" is forbidden: unable to create new content in namespace webhook-6882-markers because it is being terminated\nE1103 06:57:11.685356 1 tokens_controller.go:260] error synchronizing serviceaccount webhook-8732-markers/default: secrets \"default-token-fknnl\" is forbidden: unable to create new content in namespace webhook-8732-markers because it is being terminated\nE1103 06:57:11.702385 1 tokens_controller.go:260] error synchronizing serviceaccount webhook-6882-markers/default: secrets \"default-token-pp2x4\" is forbidden: unable to create new content in namespace webhook-6882-markers because it is being terminated\nE1103 06:57:11.723241 1 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested 
resource\nE1103 06:57:11.773541 1 tokens_controller.go:260] error synchronizing serviceaccount custom-resource-definition-3847/default: secrets \"default-token-p2rrt\" is forbidden: unable to create new content in namespace custom-resource-definition-3847 because it is being terminated\nE1103 06:57:11.773692 1 tokens_controller.go:260] error synchronizing serviceaccount webhook-6882-markers/default: secrets \"default-token-c5hm7\" is forbidden: unable to create new content in namespace webhook-6882-markers because it is being terminated\nE1103 06:57:11.814185 1 tokens_controller.go:260] error synchronizing serviceaccount custom-resource-definition-3847/default: secrets \"default-token-rtxfn\" is forbidden: unable to create new content in namespace custom-resource-definition-3847 because it is being terminated\nE1103 06:57:11.814231 1 tokens_controller.go:260] error synchronizing serviceaccount webhook-2475-markers/default: secrets \"default-token-4kb8l\" is forbidden: unable to create new content in namespace webhook-2475-markers because it is being terminated\nE1103 06:57:11.816273 1 tokens_controller.go:260] error synchronizing serviceaccount webhook-8732-markers/default: secrets \"default-token-2b88m\" is forbidden: unable to create new content in namespace webhook-8732-markers because it is being terminated\nE1103 06:57:11.945038 1 tokens_controller.go:260] error synchronizing serviceaccount webhook-2475-markers/default: secrets \"default-token-8mtqh\" is forbidden: unable to create new content in namespace webhook-2475-markers because it is being terminated\nE1103 06:57:11.993558 1 tokens_controller.go:260] error synchronizing serviceaccount custom-resource-definition-3847/default: secrets \"default-token-bs68s\" is forbidden: unable to create new content in namespace custom-resource-definition-3847 because it is being terminated\nE1103 06:57:12.096659 1 tokens_controller.go:260] error synchronizing serviceaccount webhook-6882-markers/default: secrets 
\"default-token-hpztx\" is forbidden: unable to create new content in namespace webhook-6882-markers because it is being terminated\nE1103 06:57:12.235301 1 tokens_controller.go:260] error synchronizing serviceaccount webhook-8732-markers/default: serviceaccounts \"default\" not found\nE1103 06:57:12.288280 1 tokens_controller.go:260] error synchronizing serviceaccount webhook-2475-markers/default: secrets \"default-token-t66f2\" is forbidden: unable to create new content in namespace webhook-2475-markers because it is being terminated\nE1103 06:57:12.589462 1 tokens_controller.go:260] error synchronizing serviceaccount webhook-6882-markers/default: secrets \"default-token-6wh5t\" is forbidden: unable to create new content in namespace webhook-6882-markers because it is being terminated\nE1103 06:57:12.639835 1 tokens_controller.go:260] error synchronizing serviceaccount webhook-2475-markers/default: secrets \"default-token-4prcg\" is forbidden: unable to create new content in namespace webhook-2475-markers because it is being terminated\nE1103 06:57:12.727557 1 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1103 06:57:12.780047 1 tokens_controller.go:260] error synchronizing serviceaccount webhook-2475-markers/default: secrets \"default-token-rfcvz\" is forbidden: unable to create new content in namespace webhook-2475-markers because it is being terminated\nE1103 06:57:13.474598 1 tokens_controller.go:260] error synchronizing serviceaccount zone-support-5395/default: secrets \"default-token-6n5vt\" is forbidden: unable to create new content in namespace zone-support-5395 because it is being terminated\nE1103 06:57:13.504693 1 tokens_controller.go:260] error synchronizing serviceaccount zone-support-5395/default: secrets \"default-token-bdrjf\" is forbidden: unable to create new content in namespace zone-support-5395 because it is being 
terminated\nE1103 06:57:13.547253 1 tokens_controller.go:260] error synchronizing serviceaccount zone-support-5395/default: secrets \"default-token-g6gtk\" is forbidden: unable to create new content in namespace zone-support-5395 because it is being terminated\nE1103 06:57:13.586451 1 tokens_controller.go:260] error synchronizing serviceaccount zone-support-5395/default: secrets \"default-token-vbzfw\" is forbidden: unable to create new content in namespace zone-support-5395 because it is being terminated\nE1103 06:57:13.653540 1 tokens_controller.go:260] error synchronizing serviceaccount zone-support-5395/default: secrets \"default-token-bj2q2\" is forbidden: unable to create new content in namespace zone-support-5395 because it is being terminated\nE1103 06:57:13.730449 1 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1103 06:57:13.783569 1 tokens_controller.go:260] error synchronizing serviceaccount zone-support-5395/default: secrets \"default-token-6827m\" is forbidden: unable to create new content in namespace zone-support-5395 because it is being terminated\nE1103 06:57:14.025308 1 tokens_controller.go:260] error synchronizing serviceaccount zone-support-5395/default: secrets \"default-token-h6jfj\" is forbidden: unable to create new content in namespace zone-support-5395 because it is being terminated\nI1103 06:57:14.506327 1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-2540\nI1103 06:57:14.586351 1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-9640\nE1103 06:57:14.733256 1 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1103 06:57:15.330770 1 namespace_controller.go:185] Namespace has been deleted downward-api-2755\nE1103 
06:57:15.736006 1 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1103 06:57:15.830988 1 resource_quota_controller.go:305] Resource quota has been deleted resourcequota-3610/test-quota\nE1103 06:57:15.850673 1 tokens_controller.go:260] error synchronizing serviceaccount secrets-915/default: secrets \"default-token-q5n8s\" is forbidden: unable to create new content in namespace secrets-915 because it is being terminated\nE1103 06:57:15.868807 1 tokens_controller.go:260] error synchronizing serviceaccount secrets-915/default: secrets \"default-token-tpkwq\" is forbidden: unable to create new content in namespace secrets-915 because it is being terminated\nE1103 06:57:15.889920 1 tokens_controller.go:260] error synchronizing serviceaccount secrets-915/default: secrets \"default-token-6gzb2\" is forbidden: unable to create new content in namespace secrets-915 because it is being terminated\nE1103 06:57:15.925749 1 tokens_controller.go:260] error synchronizing serviceaccount secrets-915/default: secrets \"default-token-ll2b8\" is forbidden: unable to create new content in namespace secrets-915 because it is being terminated\nI1103 06:57:16.130690 1 event.go:281] Event(v1.ObjectReference{Kind:\"PersistentVolumeClaim\", Namespace:\"csi-mock-volumes-2275\", Name:\"pvc-hrtld\", UID:\"77f3c439-f5ae-4c7f-bc50-91ad16d4606d\", APIVersion:\"v1\", ResourceVersion:\"1387\", FieldPath:\"\"}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-2275\" or manually created by system administrator\nE1103 06:57:16.391500 1 tokens_controller.go:260] error synchronizing serviceaccount kubectl-1706/default: secrets \"default-token-l5x7g\" is forbidden: unable to create new content in namespace kubectl-1706 because it is being terminated\nE1103 06:57:16.410614 1 
tokens_controller.go:260] error synchronizing serviceaccount kubectl-1706/default: secrets \"default-token-x4vjk\" is forbidden: unable to create new content in namespace kubectl-1706 because it is being terminated\nE1103 06:57:16.439976 1 tokens_controller.go:260] error synchronizing serviceaccount kubectl-1706/default: secrets \"default-token-499tz\" is forbidden: unable to create new content in namespace kubectl-1706 because it is being terminated\nE1103 06:57:16.474866 1 tokens_controller.go:260] error synchronizing serviceaccount kubectl-1706/default: secrets \"default-token-qmmq9\" is forbidden: unable to create new content in namespace kubectl-1706 because it is being terminated\nE1103 06:57:16.744386 1 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1103 06:57:17.400773 1 namespace_controller.go:185] Namespace has been deleted webhook-8732-markers\nE1103 06:57:17.746644 1 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1103 06:57:17.966166 1 namespace_controller.go:185] Namespace has been deleted exempted-namesapce\nI1103 06:57:17.977072 1 namespace_controller.go:185] Namespace has been deleted webhook-2475-markers\nI1103 06:57:18.046735 1 namespace_controller.go:185] Namespace has been deleted webhook-8732\nI1103 06:57:18.222299 1 namespace_controller.go:185] Namespace has been deleted sysctl-2435\nI1103 06:57:18.405227 1 namespace_controller.go:185] Namespace has been deleted webhook-6882-markers\nI1103 06:57:18.408054 1 namespace_controller.go:185] Namespace has been deleted custom-resource-definition-3847\nI1103 06:57:18.741208 1 shared_informer.go:197] Waiting for caches to sync for garbage collector\nI1103 06:57:18.741297 1 shared_informer.go:204] Caches are synced for garbage collector \nE1103 06:57:18.748968 1 
reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1103 06:57:18.804123 1 namespace_controller.go:185] Namespace has been deleted configmap-5652\nI1103 06:57:18.850662 1 namespace_controller.go:185] Namespace has been deleted webhook-6882\nI1103 06:57:19.238086 1 namespace_controller.go:185] Namespace has been deleted containers-6399\nI1103 06:57:19.325429 1 namespace_controller.go:185] Namespace has been deleted zone-support-5395\nE1103 06:57:19.735625 1 namespace_controller.go:162] deletion of namespace svcaccounts-3966 failed: unexpected items still remain in namespace: svcaccounts-3966 for gvr: /v1, Resource=pods\nE1103 06:57:19.752455 1 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1103 06:57:19.944932 1 namespace_controller.go:162] deletion of namespace svcaccounts-3966 failed: unexpected items still remain in namespace: svcaccounts-3966 for gvr: /v1, Resource=pods\nE1103 06:57:20.200024 1 namespace_controller.go:162] deletion of namespace svcaccounts-3966 failed: unexpected items still remain in namespace: svcaccounts-3966 for gvr: /v1, Resource=pods\n==== END logs for container kube-controller-manager of pod kube-system/kube-controller-manager-kind-control-plane ====\n==== START logs for container kube-proxy of pod kube-system/kube-proxy-5qht6 ====\nW1103 06:55:36.043020 1 server_others.go:323] Unknown proxy mode \"\", assuming iptables proxy\nE1103 06:55:36.067675 1 node.go:124] Failed to retrieve node info: nodes \"$(node_name)\" not found\nE1103 06:55:37.091934 1 node.go:124] Failed to retrieve node info: nodes \"$(node_name)\" not found\nE1103 06:55:39.150501 1 node.go:124] Failed to retrieve node info: nodes \"$(node_name)\" not found\nE1103 06:55:43.293812 1 node.go:124] Failed to retrieve node info: nodes 
\"$(node_name)\" not found\nE1103 06:55:52.303690 1 node.go:124] Failed to retrieve node info: nodes \"$(node_name)\" not found\nI1103 06:55:52.303725 1 server_others.go:140] can't determine this node's IP, assuming 127.0.0.1; if this is incorrect, please set the --bind-address flag\nI1103 06:55:52.303741 1 server_others.go:145] Using iptables Proxier.\nI1103 06:55:52.304756 1 server.go:570] Version: v1.18.0-alpha.0.178+0c66e64b140011\nI1103 06:55:52.305921 1 conntrack.go:52] Setting nf_conntrack_max to 262144\nI1103 06:55:52.306088 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400\nI1103 06:55:52.306142 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600\nI1103 06:55:52.306366 1 config.go:131] Starting endpoints config controller\nI1103 06:55:52.306387 1 shared_informer.go:197] Waiting for caches to sync for endpoints config\nI1103 06:55:52.306573 1 config.go:313] Starting service config controller\nI1103 06:55:52.306633 1 shared_informer.go:197] Waiting for caches to sync for service config\nI1103 06:55:52.406570 1 shared_informer.go:204] Caches are synced for endpoints config \nI1103 06:55:52.406962 1 shared_informer.go:204] Caches are synced for service config \n==== END logs for container kube-proxy of pod kube-system/kube-proxy-5qht6 ====\n==== START logs for container kube-proxy of pod kube-system/kube-proxy-5zhtl ====\nW1103 06:55:17.577505 1 server_others.go:323] Unknown proxy mode \"\", assuming iptables proxy\nE1103 06:55:17.589291 1 node.go:124] Failed to retrieve node info: nodes \"$(node_name)\" not found\nE1103 06:55:18.607915 1 node.go:124] Failed to retrieve node info: nodes \"$(node_name)\" not found\nE1103 06:55:21.010568 1 node.go:124] Failed to retrieve node info: nodes \"$(node_name)\" not found\nE1103 06:55:25.063731 1 node.go:124] Failed to retrieve node info: nodes \"$(node_name)\" not found\nE1103 06:55:34.323198 1 node.go:124] Failed to retrieve node info: 
nodes \"$(node_name)\" not found\nI1103 06:55:34.323583 1 server_others.go:140] can't determine this node's IP, assuming 127.0.0.1; if this is incorrect, please set the --bind-address flag\nI1103 06:55:34.323942 1 server_others.go:145] Using iptables Proxier.\nI1103 06:55:34.324977 1 server.go:570] Version: v1.18.0-alpha.0.178+0c66e64b140011\nI1103 06:55:34.326345 1 conntrack.go:52] Setting nf_conntrack_max to 262144\nI1103 06:55:34.326719 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400\nI1103 06:55:34.326956 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600\nI1103 06:55:34.327614 1 config.go:131] Starting endpoints config controller\nI1103 06:55:34.327662 1 shared_informer.go:197] Waiting for caches to sync for endpoints config\nI1103 06:55:34.327682 1 config.go:313] Starting service config controller\nI1103 06:55:34.327765 1 shared_informer.go:197] Waiting for caches to sync for service config\nI1103 06:55:34.427968 1 shared_informer.go:204] Caches are synced for endpoints config \nI1103 06:55:34.428296 1 shared_informer.go:204] Caches are synced for service config \n==== END logs for container kube-proxy of pod kube-system/kube-proxy-5zhtl ====\n==== START logs for container kube-proxy of pod kube-system/kube-proxy-xzk56 ====\nW1103 06:55:36.043021 1 server_others.go:323] Unknown proxy mode \"\", assuming iptables proxy\nE1103 06:55:36.064681 1 node.go:124] Failed to retrieve node info: nodes \"$(node_name)\" not found\nE1103 06:55:37.227350 1 node.go:124] Failed to retrieve node info: nodes \"$(node_name)\" not found\nE1103 06:55:39.583412 1 node.go:124] Failed to retrieve node info: nodes \"$(node_name)\" not found\nE1103 06:55:44.159107 1 node.go:124] Failed to retrieve node info: nodes \"$(node_name)\" not found\nE1103 06:55:53.149312 1 node.go:124] Failed to retrieve node info: nodes \"$(node_name)\" not found\nI1103 06:55:53.149375 1 server_others.go:140] can't determine 
this node's IP, assuming 127.0.0.1; if this is incorrect, please set the --bind-address flag\nI1103 06:55:53.149393 1 server_others.go:145] Using iptables Proxier.\nI1103 06:55:53.149968 1 server.go:570] Version: v1.18.0-alpha.0.178+0c66e64b140011\nI1103 06:55:53.152356 1 conntrack.go:52] Setting nf_conntrack_max to 262144\nI1103 06:55:53.152849 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400\nI1103 06:55:53.153017 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600\nI1103 06:55:53.154414 1 config.go:313] Starting service config controller\nI1103 06:55:53.154449 1 shared_informer.go:197] Waiting for caches to sync for service config\nI1103 06:55:53.154526 1 config.go:131] Starting endpoints config controller\nI1103 06:55:53.154545 1 shared_informer.go:197] Waiting for caches to sync for endpoints config\nI1103 06:55:53.254902 1 shared_informer.go:204] Caches are synced for endpoints config \nI1103 06:55:53.254912 1 shared_informer.go:204] Caches are synced for service config \n==== END logs for container kube-proxy of pod kube-system/kube-proxy-xzk56 ====\n==== START logs for container kube-scheduler of pod kube-system/kube-scheduler-kind-control-plane ====\nI1103 06:54:53.070540 1 serving.go:312] Generated self-signed cert in-memory\nW1103 06:54:57.367612 1 authentication.go:332] Unable to get configmap/extension-apiserver-authentication in kube-system. 
Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'\nW1103 06:54:57.367665 1 authentication.go:259] Error looking up in-cluster authentication configuration: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot get resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"\nW1103 06:54:57.367681 1 authentication.go:260] Continuing without authentication configuration. This may treat all requests as anonymous.\nW1103 06:54:57.367694 1 authentication.go:261] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false\nW1103 06:54:57.392777 1 authorization.go:47] Authorization is disabled\nW1103 06:54:57.392822 1 authentication.go:92] Authentication is disabled\nI1103 06:54:57.392840 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251\nI1103 06:54:57.394511 1 secure_serving.go:174] Serving securely on 127.0.0.1:10259\nI1103 06:54:57.395504 1 tlsconfig.go:220] Starting DynamicServingCertificateController\nE1103 06:54:57.397600 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope\nE1103 06:54:57.397696 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope\nE1103 06:54:57.399945 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope\nE1103 
06:54:57.400402 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope\nE1103 06:54:57.400469 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope\nE1103 06:54:57.400617 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope\nE1103 06:54:57.400705 1 reflector.go:153] cmd/kube-scheduler/app/server.go:244: Failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope\nE1103 06:54:57.400710 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope\nE1103 06:54:57.400718 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope\nE1103 06:54:57.401463 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope\nE1103 06:54:57.401494 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User 
\"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope\nE1103 06:54:58.399733 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope\nE1103 06:54:58.400452 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope\nE1103 06:54:58.401238 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope\nE1103 06:54:58.401976 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope\nE1103 06:54:58.403219 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope\nE1103 06:54:58.405142 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope\nE1103 06:54:58.408002 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope\nE1103 06:54:58.410212 1 reflector.go:153] 
cmd/kube-scheduler/app/server.go:244: Failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope\nE1103 06:54:58.410263 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope\nE1103 06:54:58.411391 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope\nE1103 06:54:58.411474 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope\nI1103 06:54:59.495190 1 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-scheduler...\nI1103 06:54:59.507933 1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler\nE1103 06:55:16.247315 1 factory.go:668] pod is already present in the activeQ\nE1103 06:55:34.408504 1 factory.go:668] pod is already present in the activeQ\n==== END logs for container kube-scheduler of pod kube-system/kube-scheduler-kind-control-plane ====\n{\n \"kind\": \"EventList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"selfLink\": \"/api/v1/namespaces/default/events\",\n \"resourceVersion\": \"2552\"\n },\n \"items\": [\n {\n \"metadata\": {\n \"name\": \"$(node_name).15d394a5b6115f03\",\n \"namespace\": \"default\",\n \"selfLink\": \"/api/v1/namespaces/default/events/$%28node_name%29.15d394a5b6115f03\",\n \"uid\": \"b25e1b15-c3ef-4b90-851e-5f9fc1c7746e\",\n \"resourceVersion\": \"500\",\n \"creationTimestamp\": \"2019-11-03T06:55:34Z\"\n 
},\n \"involvedObject\": {\n \"kind\": \"Node\",\n \"name\": \"$(node_name)\",\n \"uid\": \"$(node_name)\"\n },\n \"reason\": \"Starting\",\n \"message\": \"Starting kube-proxy.\",\n \"source\": {\n \"component\": \"kube-proxy\",\n \"host\": \"$(node_name)\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:34Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:34Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"$(node_name).15d394a9e5afb880\",\n \"namespace\": \"default\",\n \"selfLink\": \"/api/v1/namespaces/default/events/$%28node_name%29.15d394a9e5afb880\",\n \"uid\": \"8eefcd29-d479-485e-add6-be6694c611d1\",\n \"resourceVersion\": \"567\",\n \"creationTimestamp\": \"2019-11-03T06:55:52Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Node\",\n \"name\": \"$(node_name)\",\n \"uid\": \"$(node_name)\"\n },\n \"reason\": \"Starting\",\n \"message\": \"Starting kube-proxy.\",\n \"source\": {\n \"component\": \"kube-proxy\",\n \"host\": \"$(node_name)\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:52Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:52Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"$(node_name).15d394aa182cf3b1\",\n \"namespace\": \"default\",\n \"selfLink\": \"/api/v1/namespaces/default/events/$%28node_name%29.15d394aa182cf3b1\",\n \"uid\": \"b02e7b78-6948-4800-8e3d-7719173dafd9\",\n \"resourceVersion\": \"570\",\n \"creationTimestamp\": \"2019-11-03T06:55:53Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Node\",\n \"name\": \"$(node_name)\",\n \"uid\": \"$(node_name)\"\n },\n \"reason\": \"Starting\",\n \"message\": \"Starting kube-proxy.\",\n \"source\": {\n \"component\": \"kube-proxy\",\n \"host\": \"$(node_name)\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:53Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:53Z\",\n \"count\": 
1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kind-control-plane.15d3949a93c116ac\",\n \"namespace\": \"default\",\n \"selfLink\": \"/api/v1/namespaces/default/events/kind-control-plane.15d3949a93c116ac\",\n \"uid\": \"9cdb9f8a-98a9-44ca-877b-8d74dd6a0849\",\n \"resourceVersion\": \"217\",\n \"creationTimestamp\": \"2019-11-03T06:54:59Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Node\",\n \"name\": \"kind-control-plane\",\n \"uid\": \"kind-control-plane\"\n },\n \"reason\": \"NodeHasSufficientMemory\",\n \"message\": \"Node kind-control-plane status is now: NodeHasSufficientMemory\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"kind-control-plane\"\n },\n \"firstTimestamp\": \"2019-11-03T06:54:46Z\",\n \"lastTimestamp\": \"2019-11-03T06:54:47Z\",\n \"count\": 8,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kind-control-plane.15d3949a93c17da7\",\n \"namespace\": \"default\",\n \"selfLink\": \"/api/v1/namespaces/default/events/kind-control-plane.15d3949a93c17da7\",\n \"uid\": \"cf5b71d1-3e83-4f5b-b190-98b2881ddb09\",\n \"resourceVersion\": \"215\",\n \"creationTimestamp\": \"2019-11-03T06:54:59Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Node\",\n \"name\": \"kind-control-plane\",\n \"uid\": \"kind-control-plane\"\n },\n \"reason\": \"NodeHasNoDiskPressure\",\n \"message\": \"Node kind-control-plane status is now: NodeHasNoDiskPressure\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"kind-control-plane\"\n },\n \"firstTimestamp\": \"2019-11-03T06:54:46Z\",\n \"lastTimestamp\": \"2019-11-03T06:54:47Z\",\n \"count\": 7,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kind-control-plane.15d3949a93c194f9\",\n \"namespace\": 
\"default\",\n \"selfLink\": \"/api/v1/namespaces/default/events/kind-control-plane.15d3949a93c194f9\",\n \"uid\": \"733396eb-44eb-4e38-906f-1a8a2480907b\",\n \"resourceVersion\": \"216\",\n \"creationTimestamp\": \"2019-11-03T06:55:00Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Node\",\n \"name\": \"kind-control-plane\",\n \"uid\": \"kind-control-plane\"\n },\n \"reason\": \"NodeHasSufficientPID\",\n \"message\": \"Node kind-control-plane status is now: NodeHasSufficientPID\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"kind-control-plane\"\n },\n \"firstTimestamp\": \"2019-11-03T06:54:46Z\",\n \"lastTimestamp\": \"2019-11-03T06:54:47Z\",\n \"count\": 7,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kind-control-plane.15d394a17f817ead\",\n \"namespace\": \"default\",\n \"selfLink\": \"/api/v1/namespaces/default/events/kind-control-plane.15d394a17f817ead\",\n \"uid\": \"42e7ba5b-1707-43f2-aecf-007c97a13181\",\n \"resourceVersion\": \"375\",\n \"creationTimestamp\": \"2019-11-03T06:55:16Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Node\",\n \"name\": \"kind-control-plane\",\n \"uid\": \"23a11f0f-cb4a-4387-9c1c-7a5ccad4b305\"\n },\n \"reason\": \"RegisteredNode\",\n \"message\": \"Node kind-control-plane event: Registered Node kind-control-plane in Controller\",\n \"source\": {\n \"component\": \"node-controller\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:16Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:16Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kind-worker.15d394a283e07518\",\n \"namespace\": \"default\",\n \"selfLink\": \"/api/v1/namespaces/default/events/kind-worker.15d394a283e07518\",\n \"uid\": \"07828c7c-1c36-4130-8fa2-6a3954053244\",\n \"resourceVersion\": \"472\",\n \"creationTimestamp\": 
\"2019-11-03T06:55:33Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Node\",\n \"name\": \"kind-worker\",\n \"uid\": \"kind-worker\"\n },\n \"reason\": \"NodeHasSufficientMemory\",\n \"message\": \"Node kind-worker status is now: NodeHasSufficientMemory\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"kind-worker\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:20Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:33Z\",\n \"count\": 8,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kind-worker.15d394a627bb0602\",\n \"namespace\": \"default\",\n \"selfLink\": \"/api/v1/namespaces/default/events/kind-worker.15d394a627bb0602\",\n \"uid\": \"8019a22a-ba2c-4c10-920f-46805958f79a\",\n \"resourceVersion\": \"513\",\n \"creationTimestamp\": \"2019-11-03T06:55:36Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Node\",\n \"name\": \"kind-worker\",\n \"uid\": \"c5cf28bd-4520-4c92-95da-3a6d0e344375\"\n },\n \"reason\": \"RegisteredNode\",\n \"message\": \"Node kind-worker event: Registered Node kind-worker in Controller\",\n \"source\": {\n \"component\": \"node-controller\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:36Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:36Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kind-worker2.15d394a2829a822b\",\n \"namespace\": \"default\",\n \"selfLink\": \"/api/v1/namespaces/default/events/kind-worker2.15d394a2829a822b\",\n \"uid\": \"11d4fcb1-e6b6-4f6f-a75a-81b469e04a6c\",\n \"resourceVersion\": \"455\",\n \"creationTimestamp\": \"2019-11-03T06:55:33Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Node\",\n \"name\": \"kind-worker2\",\n \"uid\": \"kind-worker2\"\n },\n \"reason\": \"NodeHasSufficientMemory\",\n \"message\": \"Node kind-worker2 status is now: NodeHasSufficientMemory\",\n \"source\": {\n 
\"component\": \"kubelet\",\n \"host\": \"kind-worker2\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:20Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:33Z\",\n \"count\": 8,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kind-worker2.15d394a627ba6b91\",\n \"namespace\": \"default\",\n \"selfLink\": \"/api/v1/namespaces/default/events/kind-worker2.15d394a627ba6b91\",\n \"uid\": \"23a17805-1f85-4592-81ca-d51d6b4be534\",\n \"resourceVersion\": \"512\",\n \"creationTimestamp\": \"2019-11-03T06:55:36Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Node\",\n \"name\": \"kind-worker2\",\n \"uid\": \"322cad9e-96cc-464c-9003-9b9c8297934c\"\n },\n \"reason\": \"RegisteredNode\",\n \"message\": \"Node kind-worker2 event: Registered Node kind-worker2 in Controller\",\n \"source\": {\n \"component\": \"node-controller\"\n },\n \"firstTimestamp\": \"2019-11-03T06:55:36Z\",\n \"lastTimestamp\": \"2019-11-03T06:55:36Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n }\n ]\n}\n{\n \"kind\": \"ReplicationControllerList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"selfLink\": \"/api/v1/namespaces/default/replicationcontrollers\",\n \"resourceVersion\": \"2552\"\n },\n \"items\": []\n}\n{\n \"kind\": \"ServiceList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"selfLink\": \"/api/v1/namespaces/default/services\",\n \"resourceVersion\": \"2554\"\n },\n \"items\": [\n {\n \"metadata\": {\n \"name\": \"kubernetes\",\n \"namespace\": \"default\",\n \"selfLink\": \"/api/v1/namespaces/default/services/kubernetes\",\n \"uid\": \"5fc50040-a756-45a9-92a9-18a4f8c415bb\",\n \"resourceVersion\": \"146\",\n \"creationTimestamp\": \"2019-11-03T06:54:58Z\",\n \"labels\": {\n \"component\": \"apiserver\",\n \"provider\": \"kubernetes\"\n }\n },\n \"spec\": {\n \"ports\": [\n {\n \"name\": \"https\",\n 
\"protocol\": \"TCP\",\n \"port\": 443,\n \"targetPort\": 6443\n }\n ],\n \"clusterIP\": \"10.96.0.1\",\n \"type\": \"ClusterIP\",\n \"sessionAffinity\": \"None\"\n },\n \"status\": {\n \"loadBalancer\": {}\n }\n }\n ]\n}\n{\n \"kind\": \"DaemonSetList\",\n \"apiVersion\": \"apps/v1\",\n \"metadata\": {\n \"selfLink\": \"/apis/apps/v1/namespaces/default/daemonsets\",\n \"resourceVersion\": \"2555\"\n },\n \"items\": []\n}\n{\n \"kind\": \"DeploymentList\",\n \"apiVersion\": \"apps/v1\",\n \"metadata\": {\n \"selfLink\": \"/apis/apps/v1/namespaces/default/deployments\",\n \"resourceVersion\": \"2556\"\n },\n \"items\": []\n}\n{\n \"kind\": \"ReplicaSetList\",\n \"apiVersion\": \"apps/v1\",\n \"metadata\": {\n \"selfLink\": \"/apis/apps/v1/namespaces/default/replicasets\",\n \"resourceVersion\": \"2556\"\n },\n \"items\": []\n}\n{\n \"kind\": \"PodList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"selfLink\": \"/api/v1/namespaces/default/pods\",\n \"resourceVersion\": \"2557\"\n },\n \"items\": []\n}\nCluster info dumped to standard output\n" | |
[AfterEach] [sig-cli] Kubectl client
test/e2e/framework/framework.go:148
Nov 3 06:57:20.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9109" for this suite.
•SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
test/e2e/storage/testsuites/base.go:99
Nov 3 06:57:21.100: INFO: Distro debian doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
test/e2e/framework/framework.go:148
Nov 3 06:57:21.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: gcepd]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
test/e2e/storage/testsuites/base.go:98
should allow exec of files on the volume [BeforeEach]
test/e2e/storage/testsuites/volumes.go:191
Distro debian doesn't support ntfs -- skipping
test/e2e/storage/testsuites/base.go:163
------------------------------
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
test/e2e/framework/framework.go:147
STEP: Creating a kubernetes client
Nov 3 06:56:40.435: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename gc
Nov 3 06:56:40.825: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Nov 3 06:56:40.907: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-872
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
test/e2e/framework/framework.go:688
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W1103 06:57:21.181140 15537 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Nov 3 06:57:21.182: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
test/e2e/framework/framework.go:148
Nov 3 06:57:21.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-872" for this suite.
• [SLOW TEST:40.790 seconds]
[sig-api-machinery] Garbage collector
test/e2e/apimachinery/framework.go:23
should orphan pods created by rc if delete options say so [Conformance]
test/e2e/framework/framework.go:688
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/testsuites/base.go:99
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/framework/framework.go:147
STEP: Creating a kubernetes client
Nov 3 06:56:40.484: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename provisioning
Nov 3 06:56:40.934: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Nov 3 06:56:40.979: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-9812
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support non-existent path
test/e2e/storage/testsuites/subpath.go:189
Nov 3 06:56:41.125: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/local-volume
Nov 3 06:56:53.193: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-driver-c4d5a4e5-83c5-4003-96a7-8e1787f136b7] Namespace:provisioning-9812 PodName:hostexec-kind-worker2-74fgs ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Nov 3 06:56:53.193: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:54.102: INFO: Creating resource for pre-provisioned PV
Nov 3 06:56:54.102: INFO: Creating PVC and PV
STEP: Creating a PVC followed by a PV
Nov 3 06:56:54.115: INFO: Waiting for PV local-xt54q to bind to PVC pvc-fr2db
Nov 3 06:56:54.115: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-fr2db] to have phase Bound
Nov 3 06:56:54.122: INFO: PersistentVolumeClaim pvc-fr2db found but phase is Pending instead of Bound.
Nov 3 06:56:56.145: INFO: PersistentVolumeClaim pvc-fr2db found but phase is Pending instead of Bound.
Nov 3 06:56:58.149: INFO: PersistentVolumeClaim pvc-fr2db found but phase is Pending instead of Bound.
Nov 3 06:57:00.156: INFO: PersistentVolumeClaim pvc-fr2db found but phase is Pending instead of Bound.
Nov 3 06:57:02.160: INFO: PersistentVolumeClaim pvc-fr2db found and phase=Bound (8.0452853s)
Nov 3 06:57:02.160: INFO: Waiting up to 3m0s for PersistentVolume local-xt54q to have phase Bound
Nov 3 06:57:02.181: INFO: PersistentVolume local-xt54q found and phase=Bound (20.436074ms)
STEP: Creating pod pod-subpath-test-local-preprovisionedpv-dnhn
STEP: Creating a pod to test subpath
Nov 3 06:57:02.209: INFO: Waiting up to 5m0s for pod "pod-subpath-test-local-preprovisionedpv-dnhn" in namespace "provisioning-9812" to be "success or failure"
Nov 3 06:57:02.242: INFO: Pod "pod-subpath-test-local-preprovisionedpv-dnhn": Phase="Pending", Reason="", readiness=false. Elapsed: 32.898396ms
Nov 3 06:57:04.256: INFO: Pod "pod-subpath-test-local-preprovisionedpv-dnhn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047177992s
Nov 3 06:57:06.268: INFO: Pod "pod-subpath-test-local-preprovisionedpv-dnhn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059226074s
Nov 3 06:57:08.372: INFO: Pod "pod-subpath-test-local-preprovisionedpv-dnhn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.163682871s
Nov 3 06:57:10.426: INFO: Pod "pod-subpath-test-local-preprovisionedpv-dnhn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.217029158s
Nov 3 06:57:12.432: INFO: Pod "pod-subpath-test-local-preprovisionedpv-dnhn": Phase="Pending", Reason="", readiness=false. Elapsed: 10.222854156s
Nov 3 06:57:14.446: INFO: Pod "pod-subpath-test-local-preprovisionedpv-dnhn": Phase="Pending", Reason="", readiness=false. Elapsed: 12.236889718s
Nov 3 06:57:16.453: INFO: Pod "pod-subpath-test-local-preprovisionedpv-dnhn": Phase="Pending", Reason="", readiness=false. Elapsed: 14.244202432s
Nov 3 06:57:18.459: INFO: Pod "pod-subpath-test-local-preprovisionedpv-dnhn": Phase="Pending", Reason="", readiness=false. Elapsed: 16.249908678s
Nov 3 06:57:20.466: INFO: Pod "pod-subpath-test-local-preprovisionedpv-dnhn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.257337925s
STEP: Saw pod success
Nov 3 06:57:20.466: INFO: Pod "pod-subpath-test-local-preprovisionedpv-dnhn" satisfied condition "success or failure"
Nov 3 06:57:20.471: INFO: Trying to get logs from node kind-worker2 pod pod-subpath-test-local-preprovisionedpv-dnhn container test-container-volume-local-preprovisionedpv-dnhn: <nil>
STEP: delete the pod
Nov 3 06:57:20.560: INFO: Waiting for pod pod-subpath-test-local-preprovisionedpv-dnhn to disappear
Nov 3 06:57:20.577: INFO: Pod pod-subpath-test-local-preprovisionedpv-dnhn no longer exists
STEP: Deleting pod pod-subpath-test-local-preprovisionedpv-dnhn
Nov 3 06:57:20.577: INFO: Deleting pod "pod-subpath-test-local-preprovisionedpv-dnhn" in namespace "provisioning-9812"
STEP: Deleting pod
Nov 3 06:57:20.632: INFO: Deleting pod "pod-subpath-test-local-preprovisionedpv-dnhn" in namespace "provisioning-9812"
STEP: Deleting pv and pvc
Nov 3 06:57:20.654: INFO: Deleting PersistentVolumeClaim "pvc-fr2db"
Nov 3 06:57:20.688: INFO: Deleting PersistentVolume "local-xt54q"
STEP: Removing the test directory
Nov 3 06:57:20.711: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-driver-c4d5a4e5-83c5-4003-96a7-8e1787f136b7] Namespace:provisioning-9812 PodName:hostexec-kind-worker2-74fgs ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Nov 3 06:57:20.711: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Deleting pod hostexec-kind-worker2-74fgs in namespace provisioning-9812
Nov 3 06:57:21.233: INFO: In-tree plugin kubernetes.io/local-volume is not migrated, not validating any metrics
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/framework/framework.go:148
Nov 3 06:57:21.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-9812" for this suite.
• [SLOW TEST:40.794 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: dir]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/testsuites/base.go:98
should support non-existent path
test/e2e/storage/testsuites/subpath.go:189
------------------------------
SS
------------------------------
[BeforeEach] [k8s.io] Security Context
test/e2e/framework/framework.go:147
STEP: Creating a kubernetes client
Nov 3 06:57:21.227: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename security-context-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-574
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
test/e2e/common/security_context.go:39
[It] should not run without a specified user ID
test/e2e/common/security_context.go:152
[AfterEach] [k8s.io] Security Context
test/e2e/framework/framework.go:148
Nov 3 06:57:25.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-574" for this suite.
•SSS
------------------------------
[BeforeEach] [sig-network] [sig-windows] Networking
test/e2e/framework/framework.go:147
STEP: Creating a kubernetes client
Nov 3 06:57:25.573: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename pod-network-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pod-network-test-2052
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] [sig-windows] Networking
test/e2e/windows/networking.go:33
Nov 3 06:57:25.739: INFO: Only supported for node OS distro [windows] (not debian)
[AfterEach] [sig-network] [sig-windows] Networking
test/e2e/framework/framework.go:148
Nov 3 06:57:25.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2052" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [0.178 seconds]
[sig-network] [sig-windows] Networking
test/e2e/windows/networking.go:30
Granular Checks: Pods [BeforeEach]
test/e2e/windows/networking.go:38
should function for intra-pod communication: udp
test/e2e/windows/networking.go:62
Only supported for node OS distro [windows] (not debian)
test/e2e/windows/networking.go:35
------------------------------
S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
test/e2e/storage/testsuites/base.go:99
Nov 3 06:57:25.754: INFO: Only supported for providers [azure] (not skeleton)
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
test/e2e/framework/framework.go:148
Nov 3 06:57:25.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: azure]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Inline-volume (default fs)] volumes
test/e2e/storage/testsuites/base.go:98
should store data [BeforeEach]
test/e2e/storage/testsuites/volumes.go:150
Only supported for providers [azure] (not skeleton)
test/e2e/storage/drivers/in_tree.go:1449
------------------------------
SSSS
------------------------------
[BeforeEach] [sig-network] Services
test/e2e/framework/framework.go:147
STEP: Creating a kubernetes client
Nov 3 06:57:25.761: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename services
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-2076
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
test/e2e/network/service.go:92
[It] should check NodePort out-of-range
test/e2e/network/service.go:1210
STEP: creating service nodeport-range-test with type NodePort in namespace services-2076
STEP: changing service nodeport-range-test to out-of-range NodePort 15218
STEP: deleting original service nodeport-range-test
STEP: creating service nodeport-range-test with out-of-range NodePort 15218
[AfterEach] [sig-network] Services
test/e2e/framework/framework.go:148
Nov 3 06:57:25.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2076" for this suite.
[AfterEach] [sig-network] Services
test/e2e/network/service.go:96
•SSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
test/e2e/framework/framework.go:147
STEP: Creating a kubernetes client
Nov 3 06:57:05.882: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename disruption
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in disruption-6726
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] DisruptionController
test/e2e/apps/disruption.go:52
[It] should update PodDisruptionBudget status
test/e2e/apps/disruption.go:61
STEP: Waiting for the pdb to be processed
STEP: Waiting for all pods to be running
Nov 3 06:57:08.528: INFO: running pods: 0 < 3
Nov 3 06:57:10.576: INFO: running pods: 0 < 3
Nov 3 06:57:12.608: INFO: running pods: 0 < 3
Nov 3 06:57:14.540: INFO: running pods: 0 < 3
Nov 3 06:57:16.534: INFO: running pods: 0 < 3
Nov 3 06:57:18.536: INFO: running pods: 0 < 3
Nov 3 06:57:20.539: INFO: running pods: 0 < 3
Nov 3 06:57:22.684: INFO: running pods: 0 < 3
Nov 3 06:57:24.532: INFO: running pods: 0 < 3
Nov 3 06:57:26.541: INFO: running pods: 2 < 3
[AfterEach] [sig-apps] DisruptionController
test/e2e/framework/framework.go:148
Nov 3 06:57:28.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-6726" for this suite.
• [SLOW TEST:22.674 seconds]
[sig-apps] DisruptionController
test/e2e/apps/framework.go:23
should update PodDisruptionBudget status
test/e2e/apps/disruption.go:61
------------------------------
SSSSSS
[90m------------------------------[0m | |
[BeforeEach] [sig-storage] PersistentVolumes-local | |
test/e2e/framework/framework.go:147 | |
[1mSTEP[0m: Creating a kubernetes client | |
Nov 3 06:56:40.488: INFO: >>> kubeConfig: /root/.kube/kind-test-config | |
[1mSTEP[0m: Building a namespace api object, basename persistent-local-volumes-test | |
Nov 3 06:56:41.700: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. | |
Nov 3 06:56:41.772: INFO: Found ClusterRoles; assuming RBAC is enabled. | |
[1mSTEP[0m: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-1732 | |
[1mSTEP[0m: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [sig-storage] PersistentVolumes-local | |
test/e2e/storage/persistent_volumes-local.go:153 | |
[BeforeEach] [Volume type: dir-link-bindmounted] | |
test/e2e/storage/persistent_volumes-local.go:189 | |
[1mSTEP[0m: Initializing test volumes | |
Nov 3 06:56:50.242: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-6118ec22-7a67-448e-9da2-6ed35a43d5bc-backend && mount --bind /tmp/local-volume-test-6118ec22-7a67-448e-9da2-6ed35a43d5bc-backend /tmp/local-volume-test-6118ec22-7a67-448e-9da2-6ed35a43d5bc-backend && ln -s /tmp/local-volume-test-6118ec22-7a67-448e-9da2-6ed35a43d5bc-backend /tmp/local-volume-test-6118ec22-7a67-448e-9da2-6ed35a43d5bc] Namespace:persistent-local-volumes-test-1732 PodName:hostexec-kind-worker-b9qqd ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} | |
Nov 3 06:56:50.242: INFO: >>> kubeConfig: /root/.kube/kind-test-config | |
[1mSTEP[0m: Creating local PVCs and PVs | |
Nov 3 06:56:50.787: INFO: Creating a PV followed by a PVC | |
Nov 3 06:56:50.816: INFO: Waiting for PV local-pv7gbnz to bind to PVC pvc-nmcrl | |
Nov 3 06:56:50.816: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-nmcrl] to have phase Bound | |
Nov 3 06:56:50.824: INFO: PersistentVolumeClaim pvc-nmcrl found but phase is Pending instead of Bound. | |
Nov 3 06:56:52.834: INFO: PersistentVolumeClaim pvc-nmcrl found but phase is Pending instead of Bound. | |
Nov 3 06:56:54.839: INFO: PersistentVolumeClaim pvc-nmcrl found but phase is Pending instead of Bound. | |
Nov 3 06:56:56.848: INFO: PersistentVolumeClaim pvc-nmcrl found but phase is Pending instead of Bound. | |
Nov 3 06:56:58.852: INFO: PersistentVolumeClaim pvc-nmcrl found but phase is Pending instead of Bound. | |
Nov 3 06:57:00.856: INFO: PersistentVolumeClaim pvc-nmcrl found but phase is Pending instead of Bound. | |
Nov 3 06:57:02.864: INFO: PersistentVolumeClaim pvc-nmcrl found and phase=Bound (12.04808506s) | |
Nov 3 06:57:02.864: INFO: Waiting up to 3m0s for PersistentVolume local-pv7gbnz to have phase Bound | |
Nov 3 06:57:02.870: INFO: PersistentVolume local-pv7gbnz found and phase=Bound (5.899879ms) | |
[BeforeEach] One pod requesting one prebound PVC | |
test/e2e/storage/persistent_volumes-local.go:209 | |
[1mSTEP[0m: Creating pod1 | |
[1mSTEP[0m: Creating a pod | |
Nov 3 06:57:30.921: INFO: pod "security-context-31750654-ebfb-4d7b-9130-5dffe27174e8" created on Node "kind-worker" | |
[1mSTEP[0m: Writing in pod1 | |
Nov 3 06:57:30.921: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:33605 --kubeconfig=/root/.kube/kind-test-config exec --namespace=persistent-local-volumes-test-1732 security-context-31750654-ebfb-4d7b-9130-5dffe27174e8 -- /bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file' | |
Nov 3 06:57:31.293: INFO: stderr: "" | |
Nov 3 06:57:31.293: INFO: stdout: "" | |
Nov 3 06:57:31.293: INFO: podRWCmdExec out: "" err: <nil> | |
[It] should be able to mount volume and write from pod1 | |
test/e2e/storage/persistent_volumes-local.go:232 | |
Nov 3 06:57:31.293: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:33605 --kubeconfig=/root/.kube/kind-test-config exec --namespace=persistent-local-volumes-test-1732 security-context-31750654-ebfb-4d7b-9130-5dffe27174e8 -- /bin/sh -c cat /mnt/volume1/test-file' | |
Nov 3 06:57:31.619: INFO: stderr: "" | |
Nov 3 06:57:31.619: INFO: stdout: "test-file-content\n" | |
Nov 3 06:57:31.619: INFO: podRWCmdExec out: "test-file-content\n" err: <nil> | |
[1mSTEP[0m: Writing in pod1 | |
Nov 3 06:57:31.619: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:33605 --kubeconfig=/root/.kube/kind-test-config exec --namespace=persistent-local-volumes-test-1732 security-context-31750654-ebfb-4d7b-9130-5dffe27174e8 -- /bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-6118ec22-7a67-448e-9da2-6ed35a43d5bc > /mnt/volume1/test-file' | |
Nov 3 06:57:31.954: INFO: stderr: "" | |
Nov 3 06:57:31.954: INFO: stdout: "" | |
Nov 3 06:57:31.954: INFO: podRWCmdExec out: "" err: <nil> | |
[AfterEach] One pod requesting one prebound PVC | |
test/e2e/storage/persistent_volumes-local.go:221 | |
[1mSTEP[0m: Deleting pod1 | |
[1mSTEP[0m: Deleting pod security-context-31750654-ebfb-4d7b-9130-5dffe27174e8 in namespace persistent-local-volumes-test-1732 | |
[AfterEach] [Volume type: dir-link-bindmounted] | |
test/e2e/storage/persistent_volumes-local.go:198 | |
STEP: Cleaning up PVC and PV | |
Nov 3 06:57:31.975: INFO: Deleting PersistentVolumeClaim "pvc-nmcrl" | |
Nov 3 06:57:31.985: INFO: Deleting PersistentVolume "local-pv7gbnz" | |
STEP: Removing the test directory | |
Nov 3 06:57:32.001: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-6118ec22-7a67-448e-9da2-6ed35a43d5bc && umount /tmp/local-volume-test-6118ec22-7a67-448e-9da2-6ed35a43d5bc-backend && rm -r /tmp/local-volume-test-6118ec22-7a67-448e-9da2-6ed35a43d5bc-backend] Namespace:persistent-local-volumes-test-1732 PodName:hostexec-kind-worker-b9qqd ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} | |
Nov 3 06:57:32.001: INFO: >>> kubeConfig: /root/.kube/kind-test-config | |
[AfterEach] [sig-storage] PersistentVolumes-local | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:32.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "persistent-local-volumes-test-1732" for this suite. | |
• [SLOW TEST:51.774 seconds] | |
[sig-storage] PersistentVolumes-local | |
test/e2e/storage/utils/framework.go:23 | |
[Volume type: dir-link-bindmounted] | |
test/e2e/storage/persistent_volumes-local.go:186 | |
One pod requesting one prebound PVC | |
test/e2e/storage/persistent_volumes-local.go:203 | |
should be able to mount volume and write from pod1 | |
test/e2e/storage/persistent_volumes-local.go:232 | |
------------------------------ | |
[BeforeEach] [sig-apps] Deployment | |
test/e2e/framework/framework.go:147 | |
STEP: Creating a kubernetes client | |
Nov 3 06:57:05.380: INFO: >>> kubeConfig: /root/.kube/kind-test-config | |
STEP: Building a namespace api object, basename deployment | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-5531 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [sig-apps] Deployment | |
test/e2e/apps/deployment.go:69 | |
[It] deployment reaping should cascade to its replica sets and pods | |
test/e2e/apps/deployment.go:74 | |
Nov 3 06:57:05.635: INFO: Creating simple deployment test-new-deployment | |
Nov 3 06:57:05.736: INFO: deployment "test-new-deployment" doesn't have the required revision set | |
Nov 3 06:57:07.815: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361025, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361025, loc:(*time.Location)(0x83e1840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361025, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361025, loc:(*time.Location)(0x83e1840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-595b5b9587\" is progressing."}}, CollisionCount:(*int32)(nil)} | |
Nov 3 06:57:09.850: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361025, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361025, loc:(*time.Location)(0x83e1840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361025, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361025, loc:(*time.Location)(0x83e1840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-595b5b9587\" is progressing."}}, CollisionCount:(*int32)(nil)} | |
Nov 3 06:57:11.858: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361025, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361025, loc:(*time.Location)(0x83e1840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361025, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361025, loc:(*time.Location)(0x83e1840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-595b5b9587\" is progressing."}}, CollisionCount:(*int32)(nil)} | |
Nov 3 06:57:13.848: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361025, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361025, loc:(*time.Location)(0x83e1840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361025, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361025, loc:(*time.Location)(0x83e1840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-595b5b9587\" is progressing."}}, CollisionCount:(*int32)(nil)} | |
Nov 3 06:57:15.821: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361025, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361025, loc:(*time.Location)(0x83e1840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361025, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361025, loc:(*time.Location)(0x83e1840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-595b5b9587\" is progressing."}}, CollisionCount:(*int32)(nil)} | |
Nov 3 06:57:17.820: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361025, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361025, loc:(*time.Location)(0x83e1840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361025, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361025, loc:(*time.Location)(0x83e1840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-595b5b9587\" is progressing."}}, CollisionCount:(*int32)(nil)} | |
Nov 3 06:57:19.819: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361025, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361025, loc:(*time.Location)(0x83e1840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361025, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361025, loc:(*time.Location)(0x83e1840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-595b5b9587\" is progressing."}}, CollisionCount:(*int32)(nil)} | |
Nov 3 06:57:21.830: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361025, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361025, loc:(*time.Location)(0x83e1840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361025, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361025, loc:(*time.Location)(0x83e1840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-595b5b9587\" is progressing."}}, CollisionCount:(*int32)(nil)} | |
Nov 3 06:57:23.826: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361025, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361025, loc:(*time.Location)(0x83e1840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361025, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361025, loc:(*time.Location)(0x83e1840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-595b5b9587\" is progressing."}}, CollisionCount:(*int32)(nil)} | |
Nov 3 06:57:25.820: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361025, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361025, loc:(*time.Location)(0x83e1840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361025, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361025, loc:(*time.Location)(0x83e1840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-595b5b9587\" is progressing."}}, CollisionCount:(*int32)(nil)} | |
Nov 3 06:57:27.830: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361025, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361025, loc:(*time.Location)(0x83e1840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361025, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361025, loc:(*time.Location)(0x83e1840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-595b5b9587\" is progressing."}}, CollisionCount:(*int32)(nil)} | |
Nov 3 06:57:29.819: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361025, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361025, loc:(*time.Location)(0x83e1840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361025, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361025, loc:(*time.Location)(0x83e1840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-595b5b9587\" is progressing."}}, CollisionCount:(*int32)(nil)} | |
Nov 3 06:57:31.854: INFO: Deleting deployment test-new-deployment | |
STEP: deleting Deployment.apps test-new-deployment in namespace deployment-5531, will wait for the garbage collector to delete the pods | |
Nov 3 06:57:31.919: INFO: Deleting Deployment.apps test-new-deployment took: 9.952038ms | |
Nov 3 06:57:32.220: INFO: Terminating Deployment.apps test-new-deployment pods took: 300.860896ms | |
Nov 3 06:57:32.220: INFO: Ensuring deployment test-new-deployment was deleted | |
Nov 3 06:57:32.243: INFO: Ensuring deployment test-new-deployment's RSes were deleted | |
Nov 3 06:57:32.248: INFO: Ensuring deployment test-new-deployment's Pods were deleted | |
[AfterEach] [sig-apps] Deployment | |
test/e2e/apps/deployment.go:63 | |
Nov 3 06:57:32.256: INFO: Log out all the ReplicaSets if there is no deployment created | |
[AfterEach] [sig-apps] Deployment | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:32.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "deployment-5531" for this suite. | |
• [SLOW TEST:26.894 seconds] | |
[sig-apps] Deployment | |
test/e2e/apps/framework.go:23 | |
deployment reaping should cascade to its replica sets and pods | |
test/e2e/apps/deployment.go:74 | |
------------------------------ | |
SSSSSS | |
------------------------------ | |
[BeforeEach] [sig-storage] Zone Support | |
test/e2e/framework/framework.go:147 | |
STEP: Creating a kubernetes client | |
Nov 3 06:57:32.288: INFO: >>> kubeConfig: /root/.kube/kind-test-config | |
STEP: Building a namespace api object, basename zone-support | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in zone-support-3481 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [sig-storage] Zone Support | |
test/e2e/storage/vsphere/vsphere_zone_support.go:101 | |
Nov 3 06:57:32.465: INFO: Only supported for providers [vsphere] (not skeleton) | |
[AfterEach] [sig-storage] Zone Support | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:32.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "zone-support-3481" for this suite. | |
S [SKIPPING] in Spec Setup (BeforeEach) [0.192 seconds] | |
[sig-storage] Zone Support | |
test/e2e/storage/utils/framework.go:23 | |
Verify PVC creation with incompatible zone along with compatible storagePolicy and datastore combination specified in storage class fails [BeforeEach] | |
test/e2e/storage/vsphere/vsphere_zone_support.go:220 | |
Only supported for providers [vsphere] (not skeleton) | |
test/e2e/storage/vsphere/vsphere_zone_support.go:102 | |
------------------------------ | |
SSS | |
------------------------------ | |
[BeforeEach] [k8s.io] [sig-node] AppArmor | |
test/e2e/framework/framework.go:147 | |
STEP: Creating a kubernetes client | |
Nov 3 06:57:32.265: INFO: >>> kubeConfig: /root/.kube/kind-test-config | |
STEP: Building a namespace api object, basename apparmor | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in apparmor-1207 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] load AppArmor profiles | |
test/e2e/node/apparmor.go:30 | |
Nov 3 06:57:32.466: INFO: Only supported for node OS distro [gci ubuntu] (not debian) | |
[AfterEach] load AppArmor profiles | |
test/e2e/node/apparmor.go:34 | |
[AfterEach] [k8s.io] [sig-node] AppArmor | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:32.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "apparmor-1207" for this suite. | |
S [SKIPPING] in Spec Setup (BeforeEach) [0.221 seconds] | |
[k8s.io] [sig-node] AppArmor | |
test/e2e/framework/framework.go:683 | |
load AppArmor profiles | |
test/e2e/node/apparmor.go:29 | |
should enforce an AppArmor profile [BeforeEach] | |
test/e2e/node/apparmor.go:41 | |
Only supported for node OS distro [gci ubuntu] (not debian) | |
test/e2e/common/apparmor.go:48 | |
------------------------------ | |
SSS | |
------------------------------ | |
[BeforeEach] [sig-storage] Ephemeralstorage | |
test/e2e/framework/framework.go:147 | |
STEP: Creating a kubernetes client | |
Nov 3 06:56:40.491: INFO: >>> kubeConfig: /root/.kube/kind-test-config | |
STEP: Building a namespace api object, basename pv | |
Nov 3 06:56:41.000: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. | |
Nov 3 06:56:41.025: INFO: Found ClusterRoles; assuming RBAC is enabled. | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pv-9548 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [sig-storage] Ephemeralstorage | |
test/e2e/storage/ephemeral_volume.go:49 | |
[It] should allow deletion of pod with invalid volume : configmap | |
test/e2e/storage/ephemeral_volume.go:55 | |
Nov 3 06:57:11.226: INFO: Deleting pod "pv-9548"/"pod-ephm-test-projected-8hlx" | |
Nov 3 06:57:11.226: INFO: Deleting pod "pod-ephm-test-projected-8hlx" in namespace "pv-9548" | |
Nov 3 06:57:11.326: INFO: Wait up to 5m0s for pod "pod-ephm-test-projected-8hlx" to be fully deleted | |
[AfterEach] [sig-storage] Ephemeralstorage | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:33.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "pv-9548" for this suite. | |
• [SLOW TEST:52.870 seconds] | |
[sig-storage] Ephemeralstorage | |
test/e2e/storage/utils/framework.go:23 | |
When pod refers to non-existent ephemeral storage | |
test/e2e/storage/ephemeral_volume.go:53 | |
should allow deletion of pod with invalid volume : configmap | |
test/e2e/storage/ephemeral_volume.go:55 | |
------------------------------ | |
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes | |
test/e2e/storage/testsuites/base.go:99 | |
Nov 3 06:57:33.364: INFO: Driver local doesn't support InlineVolume -- skipping | |
[AfterEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:33.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] | |
[sig-storage] In-tree Volumes | |
test/e2e/storage/utils/framework.go:23 | |
[Driver: local][LocalVolumeType: dir] | |
test/e2e/storage/in_tree_volumes.go:70 | |
[Testpattern: Inline-volume (ntfs)][sig-windows] volumes | |
test/e2e/storage/testsuites/base.go:98 | |
should store data [BeforeEach] | |
test/e2e/storage/testsuites/volumes.go:150 | |
Driver local doesn't support InlineVolume -- skipping | |
test/e2e/storage/testsuites/base.go:152 | |
------------------------------ | |
SSSS | |
------------------------------ | |
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath | |
test/e2e/storage/testsuites/base.go:99 | |
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath | |
test/e2e/framework/framework.go:147 | |
STEP: Creating a kubernetes client | |
Nov 3 06:56:55.029: INFO: >>> kubeConfig: /root/.kube/kind-test-config | |
STEP: Building a namespace api object, basename provisioning | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-6095 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should support readOnly file specified in the volumeMount [LinuxOnly] | |
test/e2e/storage/testsuites/subpath.go:374 | |
Nov 3 06:56:55.258: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/host-path | |
Nov 3 06:56:55.263: INFO: Creating resource for inline volume | |
STEP: Creating pod pod-subpath-test-hostpath-2phd | |
STEP: Creating a pod to test subpath | |
Nov 3 06:56:55.288: INFO: Waiting up to 5m0s for pod "pod-subpath-test-hostpath-2phd" in namespace "provisioning-6095" to be "success or failure" | |
Nov 3 06:56:55.291: INFO: Pod "pod-subpath-test-hostpath-2phd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.332992ms | |
Nov 3 06:56:57.304: INFO: Pod "pod-subpath-test-hostpath-2phd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016070113s | |
Nov 3 06:56:59.308: INFO: Pod "pod-subpath-test-hostpath-2phd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020458565s | |
Nov 3 06:57:01.319: INFO: Pod "pod-subpath-test-hostpath-2phd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03078973s | |
Nov 3 06:57:03.324: INFO: Pod "pod-subpath-test-hostpath-2phd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.035860786s | |
Nov 3 06:57:05.339: INFO: Pod "pod-subpath-test-hostpath-2phd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.050731828s | |
Nov 3 06:57:07.355: INFO: Pod "pod-subpath-test-hostpath-2phd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.066818261s | |
Nov 3 06:57:09.361: INFO: Pod "pod-subpath-test-hostpath-2phd": Phase="Pending", Reason="", readiness=false. Elapsed: 14.073412696s | |
Nov 3 06:57:11.442: INFO: Pod "pod-subpath-test-hostpath-2phd": Phase="Pending", Reason="", readiness=false. Elapsed: 16.154376087s | |
Nov 3 06:57:13.472: INFO: Pod "pod-subpath-test-hostpath-2phd": Phase="Pending", Reason="", readiness=false. Elapsed: 18.184178136s | |
Nov 3 06:57:15.490: INFO: Pod "pod-subpath-test-hostpath-2phd": Phase="Pending", Reason="", readiness=false. Elapsed: 20.202554529s | |
Nov 3 06:57:17.496: INFO: Pod "pod-subpath-test-hostpath-2phd": Phase="Pending", Reason="", readiness=false. Elapsed: 22.207967735s | |
Nov 3 06:57:19.501: INFO: Pod "pod-subpath-test-hostpath-2phd": Phase="Pending", Reason="", readiness=false. Elapsed: 24.213594922s | |
Nov 3 06:57:21.512: INFO: Pod "pod-subpath-test-hostpath-2phd": Phase="Pending", Reason="", readiness=false. Elapsed: 26.224307895s | |
Nov 3 06:57:23.519: INFO: Pod "pod-subpath-test-hostpath-2phd": Phase="Pending", Reason="", readiness=false. Elapsed: 28.230725844s | |
Nov 3 06:57:25.531: INFO: Pod "pod-subpath-test-hostpath-2phd": Phase="Pending", Reason="", readiness=false. Elapsed: 30.243233767s | |
Nov 3 06:57:27.548: INFO: Pod "pod-subpath-test-hostpath-2phd": Phase="Pending", Reason="", readiness=false. Elapsed: 32.260313156s | |
Nov 3 06:57:29.551: INFO: Pod "pod-subpath-test-hostpath-2phd": Phase="Pending", Reason="", readiness=false. Elapsed: 34.263695763s | |
Nov 3 06:57:31.559: INFO: Pod "pod-subpath-test-hostpath-2phd": Phase="Pending", Reason="", readiness=false. Elapsed: 36.270986911s | |
Nov 3 06:57:33.603: INFO: Pod "pod-subpath-test-hostpath-2phd": Phase="Pending", Reason="", readiness=false. Elapsed: 38.31480022s | |
Nov 3 06:57:35.608: INFO: Pod "pod-subpath-test-hostpath-2phd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.320004552s | |
STEP: Saw pod success | |
Nov 3 06:57:35.608: INFO: Pod "pod-subpath-test-hostpath-2phd" satisfied condition "success or failure" | |
Nov 3 06:57:35.611: INFO: Trying to get logs from node kind-worker pod pod-subpath-test-hostpath-2phd container test-container-subpath-hostpath-2phd: <nil> | |
STEP: delete the pod | |
Nov 3 06:57:35.654: INFO: Waiting for pod pod-subpath-test-hostpath-2phd to disappear | |
Nov 3 06:57:35.658: INFO: Pod pod-subpath-test-hostpath-2phd no longer exists | |
STEP: Deleting pod pod-subpath-test-hostpath-2phd | |
Nov 3 06:57:35.658: INFO: Deleting pod "pod-subpath-test-hostpath-2phd" in namespace "provisioning-6095" | |
STEP: Deleting pod | |
Nov 3 06:57:35.662: INFO: Deleting pod "pod-subpath-test-hostpath-2phd" in namespace "provisioning-6095" | |
Nov 3 06:57:35.665: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics | |
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:35.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "provisioning-6095" for this suite. | |
• [SLOW TEST:40.649 seconds] | |
[sig-storage] In-tree Volumes | |
test/e2e/storage/utils/framework.go:23 | |
[Driver: hostPath] | |
test/e2e/storage/in_tree_volumes.go:70 | |
[Testpattern: Inline-volume (default fs)] subPath | |
test/e2e/storage/testsuites/base.go:98 | |
should support readOnly file specified in the volumeMount [LinuxOnly] | |
test/e2e/storage/testsuites/subpath.go:374 | |
------------------------------ | |
SSS | |
------------------------------ | |
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes | |
test/e2e/storage/testsuites/base.go:99 | |
Nov 3 06:57:35.688: INFO: Driver local doesn't support InlineVolume -- skipping | |
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:35.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] | |
[sig-storage] In-tree Volumes | |
test/e2e/storage/utils/framework.go:23 | |
[Driver: local][LocalVolumeType: dir-bindmounted] | |
test/e2e/storage/in_tree_volumes.go:70 | |
[Testpattern: Inline-volume (default fs)] volumes | |
test/e2e/storage/testsuites/base.go:98 | |
should allow exec of files on the volume [BeforeEach] | |
test/e2e/storage/testsuites/volumes.go:191 | |
Driver local doesn't support InlineVolume -- skipping | |
test/e2e/storage/testsuites/base.go:152 | |
------------------------------ | |
SSS | |
------------------------------ | |
[BeforeEach] [sig-apps] DisruptionController | |
test/e2e/framework/framework.go:147 | |
STEP: Creating a kubernetes client | |
Nov 3 06:57:35.697: INFO: >>> kubeConfig: /root/.kube/kind-test-config | |
STEP: Building a namespace api object, basename disruption | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in disruption-1432 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [sig-apps] DisruptionController | |
test/e2e/apps/disruption.go:52 | |
[It] should create a PodDisruptionBudget | |
test/e2e/apps/disruption.go:57 | |
STEP: Waiting for the pdb to be processed | |
[AfterEach] [sig-apps] DisruptionController | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:37.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "disruption-1432" for this suite. | |
• | |
------------------------------ | |
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath | |
test/e2e/storage/testsuites/base.go:99 | |
Nov 3 06:57:37.954: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern | |
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:37.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds] | |
[sig-storage] In-tree Volumes | |
test/e2e/storage/utils/framework.go:23 | |
[Driver: aws] | |
test/e2e/storage/in_tree_volumes.go:70 | |
[Testpattern: Inline-volume (default fs)] subPath | |
test/e2e/storage/testsuites/base.go:98 | |
should support readOnly directory specified in the volumeMount [BeforeEach] | |
test/e2e/storage/testsuites/subpath.go:359 | |
Driver supports dynamic provisioning, skipping InlineVolume pattern | |
test/e2e/storage/testsuites/base.go:685 | |
------------------------------ | |
SSSSSSSSS | |
------------------------------ | |
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes | |
test/e2e/storage/testsuites/base.go:99 | |
Nov 3 06:57:37.974: INFO: Driver local doesn't support ext4 -- skipping | |
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:37.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds] | |
[sig-storage] In-tree Volumes | |
test/e2e/storage/utils/framework.go:23 | |
[Driver: local][LocalVolumeType: dir-link-bindmounted] | |
test/e2e/storage/in_tree_volumes.go:70 | |
[Testpattern: Pre-provisioned PV (ext4)] volumes | |
test/e2e/storage/testsuites/base.go:98 | |
should allow exec of files on the volume [BeforeEach] | |
test/e2e/storage/testsuites/volumes.go:191 | |
Driver local doesn't support ext4 -- skipping | |
test/e2e/storage/testsuites/base.go:157 | |
------------------------------ | |
SSSSS | |
------------------------------ | |
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation | |
test/e2e/framework/framework.go:147 | |
STEP: Creating a kubernetes client | |
Nov 3 06:57:37.984: INFO: >>> kubeConfig: /root/.kube/kind-test-config | |
STEP: Building a namespace api object, basename tables | |
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in tables-5007 | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation | |
test/e2e/apimachinery/table_conversion.go:46 | |
[It] should return chunks of table results for list calls | |
test/e2e/apimachinery/table_conversion.go:77 | |
[1mSTEP[0m: creating a large number of resources | |
[AfterEach] [sig-api-machinery] Servers with support for Table transformation | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:38.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
[1mSTEP[0m: Destroying namespace "tables-5007" for this suite. | |
[32m•[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m | |
[90m------------------------------[0m | |
[BeforeEach] [sig-network] DNS | |
test/e2e/framework/framework.go:147 | |
[1mSTEP[0m: Creating a kubernetes client | |
Nov 3 06:57:28.565: INFO: >>> kubeConfig: /root/.kube/kind-test-config | |
[1mSTEP[0m: Building a namespace api object, basename dns | |
[1mSTEP[0m: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-3586 | |
[1mSTEP[0m: Waiting for a default service account to be provisioned in namespace | |
[It] should provide DNS for the cluster [Conformance] | |
test/e2e/framework/framework.go:688 | |
[1mSTEP[0m: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/[email protected];check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/[email protected];podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3586.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done | |
[1mSTEP[0m: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/[email protected];check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/[email protected];podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3586.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done | |
[1mSTEP[0m: creating a pod to probe DNS | |
[1mSTEP[0m: submitting the pod to kubernetes | |
[1mSTEP[0m: retrieving the pod | |
[1mSTEP[0m: looking for the results for each expected name from probers | |
Nov 3 06:57:39.076: INFO: DNS probes using dns-3586/dns-test-01f79bec-3174-418f-931e-55efc2154756 succeeded | |
[1mSTEP[0m: deleting the pod | |
[AfterEach] [sig-network] DNS | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:39.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
[1mSTEP[0m: Destroying namespace "dns-3586" for this suite. | |
[32m• [SLOW TEST:10.626 seconds][0m | |
[sig-network] DNS | |
[90mtest/e2e/network/framework.go:23[0m | |
should provide DNS for the cluster [Conformance] | |
[90mtest/e2e/framework/framework.go:688[0m | |
[90m------------------------------[0m | |
[36mS[0m[36mS[0m | |
[90m------------------------------[0m | |
[BeforeEach] [sig-storage] Downward API volume | |
test/e2e/framework/framework.go:147 | |
[1mSTEP[0m: Creating a kubernetes client | |
Nov 3 06:57:32.496: INFO: >>> kubeConfig: /root/.kube/kind-test-config | |
[1mSTEP[0m: Building a namespace api object, basename downward-api | |
[1mSTEP[0m: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-6230 | |
[1mSTEP[0m: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [sig-storage] Downward API volume | |
test/e2e/common/downwardapi_volume.go:40 | |
[It] should update annotations on modification [NodeConformance] [Conformance] | |
test/e2e/framework/framework.go:688 | |
[1mSTEP[0m: Creating the pod | |
Nov 3 06:57:37.246: INFO: Successfully updated pod "annotationupdateecc70963-9ae7-4569-8a3c-b28947cc4a47" | |
[AfterEach] [sig-storage] Downward API volume | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:39.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
[1mSTEP[0m: Destroying namespace "downward-api-6230" for this suite. | |
[32m• [SLOW TEST:6.899 seconds][0m | |
[sig-storage] Downward API volume | |
[90mtest/e2e/common/downwardapi_volume.go:35[0m | |
should update annotations on modification [NodeConformance] [Conformance] | |
[90mtest/e2e/framework/framework.go:688[0m | |
[90m------------------------------[0m | |
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:99
Nov 3 06:57:39.405: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/framework/framework.go:148
Nov 3 06:57:39.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:98
      should be able to unmount after the subpath directory is deleted [BeforeEach]
      test/e2e/storage/testsuites/subpath.go:437

      Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
      test/e2e/storage/testsuites/base.go:152
------------------------------
SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/storage/testsuites/base.go:99
Nov 3 06:57:39.412: INFO: Only supported for providers [aws] (not skeleton)
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/framework/framework.go:148
Nov 3 06:57:39.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: aws]
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (ext4)] volumes
    test/e2e/storage/testsuites/base.go:98
      should allow exec of files on the volume [BeforeEach]
      test/e2e/storage/testsuites/volumes.go:191

      Only supported for providers [aws] (not skeleton)
      test/e2e/storage/drivers/in_tree.go:1590
------------------------------
SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/storage/testsuites/base.go:99
Nov 3 06:57:39.423: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/framework/framework.go:148
Nov 3 06:57:39.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (ext3)] volumes
    test/e2e/storage/testsuites/base.go:98
      should allow exec of files on the volume [BeforeEach]
      test/e2e/storage/testsuites/volumes.go:191

      Driver emptydir doesn't support PreprovisionedPV -- skipping
      test/e2e/storage/testsuites/base.go:152
------------------------------
SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:99
Nov 3 06:57:39.450: INFO: Driver local doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  test/e2e/framework/framework.go:148
Nov 3 06:57:39.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: block]
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
    test/e2e/storage/testsuites/base.go:98
      should allow exec of files on the volume [BeforeEach]
      test/e2e/storage/testsuites/volumes.go:191

      Driver local doesn't support ntfs -- skipping
      test/e2e/storage/testsuites/base.go:157
------------------------------
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:99
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  test/e2e/framework/framework.go:147
STEP: Creating a kubernetes client
Nov 3 06:56:40.446: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename volumemode
Nov 3 06:56:40.852: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Nov 3 06:56:40.928: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in volumemode-7875
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not mount / map unused volumes in a pod
  test/e2e/storage/testsuites/volumemode.go:334
Nov 3 06:56:41.050: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/local-volume
STEP: Creating block device on node "kind-worker2" using path "/tmp/local-driver-d84604f2-9d33-477a-95b6-322a5f8d2bbb"
Nov 3 06:56:55.122: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-driver-d84604f2-9d33-477a-95b6-322a5f8d2bbb && dd if=/dev/zero of=/tmp/local-driver-d84604f2-9d33-477a-95b6-322a5f8d2bbb/file bs=4096 count=5120 && losetup -f /tmp/local-driver-d84604f2-9d33-477a-95b6-322a5f8d2bbb/file] Namespace:volumemode-7875 PodName:hostexec-kind-worker2-2c9lb ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Nov 3 06:56:55.122: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:55.573: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-driver-d84604f2-9d33-477a-95b6-322a5f8d2bbb/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:volumemode-7875 PodName:hostexec-kind-worker2-2c9lb ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Nov 3 06:56:55.573: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:55.826: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-driver-d84604f2-9d33-477a-95b6-322a5f8d2bbb && chmod o+rwx /tmp/local-driver-d84604f2-9d33-477a-95b6-322a5f8d2bbb] Namespace:volumemode-7875 PodName:hostexec-kind-worker2-2c9lb ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Nov 3 06:56:55.826: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:56.289: INFO: Creating resource for pre-provisioned PV
Nov 3 06:56:56.289: INFO: Creating PVC and PV
STEP: Creating a PVC followed by a PV
Nov 3 06:56:56.312: INFO: Waiting for PV local-jl45c to bind to PVC pvc-rhbb8
Nov 3 06:56:56.312: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-rhbb8] to have phase Bound
Nov 3 06:56:56.324: INFO: PersistentVolumeClaim pvc-rhbb8 found but phase is Pending instead of Bound.
Nov 3 06:56:58.328: INFO: PersistentVolumeClaim pvc-rhbb8 found but phase is Pending instead of Bound.
Nov 3 06:57:00.386: INFO: PersistentVolumeClaim pvc-rhbb8 found but phase is Pending instead of Bound.
Nov 3 06:57:02.444: INFO: PersistentVolumeClaim pvc-rhbb8 found and phase=Bound (6.131858145s)
Nov 3 06:57:02.444: INFO: Waiting up to 3m0s for PersistentVolume local-jl45c to have phase Bound
Nov 3 06:57:02.452: INFO: PersistentVolume local-jl45c found and phase=Bound (8.379725ms)
STEP: Creating pod
STEP: Listing mounted volumes in the pod
Nov 3 06:57:24.580: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test ! -d /var/lib/kubelet/pods/8d4b7c0c-bc0c-4bab-82fa-f20144981db9/volumes] Namespace:volumemode-7875 PodName:hostexec-kind-worker2-f4txp ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Nov 3 06:57:24.580: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:57:24.838: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c find /var/lib/kubelet/pods/8d4b7c0c-bc0c-4bab-82fa-f20144981db9/volumes -mindepth 2 -maxdepth 2] Namespace:volumemode-7875 PodName:hostexec-kind-worker2-f4txp ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Nov 3 06:57:24.838: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:57:25.031: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test ! -d /var/lib/kubelet/pods/8d4b7c0c-bc0c-4bab-82fa-f20144981db9/volumeDevices] Namespace:volumemode-7875 PodName:hostexec-kind-worker2-f4txp ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Nov 3 06:57:25.031: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Checking that volume plugin kubernetes.io/local-volume is not used in pod directory
STEP: Deleting pod hostexec-kind-worker2-f4txp in namespace volumemode-7875
Nov 3 06:57:25.234: INFO: Deleting pod "security-context-ccdbe79d-278a-4ad0-91ab-17c070cd67be" in namespace "volumemode-7875"
Nov 3 06:57:25.241: INFO: Wait up to 5m0s for pod "security-context-ccdbe79d-278a-4ad0-91ab-17c070cd67be" to be fully deleted
STEP: Deleting pv and pvc
Nov 3 06:57:39.253: INFO: Deleting PersistentVolumeClaim "pvc-rhbb8"
Nov 3 06:57:39.281: INFO: Deleting PersistentVolume "local-jl45c"
Nov 3 06:57:39.313: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-driver-d84604f2-9d33-477a-95b6-322a5f8d2bbb] Namespace:volumemode-7875 PodName:hostexec-kind-worker2-2c9lb ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Nov 3 06:57:39.313: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:57:39.673: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-driver-d84604f2-9d33-477a-95b6-322a5f8d2bbb/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:volumemode-7875 PodName:hostexec-kind-worker2-2c9lb ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Nov 3 06:57:39.673: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Tear down block device "/dev/loop0" on node "kind-worker2" at path /tmp/local-driver-d84604f2-9d33-477a-95b6-322a5f8d2bbb/file
Nov 3 06:57:39.999: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:volumemode-7875 PodName:hostexec-kind-worker2-2c9lb ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Nov 3 06:57:39.999: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Removing the test directory /tmp/local-driver-d84604f2-9d33-477a-95b6-322a5f8d2bbb
Nov 3 06:57:40.434: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-driver-d84604f2-9d33-477a-95b6-322a5f8d2bbb] Namespace:volumemode-7875 PodName:hostexec-kind-worker2-2c9lb ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Nov 3 06:57:40.434: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Deleting pod hostexec-kind-worker2-2c9lb in namespace volumemode-7875
Nov 3 06:57:41.027: INFO: In-tree plugin kubernetes.io/local-volume is not migrated, not validating any metrics
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  test/e2e/framework/framework.go:148
Nov 3 06:57:41.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volumemode-7875" for this suite.
• [SLOW TEST:60.622 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: blockfs]
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    test/e2e/storage/testsuites/base.go:98
      should not mount / map unused volumes in a pod
      test/e2e/storage/testsuites/volumemode.go:334
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:99
Nov 3 06:57:41.072: INFO: Driver csi-hostpath-v0 doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  test/e2e/framework/framework.go:148
Nov 3 06:57:41.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] CSI Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath-v0]
  test/e2e/storage/csi_volumes.go:56
    [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
    test/e2e/storage/testsuites/base.go:98
      should allow exec of files on the volume [BeforeEach]
      test/e2e/storage/testsuites/volumes.go:191

      Driver csi-hostpath-v0 doesn't support InlineVolume -- skipping
      test/e2e/storage/testsuites/base.go:152
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  test/e2e/storage/testsuites/base.go:99
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  test/e2e/framework/framework.go:147
STEP: Creating a kubernetes client
Nov 3 06:57:11.736: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename volume
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in volume-1903
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow exec of files on the volume
  test/e2e/storage/testsuites/volumes.go:191
Nov 3 06:57:12.602: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/local-volume
STEP: Creating tmpfs mount point on node "kind-worker" at path "/tmp/local-driver-8048822e-1a2c-4f23-a26b-1e03ae76a9c2"
Nov 3 06:57:26.822: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-driver-8048822e-1a2c-4f23-a26b-1e03ae76a9c2" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-driver-8048822e-1a2c-4f23-a26b-1e03ae76a9c2" "/tmp/local-driver-8048822e-1a2c-4f23-a26b-1e03ae76a9c2"] Namespace:volume-1903 PodName:hostexec-kind-worker-rzmlf ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Nov 3 06:57:26.822: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:57:27.109: INFO: Creating resource for pre-provisioned PV
Nov 3 06:57:27.110: INFO: Creating PVC and PV
STEP: Creating a PVC followed by a PV
Nov 3 06:57:27.205: INFO: Waiting for PV local-g59f4 to bind to PVC pvc-nzt9z
Nov 3 06:57:27.205: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-nzt9z] to have phase Bound
Nov 3 06:57:27.219: INFO: PersistentVolumeClaim pvc-nzt9z found but phase is Pending instead of Bound.
Nov 3 06:57:29.223: INFO: PersistentVolumeClaim pvc-nzt9z found but phase is Pending instead of Bound.
Nov 3 06:57:31.242: INFO: PersistentVolumeClaim pvc-nzt9z found and phase=Bound (4.037295007s)
Nov 3 06:57:31.242: INFO: Waiting up to 3m0s for PersistentVolume local-g59f4 to have phase Bound
Nov 3 06:57:31.247: INFO: PersistentVolume local-g59f4 found and phase=Bound (4.532404ms)
STEP: Creating pod exec-volume-test-local-preprovisionedpv-2nc8
STEP: Creating a pod to test exec-volume-test
Nov 3 06:57:31.265: INFO: Waiting up to 5m0s for pod "exec-volume-test-local-preprovisionedpv-2nc8" in namespace "volume-1903" to be "success or failure"
Nov 3 06:57:31.282: INFO: Pod "exec-volume-test-local-preprovisionedpv-2nc8": Phase="Pending", Reason="", readiness=false. Elapsed: 16.780365ms
Nov 3 06:57:33.288: INFO: Pod "exec-volume-test-local-preprovisionedpv-2nc8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022232458s
Nov 3 06:57:35.293: INFO: Pod "exec-volume-test-local-preprovisionedpv-2nc8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027331275s
Nov 3 06:57:37.301: INFO: Pod "exec-volume-test-local-preprovisionedpv-2nc8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035462876s
Nov 3 06:57:39.308: INFO: Pod "exec-volume-test-local-preprovisionedpv-2nc8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.042543044s
Nov 3 06:57:41.320: INFO: Pod "exec-volume-test-local-preprovisionedpv-2nc8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.054439987s
STEP: Saw pod success
Nov 3 06:57:41.320: INFO: Pod "exec-volume-test-local-preprovisionedpv-2nc8" satisfied condition "success or failure"
Nov 3 06:57:41.397: INFO: Trying to get logs from node kind-worker pod exec-volume-test-local-preprovisionedpv-2nc8 container exec-container-local-preprovisionedpv-2nc8: <nil>
STEP: delete the pod
Nov 3 06:57:41.476: INFO: Waiting for pod exec-volume-test-local-preprovisionedpv-2nc8 to disappear
Nov 3 06:57:41.479: INFO: Pod exec-volume-test-local-preprovisionedpv-2nc8 no longer exists
STEP: Deleting pod exec-volume-test-local-preprovisionedpv-2nc8
Nov 3 06:57:41.479: INFO: Deleting pod "exec-volume-test-local-preprovisionedpv-2nc8" in namespace "volume-1903"
STEP: Deleting pv and pvc
Nov 3 06:57:41.482: INFO: Deleting PersistentVolumeClaim "pvc-nzt9z"
Nov 3 06:57:41.489: INFO: Deleting PersistentVolume "local-g59f4"
STEP: Unmount tmpfs mount point on node "kind-worker" at path "/tmp/local-driver-8048822e-1a2c-4f23-a26b-1e03ae76a9c2"
Nov 3 06:57:41.503: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-driver-8048822e-1a2c-4f23-a26b-1e03ae76a9c2"] Namespace:volume-1903 PodName:hostexec-kind-worker-rzmlf ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Nov 3 06:57:41.503: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Removing the test directory
Nov 3 06:57:41.849: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-driver-8048822e-1a2c-4f23-a26b-1e03ae76a9c2] Namespace:volume-1903 PodName:hostexec-kind-worker-rzmlf ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Nov 3 06:57:41.849: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Deleting pod hostexec-kind-worker-rzmlf in namespace volume-1903
Nov 3 06:57:42.233: INFO: In-tree plugin kubernetes.io/local-volume is not migrated, not validating any metrics
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  test/e2e/framework/framework.go:148
Nov 3 06:57:42.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-1903" for this suite.
• [SLOW TEST:30.542 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: tmpfs]
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    test/e2e/storage/testsuites/base.go:98
      should allow exec of files on the volume
      test/e2e/storage/testsuites/volumes.go:191
------------------------------
[BeforeEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:147
STEP: Creating a kubernetes client
Nov 3 06:57:19.326: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename container-runtime
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-8745
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:688
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Nov 3 06:57:42.908: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:148
Nov 3 06:57:43.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8745" for this suite.
• [SLOW TEST:23.720 seconds]
[k8s.io] Container Runtime
test/e2e/framework/framework.go:683
  blackbox test
  test/e2e/common/runtime.go:38
    on terminated container
    test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      test/e2e/framework/framework.go:688
------------------------------
SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:99
Nov 3 06:57:43.055: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  test/e2e/framework/framework.go:148
Nov 3 06:57:43.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
    test/e2e/storage/testsuites/base.go:98
      should allow exec of files on the volume [BeforeEach]
      test/e2e/storage/testsuites/volumes.go:191

      Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
      test/e2e/storage/testsuites/base.go:152
------------------------------
S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/storage/testsuites/base.go:99
Nov 3 06:57:43.059: INFO: Only supported for providers [azure] (not skeleton)
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/framework/framework.go:148
Nov 3 06:57:43.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: azure]
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (ext4)] volumes
    test/e2e/storage/testsuites/base.go:98
      should store data [BeforeEach]
      test/e2e/storage/testsuites/volumes.go:150

      Only supported for providers [azure] (not skeleton)
      test/e2e/storage/drivers/in_tree.go:1449
------------------------------
S
------------------------------
[BeforeEach] [sig-storage] Zone Support
  test/e2e/framework/framework.go:147
STEP: Creating a kubernetes client
Nov 3 06:57:43.063: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename zone-support
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in zone-support-656
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Zone Support
  test/e2e/storage/vsphere/vsphere_zone_support.go:101
Nov 3 06:57:43.369: INFO: Only supported for providers [vsphere] (not skeleton)
[AfterEach] [sig-storage] Zone Support
  test/e2e/framework/framework.go:148
Nov 3 06:57:43.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "zone-support-656" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [0.432 seconds]
[sig-storage] Zone Support
test/e2e/storage/utils/framework.go:23
  Verify a pod is created and attached to a dynamically created PV, based on multiple zones specified in storage class [BeforeEach]
  test/e2e/storage/vsphere/vsphere_zone_support.go:150

  Only supported for providers [vsphere] (not skeleton)
  test/e2e/storage/vsphere/vsphere_zone_support.go:102
------------------------------
SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:99
Nov 3 06:57:43.505: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  test/e2e/framework/framework.go:148
Nov 3 06:57:43.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    test/e2e/storage/testsuites/base.go:98
      should not mount / map unused volumes in a pod [BeforeEach]
      test/e2e/storage/testsuites/volumemode.go:334

      Driver emptydir doesn't support PreprovisionedPV -- skipping
      test/e2e/storage/testsuites/base.go:152
------------------------------
S
------------------------------
[BeforeEach] [k8s.io] Pods
  test/e2e/framework/framework.go:147
STEP: Creating a kubernetes client
Nov 3 06:57:10.695: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename pods
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-975
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  test/e2e/common/pods.go:181
[It] should be submitted and removed [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:688
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Nov 3 06:57:11.073: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:148
Nov 3 06:57:46.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-975" for this suite.
• [SLOW TEST:35.538 seconds]
[k8s.io] Pods
test/e2e/framework/framework.go:683
  should be submitted and removed [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:688
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/storage/testsuites/base.go:99
Nov 3 06:57:46.236: INFO: Only supported for providers [openstack] (not skeleton)
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/framework/framework.go:148
Nov 3 06:57:46.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: cinder]
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (ext3)] volumes
    test/e2e/storage/testsuites/base.go:98
      should allow exec of files on the volume [BeforeEach]
[90mtest/e2e/storage/testsuites/volumes.go:191[0m | |
[36mOnly supported for providers [openstack] (not skeleton)[0m | |
test/e2e/storage/drivers/in_tree.go:1019 | |
[90m------------------------------[0m | |
[36mS[0m[36mS[0m[36mS[0m[36mS[0m | |
[90m------------------------------[0m | |
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode | |
test/e2e/storage/testsuites/base.go:99 | |
Nov 3 06:57:46.245: INFO: Only supported for providers [azure] (not skeleton) | |
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:46.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
[36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds][0m | |
[sig-storage] In-tree Volumes | |
[90mtest/e2e/storage/utils/framework.go:23[0m | |
[Driver: azure] | |
[90mtest/e2e/storage/in_tree_volumes.go:70[0m | |
[Testpattern: Pre-provisioned PV (block volmode)] volumeMode | |
[90mtest/e2e/storage/testsuites/base.go:98[0m | |
[36m[1mshould not mount / map unused volumes in a pod [BeforeEach][0m | |
[90mtest/e2e/storage/testsuites/volumemode.go:334[0m | |
[36mOnly supported for providers [azure] (not skeleton)[0m | |
test/e2e/storage/drivers/in_tree.go:1449 | |
[90m------------------------------[0m | |
[36mS[0m[36mS[0m | |
[90m------------------------------[0m | |
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath | |
test/e2e/storage/testsuites/base.go:99 | |
Nov 3 06:57:46.252: INFO: Driver local doesn't support InlineVolume -- skipping | |
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:46.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
[36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds][0m | |
[sig-storage] In-tree Volumes | |
[90mtest/e2e/storage/utils/framework.go:23[0m | |
[Driver: local][LocalVolumeType: dir-link-bindmounted] | |
[90mtest/e2e/storage/in_tree_volumes.go:70[0m | |
[Testpattern: Inline-volume (default fs)] subPath | |
[90mtest/e2e/storage/testsuites/base.go:98[0m | |
[36m[1mshould support readOnly directory specified in the volumeMount [BeforeEach][0m | |
[90mtest/e2e/storage/testsuites/subpath.go:359[0m | |
[36mDriver local doesn't support InlineVolume -- skipping[0m | |
test/e2e/storage/testsuites/base.go:152 | |
[90m------------------------------[0m | |
[36mS[0m[36mS[0m[36mS[0m | |
[90m------------------------------[0m | |
[BeforeEach] [sig-network] DNS | |
test/e2e/framework/framework.go:147 | |
[1mSTEP[0m: Creating a kubernetes client | |
Nov 3 06:56:40.475: INFO: >>> kubeConfig: /root/.kube/kind-test-config | |
[1mSTEP[0m: Building a namespace api object, basename dns | |
Nov 3 06:56:41.754: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. | |
Nov 3 06:56:41.774: INFO: Found ClusterRoles; assuming RBAC is enabled. | |
[1mSTEP[0m: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-6138 | |
[1mSTEP[0m: Waiting for a default service account to be provisioned in namespace | |
[It] should provide DNS for ExternalName services [Conformance] | |
test/e2e/framework/framework.go:688 | |
[1mSTEP[0m: Creating a test externalName service | |
[1mSTEP[0m: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6138.svc.cluster.local CNAME > /results/[email protected]; sleep 1; done | |
[1mSTEP[0m: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6138.svc.cluster.local CNAME > /results/[email protected]; sleep 1; done | |
[1mSTEP[0m: creating a pod to probe DNS | |
[1mSTEP[0m: submitting the pod to kubernetes | |
[1mSTEP[0m: retrieving the pod | |
[1mSTEP[0m: looking for the results for each expected name from probers | |
Nov 3 06:57:16.020: INFO: DNS probes using dns-test-19bb20a7-9ca3-455a-b873-75aa75d43cd9 succeeded | |
[1mSTEP[0m: deleting the pod | |
[1mSTEP[0m: changing the externalName to bar.example.com | |
[1mSTEP[0m: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6138.svc.cluster.local CNAME > /results/[email protected]; sleep 1; done | |
[1mSTEP[0m: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6138.svc.cluster.local CNAME > /results/[email protected]; sleep 1; done | |
[1mSTEP[0m: creating a second pod to probe DNS | |
[1mSTEP[0m: submitting the pod to kubernetes | |
[1mSTEP[0m: retrieving the pod | |
[1mSTEP[0m: looking for the results for each expected name from probers | |
Nov 3 06:57:30.110: INFO: DNS probes using dns-test-2bd4442a-62d4-4570-ad68-2753800849b7 succeeded | |
[1mSTEP[0m: deleting the pod | |
[1mSTEP[0m: changing the service to type=ClusterIP | |
[1mSTEP[0m: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6138.svc.cluster.local A > /results/[email protected]; sleep 1; done | |
[1mSTEP[0m: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6138.svc.cluster.local A > /results/[email protected]; sleep 1; done | |
[1mSTEP[0m: creating a third pod to probe DNS | |
[1mSTEP[0m: submitting the pod to kubernetes | |
[1mSTEP[0m: retrieving the pod | |
[1mSTEP[0m: looking for the results for each expected name from probers | |
Nov 3 06:57:46.212: INFO: DNS probes using dns-test-e9266a70-8647-445e-a14a-0e7efb63293d succeeded | |
[1mSTEP[0m: deleting the pod | |
[1mSTEP[0m: deleting the test externalName service | |
[AfterEach] [sig-network] DNS | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:46.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
[1mSTEP[0m: Destroying namespace "dns-6138" for this suite. | |
[32m• [SLOW TEST:65.804 seconds][0m | |
[sig-network] DNS | |
[90mtest/e2e/network/framework.go:23[0m | |
should provide DNS for ExternalName services [Conformance] | |
[90mtest/e2e/framework/framework.go:688[0m | |
[90m------------------------------[0m | |
[36mS[0m | |
[90m------------------------------[0m | |
[BeforeEach] [sig-storage] PersistentVolumes GCEPD | |
test/e2e/framework/framework.go:147 | |
[1mSTEP[0m: Creating a kubernetes client | |
Nov 3 06:57:46.283: INFO: >>> kubeConfig: /root/.kube/kind-test-config | |
[1mSTEP[0m: Building a namespace api object, basename pv | |
[1mSTEP[0m: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pv-4206 | |
[1mSTEP[0m: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [sig-storage] PersistentVolumes GCEPD | |
test/e2e/storage/persistent_volumes-gce.go:75 | |
Nov 3 06:57:46.437: INFO: Only supported for providers [gce gke] (not skeleton) | |
[AfterEach] [sig-storage] PersistentVolumes GCEPD | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:46.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
[1mSTEP[0m: Destroying namespace "pv-4206" for this suite. | |
[AfterEach] [sig-storage] PersistentVolumes GCEPD | |
test/e2e/storage/persistent_volumes-gce.go:108 | |
Nov 3 06:57:46.454: INFO: AfterEach: Cleaning up test resources | |
Nov 3 06:57:46.454: INFO: pvc is nil | |
Nov 3 06:57:46.454: INFO: pv is nil | |
[36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.172 seconds][0m | |
[sig-storage] PersistentVolumes GCEPD | |
[90mtest/e2e/storage/utils/framework.go:23[0m | |
[36m[1mshould test that deleting the PV before the pod does not cause pod deletion to fail on PD detach [BeforeEach][0m | |
[90mtest/e2e/storage/persistent_volumes-gce.go:139[0m | |
[36mOnly supported for providers [gce gke] (not skeleton)[0m | |
test/e2e/storage/persistent_volumes-gce.go:83 | |
[90m------------------------------[0m | |
[36mS[0m | |
[90m------------------------------[0m | |
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes | |
test/e2e/storage/testsuites/base.go:99 | |
Nov 3 06:57:46.458: INFO: Driver local doesn't support InlineVolume -- skipping | |
[AfterEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:46.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
[36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m | |
[sig-storage] In-tree Volumes | |
[90mtest/e2e/storage/utils/framework.go:23[0m | |
[Driver: local][LocalVolumeType: block] | |
[90mtest/e2e/storage/in_tree_volumes.go:70[0m | |
[Testpattern: Inline-volume (ntfs)][sig-windows] volumes | |
[90mtest/e2e/storage/testsuites/base.go:98[0m | |
[36m[1mshould store data [BeforeEach][0m | |
[90mtest/e2e/storage/testsuites/volumes.go:150[0m | |
[36mDriver local doesn't support InlineVolume -- skipping[0m | |
test/e2e/storage/testsuites/base.go:152 | |
[90m------------------------------[0m | |
[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m | |
[90m------------------------------[0m | |
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath | |
test/e2e/storage/testsuites/base.go:99 | |
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath | |
test/e2e/framework/framework.go:147 | |
[1mSTEP[0m: Creating a kubernetes client | |
Nov 3 06:56:40.438: INFO: >>> kubeConfig: /root/.kube/kind-test-config | |
[1mSTEP[0m: Building a namespace api object, basename provisioning | |
Nov 3 06:56:40.666: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. | |
Nov 3 06:56:40.700: INFO: Found ClusterRoles; assuming RBAC is enabled. | |
[1mSTEP[0m: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-4701 | |
[1mSTEP[0m: Waiting for a default service account to be provisioned in namespace | |
[It] should support file as subpath [LinuxOnly] | |
test/e2e/storage/testsuites/subpath.go:225 | |
Nov 3 06:56:40.879: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/local-volume | |
Nov 3 06:56:50.971: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-driver-b4247a4e-e99f-4a7a-a8fc-8d218428ceae] Namespace:provisioning-4701 PodName:hostexec-kind-worker-rlg8k ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} | |
Nov 3 06:56:50.971: INFO: >>> kubeConfig: /root/.kube/kind-test-config | |
Nov 3 06:56:51.135: INFO: Creating resource for pre-provisioned PV | |
Nov 3 06:56:51.135: INFO: Creating PVC and PV | |
[1mSTEP[0m: Creating a PVC followed by a PV | |
Nov 3 06:56:51.144: INFO: Waiting for PV local-h5pf9 to bind to PVC pvc-trfwl | |
Nov 3 06:56:51.144: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-trfwl] to have phase Bound | |
Nov 3 06:56:51.147: INFO: PersistentVolumeClaim pvc-trfwl found but phase is Pending instead of Bound. | |
Nov 3 06:56:53.171: INFO: PersistentVolumeClaim pvc-trfwl found but phase is Pending instead of Bound. | |
Nov 3 06:56:55.176: INFO: PersistentVolumeClaim pvc-trfwl found but phase is Pending instead of Bound. | |
Nov 3 06:56:57.181: INFO: PersistentVolumeClaim pvc-trfwl found but phase is Pending instead of Bound. | |
Nov 3 06:56:59.185: INFO: PersistentVolumeClaim pvc-trfwl found but phase is Pending instead of Bound. | |
Nov 3 06:57:01.189: INFO: PersistentVolumeClaim pvc-trfwl found but phase is Pending instead of Bound. | |
Nov 3 06:57:03.196: INFO: PersistentVolumeClaim pvc-trfwl found and phase=Bound (12.051766788s) | |
Nov 3 06:57:03.196: INFO: Waiting up to 3m0s for PersistentVolume local-h5pf9 to have phase Bound | |
Nov 3 06:57:03.280: INFO: PersistentVolume local-h5pf9 found and phase=Bound (83.917188ms) | |
[1mSTEP[0m: Creating pod pod-subpath-test-local-preprovisionedpv-79qq | |
[1mSTEP[0m: Creating a pod to test atomic-volume-subpath | |
Nov 3 06:57:03.302: INFO: Waiting up to 5m0s for pod "pod-subpath-test-local-preprovisionedpv-79qq" in namespace "provisioning-4701" to be "success or failure" | |
Nov 3 06:57:03.308: INFO: Pod "pod-subpath-test-local-preprovisionedpv-79qq": Phase="Pending", Reason="", readiness=false. Elapsed: 5.084643ms | |
Nov 3 06:57:05.316: INFO: Pod "pod-subpath-test-local-preprovisionedpv-79qq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013435026s | |
Nov 3 06:57:07.323: INFO: Pod "pod-subpath-test-local-preprovisionedpv-79qq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020887033s | |
Nov 3 06:57:09.343: INFO: Pod "pod-subpath-test-local-preprovisionedpv-79qq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040399547s | |
Nov 3 06:57:11.361: INFO: Pod "pod-subpath-test-local-preprovisionedpv-79qq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.058017528s | |
Nov 3 06:57:13.373: INFO: Pod "pod-subpath-test-local-preprovisionedpv-79qq": Phase="Pending", Reason="", readiness=false. Elapsed: 10.070648012s | |
Nov 3 06:57:15.379: INFO: Pod "pod-subpath-test-local-preprovisionedpv-79qq": Phase="Pending", Reason="", readiness=false. Elapsed: 12.076683453s | |
Nov 3 06:57:17.384: INFO: Pod "pod-subpath-test-local-preprovisionedpv-79qq": Phase="Pending", Reason="", readiness=false. Elapsed: 14.081435104s | |
Nov 3 06:57:19.390: INFO: Pod "pod-subpath-test-local-preprovisionedpv-79qq": Phase="Pending", Reason="", readiness=false. Elapsed: 16.087539268s | |
Nov 3 06:57:21.420: INFO: Pod "pod-subpath-test-local-preprovisionedpv-79qq": Phase="Pending", Reason="", readiness=false. Elapsed: 18.117931769s | |
Nov 3 06:57:23.434: INFO: Pod "pod-subpath-test-local-preprovisionedpv-79qq": Phase="Pending", Reason="", readiness=false. Elapsed: 20.131296404s | |
Nov 3 06:57:25.439: INFO: Pod "pod-subpath-test-local-preprovisionedpv-79qq": Phase="Pending", Reason="", readiness=false. Elapsed: 22.136435929s | |
Nov 3 06:57:27.446: INFO: Pod "pod-subpath-test-local-preprovisionedpv-79qq": Phase="Pending", Reason="", readiness=false. Elapsed: 24.143538994s | |
Nov 3 06:57:29.453: INFO: Pod "pod-subpath-test-local-preprovisionedpv-79qq": Phase="Pending", Reason="", readiness=false. Elapsed: 26.150452024s | |
Nov 3 06:57:31.460: INFO: Pod "pod-subpath-test-local-preprovisionedpv-79qq": Phase="Running", Reason="", readiness=true. Elapsed: 28.157821832s | |
Nov 3 06:57:33.467: INFO: Pod "pod-subpath-test-local-preprovisionedpv-79qq": Phase="Running", Reason="", readiness=true. Elapsed: 30.164845113s | |
Nov 3 06:57:35.472: INFO: Pod "pod-subpath-test-local-preprovisionedpv-79qq": Phase="Running", Reason="", readiness=true. Elapsed: 32.169669688s | |
Nov 3 06:57:37.475: INFO: Pod "pod-subpath-test-local-preprovisionedpv-79qq": Phase="Running", Reason="", readiness=true. Elapsed: 34.17288858s | |
Nov 3 06:57:39.507: INFO: Pod "pod-subpath-test-local-preprovisionedpv-79qq": Phase="Running", Reason="", readiness=true. Elapsed: 36.204158695s | |
Nov 3 06:57:41.525: INFO: Pod "pod-subpath-test-local-preprovisionedpv-79qq": Phase="Running", Reason="", readiness=true. Elapsed: 38.221996222s | |
Nov 3 06:57:43.573: INFO: Pod "pod-subpath-test-local-preprovisionedpv-79qq": Phase="Running", Reason="", readiness=true. Elapsed: 40.270578223s | |
Nov 3 06:57:45.581: INFO: Pod "pod-subpath-test-local-preprovisionedpv-79qq": Phase="Running", Reason="", readiness=true. Elapsed: 42.278136169s | |
Nov 3 06:57:47.587: INFO: Pod "pod-subpath-test-local-preprovisionedpv-79qq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 44.284509631s | |
[1mSTEP[0m: Saw pod success | |
Nov 3 06:57:47.587: INFO: Pod "pod-subpath-test-local-preprovisionedpv-79qq" satisfied condition "success or failure" | |
Nov 3 06:57:47.591: INFO: Trying to get logs from node kind-worker pod pod-subpath-test-local-preprovisionedpv-79qq container test-container-subpath-local-preprovisionedpv-79qq: <nil> | |
[1mSTEP[0m: delete the pod | |
Nov 3 06:57:47.623: INFO: Waiting for pod pod-subpath-test-local-preprovisionedpv-79qq to disappear | |
Nov 3 06:57:47.627: INFO: Pod pod-subpath-test-local-preprovisionedpv-79qq no longer exists | |
[1mSTEP[0m: Deleting pod pod-subpath-test-local-preprovisionedpv-79qq | |
Nov 3 06:57:47.627: INFO: Deleting pod "pod-subpath-test-local-preprovisionedpv-79qq" in namespace "provisioning-4701" | |
[1mSTEP[0m: Deleting pod | |
Nov 3 06:57:47.629: INFO: Deleting pod "pod-subpath-test-local-preprovisionedpv-79qq" in namespace "provisioning-4701" | |
[1mSTEP[0m: Deleting pv and pvc | |
Nov 3 06:57:47.633: INFO: Deleting PersistentVolumeClaim "pvc-trfwl" | |
Nov 3 06:57:47.650: INFO: Deleting PersistentVolume "local-h5pf9" | |
[1mSTEP[0m: Removing the test directory | |
Nov 3 06:57:47.665: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-driver-b4247a4e-e99f-4a7a-a8fc-8d218428ceae] Namespace:provisioning-4701 PodName:hostexec-kind-worker-rlg8k ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} | |
Nov 3 06:57:47.665: INFO: >>> kubeConfig: /root/.kube/kind-test-config | |
[1mSTEP[0m: Deleting pod hostexec-kind-worker-rlg8k in namespace provisioning-4701 | |
Nov 3 06:57:47.848: INFO: In-tree plugin kubernetes.io/local-volume is not migrated, not validating any metrics | |
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:47.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
[1mSTEP[0m: Destroying namespace "provisioning-4701" for this suite. | |
[32m• [SLOW TEST:67.419 seconds][0m | |
[sig-storage] In-tree Volumes | |
[90mtest/e2e/storage/utils/framework.go:23[0m | |
[Driver: local][LocalVolumeType: dir] | |
[90mtest/e2e/storage/in_tree_volumes.go:70[0m | |
[Testpattern: Pre-provisioned PV (default fs)] subPath | |
[90mtest/e2e/storage/testsuites/base.go:98[0m | |
should support file as subpath [LinuxOnly] | |
[90mtest/e2e/storage/testsuites/subpath.go:225[0m | |
[90m------------------------------[0m | |
[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m | |
[90m------------------------------[0m | |
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] | |
test/e2e/framework/framework.go:147 | |
[1mSTEP[0m: Creating a kubernetes client | |
Nov 3 06:57:33.374: INFO: >>> kubeConfig: /root/.kube/kind-test-config | |
[1mSTEP[0m: Building a namespace api object, basename webhook | |
[1mSTEP[0m: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-9330 | |
[1mSTEP[0m: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] | |
test/e2e/apimachinery/webhook.go:87 | |
[1mSTEP[0m: Setting up server cert | |
[1mSTEP[0m: Create role binding to let webhook read extension-apiserver-authentication | |
[1mSTEP[0m: Deploying the webhook pod | |
[1mSTEP[0m: Wait for the deployment to be ready | |
Nov 3 06:57:35.148: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set | |
Nov 3 06:57:37.168: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361055, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361055, loc:(*time.Location)(0x83e1840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361055, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361055, loc:(*time.Location)(0x83e1840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)} | |
Nov 3 06:57:39.171: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361055, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361055, loc:(*time.Location)(0x83e1840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361055, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361055, loc:(*time.Location)(0x83e1840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)} | |
Nov 3 06:57:41.223: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361055, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361055, loc:(*time.Location)(0x83e1840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361055, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361055, loc:(*time.Location)(0x83e1840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)} | |
Nov 3 06:57:43.211: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361055, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361055, loc:(*time.Location)(0x83e1840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361055, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361055, loc:(*time.Location)(0x83e1840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)} | |
Nov 3 06:57:45.181: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361055, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361055, loc:(*time.Location)(0x83e1840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361055, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361055, loc:(*time.Location)(0x83e1840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)} | |
Nov 3 06:57:47.172: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361055, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361055, loc:(*time.Location)(0x83e1840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361055, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361055, loc:(*time.Location)(0x83e1840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)} | |
Nov 3 06:57:49.172: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361055, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361055, loc:(*time.Location)(0x83e1840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361055, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361055, loc:(*time.Location)(0x83e1840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)} | |
[1mSTEP[0m: Deploying the webhook service | |
[1mSTEP[0m: Verifying the service has paired with the endpoint | |
Nov 3 06:57:52.194: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 | |
[It] should include webhook resources in discovery documents [Conformance] | |
test/e2e/framework/framework.go:688 | |
[1mSTEP[0m: fetching the /apis discovery document | |
[1mSTEP[0m: finding the admissionregistration.k8s.io API group in the /apis discovery document | |
[1mSTEP[0m: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document | |
[1mSTEP[0m: fetching the /apis/admissionregistration.k8s.io discovery document | |
[1mSTEP[0m: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document | |
[1mSTEP[0m: fetching the /apis/admissionregistration.k8s.io/v1 discovery document | |
[1mSTEP[0m: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document | |
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:52.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
[1mSTEP[0m: Destroying namespace "webhook-9330" for this suite. | |
[1mSTEP[0m: Destroying namespace "webhook-9330-markers" for this suite. | |
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] | |
test/e2e/apimachinery/webhook.go:102 | |
[32m• [SLOW TEST:18.972 seconds][0m | |
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] | |
[90mtest/e2e/apimachinery/framework.go:23[0m | |
should include webhook resources in discovery documents [Conformance] | |
[90mtest/e2e/framework/framework.go:688[0m | |
[90m------------------------------[0m | |
[36mS[0m[36mS[0m | |
[90m------------------------------[0m | |
[BeforeEach] [sig-network] Networking | |
test/e2e/framework/framework.go:147 | |
[1mSTEP[0m: Creating a kubernetes client | |
Nov 3 06:56:40.514: INFO: >>> kubeConfig: /root/.kube/kind-test-config | |
[1mSTEP[0m: Building a namespace api object, basename nettest | |
Nov 3 06:56:41.550: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. | |
Nov 3 06:56:41.585: INFO: Found ClusterRoles; assuming RBAC is enabled. | |
[1mSTEP[0m: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nettest-9766 | |
[1mSTEP[0m: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [sig-network] Networking | |
test/e2e/network/networking.go:39 | |
[1mSTEP[0m: Executing a successful http request from the external internet | |
[It] should function for client IP based session affinity: udp [LinuxOnly] | |
test/e2e/network/networking.go:228 | |
[1mSTEP[0m: Performing setup for networking test in namespace nettest-9766 | |
[1mSTEP[0m: creating a selector | |
[1mSTEP[0m: Creating the service pods in kubernetes | |
Nov 3 06:56:41.934: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable | |
[1mSTEP[0m: Creating test pods | |
[1mSTEP[0m: Getting node addresses | |
Nov 3 06:57:24.149: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable | |
[1mSTEP[0m: Creating the service on top of the pods in kubernetes | |
Nov 3 06:57:24.188: INFO: Service node-port-service in namespace nettest-9766 found. | |
Nov 3 06:57:24.252: INFO: Service session-affinity-service in namespace nettest-9766 found. | |
[1mSTEP[0m: dialing(udp) test-container-pod --> 10.96.235.192:90 | |
Nov 3 06:57:24.263: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.50:8080/dial?request=hostName&protocol=udp&host=10.96.235.192&port=90&tries=1'] Namespace:nettest-9766 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} | |
Nov 3 06:57:24.264: INFO: >>> kubeConfig: /root/.kube/kind-test-config | |
Nov 3 06:57:29.534: INFO: Tries: 10, in try: 0, stdout: {"errors":["reading from udp connection failed. err:'read udp 10.244.1.50:44626-\u003e10.96.235.192:90: i/o timeout'"]}, stderr: , command run in: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"host-test-container-pod", GenerateName:"", Namespace:"nettest-9766", SelfLink:"/api/v1/namespaces/nettest-9766/pods/host-test-container-pod", UID:"044aa97d-f9a9-4ed2-b69d-d3afacffa944", ResourceVersion:"2644", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708361038, loc:(*time.Location)(0x83e1840)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-v7gh6", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001696c00), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"agnhost", Image:"gcr.io/kubernetes-e2e-test-images/agnhost:2.6", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-v7gh6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0006d7b98), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kind-worker", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002475920), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", 
Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0006d7c00)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0006d7c50)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0006d7c58), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0006d7c5c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361038, loc:(*time.Location)(0x83e1840)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361041, loc:(*time.Location)(0x83e1840)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361041, loc:(*time.Location)(0x83e1840)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361038, loc:(*time.Location)(0x83e1840)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.4", PodIP:"172.17.0.4", PodIPs:[]v1.PodIP{v1.PodIP{IP:"172.17.0.4"}}, StartTime:(*v1.Time)(0xc001e63c80), InitContainerStatuses:[]v1.ContainerStatus(nil), 
ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"agnhost", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc001e63cc0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"gcr.io/kubernetes-e2e-test-images/agnhost:2.6", ImageID:"gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727", ContainerID:"containerd://1efeadb2f1ba47feadaa53e002eec43b7b7d8820f5c49ebf8c7df960fbe418f0", Started:(*bool)(0xc0006d7d97)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
Nov 3 06:57:31.540: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.50:8080/dial?request=hostName&protocol=udp&host=10.96.235.192&port=90&tries=1'] Namespace:nettest-9766 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 3 06:57:31.540: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:57:31.794: INFO: Tries: 10, in try: 1, stdout: {"responses":["netserver-0"]}, stderr: , command run in: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"host-test-container-pod", GenerateName:"", Namespace:"nettest-9766", SelfLink:"/api/v1/namespaces/nettest-9766/pods/host-test-container-pod", UID:"044aa97d-f9a9-4ed2-b69d-d3afacffa944", ResourceVersion:"2644", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708361038, loc:(*time.Location)(0x83e1840)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-v7gh6", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001696c00), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), 
PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"agnhost", Image:"gcr.io/kubernetes-e2e-test-images/agnhost:2.6", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-v7gh6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0006d7b98), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kind-worker", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002475920), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0006d7c00)}, 
v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0006d7c50)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0006d7c58), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0006d7c5c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361038, loc:(*time.Location)(0x83e1840)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361041, loc:(*time.Location)(0x83e1840)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361041, loc:(*time.Location)(0x83e1840)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361038, loc:(*time.Location)(0x83e1840)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.4", PodIP:"172.17.0.4", PodIPs:[]v1.PodIP{v1.PodIP{IP:"172.17.0.4"}}, StartTime:(*v1.Time)(0xc001e63c80), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"agnhost", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), 
Running:(*v1.ContainerStateRunning)(0xc001e63cc0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"gcr.io/kubernetes-e2e-test-images/agnhost:2.6", ImageID:"gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727", ContainerID:"containerd://1efeadb2f1ba47feadaa53e002eec43b7b7d8820f5c49ebf8c7df960fbe418f0", Started:(*bool)(0xc0006d7d97)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
Nov 3 06:57:33.801: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.50:8080/dial?request=hostName&protocol=udp&host=10.96.235.192&port=90&tries=1'] Namespace:nettest-9766 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 3 06:57:33.801: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:57:34.211: INFO: Tries: 10, in try: 2, stdout: {"responses":["netserver-0"]}, stderr: , command run in: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"host-test-container-pod", GenerateName:"", Namespace:"nettest-9766", SelfLink:"/api/v1/namespaces/nettest-9766/pods/host-test-container-pod", UID:"044aa97d-f9a9-4ed2-b69d-d3afacffa944", ResourceVersion:"2644", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708361038, loc:(*time.Location)(0x83e1840)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-v7gh6", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001696c00), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), 
PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"agnhost", Image:"gcr.io/kubernetes-e2e-test-images/agnhost:2.6", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-v7gh6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0006d7b98), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kind-worker", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002475920), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0006d7c00)}, 
v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0006d7c50)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0006d7c58), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0006d7c5c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361038, loc:(*time.Location)(0x83e1840)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361041, loc:(*time.Location)(0x83e1840)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361041, loc:(*time.Location)(0x83e1840)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361038, loc:(*time.Location)(0x83e1840)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.4", PodIP:"172.17.0.4", PodIPs:[]v1.PodIP{v1.PodIP{IP:"172.17.0.4"}}, StartTime:(*v1.Time)(0xc001e63c80), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"agnhost", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), 
Running:(*v1.ContainerStateRunning)(0xc001e63cc0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"gcr.io/kubernetes-e2e-test-images/agnhost:2.6", ImageID:"gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727", ContainerID:"containerd://1efeadb2f1ba47feadaa53e002eec43b7b7d8820f5c49ebf8c7df960fbe418f0", Started:(*bool)(0xc0006d7d97)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
Nov 3 06:57:36.221: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.50:8080/dial?request=hostName&protocol=udp&host=10.96.235.192&port=90&tries=1'] Namespace:nettest-9766 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 3 06:57:36.221: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:57:36.422: INFO: Tries: 10, in try: 3, stdout: {"responses":["netserver-0"]}, stderr: , command run in: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"host-test-container-pod", GenerateName:"", Namespace:"nettest-9766", SelfLink:"/api/v1/namespaces/nettest-9766/pods/host-test-container-pod", UID:"044aa97d-f9a9-4ed2-b69d-d3afacffa944", ResourceVersion:"2644", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708361038, loc:(*time.Location)(0x83e1840)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-v7gh6", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001696c00), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), 
PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"agnhost", Image:"gcr.io/kubernetes-e2e-test-images/agnhost:2.6", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-v7gh6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0006d7b98), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kind-worker", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002475920), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0006d7c00)}, 
v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0006d7c50)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0006d7c58), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0006d7c5c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361038, loc:(*time.Location)(0x83e1840)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361041, loc:(*time.Location)(0x83e1840)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361041, loc:(*time.Location)(0x83e1840)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361038, loc:(*time.Location)(0x83e1840)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.4", PodIP:"172.17.0.4", PodIPs:[]v1.PodIP{v1.PodIP{IP:"172.17.0.4"}}, StartTime:(*v1.Time)(0xc001e63c80), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"agnhost", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), 
Running:(*v1.ContainerStateRunning)(0xc001e63cc0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"gcr.io/kubernetes-e2e-test-images/agnhost:2.6", ImageID:"gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727", ContainerID:"containerd://1efeadb2f1ba47feadaa53e002eec43b7b7d8820f5c49ebf8c7df960fbe418f0", Started:(*bool)(0xc0006d7d97)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
Nov 3 06:57:38.427: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.50:8080/dial?request=hostName&protocol=udp&host=10.96.235.192&port=90&tries=1'] Namespace:nettest-9766 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 3 06:57:38.427: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:57:38.706: INFO: Tries: 10, in try: 4, stdout: {"responses":["netserver-0"]}, stderr: , command run in: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"host-test-container-pod", GenerateName:"", Namespace:"nettest-9766", SelfLink:"/api/v1/namespaces/nettest-9766/pods/host-test-container-pod", UID:"044aa97d-f9a9-4ed2-b69d-d3afacffa944", ResourceVersion:"2644", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708361038, loc:(*time.Location)(0x83e1840)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-v7gh6", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001696c00), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), 
PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"agnhost", Image:"gcr.io/kubernetes-e2e-test-images/agnhost:2.6", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-v7gh6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0006d7b98), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kind-worker", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002475920), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0006d7c00)}, 
v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0006d7c50)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0006d7c58), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0006d7c5c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361038, loc:(*time.Location)(0x83e1840)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361041, loc:(*time.Location)(0x83e1840)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361041, loc:(*time.Location)(0x83e1840)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361038, loc:(*time.Location)(0x83e1840)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.4", PodIP:"172.17.0.4", PodIPs:[]v1.PodIP{v1.PodIP{IP:"172.17.0.4"}}, StartTime:(*v1.Time)(0xc001e63c80), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"agnhost", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), 
Running:(*v1.ContainerStateRunning)(0xc001e63cc0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"gcr.io/kubernetes-e2e-test-images/agnhost:2.6", ImageID:"gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727", ContainerID:"containerd://1efeadb2f1ba47feadaa53e002eec43b7b7d8820f5c49ebf8c7df960fbe418f0", Started:(*bool)(0xc0006d7d97)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
Nov 3 06:57:40.735: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.50:8080/dial?request=hostName&protocol=udp&host=10.96.235.192&port=90&tries=1'] Namespace:nettest-9766 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 3 06:57:40.735: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:57:41.082: INFO: Tries: 10, in try: 5, stdout: {"responses":["netserver-0"]}, stderr: , command run in: &v1.Pod{...}
Nov 3 06:57:43.096: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.50:8080/dial?request=hostName&protocol=udp&host=10.96.235.192&port=90&tries=1'] Namespace:nettest-9766 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 3 06:57:43.096: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:57:43.903: INFO: Tries: 10, in try: 6, stdout: {"responses":["netserver-0"]}, stderr: , command run in: &v1.Pod{...}
Nov 3 06:57:45.908: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.50:8080/dial?request=hostName&protocol=udp&host=10.96.235.192&port=90&tries=1'] Namespace:nettest-9766 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 3 06:57:45.909: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:57:46.096: INFO: Tries: 10, in try: 7, stdout: {"responses":["netserver-0"]}, stderr: , command run in: &v1.Pod{...}
Nov 3 06:57:48.099: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.50:8080/dial?request=hostName&protocol=udp&host=10.96.235.192&port=90&tries=1'] Namespace:nettest-9766 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 3 06:57:48.099: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:57:48.324: INFO: Tries: 10, in try: 8, stdout: {"responses":["netserver-0"]}, stderr: , command run in: &v1.Pod{...}
Nov 3 06:57:50.330: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.50:8080/dial?request=hostName&protocol=udp&host=10.96.235.192&port=90&tries=1'] Namespace:nettest-9766 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 3 06:57:50.330: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:57:50.516: INFO: Tries: 10, in try: 9, stdout: {"responses":["netserver-0"]}, stderr: , command run in: &v1.Pod{...}
[AfterEach] [sig-network] Networking | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:52.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "nettest-9766" for this suite.
• [SLOW TEST:72.014 seconds]
[sig-network] Networking
test/e2e/network/framework.go:23
Granular Checks: Services
test/e2e/network/networking.go:107
should function for client IP based session affinity: udp [LinuxOnly]
test/e2e/network/networking.go:228
------------------------------
SS
------------------------------
[BeforeEach] [sig-cli] Kubectl client | |
test/e2e/framework/framework.go:147 | |
STEP: Creating a kubernetes client
Nov 3 06:57:52.353: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-2746
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client | |
test/e2e/kubectl/kubectl.go:269 | |
[It] should support proxy with --port 0 [Conformance] | |
test/e2e/framework/framework.go:688 | |
STEP: starting the proxy server
Nov 3 06:57:52.558: INFO: Asynchronously running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl kubectl --server=https://127.0.0.1:33605 --kubeconfig=/root/.kube/kind-test-config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:52.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "kubectl-2746" for this suite.
•SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes | |
test/e2e/storage/testsuites/base.go:99 | |
Nov 3 06:57:52.692: INFO: Driver azure doesn't support ext3 -- skipping | |
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:52.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: azure]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/storage/testsuites/base.go:98
should allow exec of files on the volume [BeforeEach]
test/e2e/storage/testsuites/volumes.go:191
Driver azure doesn't support ext3 -- skipping
test/e2e/storage/testsuites/base.go:157
------------------------------
S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath | |
test/e2e/storage/testsuites/base.go:99 | |
Nov 3 06:57:52.696: INFO: Driver local doesn't support InlineVolume -- skipping | |
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:52.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: blockfs]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/testsuites/base.go:98
should support readOnly directory specified in the volumeMount [BeforeEach]
test/e2e/storage/testsuites/subpath.go:359
Driver local doesn't support InlineVolume -- skipping
test/e2e/storage/testsuites/base.go:152
------------------------------
SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode | |
test/e2e/storage/testsuites/base.go:99 | |
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode | |
test/e2e/framework/framework.go:147 | |
STEP: Creating a kubernetes client
Nov 3 06:57:52.702: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename volumemode
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in volumemode-4839
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not mount / map unused volumes in a pod | |
test/e2e/storage/testsuites/volumemode.go:334 | |
Nov 3 06:57:52.858: INFO: Driver "local" does not provide raw block - skipping | |
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:52.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "volumemode-4839" for this suite.
S [SKIPPING] [0.174 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: dir-bindmounted]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Pre-provisioned PV (block volmode)] volumeMode
test/e2e/storage/testsuites/base.go:98
should not mount / map unused volumes in a pod [It]
test/e2e/storage/testsuites/volumemode.go:334
Driver "local" does not provide raw block - skipping
test/e2e/storage/testsuites/volumes.go:98
------------------------------
S
------------------------------
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] | |
test/e2e/common/sysctl.go:34 | |
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] | |
test/e2e/framework/framework.go:147 | |
STEP: Creating a kubernetes client
Nov 3 06:57:52.883: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename sysctl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sysctl-1864
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] | |
test/e2e/common/sysctl.go:63 | |
[It] should reject invalid sysctls | |
test/e2e/common/sysctl.go:153 | |
STEP: Creating a pod with one valid and two invalid sysctls
[AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
test/e2e/framework/framework.go:148
Nov 3 06:57:53.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-1864" for this suite.
•SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes | |
test/e2e/storage/testsuites/base.go:99 | |
Nov 3 06:57:53.082: INFO: Driver local doesn't support InlineVolume -- skipping | |
[AfterEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:53.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: dir]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Inline-volume (ntfs)][sig-windows] volumes
test/e2e/storage/testsuites/base.go:98
should allow exec of files on the volume [BeforeEach]
test/e2e/storage/testsuites/volumes.go:191
Driver local doesn't support InlineVolume -- skipping
test/e2e/storage/testsuites/base.go:152
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes | |
test/e2e/storage/testsuites/base.go:99 | |
Nov 3 06:57:53.088: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) | |
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:53.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: gluster]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Inline-volume (default fs)] volumes
test/e2e/storage/testsuites/base.go:98
should allow exec of files on the volume [BeforeEach]
test/e2e/storage/testsuites/volumes.go:191
Only supported for node OS distro [gci ubuntu custom] (not debian)
test/e2e/storage/drivers/in_tree.go:258
------------------------------
SSSS
------------------------------
[BeforeEach] [sig-network] Firewall rule | |
test/e2e/framework/framework.go:147 | |
STEP: Creating a kubernetes client
Nov 3 06:57:53.099: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename firewall-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in firewall-test-2424
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Firewall rule | |
test/e2e/network/firewall.go:55 | |
Nov 3 06:57:53.244: INFO: Only supported for providers [gce] (not skeleton) | |
[AfterEach] [sig-network] Firewall rule | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:57:53.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
STEP: Destroying namespace "firewall-test-2424" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [0.154 seconds]
[sig-network] Firewall rule
test/e2e/network/framework.go:23
should have correct firewall rules for e2e cluster [BeforeEach]
test/e2e/network/firewall.go:196
Only supported for providers [gce] (not skeleton)
test/e2e/network/firewall.go:56
------------------------------
SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes | |
test/e2e/storage/testsuites/base.go:99 | |
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes | |
test/e2e/framework/framework.go:147 | |
STEP: Creating a kubernetes client
Nov 3 06:57:11.317: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename volume
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in volume-5478
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow exec of files on the volume | |
test/e2e/storage/testsuites/volumes.go:191 | |
Nov 3 06:57:11.716: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/local-volume | |
Nov 3 06:57:23.777: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-driver-97385aac-a85a-4937-b5b4-1e75e1d1232d-backend && mount --bind /tmp/local-driver-97385aac-a85a-4937-b5b4-1e75e1d1232d-backend /tmp/local-driver-97385aac-a85a-4937-b5b4-1e75e1d1232d-backend && ln -s /tmp/local-driver-97385aac-a85a-4937-b5b4-1e75e1d1232d-backend /tmp/local-driver-97385aac-a85a-4937-b5b4-1e75e1d1232d] Namespace:volume-5478 PodName:hostexec-kind-worker2-th989 ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} | |
Nov 3 06:57:23.777: INFO: >>> kubeConfig: /root/.kube/kind-test-config | |
Nov 3 06:57:24.023: INFO: Creating resource for pre-provisioned PV | |
Nov 3 06:57:24.023: INFO: Creating PVC and PV | |
STEP: Creating a PVC followed by a PV
Nov 3 06:57:24.048: INFO: Waiting for PV local-7sgl2 to bind to PVC pvc-4rx26 | |
Nov 3 06:57:24.048: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-4rx26] to have phase Bound | |
Nov 3 06:57:24.067: INFO: PersistentVolumeClaim pvc-4rx26 found but phase is Pending instead of Bound. | |
Nov 3 06:57:26.080: INFO: PersistentVolumeClaim pvc-4rx26 found but phase is Pending instead of Bound. | |
Nov 3 06:57:28.089: INFO: PersistentVolumeClaim pvc-4rx26 found but phase is Pending instead of Bound. | |
Nov 3 06:57:30.094: INFO: PersistentVolumeClaim pvc-4rx26 found but phase is Pending instead of Bound. | |
Nov 3 06:57:32.108: INFO: PersistentVolumeClaim pvc-4rx26 found and phase=Bound (8.059808346s) | |
Nov 3 06:57:32.108: INFO: Waiting up to 3m0s for PersistentVolume local-7sgl2 to have phase Bound | |
Nov 3 06:57:32.120: INFO: PersistentVolume local-7sgl2 found and phase=Bound (11.395527ms) | |
STEP: Creating pod exec-volume-test-local-preprovisionedpv-r7fw
STEP: Creating a pod to test exec-volume-test
Nov 3 06:57:32.137: INFO: Waiting up to 5m0s for pod "exec-volume-test-local-preprovisionedpv-r7fw" in namespace "volume-5478" to be "success or failure" | |
Nov 3 06:57:32.142: INFO: Pod "exec-volume-test-local-preprovisionedpv-r7fw": Phase="Pending", Reason="", readiness=false. Elapsed: 3.369789ms | |
Nov 3 06:57:34.154: INFO: Pod "exec-volume-test-local-preprovisionedpv-r7fw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015600819s | |
Nov 3 06:57:36.165: INFO: Pod "exec-volume-test-local-preprovisionedpv-r7fw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026453756s | |
Nov 3 06:57:38.176: INFO: Pod "exec-volume-test-local-preprovisionedpv-r7fw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03753174s | |
Nov 3 06:57:40.187: INFO: Pod "exec-volume-test-local-preprovisionedpv-r7fw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049228612s | |
Nov 3 06:57:42.237: INFO: Pod "exec-volume-test-local-preprovisionedpv-r7fw": Phase="Pending", Reason="", readiness=false. Elapsed: 10.098470889s | |
Nov 3 06:57:44.294: INFO: Pod "exec-volume-test-local-preprovisionedpv-r7fw": Phase="Pending", Reason="", readiness=false. Elapsed: 12.155531331s | |
Nov 3 06:57:46.306: INFO: Pod "exec-volume-test-local-preprovisionedpv-r7fw": Phase="Pending", Reason="", readiness=false. Elapsed: 14.167696985s | |
Nov 3 06:57:48.312: INFO: Pod "exec-volume-test-local-preprovision