sonobuoy run --e2e-focus "NetworkPolicy" --e2e-skip "" --kubeconfig=.//out/workload-cluster-4/kubeconfig
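The failures below come from a run started with the command above. As a minimal sketch (not part of the original run) of how such results are usually collected with the sonobuoy CLI — v0.17.1, per the pod images visible later in the logs; the kubeconfig path matches the command above:

```
# Sketch only; assumes the run above is in progress with this kubeconfig.
KUBECONFIG_PATH=./out/workload-cluster-4/kubeconfig
sonobuoy status --kubeconfig "$KUBECONFIG_PATH"               # poll until the run reports "complete"
tarball=$(sonobuoy retrieve --kubeconfig "$KUBECONFIG_PATH")  # downloads the results tarball, prints its name
sonobuoy results "$tarball"                                   # per-plugin pass/fail summary
```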
• Failure [68.954 seconds]
[sig-network] NetworkPolicy [LinuxOnly]
/workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
NetworkPolicy between server and client
/workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:56
should enforce updated policy [Feature:NetworkPolicy] [It]
--
• Failure [195.193 seconds]
[sig-network] NetworkPolicy [LinuxOnly]
/workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
NetworkPolicy between server and client
/workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:56
should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy] [It]
--
• Failure [185.464 seconds]
[sig-network] NetworkPolicy [LinuxOnly]
/workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
NetworkPolicy between server and client
/workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:56
should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy] [It]
--
• Failure [85.832 seconds]
[sig-network] NetworkPolicy [LinuxOnly]
/workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
NetworkPolicy between server and client
/workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:56
should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy] [It]
--
• Failure [79.669 seconds]
[sig-network] NetworkPolicy [LinuxOnly]
/workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
NetworkPolicy between server and client
/workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:56
should allow ingress access from updated pod [Feature:NetworkPolicy] [It]
--
• Failure [140.377 seconds]
[sig-network] NetworkPolicy [LinuxOnly]
/workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
NetworkPolicy between server and client
/workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:56
should enforce policy based on NamespaceSelector with MatchExpressions[Feature:NetworkPolicy] [It]
--
• Failure [124.318 seconds]
[sig-network] NetworkPolicy [LinuxOnly]
/workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
NetworkPolicy between server and client
/workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:56
should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy] [It]
--
• Failure [137.296 seconds]
[sig-network] NetworkPolicy [LinuxOnly]
/workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
NetworkPolicy between server and client
/workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:56
should allow ingress access from updated namespace [Feature:NetworkPolicy] [It]
========== RESULTS FROM UPSTREAM CALICO 3.10.3
```
Summarizing 13 Failures:
[Fail] [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client [It] should allow ingress access from updated namespace [Feature:NetworkPolicy]
/workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:1421
[Fail] [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client [It] should allow ingress access from namespace on one named port [Feature:NetworkPolicy]
/workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:1421
[Fail] [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client [It] should allow ingress access from updated pod [Feature:NetworkPolicy]
/workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:1421
[Fail] [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client [It] should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
/workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:1421
[Fail] [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client [It] should enforce policy to allow traffic from pods within server namespace based on PodSelector [Feature:NetworkPolicy]
/workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:1421
[Fail] [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client [It] should stop enforcing policies after they are deleted [Feature:NetworkPolicy]
/workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:1421
[Fail] [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client [It] should enforce updated policy [Feature:NetworkPolicy]
/workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:1421
[Fail] [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client [It] should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy]
/workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:1421
[Fail] [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client [It] should enforce policy based on NamespaceSelector with MatchExpressions[Feature:NetworkPolicy]
/workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:1458
[Fail] [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client [It] should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
/workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:1421
[Fail] [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client [It] should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy]
/workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:1421
[Fail] [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client [It] should enforce multiple ingress policies with ingress allow-all policy taking precedence [Feature:NetworkPolicy]
/workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:1421
[Fail] [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client [It] should enforce policy based on PodSelector with MatchExpressions[Feature:NetworkPolicy]
/workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:1421
```
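All 13 failing specs exercise ingress rules keyed off pod and namespace labels. As a minimal sketch of the kind of policy the "allow traffic only from a different namespace, based on NamespaceSelector" spec applies — every name and label below is illustrative, not what the e2e framework actually generates:

```
# Illustrative only: the e2e framework creates its own namespaces and labels.
kubectl apply --kubeconfig=./out/workload-cluster-4/kubeconfig -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-ns-b        # hypothetical policy name
  namespace: ns-a              # hypothetical server namespace
spec:
  podSelector: {}              # selects every pod in ns-a
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          ns-name: ns-b        # hypothetical label on the client namespace
EOF
```

When a CNI enforces this correctly, clients in a namespace labeled `ns-name: ns-b` reach the server while all other clients are dropped; the failures above mean one half of that expectation did not hold on this cluster.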
==== TEST LOGS ========
| I0124 20:22:29.666283 22 test_context.go:414] Using a temporary kubeconfig file from in-cluster config : /tmp/kubeconfig-418650170 | |
| I0124 20:22:29.666453 22 e2e.go:92] Starting e2e run "2cd45b71-13e6-44f9-b4dc-7270367c94d3" on Ginkgo node 1 | |
| Running Suite: Kubernetes e2e suite | |
| =================================== | |
| Random Seed: 1579897347 - Will randomize all specs | |
| Will run 23 of 4732 specs | |
| Jan 24 20:22:29.687: INFO: >>> kubeConfig: /tmp/kubeconfig-418650170 | |
| Jan 24 20:22:29.689: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable | |
| Jan 24 20:22:29.709: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready | |
| Jan 24 20:22:29.753: INFO: 27 / 27 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) | |
| Jan 24 20:22:29.753: INFO: expected 3 pod replicas in namespace 'kube-system', 3 are Running and Ready. | |
| Jan 24 20:22:29.753: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start | |
| Jan 24 20:22:29.764: INFO: 6 / 6 pods ready in namespace 'kube-system' in daemonset 'calico-node' (0 seconds elapsed) | |
| Jan 24 20:22:29.764: INFO: 6 / 6 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) | |
| Jan 24 20:22:29.764: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'vsphere-cloud-controller-manager' (0 seconds elapsed) | |
| Jan 24 20:22:29.764: INFO: 6 / 6 pods ready in namespace 'kube-system' in daemonset 'vsphere-csi-node' (0 seconds elapsed) | |
| Jan 24 20:22:29.764: INFO: e2e test version: v1.16.3 | |
| Jan 24 20:22:29.767: INFO: kube-apiserver version: v1.16.3 | |
| Jan 24 20:22:29.767: INFO: >>> kubeConfig: /tmp/kubeconfig-418650170 | |
| Jan 24 20:22:29.774: INFO: Cluster IP family: ipv4 | |
| SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS | |
| ------------------------------ | |
| [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client | |
| should allow ingress access from updated namespace [Feature:NetworkPolicy] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:774 | |
| [BeforeEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
| STEP: Creating a kubernetes client | |
| Jan 24 20:22:29.779: INFO: >>> kubeConfig: /tmp/kubeconfig-418650170 | |
| STEP: Building a namespace api object, basename network-policy | |
| Jan 24 20:22:29.831: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. | |
| STEP: Waiting for a default service account to be provisioned in namespace | |
| [BeforeEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:51 | |
| [BeforeEach] NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:57 | |
| STEP: Creating a simple server that serves on port 80 and 81. | |
| STEP: Creating a server pod server in namespace network-policy-8096 | |
| Jan 24 20:22:29.838: INFO: Created pod server-mth9x | |
| STEP: Creating a service svc-server for pod server in namespace network-policy-8096 | |
| Jan 24 20:22:29.858: INFO: Created service svc-server | |
| STEP: Waiting for pod ready | |
| STEP: Testing pods can connect to both ports when no policy is present. | |
| STEP: Creating client pod client-can-connect-80 that should successfully connect to svc-server. | |
| Jan 24 20:22:57.882: INFO: Waiting for client-can-connect-80-5x9dw to complete. | |
| Jan 24 20:23:03.895: INFO: Waiting for client-can-connect-80-5x9dw to complete. | |
| Jan 24 20:23:03.895: INFO: Waiting up to 5m0s for pod "client-can-connect-80-5x9dw" in namespace "network-policy-8096" to be "success or failure" | |
| Jan 24 20:23:03.899: INFO: Pod "client-can-connect-80-5x9dw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 3.727964ms | |
| STEP: Saw pod success | |
| Jan 24 20:23:03.899: INFO: Pod "client-can-connect-80-5x9dw" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-can-connect-80-5x9dw | |
| STEP: Creating client pod client-can-connect-81 that should successfully connect to svc-server. | |
| Jan 24 20:23:03.941: INFO: Waiting for client-can-connect-81-fc8jf to complete. | |
| Jan 24 20:23:07.950: INFO: Waiting for client-can-connect-81-fc8jf to complete. | |
| Jan 24 20:23:07.950: INFO: Waiting up to 5m0s for pod "client-can-connect-81-fc8jf" in namespace "network-policy-8096" to be "success or failure" | |
| Jan 24 20:23:07.953: INFO: Pod "client-can-connect-81-fc8jf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 3.236821ms | |
| STEP: Saw pod success | |
| Jan 24 20:23:07.953: INFO: Pod "client-can-connect-81-fc8jf" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-can-connect-81-fc8jf | |
| [It] should allow ingress access from updated namespace [Feature:NetworkPolicy] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:774 | |
| STEP: Creating a network policy for the server which allows traffic from namespace-b. | |
| STEP: Creating client pod client-a that should not be able to connect to svc-server. | |
| Jan 24 20:23:08.130: INFO: Waiting for client-a-qxx6s to complete. | |
| Jan 24 20:23:08.130: INFO: Waiting up to 5m0s for pod "client-a-qxx6s" in namespace "network-policy-b-6398" to be "success or failure" | |
| Jan 24 20:23:08.145: INFO: Pod "client-a-qxx6s": Phase="Pending", Reason="", readiness=false. Elapsed: 14.627198ms | |
| Jan 24 20:23:10.150: INFO: Pod "client-a-qxx6s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019887828s | |
| Jan 24 20:23:12.158: INFO: Pod "client-a-qxx6s": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028072792s | |
| Jan 24 20:23:14.163: INFO: Pod "client-a-qxx6s": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032539593s | |
| Jan 24 20:23:16.167: INFO: Pod "client-a-qxx6s": Phase="Pending", Reason="", readiness=false. Elapsed: 8.036805614s | |
| Jan 24 20:23:18.172: INFO: Pod "client-a-qxx6s": Phase="Pending", Reason="", readiness=false. Elapsed: 10.041585584s | |
| Jan 24 20:23:20.177: INFO: Pod "client-a-qxx6s": Phase="Pending", Reason="", readiness=false. Elapsed: 12.047063966s | |
| Jan 24 20:23:22.182: INFO: Pod "client-a-qxx6s": Phase="Pending", Reason="", readiness=false. Elapsed: 14.051331944s | |
| Jan 24 20:23:24.185: INFO: Pod "client-a-qxx6s": Phase="Running", Reason="", readiness=true. Elapsed: 16.05491116s | |
| Jan 24 20:23:26.189: INFO: Pod "client-a-qxx6s": Phase="Running", Reason="", readiness=true. Elapsed: 18.059160888s | |
| Jan 24 20:23:28.194: INFO: Pod "client-a-qxx6s": Phase="Running", Reason="", readiness=true. Elapsed: 20.063360055s | |
| Jan 24 20:23:30.201: INFO: Pod "client-a-qxx6s": Phase="Running", Reason="", readiness=true. Elapsed: 22.071168196s | |
| Jan 24 20:23:32.207: INFO: Pod "client-a-qxx6s": Phase="Running", Reason="", readiness=true. Elapsed: 24.076466249s | |
| Jan 24 20:23:34.213: INFO: Pod "client-a-qxx6s": Phase="Running", Reason="", readiness=true. Elapsed: 26.083236014s | |
| Jan 24 20:23:36.218: INFO: Pod "client-a-qxx6s": Phase="Running", Reason="", readiness=true. Elapsed: 28.088009078s | |
| Jan 24 20:23:38.222: INFO: Pod "client-a-qxx6s": Phase="Running", Reason="", readiness=true. Elapsed: 30.092185402s | |
| Jan 24 20:23:40.227: INFO: Pod "client-a-qxx6s": Phase="Running", Reason="", readiness=true. Elapsed: 32.096560159s | |
| Jan 24 20:23:42.231: INFO: Pod "client-a-qxx6s": Phase="Running", Reason="", readiness=true. Elapsed: 34.101085853s | |
| Jan 24 20:23:44.237: INFO: Pod "client-a-qxx6s": Phase="Running", Reason="", readiness=true. Elapsed: 36.107169484s | |
| Jan 24 20:23:46.243: INFO: Pod "client-a-qxx6s": Phase="Running", Reason="", readiness=true. Elapsed: 38.113072563s | |
| Jan 24 20:23:48.250: INFO: Pod "client-a-qxx6s": Phase="Running", Reason="", readiness=true. Elapsed: 40.119333967s | |
| Jan 24 20:23:50.273: INFO: Pod "client-a-qxx6s": Phase="Running", Reason="", readiness=true. Elapsed: 42.142610961s | |
| Jan 24 20:23:52.277: INFO: Pod "client-a-qxx6s": Phase="Running", Reason="", readiness=true. Elapsed: 44.146888222s | |
| Jan 24 20:23:54.283: INFO: Pod "client-a-qxx6s": Phase="Running", Reason="", readiness=true. Elapsed: 46.153257209s | |
| Jan 24 20:23:56.289: INFO: Pod "client-a-qxx6s": Phase="Running", Reason="", readiness=true. Elapsed: 48.158485664s | |
| Jan 24 20:23:58.294: INFO: Pod "client-a-qxx6s": Phase="Running", Reason="", readiness=true. Elapsed: 50.163337289s | |
| Jan 24 20:24:00.298: INFO: Pod "client-a-qxx6s": Phase="Running", Reason="", readiness=true. Elapsed: 52.167392832s | |
| Jan 24 20:24:02.302: INFO: Pod "client-a-qxx6s": Phase="Running", Reason="", readiness=true. Elapsed: 54.172071005s | |
| Jan 24 20:24:04.306: INFO: Pod "client-a-qxx6s": Phase="Running", Reason="", readiness=true. Elapsed: 56.175558855s | |
| Jan 24 20:24:06.310: INFO: Pod "client-a-qxx6s": Phase="Running", Reason="", readiness=true. Elapsed: 58.179822965s | |
| Jan 24 20:24:08.315: INFO: Pod "client-a-qxx6s": Phase="Running", Reason="", readiness=true. Elapsed: 1m0.18479162s | |
| Jan 24 20:24:10.319: INFO: Pod "client-a-qxx6s": Phase="Failed", Reason="", readiness=false. Elapsed: 1m2.188616378s | |
| STEP: Cleaning up the pod client-a-qxx6s | |
| STEP: Creating client pod client-b that should successfully connect to svc-server. | |
| Jan 24 20:24:10.375: INFO: Waiting for client-b-777ml to complete. | |
| Jan 24 20:24:58.392: INFO: Waiting for client-b-777ml to complete. | |
| Jan 24 20:24:58.392: INFO: Waiting up to 5m0s for pod "client-b-777ml" in namespace "network-policy-b-6398" to be "success or failure" | |
| Jan 24 20:24:58.397: INFO: Pod "client-b-777ml": Phase="Failed", Reason="", readiness=false. Elapsed: 4.445584ms | |
| Jan 24 20:24:58.401: FAIL: Error getting container logs: the server could not find the requested resource (get pods client-b-777ml) | |
| STEP: Cleaning up the pod client-b-777ml | |
| STEP: Cleaning up the policy. | |
| [AfterEach] NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:75 | |
| STEP: Cleaning up the server. | |
| STEP: Cleaning up the server's service. | |
| [AfterEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
| STEP: Collecting events from namespace "network-policy-8096". | |
| STEP: Found 19 events. | |
| Jan 24 20:24:58.539: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-80-5x9dw: {default-scheduler } Scheduled: Successfully assigned network-policy-8096/client-can-connect-80-5x9dw to workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:24:58.539: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-81-fc8jf: {default-scheduler } Scheduled: Successfully assigned network-policy-8096/client-can-connect-81-fc8jf to workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:24:58.539: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for server-mth9x: {default-scheduler } Scheduled: Successfully assigned network-policy-8096/server-mth9x to workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:24:58.539: INFO: At 2020-01-24 20:22:30 +0000 UTC - event for server-mth9x: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} FailedCreatePodSandBox: Failed create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "4ecdebaa3cefeb494826cada0491e3aebe6d967693f7537054dcab94c26d254b": error adding host side routes for interface: cali0de1653f4ff, error: route (Ifindex: 50, Dst: 192.168.225.129/32, Scope: 253) already exists for an interface other than 'cali0de1653f4ff' | |
| Jan 24 20:24:58.539: INFO: At 2020-01-24 20:22:46 +0000 UTC - event for server-mth9x: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Started: Started container server-container-80 | |
| Jan 24 20:24:58.539: INFO: At 2020-01-24 20:22:46 +0000 UTC - event for server-mth9x: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.6" already present on machine | |
| Jan 24 20:24:58.539: INFO: At 2020-01-24 20:22:46 +0000 UTC - event for server-mth9x: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Created: Created container server-container-80 | |
| Jan 24 20:24:58.539: INFO: At 2020-01-24 20:22:46 +0000 UTC - event for server-mth9x: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Created: Created container server-container-81 | |
| Jan 24 20:24:58.541: INFO: At 2020-01-24 20:22:46 +0000 UTC - event for server-mth9x: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.6" already present on machine | |
| Jan 24 20:24:58.542: INFO: At 2020-01-24 20:22:47 +0000 UTC - event for server-mth9x: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Started: Started container server-container-81 | |
| Jan 24 20:24:58.542: INFO: At 2020-01-24 20:22:58 +0000 UTC - event for client-can-connect-80-5x9dw: {kubelet workload-cluster-4-md-0-5c7f78dbc8-dj56d} Pulling: Pulling image "docker.io/library/busybox:1.29" | |
| Jan 24 20:24:58.543: INFO: At 2020-01-24 20:23:01 +0000 UTC - event for client-can-connect-80-5x9dw: {kubelet workload-cluster-4-md-0-5c7f78dbc8-dj56d} Started: Started container client-can-connect-80-container | |
| Jan 24 20:24:58.543: INFO: At 2020-01-24 20:23:01 +0000 UTC - event for client-can-connect-80-5x9dw: {kubelet workload-cluster-4-md-0-5c7f78dbc8-dj56d} Pulled: Successfully pulled image "docker.io/library/busybox:1.29" | |
| Jan 24 20:24:58.543: INFO: At 2020-01-24 20:23:01 +0000 UTC - event for client-can-connect-80-5x9dw: {kubelet workload-cluster-4-md-0-5c7f78dbc8-dj56d} Created: Created container client-can-connect-80-container | |
| Jan 24 20:24:58.543: INFO: At 2020-01-24 20:23:04 +0000 UTC - event for client-can-connect-81-fc8jf: {kubelet workload-cluster-4-md-0-5c7f78dbc8-rnz88} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine | |
| Jan 24 20:24:58.543: INFO: At 2020-01-24 20:23:05 +0000 UTC - event for client-can-connect-81-fc8jf: {kubelet workload-cluster-4-md-0-5c7f78dbc8-rnz88} Started: Started container client-can-connect-81-container | |
| Jan 24 20:24:58.544: INFO: At 2020-01-24 20:23:05 +0000 UTC - event for client-can-connect-81-fc8jf: {kubelet workload-cluster-4-md-0-5c7f78dbc8-rnz88} Created: Created container client-can-connect-81-container | |
| Jan 24 20:24:58.545: INFO: At 2020-01-24 20:24:58 +0000 UTC - event for server-mth9x: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Killing: Stopping container server-container-80 | |
| Jan 24 20:24:58.546: INFO: At 2020-01-24 20:24:58 +0000 UTC - event for server-mth9x: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Killing: Stopping container server-container-81 | |
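The FailedCreatePodSandBox event at 20:22:30 above is the most telling line in this run: Calico refused to add the host-side route for the sandbox because a route for 192.168.225.129/32 was already bound to an interface other than cali0de1653f4ff. A minimal sketch of how one might confirm a stale route from the affected node (address and interface taken from the event; the node name is workload-cluster-4-md-0-5c7f78dbc8-tgjll):

```
# Run on the node named in the event (workload-cluster-4-md-0-5c7f78dbc8-tgjll).
ip route show | grep 192.168.225.129   # which device currently owns the /32 route?
ip link show | grep cali               # which Calico veth interfaces still exist?
# If the route points at a cali* device with no matching veth, it is stale;
# deleting it (ip route del 192.168.225.129/32) lets the CNI re-add it correctly.
```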
| Jan 24 20:24:58.560: INFO: POD NODE PHASE GRACE CONDITIONS | |
| Jan 24 20:24:58.561: INFO: server-mth9x workload-cluster-4-md-0-5c7f78dbc8-tgjll Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 20:22:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 20:22:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 20:22:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 20:22:29 +0000 UTC }] | |
| Jan 24 20:24:58.562: INFO: | |
| Jan 24 20:24:58.575: INFO: | |
| Logging node info for node workload-cluster-4-controlplane-0 | |
| Jan 24 20:24:58.589: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-controlplane-0 /api/v1/nodes/workload-cluster-4-controlplane-0 82506459-dd0f-48e7-bd9d-75efde1111f6 34769 0 2020-01-24 17:18:10 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-controlplane-0 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-controlplane-0"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.28.186/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.210.0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://4204e6d6-9184-b271-f650-0dabd9f75516,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229202944 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124345344 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:50 +0000 UTC,LastTransitionTime:2020-01-24 20:20:50 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:24:04 +0000 UTC,LastTransitionTime:2020-01-24 17:18:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:24:04 +0000 UTC,LastTransitionTime:2020-01-24 17:18:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:24:04 +0000 UTC,LastTransitionTime:2020-01-24 17:18:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:24:04 +0000 UTC,LastTransitionTime:2020-01-24 17:44:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-controlplane-0,},NodeAddress{Type:ExternalIP,Address:10.193.28.186,},NodeAddress{Type:InternalIP,Address:10.193.28.186,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:80b9ac3a81a84f4e8f29dac5e41ffd1f,SystemUUID:D6E60442-8491-71B2-F650-0DABD9F75516,BootID:0411b150-3c1b-4216-a7bc-d4a359e6753d,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/syncer@sha256:fc80ec77a2ab4b58ddfa259a938f6d741933566011d56e5ffcc8680cc83538fe gcr.io/cloud-provider-vsphere/csi/release/syncer:v1.0.1],SizeBytes:38454608,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner:v1.2.1],SizeBytes:18924297,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/cpi/release/manager@sha256:64de5c7f10e55703142383fade40886091528ca505f00c98d57e27f10f04fc03 
gcr.io/cloud-provider-vsphere/cpi/release/manager:v1.1.0],SizeBytes:16201394,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher:v1.1.1],SizeBytes:15526843,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:24:58.590: INFO: | |
| Logging kubelet events for node workload-cluster-4-controlplane-0 | |
| Jan 24 20:24:58.602: INFO: | |
| Logging pods the kubelet thinks is on node workload-cluster-4-controlplane-0 | |
| Jan 24 20:24:58.632: INFO: kube-proxy-ng9fc started at 2020-01-24 17:19:39 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:24:58.632: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:24:58.632: INFO: vsphere-csi-node-bvs7m started at 2020-01-24 17:44:44 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:24:58.632: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:24:58.632: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:24:58.632: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:24:58.632: INFO: calico-node-dtwvg started at 2020-01-24 20:19:59 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:24:58.632: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:24:58.632: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:24:58.634: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:24:58.634: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:24:58.634: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-t9xfs started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:24:58.634: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:24:58.635: INFO: Container systemd-logs ready: true, restart count 0 | |
| Jan 24 20:24:58.635: INFO: etcd-workload-cluster-4-controlplane-0 started at 2020-01-24 17:17:56 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:24:58.636: INFO: Container etcd ready: true, restart count 0 | |
| Jan 24 20:24:58.636: INFO: kube-apiserver-workload-cluster-4-controlplane-0 started at 2020-01-24 17:17:56 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:24:58.636: INFO: Container kube-apiserver ready: true, restart count 0 | |
| Jan 24 20:24:58.637: INFO: kube-scheduler-workload-cluster-4-controlplane-0 started at 2020-01-24 17:17:56 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:24:58.637: INFO: Container kube-scheduler ready: true, restart count 1 | |
| Jan 24 20:24:58.637: INFO: vsphere-csi-controller-0 started at 2020-01-24 17:44:46 +0000 UTC (0+5 container statuses recorded) | |
| Jan 24 20:24:58.637: INFO: Container csi-attacher ready: true, restart count 0 | |
| Jan 24 20:24:58.637: INFO: Container csi-provisioner ready: true, restart count 0 | |
| Jan 24 20:24:58.638: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:24:58.638: INFO: Container vsphere-csi-controller ready: true, restart count 0 | |
| Jan 24 20:24:58.638: INFO: Container vsphere-syncer ready: true, restart count 0 | |
| Jan 24 20:24:58.638: INFO: kube-controller-manager-workload-cluster-4-controlplane-0 started at 2020-01-24 17:17:56 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:24:58.638: INFO: Container kube-controller-manager ready: true, restart count 2 | |
| Jan 24 20:24:58.638: INFO: vsphere-cloud-controller-manager-xgbwj started at 2020-01-24 17:19:42 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:24:58.638: INFO: Container vsphere-cloud-controller-manager ready: true, restart count 0 | |
| W0124 20:24:58.647294 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:24:58.812: INFO: | |
| Latency metrics for node workload-cluster-4-controlplane-0 | |
| Jan 24 20:24:58.812: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:24:58.816: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-dj56d /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-dj56d e3b9736e-3d53-4225-9f32-9b5ed43a351d 34823 0 2020-01-24 17:31:41 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-dj56d kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-dj56d"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.22.156/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.222.0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://42045096-e00b-ae55-1d85-007f6b8efaee,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229211136 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124353536 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:50 +0000 UTC,LastTransitionTime:2020-01-24 20:20:50 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:24:20 +0000 UTC,LastTransitionTime:2020-01-24 17:31:41 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:24:20 +0000 UTC,LastTransitionTime:2020-01-24 17:31:41 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:24:20 +0000 UTC,LastTransitionTime:2020-01-24 17:31:41 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:24:20 +0000 UTC,LastTransitionTime:2020-01-24 17:44:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-dj56d,},NodeAddress{Type:ExternalIP,Address:10.193.22.156,},NodeAddress{Type:InternalIP,Address:10.193.22.156,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b52db3bea2844f019e98985de63db412,SystemUUID:96500442-0BE0-55AE-1D85-007F6B8EFAEE,BootID:4ec1b10c-fc30-41e5-9d16-834bd0f5cf13,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[docker.io/jayunit100/kube-controllers@sha256:80d3bff453f163c62d0dadf44d5056053144eceaf0d05fe8c90b861aa4d5d602 docker.io/jayunit100/kube-controllers:tkg2],SizeBytes:21160877,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 
k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:24:58.817: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:24:58.823: INFO: | |
| Logging pods the kubelet thinks is on node workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:24:58.851: INFO: vsphere-csi-node-tkb9r started at 2020-01-24 17:44:43 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:24:58.851: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:24:58.851: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:24:58.851: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:24:58.851: INFO: coredns-5644d7b6d9-m4lts started at 2020-01-24 17:44:44 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:24:58.851: INFO: Container coredns ready: true, restart count 0 | |
| Jan 24 20:24:58.851: INFO: coredns-5644d7b6d9-ph9m8 started at 2020-01-24 17:44:44 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:24:58.851: INFO: Container coredns ready: true, restart count 0 | |
| Jan 24 20:24:58.851: INFO: kube-proxy-9cjbv started at 2020-01-24 17:31:41 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:24:58.851: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:24:58.851: INFO: calico-node-8t722 started at 2020-01-24 20:19:58 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:24:58.851: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:24:58.851: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:24:58.851: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:24:58.851: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:24:58.851: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-25557 started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:24:58.851: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:24:58.851: INFO: Container systemd-logs ready: true, restart count 0 | |
| W0124 20:24:58.858817 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:24:58.965: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:24:58.965: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-l8q2f | |
| Jan 24 20:24:58.972: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-l8q2f /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-l8q2f 3be5c565-a115-4d45-a1be-f70db9c1b166 34892 0 2020-01-24 17:31:43 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-l8q2f kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-l8q2f"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.1.57/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.184.192 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://4204aa91-067b-70ca-7b40-e384413568c2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229202944 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124345344 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:23 +0000 UTC,LastTransitionTime:2020-01-24 20:20:23 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:24:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:24:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:24:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:24:50 +0000 UTC,LastTransitionTime:2020-01-24 17:44:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-l8q2f,},NodeAddress{Type:ExternalIP,Address:10.193.1.57,},NodeAddress{Type:InternalIP,Address:10.193.1.57,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:45e1a30e9fcd470795575df43a4210d5,SystemUUID:91AA0442-7B06-CA70-7B40-E384413568C2,BootID:3aae672c-d64c-4549-8c6f-2c6720e8ef80,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:0ae49fdc22b3d81e85b98ea4587628bb78f49eaf5c14cdb47a13d8918d62cc00 gcr.io/google-containers/conformance:v1.16.3],SizeBytes:195864937,},ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 
gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:24:58.973: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-l8q2f | |
| Jan 24 20:24:58.989: INFO: | |
| Logging pods the kubelet thinks are on node workload-cluster-4-md-0-5c7f78dbc8-l8q2f | |
| Jan 24 20:24:59.012: INFO: kube-proxy-pw7c7 started at 2020-01-24 17:31:43 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:24:59.012: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:24:59.012: INFO: calico-node-rqffr started at 2020-01-24 20:19:59 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:24:59.012: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:24:59.012: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:24:59.012: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:24:59.012: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:24:59.012: INFO: vsphere-csi-node-bml6x started at 2020-01-24 17:44:44 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:24:59.012: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:24:59.012: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:24:59.012: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:24:59.012: INFO: sonobuoy-e2e-job-60496bb95c8b4e15 started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:24:59.012: INFO: Container e2e ready: true, restart count 0 | |
| Jan 24 20:24:59.012: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:24:59.012: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-pvvzm started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:24:59.012: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:24:59.012: INFO: Container systemd-logs ready: true, restart count 0 | |
| W0124 20:24:59.026569 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:24:59.150: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-l8q2f | |
| Jan 24 20:24:59.150: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:24:59.155: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-rnz88 /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-rnz88 6a50990d-c3aa-408e-89a9-b96a59028428 34809 0 2020-01-24 17:31:34 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-rnz88 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-rnz88"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.19.181/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.158.1 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://42041b55-5b6a-2e5f-e33f-ad3ffa7f7b6e,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229211136 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124353536 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:37 +0000 UTC,LastTransitionTime:2020-01-24 20:20:37 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:24:13 +0000 UTC,LastTransitionTime:2020-01-24 17:31:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:24:13 +0000 UTC,LastTransitionTime:2020-01-24 17:31:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:24:13 +0000 UTC,LastTransitionTime:2020-01-24 17:31:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:24:13 +0000 UTC,LastTransitionTime:2020-01-24 17:44:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-rnz88,},NodeAddress{Type:ExternalIP,Address:10.193.19.181,},NodeAddress{Type:InternalIP,Address:10.193.19.181,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:89e27edae47c4afabb7c067d037286ec,SystemUUID:551B0442-6A5B-5F2E-E33F-AD3FFA7F7B6E,BootID:d2a8e20a-f6ad-4b68-b732-e0d7bdbcc492,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:0ae49fdc22b3d81e85b98ea4587628bb78f49eaf5c14cdb47a13d8918d62cc00 gcr.io/google-containers/conformance:v1.16.3],SizeBytes:195864937,},ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:4a7190a4731cf3340c39ab33abf845baf6c81d911c965a879fffc1552e1a1938 
docker.io/calico/kube-controllers:v3.10.3],SizeBytes:21175514,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:24:59.156: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:24:59.162: INFO: | |
| Logging pods the kubelet thinks are on node workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:24:59.193: INFO: vsphere-csi-node-6nzwf started at 2020-01-24 17:44:44 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:24:59.193: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:24:59.193: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:24:59.193: INFO: Container vsphere-csi-node ready: false, restart count 5 | |
| Jan 24 20:24:59.193: INFO: calico-kube-controllers-7489ff5b7c-sq5gk started at 2020-01-24 20:20:00 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:24:59.193: INFO: Container calico-kube-controllers ready: true, restart count 0 | |
| Jan 24 20:24:59.193: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-zjw2s started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:24:59.193: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:24:59.193: INFO: Container systemd-logs ready: true, restart count 0 | |
| Jan 24 20:24:59.193: INFO: kube-proxy-hwjc4 started at 2020-01-24 17:31:34 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:24:59.193: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:24:59.193: INFO: calico-node-5g8mr started at 2020-01-24 20:19:58 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:24:59.193: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:24:59.193: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:24:59.193: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:24:59.193: INFO: Container calico-node ready: true, restart count 0 | |
| W0124 20:24:59.200220 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:24:59.298: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
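Editor's note: the pod list above contains the one unhealthy container in this whole dump: vsphere-csi-node on workload-cluster-4-md-0-5c7f78dbc8-rnz88 is reporting ready: false with a restart count of 5, while every other container in the run is healthy. A minimal triage sketch, assuming the driver runs as the vsphere-csi-node DaemonSet in kube-system (the namespace is not recorded in this log, so adjust if the deployment differs):

    # Confirm the restart count and current state of the DaemonSet pod
    kubectl get pod vsphere-csi-node-6nzwf -n kube-system -o wide
    # Pull logs from the failing container, including the previous crashed attempt
    kubectl logs vsphere-csi-node-6nzwf -n kube-system -c vsphere-csi-node --previous
    # Look for probe failures, OOM kills, or image errors in the events
    kubectl describe pod vsphere-csi-node-6nzwf -n kube-system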
| Jan 24 20:24:59.298: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:24:59.303: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-tgjll /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-tgjll efb8ee46-7fce-4349-92f2-46fa03f02d30 34890 0 2020-01-24 17:31:37 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-tgjll kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-tgjll"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.25.9/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.225.128 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://4204f234-cdde-565a-8623-6c3c5457e3cc,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229202944 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124345344 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:26 +0000 UTC,LastTransitionTime:2020-01-24 20:20:26 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:24:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:24:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:24:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:24:50 +0000 UTC,LastTransitionTime:2020-01-24 17:44:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-tgjll,},NodeAddress{Type:ExternalIP,Address:10.193.25.9,},NodeAddress{Type:InternalIP,Address:10.193.25.9,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bb2c77d8141443fd8503de52dbae52e8,SystemUUID:34F20442-DECD-5A56-8623-6C3C5457E3CC,BootID:849ca1bb-d08e-417e-bf12-485a70ffc44a,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:0ae49fdc22b3d81e85b98ea4587628bb78f49eaf5c14cdb47a13d8918d62cc00 gcr.io/google-containers/conformance:v1.16.3],SizeBytes:195864937,},ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 
gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:24:59.303: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:24:59.311: INFO: | |
| Logging pods the kubelet thinks are on node workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:24:59.339: INFO: calico-node-wvctw started at 2020-01-24 20:19:58 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:24:59.339: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:24:59.339: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:24:59.339: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:24:59.339: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:24:59.339: INFO: vsphere-csi-node-rqv42 started at 2020-01-24 17:44:49 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:24:59.339: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:24:59.339: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:24:59.339: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:24:59.339: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-prfkd started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:24:59.339: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:24:59.339: INFO: Container systemd-logs ready: true, restart count 0 | |
| Jan 24 20:24:59.339: INFO: server-mth9x started at 2020-01-24 20:22:29 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:24:59.339: INFO: Container server-container-80 ready: true, restart count 0 | |
| Jan 24 20:24:59.339: INFO: Container server-container-81 ready: true, restart count 0 | |
| Jan 24 20:24:59.339: INFO: kube-proxy-wp7tn started at 2020-01-24 17:31:37 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:24:59.339: INFO: Container kube-proxy ready: true, restart count 0 | |
| W0124 20:24:59.345619 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:24:59.437: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:24:59.438: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:24:59.442: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-vvrzt /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-vvrzt 200933f4-8d1c-469c-b168-43a6ccbf5d04 34891 0 2020-01-24 17:31:43 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-vvrzt kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-vvrzt"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.7.86/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.163.64 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.5.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://42048f3c-cde1-2850-a851-6c4d757659cc,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.5.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229202944 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124345344 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:23 +0000 UTC,LastTransitionTime:2020-01-24 20:20:23 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:24:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:24:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:24:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:24:50 +0000 UTC,LastTransitionTime:2020-01-24 17:44:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-vvrzt,},NodeAddress{Type:ExternalIP,Address:10.193.7.86,},NodeAddress{Type:InternalIP,Address:10.193.7.86,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5c2ea802c8014c7aa88cb9bbbea1711b,SystemUUID:3C8F0442-E1CD-5028-A851-6C4D757659CC,BootID:9e70f010-f55d-4f1e-9e44-645f0afe4c09,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 
k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:24:59.443: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:24:59.451: INFO: | |
| Logging pods the kubelet thinks are on node workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:24:59.470: INFO: vsphere-csi-node-ttr82 started at 2020-01-24 17:44:44 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:24:59.470: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:24:59.470: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:24:59.470: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:24:59.470: INFO: sonobuoy started at 2020-01-24 20:21:31 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:24:59.470: INFO: Container kube-sonobuoy ready: true, restart count 0 | |
| Jan 24 20:24:59.470: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-9jn4t started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:24:59.470: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:24:59.470: INFO: Container systemd-logs ready: true, restart count 0 | |
| Jan 24 20:24:59.470: INFO: kube-proxy-5jvhn started at 2020-01-24 17:31:43 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:24:59.470: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:24:59.470: INFO: calico-node-zpwz4 started at 2020-01-24 20:19:59 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:24:59.470: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:24:59.470: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:24:59.470: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:24:59.470: INFO: Container calico-node ready: true, restart count 0 | |
| W0124 20:24:59.476589 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:24:59.580: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:24:59.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
| STEP: Destroying namespace "network-policy-8096" for this suite. | |
| Jan 24 20:25:11.609: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered | |
| Jan 24 20:25:11.854: INFO: namespace network-policy-8096 deletion completed in 12.267516418s | |
| STEP: Destroying namespace "network-policy-b-6398" for this suite. | |
| Jan 24 20:25:17.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered | |
| Jan 24 20:25:17.996: INFO: namespace network-policy-b-6398 deletion completed in 6.141236037s | |
| • Failure [168.218 seconds] | |
| [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 | |
| NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:56 | |
| should allow ingress access from updated namespace [Feature:NetworkPolicy] [It] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:774 | |
| Jan 24 20:24:58.401: Error getting container logs: the server could not find the requested resource (get pods client-b-777ml) | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:1421 | |
| ------------------------------ | |
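Editor's note: the failure above ("Error getting container logs: the server could not find the requested resource") is a log-collection race, not a policy-enforcement verdict: by the time the framework asked for client-b-777ml's logs, the pod object was already gone, most likely because its namespace was being torn down just above. A hedged way to capture such logs on a re-run, assuming client-b-777ml ran in the network-policy-b-6398 namespace destroyed earlier (the log does not state this explicitly):

    # Watch for the client pod and grab its logs before suite cleanup deletes it
    kubectl get pods -n network-policy-b-6398 -w &
    kubectl logs client-b-777ml -n network-policy-b-6398 --follow
    # If the container already exited once, request the previous attempt's logs
    kubectl logs client-b-777ml -n network-policy-b-6398 --previous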
| SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS | |
| ------------------------------ | |
| [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client | |
| should allow ingress access from namespace on one named port [Feature:NetworkPolicy] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:599 | |
| [BeforeEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
| STEP: Creating a kubernetes client | |
| Jan 24 20:25:18.002: INFO: >>> kubeConfig: /tmp/kubeconfig-418650170 | |
| STEP: Building a namespace api object, basename network-policy | |
| STEP: Waiting for a default service account to be provisioned in namespace | |
| [BeforeEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:51 | |
| [BeforeEach] NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:57 | |
| STEP: Creating a simple server that serves on ports 80 and 81. | |
| STEP: Creating a server pod server in namespace network-policy-2084 | |
| Jan 24 20:25:18.138: INFO: Created pod server-np45l | |
| STEP: Creating a service svc-server for pod server in namespace network-policy-2084 | |
| Jan 24 20:25:18.172: INFO: Created service svc-server | |
| STEP: Waiting for pod ready | |
| STEP: Testing pods can connect to both ports when no policy is present. | |
| STEP: Creating client pod client-can-connect-80 that should successfully connect to svc-server. | |
| Jan 24 20:25:28.203: INFO: Waiting for client-can-connect-80-qdz2t to complete. | |
| Jan 24 20:25:32.211: INFO: Waiting for client-can-connect-80-qdz2t to complete. | |
| Jan 24 20:25:32.211: INFO: Waiting up to 5m0s for pod "client-can-connect-80-qdz2t" in namespace "network-policy-2084" to be "success or failure" | |
| Jan 24 20:25:32.215: INFO: Pod "client-can-connect-80-qdz2t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.410305ms | |
| STEP: Saw pod success | |
| Jan 24 20:25:32.215: INFO: Pod "client-can-connect-80-qdz2t" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-can-connect-80-qdz2t | |
| STEP: Creating client pod client-can-connect-81 that should successfully connect to svc-server. | |
| Jan 24 20:25:32.239: INFO: Waiting for client-can-connect-81-954hj to complete. | |
| Jan 24 20:26:02.262: INFO: Waiting for client-can-connect-81-954hj to complete. | |
| Jan 24 20:26:02.262: INFO: Waiting up to 5m0s for pod "client-can-connect-81-954hj" in namespace "network-policy-2084" to be "success or failure" | |
| Jan 24 20:26:02.266: INFO: Pod "client-can-connect-81-954hj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 3.966538ms | |
| STEP: Saw pod success | |
| Jan 24 20:26:02.266: INFO: Pod "client-can-connect-81-954hj" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-can-connect-81-954hj | |
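Editor's note: the steps above are the suite's pre-policy sanity check: an agnhost server pod listening on ports 80 and 81 behind the svc-server Service, probed by short-lived busybox clients; only after both probes succeed does the [It] block below apply an actual policy. A rough manual equivalent, assuming the namespace and Service from this run still exist (both are deleted at the end of the test):

    # One-shot busybox probes against each port of svc-server, mirroring the e2e clients
    kubectl run probe-80 -n network-policy-2084 --image=docker.io/library/busybox:1.29 \
      --rm -i --restart=Never -- wget -q -O - -T 2 svc-server:80
    kubectl run probe-81 -n network-policy-2084 --image=docker.io/library/busybox:1.29 \
      --rm -i --restart=Never -- wget -q -O - -T 2 svc-server:81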
| [It] should allow ingress access from namespace on one named port [Feature:NetworkPolicy] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:599 | |
| STEP: Creating client pod client-a that should not be able to connect to svc-server. | |
| Jan 24 20:26:02.344: INFO: Waiting for client-a-4hkkh to complete. | |
| Jan 24 20:26:02.344: INFO: Waiting up to 5m0s for pod "client-a-4hkkh" in namespace "network-policy-2084" to be "success or failure" | |
| Jan 24 20:26:02.358: INFO: Pod "client-a-4hkkh": Phase="Pending", Reason="", readiness=false. Elapsed: 13.578646ms | |
| Jan 24 20:26:04.365: INFO: Pod "client-a-4hkkh": Phase="Running", Reason="", readiness=true. Elapsed: 2.020417109s | |
| Jan 24 20:26:06.369: INFO: Pod "client-a-4hkkh": Phase="Running", Reason="", readiness=true. Elapsed: 4.02455972s | |
| Jan 24 20:26:08.374: INFO: Pod "client-a-4hkkh": Phase="Running", Reason="", readiness=true. Elapsed: 6.0296679s | |
| Jan 24 20:26:10.378: INFO: Pod "client-a-4hkkh": Phase="Running", Reason="", readiness=true. Elapsed: 8.033922721s | |
| Jan 24 20:26:12.383: INFO: Pod "client-a-4hkkh": Phase="Running", Reason="", readiness=true. Elapsed: 10.038306931s | |
| Jan 24 20:26:14.388: INFO: Pod "client-a-4hkkh": Phase="Running", Reason="", readiness=true. Elapsed: 12.043238676s | |
| Jan 24 20:26:16.392: INFO: Pod "client-a-4hkkh": Phase="Running", Reason="", readiness=true. Elapsed: 14.047655343s | |
| Jan 24 20:26:18.396: INFO: Pod "client-a-4hkkh": Phase="Running", Reason="", readiness=true. Elapsed: 16.051730033s | |
| Jan 24 20:26:20.401: INFO: Pod "client-a-4hkkh": Phase="Running", Reason="", readiness=true. Elapsed: 18.0563494s | |
| Jan 24 20:26:22.406: INFO: Pod "client-a-4hkkh": Phase="Running", Reason="", readiness=true. Elapsed: 20.061561387s | |
| Jan 24 20:26:24.410: INFO: Pod "client-a-4hkkh": Phase="Running", Reason="", readiness=true. Elapsed: 22.065296304s | |
| Jan 24 20:26:26.414: INFO: Pod "client-a-4hkkh": Phase="Running", Reason="", readiness=true. Elapsed: 24.069464519s | |
| Jan 24 20:26:28.417: INFO: Pod "client-a-4hkkh": Phase="Running", Reason="", readiness=true. Elapsed: 26.072985761s | |
| Jan 24 20:26:30.424: INFO: Pod "client-a-4hkkh": Phase="Running", Reason="", readiness=true. Elapsed: 28.079242878s | |
| Jan 24 20:26:32.436: INFO: Pod "client-a-4hkkh": Phase="Running", Reason="", readiness=true. Elapsed: 30.092151586s | |
| Jan 24 20:26:34.441: INFO: Pod "client-a-4hkkh": Phase="Running", Reason="", readiness=true. Elapsed: 32.097147695s | |
| Jan 24 20:26:36.446: INFO: Pod "client-a-4hkkh": Phase="Running", Reason="", readiness=true. Elapsed: 34.101520285s | |
| Jan 24 20:26:38.451: INFO: Pod "client-a-4hkkh": Phase="Running", Reason="", readiness=true. Elapsed: 36.106536323s | |
| Jan 24 20:26:40.455: INFO: Pod "client-a-4hkkh": Phase="Running", Reason="", readiness=true. Elapsed: 38.110455539s | |
| Jan 24 20:26:42.459: INFO: Pod "client-a-4hkkh": Phase="Running", Reason="", readiness=true. Elapsed: 40.114694289s | |
| Jan 24 20:26:44.463: INFO: Pod "client-a-4hkkh": Phase="Running", Reason="", readiness=true. Elapsed: 42.118170659s | |
| Jan 24 20:26:46.466: INFO: Pod "client-a-4hkkh": Phase="Running", Reason="", readiness=true. Elapsed: 44.122126173s | |
| Jan 24 20:26:48.473: INFO: Pod "client-a-4hkkh": Phase="Running", Reason="", readiness=true. Elapsed: 46.12837969s | |
| Jan 24 20:26:50.478: INFO: Pod "client-a-4hkkh": Phase="Failed", Reason="", readiness=false. Elapsed: 48.134038407s | |
| STEP: Cleaning up the pod client-a-4hkkh | |
| STEP: Creating client pod client-b that should successfully connect to svc-server. | |
| Jan 24 20:26:50.511: INFO: Waiting for client-b-9tm6g to complete. | |
| Jan 24 20:27:38.525: INFO: Waiting for client-b-9tm6g to complete. | |
| Jan 24 20:27:38.525: INFO: Waiting up to 5m0s for pod "client-b-9tm6g" in namespace "network-policy-b-3540" to be "success or failure" | |
| Jan 24 20:27:38.530: INFO: Pod "client-b-9tm6g": Phase="Failed", Reason="", readiness=false. Elapsed: 5.005299ms | |
| Jan 24 20:27:38.534: FAIL: Error getting container logs: the server could not find the requested resource (get pods client-b-9tm6g) | |
| STEP: Cleaning up the pod client-b-9tm6g | |
| STEP: Cleaning up the policy. | |
| [AfterEach] NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:75 | |
| STEP: Cleaning up the server. | |
| STEP: Cleaning up the server's service. | |
| [AfterEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
| STEP: Collecting events from namespace "network-policy-2084". | |
| STEP: Found 23 events. | |
| Jan 24 20:27:38.655: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-a-4hkkh: {default-scheduler } Scheduled: Successfully assigned network-policy-2084/client-a-4hkkh to workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:27:38.655: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-80-qdz2t: {default-scheduler } Scheduled: Successfully assigned network-policy-2084/client-can-connect-80-qdz2t to workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:27:38.657: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-81-954hj: {default-scheduler } Scheduled: Successfully assigned network-policy-2084/client-can-connect-81-954hj to workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:27:38.657: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for server-np45l: {default-scheduler } Scheduled: Successfully assigned network-policy-2084/server-np45l to workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:27:38.657: INFO: At 2020-01-24 20:25:19 +0000 UTC - event for server-np45l: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.6" already present on machine | |
| Jan 24 20:27:38.657: INFO: At 2020-01-24 20:25:19 +0000 UTC - event for server-np45l: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Created: Created container server-container-80 | |
| Jan 24 20:27:38.657: INFO: At 2020-01-24 20:25:19 +0000 UTC - event for server-np45l: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Started: Started container server-container-80 | |
| Jan 24 20:27:38.657: INFO: At 2020-01-24 20:25:19 +0000 UTC - event for server-np45l: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Started: Started container server-container-81 | |
| Jan 24 20:27:38.657: INFO: At 2020-01-24 20:25:19 +0000 UTC - event for server-np45l: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Created: Created container server-container-81 | |
| Jan 24 20:27:38.657: INFO: At 2020-01-24 20:25:19 +0000 UTC - event for server-np45l: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.6" already present on machine | |
| Jan 24 20:27:38.657: INFO: At 2020-01-24 20:25:30 +0000 UTC - event for client-can-connect-80-qdz2t: {kubelet workload-cluster-4-md-0-5c7f78dbc8-vvrzt} Created: Created container client-can-connect-80-container | |
| Jan 24 20:27:38.657: INFO: At 2020-01-24 20:25:30 +0000 UTC - event for client-can-connect-80-qdz2t: {kubelet workload-cluster-4-md-0-5c7f78dbc8-vvrzt} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine | |
| Jan 24 20:27:38.657: INFO: At 2020-01-24 20:25:30 +0000 UTC - event for client-can-connect-80-qdz2t: {kubelet workload-cluster-4-md-0-5c7f78dbc8-vvrzt} Started: Started container client-can-connect-80-container | |
| Jan 24 20:27:38.657: INFO: At 2020-01-24 20:25:33 +0000 UTC - event for client-can-connect-81-954hj: {kubelet workload-cluster-4-md-0-5c7f78dbc8-dj56d} FailedCreatePodSandBox: Failed create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "ca1db5235f85e557f87f7db51efa7a226087247a104da05b06ffdf00ab2c9864": error adding host side routes for interface: cali14f31f84762, error: route (Ifindex: 3, Dst: 192.168.222.4/32, Scope: 253) already exists for an interface other than 'cali14f31f84762' | |
| Jan 24 20:27:38.657: INFO: At 2020-01-24 20:25:45 +0000 UTC - event for client-can-connect-81-954hj: {kubelet workload-cluster-4-md-0-5c7f78dbc8-dj56d} FailedCreatePodSandBox: Failed create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "7d467d7ed43ea7b252fff897703d7dd25234a7db8fa12646a803a8423de6bc96": error adding host side routes for interface: cali14f31f84762, error: route (Ifindex: 3, Dst: 192.168.222.5/32, Scope: 253) already exists for an interface other than 'cali14f31f84762' | |
| Jan 24 20:27:38.657: INFO: At 2020-01-24 20:25:59 +0000 UTC - event for client-can-connect-81-954hj: {kubelet workload-cluster-4-md-0-5c7f78dbc8-dj56d} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine | |
| Jan 24 20:27:38.657: INFO: At 2020-01-24 20:26:00 +0000 UTC - event for client-can-connect-81-954hj: {kubelet workload-cluster-4-md-0-5c7f78dbc8-dj56d} Started: Started container client-can-connect-81-container | |
| Jan 24 20:27:38.657: INFO: At 2020-01-24 20:26:00 +0000 UTC - event for client-can-connect-81-954hj: {kubelet workload-cluster-4-md-0-5c7f78dbc8-dj56d} Created: Created container client-can-connect-81-container | |
| Jan 24 20:27:38.657: INFO: At 2020-01-24 20:26:03 +0000 UTC - event for client-a-4hkkh: {kubelet workload-cluster-4-md-0-5c7f78dbc8-dj56d} Started: Started container client-a-container | |
| Jan 24 20:27:38.657: INFO: At 2020-01-24 20:26:03 +0000 UTC - event for client-a-4hkkh: {kubelet workload-cluster-4-md-0-5c7f78dbc8-dj56d} Created: Created container client-a-container | |
| Jan 24 20:27:38.657: INFO: At 2020-01-24 20:26:03 +0000 UTC - event for client-a-4hkkh: {kubelet workload-cluster-4-md-0-5c7f78dbc8-dj56d} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine | |
| Jan 24 20:27:38.657: INFO: At 2020-01-24 20:27:38 +0000 UTC - event for server-np45l: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Killing: Stopping container server-container-80 | |
| Jan 24 20:27:38.657: INFO: At 2020-01-24 20:27:38 +0000 UTC - event for server-np45l: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Killing: Stopping container server-container-81 | |
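Editor's note: the two FailedCreatePodSandBox events above are the most diagnostic lines in this dump. Calico could not add the host-side /32 routes for the new sandboxes because routes for 192.168.222.4 and 192.168.222.5 already existed on an interface other than cali14f31f84762 on workload-cluster-4-md-0-5c7f78dbc8-dj56d, i.e. stale routes, plausibly left behind by the calico-node restarts earlier in this run (the calico-node pods on the workers all started around 20:19:58-59). A hedged inspection sketch to run on that node with standard iproute2 tooling; the addresses come straight from the events:

    # Show which interface currently owns the conflicting /32 routes
    ip route show | grep -E '192\.168\.222\.(4|5)'
    # Compare against the live Calico workload interfaces and their routes
    ip -br link show | grep cali
    ip route show | grep cali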
| Jan 24 20:27:38.663: INFO: POD NODE PHASE GRACE CONDITIONS | |
| Jan 24 20:27:38.663: INFO: server-np45l workload-cluster-4-md-0-5c7f78dbc8-tgjll Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 20:25:18 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 20:25:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 20:25:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 20:25:18 +0000 UTC }] | |
| Jan 24 20:27:38.663: INFO: | |
| Jan 24 20:27:38.680: INFO: | |
| Logging node info for node workload-cluster-4-controlplane-0 | |
| Jan 24 20:27:38.688: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-controlplane-0 /api/v1/nodes/workload-cluster-4-controlplane-0 82506459-dd0f-48e7-bd9d-75efde1111f6 35367 0 2020-01-24 17:18:10 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-controlplane-0 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-controlplane-0"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.28.186/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.210.0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://4204e6d6-9184-b271-f650-0dabd9f75516,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229202944 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124345344 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:50 +0000 UTC,LastTransitionTime:2020-01-24 20:20:50 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:27:04 +0000 UTC,LastTransitionTime:2020-01-24 17:18:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:27:04 +0000 UTC,LastTransitionTime:2020-01-24 17:18:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:27:04 +0000 UTC,LastTransitionTime:2020-01-24 17:18:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:27:04 +0000 UTC,LastTransitionTime:2020-01-24 17:44:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-controlplane-0,},NodeAddress{Type:ExternalIP,Address:10.193.28.186,},NodeAddress{Type:InternalIP,Address:10.193.28.186,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:80b9ac3a81a84f4e8f29dac5e41ffd1f,SystemUUID:D6E60442-8491-71B2-F650-0DABD9F75516,BootID:0411b150-3c1b-4216-a7bc-d4a359e6753d,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/syncer@sha256:fc80ec77a2ab4b58ddfa259a938f6d741933566011d56e5ffcc8680cc83538fe gcr.io/cloud-provider-vsphere/csi/release/syncer:v1.0.1],SizeBytes:38454608,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner:v1.2.1],SizeBytes:18924297,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/cpi/release/manager@sha256:64de5c7f10e55703142383fade40886091528ca505f00c98d57e27f10f04fc03 
gcr.io/cloud-provider-vsphere/cpi/release/manager:v1.1.0],SizeBytes:16201394,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher:v1.1.1],SizeBytes:15526843,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:27:38.689: INFO: | |
| Logging kubelet events for node workload-cluster-4-controlplane-0 | |
| Jan 24 20:27:38.698: INFO: | |
| Logging pods the kubelet thinks are on node workload-cluster-4-controlplane-0 | |
| Jan 24 20:27:38.719: INFO: calico-node-dtwvg started at 2020-01-24 20:19:59 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:27:38.719: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:27:38.719: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:27:38.719: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:27:38.719: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:27:38.719: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-t9xfs started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:27:38.719: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:27:38.719: INFO: Container systemd-logs ready: true, restart count 0 | |
| Jan 24 20:27:38.719: INFO: etcd-workload-cluster-4-controlplane-0 started at 2020-01-24 17:17:56 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:27:38.719: INFO: Container etcd ready: true, restart count 0 | |
| Jan 24 20:27:38.719: INFO: kube-apiserver-workload-cluster-4-controlplane-0 started at 2020-01-24 17:17:56 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:27:38.719: INFO: Container kube-apiserver ready: true, restart count 0 | |
| Jan 24 20:27:38.720: INFO: kube-scheduler-workload-cluster-4-controlplane-0 started at 2020-01-24 17:17:56 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:27:38.720: INFO: Container kube-scheduler ready: true, restart count 1 | |
| Jan 24 20:27:38.720: INFO: kube-proxy-ng9fc started at 2020-01-24 17:19:39 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:27:38.720: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:27:38.720: INFO: vsphere-csi-node-bvs7m started at 2020-01-24 17:44:44 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:27:38.720: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:27:38.720: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:27:38.720: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:27:38.720: INFO: kube-controller-manager-workload-cluster-4-controlplane-0 started at 2020-01-24 17:17:56 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:27:38.720: INFO: Container kube-controller-manager ready: true, restart count 2 | |
| Jan 24 20:27:38.720: INFO: vsphere-cloud-controller-manager-xgbwj started at 2020-01-24 17:19:42 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:27:38.720: INFO: Container vsphere-cloud-controller-manager ready: true, restart count 0 | |
| Jan 24 20:27:38.720: INFO: vsphere-csi-controller-0 started at 2020-01-24 17:44:46 +0000 UTC (0+5 container statuses recorded) | |
| Jan 24 20:27:38.720: INFO: Container csi-attacher ready: true, restart count 0 | |
| Jan 24 20:27:38.720: INFO: Container csi-provisioner ready: true, restart count 0 | |
| Jan 24 20:27:38.720: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:27:38.720: INFO: Container vsphere-csi-controller ready: true, restart count 0 | |
| Jan 24 20:27:38.721: INFO: Container vsphere-syncer ready: true, restart count 0 | |
| W0124 20:27:38.732149 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:27:38.870: INFO: | |
| Latency metrics for node workload-cluster-4-controlplane-0 | |
| Jan 24 20:27:38.870: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:27:38.874: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-dj56d /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-dj56d e3b9736e-3d53-4225-9f32-9b5ed43a351d 35402 0 2020-01-24 17:31:41 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-dj56d kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-dj56d"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.22.156/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.222.0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://42045096-e00b-ae55-1d85-007f6b8efaee,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229211136 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124353536 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:50 +0000 UTC,LastTransitionTime:2020-01-24 20:20:50 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:27:20 +0000 UTC,LastTransitionTime:2020-01-24 17:31:41 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:27:20 +0000 UTC,LastTransitionTime:2020-01-24 17:31:41 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:27:20 +0000 UTC,LastTransitionTime:2020-01-24 17:31:41 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:27:20 +0000 UTC,LastTransitionTime:2020-01-24 17:44:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-dj56d,},NodeAddress{Type:ExternalIP,Address:10.193.22.156,},NodeAddress{Type:InternalIP,Address:10.193.22.156,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b52db3bea2844f019e98985de63db412,SystemUUID:96500442-0BE0-55AE-1D85-007F6B8EFAEE,BootID:4ec1b10c-fc30-41e5-9d16-834bd0f5cf13,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[docker.io/jayunit100/kube-controllers@sha256:80d3bff453f163c62d0dadf44d5056053144eceaf0d05fe8c90b861aa4d5d602 docker.io/jayunit100/kube-controllers:tkg2],SizeBytes:21160877,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 
k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:27:38.875: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:27:38.881: INFO: | |
| Logging pods the kubelet thinks are on node workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:27:38.906: INFO: kube-proxy-9cjbv started at 2020-01-24 17:31:41 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:27:38.906: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:27:38.906: INFO: calico-node-8t722 started at 2020-01-24 20:19:58 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:27:38.906: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:27:38.906: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:27:38.906: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:27:38.906: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:27:38.906: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-25557 started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:27:38.906: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:27:38.906: INFO: Container systemd-logs ready: true, restart count 0 | |
| Jan 24 20:27:38.906: INFO: vsphere-csi-node-tkb9r started at 2020-01-24 17:44:43 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:27:38.906: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:27:38.907: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:27:38.907: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:27:38.907: INFO: coredns-5644d7b6d9-m4lts started at 2020-01-24 17:44:44 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:27:38.907: INFO: Container coredns ready: true, restart count 0 | |
| Jan 24 20:27:38.908: INFO: coredns-5644d7b6d9-ph9m8 started at 2020-01-24 17:44:44 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:27:38.908: INFO: Container coredns ready: true, restart count 0 | |
| W0124 20:27:38.913442 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:27:39.011: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:27:39.011: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-l8q2f | |
| Jan 24 20:27:39.016: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-l8q2f /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-l8q2f 3be5c565-a115-4d45-a1be-f70db9c1b166 35321 0 2020-01-24 17:31:43 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-l8q2f kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-l8q2f"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.1.57/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.184.192 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://4204aa91-067b-70ca-7b40-e384413568c2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229202944 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124345344 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:23 +0000 UTC,LastTransitionTime:2020-01-24 20:20:23 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:26:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:26:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:26:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:26:50 +0000 UTC,LastTransitionTime:2020-01-24 17:44:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-l8q2f,},NodeAddress{Type:ExternalIP,Address:10.193.1.57,},NodeAddress{Type:InternalIP,Address:10.193.1.57,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:45e1a30e9fcd470795575df43a4210d5,SystemUUID:91AA0442-7B06-CA70-7B40-E384413568C2,BootID:3aae672c-d64c-4549-8c6f-2c6720e8ef80,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:0ae49fdc22b3d81e85b98ea4587628bb78f49eaf5c14cdb47a13d8918d62cc00 gcr.io/google-containers/conformance:v1.16.3],SizeBytes:195864937,},ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 
gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:27:39.017: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-l8q2f | |
| Jan 24 20:27:39.031: INFO: | |
| Logging pods the kubelet thinks are on node workload-cluster-4-md-0-5c7f78dbc8-l8q2f | |
| Jan 24 20:27:39.054: INFO: kube-proxy-pw7c7 started at 2020-01-24 17:31:43 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:27:39.054: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:27:39.054: INFO: calico-node-rqffr started at 2020-01-24 20:19:59 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:27:39.054: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:27:39.054: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:27:39.054: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:27:39.054: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:27:39.054: INFO: vsphere-csi-node-bml6x started at 2020-01-24 17:44:44 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:27:39.054: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:27:39.054: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:27:39.054: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:27:39.054: INFO: sonobuoy-e2e-job-60496bb95c8b4e15 started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:27:39.054: INFO: Container e2e ready: true, restart count 0 | |
| Jan 24 20:27:39.054: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:27:39.054: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-pvvzm started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:27:39.054: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:27:39.054: INFO: Container systemd-logs ready: true, restart count 0 | |
| W0124 20:27:39.058869 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:27:39.160: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-l8q2f | |
| Jan 24 20:27:39.160: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:27:39.165: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-rnz88 /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-rnz88 6a50990d-c3aa-408e-89a9-b96a59028428 35387 0 2020-01-24 17:31:34 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-rnz88 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-rnz88"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.19.181/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.158.1 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://42041b55-5b6a-2e5f-e33f-ad3ffa7f7b6e,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229211136 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124353536 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:37 +0000 UTC,LastTransitionTime:2020-01-24 20:20:37 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:27:13 +0000 UTC,LastTransitionTime:2020-01-24 17:31:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:27:13 +0000 UTC,LastTransitionTime:2020-01-24 17:31:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:27:13 +0000 UTC,LastTransitionTime:2020-01-24 17:31:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:27:13 +0000 UTC,LastTransitionTime:2020-01-24 17:44:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-rnz88,},NodeAddress{Type:ExternalIP,Address:10.193.19.181,},NodeAddress{Type:InternalIP,Address:10.193.19.181,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:89e27edae47c4afabb7c067d037286ec,SystemUUID:551B0442-6A5B-5F2E-E33F-AD3FFA7F7B6E,BootID:d2a8e20a-f6ad-4b68-b732-e0d7bdbcc492,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:0ae49fdc22b3d81e85b98ea4587628bb78f49eaf5c14cdb47a13d8918d62cc00 gcr.io/google-containers/conformance:v1.16.3],SizeBytes:195864937,},ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:4a7190a4731cf3340c39ab33abf845baf6c81d911c965a879fffc1552e1a1938 
docker.io/calico/kube-controllers:v3.10.3],SizeBytes:21175514,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:27:39.166: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:27:39.173: INFO: | |
| Logging pods the kubelet thinks are on node workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:27:39.195: INFO: calico-node-5g8mr started at 2020-01-24 20:19:58 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:27:39.195: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:27:39.195: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:27:39.196: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:27:39.196: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:27:39.196: INFO: vsphere-csi-node-6nzwf started at 2020-01-24 17:44:44 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:27:39.196: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:27:39.196: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:27:39.196: INFO: Container vsphere-csi-node ready: false, restart count 6 | |
| Jan 24 20:27:39.196: INFO: calico-kube-controllers-7489ff5b7c-sq5gk started at 2020-01-24 20:20:00 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:27:39.196: INFO: Container calico-kube-controllers ready: true, restart count 0 | |
| Jan 24 20:27:39.196: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-zjw2s started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:27:39.196: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:27:39.196: INFO: Container systemd-logs ready: true, restart count 0 | |
| Jan 24 20:27:39.196: INFO: kube-proxy-hwjc4 started at 2020-01-24 17:31:34 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:27:39.196: INFO: Container kube-proxy ready: true, restart count 0 | |
| W0124 20:27:39.202442 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:27:39.304: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:27:39.304: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:27:39.308: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-tgjll /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-tgjll efb8ee46-7fce-4349-92f2-46fa03f02d30 35320 0 2020-01-24 17:31:37 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-tgjll kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-tgjll"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.25.9/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.225.128 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://4204f234-cdde-565a-8623-6c3c5457e3cc,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229202944 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124345344 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:26 +0000 UTC,LastTransitionTime:2020-01-24 20:20:26 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:26:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:26:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:26:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:26:50 +0000 UTC,LastTransitionTime:2020-01-24 17:44:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-tgjll,},NodeAddress{Type:ExternalIP,Address:10.193.25.9,},NodeAddress{Type:InternalIP,Address:10.193.25.9,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bb2c77d8141443fd8503de52dbae52e8,SystemUUID:34F20442-DECD-5A56-8623-6C3C5457E3CC,BootID:849ca1bb-d08e-417e-bf12-485a70ffc44a,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:0ae49fdc22b3d81e85b98ea4587628bb78f49eaf5c14cdb47a13d8918d62cc00 gcr.io/google-containers/conformance:v1.16.3],SizeBytes:195864937,},ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 
gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:27:39.309: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:27:39.315: INFO: | |
| Logging pods the kubelet thinks are on node workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:27:39.330: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-prfkd started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:27:39.330: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:27:39.330: INFO: Container systemd-logs ready: true, restart count 0 | |
| Jan 24 20:27:39.330: INFO: server-np45l started at 2020-01-24 20:25:18 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:27:39.330: INFO: Container server-container-80 ready: true, restart count 0 | |
| Jan 24 20:27:39.330: INFO: Container server-container-81 ready: true, restart count 0 | |
| Jan 24 20:27:39.330: INFO: kube-proxy-wp7tn started at 2020-01-24 17:31:37 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:27:39.330: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:27:39.330: INFO: calico-node-wvctw started at 2020-01-24 20:19:58 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:27:39.330: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:27:39.330: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:27:39.330: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:27:39.330: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:27:39.330: INFO: vsphere-csi-node-rqv42 started at 2020-01-24 17:44:49 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:27:39.330: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:27:39.330: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:27:39.330: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| W0124 20:27:39.336009 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:27:39.407: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:27:39.407: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:27:39.412: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-vvrzt /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-vvrzt 200933f4-8d1c-469c-b168-43a6ccbf5d04 35319 0 2020-01-24 17:31:43 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-vvrzt kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-vvrzt"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.7.86/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.163.64 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.5.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://42048f3c-cde1-2850-a851-6c4d757659cc,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.5.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229202944 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124345344 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:23 +0000 UTC,LastTransitionTime:2020-01-24 20:20:23 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:26:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:26:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:26:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:26:50 +0000 UTC,LastTransitionTime:2020-01-24 17:44:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-vvrzt,},NodeAddress{Type:ExternalIP,Address:10.193.7.86,},NodeAddress{Type:InternalIP,Address:10.193.7.86,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5c2ea802c8014c7aa88cb9bbbea1711b,SystemUUID:3C8F0442-E1CD-5028-A851-6C4D757659CC,BootID:9e70f010-f55d-4f1e-9e44-645f0afe4c09,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 
k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:27:39.412: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:27:39.419: INFO: | |
| Logging pods the kubelet thinks are on node workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:27:39.438: INFO: kube-proxy-5jvhn started at 2020-01-24 17:31:43 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:27:39.438: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:27:39.438: INFO: calico-node-zpwz4 started at 2020-01-24 20:19:59 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:27:39.438: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:27:39.438: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:27:39.438: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:27:39.438: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:27:39.438: INFO: vsphere-csi-node-ttr82 started at 2020-01-24 17:44:44 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:27:39.438: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:27:39.438: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:27:39.438: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:27:39.438: INFO: sonobuoy started at 2020-01-24 20:21:31 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:27:39.438: INFO: Container kube-sonobuoy ready: true, restart count 0 | |
| Jan 24 20:27:39.438: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-9jn4t started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:27:39.438: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:27:39.438: INFO: Container systemd-logs ready: true, restart count 0 | |
| W0124 20:27:39.443922 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:27:39.573: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:27:39.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
| STEP: Destroying namespace "network-policy-2084" for this suite. | |
| Jan 24 20:27:51.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered | |
| Jan 24 20:27:51.699: INFO: namespace network-policy-2084 deletion completed in 12.121319885s | |
| STEP: Destroying namespace "network-policy-b-3540" for this suite. | |
| Jan 24 20:27:57.718: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered | |
| Jan 24 20:27:57.827: INFO: namespace network-policy-b-3540 deletion completed in 6.127730064s | |
| • Failure [159.825 seconds] | |
| [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 | |
| NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:56 | |
| should allow ingress access from namespace on one named port [Feature:NetworkPolicy] [It] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:599 | |
| Jan 24 20:27:38.534: Error getting container logs: the server could not find the requested resource (get pods client-b-9tm6g) | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:1421 | |
| ------------------------------ | |
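The failure recorded above at 20:27:38.534 is a log-collection error rather than a connectivity failure: by the time the framework tried to read the client pod's logs, the pod client-b-9tm6g had already been deleted, so the apiserver answered "the server could not find the requested resource". A minimal shell sketch for inspecting that state by hand while the test namespace still exists; the pod and namespace names are copied from this run's log and will differ on a rerun:

# List the client/server pods in the test namespace before it is destroyed.
kubectl --kubeconfig=./out/workload-cluster-4/kubeconfig -n network-policy-2084 get pods -o wide

# The call that failed in the suite: fetching the client pod's logs after the pod was removed.
kubectl --kubeconfig=./out/workload-cluster-4/kubeconfig -n network-policy-2084 logs client-b-9tm6g

# Recent events show whether the pod completed, was evicted, or was cleaned up early.
kubectl --kubeconfig=./out/workload-cluster-4/kubeconfig -n network-policy-2084 get events --sort-by='.lastTimestamp'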
| SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS | |
| ------------------------------ | |
| [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client | |
| should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:489 | |
| [BeforeEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
| STEP: Creating a kubernetes client | |
| Jan 24 20:27:57.830: INFO: >>> kubeConfig: /tmp/kubeconfig-418650170 | |
| STEP: Building a namespace api object, basename network-policy | |
| STEP: Waiting for a default service account to be provisioned in namespace | |
| [BeforeEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:51 | |
| [BeforeEach] NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:57 | |
| STEP: Creating a simple server that serves on port 80 and 81. | |
| STEP: Creating a server pod server in namespace network-policy-6759 | |
| Jan 24 20:27:57.938: INFO: Created pod server-cff9n | |
| STEP: Creating a service svc-server for pod server in namespace network-policy-6759 | |
| Jan 24 20:27:57.996: INFO: Created service svc-server | |
| STEP: Waiting for pod ready | |
| STEP: Testing pods can connect to both ports when no policy is present. | |
| STEP: Creating client pod client-can-connect-80 that should successfully connect to svc-server. | |
| Jan 24 20:28:10.046: INFO: Waiting for client-can-connect-80-87mbq to complete. | |
| Jan 24 20:28:14.063: INFO: Waiting for client-can-connect-80-87mbq to complete. | |
| Jan 24 20:28:14.064: INFO: Waiting up to 5m0s for pod "client-can-connect-80-87mbq" in namespace "network-policy-6759" to be "success or failure" | |
| Jan 24 20:28:14.068: INFO: Pod "client-can-connect-80-87mbq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.260356ms | |
| STEP: Saw pod success | |
| Jan 24 20:28:14.068: INFO: Pod "client-can-connect-80-87mbq" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-can-connect-80-87mbq | |
| STEP: Creating client pod client-can-connect-81 that should successfully connect to svc-server. | |
| Jan 24 20:28:14.111: INFO: Waiting for client-can-connect-81-jlqqs to complete. | |
| Jan 24 20:28:18.139: INFO: Waiting for client-can-connect-81-jlqqs to complete. | |
| Jan 24 20:28:18.140: INFO: Waiting up to 5m0s for pod "client-can-connect-81-jlqqs" in namespace "network-policy-6759" to be "success or failure" | |
| Jan 24 20:28:18.145: INFO: Pod "client-can-connect-81-jlqqs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 5.618573ms | |
| STEP: Saw pod success | |
| Jan 24 20:28:18.146: INFO: Pod "client-can-connect-81-jlqqs" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-can-connect-81-jlqqs | |
| [It] should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:489 | |
| STEP: Creating a network policy for the Service which allows traffic only to one port. | |
| STEP: Creating a network policy for the Service which allows traffic only to another port. | |
| STEP: Testing pods can connect to both ports when both policies are present. | |
| STEP: Creating client pod client-a that should successfully connect to svc-server. | |
| Jan 24 20:28:18.206: INFO: Waiting for client-a-57pgp to complete. | |
| Jan 24 20:28:20.229: INFO: Waiting for client-a-57pgp to complete. | |
| Jan 24 20:28:20.229: INFO: Waiting up to 5m0s for pod "client-a-57pgp" in namespace "network-policy-6759" to be "success or failure" | |
| Jan 24 20:28:20.235: INFO: Pod "client-a-57pgp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 5.406292ms | |
| STEP: Saw pod success | |
| Jan 24 20:28:20.235: INFO: Pod "client-a-57pgp" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-a-57pgp | |
| STEP: Creating client pod client-b that should successfully connect to svc-server. | |
| Jan 24 20:28:20.271: INFO: Waiting for client-b-jtsjw to complete. | |
| Jan 24 20:28:22.285: INFO: Waiting for client-b-jtsjw to complete. | |
| Jan 24 20:28:22.285: INFO: Waiting up to 5m0s for pod "client-b-jtsjw" in namespace "network-policy-6759" to be "success or failure" | |
| Jan 24 20:28:22.291: INFO: Pod "client-b-jtsjw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 5.846424ms | |
| STEP: Saw pod success | |
| Jan 24 20:28:22.291: INFO: Pod "client-b-jtsjw" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-b-jtsjw | |
| STEP: Cleaning up the policy. | |
| STEP: Cleaning up the policy. | |
| [AfterEach] NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:75 | |
| STEP: Cleaning up the server. | |
| STEP: Cleaning up the server's service. | |
| [AfterEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
| Jan 24 20:28:22.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
| STEP: Destroying namespace "network-policy-6759" for this suite. | |
| Jan 24 20:28:28.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered | |
| Jan 24 20:28:28.589: INFO: namespace network-policy-6759 deletion completed in 6.150828424s | |
| • [SLOW TEST:30.759 seconds] | |
| [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 | |
| NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:56 | |
| should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:489 | |
| ------------------------------ | |
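The passing case above stacks two NetworkPolicies that select the same server pod, one admitting port 80 and one admitting port 81; because policy rules are additive, the union admits both ports. A sketch of such a pair under assumed names (the pod-name: server label and the policy names are mine; the log does not show the actual manifests):

```go
package main

import (
	"encoding/json"
	"fmt"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// allowPort builds an ingress policy that selects the server pod and admits
// traffic to a single port. Creating two of these with overlapping
// podSelectors is the "stacked policies" scenario: their rules are OR'd.
func allowPort(name string, port int) *networkingv1.NetworkPolicy {
	p := intstr.FromInt(port)
	return &networkingv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: networkingv1.NetworkPolicySpec{
			// Both policies select the same (assumed) server label.
			PodSelector: metav1.LabelSelector{MatchLabels: map[string]string{"pod-name": "server"}},
			Ingress: []networkingv1.NetworkPolicyIngressRule{{
				Ports: []networkingv1.NetworkPolicyPort{{Port: &p}},
			}},
		},
	}
}

func main() {
	for _, np := range []*networkingv1.NetworkPolicy{allowPort("allow-80", 80), allowPort("allow-81", 81)} {
		out, _ := json.MarshalIndent(np, "", "  ")
		fmt.Println(string(out))
	}
}
```

With only allow-80 present, client-b's probe of port 81 would time out; adding allow-81 alongside it is what lets both clients in the log succeed.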
| SSSSSSSSSSSSS | |
| ------------------------------ | |
| [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client | |
| should allow ingress access on one named port [Feature:NetworkPolicy] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:566 | |
| [BeforeEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
| STEP: Creating a kubernetes client | |
| Jan 24 20:28:28.591: INFO: >>> kubeConfig: /tmp/kubeconfig-418650170 | |
| STEP: Building a namespace api object, basename network-policy | |
| STEP: Waiting for a default service account to be provisioned in namespace | |
| [BeforeEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:51 | |
| [BeforeEach] NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:57 | |
| STEP: Creating a simple server that serves on port 80 and 81. | |
| STEP: Creating a server pod server in namespace network-policy-429 | |
| Jan 24 20:28:28.664: INFO: Created pod server-88wjs | |
| STEP: Creating a service svc-server for pod server in namespace network-policy-429 | |
| Jan 24 20:28:28.700: INFO: Created service svc-server | |
| STEP: Waiting for pod ready | |
| STEP: Testing pods can connect to both ports when no policy is present. | |
| STEP: Creating client pod client-can-connect-80 that should successfully connect to svc-server. | |
| Jan 24 20:28:38.735: INFO: Waiting for client-can-connect-80-qkthn to complete. | |
| Jan 24 20:28:42.762: INFO: Waiting for client-can-connect-80-qkthn to complete. | |
| Jan 24 20:28:42.762: INFO: Waiting up to 5m0s for pod "client-can-connect-80-qkthn" in namespace "network-policy-429" to be "success or failure" | |
| Jan 24 20:28:42.767: INFO: Pod "client-can-connect-80-qkthn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.842406ms | |
| STEP: Saw pod success | |
| Jan 24 20:28:42.767: INFO: Pod "client-can-connect-80-qkthn" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-can-connect-80-qkthn | |
| STEP: Creating client pod client-can-connect-81 that should successfully connect to svc-server. | |
| Jan 24 20:28:42.816: INFO: Waiting for client-can-connect-81-nlf8l to complete. | |
| Jan 24 20:28:46.844: INFO: Waiting for client-can-connect-81-nlf8l to complete. | |
| Jan 24 20:28:46.844: INFO: Waiting up to 5m0s for pod "client-can-connect-81-nlf8l" in namespace "network-policy-429" to be "success or failure" | |
| Jan 24 20:28:46.848: INFO: Pod "client-can-connect-81-nlf8l": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.205543ms | |
| STEP: Saw pod success | |
| Jan 24 20:28:46.849: INFO: Pod "client-can-connect-81-nlf8l" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-can-connect-81-nlf8l | |
| [It] should allow ingress access on one named port [Feature:NetworkPolicy] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:566 | |
| STEP: Creating client-a which should be able to contact the server. | |
| STEP: Creating client pod client-a that should successfully connect to svc-server. | |
| Jan 24 20:28:46.889: INFO: Waiting for client-a-vqqlp to complete. | |
| Jan 24 20:28:50.907: INFO: Waiting for client-a-vqqlp to complete. | |
| Jan 24 20:28:50.907: INFO: Waiting up to 5m0s for pod "client-a-vqqlp" in namespace "network-policy-429" to be "success or failure" | |
| Jan 24 20:28:50.911: INFO: Pod "client-a-vqqlp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 3.414498ms | |
| STEP: Saw pod success | |
| Jan 24 20:28:50.911: INFO: Pod "client-a-vqqlp" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-a-vqqlp | |
| STEP: Creating client-b which should not be able to contact the server on port 81. | |
| STEP: Creating client pod client-b that should not be able to connect to svc-server. | |
| Jan 24 20:28:50.940: INFO: Waiting for client-b-6tshm to complete. | |
| Jan 24 20:28:50.940: INFO: Waiting up to 5m0s for pod "client-b-6tshm" in namespace "network-policy-429" to be "success or failure" | |
| Jan 24 20:28:50.950: INFO: Pod "client-b-6tshm": Phase="Pending", Reason="", readiness=false. Elapsed: 10.590682ms | |
| Jan 24 20:28:52.955: INFO: Pod "client-b-6tshm": Phase="Running", Reason="", readiness=true. Elapsed: 2.015530417s | |
| Jan 24 20:28:54.960: INFO: Pod "client-b-6tshm": Phase="Running", Reason="", readiness=true. Elapsed: 4.02073035s | |
| Jan 24 20:28:56.965: INFO: Pod "client-b-6tshm": Phase="Running", Reason="", readiness=true. Elapsed: 6.025478448s | |
| Jan 24 20:28:58.970: INFO: Pod "client-b-6tshm": Phase="Running", Reason="", readiness=true. Elapsed: 8.030214443s | |
| Jan 24 20:29:00.976: INFO: Pod "client-b-6tshm": Phase="Running", Reason="", readiness=true. Elapsed: 10.036163963s | |
| Jan 24 20:29:02.980: INFO: Pod "client-b-6tshm": Phase="Running", Reason="", readiness=true. Elapsed: 12.040737616s | |
| Jan 24 20:29:04.986: INFO: Pod "client-b-6tshm": Phase="Running", Reason="", readiness=true. Elapsed: 14.04665129s | |
| Jan 24 20:29:06.990: INFO: Pod "client-b-6tshm": Phase="Running", Reason="", readiness=true. Elapsed: 16.050755265s | |
| Jan 24 20:29:08.995: INFO: Pod "client-b-6tshm": Phase="Running", Reason="", readiness=true. Elapsed: 18.054889172s | |
| Jan 24 20:29:10.998: INFO: Pod "client-b-6tshm": Phase="Running", Reason="", readiness=true. Elapsed: 20.058251286s | |
| Jan 24 20:29:13.003: INFO: Pod "client-b-6tshm": Phase="Running", Reason="", readiness=true. Elapsed: 22.063273528s | |
| Jan 24 20:29:15.010: INFO: Pod "client-b-6tshm": Phase="Running", Reason="", readiness=true. Elapsed: 24.070132031s | |
| Jan 24 20:29:17.016: INFO: Pod "client-b-6tshm": Phase="Running", Reason="", readiness=true. Elapsed: 26.076202755s | |
| Jan 24 20:29:19.021: INFO: Pod "client-b-6tshm": Phase="Running", Reason="", readiness=true. Elapsed: 28.081531028s | |
| Jan 24 20:29:21.026: INFO: Pod "client-b-6tshm": Phase="Running", Reason="", readiness=true. Elapsed: 30.086120137s | |
| Jan 24 20:29:23.030: INFO: Pod "client-b-6tshm": Phase="Running", Reason="", readiness=true. Elapsed: 32.090081247s | |
| Jan 24 20:29:25.036: INFO: Pod "client-b-6tshm": Phase="Running", Reason="", readiness=true. Elapsed: 34.095875508s | |
| Jan 24 20:29:27.041: INFO: Pod "client-b-6tshm": Phase="Running", Reason="", readiness=true. Elapsed: 36.100964192s | |
| Jan 24 20:29:29.044: INFO: Pod "client-b-6tshm": Phase="Running", Reason="", readiness=true. Elapsed: 38.104639682s | |
| Jan 24 20:29:31.049: INFO: Pod "client-b-6tshm": Phase="Running", Reason="", readiness=true. Elapsed: 40.109123439s | |
| Jan 24 20:29:33.053: INFO: Pod "client-b-6tshm": Phase="Running", Reason="", readiness=true. Elapsed: 42.112932588s | |
| Jan 24 20:29:35.059: INFO: Pod "client-b-6tshm": Phase="Running", Reason="", readiness=true. Elapsed: 44.118878973s | |
| Jan 24 20:29:37.064: INFO: Pod "client-b-6tshm": Phase="Running", Reason="", readiness=true. Elapsed: 46.124021412s | |
| Jan 24 20:29:39.068: INFO: Pod "client-b-6tshm": Phase="Failed", Reason="", readiness=false. Elapsed: 48.128200513s | |
| STEP: Cleaning up the pod client-b-6tshm | |
| STEP: Cleaning up the policy. | |
| [AfterEach] NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:75 | |
| STEP: Cleaning up the server. | |
| STEP: Cleaning up the server's service. | |
| [AfterEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
| Jan 24 20:29:39.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
| STEP: Destroying namespace "network-policy-429" for this suite. | |
| Jan 24 20:29:51.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered | |
| Jan 24 20:29:51.347: INFO: namespace network-policy-429 deletion completed in 12.15895133s | |
| • [SLOW TEST:82.756 seconds] | |
| [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 | |
| NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:56 | |
| should allow ingress access on one named port [Feature:NetworkPolicy] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:566 | |
| ------------------------------ | |
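This case admits ingress via a NetworkPolicyPort that names a container port instead of numbering it: client-a, hitting the allowed port, succeeds, while client-b's probe of port 81 times out and the pod lands in Failed after ~48s, which is the expected outcome for a should-not-connect client. A sketch of a named-port policy; the serve-80 port name and the server label are assumptions about the suite's fixtures:

```go
package main

import (
	"encoding/json"
	"fmt"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// A named port is resolved against the selected pod's container ports,
	// so "serve-80" admits only the container port carrying that name.
	namedPort := intstr.FromString("serve-80")
	np := &networkingv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "allow-named-port"},
		Spec: networkingv1.NetworkPolicySpec{
			PodSelector: metav1.LabelSelector{MatchLabels: map[string]string{"pod-name": "server"}},
			Ingress: []networkingv1.NetworkPolicyIngressRule{{
				Ports: []networkingv1.NetworkPolicyPort{{Port: &namedPort}},
			}},
		},
	}
	out, _ := json.MarshalIndent(np, "", "  ")
	fmt.Println(string(out))
}
```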
| SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS | |
| ------------------------------ | |
| [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client | |
| should allow ingress access from updated pod [Feature:NetworkPolicy] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:824 | |
| [BeforeEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
| STEP: Creating a kubernetes client | |
| Jan 24 20:29:51.351: INFO: >>> kubeConfig: /tmp/kubeconfig-418650170 | |
| STEP: Building a namespace api object, basename network-policy | |
| STEP: Waiting for a default service account to be provisioned in namespace | |
| [BeforeEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:51 | |
| [BeforeEach] NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:57 | |
| STEP: Creating a simple server that serves on port 80 and 81. | |
| STEP: Creating a server pod server in namespace network-policy-7959 | |
| Jan 24 20:29:51.405: INFO: Created pod server-hl4lx | |
| STEP: Creating a service svc-server for pod server in namespace network-policy-7959 | |
| Jan 24 20:29:51.461: INFO: Created service svc-server | |
| STEP: Waiting for pod ready | |
| STEP: Testing pods can connect to both ports when no policy is present. | |
| STEP: Creating client pod client-can-connect-80 that should successfully connect to svc-server. | |
| Jan 24 20:30:03.522: INFO: Waiting for client-can-connect-80-5jkxm to complete. | |
| Jan 24 20:30:07.555: INFO: Waiting for client-can-connect-80-5jkxm to complete. | |
| Jan 24 20:30:07.555: INFO: Waiting up to 5m0s for pod "client-can-connect-80-5jkxm" in namespace "network-policy-7959" to be "success or failure" | |
| Jan 24 20:30:07.563: INFO: Pod "client-can-connect-80-5jkxm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.448452ms | |
| STEP: Saw pod success | |
| Jan 24 20:30:07.563: INFO: Pod "client-can-connect-80-5jkxm" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-can-connect-80-5jkxm | |
| STEP: Creating client pod client-can-connect-81 that should successfully connect to svc-server. | |
| Jan 24 20:30:07.603: INFO: Waiting for client-can-connect-81-k8nfc to complete. | |
| Jan 24 20:30:09.624: INFO: Waiting for client-can-connect-81-k8nfc to complete. | |
| Jan 24 20:30:09.625: INFO: Waiting up to 5m0s for pod "client-can-connect-81-k8nfc" in namespace "network-policy-7959" to be "success or failure" | |
| Jan 24 20:30:09.629: INFO: Pod "client-can-connect-81-k8nfc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.150943ms | |
| STEP: Saw pod success | |
| Jan 24 20:30:09.629: INFO: Pod "client-can-connect-81-k8nfc" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-can-connect-81-k8nfc | |
| [It] should allow ingress access from updated pod [Feature:NetworkPolicy] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:824 | |
| STEP: Creating a network policy for the server which allows traffic from client-a-updated. | |
| STEP: Creating client pod client-a that should not be able to connect to svc-server. | |
| Jan 24 20:30:09.678: INFO: Waiting for client-a-dz6wx to complete. | |
| Jan 24 20:30:09.678: INFO: Waiting up to 5m0s for pod "client-a-dz6wx" in namespace "network-policy-7959" to be "success or failure" | |
| Jan 24 20:30:09.708: INFO: Pod "client-a-dz6wx": Phase="Pending", Reason="", readiness=false. Elapsed: 29.195304ms | |
| Jan 24 20:30:11.713: INFO: Pod "client-a-dz6wx": Phase="Running", Reason="", readiness=true. Elapsed: 2.034654667s | |
| Jan 24 20:30:13.717: INFO: Pod "client-a-dz6wx": Phase="Running", Reason="", readiness=true. Elapsed: 4.038971234s | |
| Jan 24 20:30:15.722: INFO: Pod "client-a-dz6wx": Phase="Running", Reason="", readiness=true. Elapsed: 6.043188456s | |
| Jan 24 20:30:17.726: INFO: Pod "client-a-dz6wx": Phase="Running", Reason="", readiness=true. Elapsed: 8.047264387s | |
| Jan 24 20:30:19.730: INFO: Pod "client-a-dz6wx": Phase="Running", Reason="", readiness=true. Elapsed: 10.051976629s | |
| Jan 24 20:30:21.741: INFO: Pod "client-a-dz6wx": Phase="Running", Reason="", readiness=true. Elapsed: 12.06245676s | |
| Jan 24 20:30:23.746: INFO: Pod "client-a-dz6wx": Phase="Running", Reason="", readiness=true. Elapsed: 14.067986314s | |
| Jan 24 20:30:25.753: INFO: Pod "client-a-dz6wx": Phase="Running", Reason="", readiness=true. Elapsed: 16.074572446s | |
| Jan 24 20:30:27.757: INFO: Pod "client-a-dz6wx": Phase="Running", Reason="", readiness=true. Elapsed: 18.078773588s | |
| Jan 24 20:30:29.762: INFO: Pod "client-a-dz6wx": Phase="Running", Reason="", readiness=true. Elapsed: 20.083561626s | |
| Jan 24 20:30:31.779: INFO: Pod "client-a-dz6wx": Phase="Running", Reason="", readiness=true. Elapsed: 22.100594077s | |
| Jan 24 20:30:33.792: INFO: Pod "client-a-dz6wx": Phase="Running", Reason="", readiness=true. Elapsed: 24.113621338s | |
| Jan 24 20:30:35.799: INFO: Pod "client-a-dz6wx": Phase="Running", Reason="", readiness=true. Elapsed: 26.120365252s | |
| Jan 24 20:30:37.803: INFO: Pod "client-a-dz6wx": Phase="Running", Reason="", readiness=true. Elapsed: 28.124415034s | |
| Jan 24 20:30:39.807: INFO: Pod "client-a-dz6wx": Phase="Running", Reason="", readiness=true. Elapsed: 30.128436228s | |
| Jan 24 20:30:41.821: INFO: Pod "client-a-dz6wx": Phase="Running", Reason="", readiness=true. Elapsed: 32.142103293s | |
| Jan 24 20:30:43.825: INFO: Pod "client-a-dz6wx": Phase="Running", Reason="", readiness=true. Elapsed: 34.146534072s | |
| Jan 24 20:30:45.829: INFO: Pod "client-a-dz6wx": Phase="Running", Reason="", readiness=true. Elapsed: 36.150801461s | |
| Jan 24 20:30:47.833: INFO: Pod "client-a-dz6wx": Phase="Running", Reason="", readiness=true. Elapsed: 38.154951617s | |
| Jan 24 20:30:49.840: INFO: Pod "client-a-dz6wx": Phase="Running", Reason="", readiness=true. Elapsed: 40.161738152s | |
| Jan 24 20:30:51.846: INFO: Pod "client-a-dz6wx": Phase="Running", Reason="", readiness=true. Elapsed: 42.167799045s | |
| Jan 24 20:30:53.850: INFO: Pod "client-a-dz6wx": Phase="Running", Reason="", readiness=true. Elapsed: 44.171599387s | |
| Jan 24 20:30:55.855: INFO: Pod "client-a-dz6wx": Phase="Running", Reason="", readiness=true. Elapsed: 46.176449247s | |
| Jan 24 20:30:57.859: INFO: Pod "client-a-dz6wx": Phase="Failed", Reason="", readiness=false. Elapsed: 48.180653375s | |
| STEP: Updating client pod client-a-dz6wx that should successfully connect to svc-server. | |
| Jan 24 20:30:57.870: INFO: Waiting for client-a-dz6wx to complete. | |
| Jan 24 20:30:57.877: INFO: Waiting for client-a-dz6wx to complete. | |
| Jan 24 20:30:57.877: INFO: Waiting up to 5m0s for pod "client-a-dz6wx" in namespace "network-policy-7959" to be "success or failure" | |
| Jan 24 20:30:57.881: INFO: Pod "client-a-dz6wx": Phase="Failed", Reason="", readiness=false. Elapsed: 3.287928ms | |
| Jan 24 20:30:57.884: FAIL: Error getting container logs: the server rejected our request for an unknown reason (get pods client-a-dz6wx) | |
| STEP: Cleaning up the pod client-a-dz6wx | |
| STEP: Cleaning up the policy. | |
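The policy in this failing case admits ingress only from pods carrying a client-a-updated label, so the freshly created client-a times out as intended (48s, phase Failed); the test then relabels the same pod so it matches the peer selector and is expected to connect. Note that the recorded FAIL at 20:30:57.884 occurs while fetching the client pod's container logs, not in policy enforcement itself. A sketch of the policy plus the in-place label update, with all label keys and values assumed:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Ingress to the server is restricted to pods carrying the
	// "client-a-updated" label; a plain client-a pod matches nothing.
	np := &networkingv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "allow-client-a-updated"},
		Spec: networkingv1.NetworkPolicySpec{
			PodSelector: metav1.LabelSelector{MatchLabels: map[string]string{"pod-name": "server"}},
			Ingress: []networkingv1.NetworkPolicyIngressRule{{
				From: []networkingv1.NetworkPolicyPeer{{
					PodSelector: &metav1.LabelSelector{
						MatchLabels: map[string]string{"pod-name": "client-a-updated"},
					},
				}},
			}},
		},
	}

	// "Updating client pod client-a ..." amounts to rewriting the pod's
	// labels in place so it starts matching the peer selector above.
	client := &corev1.Pod{ObjectMeta: metav1.ObjectMeta{
		Name:   "client-a",
		Labels: map[string]string{"pod-name": "client-a"},
	}}
	client.Labels["pod-name"] = "client-a-updated"

	for _, obj := range []interface{}{np, client} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}
```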
| [AfterEach] NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:75 | |
| STEP: Cleaning up the server. | |
| STEP: Cleaning up the server's service. | |
| [AfterEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
| STEP: Collecting events from namespace "network-policy-7959". | |
| STEP: Found 21 events. | |
| Jan 24 20:30:58.022: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-a-dz6wx: {default-scheduler } Scheduled: Successfully assigned network-policy-7959/client-a-dz6wx to workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:30:58.023: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-80-5jkxm: {default-scheduler } Scheduled: Successfully assigned network-policy-7959/client-can-connect-80-5jkxm to workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:30:58.023: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-81-k8nfc: {default-scheduler } Scheduled: Successfully assigned network-policy-7959/client-can-connect-81-k8nfc to workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:30:58.023: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for server-hl4lx: {default-scheduler } Scheduled: Successfully assigned network-policy-7959/server-hl4lx to workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:30:58.023: INFO: At 2020-01-24 20:29:52 +0000 UTC - event for server-hl4lx: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Started: Started container server-container-81 | |
| Jan 24 20:30:58.023: INFO: At 2020-01-24 20:29:52 +0000 UTC - event for server-hl4lx: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.6" already present on machine | |
| Jan 24 20:30:58.023: INFO: At 2020-01-24 20:29:52 +0000 UTC - event for server-hl4lx: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.6" already present on machine | |
| Jan 24 20:30:58.023: INFO: At 2020-01-24 20:29:52 +0000 UTC - event for server-hl4lx: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Started: Started container server-container-80 | |
| Jan 24 20:30:58.023: INFO: At 2020-01-24 20:29:52 +0000 UTC - event for server-hl4lx: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Created: Created container server-container-81 | |
| Jan 24 20:30:58.023: INFO: At 2020-01-24 20:29:52 +0000 UTC - event for server-hl4lx: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Created: Created container server-container-80 | |
| Jan 24 20:30:58.023: INFO: At 2020-01-24 20:30:04 +0000 UTC - event for client-can-connect-80-5jkxm: {kubelet workload-cluster-4-md-0-5c7f78dbc8-rnz88} Created: Created container client-can-connect-80-container | |
| Jan 24 20:30:58.023: INFO: At 2020-01-24 20:30:04 +0000 UTC - event for client-can-connect-80-5jkxm: {kubelet workload-cluster-4-md-0-5c7f78dbc8-rnz88} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine | |
| Jan 24 20:30:58.023: INFO: At 2020-01-24 20:30:04 +0000 UTC - event for client-can-connect-80-5jkxm: {kubelet workload-cluster-4-md-0-5c7f78dbc8-rnz88} Started: Started container client-can-connect-80-container | |
| Jan 24 20:30:58.023: INFO: At 2020-01-24 20:30:08 +0000 UTC - event for client-can-connect-81-k8nfc: {kubelet workload-cluster-4-md-0-5c7f78dbc8-dj56d} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine | |
| Jan 24 20:30:58.024: INFO: At 2020-01-24 20:30:08 +0000 UTC - event for client-can-connect-81-k8nfc: {kubelet workload-cluster-4-md-0-5c7f78dbc8-dj56d} Started: Started container client-can-connect-81-container | |
| Jan 24 20:30:58.024: INFO: At 2020-01-24 20:30:08 +0000 UTC - event for client-can-connect-81-k8nfc: {kubelet workload-cluster-4-md-0-5c7f78dbc8-dj56d} Created: Created container client-can-connect-81-container | |
| Jan 24 20:30:58.024: INFO: At 2020-01-24 20:30:10 +0000 UTC - event for client-a-dz6wx: {kubelet workload-cluster-4-md-0-5c7f78dbc8-rnz88} Created: Created container client-a-container | |
| Jan 24 20:30:58.024: INFO: At 2020-01-24 20:30:10 +0000 UTC - event for client-a-dz6wx: {kubelet workload-cluster-4-md-0-5c7f78dbc8-rnz88} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine | |
| Jan 24 20:30:58.024: INFO: At 2020-01-24 20:30:11 +0000 UTC - event for client-a-dz6wx: {kubelet workload-cluster-4-md-0-5c7f78dbc8-rnz88} Started: Started container client-a-container | |
| Jan 24 20:30:58.025: INFO: At 2020-01-24 20:30:57 +0000 UTC - event for server-hl4lx: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Killing: Stopping container server-container-80 | |
| Jan 24 20:30:58.026: INFO: At 2020-01-24 20:30:57 +0000 UTC - event for server-hl4lx: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Killing: Stopping container server-container-81 | |
| Jan 24 20:30:58.038: INFO: POD NODE PHASE GRACE CONDITIONS | |
| Jan 24 20:30:58.038: INFO: server-hl4lx workload-cluster-4-md-0-5c7f78dbc8-tgjll Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 20:29:51 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 20:30:03 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 20:30:03 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 20:29:51 +0000 UTC }] | |
| Jan 24 20:30:58.038: INFO: | |
| Jan 24 20:30:58.066: INFO: | |
| Logging node info for node workload-cluster-4-controlplane-0 | |
| Jan 24 20:30:58.090: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-controlplane-0 /api/v1/nodes/workload-cluster-4-controlplane-0 82506459-dd0f-48e7-bd9d-75efde1111f6 36148 0 2020-01-24 17:18:10 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-controlplane-0 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-controlplane-0"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.28.186/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.210.0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://4204e6d6-9184-b271-f650-0dabd9f75516,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229202944 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124345344 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:50 +0000 UTC,LastTransitionTime:2020-01-24 20:20:50 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:30:04 +0000 UTC,LastTransitionTime:2020-01-24 17:18:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:30:04 +0000 UTC,LastTransitionTime:2020-01-24 17:18:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:30:04 +0000 UTC,LastTransitionTime:2020-01-24 17:18:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:30:04 +0000 UTC,LastTransitionTime:2020-01-24 17:44:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-controlplane-0,},NodeAddress{Type:ExternalIP,Address:10.193.28.186,},NodeAddress{Type:InternalIP,Address:10.193.28.186,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:80b9ac3a81a84f4e8f29dac5e41ffd1f,SystemUUID:D6E60442-8491-71B2-F650-0DABD9F75516,BootID:0411b150-3c1b-4216-a7bc-d4a359e6753d,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/syncer@sha256:fc80ec77a2ab4b58ddfa259a938f6d741933566011d56e5ffcc8680cc83538fe gcr.io/cloud-provider-vsphere/csi/release/syncer:v1.0.1],SizeBytes:38454608,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner:v1.2.1],SizeBytes:18924297,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/cpi/release/manager@sha256:64de5c7f10e55703142383fade40886091528ca505f00c98d57e27f10f04fc03 gcr.io/cloud-provider-vsphere/cpi/release/manager:v1.1.0],SizeBytes:16201394,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher:v1.1.1],SizeBytes:15526843,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:30:58.091: INFO: | |
| Logging kubelet events for node workload-cluster-4-controlplane-0 | |
| Jan 24 20:30:58.117: INFO: | |
| Logging pods the kubelet thinks is on node workload-cluster-4-controlplane-0 | |
| Jan 24 20:30:58.138: INFO: vsphere-cloud-controller-manager-xgbwj started at 2020-01-24 17:19:42 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:30:58.138: INFO: Container vsphere-cloud-controller-manager ready: true, restart count 0 | |
| Jan 24 20:30:58.138: INFO: vsphere-csi-controller-0 started at 2020-01-24 17:44:46 +0000 UTC (0+5 container statuses recorded) | |
| Jan 24 20:30:58.138: INFO: Container csi-attacher ready: true, restart count 0 | |
| Jan 24 20:30:58.138: INFO: Container csi-provisioner ready: true, restart count 0 | |
| Jan 24 20:30:58.138: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:30:58.138: INFO: Container vsphere-csi-controller ready: true, restart count 0 | |
| Jan 24 20:30:58.138: INFO: Container vsphere-syncer ready: true, restart count 0 | |
| Jan 24 20:30:58.138: INFO: kube-controller-manager-workload-cluster-4-controlplane-0 started at 2020-01-24 17:17:56 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:30:58.138: INFO: Container kube-controller-manager ready: true, restart count 2 | |
| Jan 24 20:30:58.138: INFO: kube-apiserver-workload-cluster-4-controlplane-0 started at 2020-01-24 17:17:56 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:30:58.139: INFO: Container kube-apiserver ready: true, restart count 0 | |
| Jan 24 20:30:58.139: INFO: kube-scheduler-workload-cluster-4-controlplane-0 started at 2020-01-24 17:17:56 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:30:58.139: INFO: Container kube-scheduler ready: true, restart count 1 | |
| Jan 24 20:30:58.139: INFO: kube-proxy-ng9fc started at 2020-01-24 17:19:39 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:30:58.139: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:30:58.139: INFO: vsphere-csi-node-bvs7m started at 2020-01-24 17:44:44 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:30:58.139: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:30:58.139: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:30:58.139: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:30:58.139: INFO: calico-node-dtwvg started at 2020-01-24 20:19:59 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:30:58.139: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:30:58.139: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:30:58.139: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:30:58.139: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:30:58.139: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-t9xfs started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:30:58.139: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:30:58.139: INFO: Container systemd-logs ready: true, restart count 0 | |
| Jan 24 20:30:58.139: INFO: etcd-workload-cluster-4-controlplane-0 started at 2020-01-24 17:17:56 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:30:58.139: INFO: Container etcd ready: true, restart count 0 | |
| W0124 20:30:58.156100 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:30:58.319: INFO: | |
| Latency metrics for node workload-cluster-4-controlplane-0 | |
| Jan 24 20:30:58.319: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:30:58.323: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-dj56d /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-dj56d e3b9736e-3d53-4225-9f32-9b5ed43a351d 36219 0 2020-01-24 17:31:41 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-dj56d kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-dj56d"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.22.156/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.222.0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://42045096-e00b-ae55-1d85-007f6b8efaee,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229211136 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124353536 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:50 +0000 UTC,LastTransitionTime:2020-01-24 20:20:50 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:30:20 +0000 UTC,LastTransitionTime:2020-01-24 17:31:41 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:30:20 +0000 UTC,LastTransitionTime:2020-01-24 17:31:41 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:30:20 +0000 UTC,LastTransitionTime:2020-01-24 17:31:41 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:30:20 +0000 UTC,LastTransitionTime:2020-01-24 17:44:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-dj56d,},NodeAddress{Type:ExternalIP,Address:10.193.22.156,},NodeAddress{Type:InternalIP,Address:10.193.22.156,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b52db3bea2844f019e98985de63db412,SystemUUID:96500442-0BE0-55AE-1D85-007F6B8EFAEE,BootID:4ec1b10c-fc30-41e5-9d16-834bd0f5cf13,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[docker.io/jayunit100/kube-controllers@sha256:80d3bff453f163c62d0dadf44d5056053144eceaf0d05fe8c90b861aa4d5d602 docker.io/jayunit100/kube-controllers:tkg2],SizeBytes:21160877,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:30:58.324: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:30:58.330: INFO: | |
| Logging pods the kubelet thinks is on node workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:30:58.351: INFO: kube-proxy-9cjbv started at 2020-01-24 17:31:41 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:30:58.352: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:30:58.352: INFO: calico-node-8t722 started at 2020-01-24 20:19:58 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:30:58.353: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:30:58.355: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:30:58.355: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:30:58.356: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:30:58.357: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-25557 started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:30:58.357: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:30:58.357: INFO: Container systemd-logs ready: true, restart count 0 | |
| Jan 24 20:30:58.357: INFO: vsphere-csi-node-tkb9r started at 2020-01-24 17:44:43 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:30:58.357: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:30:58.357: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:30:58.357: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:30:58.357: INFO: coredns-5644d7b6d9-m4lts started at 2020-01-24 17:44:44 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:30:58.357: INFO: Container coredns ready: true, restart count 0 | |
| Jan 24 20:30:58.357: INFO: coredns-5644d7b6d9-ph9m8 started at 2020-01-24 17:44:44 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:30:58.357: INFO: Container coredns ready: true, restart count 0 | |
| W0124 20:30:58.365404 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:30:58.450: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:30:58.450: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-l8q2f | |
| Jan 24 20:30:58.456: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-l8q2f /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-l8q2f 3be5c565-a115-4d45-a1be-f70db9c1b166 36289 0 2020-01-24 17:31:43 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-l8q2f kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-l8q2f"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.1.57/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.184.192 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://4204aa91-067b-70ca-7b40-e384413568c2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229202944 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124345344 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:23 +0000 UTC,LastTransitionTime:2020-01-24 20:20:23 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:30:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:30:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:30:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:30:50 +0000 UTC,LastTransitionTime:2020-01-24 17:44:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-l8q2f,},NodeAddress{Type:ExternalIP,Address:10.193.1.57,},NodeAddress{Type:InternalIP,Address:10.193.1.57,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:45e1a30e9fcd470795575df43a4210d5,SystemUUID:91AA0442-7B06-CA70-7B40-E384413568C2,BootID:3aae672c-d64c-4549-8c6f-2c6720e8ef80,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:0ae49fdc22b3d81e85b98ea4587628bb78f49eaf5c14cdb47a13d8918d62cc00 gcr.io/google-containers/conformance:v1.16.3],SizeBytes:195864937,},ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:30:58.456: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-l8q2f | |
| Jan 24 20:30:58.462: INFO: | |
| Logging pods the kubelet thinks is on node workload-cluster-4-md-0-5c7f78dbc8-l8q2f | |
| Jan 24 20:30:58.481: INFO: sonobuoy-e2e-job-60496bb95c8b4e15 started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:30:58.481: INFO: Container e2e ready: true, restart count 0 | |
| Jan 24 20:30:58.481: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:30:58.481: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-pvvzm started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:30:58.481: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:30:58.481: INFO: Container systemd-logs ready: true, restart count 0 | |
| Jan 24 20:30:58.481: INFO: kube-proxy-pw7c7 started at 2020-01-24 17:31:43 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:30:58.481: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:30:58.481: INFO: calico-node-rqffr started at 2020-01-24 20:19:59 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:30:58.481: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:30:58.481: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:30:58.481: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:30:58.481: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:30:58.481: INFO: vsphere-csi-node-bml6x started at 2020-01-24 17:44:44 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:30:58.481: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:30:58.481: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:30:58.481: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| W0124 20:30:58.487677 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:30:58.601: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-l8q2f | |
| Jan 24 20:30:58.601: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:30:58.609: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-rnz88 /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-rnz88 6a50990d-c3aa-408e-89a9-b96a59028428 36206 0 2020-01-24 17:31:34 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-rnz88 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-rnz88"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.19.181/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.158.1 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://42041b55-5b6a-2e5f-e33f-ad3ffa7f7b6e,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229211136 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124353536 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:37 +0000 UTC,LastTransitionTime:2020-01-24 20:20:37 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:30:13 +0000 UTC,LastTransitionTime:2020-01-24 17:31:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:30:13 +0000 UTC,LastTransitionTime:2020-01-24 17:31:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:30:13 +0000 UTC,LastTransitionTime:2020-01-24 17:31:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:30:13 +0000 UTC,LastTransitionTime:2020-01-24 17:44:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-rnz88,},NodeAddress{Type:ExternalIP,Address:10.193.19.181,},NodeAddress{Type:InternalIP,Address:10.193.19.181,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:89e27edae47c4afabb7c067d037286ec,SystemUUID:551B0442-6A5B-5F2E-E33F-AD3FFA7F7B6E,BootID:d2a8e20a-f6ad-4b68-b732-e0d7bdbcc492,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:0ae49fdc22b3d81e85b98ea4587628bb78f49eaf5c14cdb47a13d8918d62cc00 gcr.io/google-containers/conformance:v1.16.3],SizeBytes:195864937,},ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:4a7190a4731cf3340c39ab33abf845baf6c81d911c965a879fffc1552e1a1938 docker.io/calico/kube-controllers:v3.10.3],SizeBytes:21175514,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:30:58.610: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:30:58.628: INFO: | |
| Logging pods the kubelet thinks is on node workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:30:58.649: INFO: kube-proxy-hwjc4 started at 2020-01-24 17:31:34 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:30:58.650: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:30:58.650: INFO: calico-node-5g8mr started at 2020-01-24 20:19:58 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:30:58.650: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:30:58.650: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:30:58.650: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:30:58.650: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:30:58.650: INFO: vsphere-csi-node-6nzwf started at 2020-01-24 17:44:44 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:30:58.650: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:30:58.650: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:30:58.650: INFO: Container vsphere-csi-node ready: false, restart count 7 | |
| Jan 24 20:30:58.650: INFO: calico-kube-controllers-7489ff5b7c-sq5gk started at 2020-01-24 20:20:00 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:30:58.650: INFO: Container calico-kube-controllers ready: true, restart count 0 | |
| Jan 24 20:30:58.650: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-zjw2s started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:30:58.650: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:30:58.650: INFO: Container systemd-logs ready: true, restart count 0 | |
| W0124 20:30:58.658947 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:30:58.768: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:30:58.768: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:30:58.773: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-tgjll /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-tgjll efb8ee46-7fce-4349-92f2-46fa03f02d30 36288 0 2020-01-24 17:31:37 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-tgjll kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-tgjll"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.25.9/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.225.128 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://4204f234-cdde-565a-8623-6c3c5457e3cc,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229202944 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124345344 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:26 +0000 UTC,LastTransitionTime:2020-01-24 20:20:26 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:30:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:30:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:30:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:30:50 +0000 UTC,LastTransitionTime:2020-01-24 17:44:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-tgjll,},NodeAddress{Type:ExternalIP,Address:10.193.25.9,},NodeAddress{Type:InternalIP,Address:10.193.25.9,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bb2c77d8141443fd8503de52dbae52e8,SystemUUID:34F20442-DECD-5A56-8623-6C3C5457E3CC,BootID:849ca1bb-d08e-417e-bf12-485a70ffc44a,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:0ae49fdc22b3d81e85b98ea4587628bb78f49eaf5c14cdb47a13d8918d62cc00 gcr.io/google-containers/conformance:v1.16.3],SizeBytes:195864937,},ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 
gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:30:58.774: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:30:58.780: INFO: | |
| Logging pods the kubelet thinks is on node workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:30:58.799: INFO: kube-proxy-wp7tn started at 2020-01-24 17:31:37 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:30:58.799: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:30:58.799: INFO: calico-node-wvctw started at 2020-01-24 20:19:58 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:30:58.799: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:30:58.799: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:30:58.799: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:30:58.799: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:30:58.799: INFO: vsphere-csi-node-rqv42 started at 2020-01-24 17:44:49 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:30:58.799: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:30:58.799: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:30:58.799: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:30:58.799: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-prfkd started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:30:58.799: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:30:58.799: INFO: Container systemd-logs ready: true, restart count 0 | |
| Jan 24 20:30:58.799: INFO: server-hl4lx started at 2020-01-24 20:29:51 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:30:58.799: INFO: Container server-container-80 ready: false, restart count 0 | |
| Jan 24 20:30:58.799: INFO: Container server-container-81 ready: false, restart count 0 | |
| W0124 20:30:58.806513 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:30:58.940: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:30:58.940: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:30:58.946: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-vvrzt /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-vvrzt 200933f4-8d1c-469c-b168-43a6ccbf5d04 36287 0 2020-01-24 17:31:43 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-vvrzt kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-vvrzt"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.7.86/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.163.64 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.5.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://42048f3c-cde1-2850-a851-6c4d757659cc,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.5.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229202944 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124345344 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:23 +0000 UTC,LastTransitionTime:2020-01-24 20:20:23 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:30:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:30:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:30:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:30:50 +0000 UTC,LastTransitionTime:2020-01-24 17:44:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-vvrzt,},NodeAddress{Type:ExternalIP,Address:10.193.7.86,},NodeAddress{Type:InternalIP,Address:10.193.7.86,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5c2ea802c8014c7aa88cb9bbbea1711b,SystemUUID:3C8F0442-E1CD-5028-A851-6C4D757659CC,BootID:9e70f010-f55d-4f1e-9e44-645f0afe4c09,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 
k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:30:58.946: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:30:58.953: INFO: | |
| Logging pods the kubelet thinks is on node workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:30:58.982: INFO: kube-proxy-5jvhn started at 2020-01-24 17:31:43 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:30:58.982: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:30:58.982: INFO: calico-node-zpwz4 started at 2020-01-24 20:19:59 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:30:58.982: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:30:58.982: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:30:58.982: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:30:58.982: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:30:58.982: INFO: vsphere-csi-node-ttr82 started at 2020-01-24 17:44:44 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:30:58.982: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:30:58.982: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:30:58.982: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:30:58.982: INFO: sonobuoy started at 2020-01-24 20:21:31 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:30:58.982: INFO: Container kube-sonobuoy ready: true, restart count 0 | |
| Jan 24 20:30:58.982: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-9jn4t started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:30:58.982: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:30:58.982: INFO: Container systemd-logs ready: true, restart count 0 | |
| W0124 20:30:58.987887 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:30:59.085: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:30:59.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
| STEP: Destroying namespace "network-policy-7959" for this suite. | |
| Jan 24 20:31:05.115: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered | |
| Jan 24 20:31:05.206: INFO: namespace network-policy-7959 deletion completed in 6.113749559s | |
| • Failure [73.855 seconds] | |
| [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 | |
| NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:56 | |
| should allow ingress access from updated pod [Feature:NetworkPolicy] [It] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:824 | |
| Jan 24 20:30:57.884: Error getting container logs: the server rejected our request for an unknown reason (get pods client-a-dz6wx) | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:1421 | |
| ------------------------------ | |
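Note on the failure above: the spec "should allow ingress access from updated pod" dies at network_policy.go:1421, i.e. while the framework is collecting logs from the client pod after the connectivity probe ("Error getting container logs: the server rejected our request"), so the probe's own output is lost. A minimal sketch for pulling those logs by hand, using the pod and namespace names from the failure; this only works while the test is still running, since the namespace is destroyed about six seconds later:

# Sketch: fetch the client pod's output directly (names come from the
# failure message and the namespace teardown above).
kubectl -n network-policy-7959 logs client-a-dz6wx

# Or re-run just this spec through sonobuoy's focus filter:
sonobuoy run --e2e-focus "should allow ingress access from updated pod"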
| SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS | |
| ------------------------------ | |
| [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client | |
| should enforce policy based on Ports [Feature:NetworkPolicy] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:459 | |
| [BeforeEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
| STEP: Creating a kubernetes client | |
| Jan 24 20:31:05.231: INFO: >>> kubeConfig: /tmp/kubeconfig-418650170 | |
| STEP: Building a namespace api object, basename network-policy | |
| STEP: Waiting for a default service account to be provisioned in namespace | |
| [BeforeEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:51 | |
| [BeforeEach] NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:57 | |
| STEP: Creating a simple server that serves on port 80 and 81. | |
| STEP: Creating a server pod server in namespace network-policy-6423 | |
| Jan 24 20:31:05.283: INFO: Created pod server-2xmfk | |
| STEP: Creating a service svc-server for pod server in namespace network-policy-6423 | |
| Jan 24 20:31:05.333: INFO: Created service svc-server | |
| STEP: Waiting for pod ready | |
| STEP: Testing pods can connect to both ports when no policy is present. | |
| STEP: Creating client pod client-can-connect-80 that should successfully connect to svc-server. | |
| Jan 24 20:31:15.361: INFO: Waiting for client-can-connect-80-bkbf5 to complete. | |
| Jan 24 20:31:17.371: INFO: Waiting for client-can-connect-80-bkbf5 to complete. | |
| Jan 24 20:31:17.371: INFO: Waiting up to 5m0s for pod "client-can-connect-80-bkbf5" in namespace "network-policy-6423" to be "success or failure" | |
| Jan 24 20:31:17.375: INFO: Pod "client-can-connect-80-bkbf5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 3.391768ms | |
| STEP: Saw pod success | |
| Jan 24 20:31:17.375: INFO: Pod "client-can-connect-80-bkbf5" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-can-connect-80-bkbf5 | |
| STEP: Creating client pod client-can-connect-81 that should successfully connect to svc-server. | |
| Jan 24 20:31:17.403: INFO: Waiting for client-can-connect-81-x87mg to complete. | |
| Jan 24 20:31:19.419: INFO: Waiting for client-can-connect-81-x87mg to complete. | |
| Jan 24 20:31:19.419: INFO: Waiting up to 5m0s for pod "client-can-connect-81-x87mg" in namespace "network-policy-6423" to be "success or failure" | |
| Jan 24 20:31:19.423: INFO: Pod "client-can-connect-81-x87mg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.252789ms | |
| STEP: Saw pod success | |
| Jan 24 20:31:19.423: INFO: Pod "client-can-connect-81-x87mg" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-can-connect-81-x87mg | |
| [It] should enforce policy based on Ports [Feature:NetworkPolicy] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:459 | |
| STEP: Creating a network policy for the Service which allows traffic only to one port. | |
| STEP: Testing pods can connect only to the port allowed by the policy. | |
| STEP: Creating client pod client-a that should not be able to connect to svc-server. | |
| Jan 24 20:31:19.452: INFO: Waiting for client-a-jpc5f to complete. | |
| Jan 24 20:31:19.452: INFO: Waiting up to 5m0s for pod "client-a-jpc5f" in namespace "network-policy-6423" to be "success or failure" | |
| Jan 24 20:31:19.463: INFO: Pod "client-a-jpc5f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.301673ms | |
| Jan 24 20:31:21.467: INFO: Pod "client-a-jpc5f": Phase="Running", Reason="", readiness=true. Elapsed: 2.014885866s | |
| Jan 24 20:31:23.472: INFO: Pod "client-a-jpc5f": Phase="Running", Reason="", readiness=true. Elapsed: 4.020052023s | |
| Jan 24 20:31:25.477: INFO: Pod "client-a-jpc5f": Phase="Running", Reason="", readiness=true. Elapsed: 6.024189199s | |
| Jan 24 20:31:27.480: INFO: Pod "client-a-jpc5f": Phase="Running", Reason="", readiness=true. Elapsed: 8.027875768s | |
| Jan 24 20:31:29.488: INFO: Pod "client-a-jpc5f": Phase="Running", Reason="", readiness=true. Elapsed: 10.035445524s | |
| Jan 24 20:31:31.492: INFO: Pod "client-a-jpc5f": Phase="Running", Reason="", readiness=true. Elapsed: 12.040117392s | |
| Jan 24 20:31:33.497: INFO: Pod "client-a-jpc5f": Phase="Running", Reason="", readiness=true. Elapsed: 14.04469521s | |
| Jan 24 20:31:35.501: INFO: Pod "client-a-jpc5f": Phase="Running", Reason="", readiness=true. Elapsed: 16.048644564s | |
| Jan 24 20:31:37.510: INFO: Pod "client-a-jpc5f": Phase="Running", Reason="", readiness=true. Elapsed: 18.057750497s | |
| Jan 24 20:31:39.514: INFO: Pod "client-a-jpc5f": Phase="Running", Reason="", readiness=true. Elapsed: 20.062120904s | |
| Jan 24 20:31:41.519: INFO: Pod "client-a-jpc5f": Phase="Running", Reason="", readiness=true. Elapsed: 22.066537511s | |
| Jan 24 20:31:43.525: INFO: Pod "client-a-jpc5f": Phase="Running", Reason="", readiness=true. Elapsed: 24.072318816s | |
| Jan 24 20:31:45.529: INFO: Pod "client-a-jpc5f": Phase="Running", Reason="", readiness=true. Elapsed: 26.076466047s | |
| Jan 24 20:31:47.533: INFO: Pod "client-a-jpc5f": Phase="Running", Reason="", readiness=true. Elapsed: 28.081110705s | |
| Jan 24 20:31:49.538: INFO: Pod "client-a-jpc5f": Phase="Running", Reason="", readiness=true. Elapsed: 30.085376006s | |
| Jan 24 20:31:51.544: INFO: Pod "client-a-jpc5f": Phase="Running", Reason="", readiness=true. Elapsed: 32.091351021s | |
| Jan 24 20:31:53.549: INFO: Pod "client-a-jpc5f": Phase="Running", Reason="", readiness=true. Elapsed: 34.096293166s | |
| Jan 24 20:31:55.553: INFO: Pod "client-a-jpc5f": Phase="Running", Reason="", readiness=true. Elapsed: 36.10055667s | |
| Jan 24 20:31:57.557: INFO: Pod "client-a-jpc5f": Phase="Running", Reason="", readiness=true. Elapsed: 38.105008967s | |
| Jan 24 20:31:59.562: INFO: Pod "client-a-jpc5f": Phase="Running", Reason="", readiness=true. Elapsed: 40.109721461s | |
| Jan 24 20:32:01.567: INFO: Pod "client-a-jpc5f": Phase="Running", Reason="", readiness=true. Elapsed: 42.11461068s | |
| Jan 24 20:32:03.574: INFO: Pod "client-a-jpc5f": Phase="Running", Reason="", readiness=true. Elapsed: 44.121755554s | |
| Jan 24 20:32:05.579: INFO: Pod "client-a-jpc5f": Phase="Running", Reason="", readiness=true. Elapsed: 46.126291619s | |
| Jan 24 20:32:07.583: INFO: Pod "client-a-jpc5f": Phase="Failed", Reason="", readiness=false. Elapsed: 48.130460577s | |
| STEP: Cleaning up the pod client-a-jpc5f | |
| STEP: Creating client pod client-b that should successfully connect to svc-server. | |
| Jan 24 20:32:07.620: INFO: Waiting for client-b-ksld7 to complete. | |
| Jan 24 20:32:09.631: INFO: Waiting for client-b-ksld7 to complete. | |
| Jan 24 20:32:09.631: INFO: Waiting up to 5m0s for pod "client-b-ksld7" in namespace "network-policy-6423" to be "success or failure" | |
| Jan 24 20:32:09.641: INFO: Pod "client-b-ksld7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.727414ms | |
| STEP: Saw pod success | |
| Jan 24 20:32:09.641: INFO: Pod "client-b-ksld7" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-b-ksld7 | |
| STEP: Cleaning up the policy. | |
| [AfterEach] NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:75 | |
| STEP: Cleaning up the server. | |
| STEP: Cleaning up the server's service. | |
| [AfterEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
| Jan 24 20:32:09.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
| STEP: Destroying namespace "network-policy-6423" for this suite. | |
| Jan 24 20:32:15.772: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered | |
| Jan 24 20:32:15.880: INFO: namespace network-policy-6423 deletion completed in 6.129766778s | |
| • [SLOW TEST:70.649 seconds] | |
| [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 | |
| NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:56 | |
| should enforce policy based on Ports [Feature:NetworkPolicy] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:459 | |
| ------------------------------ | |
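The spec above passes end to end: with no policy in place both ports accept traffic, and after the policy is applied only one port does (client-a is denied, client-b connects). For reference, a hand-written sketch of a port-scoped ingress policy in the same shape as the one the test generates; the pod-name label and the allowed port 81 are assumptions inferred from the log, not the exact object the framework created:

# Sketch of an allow-one-port ingress policy; label key/value and the
# port number are illustrative assumptions.
kubectl -n network-policy-6423 apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-port-81
spec:
  podSelector:
    matchLabels:
      pod-name: server
  ingress:
  - ports:
    - port: 81
EOF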
| SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS | |
| ------------------------------ | |
| [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client | |
| should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:337 | |
| [BeforeEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
| STEP: Creating a kubernetes client | |
| Jan 24 20:32:15.889: INFO: >>> kubeConfig: /tmp/kubeconfig-418650170 | |
| STEP: Building a namespace api object, basename network-policy | |
| STEP: Waiting for a default service account to be provisioned in namespace | |
| [BeforeEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:51 | |
| [BeforeEach] NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:57 | |
| STEP: Creating a simple server that serves on port 80 and 81. | |
| STEP: Creating a server pod server in namespace network-policy-2853 | |
| Jan 24 20:32:15.942: INFO: Created pod server-k7c85 | |
| STEP: Creating a service svc-server for pod server in namespace network-policy-2853 | |
| Jan 24 20:32:15.996: INFO: Created service svc-server | |
| STEP: Waiting for pod ready | |
| STEP: Testing pods can connect to both ports when no policy is present. | |
| STEP: Creating client pod client-can-connect-80 that should successfully connect to svc-server. | |
| Jan 24 20:32:28.075: INFO: Waiting for client-can-connect-80-wb8gv to complete. | |
| Jan 24 20:32:32.107: INFO: Waiting for client-can-connect-80-wb8gv to complete. | |
| Jan 24 20:32:32.107: INFO: Waiting up to 5m0s for pod "client-can-connect-80-wb8gv" in namespace "network-policy-2853" to be "success or failure" | |
| Jan 24 20:32:32.113: INFO: Pod "client-can-connect-80-wb8gv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.408719ms | |
| STEP: Saw pod success | |
| Jan 24 20:32:32.113: INFO: Pod "client-can-connect-80-wb8gv" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-can-connect-80-wb8gv | |
| STEP: Creating client pod client-can-connect-81 that should successfully connect to svc-server. | |
| Jan 24 20:32:32.148: INFO: Waiting for client-can-connect-81-9wf4x to complete. | |
| Jan 24 20:32:34.173: INFO: Waiting for client-can-connect-81-9wf4x to complete. | |
| Jan 24 20:32:34.173: INFO: Waiting up to 5m0s for pod "client-can-connect-81-9wf4x" in namespace "network-policy-2853" to be "success or failure" | |
| Jan 24 20:32:34.176: INFO: Pod "client-can-connect-81-9wf4x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 3.413749ms | |
| STEP: Saw pod success | |
| Jan 24 20:32:34.176: INFO: Pod "client-can-connect-81-9wf4x" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-can-connect-81-9wf4x | |
| [It] should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:337 | |
| STEP: Creating a network policy for the server which allows traffic from client-b in namespace-b. | |
| STEP: Creating client pod client-a that should not be able to connect to svc-server. | |
| Jan 24 20:32:34.295: INFO: Waiting for client-a-xl4mt to complete. | |
| Jan 24 20:32:34.295: INFO: Waiting up to 5m0s for pod "client-a-xl4mt" in namespace "network-policy-b-5376" to be "success or failure" | |
| Jan 24 20:32:34.309: INFO: Pod "client-a-xl4mt": Phase="Pending", Reason="", readiness=false. Elapsed: 13.779588ms | |
| Jan 24 20:32:36.316: INFO: Pod "client-a-xl4mt": Phase="Running", Reason="", readiness=true. Elapsed: 2.021077963s | |
| Jan 24 20:32:38.324: INFO: Pod "client-a-xl4mt": Phase="Running", Reason="", readiness=true. Elapsed: 4.029445889s | |
| Jan 24 20:32:40.332: INFO: Pod "client-a-xl4mt": Phase="Running", Reason="", readiness=true. Elapsed: 6.03716908s | |
| Jan 24 20:32:42.336: INFO: Pod "client-a-xl4mt": Phase="Running", Reason="", readiness=true. Elapsed: 8.040729268s | |
| Jan 24 20:32:44.340: INFO: Pod "client-a-xl4mt": Phase="Running", Reason="", readiness=true. Elapsed: 10.04471647s | |
| Jan 24 20:32:46.344: INFO: Pod "client-a-xl4mt": Phase="Running", Reason="", readiness=true. Elapsed: 12.04873195s | |
| Jan 24 20:32:48.348: INFO: Pod "client-a-xl4mt": Phase="Running", Reason="", readiness=true. Elapsed: 14.052828513s | |
| Jan 24 20:32:50.352: INFO: Pod "client-a-xl4mt": Phase="Running", Reason="", readiness=true. Elapsed: 16.05675376s | |
| Jan 24 20:32:52.357: INFO: Pod "client-a-xl4mt": Phase="Running", Reason="", readiness=true. Elapsed: 18.061781649s | |
| Jan 24 20:32:54.361: INFO: Pod "client-a-xl4mt": Phase="Running", Reason="", readiness=true. Elapsed: 20.066633041s | |
| Jan 24 20:32:56.366: INFO: Pod "client-a-xl4mt": Phase="Running", Reason="", readiness=true. Elapsed: 22.070871679s | |
| Jan 24 20:32:58.370: INFO: Pod "client-a-xl4mt": Phase="Running", Reason="", readiness=true. Elapsed: 24.075237078s | |
| Jan 24 20:33:00.375: INFO: Pod "client-a-xl4mt": Phase="Running", Reason="", readiness=true. Elapsed: 26.079821739s | |
| Jan 24 20:33:02.382: INFO: Pod "client-a-xl4mt": Phase="Running", Reason="", readiness=true. Elapsed: 28.087240367s | |
| Jan 24 20:33:04.386: INFO: Pod "client-a-xl4mt": Phase="Running", Reason="", readiness=true. Elapsed: 30.091077272s | |
| Jan 24 20:33:06.390: INFO: Pod "client-a-xl4mt": Phase="Running", Reason="", readiness=true. Elapsed: 32.094674192s | |
| Jan 24 20:33:08.393: INFO: Pod "client-a-xl4mt": Phase="Running", Reason="", readiness=true. Elapsed: 34.098641707s | |
| Jan 24 20:33:10.398: INFO: Pod "client-a-xl4mt": Phase="Running", Reason="", readiness=true. Elapsed: 36.102658555s | |
| Jan 24 20:33:12.402: INFO: Pod "client-a-xl4mt": Phase="Running", Reason="", readiness=true. Elapsed: 38.107336417s | |
| Jan 24 20:33:14.406: INFO: Pod "client-a-xl4mt": Phase="Running", Reason="", readiness=true. Elapsed: 40.111392604s | |
| Jan 24 20:33:16.411: INFO: Pod "client-a-xl4mt": Phase="Running", Reason="", readiness=true. Elapsed: 42.115803553s | |
| Jan 24 20:33:18.415: INFO: Pod "client-a-xl4mt": Phase="Running", Reason="", readiness=true. Elapsed: 44.120474747s | |
| Jan 24 20:33:20.420: INFO: Pod "client-a-xl4mt": Phase="Running", Reason="", readiness=true. Elapsed: 46.125130792s | |
| Jan 24 20:33:22.424: INFO: Pod "client-a-xl4mt": Phase="Failed", Reason="", readiness=false. Elapsed: 48.129109237s | |
| STEP: Cleaning up the pod client-a-xl4mt | |
| STEP: Creating client pod client-b that should not be able to connect to svc-server. | |
| Jan 24 20:33:22.453: INFO: Waiting for client-b-gvl59 to complete. | |
| Jan 24 20:33:22.453: INFO: Waiting up to 5m0s for pod "client-b-gvl59" in namespace "network-policy-2853" to be "success or failure" | |
| Jan 24 20:33:22.456: INFO: Pod "client-b-gvl59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.740982ms | |
| Jan 24 20:33:24.461: INFO: Pod "client-b-gvl59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007653365s | |
| Jan 24 20:33:26.465: INFO: Pod "client-b-gvl59": Phase="Running", Reason="", readiness=true. Elapsed: 4.012167994s | |
| Jan 24 20:33:28.470: INFO: Pod "client-b-gvl59": Phase="Running", Reason="", readiness=true. Elapsed: 6.016189656s | |
| Jan 24 20:33:30.475: INFO: Pod "client-b-gvl59": Phase="Running", Reason="", readiness=true. Elapsed: 8.021577102s | |
| Jan 24 20:33:32.479: INFO: Pod "client-b-gvl59": Phase="Running", Reason="", readiness=true. Elapsed: 10.025959854s | |
| Jan 24 20:33:34.484: INFO: Pod "client-b-gvl59": Phase="Running", Reason="", readiness=true. Elapsed: 12.030973145s | |
| Jan 24 20:33:36.488: INFO: Pod "client-b-gvl59": Phase="Running", Reason="", readiness=true. Elapsed: 14.034885521s | |
| Jan 24 20:33:38.493: INFO: Pod "client-b-gvl59": Phase="Running", Reason="", readiness=true. Elapsed: 16.039503881s | |
| Jan 24 20:33:40.499: INFO: Pod "client-b-gvl59": Phase="Running", Reason="", readiness=true. Elapsed: 18.045315235s | |
| Jan 24 20:33:42.504: INFO: Pod "client-b-gvl59": Phase="Running", Reason="", readiness=true. Elapsed: 20.050808135s | |
| Jan 24 20:33:44.509: INFO: Pod "client-b-gvl59": Phase="Running", Reason="", readiness=true. Elapsed: 22.055480874s | |
| Jan 24 20:33:46.516: INFO: Pod "client-b-gvl59": Phase="Running", Reason="", readiness=true. Elapsed: 24.062659685s | |
| Jan 24 20:33:48.521: INFO: Pod "client-b-gvl59": Phase="Running", Reason="", readiness=true. Elapsed: 26.067987996s | |
| Jan 24 20:33:50.533: INFO: Pod "client-b-gvl59": Phase="Running", Reason="", readiness=true. Elapsed: 28.079809514s | |
| Jan 24 20:33:52.538: INFO: Pod "client-b-gvl59": Phase="Running", Reason="", readiness=true. Elapsed: 30.084851395s | |
| Jan 24 20:33:54.542: INFO: Pod "client-b-gvl59": Phase="Running", Reason="", readiness=true. Elapsed: 32.088970982s | |
| Jan 24 20:33:56.547: INFO: Pod "client-b-gvl59": Phase="Running", Reason="", readiness=true. Elapsed: 34.094066458s | |
| Jan 24 20:33:58.552: INFO: Pod "client-b-gvl59": Phase="Running", Reason="", readiness=true. Elapsed: 36.098221953s | |
| Jan 24 20:34:00.579: INFO: Pod "client-b-gvl59": Phase="Running", Reason="", readiness=true. Elapsed: 38.125442497s | |
| Jan 24 20:34:02.585: INFO: Pod "client-b-gvl59": Phase="Running", Reason="", readiness=true. Elapsed: 40.132021999s | |
| Jan 24 20:34:04.590: INFO: Pod "client-b-gvl59": Phase="Running", Reason="", readiness=true. Elapsed: 42.136285571s | |
| Jan 24 20:34:06.594: INFO: Pod "client-b-gvl59": Phase="Running", Reason="", readiness=true. Elapsed: 44.141002203s | |
| Jan 24 20:34:08.599: INFO: Pod "client-b-gvl59": Phase="Running", Reason="", readiness=true. Elapsed: 46.145447393s | |
| Jan 24 20:34:10.611: INFO: Pod "client-b-gvl59": Phase="Failed", Reason="", readiness=false. Elapsed: 48.157775741s | |
| STEP: Cleaning up the pod client-b-gvl59 | |
| STEP: Creating client pod client-b that should successfully connect to svc-server. | |
| Jan 24 20:34:10.644: INFO: Waiting for client-b-fx6jq to complete. | |
| Jan 24 20:34:58.668: INFO: Waiting for client-b-fx6jq to complete. | |
| Jan 24 20:34:58.668: INFO: Waiting up to 5m0s for pod "client-b-fx6jq" in namespace "network-policy-b-5376" to be "success or failure" | |
| Jan 24 20:34:58.672: INFO: Pod "client-b-fx6jq": Phase="Failed", Reason="", readiness=false. Elapsed: 3.978962ms | |
| Jan 24 20:34:58.676: FAIL: Error getting container logs: the server could not find the requested resource (get pods client-b-fx6jq) | |
| STEP: Cleaning up the pod client-b-fx6jq | |
| STEP: Cleaning up the policy. | |
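In this spec the first two clients are correctly denied, but client-b-fx6jq, created in network-policy-b-5376 and expected to be allowed, also ends in Phase=Failed, and the follow-up log fetch returns "the server could not find the requested resource", so the denial itself is the failure and its details are lost. For reference, a hand-written sketch of the AND-ed selector rule the earlier "allows traffic from client-b in namespace-b" step describes; all label keys and values below are assumptions, and note that placing namespaceSelector and podSelector in a single `from` entry means a peer must match both:

# Sketch of a combined namespaceSelector+podSelector ingress rule; the
# label keys/values are illustrative assumptions, not the exact object
# the framework created.
kubectl -n network-policy-2853 apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-client-b-from-ns-b
spec:
  podSelector:
    matchLabels:
      pod-name: server
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          ns-name: network-policy-b
      podSelector:
        matchLabels:
          pod-name: client-b
EOF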
| [AfterEach] NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:75 | |
| STEP: Cleaning up the server. | |
| STEP: Cleaning up the server's service. | |
| [AfterEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
| STEP: Collecting events from namespace "network-policy-2853". | |
| STEP: Found 20 events. | |
| Jan 24 20:34:58.773: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-b-gvl59: {default-scheduler } Scheduled: Successfully assigned network-policy-2853/client-b-gvl59 to workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:34:58.773: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-80-wb8gv: {default-scheduler } Scheduled: Successfully assigned network-policy-2853/client-can-connect-80-wb8gv to workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:34:58.774: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-81-9wf4x: {default-scheduler } Scheduled: Successfully assigned network-policy-2853/client-can-connect-81-9wf4x to workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:34:58.774: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for server-k7c85: {default-scheduler } Scheduled: Successfully assigned network-policy-2853/server-k7c85 to workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:34:58.774: INFO: At 2020-01-24 20:32:16 +0000 UTC - event for server-k7c85: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.6" already present on machine | |
| Jan 24 20:34:58.774: INFO: At 2020-01-24 20:32:17 +0000 UTC - event for server-k7c85: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Started: Started container server-container-80 | |
| Jan 24 20:34:58.774: INFO: At 2020-01-24 20:32:17 +0000 UTC - event for server-k7c85: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Created: Created container server-container-80 | |
| Jan 24 20:34:58.774: INFO: At 2020-01-24 20:32:17 +0000 UTC - event for server-k7c85: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Created: Created container server-container-81 | |
| Jan 24 20:34:58.775: INFO: At 2020-01-24 20:32:17 +0000 UTC - event for server-k7c85: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.6" already present on machine | |
| Jan 24 20:34:58.775: INFO: At 2020-01-24 20:32:17 +0000 UTC - event for server-k7c85: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Started: Started container server-container-81 | |
| Jan 24 20:34:58.775: INFO: At 2020-01-24 20:32:29 +0000 UTC - event for client-can-connect-80-wb8gv: {kubelet workload-cluster-4-md-0-5c7f78dbc8-rnz88} Started: Started container client-can-connect-80-container | |
| Jan 24 20:34:58.775: INFO: At 2020-01-24 20:32:29 +0000 UTC - event for client-can-connect-80-wb8gv: {kubelet workload-cluster-4-md-0-5c7f78dbc8-rnz88} Created: Created container client-can-connect-80-container | |
| Jan 24 20:34:58.775: INFO: At 2020-01-24 20:32:29 +0000 UTC - event for client-can-connect-80-wb8gv: {kubelet workload-cluster-4-md-0-5c7f78dbc8-rnz88} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine | |
| Jan 24 20:34:58.776: INFO: At 2020-01-24 20:32:33 +0000 UTC - event for client-can-connect-81-9wf4x: {kubelet workload-cluster-4-md-0-5c7f78dbc8-dj56d} Created: Created container client-can-connect-81-container | |
| Jan 24 20:34:58.776: INFO: At 2020-01-24 20:32:33 +0000 UTC - event for client-can-connect-81-9wf4x: {kubelet workload-cluster-4-md-0-5c7f78dbc8-dj56d} Started: Started container client-can-connect-81-container | |
| Jan 24 20:34:58.777: INFO: At 2020-01-24 20:32:33 +0000 UTC - event for client-can-connect-81-9wf4x: {kubelet workload-cluster-4-md-0-5c7f78dbc8-dj56d} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine | |
| Jan 24 20:34:58.777: INFO: At 2020-01-24 20:33:23 +0000 UTC - event for client-b-gvl59: {kubelet workload-cluster-4-md-0-5c7f78dbc8-rnz88} Started: Started container client-b-container | |
| Jan 24 20:34:58.778: INFO: At 2020-01-24 20:33:23 +0000 UTC - event for client-b-gvl59: {kubelet workload-cluster-4-md-0-5c7f78dbc8-rnz88} Created: Created container client-b-container | |
| Jan 24 20:34:58.778: INFO: At 2020-01-24 20:33:23 +0000 UTC - event for client-b-gvl59: {kubelet workload-cluster-4-md-0-5c7f78dbc8-rnz88} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine | |
| Jan 24 20:34:58.779: INFO: At 2020-01-24 20:34:10 +0000 UTC - event for client-b-gvl59: {kubelet workload-cluster-4-md-0-5c7f78dbc8-rnz88} FailedKillPod: error killing pod: failed to "KillPodSandbox" for "0caa3b1b-1acc-4bc6-8809-d6300a0f9522" with KillPodSandboxError: "rpc error: code = Unknown desc = failed to destroy network for sandbox \"7667ae0fe315bd868f35c679ca2b5714349ec2cf59e0c0fe4e2203c1cff179a5\": could not teardown ipv4 dnat: running [/sbin/iptables -t nat -X CNI-DN-4ed16ec52c191cffd65ee --wait]: exit status 1: iptables: No chain/target/match by that name.\n" | |
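The FailedKillPod event above shows sandbox teardown failing because the per-pod CNI DNAT chain (CNI-DN-…, created by the CNI portmap plugin) is already gone when `iptables -t nat -X` runs; that reads as teardown noise (a double delete) rather than the connectivity failure itself. A sketch for checking whether any such chains are actually left behind, run directly on the node named in the event (chain names are generated per container):

# Sketch: list leftover CNI port-mapping DNAT chains in the nat table;
# an empty result suggests the teardown error was a double-delete, not
# a real leak.
sudo iptables -t nat -S | grep -E 'CNI-(DN|HOSTPORT)' \
  || echo "no CNI DNAT chains present"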
| Jan 24 20:34:58.784: INFO: POD NODE PHASE GRACE CONDITIONS | |
| Jan 24 20:34:58.784: INFO: server-k7c85 workload-cluster-4-md-0-5c7f78dbc8-tgjll Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 20:32:15 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 20:32:26 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 20:32:26 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 20:32:15 +0000 UTC }] | |
| Jan 24 20:34:58.784: INFO: | |
| Jan 24 20:34:58.797: INFO: | |
| Logging node info for node workload-cluster-4-controlplane-0 | |
| Jan 24 20:34:58.813: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-controlplane-0 /api/v1/nodes/workload-cluster-4-controlplane-0 82506459-dd0f-48e7-bd9d-75efde1111f6 37017 0 2020-01-24 17:18:10 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-controlplane-0 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-controlplane-0"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.28.186/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.210.0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://4204e6d6-9184-b271-f650-0dabd9f75516,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229202944 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124345344 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:50 +0000 UTC,LastTransitionTime:2020-01-24 20:20:50 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:34:05 +0000 UTC,LastTransitionTime:2020-01-24 17:18:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:34:05 +0000 UTC,LastTransitionTime:2020-01-24 17:18:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:34:05 +0000 UTC,LastTransitionTime:2020-01-24 17:18:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:34:05 +0000 UTC,LastTransitionTime:2020-01-24 17:44:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-controlplane-0,},NodeAddress{Type:ExternalIP,Address:10.193.28.186,},NodeAddress{Type:InternalIP,Address:10.193.28.186,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:80b9ac3a81a84f4e8f29dac5e41ffd1f,SystemUUID:D6E60442-8491-71B2-F650-0DABD9F75516,BootID:0411b150-3c1b-4216-a7bc-d4a359e6753d,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/syncer@sha256:fc80ec77a2ab4b58ddfa259a938f6d741933566011d56e5ffcc8680cc83538fe gcr.io/cloud-provider-vsphere/csi/release/syncer:v1.0.1],SizeBytes:38454608,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner:v1.2.1],SizeBytes:18924297,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/cpi/release/manager@sha256:64de5c7f10e55703142383fade40886091528ca505f00c98d57e27f10f04fc03 
gcr.io/cloud-provider-vsphere/cpi/release/manager:v1.1.0],SizeBytes:16201394,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher:v1.1.1],SizeBytes:15526843,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:34:58.815: INFO: | |
| Logging kubelet events for node workload-cluster-4-controlplane-0 | |
| Jan 24 20:34:58.832: INFO: | |
| Logging pods the kubelet thinks are on node workload-cluster-4-controlplane-0 | |
| Jan 24 20:34:58.850: INFO: etcd-workload-cluster-4-controlplane-0 started at 2020-01-24 17:17:56 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:34:58.850: INFO: Container etcd ready: true, restart count 0 | |
| Jan 24 20:34:58.850: INFO: kube-apiserver-workload-cluster-4-controlplane-0 started at 2020-01-24 17:17:56 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:34:58.850: INFO: Container kube-apiserver ready: true, restart count 0 | |
| Jan 24 20:34:58.850: INFO: kube-scheduler-workload-cluster-4-controlplane-0 started at 2020-01-24 17:17:56 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:34:58.851: INFO: Container kube-scheduler ready: true, restart count 1 | |
| Jan 24 20:34:58.851: INFO: kube-proxy-ng9fc started at 2020-01-24 17:19:39 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:34:58.851: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:34:58.851: INFO: vsphere-csi-node-bvs7m started at 2020-01-24 17:44:44 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:34:58.851: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:34:58.851: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:34:58.851: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:34:58.851: INFO: calico-node-dtwvg started at 2020-01-24 20:19:59 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:34:58.851: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:34:58.851: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:34:58.851: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:34:58.851: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:34:58.851: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-t9xfs started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:34:58.851: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:34:58.851: INFO: Container systemd-logs ready: true, restart count 0 | |
| Jan 24 20:34:58.851: INFO: kube-controller-manager-workload-cluster-4-controlplane-0 started at 2020-01-24 17:17:56 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:34:58.851: INFO: Container kube-controller-manager ready: true, restart count 2 | |
| Jan 24 20:34:58.851: INFO: vsphere-cloud-controller-manager-xgbwj started at 2020-01-24 17:19:42 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:34:58.851: INFO: Container vsphere-cloud-controller-manager ready: true, restart count 0 | |
| Jan 24 20:34:58.851: INFO: vsphere-csi-controller-0 started at 2020-01-24 17:44:46 +0000 UTC (0+5 container statuses recorded) | |
| Jan 24 20:34:58.851: INFO: Container csi-attacher ready: true, restart count 0 | |
| Jan 24 20:34:58.851: INFO: Container csi-provisioner ready: true, restart count 0 | |
| Jan 24 20:34:58.851: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:34:58.851: INFO: Container vsphere-csi-controller ready: true, restart count 0 | |
| Jan 24 20:34:58.851: INFO: Container vsphere-syncer ready: true, restart count 0 | |
| W0124 20:34:58.856929 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:34:59.060: INFO: | |
| Latency metrics for node workload-cluster-4-controlplane-0 | |
| Jan 24 20:34:59.060: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:34:59.064: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-dj56d /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-dj56d e3b9736e-3d53-4225-9f32-9b5ed43a351d 37072 0 2020-01-24 17:31:41 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-dj56d kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-dj56d"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.22.156/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.222.0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://42045096-e00b-ae55-1d85-007f6b8efaee,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229211136 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124353536 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:50 +0000 UTC,LastTransitionTime:2020-01-24 20:20:50 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:34:20 +0000 UTC,LastTransitionTime:2020-01-24 17:31:41 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:34:20 +0000 UTC,LastTransitionTime:2020-01-24 17:31:41 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:34:20 +0000 UTC,LastTransitionTime:2020-01-24 17:31:41 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:34:20 +0000 UTC,LastTransitionTime:2020-01-24 17:44:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-dj56d,},NodeAddress{Type:ExternalIP,Address:10.193.22.156,},NodeAddress{Type:InternalIP,Address:10.193.22.156,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b52db3bea2844f019e98985de63db412,SystemUUID:96500442-0BE0-55AE-1D85-007F6B8EFAEE,BootID:4ec1b10c-fc30-41e5-9d16-834bd0f5cf13,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[docker.io/jayunit100/kube-controllers@sha256:80d3bff453f163c62d0dadf44d5056053144eceaf0d05fe8c90b861aa4d5d602 docker.io/jayunit100/kube-controllers:tkg2],SizeBytes:21160877,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 
k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:34:59.065: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:34:59.070: INFO: | |
| Logging pods the kubelet thinks are on node workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:34:59.090: INFO: calico-node-8t722 started at 2020-01-24 20:19:58 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:34:59.090: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:34:59.090: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:34:59.090: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:34:59.090: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:34:59.090: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-25557 started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:34:59.090: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:34:59.090: INFO: Container systemd-logs ready: true, restart count 0 | |
| Jan 24 20:34:59.090: INFO: vsphere-csi-node-tkb9r started at 2020-01-24 17:44:43 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:34:59.090: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:34:59.090: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:34:59.090: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:34:59.090: INFO: coredns-5644d7b6d9-m4lts started at 2020-01-24 17:44:44 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:34:59.091: INFO: Container coredns ready: true, restart count 0 | |
| Jan 24 20:34:59.091: INFO: coredns-5644d7b6d9-ph9m8 started at 2020-01-24 17:44:44 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:34:59.091: INFO: Container coredns ready: true, restart count 0 | |
| Jan 24 20:34:59.091: INFO: kube-proxy-9cjbv started at 2020-01-24 17:31:41 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:34:59.092: INFO: Container kube-proxy ready: true, restart count 0 | |
| W0124 20:34:59.097914 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:34:59.202: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:34:59.202: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-l8q2f | |
| Jan 24 20:34:59.206: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-l8q2f /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-l8q2f 3be5c565-a115-4d45-a1be-f70db9c1b166 37142 0 2020-01-24 17:31:43 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-l8q2f kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-l8q2f"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.1.57/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.184.192 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://4204aa91-067b-70ca-7b40-e384413568c2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229202944 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124345344 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:23 +0000 UTC,LastTransitionTime:2020-01-24 20:20:23 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:34:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:34:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:34:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:34:50 +0000 UTC,LastTransitionTime:2020-01-24 17:44:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-l8q2f,},NodeAddress{Type:ExternalIP,Address:10.193.1.57,},NodeAddress{Type:InternalIP,Address:10.193.1.57,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:45e1a30e9fcd470795575df43a4210d5,SystemUUID:91AA0442-7B06-CA70-7B40-E384413568C2,BootID:3aae672c-d64c-4549-8c6f-2c6720e8ef80,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:0ae49fdc22b3d81e85b98ea4587628bb78f49eaf5c14cdb47a13d8918d62cc00 gcr.io/google-containers/conformance:v1.16.3],SizeBytes:195864937,},ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 
gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:34:59.207: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-l8q2f | |
| Jan 24 20:34:59.214: INFO: | |
| Logging pods the kubelet thinks are on node workload-cluster-4-md-0-5c7f78dbc8-l8q2f | |
| Jan 24 20:34:59.235: INFO: kube-proxy-pw7c7 started at 2020-01-24 17:31:43 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:34:59.235: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:34:59.235: INFO: calico-node-rqffr started at 2020-01-24 20:19:59 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:34:59.235: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:34:59.235: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:34:59.235: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:34:59.235: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:34:59.235: INFO: vsphere-csi-node-bml6x started at 2020-01-24 17:44:44 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:34:59.235: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:34:59.235: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:34:59.235: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:34:59.235: INFO: sonobuoy-e2e-job-60496bb95c8b4e15 started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:34:59.235: INFO: Container e2e ready: true, restart count 0 | |
| Jan 24 20:34:59.235: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:34:59.235: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-pvvzm started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:34:59.235: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:34:59.235: INFO: Container systemd-logs ready: true, restart count 0 | |
| W0124 20:34:59.241670 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:34:59.338: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-l8q2f | |
| Jan 24 20:34:59.338: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:34:59.342: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-rnz88 /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-rnz88 6a50990d-c3aa-408e-89a9-b96a59028428 37056 0 2020-01-24 17:31:34 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-rnz88 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-rnz88"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.19.181/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.158.1 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://42041b55-5b6a-2e5f-e33f-ad3ffa7f7b6e,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229211136 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124353536 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:37 +0000 UTC,LastTransitionTime:2020-01-24 20:20:37 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:34:13 +0000 UTC,LastTransitionTime:2020-01-24 17:31:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:34:13 +0000 UTC,LastTransitionTime:2020-01-24 17:31:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:34:13 +0000 UTC,LastTransitionTime:2020-01-24 17:31:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:34:13 +0000 UTC,LastTransitionTime:2020-01-24 17:44:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-rnz88,},NodeAddress{Type:ExternalIP,Address:10.193.19.181,},NodeAddress{Type:InternalIP,Address:10.193.19.181,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:89e27edae47c4afabb7c067d037286ec,SystemUUID:551B0442-6A5B-5F2E-E33F-AD3FFA7F7B6E,BootID:d2a8e20a-f6ad-4b68-b732-e0d7bdbcc492,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:0ae49fdc22b3d81e85b98ea4587628bb78f49eaf5c14cdb47a13d8918d62cc00 gcr.io/google-containers/conformance:v1.16.3],SizeBytes:195864937,},ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:4a7190a4731cf3340c39ab33abf845baf6c81d911c965a879fffc1552e1a1938 
docker.io/calico/kube-controllers:v3.10.3],SizeBytes:21175514,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:34:59.342: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:34:59.349: INFO: | |
| Logging pods the kubelet thinks are on node workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:34:59.369: INFO: kube-proxy-hwjc4 started at 2020-01-24 17:31:34 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:34:59.369: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:34:59.369: INFO: calico-node-5g8mr started at 2020-01-24 20:19:58 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:34:59.369: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:34:59.369: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:34:59.369: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:34:59.369: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:34:59.369: INFO: vsphere-csi-node-6nzwf started at 2020-01-24 17:44:44 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:34:59.369: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:34:59.369: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:34:59.369: INFO: Container vsphere-csi-node ready: false, restart count 9 | |
| Jan 24 20:34:59.369: INFO: calico-kube-controllers-7489ff5b7c-sq5gk started at 2020-01-24 20:20:00 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:34:59.369: INFO: Container calico-kube-controllers ready: true, restart count 0 | |
| Jan 24 20:34:59.369: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-zjw2s started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:34:59.369: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:34:59.369: INFO: Container systemd-logs ready: true, restart count 0 | |
| W0124 20:34:59.375871 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:34:59.471: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:34:59.471: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:34:59.476: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-tgjll /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-tgjll efb8ee46-7fce-4349-92f2-46fa03f02d30 37141 0 2020-01-24 17:31:37 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-tgjll kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-tgjll"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.25.9/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.225.128 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://4204f234-cdde-565a-8623-6c3c5457e3cc,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229202944 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124345344 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:26 +0000 UTC,LastTransitionTime:2020-01-24 20:20:26 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:34:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:34:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:34:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:34:50 +0000 UTC,LastTransitionTime:2020-01-24 17:44:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-tgjll,},NodeAddress{Type:ExternalIP,Address:10.193.25.9,},NodeAddress{Type:InternalIP,Address:10.193.25.9,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bb2c77d8141443fd8503de52dbae52e8,SystemUUID:34F20442-DECD-5A56-8623-6C3C5457E3CC,BootID:849ca1bb-d08e-417e-bf12-485a70ffc44a,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:0ae49fdc22b3d81e85b98ea4587628bb78f49eaf5c14cdb47a13d8918d62cc00 gcr.io/google-containers/conformance:v1.16.3],SizeBytes:195864937,},ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 
gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:34:59.477: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:34:59.482: INFO: | |
| Logging pods the kubelet thinks are on node workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:34:59.500: INFO: calico-node-wvctw started at 2020-01-24 20:19:58 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:34:59.500: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:34:59.500: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:34:59.500: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:34:59.500: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:34:59.500: INFO: vsphere-csi-node-rqv42 started at 2020-01-24 17:44:49 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:34:59.500: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:34:59.500: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:34:59.500: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:34:59.500: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-prfkd started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:34:59.500: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:34:59.500: INFO: Container systemd-logs ready: true, restart count 0 | |
| Jan 24 20:34:59.500: INFO: server-k7c85 started at 2020-01-24 20:32:15 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:34:59.500: INFO: Container server-container-80 ready: true, restart count 0 | |
| Jan 24 20:34:59.500: INFO: Container server-container-81 ready: true, restart count 0 | |
| Jan 24 20:34:59.500: INFO: kube-proxy-wp7tn started at 2020-01-24 17:31:37 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:34:59.500: INFO: Container kube-proxy ready: true, restart count 0 | |
| W0124 20:34:59.507471 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:34:59.660: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:34:59.660: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:34:59.664: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-vvrzt /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-vvrzt 200933f4-8d1c-469c-b168-43a6ccbf5d04 37139 0 2020-01-24 17:31:43 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-vvrzt kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-vvrzt"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.7.86/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.163.64 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.5.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://42048f3c-cde1-2850-a851-6c4d757659cc,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.5.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229202944 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124345344 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:23 +0000 UTC,LastTransitionTime:2020-01-24 20:20:23 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:34:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:34:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:34:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:34:50 +0000 UTC,LastTransitionTime:2020-01-24 17:44:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-vvrzt,},NodeAddress{Type:ExternalIP,Address:10.193.7.86,},NodeAddress{Type:InternalIP,Address:10.193.7.86,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5c2ea802c8014c7aa88cb9bbbea1711b,SystemUUID:3C8F0442-E1CD-5028-A851-6C4D757659CC,BootID:9e70f010-f55d-4f1e-9e44-645f0afe4c09,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 
k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:34:59.665: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:34:59.671: INFO: | |
| Logging pods the kubelet thinks are on node workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:34:59.691: INFO: sonobuoy started at 2020-01-24 20:21:31 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:34:59.691: INFO: Container kube-sonobuoy ready: true, restart count 0 | |
| Jan 24 20:34:59.691: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-9jn4t started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:34:59.691: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:34:59.691: INFO: Container systemd-logs ready: true, restart count 0 | |
| Jan 24 20:34:59.691: INFO: kube-proxy-5jvhn started at 2020-01-24 17:31:43 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:34:59.691: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:34:59.691: INFO: calico-node-zpwz4 started at 2020-01-24 20:19:59 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:34:59.691: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:34:59.691: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:34:59.691: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:34:59.691: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:34:59.691: INFO: vsphere-csi-node-ttr82 started at 2020-01-24 17:44:44 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:34:59.691: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:34:59.691: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:34:59.691: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| W0124 20:34:59.697480 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:34:59.795: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
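The six Node dumps above are the framework's standard failure diagnostics, and most of each dump is the node's image cache. The signal is in Status.Conditions (all six nodes report Ready, with CalicoIsUp on NetworkUnavailable) and in one anomaly in the pod listings: vsphere-csi-node on workload-cluster-4-md-0-5c7f78dbc8-rnz88 is not ready with restart count 9, which is unrelated to NetworkPolicy but worth noting. A minimal client-go sketch, assuming KUBECONFIG points at this cluster (not part of the test run), that prints just the per-node conditions instead of the full objects:

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the sonobuoy invocation used.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Println(n.Name)
		// Print only the condition summary the dumps above bury in one line.
		for _, c := range n.Status.Conditions {
			fmt.Printf("  %-20s %-6s %s\n", c.Type, c.Status, c.Reason)
		}
	}
}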
| Jan 24 20:34:59.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
| STEP: Destroying namespace "network-policy-2853" for this suite. | |
| Jan 24 20:35:09.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered | |
| Jan 24 20:35:09.912: INFO: namespace network-policy-2853 deletion completed in 10.110475854s | |
| STEP: Destroying namespace "network-policy-b-5376" for this suite. | |
| Jan 24 20:35:15.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered | |
| Jan 24 20:35:16.044: INFO: namespace network-policy-b-5376 deletion completed in 6.132286374s | |
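The two teardown blocks above are the framework draining its per-test namespaces. Namespace deletion is asynchronous, so the suite polls until the Namespace object disappears (10.1s and 6.1s here). A minimal sketch of that wait, assuming a recent client-go; the poll interval and timeout are assumptions, not the framework's values:

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNamespaceGone polls until Get on the namespace returns NotFound,
// mirroring the "deletion completed in ..." messages above.
func waitForNamespaceGone(cs kubernetes.Interface, ns string) error {
	return wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		_, err := cs.CoreV1().Namespaces().Get(context.TODO(), ns, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // gone: deletion completed
		}
		return false, err // still terminating (err == nil) or a real error
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForNamespaceGone(cs, "network-policy-b-5376"); err != nil {
		panic(err)
	}
	fmt.Println("namespace deletion completed")
}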
| • Failure [180.156 seconds] | |
| [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 | |
| NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:56 | |
| should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy] [It] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:337 | |
| Jan 24 20:34:58.676: Error getting container logs: the server could not find the requested resource (get pods client-b-fx6jq) | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:1421 | |
| ------------------------------ | |
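The failing assertion in the block above is not the connectivity check itself but the post-mortem: after the client pod finished, the framework asked the API server for its container logs and got a 404 ("the server could not find the requested resource (get pods client-b-fx6jq)"), which usually means the Pod object was already deleted when the log fetch ran. A minimal sketch of the same fetch with an explicit existence check, assuming a recent client-go; the pod name comes from the failure above and the namespace is inferred from the teardown messages:

package main

import (
	"context"
	"fmt"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	ns, pod := "network-policy-2853", "client-b-fx6jq"

	// Confirm the pod still exists; the failure above is what the log fetch
	// reports when this race is lost and the pod is already gone.
	if _, err := cs.CoreV1().Pods(ns).Get(context.TODO(), pod, metav1.GetOptions{}); err != nil {
		fmt.Fprintln(os.Stderr, "pod not found, skipping log fetch:", err)
		return
	}

	rc, err := cs.CoreV1().Pods(ns).GetLogs(pod, &corev1.PodLogOptions{}).Stream(context.TODO())
	if err != nil {
		fmt.Fprintln(os.Stderr, "log fetch failed:", err)
		return
	}
	defer rc.Close()
	io.Copy(os.Stdout, rc)
}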
| SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS | |
| ------------------------------ | |
| [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client | |
| should enforce policy to allow traffic from pods within server namespace based on PodSelector [Feature:NetworkPolicy] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:99 | |
| [BeforeEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
| STEP: Creating a kubernetes client | |
| Jan 24 20:35:16.050: INFO: >>> kubeConfig: /tmp/kubeconfig-418650170 | |
| STEP: Building a namespace api object, basename network-policy | |
| STEP: Waiting for a default service account to be provisioned in namespace | |
| [BeforeEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:51 | |
| [BeforeEach] NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:57 | |
| STEP: Creating a simple server that serves on ports 80 and 81. | |
| STEP: Creating a server pod server in namespace network-policy-7621 | |
| Jan 24 20:35:16.105: INFO: Created pod server-lkghc | |
| STEP: Creating a service svc-server for pod server in namespace network-policy-7621 | |
| Jan 24 20:35:16.138: INFO: Created service svc-server | |
| STEP: Waiting for pod ready | |
| STEP: Testing pods can connect to both ports when no policy is present. | |
| STEP: Creating client pod client-can-connect-80 that should successfully connect to svc-server. | |
| Jan 24 20:35:26.163: INFO: Waiting for client-can-connect-80-b7tv6 to complete. | |
| Jan 24 20:35:28.179: INFO: Waiting for client-can-connect-80-b7tv6 to complete. | |
| Jan 24 20:35:28.179: INFO: Waiting up to 5m0s for pod "client-can-connect-80-b7tv6" in namespace "network-policy-7621" to be "success or failure" | |
| Jan 24 20:35:28.182: INFO: Pod "client-can-connect-80-b7tv6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 3.65888ms | |
| STEP: Saw pod success | |
| Jan 24 20:35:28.182: INFO: Pod "client-can-connect-80-b7tv6" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-can-connect-80-b7tv6 | |
| STEP: Creating client pod client-can-connect-81 that should successfully connect to svc-server. | |
| Jan 24 20:35:28.211: INFO: Waiting for client-can-connect-81-99rhd to complete. | |
| Jan 24 20:35:30.229: INFO: Waiting for client-can-connect-81-99rhd to complete. | |
| Jan 24 20:35:30.229: INFO: Waiting up to 5m0s for pod "client-can-connect-81-99rhd" in namespace "network-policy-7621" to be "success or failure" | |
| Jan 24 20:35:30.232: INFO: Pod "client-can-connect-81-99rhd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 3.122089ms | |
| STEP: Saw pod success | |
| Jan 24 20:35:30.233: INFO: Pod "client-can-connect-81-99rhd" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-can-connect-81-99rhd | |
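Each "Waiting for client-... to complete" / Phase="Succeeded" pair above is the suite's probe idiom: a one-shot client pod makes a single connection attempt against svc-server, and the test reads reachability off the terminal pod phase (Succeeded means the connection was allowed, Failed means it was blocked or timed out). A sketch of that probe pod, using the agnhost image that appears in the node image lists above; the label key and connect arguments are assumptions, not read from this run:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// clientPod builds a one-shot probe pod: it runs a single connect attempt and
// exits, so the pod's terminal phase encodes whether the policy allowed it.
func clientPod(namespace, name, target string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      name,
			Namespace: namespace,
			Labels:    map[string]string{"pod-name": name}, // label the policies select on (assumed)
		},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // run once, keep the terminal phase
			Containers: []corev1.Container{{
				Name:  "client",
				Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.6",
				Args:  []string{"connect", target, "--timeout=5s"}, // assumed flags
			}},
		},
	}
}

func main() {
	b, _ := json.MarshalIndent(clientPod("network-policy-7621", "client-can-connect-80", "svc-server:80"), "", "  ")
	fmt.Println(string(b))
}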
| [It] should enforce policy to allow traffic from pods within server namespace based on PodSelector [Feature:NetworkPolicy] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:99 | |
| STEP: Creating client-a, in server's namespace, which should be able to contact the server. | |
| STEP: Creating client pod client-a that should successfully connect to svc-server. | |
| Jan 24 20:35:30.295: INFO: Waiting for client-a-kxpql to complete. | |
| Jan 24 20:35:34.311: INFO: Waiting for client-a-kxpql to complete. | |
| Jan 24 20:35:34.311: INFO: Waiting up to 5m0s for pod "client-a-kxpql" in namespace "network-policy-7621" to be "success or failure" | |
| Jan 24 20:35:34.314: INFO: Pod "client-a-kxpql": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.452776ms | |
| STEP: Saw pod success | |
| Jan 24 20:35:34.314: INFO: Pod "client-a-kxpql" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-a-kxpql | |
| STEP: Creating client-b, in server's namespace, which should be able to contact the server. | |
| STEP: Creating client pod client-b that should successfully connect to svc-server. | |
| Jan 24 20:35:34.334: INFO: Waiting for client-b-rntpl to complete. | |
| Jan 24 20:35:36.344: INFO: Waiting for client-b-rntpl to complete. | |
| Jan 24 20:35:36.344: INFO: Waiting up to 5m0s for pod "client-b-rntpl" in namespace "network-policy-7621" to be "success or failure" | |
| Jan 24 20:35:36.348: INFO: Pod "client-b-rntpl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 3.950599ms | |
| STEP: Saw pod success | |
| Jan 24 20:35:36.348: INFO: Pod "client-b-rntpl" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-b-rntpl | |
| STEP: Creating client-a, not in server's namespace, which should be able to contact the server. | |
| STEP: Creating client pod client-a that should successfully connect to svc-server. | |
| Jan 24 20:35:36.371: INFO: Waiting for client-a-f5wvb to complete. | |
| Jan 24 20:35:40.397: INFO: Waiting for client-a-f5wvb to complete. | |
| Jan 24 20:35:40.398: INFO: Waiting up to 5m0s for pod "client-a-f5wvb" in namespace "network-policy-b-75" to be "success or failure" | |
| Jan 24 20:35:40.402: INFO: Pod "client-a-f5wvb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 3.893905ms | |
| STEP: Saw pod success | |
| Jan 24 20:35:40.402: INFO: Pod "client-a-f5wvb" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-a-f5wvb | |
| STEP: Creating a network policy for the server which allows traffic from the pod 'client-a' in same namespace. | |
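------------------------------
This is the pivotal step: from here on, ingress to the server pod is supposed to be restricted to the client-a pod. A hedged client-go sketch of such a policy follows; the policy name and label keys/values are assumptions (the suite's exact manifest lives in network_policy.go, which is not reproduced in this log).

package main

import (
	"context"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// Select the server pod; allow ingress only from pods labelled as
	// client-a in the same namespace. Label keys/values are illustrative.
	policy := &networkingv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "allow-client-a-via-pod-selector"},
		Spec: networkingv1.NetworkPolicySpec{
			PodSelector: metav1.LabelSelector{
				MatchLabels: map[string]string{"pod-name": "server"},
			},
			Ingress: []networkingv1.NetworkPolicyIngressRule{{
				From: []networkingv1.NetworkPolicyPeer{{
					PodSelector: &metav1.LabelSelector{
						MatchLabels: map[string]string{"pod-name": "client-a"},
					},
				}},
			}},
		},
	}
	if _, err := clientset.NetworkingV1().NetworkPolicies("network-policy-7621").
		Create(context.TODO(), policy, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

Note that a From peer carrying only a podSelector matches pods in the policy's own namespace, which is why, once this applies, only the in-namespace client-a should still be admitted.
------------------------------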
| STEP: Creating client-a, in server's namespace, which should be able to contact the server. | |
| STEP: Creating client pod client-a that should successfully connect to svc-server. | |
| Jan 24 20:35:40.438: INFO: Waiting for client-a-plwfc to complete. | |
| Jan 24 20:36:28.451: INFO: Waiting for client-a-plwfc to complete. | |
| Jan 24 20:36:28.451: INFO: Waiting up to 5m0s for pod "client-a-plwfc" in namespace "network-policy-7621" to be "success or failure" | |
| Jan 24 20:36:28.455: INFO: Pod "client-a-plwfc": Phase="Failed", Reason="", readiness=false. Elapsed: 3.901989ms | |
| Jan 24 20:36:28.460: FAIL: Error getting container logs: the server rejected our request for an unknown reason (get pods client-a-plwfc) | |
| STEP: Cleaning up the pod client-a-plwfc | |
| STEP: Cleaning up the policy. | |
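------------------------------
The Phase="Failed" at 20:36:28 is the actual test failure: the policy just created whitelists client-a, yet the in-namespace client-a probe pod spent roughly 48 seconds retrying (20:35:40 to 20:36:28) and exited non-zero, i.e. traffic the policy should allow never got through. The `Error getting container logs: the server rejected our request` line is a secondary problem: the harness could not even retrieve the probe's logs for diagnostics. A sketch of that log fetch via client-go (recent API; pod and namespace names taken from the log):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	c := kubernetes.NewForConfigOrDie(config)

	// The same read the harness attempts when a probe pod fails: pull the
	// container's logs back through the apiserver -> kubelet log path.
	raw, err := c.CoreV1().Pods("network-policy-7621").
		GetLogs("client-a-plwfc", &corev1.PodLogOptions{}).
		DoRaw(context.TODO())
	if err != nil {
		// This is the call site that surfaced "the server rejected our
		// request for an unknown reason" in the failure above.
		panic(err)
	}
	fmt.Println(string(raw))
}
------------------------------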
| [AfterEach] NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:75 | |
| STEP: Cleaning up the server. | |
| STEP: Cleaning up the server's service. | |
| [AfterEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
| STEP: Collecting events from namespace "network-policy-7621". | |
| STEP: Found 29 events. | |
| Jan 24 20:36:28.564: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-a-kxpql: {default-scheduler } Scheduled: Successfully assigned network-policy-7621/client-a-kxpql to workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:36:28.564: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-a-plwfc: {default-scheduler } Scheduled: Successfully assigned network-policy-7621/client-a-plwfc to workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:36:28.564: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-b-rntpl: {default-scheduler } Scheduled: Successfully assigned network-policy-7621/client-b-rntpl to workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:36:28.564: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-80-b7tv6: {default-scheduler } Scheduled: Successfully assigned network-policy-7621/client-can-connect-80-b7tv6 to workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:36:28.565: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-81-99rhd: {default-scheduler } Scheduled: Successfully assigned network-policy-7621/client-can-connect-81-99rhd to workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:36:28.565: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for server-lkghc: {default-scheduler } Scheduled: Successfully assigned network-policy-7621/server-lkghc to workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:36:28.566: INFO: At 2020-01-24 20:35:17 +0000 UTC - event for server-lkghc: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.6" already present on machine | |
| Jan 24 20:36:28.566: INFO: At 2020-01-24 20:35:17 +0000 UTC - event for server-lkghc: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Created: Created container server-container-80 | |
| Jan 24 20:36:28.567: INFO: At 2020-01-24 20:35:17 +0000 UTC - event for server-lkghc: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Started: Started container server-container-80 | |
| Jan 24 20:36:28.567: INFO: At 2020-01-24 20:35:17 +0000 UTC - event for server-lkghc: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.6" already present on machine | |
| Jan 24 20:36:28.567: INFO: At 2020-01-24 20:35:17 +0000 UTC - event for server-lkghc: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Created: Created container server-container-81 | |
| Jan 24 20:36:28.568: INFO: At 2020-01-24 20:35:17 +0000 UTC - event for server-lkghc: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Started: Started container server-container-81 | |
| Jan 24 20:36:28.568: INFO: At 2020-01-24 20:35:27 +0000 UTC - event for client-can-connect-80-b7tv6: {kubelet workload-cluster-4-md-0-5c7f78dbc8-rnz88} Started: Started container client-can-connect-80-container | |
| Jan 24 20:36:28.568: INFO: At 2020-01-24 20:35:27 +0000 UTC - event for client-can-connect-80-b7tv6: {kubelet workload-cluster-4-md-0-5c7f78dbc8-rnz88} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine | |
| Jan 24 20:36:28.568: INFO: At 2020-01-24 20:35:27 +0000 UTC - event for client-can-connect-80-b7tv6: {kubelet workload-cluster-4-md-0-5c7f78dbc8-rnz88} Created: Created container client-can-connect-80-container | |
| Jan 24 20:36:28.569: INFO: At 2020-01-24 20:35:29 +0000 UTC - event for client-can-connect-81-99rhd: {kubelet workload-cluster-4-md-0-5c7f78dbc8-dj56d} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine | |
| Jan 24 20:36:28.569: INFO: At 2020-01-24 20:35:29 +0000 UTC - event for client-can-connect-81-99rhd: {kubelet workload-cluster-4-md-0-5c7f78dbc8-dj56d} Created: Created container client-can-connect-81-container | |
| Jan 24 20:36:28.569: INFO: At 2020-01-24 20:35:29 +0000 UTC - event for client-can-connect-81-99rhd: {kubelet workload-cluster-4-md-0-5c7f78dbc8-dj56d} Started: Started container client-can-connect-81-container | |
| Jan 24 20:36:28.569: INFO: At 2020-01-24 20:35:32 +0000 UTC - event for client-a-kxpql: {kubelet workload-cluster-4-md-0-5c7f78dbc8-rnz88} Started: Started container client-a-container | |
| Jan 24 20:36:28.569: INFO: At 2020-01-24 20:35:32 +0000 UTC - event for client-a-kxpql: {kubelet workload-cluster-4-md-0-5c7f78dbc8-rnz88} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine | |
| Jan 24 20:36:28.569: INFO: At 2020-01-24 20:35:32 +0000 UTC - event for client-a-kxpql: {kubelet workload-cluster-4-md-0-5c7f78dbc8-rnz88} Created: Created container client-a-container | |
| Jan 24 20:36:28.570: INFO: At 2020-01-24 20:35:35 +0000 UTC - event for client-b-rntpl: {kubelet workload-cluster-4-md-0-5c7f78dbc8-dj56d} Created: Created container client-b-container | |
| Jan 24 20:36:28.570: INFO: At 2020-01-24 20:35:35 +0000 UTC - event for client-b-rntpl: {kubelet workload-cluster-4-md-0-5c7f78dbc8-dj56d} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine | |
| Jan 24 20:36:28.570: INFO: At 2020-01-24 20:35:35 +0000 UTC - event for client-b-rntpl: {kubelet workload-cluster-4-md-0-5c7f78dbc8-dj56d} Started: Started container client-b-container | |
| Jan 24 20:36:28.570: INFO: At 2020-01-24 20:35:41 +0000 UTC - event for client-a-plwfc: {kubelet workload-cluster-4-md-0-5c7f78dbc8-dj56d} Created: Created container client-a-container | |
| Jan 24 20:36:28.570: INFO: At 2020-01-24 20:35:41 +0000 UTC - event for client-a-plwfc: {kubelet workload-cluster-4-md-0-5c7f78dbc8-dj56d} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine | |
| Jan 24 20:36:28.570: INFO: At 2020-01-24 20:35:41 +0000 UTC - event for client-a-plwfc: {kubelet workload-cluster-4-md-0-5c7f78dbc8-dj56d} Started: Started container client-a-container | |
| Jan 24 20:36:28.570: INFO: At 2020-01-24 20:36:28 +0000 UTC - event for server-lkghc: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Killing: Stopping container server-container-80 | |
| Jan 24 20:36:28.570: INFO: At 2020-01-24 20:36:28 +0000 UTC - event for server-lkghc: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Killing: Stopping container server-container-81 | |
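------------------------------
Worth noting in this event dump: every pod, including the failed client-a-plwfc, shows a normal Scheduled / Pulled / Created / Started sequence, so scheduling and image pulls were healthy. That makes the Phase="Failed" above a pure connectivity-check timeout, pointing at dataplane policy enforcement rather than cluster plumbing.
------------------------------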
| Jan 24 20:36:28.575: INFO: POD NODE PHASE GRACE CONDITIONS | |
| Jan 24 20:36:28.575: INFO: server-lkghc workload-cluster-4-md-0-5c7f78dbc8-tgjll Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 20:35:16 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 20:35:24 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 20:35:24 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 20:35:16 +0000 UTC }] | |
| Jan 24 20:36:28.575: INFO: | |
| Jan 24 20:36:28.588: INFO: | |
| Logging node info for node workload-cluster-4-controlplane-0 | |
| Jan 24 20:36:28.593: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-controlplane-0 /api/v1/nodes/workload-cluster-4-controlplane-0 82506459-dd0f-48e7-bd9d-75efde1111f6 37498 0 2020-01-24 17:18:10 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-controlplane-0 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-controlplane-0"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.28.186/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.210.0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://4204e6d6-9184-b271-f650-0dabd9f75516,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229202944 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124345344 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:50 +0000 UTC,LastTransitionTime:2020-01-24 20:20:50 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:36:05 +0000 UTC,LastTransitionTime:2020-01-24 17:18:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:36:05 +0000 UTC,LastTransitionTime:2020-01-24 17:18:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:36:05 +0000 UTC,LastTransitionTime:2020-01-24 17:18:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:36:05 +0000 UTC,LastTransitionTime:2020-01-24 17:44:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-controlplane-0,},NodeAddress{Type:ExternalIP,Address:10.193.28.186,},NodeAddress{Type:InternalIP,Address:10.193.28.186,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:80b9ac3a81a84f4e8f29dac5e41ffd1f,SystemUUID:D6E60442-8491-71B2-F650-0DABD9F75516,BootID:0411b150-3c1b-4216-a7bc-d4a359e6753d,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/syncer@sha256:fc80ec77a2ab4b58ddfa259a938f6d741933566011d56e5ffcc8680cc83538fe gcr.io/cloud-provider-vsphere/csi/release/syncer:v1.0.1],SizeBytes:38454608,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner:v1.2.1],SizeBytes:18924297,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/cpi/release/manager@sha256:64de5c7f10e55703142383fade40886091528ca505f00c98d57e27f10f04fc03 gcr.io/cloud-provider-vsphere/cpi/release/manager:v1.1.0],SizeBytes:16201394,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher:v1.1.1],SizeBytes:15526843,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:36:28.594: INFO: | |
| Logging kubelet events for node workload-cluster-4-controlplane-0 | |
| Jan 24 20:36:28.601: INFO: | |
| Logging pods the kubelet thinks is on node workload-cluster-4-controlplane-0 | |
| Jan 24 20:36:28.617: INFO: kube-apiserver-workload-cluster-4-controlplane-0 started at 2020-01-24 17:17:56 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:36:28.617: INFO: Container kube-apiserver ready: true, restart count 0 | |
| Jan 24 20:36:28.617: INFO: kube-scheduler-workload-cluster-4-controlplane-0 started at 2020-01-24 17:17:56 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:36:28.617: INFO: Container kube-scheduler ready: true, restart count 1 | |
| Jan 24 20:36:28.617: INFO: kube-proxy-ng9fc started at 2020-01-24 17:19:39 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:36:28.617: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:36:28.617: INFO: vsphere-csi-node-bvs7m started at 2020-01-24 17:44:44 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:36:28.617: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:36:28.617: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:36:28.617: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:36:28.617: INFO: calico-node-dtwvg started at 2020-01-24 20:19:59 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:36:28.617: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:36:28.617: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:36:28.617: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:36:28.617: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:36:28.617: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-t9xfs started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:36:28.617: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:36:28.617: INFO: Container systemd-logs ready: true, restart count 0 | |
| Jan 24 20:36:28.617: INFO: etcd-workload-cluster-4-controlplane-0 started at 2020-01-24 17:17:56 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:36:28.617: INFO: Container etcd ready: true, restart count 0 | |
| Jan 24 20:36:28.617: INFO: vsphere-cloud-controller-manager-xgbwj started at 2020-01-24 17:19:42 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:36:28.617: INFO: Container vsphere-cloud-controller-manager ready: true, restart count 0 | |
| Jan 24 20:36:28.618: INFO: vsphere-csi-controller-0 started at 2020-01-24 17:44:46 +0000 UTC (0+5 container statuses recorded) | |
| Jan 24 20:36:28.618: INFO: Container csi-attacher ready: true, restart count 0 | |
| Jan 24 20:36:28.618: INFO: Container csi-provisioner ready: true, restart count 0 | |
| Jan 24 20:36:28.618: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:36:28.619: INFO: Container vsphere-csi-controller ready: true, restart count 0 | |
| Jan 24 20:36:28.619: INFO: Container vsphere-syncer ready: true, restart count 0 | |
| Jan 24 20:36:28.619: INFO: kube-controller-manager-workload-cluster-4-controlplane-0 started at 2020-01-24 17:17:56 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:36:28.619: INFO: Container kube-controller-manager ready: true, restart count 2 | |
| W0124 20:36:28.633475 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:36:28.804: INFO: | |
| Latency metrics for node workload-cluster-4-controlplane-0 | |
| Jan 24 20:36:28.804: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:36:28.808: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-dj56d /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-dj56d e3b9736e-3d53-4225-9f32-9b5ed43a351d 37532 0 2020-01-24 17:31:41 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-dj56d kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-dj56d"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.22.156/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.222.0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://42045096-e00b-ae55-1d85-007f6b8efaee,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229211136 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124353536 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:50 +0000 UTC,LastTransitionTime:2020-01-24 20:20:50 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:36:20 +0000 UTC,LastTransitionTime:2020-01-24 17:31:41 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:36:20 +0000 UTC,LastTransitionTime:2020-01-24 17:31:41 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:36:20 +0000 UTC,LastTransitionTime:2020-01-24 17:31:41 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:36:20 +0000 UTC,LastTransitionTime:2020-01-24 17:44:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-dj56d,},NodeAddress{Type:ExternalIP,Address:10.193.22.156,},NodeAddress{Type:InternalIP,Address:10.193.22.156,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b52db3bea2844f019e98985de63db412,SystemUUID:96500442-0BE0-55AE-1D85-007F6B8EFAEE,BootID:4ec1b10c-fc30-41e5-9d16-834bd0f5cf13,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[docker.io/jayunit100/kube-controllers@sha256:80d3bff453f163c62d0dadf44d5056053144eceaf0d05fe8c90b861aa4d5d602 docker.io/jayunit100/kube-controllers:tkg2],SizeBytes:21160877,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:36:28.808: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:36:28.815: INFO: | |
| Logging pods the kubelet thinks is on node workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:36:28.828: INFO: kube-proxy-9cjbv started at 2020-01-24 17:31:41 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:36:28.828: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:36:28.828: INFO: calico-node-8t722 started at 2020-01-24 20:19:58 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:36:28.828: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:36:28.828: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:36:28.828: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:36:28.828: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:36:28.828: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-25557 started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:36:28.828: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:36:28.828: INFO: Container systemd-logs ready: true, restart count 0 | |
| Jan 24 20:36:28.828: INFO: vsphere-csi-node-tkb9r started at 2020-01-24 17:44:43 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:36:28.828: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:36:28.828: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:36:28.828: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:36:28.828: INFO: coredns-5644d7b6d9-m4lts started at 2020-01-24 17:44:44 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:36:28.828: INFO: Container coredns ready: true, restart count 0 | |
| Jan 24 20:36:28.828: INFO: coredns-5644d7b6d9-ph9m8 started at 2020-01-24 17:44:44 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:36:28.828: INFO: Container coredns ready: true, restart count 0 | |
| W0124 20:36:28.833358 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:36:28.928: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:36:28.928: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-l8q2f | |
| Jan 24 20:36:28.932: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-l8q2f /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-l8q2f 3be5c565-a115-4d45-a1be-f70db9c1b166 37464 0 2020-01-24 17:31:43 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-l8q2f kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-l8q2f"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.1.57/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.184.192 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://4204aa91-067b-70ca-7b40-e384413568c2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229202944 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124345344 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:23 +0000 UTC,LastTransitionTime:2020-01-24 20:20:23 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:35:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:35:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:35:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:35:50 +0000 UTC,LastTransitionTime:2020-01-24 17:44:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-l8q2f,},NodeAddress{Type:ExternalIP,Address:10.193.1.57,},NodeAddress{Type:InternalIP,Address:10.193.1.57,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:45e1a30e9fcd470795575df43a4210d5,SystemUUID:91AA0442-7B06-CA70-7B40-E384413568C2,BootID:3aae672c-d64c-4549-8c6f-2c6720e8ef80,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:0ae49fdc22b3d81e85b98ea4587628bb78f49eaf5c14cdb47a13d8918d62cc00 gcr.io/google-containers/conformance:v1.16.3],SizeBytes:195864937,},ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:36:28.933: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-l8q2f | |
| Jan 24 20:36:28.944: INFO: | |
| Logging pods the kubelet thinks is on node workload-cluster-4-md-0-5c7f78dbc8-l8q2f | |
| Jan 24 20:36:28.956: INFO: kube-proxy-pw7c7 started at 2020-01-24 17:31:43 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:36:28.956: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:36:28.956: INFO: calico-node-rqffr started at 2020-01-24 20:19:59 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:36:28.956: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:36:28.956: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:36:28.956: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:36:28.956: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:36:28.956: INFO: vsphere-csi-node-bml6x started at 2020-01-24 17:44:44 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:36:28.956: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:36:28.956: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:36:28.956: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:36:28.956: INFO: sonobuoy-e2e-job-60496bb95c8b4e15 started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:36:28.956: INFO: Container e2e ready: true, restart count 0 | |
| Jan 24 20:36:28.956: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:36:28.956: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-pvvzm started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:36:28.956: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:36:28.956: INFO: Container systemd-logs ready: true, restart count 0 | |
| W0124 20:36:28.961838 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:36:29.080: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-l8q2f | |
| Jan 24 20:36:29.083: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:36:29.087: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-rnz88 /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-rnz88 6a50990d-c3aa-408e-89a9-b96a59028428 37519 0 2020-01-24 17:31:34 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-rnz88 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-rnz88"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.19.181/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.158.1 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://42041b55-5b6a-2e5f-e33f-ad3ffa7f7b6e,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229211136 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124353536 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:37 +0000 UTC,LastTransitionTime:2020-01-24 20:20:37 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:36:14 +0000 UTC,LastTransitionTime:2020-01-24 17:31:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:36:14 +0000 UTC,LastTransitionTime:2020-01-24 17:31:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:36:14 +0000 UTC,LastTransitionTime:2020-01-24 17:31:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:36:14 +0000 UTC,LastTransitionTime:2020-01-24 17:44:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-rnz88,},NodeAddress{Type:ExternalIP,Address:10.193.19.181,},NodeAddress{Type:InternalIP,Address:10.193.19.181,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:89e27edae47c4afabb7c067d037286ec,SystemUUID:551B0442-6A5B-5F2E-E33F-AD3FFA7F7B6E,BootID:d2a8e20a-f6ad-4b68-b732-e0d7bdbcc492,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:0ae49fdc22b3d81e85b98ea4587628bb78f49eaf5c14cdb47a13d8918d62cc00 gcr.io/google-containers/conformance:v1.16.3],SizeBytes:195864937,},ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:4a7190a4731cf3340c39ab33abf845baf6c81d911c965a879fffc1552e1a1938 docker.io/calico/kube-controllers:v3.10.3],SizeBytes:21175514,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:36:29.088: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:36:29.098: INFO: | |
| Logging pods the kubelet thinks is on node workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:36:29.112: INFO: kube-proxy-hwjc4 started at 2020-01-24 17:31:34 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:36:29.112: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:36:29.112: INFO: calico-node-5g8mr started at 2020-01-24 20:19:58 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:36:29.112: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:36:29.112: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:36:29.112: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:36:29.112: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:36:29.112: INFO: vsphere-csi-node-6nzwf started at 2020-01-24 17:44:44 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:36:29.112: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:36:29.112: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:36:29.112: INFO: Container vsphere-csi-node ready: false, restart count 9 | |
| Jan 24 20:36:29.112: INFO: calico-kube-controllers-7489ff5b7c-sq5gk started at 2020-01-24 20:20:00 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:36:29.112: INFO: Container calico-kube-controllers ready: true, restart count 0 | |
| Jan 24 20:36:29.112: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-zjw2s started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:36:29.112: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:36:29.112: INFO: Container systemd-logs ready: true, restart count 0 | |
| W0124 20:36:29.119311 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:36:29.238: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:36:29.239: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:36:29.243: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-tgjll /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-tgjll efb8ee46-7fce-4349-92f2-46fa03f02d30 37463 0 2020-01-24 17:31:37 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-tgjll kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-tgjll"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.25.9/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.225.128 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://4204f234-cdde-565a-8623-6c3c5457e3cc,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229202944 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124345344 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:26 +0000 UTC,LastTransitionTime:2020-01-24 20:20:26 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:35:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:35:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:35:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:35:50 +0000 UTC,LastTransitionTime:2020-01-24 17:44:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-tgjll,},NodeAddress{Type:ExternalIP,Address:10.193.25.9,},NodeAddress{Type:InternalIP,Address:10.193.25.9,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bb2c77d8141443fd8503de52dbae52e8,SystemUUID:34F20442-DECD-5A56-8623-6C3C5457E3CC,BootID:849ca1bb-d08e-417e-bf12-485a70ffc44a,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:0ae49fdc22b3d81e85b98ea4587628bb78f49eaf5c14cdb47a13d8918d62cc00 gcr.io/google-containers/conformance:v1.16.3],SizeBytes:195864937,},ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:36:29.243: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:36:29.250: INFO: | |
| Logging pods the kubelet thinks are on node workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:36:29.261: INFO: kube-proxy-wp7tn started at 2020-01-24 17:31:37 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:36:29.261: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:36:29.261: INFO: calico-node-wvctw started at 2020-01-24 20:19:58 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:36:29.262: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:36:29.263: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:36:29.263: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:36:29.263: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:36:29.263: INFO: vsphere-csi-node-rqv42 started at 2020-01-24 17:44:49 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:36:29.263: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:36:29.263: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:36:29.263: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:36:29.264: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-prfkd started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:36:29.264: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:36:29.264: INFO: Container systemd-logs ready: true, restart count 0 | |
| Jan 24 20:36:29.264: INFO: server-lkghc started at 2020-01-24 20:35:16 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:36:29.264: INFO: Container server-container-80 ready: true, restart count 0 | |
| Jan 24 20:36:29.264: INFO: Container server-container-81 ready: true, restart count 0 | |
| W0124 20:36:29.271294 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:36:29.375: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:36:29.375: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:36:29.380: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-vvrzt /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-vvrzt 200933f4-8d1c-469c-b168-43a6ccbf5d04 37461 0 2020-01-24 17:31:43 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-vvrzt kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-vvrzt"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.7.86/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.163.64 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.5.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://42048f3c-cde1-2850-a851-6c4d757659cc,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.5.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229202944 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124345344 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:23 +0000 UTC,LastTransitionTime:2020-01-24 20:20:23 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:35:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:35:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:35:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:35:50 +0000 UTC,LastTransitionTime:2020-01-24 17:44:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-vvrzt,},NodeAddress{Type:ExternalIP,Address:10.193.7.86,},NodeAddress{Type:InternalIP,Address:10.193.7.86,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5c2ea802c8014c7aa88cb9bbbea1711b,SystemUUID:3C8F0442-E1CD-5028-A851-6C4D757659CC,BootID:9e70f010-f55d-4f1e-9e44-645f0afe4c09,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 
k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:36:29.380: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:36:29.388: INFO: | |
| Logging pods the kubelet thinks are on node workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:36:29.395: INFO: kube-proxy-5jvhn started at 2020-01-24 17:31:43 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:36:29.395: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:36:29.395: INFO: calico-node-zpwz4 started at 2020-01-24 20:19:59 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:36:29.395: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:36:29.395: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:36:29.395: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:36:29.395: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:36:29.395: INFO: vsphere-csi-node-ttr82 started at 2020-01-24 17:44:44 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:36:29.395: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:36:29.395: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:36:29.395: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:36:29.395: INFO: sonobuoy started at 2020-01-24 20:21:31 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:36:29.395: INFO: Container kube-sonobuoy ready: true, restart count 0 | |
| Jan 24 20:36:29.395: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-9jn4t started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:36:29.395: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:36:29.395: INFO: Container systemd-logs ready: true, restart count 0 | |
| W0124 20:36:29.401185 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:36:29.545: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:36:29.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
| STEP: Destroying namespace "network-policy-7621" for this suite. | |
| Jan 24 20:36:39.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered | |
| Jan 24 20:36:39.679: INFO: namespace network-policy-7621 deletion completed in 10.128296309s | |
| STEP: Destroying namespace "network-policy-b-75" for this suite. | |
| Jan 24 20:36:45.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered | |
| Jan 24 20:36:45.823: INFO: namespace network-policy-b-75 deletion completed in 6.143383914s | |
| • Failure [89.773 seconds] | |
| [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 | |
| NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:56 | |
| should enforce policy to allow traffic from pods within server namespace based on PodSelector [Feature:NetworkPolicy] [It] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:99 | |
| Jan 24 20:36:28.460: Error getting container logs: the server rejected our request for an unknown reason (get pods client-a-plwfc) | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:1421 | |
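Both failures in this stretch of the run end the same way: once the client pod finishes, the framework tries to fetch its container logs for the failure message, and the API server rejects that request ("Error getting container logs: the server rejected our request"). Below is a minimal client-go sketch of the same call, useful for reproducing the rejection by hand. It targets the pod and namespace from the failure above, assumes client-go of the v1.16 era (where Stream() takes no context), and only works while the namespace still exists:

// logfetch.go — reproduce the framework's container-log fetch by hand.
package main

import (
	"flag"
	"fmt"
	"io/ioutil"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := flag.String("kubeconfig", "", "path to a kubeconfig file")
	flag.Parse()

	cfg, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The same GetLogs request the framework issues after the client pod
	// finishes; in the run above, this is the request the server rejects.
	// (network-policy-7621 is destroyed later in the log, so replaying the
	// exact call is only meaningful while the namespace is still around.)
	req := cs.CoreV1().Pods("network-policy-7621").GetLogs("client-a-plwfc",
		&corev1.PodLogOptions{Container: "client-a-container"})
	stream, err := req.Stream() // v1.16-era client-go; newer versions take a context
	if err != nil {
		fmt.Printf("Error getting container logs: %v\n", err)
		return
	}
	defer stream.Close()
	logs, err := ioutil.ReadAll(stream)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(logs))
}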
| ------------------------------ | |
| SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS | |
| ------------------------------ | |
| [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client | |
| should stop enforcing policies after they are deleted [Feature:NetworkPolicy] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:1119 | |
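(The flow of this test, visible in the steps that follow: confirm both server ports are reachable with no policy in place, apply a deny-all policy and expect a client to be blocked, allow only client-a and expect it to connect, then delete the policies and expect traffic to flow again. This run never reaches the deletion step: the client that should connect under the allow policy fails, and the follow-up attempt to read its container logs is rejected, the same way as in the failure above.)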
| [BeforeEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
| STEP: Creating a kubernetes client | |
| Jan 24 20:36:45.830: INFO: >>> kubeConfig: /tmp/kubeconfig-418650170 | |
| STEP: Building a namespace api object, basename network-policy | |
| STEP: Waiting for a default service account to be provisioned in namespace | |
| [BeforeEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:51 | |
| [BeforeEach] NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:57 | |
| STEP: Creating a simple server that serves on port 80 and 81. | |
| STEP: Creating a server pod server in namespace network-policy-6489 | |
| Jan 24 20:36:45.934: INFO: Created pod server-v5xjg | |
| STEP: Creating a service svc-server for pod server in namespace network-policy-6489 | |
| Jan 24 20:36:45.968: INFO: Created service svc-server | |
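The "simple server" is one pod with one agnhost container per port; the events collected later in this test show the resulting server-container-80/server-container-81 pair running agnhost:2.6. The sketch below builds such a pod with the Kubernetes API types; the porter subcommand and the SERVE_PORT_* env convention mirror the upstream e2e helper and should be read as an approximation, not a copy of it:

package sketch

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// serverPod builds a pod with one agnhost container per port, mirroring
// the server-container-80/server-container-81 pair seen in the events
// later in this log.
func serverPod(namespace string, ports ...int32) *corev1.Pod {
	var containers []corev1.Container
	for _, port := range ports {
		containers = append(containers, corev1.Container{
			Name:  fmt.Sprintf("server-container-%d", port),
			Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.6",
			Args:  []string{"porter"}, // serves the ports named by SERVE_PORT_* env vars
			Env: []corev1.EnvVar{{
				Name:  fmt.Sprintf("SERVE_PORT_%d", port),
				Value: "foo",
			}},
			Ports: []corev1.ContainerPort{{ContainerPort: port}},
		})
	}
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: "server-",
			Namespace:    namespace,
			// The policies later in the test select the server by this label.
			Labels: map[string]string{"pod-name": "server"},
		},
		Spec: corev1.PodSpec{Containers: containers},
	}
}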
| STEP: Waiting for pod ready | |
| STEP: Testing pods can connect to both ports when no policy is present. | |
| STEP: Creating client pod client-can-connect-80 that should successfully connect to svc-server. | |
| Jan 24 20:36:57.997: INFO: Waiting for client-can-connect-80-gkkcd to complete. | |
| Jan 24 20:37:00.045: INFO: Waiting for client-can-connect-80-gkkcd to complete. | |
| Jan 24 20:37:00.045: INFO: Waiting up to 5m0s for pod "client-can-connect-80-gkkcd" in namespace "network-policy-6489" to be "success or failure" | |
| Jan 24 20:37:00.048: INFO: Pod "client-can-connect-80-gkkcd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.668992ms | |
| STEP: Saw pod success | |
| Jan 24 20:37:00.048: INFO: Pod "client-can-connect-80-gkkcd" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-can-connect-80-gkkcd | |
| STEP: Creating client pod client-can-connect-81 that should successfully connect to svc-server. | |
| Jan 24 20:37:00.083: INFO: Waiting for client-can-connect-81-tkgvf to complete. | |
| Jan 24 20:37:04.093: INFO: Waiting for client-can-connect-81-tkgvf to complete. | |
| Jan 24 20:37:04.094: INFO: Waiting up to 5m0s for pod "client-can-connect-81-tkgvf" in namespace "network-policy-6489" to be "success or failure" | |
| Jan 24 20:37:04.098: INFO: Pod "client-can-connect-81-tkgvf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 3.865885ms | |
| STEP: Saw pod success | |
| Jan 24 20:37:04.099: INFO: Pod "client-can-connect-81-tkgvf" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-can-connect-81-tkgvf | |
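Each client-can-connect-* pod above is a busybox:1.29 pod (per the events below) whose exit status encodes the verdict: the suite waits for the pod to terminate and reads Succeeded as "connected" and Failed as "blocked". The exact probe command is not captured in this log, so the wget retry loop in this sketch is a plausible stand-in rather than the test's literal command:

package sketch

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// clientPod builds a probe pod whose exit status is the verdict:
// exit 0 (pod Succeeded) means the service answered; exit 1 (pod
// Failed) means it never did.
func clientPod(namespace, name, service string, port int) *corev1.Pod {
	probe := fmt.Sprintf(
		"for i in $(seq 1 5); do wget -T 8 %s:%d -O - && exit 0 || sleep 1; done; exit 1",
		service, port)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: name + "-",
			Namespace:    namespace,
			Labels:       map[string]string{"pod-name": name},
		},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // terminal phase carries the verdict
			Containers: []corev1.Container{{
				Name:    name + "-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"/bin/sh", "-c", probe},
			}},
		},
	}
}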
| [It] should stop enforcing policies after they are deleted [Feature:NetworkPolicy] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:1119 | |
| STEP: Creating a network policy for the server which denies all traffic. | |
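Deny-all is the standard NetworkPolicy shape for this step: select the server pod and give the policy an empty ingress rule list, so no peer is whitelisted. A sketch with the networking/v1 types; the policy name and the pod-name label follow the upstream test's conventions rather than anything printed in this log:

package sketch

import (
	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// denyAllToServer selects the server pod and supplies an empty ingress
// rule list, which allows nothing in.
func denyAllToServer(namespace string) *networkingv1.NetworkPolicy {
	return &networkingv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "deny-all", Namespace: namespace},
		Spec: networkingv1.NetworkPolicySpec{
			PodSelector: metav1.LabelSelector{
				MatchLabels: map[string]string{"pod-name": "server"},
			},
			// An empty Ingress list whitelists no peers, so all inbound
			// traffic to the selected server pod is denied once the
			// policy exists.
			Ingress: []networkingv1.NetworkPolicyIngressRule{},
		},
	}
}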
| STEP: Creating client-a which should not be able to contact the server. | |
| STEP: Creating client pod client-a that should not be able to connect to svc-server. | |
| Jan 24 20:37:04.146: INFO: Waiting for client-a-t6pz4 to complete. | |
| Jan 24 20:37:04.146: INFO: Waiting up to 5m0s for pod "client-a-t6pz4" in namespace "network-policy-6489" to be "success or failure" | |
| Jan 24 20:37:04.157: INFO: Pod "client-a-t6pz4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.838337ms | |
| Jan 24 20:37:06.164: INFO: Pod "client-a-t6pz4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017571822s | |
| Jan 24 20:37:08.168: INFO: Pod "client-a-t6pz4": Phase="Running", Reason="", readiness=true. Elapsed: 4.021716013s | |
| Jan 24 20:37:10.172: INFO: Pod "client-a-t6pz4": Phase="Running", Reason="", readiness=true. Elapsed: 6.025634132s | |
| Jan 24 20:37:12.176: INFO: Pod "client-a-t6pz4": Phase="Running", Reason="", readiness=true. Elapsed: 8.029980296s | |
| Jan 24 20:37:14.180: INFO: Pod "client-a-t6pz4": Phase="Running", Reason="", readiness=true. Elapsed: 10.033861632s | |
| Jan 24 20:37:16.184: INFO: Pod "client-a-t6pz4": Phase="Running", Reason="", readiness=true. Elapsed: 12.037962847s | |
| Jan 24 20:37:18.188: INFO: Pod "client-a-t6pz4": Phase="Running", Reason="", readiness=true. Elapsed: 14.042040431s | |
| Jan 24 20:37:20.193: INFO: Pod "client-a-t6pz4": Phase="Running", Reason="", readiness=true. Elapsed: 16.046733247s | |
| Jan 24 20:37:22.197: INFO: Pod "client-a-t6pz4": Phase="Running", Reason="", readiness=true. Elapsed: 18.05057164s | |
| Jan 24 20:37:24.202: INFO: Pod "client-a-t6pz4": Phase="Running", Reason="", readiness=true. Elapsed: 20.055647023s | |
| Jan 24 20:37:26.207: INFO: Pod "client-a-t6pz4": Phase="Running", Reason="", readiness=true. Elapsed: 22.061061336s | |
| Jan 24 20:37:28.212: INFO: Pod "client-a-t6pz4": Phase="Running", Reason="", readiness=true. Elapsed: 24.065232548s | |
| Jan 24 20:37:30.216: INFO: Pod "client-a-t6pz4": Phase="Running", Reason="", readiness=true. Elapsed: 26.069461029s | |
| Jan 24 20:37:32.222: INFO: Pod "client-a-t6pz4": Phase="Running", Reason="", readiness=true. Elapsed: 28.075541567s | |
| Jan 24 20:37:34.226: INFO: Pod "client-a-t6pz4": Phase="Running", Reason="", readiness=true. Elapsed: 30.079655999s | |
| Jan 24 20:37:36.230: INFO: Pod "client-a-t6pz4": Phase="Running", Reason="", readiness=true. Elapsed: 32.083480581s | |
| Jan 24 20:37:38.234: INFO: Pod "client-a-t6pz4": Phase="Running", Reason="", readiness=true. Elapsed: 34.087872544s | |
| Jan 24 20:37:40.239: INFO: Pod "client-a-t6pz4": Phase="Running", Reason="", readiness=true. Elapsed: 36.092171109s | |
| Jan 24 20:37:42.243: INFO: Pod "client-a-t6pz4": Phase="Running", Reason="", readiness=true. Elapsed: 38.096168111s | |
| Jan 24 20:37:44.248: INFO: Pod "client-a-t6pz4": Phase="Running", Reason="", readiness=true. Elapsed: 40.101619745s | |
| Jan 24 20:37:46.253: INFO: Pod "client-a-t6pz4": Phase="Running", Reason="", readiness=true. Elapsed: 42.106635537s | |
| Jan 24 20:37:48.259: INFO: Pod "client-a-t6pz4": Phase="Running", Reason="", readiness=true. Elapsed: 44.112406014s | |
| Jan 24 20:37:50.264: INFO: Pod "client-a-t6pz4": Phase="Running", Reason="", readiness=true. Elapsed: 46.117352093s | |
| Jan 24 20:37:52.269: INFO: Pod "client-a-t6pz4": Phase="Failed", Reason="", readiness=false. Elapsed: 48.122682871s | |
| STEP: Cleaning up the pod client-a-t6pz4 | |
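The two-second cadence of the Phase= lines above is the framework polling the pod until it reaches a terminal phase; 48 seconds of Running ending in Failed means the probe exhausted its retries without reaching the server, which is exactly what the deny-all policy should produce. A condensed sketch of that wait, assuming a configured clientset and the no-context Get of v1.16-era client-go:

package sketch

import (
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForVerdict re-reads the pod every two seconds (matching the cadence
// of the Phase= lines above) until it reaches a terminal phase, then maps
// Succeeded/Failed onto connected/blocked.
func waitForVerdict(cs *kubernetes.Clientset, namespace, pod string) (connected bool, err error) {
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		p, err := cs.CoreV1().Pods(namespace).Get(pod, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch p.Status.Phase {
		case corev1.PodSucceeded:
			connected = true
			return true, nil
		case corev1.PodFailed:
			connected = false
			return true, nil
		}
		return false, nil // Pending or Running: keep polling
	})
	return connected, err
}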
| STEP: Creating a network policy for the server which allows traffic only from client-a. | |
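The allow policy is the same shape as deny-all plus a single ingress rule whose from peer is a pod selector matching client-a; a sketch (the policy name here is illustrative):

package sketch

import (
	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// allowFromClientA widens the deny-all policy with one ingress rule
// whose peer is a pod selector for client-a.
func allowFromClientA(namespace string) *networkingv1.NetworkPolicy {
	return &networkingv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "allow-client-a-via-pod-selector", Namespace: namespace},
		Spec: networkingv1.NetworkPolicySpec{
			PodSelector: metav1.LabelSelector{
				MatchLabels: map[string]string{"pod-name": "server"},
			},
			Ingress: []networkingv1.NetworkPolicyIngressRule{{
				From: []networkingv1.NetworkPolicyPeer{{
					PodSelector: &metav1.LabelSelector{
						MatchLabels: map[string]string{"pod-name": "client-a"},
					},
				}},
			}},
		},
	}
}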
| STEP: Creating client-a which should be able to contact the server. | |
| STEP: Creating client pod client-a that should successfully connect to svc-server. | |
| Jan 24 20:37:52.334: INFO: Waiting for client-a-md87c to complete. | |
| Jan 24 20:38:40.343: INFO: Waiting for client-a-md87c to complete. | |
| Jan 24 20:38:40.345: INFO: Waiting up to 5m0s for pod "client-a-md87c" in namespace "network-policy-6489" to be "success or failure" | |
| Jan 24 20:38:40.348: INFO: Pod "client-a-md87c": Phase="Failed", Reason="", readiness=false. Elapsed: 3.260074ms | |
| Jan 24 20:38:40.353: FAIL: Error getting container logs: the server rejected our request for an unknown reason (get pods client-a-md87c) | |
| STEP: Cleaning up the pod client-a-md87c | |
| [AfterEach] NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:75 | |
| STEP: Cleaning up the server. | |
| STEP: Cleaning up the server's service. | |
| [AfterEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
| STEP: Collecting events from namespace "network-policy-6489". | |
| STEP: Found 25 events. | |
| Jan 24 20:38:40.467: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-a-md87c: {default-scheduler } Scheduled: Successfully assigned network-policy-6489/client-a-md87c to workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:38:40.470: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-a-t6pz4: {default-scheduler } Scheduled: Successfully assigned network-policy-6489/client-a-t6pz4 to workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:38:40.470: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-80-gkkcd: {default-scheduler } Scheduled: Successfully assigned network-policy-6489/client-can-connect-80-gkkcd to workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:38:40.470: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-81-tkgvf: {default-scheduler } Scheduled: Successfully assigned network-policy-6489/client-can-connect-81-tkgvf to workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:38:40.470: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for server-v5xjg: {default-scheduler } Scheduled: Successfully assigned network-policy-6489/server-v5xjg to workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:38:40.470: INFO: At 2020-01-24 20:36:46 +0000 UTC - event for server-v5xjg: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.6" already present on machine | |
| Jan 24 20:38:40.470: INFO: At 2020-01-24 20:36:47 +0000 UTC - event for server-v5xjg: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Created: Created container server-container-81 | |
| Jan 24 20:38:40.470: INFO: At 2020-01-24 20:36:47 +0000 UTC - event for server-v5xjg: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Created: Created container server-container-80 | |
| Jan 24 20:38:40.470: INFO: At 2020-01-24 20:36:47 +0000 UTC - event for server-v5xjg: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Started: Started container server-container-81 | |
| Jan 24 20:38:40.470: INFO: At 2020-01-24 20:36:47 +0000 UTC - event for server-v5xjg: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.6" already present on machine | |
| Jan 24 20:38:40.470: INFO: At 2020-01-24 20:36:47 +0000 UTC - event for server-v5xjg: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Started: Started container server-container-80 | |
| Jan 24 20:38:40.470: INFO: At 2020-01-24 20:36:58 +0000 UTC - event for client-can-connect-80-gkkcd: {kubelet workload-cluster-4-md-0-5c7f78dbc8-vvrzt} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine | |
| Jan 24 20:38:40.470: INFO: At 2020-01-24 20:36:59 +0000 UTC - event for client-can-connect-80-gkkcd: {kubelet workload-cluster-4-md-0-5c7f78dbc8-vvrzt} Created: Created container client-can-connect-80-container | |
| Jan 24 20:38:40.470: INFO: At 2020-01-24 20:36:59 +0000 UTC - event for client-can-connect-80-gkkcd: {kubelet workload-cluster-4-md-0-5c7f78dbc8-vvrzt} Started: Started container client-can-connect-80-container | |
| Jan 24 20:38:40.470: INFO: At 2020-01-24 20:37:01 +0000 UTC - event for client-can-connect-81-tkgvf: {kubelet workload-cluster-4-md-0-5c7f78dbc8-rnz88} Created: Created container client-can-connect-81-container | |
| Jan 24 20:38:40.470: INFO: At 2020-01-24 20:37:01 +0000 UTC - event for client-can-connect-81-tkgvf: {kubelet workload-cluster-4-md-0-5c7f78dbc8-rnz88} Started: Started container client-can-connect-81-container | |
| Jan 24 20:38:40.470: INFO: At 2020-01-24 20:37:01 +0000 UTC - event for client-can-connect-81-tkgvf: {kubelet workload-cluster-4-md-0-5c7f78dbc8-rnz88} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine | |
| Jan 24 20:38:40.470: INFO: At 2020-01-24 20:37:05 +0000 UTC - event for client-a-t6pz4: {kubelet workload-cluster-4-md-0-5c7f78dbc8-dj56d} Started: Started container client-a-container | |
| Jan 24 20:38:40.470: INFO: At 2020-01-24 20:37:05 +0000 UTC - event for client-a-t6pz4: {kubelet workload-cluster-4-md-0-5c7f78dbc8-dj56d} Created: Created container client-a-container | |
| Jan 24 20:38:40.470: INFO: At 2020-01-24 20:37:05 +0000 UTC - event for client-a-t6pz4: {kubelet workload-cluster-4-md-0-5c7f78dbc8-dj56d} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine | |
| Jan 24 20:38:40.470: INFO: At 2020-01-24 20:37:53 +0000 UTC - event for client-a-md87c: {kubelet workload-cluster-4-md-0-5c7f78dbc8-rnz88} Started: Started container client-a-container | |
| Jan 24 20:38:40.470: INFO: At 2020-01-24 20:37:53 +0000 UTC - event for client-a-md87c: {kubelet workload-cluster-4-md-0-5c7f78dbc8-rnz88} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine | |
| Jan 24 20:38:40.470: INFO: At 2020-01-24 20:37:53 +0000 UTC - event for client-a-md87c: {kubelet workload-cluster-4-md-0-5c7f78dbc8-rnz88} Created: Created container client-a-container | |
| Jan 24 20:38:40.470: INFO: At 2020-01-24 20:38:40 +0000 UTC - event for server-v5xjg: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Killing: Stopping container server-container-80 | |
| Jan 24 20:38:40.470: INFO: At 2020-01-24 20:38:40 +0000 UTC - event for server-v5xjg: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Killing: Stopping container server-container-81 | |
| Jan 24 20:38:40.477: INFO: POD NODE PHASE GRACE CONDITIONS | |
| Jan 24 20:38:40.477: INFO: server-v5xjg workload-cluster-4-md-0-5c7f78dbc8-tgjll Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 20:36:45 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 20:36:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 20:36:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 20:36:45 +0000 UTC }] | |
| Jan 24 20:38:40.477: INFO: | |
| Jan 24 20:38:40.495: INFO: | |
| Logging node info for node workload-cluster-4-controlplane-0 | |
| Jan 24 20:38:40.499: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-controlplane-0 /api/v1/nodes/workload-cluster-4-controlplane-0 82506459-dd0f-48e7-bd9d-75efde1111f6 37924 0 2020-01-24 17:18:10 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-controlplane-0 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-controlplane-0"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.28.186/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.210.0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://4204e6d6-9184-b271-f650-0dabd9f75516,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229202944 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124345344 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:50 +0000 UTC,LastTransitionTime:2020-01-24 20:20:50 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:38:05 +0000 UTC,LastTransitionTime:2020-01-24 17:18:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:38:05 +0000 UTC,LastTransitionTime:2020-01-24 17:18:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:38:05 +0000 UTC,LastTransitionTime:2020-01-24 17:18:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:38:05 +0000 UTC,LastTransitionTime:2020-01-24 17:44:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-controlplane-0,},NodeAddress{Type:ExternalIP,Address:10.193.28.186,},NodeAddress{Type:InternalIP,Address:10.193.28.186,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:80b9ac3a81a84f4e8f29dac5e41ffd1f,SystemUUID:D6E60442-8491-71B2-F650-0DABD9F75516,BootID:0411b150-3c1b-4216-a7bc-d4a359e6753d,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/syncer@sha256:fc80ec77a2ab4b58ddfa259a938f6d741933566011d56e5ffcc8680cc83538fe gcr.io/cloud-provider-vsphere/csi/release/syncer:v1.0.1],SizeBytes:38454608,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner:v1.2.1],SizeBytes:18924297,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/cpi/release/manager@sha256:64de5c7f10e55703142383fade40886091528ca505f00c98d57e27f10f04fc03 
gcr.io/cloud-provider-vsphere/cpi/release/manager:v1.1.0],SizeBytes:16201394,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher:v1.1.1],SizeBytes:15526843,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:38:40.500: INFO: | |
| Logging kubelet events for node workload-cluster-4-controlplane-0 | |
| Jan 24 20:38:40.514: INFO: | |
| Logging pods the kubelet thinks are on node workload-cluster-4-controlplane-0 | |
| Jan 24 20:38:40.535: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-t9xfs started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:38:40.535: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:38:40.535: INFO: Container systemd-logs ready: true, restart count 0 | |
| Jan 24 20:38:40.535: INFO: etcd-workload-cluster-4-controlplane-0 started at 2020-01-24 17:17:56 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:38:40.535: INFO: Container etcd ready: true, restart count 0 | |
| Jan 24 20:38:40.535: INFO: kube-apiserver-workload-cluster-4-controlplane-0 started at 2020-01-24 17:17:56 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:38:40.535: INFO: Container kube-apiserver ready: true, restart count 0 | |
| Jan 24 20:38:40.535: INFO: kube-scheduler-workload-cluster-4-controlplane-0 started at 2020-01-24 17:17:56 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:38:40.535: INFO: Container kube-scheduler ready: true, restart count 1 | |
| Jan 24 20:38:40.535: INFO: kube-proxy-ng9fc started at 2020-01-24 17:19:39 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:38:40.535: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:38:40.535: INFO: vsphere-csi-node-bvs7m started at 2020-01-24 17:44:44 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:38:40.535: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:38:40.535: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:38:40.535: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:38:40.535: INFO: calico-node-dtwvg started at 2020-01-24 20:19:59 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:38:40.535: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:38:40.535: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:38:40.535: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:38:40.535: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:38:40.535: INFO: kube-controller-manager-workload-cluster-4-controlplane-0 started at 2020-01-24 17:17:56 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:38:40.535: INFO: Container kube-controller-manager ready: true, restart count 2 | |
| Jan 24 20:38:40.535: INFO: vsphere-cloud-controller-manager-xgbwj started at 2020-01-24 17:19:42 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:38:40.535: INFO: Container vsphere-cloud-controller-manager ready: true, restart count 0 | |
| Jan 24 20:38:40.535: INFO: vsphere-csi-controller-0 started at 2020-01-24 17:44:46 +0000 UTC (0+5 container statuses recorded) | |
| Jan 24 20:38:40.535: INFO: Container csi-attacher ready: true, restart count 0 | |
| Jan 24 20:38:40.535: INFO: Container csi-provisioner ready: true, restart count 0 | |
| Jan 24 20:38:40.535: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:38:40.535: INFO: Container vsphere-csi-controller ready: true, restart count 0 | |
| Jan 24 20:38:40.535: INFO: Container vsphere-syncer ready: true, restart count 0 | |
| W0124 20:38:40.554070 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:38:40.779: INFO: | |
| Latency metrics for node workload-cluster-4-controlplane-0 | |
| Jan 24 20:38:40.779: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:38:40.784: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-dj56d /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-dj56d e3b9736e-3d53-4225-9f32-9b5ed43a351d 37961 0 2020-01-24 17:31:41 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-dj56d kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-dj56d"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.22.156/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.222.0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://42045096-e00b-ae55-1d85-007f6b8efaee,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229211136 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124353536 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:50 +0000 UTC,LastTransitionTime:2020-01-24 20:20:50 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:38:20 +0000 UTC,LastTransitionTime:2020-01-24 17:31:41 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:38:20 +0000 UTC,LastTransitionTime:2020-01-24 17:31:41 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:38:20 +0000 UTC,LastTransitionTime:2020-01-24 17:31:41 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:38:20 +0000 UTC,LastTransitionTime:2020-01-24 17:44:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-dj56d,},NodeAddress{Type:ExternalIP,Address:10.193.22.156,},NodeAddress{Type:InternalIP,Address:10.193.22.156,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b52db3bea2844f019e98985de63db412,SystemUUID:96500442-0BE0-55AE-1D85-007F6B8EFAEE,BootID:4ec1b10c-fc30-41e5-9d16-834bd0f5cf13,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[docker.io/jayunit100/kube-controllers@sha256:80d3bff453f163c62d0dadf44d5056053144eceaf0d05fe8c90b861aa4d5d602 docker.io/jayunit100/kube-controllers:tkg2],SizeBytes:21160877,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 
k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:38:40.785: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:38:40.795: INFO: | |
| Logging pods the kubelet thinks are on node workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:38:40.822: INFO: calico-node-8t722 started at 2020-01-24 20:19:58 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:38:40.822: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:38:40.822: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:38:40.822: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:38:40.822: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:38:40.822: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-25557 started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:38:40.822: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:38:40.822: INFO: Container systemd-logs ready: true, restart count 0 | |
| Jan 24 20:38:40.822: INFO: vsphere-csi-node-tkb9r started at 2020-01-24 17:44:43 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:38:40.822: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:38:40.822: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:38:40.822: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:38:40.822: INFO: coredns-5644d7b6d9-m4lts started at 2020-01-24 17:44:44 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:38:40.822: INFO: Container coredns ready: true, restart count 0 | |
| Jan 24 20:38:40.822: INFO: coredns-5644d7b6d9-ph9m8 started at 2020-01-24 17:44:44 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:38:40.822: INFO: Container coredns ready: true, restart count 0 | |
| Jan 24 20:38:40.822: INFO: kube-proxy-9cjbv started at 2020-01-24 17:31:41 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:38:40.822: INFO: Container kube-proxy ready: true, restart count 0 | |
| W0124 20:38:40.832062 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:38:40.923: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:38:40.923: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-l8q2f | |
| Jan 24 20:38:40.930: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-l8q2f /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-l8q2f 3be5c565-a115-4d45-a1be-f70db9c1b166 37873 0 2020-01-24 17:31:43 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-l8q2f kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-l8q2f"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.1.57/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.184.192 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://4204aa91-067b-70ca-7b40-e384413568c2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229202944 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124345344 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:23 +0000 UTC,LastTransitionTime:2020-01-24 20:20:23 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:37:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:37:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:37:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:37:51 +0000 UTC,LastTransitionTime:2020-01-24 17:44:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-l8q2f,},NodeAddress{Type:ExternalIP,Address:10.193.1.57,},NodeAddress{Type:InternalIP,Address:10.193.1.57,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:45e1a30e9fcd470795575df43a4210d5,SystemUUID:91AA0442-7B06-CA70-7B40-E384413568C2,BootID:3aae672c-d64c-4549-8c6f-2c6720e8ef80,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:0ae49fdc22b3d81e85b98ea4587628bb78f49eaf5c14cdb47a13d8918d62cc00 gcr.io/google-containers/conformance:v1.16.3],SizeBytes:195864937,},ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 
gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:38:40.931: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-l8q2f | |
| Jan 24 20:38:40.938: INFO: | |
| Logging pods the kubelet thinks are on node workload-cluster-4-md-0-5c7f78dbc8-l8q2f | |
| Jan 24 20:38:40.961: INFO: kube-proxy-pw7c7 started at 2020-01-24 17:31:43 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:38:40.961: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:38:40.961: INFO: calico-node-rqffr started at 2020-01-24 20:19:59 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:38:40.961: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:38:40.961: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:38:40.961: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:38:40.961: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:38:40.961: INFO: vsphere-csi-node-bml6x started at 2020-01-24 17:44:44 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:38:40.961: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:38:40.961: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:38:40.961: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:38:40.961: INFO: sonobuoy-e2e-job-60496bb95c8b4e15 started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:38:40.961: INFO: Container e2e ready: true, restart count 0 | |
| Jan 24 20:38:40.961: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:38:40.961: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-pvvzm started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:38:40.961: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:38:40.961: INFO: Container systemd-logs ready: true, restart count 0 | |
| W0124 20:38:40.976847 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
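Note on the warning above: the e2e metrics grabber only scrapes scheduler, controller-manager, and cluster-autoscaler metrics when it can identify a registered master node, and in this cluster it cannot, so the same warning repeats for every node below. An illustrative way to see which nodes carry the master role label (this command is not part of the test run):

    # list nodes with their master role label, if any
    kubectl get nodes -L node-role.kubernetes.io/master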
| Jan 24 20:38:41.099: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-l8q2f | |
| Jan 24 20:38:41.099: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:38:41.105: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-rnz88 /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-rnz88 6a50990d-c3aa-408e-89a9-b96a59028428 37948 0 2020-01-24 17:31:34 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-rnz88 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-rnz88"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.19.181/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.158.1 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://42041b55-5b6a-2e5f-e33f-ad3ffa7f7b6e,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229211136 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124353536 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:37 +0000 UTC,LastTransitionTime:2020-01-24 20:20:37 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:38:14 +0000 UTC,LastTransitionTime:2020-01-24 17:31:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:38:14 +0000 UTC,LastTransitionTime:2020-01-24 17:31:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:38:14 +0000 UTC,LastTransitionTime:2020-01-24 17:31:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:38:14 +0000 UTC,LastTransitionTime:2020-01-24 17:44:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-rnz88,},NodeAddress{Type:ExternalIP,Address:10.193.19.181,},NodeAddress{Type:InternalIP,Address:10.193.19.181,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:89e27edae47c4afabb7c067d037286ec,SystemUUID:551B0442-6A5B-5F2E-E33F-AD3FFA7F7B6E,BootID:d2a8e20a-f6ad-4b68-b732-e0d7bdbcc492,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:0ae49fdc22b3d81e85b98ea4587628bb78f49eaf5c14cdb47a13d8918d62cc00 gcr.io/google-containers/conformance:v1.16.3],SizeBytes:195864937,},ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:4a7190a4731cf3340c39ab33abf845baf6c81d911c965a879fffc1552e1a1938 
docker.io/calico/kube-controllers:v3.10.3],SizeBytes:21175514,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:38:41.105: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:38:41.112: INFO: | |
| Logging pods the kubelet thinks are on node workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:38:41.130: INFO: kube-proxy-hwjc4 started at 2020-01-24 17:31:34 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:38:41.130: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:38:41.130: INFO: calico-node-5g8mr started at 2020-01-24 20:19:58 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:38:41.130: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:38:41.130: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:38:41.130: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:38:41.130: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:38:41.130: INFO: vsphere-csi-node-6nzwf started at 2020-01-24 17:44:44 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:38:41.130: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:38:41.130: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:38:41.130: INFO: Container vsphere-csi-node ready: false, restart count 9 | |
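The vsphere-csi-node container on this node is not ready and has restarted 9 times; that is unrelated to the NetworkPolicy failures but worth triaging separately. A minimal sketch of how to inspect it, assuming the daemonset runs in kube-system as is typical for the vSphere CSI driver:

    # check recent events and container state for the crash-looping pod
    kubectl -n kube-system describe pod vsphere-csi-node-6nzwf
    # logs from the previous (crashed) instance of the CSI container
    kubectl -n kube-system logs vsphere-csi-node-6nzwf -c vsphere-csi-node --previous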
| Jan 24 20:38:41.130: INFO: calico-kube-controllers-7489ff5b7c-sq5gk started at 2020-01-24 20:20:00 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:38:41.130: INFO: Container calico-kube-controllers ready: true, restart count 0 | |
| Jan 24 20:38:41.130: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-zjw2s started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:38:41.130: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:38:41.130: INFO: Container systemd-logs ready: true, restart count 0 | |
| W0124 20:38:41.137155 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:38:41.232: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:38:41.232: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:38:41.244: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-tgjll /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-tgjll efb8ee46-7fce-4349-92f2-46fa03f02d30 37872 0 2020-01-24 17:31:37 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-tgjll kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-tgjll"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.25.9/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.225.128 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://4204f234-cdde-565a-8623-6c3c5457e3cc,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229202944 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124345344 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:26 +0000 UTC,LastTransitionTime:2020-01-24 20:20:26 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:37:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:37:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:37:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:37:50 +0000 UTC,LastTransitionTime:2020-01-24 17:44:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-tgjll,},NodeAddress{Type:ExternalIP,Address:10.193.25.9,},NodeAddress{Type:InternalIP,Address:10.193.25.9,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bb2c77d8141443fd8503de52dbae52e8,SystemUUID:34F20442-DECD-5A56-8623-6C3C5457E3CC,BootID:849ca1bb-d08e-417e-bf12-485a70ffc44a,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:0ae49fdc22b3d81e85b98ea4587628bb78f49eaf5c14cdb47a13d8918d62cc00 gcr.io/google-containers/conformance:v1.16.3],SizeBytes:195864937,},ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 
gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:38:41.244: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:38:41.252: INFO: | |
| Logging pods the kubelet thinks are on node workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:38:41.268: INFO: kube-proxy-wp7tn started at 2020-01-24 17:31:37 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:38:41.268: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:38:41.268: INFO: calico-node-wvctw started at 2020-01-24 20:19:58 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:38:41.268: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:38:41.268: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:38:41.268: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:38:41.268: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:38:41.268: INFO: vsphere-csi-node-rqv42 started at 2020-01-24 17:44:49 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:38:41.268: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:38:41.268: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:38:41.268: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:38:41.268: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-prfkd started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:38:41.268: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:38:41.268: INFO: Container systemd-logs ready: true, restart count 0 | |
| Jan 24 20:38:41.268: INFO: server-v5xjg started at 2020-01-24 20:36:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:38:41.269: INFO: Container server-container-80 ready: true, restart count 0 | |
| Jan 24 20:38:41.269: INFO: Container server-container-81 ready: true, restart count 0 | |
| W0124 20:38:41.275106 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:38:41.352: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:38:41.352: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:38:41.356: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-vvrzt /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-vvrzt 200933f4-8d1c-469c-b168-43a6ccbf5d04 37871 0 2020-01-24 17:31:43 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-vvrzt kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-vvrzt"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.7.86/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.163.64 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.5.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://42048f3c-cde1-2850-a851-6c4d757659cc,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.5.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229202944 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124345344 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:23 +0000 UTC,LastTransitionTime:2020-01-24 20:20:23 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:37:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:37:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:37:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:37:50 +0000 UTC,LastTransitionTime:2020-01-24 17:44:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-vvrzt,},NodeAddress{Type:ExternalIP,Address:10.193.7.86,},NodeAddress{Type:InternalIP,Address:10.193.7.86,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5c2ea802c8014c7aa88cb9bbbea1711b,SystemUUID:3C8F0442-E1CD-5028-A851-6C4D757659CC,BootID:9e70f010-f55d-4f1e-9e44-645f0afe4c09,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 
k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:38:41.357: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:38:41.363: INFO: | |
| Logging pods the kubelet thinks are on node workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:38:41.382: INFO: kube-proxy-5jvhn started at 2020-01-24 17:31:43 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:38:41.382: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:38:41.382: INFO: calico-node-zpwz4 started at 2020-01-24 20:19:59 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:38:41.382: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:38:41.382: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:38:41.382: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:38:41.382: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:38:41.382: INFO: vsphere-csi-node-ttr82 started at 2020-01-24 17:44:44 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:38:41.382: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:38:41.382: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:38:41.382: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:38:41.382: INFO: sonobuoy started at 2020-01-24 20:21:31 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:38:41.382: INFO: Container kube-sonobuoy ready: true, restart count 0 | |
| Jan 24 20:38:41.382: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-9jn4t started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:38:41.382: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:38:41.382: INFO: Container systemd-logs ready: true, restart count 0 | |
| W0124 20:38:41.388244 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:38:41.489: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:38:41.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
| STEP: Destroying namespace "network-policy-6489" for this suite. | |
| Jan 24 20:38:53.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered | |
| Jan 24 20:38:53.609: INFO: namespace network-policy-6489 deletion completed in 12.114123621s | |
| • Failure [127.779 seconds] | |
| [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 | |
| NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:56 | |
| should stop enforcing policies after they are deleted [Feature:NetworkPolicy] [It] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:1119 | |
| Jan 24 20:38:40.353: Error getting container logs: the server rejected our request for an unknown reason (get pods client-a-md87c) | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:1421 | |
| ------------------------------ | |
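Because each test destroys its namespace on teardown (network-policy-6489 above is already gone), the per-pod logs behind failures like the one above are easiest to recover from the aggregated sonobuoy output. A sketch, assuming sonobuoy v0.17.x command and flag names; verify against your version:

    # confirm the run has finished, then download and summarize the e2e results
    sonobuoy status
    sonobuoy retrieve . --kubeconfig=.//out/workload-cluster-4/kubeconfig
    sonobuoy results *_sonobuoy_*.tar.gz --mode=detailed --plugin=e2e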
| SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS | |
| ------------------------------ | |
| [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client | |
| should enforce updated policy [Feature:NetworkPolicy] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:684 | |
| [BeforeEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
| STEP: Creating a kubernetes client | |
| Jan 24 20:38:53.614: INFO: >>> kubeConfig: /tmp/kubeconfig-418650170 | |
| STEP: Building a namespace api object, basename network-policy | |
| STEP: Waiting for a default service account to be provisioned in namespace | |
| [BeforeEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:51 | |
| [BeforeEach] NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:57 | |
| STEP: Creating a simple server that serves on port 80 and 81. | |
| STEP: Creating a server pod server in namespace network-policy-9165 | |
| Jan 24 20:38:53.670: INFO: Created pod server-ndc2k | |
| STEP: Creating a service svc-server for pod server in namespace network-policy-9165 | |
| Jan 24 20:38:53.712: INFO: Created service svc-server | |
| STEP: Waiting for pod ready | |
| STEP: Testing pods can connect to both ports when no policy is present. | |
| STEP: Creating client pod client-can-connect-80 that should successfully connect to svc-server. | |
| Jan 24 20:39:05.750: INFO: Waiting for client-can-connect-80-5nl4q to complete. | |
| Jan 24 20:39:09.761: INFO: Waiting for client-can-connect-80-5nl4q to complete. | |
| Jan 24 20:39:09.761: INFO: Waiting up to 5m0s for pod "client-can-connect-80-5nl4q" in namespace "network-policy-9165" to be "success or failure" | |
| Jan 24 20:39:09.764: INFO: Pod "client-can-connect-80-5nl4q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 3.269518ms | |
| STEP: Saw pod success | |
| Jan 24 20:39:09.764: INFO: Pod "client-can-connect-80-5nl4q" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-can-connect-80-5nl4q | |
| STEP: Creating client pod client-can-connect-81 that should successfully connect to svc-server. | |
| Jan 24 20:39:09.805: INFO: Waiting for client-can-connect-81-phmj9 to complete. | |
| Jan 24 20:39:11.837: INFO: Waiting for client-can-connect-81-phmj9 to complete. | |
| Jan 24 20:39:11.837: INFO: Waiting up to 5m0s for pod "client-can-connect-81-phmj9" in namespace "network-policy-9165" to be "success or failure" | |
| Jan 24 20:39:11.843: INFO: Pod "client-can-connect-81-phmj9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 5.227941ms | |
| STEP: Saw pod success | |
| Jan 24 20:39:11.843: INFO: Pod "client-can-connect-81-phmj9" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-can-connect-81-phmj9 | |
| [It] should enforce updated policy [Feature:NetworkPolicy] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:684 | |
| STEP: Creating a network policy for the Service which allows traffic from pod at a port | |
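For reference, a minimal sketch of the kind of ingress policy this step creates; the policy name, pod labels, and port below are illustrative and not taken from the test source:

    # illustrative: select the server pod and admit only client-a on TCP 80
    kubectl -n network-policy-9165 apply -f - <<'EOF'
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-client-a-on-port-80
    spec:
      podSelector:
        matchLabels:
          pod-name: server
      ingress:
      - from:
        - podSelector:
            matchLabels:
              pod-name: client-a
        ports:
        - protocol: TCP
          port: 80
    EOF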
| STEP: Creating client pod client-a that should successfully connect to svc-server. | |
| Jan 24 20:39:11.879: INFO: Waiting for client-a-8b5jn to complete. | |
| Jan 24 20:39:59.897: INFO: Waiting for client-a-8b5jn to complete. | |
| Jan 24 20:39:59.897: INFO: Waiting up to 5m0s for pod "client-a-8b5jn" in namespace "network-policy-9165" to be "success or failure" | |
| Jan 24 20:39:59.901: INFO: Pod "client-a-8b5jn": Phase="Failed", Reason="", readiness=false. Elapsed: 3.51994ms | |
| Jan 24 20:39:59.905: FAIL: Error getting container logs: the server rejected our request for an unknown reason (get pods client-a-8b5jn) | |
| STEP: Cleaning up the pod client-a-8b5jn | |
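The test failed because client-a-8b5jn ended in phase Failed, and the follow-up log fetch was then rejected by the API server. While the namespace still exists (it is destroyed in AfterEach), the same information can be pulled manually:

    # inspect the failed client pod and fetch its logs directly
    kubectl -n network-policy-9165 get pod client-a-8b5jn -o yaml
    kubectl -n network-policy-9165 logs client-a-8b5jn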
| [AfterEach] NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:75 | |
| STEP: Cleaning up the server. | |
| STEP: Cleaning up the server's service. | |
| [AfterEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
| STEP: Collecting events from namespace "network-policy-9165". | |
| STEP: Found 21 events. | |
| Jan 24 20:40:00.154: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-a-8b5jn: {default-scheduler } Scheduled: Successfully assigned network-policy-9165/client-a-8b5jn to workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:40:00.154: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-80-5nl4q: {default-scheduler } Scheduled: Successfully assigned network-policy-9165/client-can-connect-80-5nl4q to workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:40:00.156: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-81-phmj9: {default-scheduler } Scheduled: Successfully assigned network-policy-9165/client-can-connect-81-phmj9 to workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:40:00.156: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for server-ndc2k: {default-scheduler } Scheduled: Successfully assigned network-policy-9165/server-ndc2k to workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:40:00.157: INFO: At 2020-01-24 20:38:54 +0000 UTC - event for server-ndc2k: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Started: Started container server-container-80 | |
| Jan 24 20:40:00.158: INFO: At 2020-01-24 20:38:54 +0000 UTC - event for server-ndc2k: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.6" already present on machine | |
| Jan 24 20:40:00.159: INFO: At 2020-01-24 20:38:54 +0000 UTC - event for server-ndc2k: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Created: Created container server-container-80 | |
| Jan 24 20:40:00.159: INFO: At 2020-01-24 20:38:54 +0000 UTC - event for server-ndc2k: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.6" already present on machine | |
| Jan 24 20:40:00.159: INFO: At 2020-01-24 20:38:55 +0000 UTC - event for server-ndc2k: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Created: Created container server-container-81 | |
| Jan 24 20:40:00.160: INFO: At 2020-01-24 20:38:55 +0000 UTC - event for server-ndc2k: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Started: Started container server-container-81 | |
| Jan 24 20:40:00.160: INFO: At 2020-01-24 20:39:06 +0000 UTC - event for client-can-connect-80-5nl4q: {kubelet workload-cluster-4-md-0-5c7f78dbc8-vvrzt} Created: Created container client-can-connect-80-container | |
| Jan 24 20:40:00.161: INFO: At 2020-01-24 20:39:06 +0000 UTC - event for client-can-connect-80-5nl4q: {kubelet workload-cluster-4-md-0-5c7f78dbc8-vvrzt} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine | |
| Jan 24 20:40:00.161: INFO: At 2020-01-24 20:39:07 +0000 UTC - event for client-can-connect-80-5nl4q: {kubelet workload-cluster-4-md-0-5c7f78dbc8-vvrzt} Started: Started container client-can-connect-80-container | |
| Jan 24 20:40:00.162: INFO: At 2020-01-24 20:39:10 +0000 UTC - event for client-can-connect-81-phmj9: {kubelet workload-cluster-4-md-0-5c7f78dbc8-rnz88} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine | |
| Jan 24 20:40:00.162: INFO: At 2020-01-24 20:39:10 +0000 UTC - event for client-can-connect-81-phmj9: {kubelet workload-cluster-4-md-0-5c7f78dbc8-rnz88} Created: Created container client-can-connect-81-container | |
| Jan 24 20:40:00.162: INFO: At 2020-01-24 20:39:11 +0000 UTC - event for client-can-connect-81-phmj9: {kubelet workload-cluster-4-md-0-5c7f78dbc8-rnz88} Started: Started container client-can-connect-81-container | |
| Jan 24 20:40:00.162: INFO: At 2020-01-24 20:39:12 +0000 UTC - event for client-a-8b5jn: {kubelet workload-cluster-4-md-0-5c7f78dbc8-vvrzt} Created: Created container client-a-container | |
| Jan 24 20:40:00.163: INFO: At 2020-01-24 20:39:12 +0000 UTC - event for client-a-8b5jn: {kubelet workload-cluster-4-md-0-5c7f78dbc8-vvrzt} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine | |
| Jan 24 20:40:00.163: INFO: At 2020-01-24 20:39:13 +0000 UTC - event for client-a-8b5jn: {kubelet workload-cluster-4-md-0-5c7f78dbc8-vvrzt} Started: Started container client-a-container | |
| Jan 24 20:40:00.164: INFO: At 2020-01-24 20:39:59 +0000 UTC - event for server-ndc2k: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Killing: Stopping container server-container-80 | |
| Jan 24 20:40:00.164: INFO: At 2020-01-24 20:39:59 +0000 UTC - event for server-ndc2k: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Killing: Stopping container server-container-81 | |
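The same event stream can be reproduced directly from the cluster while the namespace exists, sorted chronologically (illustrative command, not part of the test output):

    # reproduce the event stream, oldest first
    kubectl -n network-policy-9165 get events --sort-by=.lastTimestamp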
| Jan 24 20:40:00.168: INFO: POD NODE PHASE GRACE CONDITIONS | |
| Jan 24 20:40:00.168: INFO: server-ndc2k workload-cluster-4-md-0-5c7f78dbc8-tgjll Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 20:38:53 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 20:39:05 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 20:39:05 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 20:38:53 +0000 UTC }] | |
| Jan 24 20:40:00.168: INFO: | |
| Jan 24 20:40:00.182: INFO: | |
| Logging node info for node workload-cluster-4-controlplane-0 | |
| Jan 24 20:40:00.188: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-controlplane-0 /api/v1/nodes/workload-cluster-4-controlplane-0 82506459-dd0f-48e7-bd9d-75efde1111f6 38138 0 2020-01-24 17:18:10 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-controlplane-0 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-controlplane-0"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.28.186/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.210.0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://4204e6d6-9184-b271-f650-0dabd9f75516,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229202944 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124345344 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:50 +0000 UTC,LastTransitionTime:2020-01-24 20:20:50 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:39:05 +0000 UTC,LastTransitionTime:2020-01-24 17:18:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:39:05 +0000 UTC,LastTransitionTime:2020-01-24 17:18:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:39:05 +0000 UTC,LastTransitionTime:2020-01-24 17:18:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:39:05 +0000 UTC,LastTransitionTime:2020-01-24 17:44:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-controlplane-0,},NodeAddress{Type:ExternalIP,Address:10.193.28.186,},NodeAddress{Type:InternalIP,Address:10.193.28.186,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:80b9ac3a81a84f4e8f29dac5e41ffd1f,SystemUUID:D6E60442-8491-71B2-F650-0DABD9F75516,BootID:0411b150-3c1b-4216-a7bc-d4a359e6753d,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/syncer@sha256:fc80ec77a2ab4b58ddfa259a938f6d741933566011d56e5ffcc8680cc83538fe gcr.io/cloud-provider-vsphere/csi/release/syncer:v1.0.1],SizeBytes:38454608,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner:v1.2.1],SizeBytes:18924297,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/cpi/release/manager@sha256:64de5c7f10e55703142383fade40886091528ca505f00c98d57e27f10f04fc03 
gcr.io/cloud-provider-vsphere/cpi/release/manager:v1.1.0],SizeBytes:16201394,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher:v1.1.1],SizeBytes:15526843,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:40:00.189: INFO: | |
| Logging kubelet events for node workload-cluster-4-controlplane-0 | |
| Jan 24 20:40:00.222: INFO: | |
| Logging pods the kubelet thinks are on node workload-cluster-4-controlplane-0 | |
| Jan 24 20:40:00.235: INFO: calico-node-dtwvg started at 2020-01-24 20:19:59 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:40:00.235: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:40:00.235: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:40:00.235: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:40:00.236: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:40:00.236: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-t9xfs started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:40:00.236: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:40:00.236: INFO: Container systemd-logs ready: true, restart count 0 | |
| Jan 24 20:40:00.236: INFO: etcd-workload-cluster-4-controlplane-0 started at 2020-01-24 17:17:56 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:40:00.236: INFO: Container etcd ready: true, restart count 0 | |
| Jan 24 20:40:00.236: INFO: kube-apiserver-workload-cluster-4-controlplane-0 started at 2020-01-24 17:17:56 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:40:00.236: INFO: Container kube-apiserver ready: true, restart count 0 | |
| Jan 24 20:40:00.236: INFO: kube-scheduler-workload-cluster-4-controlplane-0 started at 2020-01-24 17:17:56 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:40:00.236: INFO: Container kube-scheduler ready: true, restart count 1 | |
| Jan 24 20:40:00.236: INFO: kube-proxy-ng9fc started at 2020-01-24 17:19:39 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:40:00.236: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:40:00.236: INFO: vsphere-csi-node-bvs7m started at 2020-01-24 17:44:44 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:40:00.236: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:40:00.236: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:40:00.236: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:40:00.236: INFO: kube-controller-manager-workload-cluster-4-controlplane-0 started at 2020-01-24 17:17:56 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:40:00.236: INFO: Container kube-controller-manager ready: true, restart count 2 | |
| Jan 24 20:40:00.236: INFO: vsphere-cloud-controller-manager-xgbwj started at 2020-01-24 17:19:42 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:40:00.236: INFO: Container vsphere-cloud-controller-manager ready: true, restart count 0 | |
| Jan 24 20:40:00.236: INFO: vsphere-csi-controller-0 started at 2020-01-24 17:44:46 +0000 UTC (0+5 container statuses recorded) | |
| Jan 24 20:40:00.236: INFO: Container csi-attacher ready: true, restart count 0 | |
| Jan 24 20:40:00.236: INFO: Container csi-provisioner ready: true, restart count 0 | |
| Jan 24 20:40:00.236: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:40:00.236: INFO: Container vsphere-csi-controller ready: true, restart count 0 | |
| Jan 24 20:40:00.236: INFO: Container vsphere-syncer ready: true, restart count 0 | |
| W0124 20:40:00.245869 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:40:00.440: INFO: | |
| Latency metrics for node workload-cluster-4-controlplane-0 | |
| Jan 24 20:40:00.440: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:40:00.484: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-dj56d /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-dj56d e3b9736e-3d53-4225-9f32-9b5ed43a351d 38220 0 2020-01-24 17:31:41 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-dj56d kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-dj56d"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.22.156/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.222.0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://42045096-e00b-ae55-1d85-007f6b8efaee,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229211136 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124353536 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:50 +0000 UTC,LastTransitionTime:2020-01-24 20:20:50 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:39:21 +0000 UTC,LastTransitionTime:2020-01-24 17:31:41 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:39:21 +0000 UTC,LastTransitionTime:2020-01-24 17:31:41 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:39:21 +0000 UTC,LastTransitionTime:2020-01-24 17:31:41 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:39:21 +0000 UTC,LastTransitionTime:2020-01-24 17:44:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-dj56d,},NodeAddress{Type:ExternalIP,Address:10.193.22.156,},NodeAddress{Type:InternalIP,Address:10.193.22.156,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b52db3bea2844f019e98985de63db412,SystemUUID:96500442-0BE0-55AE-1D85-007F6B8EFAEE,BootID:4ec1b10c-fc30-41e5-9d16-834bd0f5cf13,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[docker.io/jayunit100/kube-controllers@sha256:80d3bff453f163c62d0dadf44d5056053144eceaf0d05fe8c90b861aa4d5d602 docker.io/jayunit100/kube-controllers:tkg2],SizeBytes:21160877,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 
k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:40:00.486: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:40:00.494: INFO: | |
| Logging pods the kubelet thinks are on node workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:40:00.505: INFO: calico-node-8t722 started at 2020-01-24 20:19:58 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:40:00.505: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:40:00.505: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:40:00.505: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:40:00.505: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:40:00.505: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-25557 started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:40:00.505: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:40:00.505: INFO: Container systemd-logs ready: true, restart count 0 | |
| Jan 24 20:40:00.505: INFO: vsphere-csi-node-tkb9r started at 2020-01-24 17:44:43 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:40:00.505: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:40:00.505: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:40:00.505: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:40:00.505: INFO: coredns-5644d7b6d9-m4lts started at 2020-01-24 17:44:44 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:40:00.505: INFO: Container coredns ready: true, restart count 0 | |
| Jan 24 20:40:00.505: INFO: coredns-5644d7b6d9-ph9m8 started at 2020-01-24 17:44:44 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:40:00.505: INFO: Container coredns ready: true, restart count 0 | |
| Jan 24 20:40:00.505: INFO: kube-proxy-9cjbv started at 2020-01-24 17:31:41 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:40:00.505: INFO: Container kube-proxy ready: true, restart count 0 | |
| W0124 20:40:00.513191 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
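(This warning, which repeats below for every node, appears benign in this topology: the e2e metrics grabber looks for a node registered as the master, finds none in the workload cluster's node list, and so skips scheduler, controller-manager, and cluster-autoscaler metrics.)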
| Jan 24 20:40:00.642: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:40:00.642: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-l8q2f | |
| Jan 24 20:40:00.661: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-l8q2f /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-l8q2f 3be5c565-a115-4d45-a1be-f70db9c1b166 38291 0 2020-01-24 17:31:43 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-l8q2f kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-l8q2f"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.1.57/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.184.192 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://4204aa91-067b-70ca-7b40-e384413568c2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229202944 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124345344 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:23 +0000 UTC,LastTransitionTime:2020-01-24 20:20:23 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:39:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:39:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:39:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:39:51 +0000 UTC,LastTransitionTime:2020-01-24 17:44:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-l8q2f,},NodeAddress{Type:ExternalIP,Address:10.193.1.57,},NodeAddress{Type:InternalIP,Address:10.193.1.57,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:45e1a30e9fcd470795575df43a4210d5,SystemUUID:91AA0442-7B06-CA70-7B40-E384413568C2,BootID:3aae672c-d64c-4549-8c6f-2c6720e8ef80,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:0ae49fdc22b3d81e85b98ea4587628bb78f49eaf5c14cdb47a13d8918d62cc00 gcr.io/google-containers/conformance:v1.16.3],SizeBytes:195864937,},ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 
gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:40:00.662: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-l8q2f | |
| Jan 24 20:40:00.671: INFO: | |
| Logging pods the kubelet thinks are on node workload-cluster-4-md-0-5c7f78dbc8-l8q2f | |
| Jan 24 20:40:00.692: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-pvvzm started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:40:00.692: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:40:00.692: INFO: Container systemd-logs ready: true, restart count 0 | |
| Jan 24 20:40:00.692: INFO: kube-proxy-pw7c7 started at 2020-01-24 17:31:43 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:40:00.694: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:40:00.694: INFO: calico-node-rqffr started at 2020-01-24 20:19:59 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:40:00.694: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:40:00.695: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:40:00.696: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:40:00.696: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:40:00.696: INFO: vsphere-csi-node-bml6x started at 2020-01-24 17:44:44 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:40:00.697: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:40:00.697: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:40:00.697: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:40:00.698: INFO: sonobuoy-e2e-job-60496bb95c8b4e15 started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:40:00.698: INFO: Container e2e ready: true, restart count 0 | |
| Jan 24 20:40:00.698: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| W0124 20:40:00.706246 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:40:00.814: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-l8q2f | |
| Jan 24 20:40:00.814: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:40:00.869: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-rnz88 /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-rnz88 6a50990d-c3aa-408e-89a9-b96a59028428 38206 0 2020-01-24 17:31:34 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-rnz88 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-rnz88"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.19.181/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.158.1 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://42041b55-5b6a-2e5f-e33f-ad3ffa7f7b6e,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229211136 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124353536 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:37 +0000 UTC,LastTransitionTime:2020-01-24 20:20:37 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:39:14 +0000 UTC,LastTransitionTime:2020-01-24 17:31:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:39:14 +0000 UTC,LastTransitionTime:2020-01-24 17:31:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:39:14 +0000 UTC,LastTransitionTime:2020-01-24 17:31:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:39:14 +0000 UTC,LastTransitionTime:2020-01-24 17:44:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-rnz88,},NodeAddress{Type:ExternalIP,Address:10.193.19.181,},NodeAddress{Type:InternalIP,Address:10.193.19.181,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:89e27edae47c4afabb7c067d037286ec,SystemUUID:551B0442-6A5B-5F2E-E33F-AD3FFA7F7B6E,BootID:d2a8e20a-f6ad-4b68-b732-e0d7bdbcc492,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:0ae49fdc22b3d81e85b98ea4587628bb78f49eaf5c14cdb47a13d8918d62cc00 gcr.io/google-containers/conformance:v1.16.3],SizeBytes:195864937,},ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:4a7190a4731cf3340c39ab33abf845baf6c81d911c965a879fffc1552e1a1938 
docker.io/calico/kube-controllers:v3.10.3],SizeBytes:21175514,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:40:00.872: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:40:00.879: INFO: | |
| Logging pods the kubelet thinks are on node workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:40:00.895: INFO: kube-proxy-hwjc4 started at 2020-01-24 17:31:34 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:40:00.895: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:40:00.895: INFO: calico-node-5g8mr started at 2020-01-24 20:19:58 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:40:00.895: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:40:00.895: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:40:00.895: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:40:00.895: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:40:00.895: INFO: vsphere-csi-node-6nzwf started at 2020-01-24 17:44:44 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:40:00.895: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:40:00.895: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:40:00.895: INFO: Container vsphere-csi-node ready: true, restart count 10 | |
| Jan 24 20:40:00.895: INFO: calico-kube-controllers-7489ff5b7c-sq5gk started at 2020-01-24 20:20:00 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:40:00.895: INFO: Container calico-kube-controllers ready: true, restart count 0 | |
| Jan 24 20:40:00.895: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-zjw2s started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:40:00.895: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:40:00.895: INFO: Container systemd-logs ready: true, restart count 0 | |
| W0124 20:40:00.902119 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:40:01.137: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:40:01.137: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:40:01.145: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-tgjll /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-tgjll efb8ee46-7fce-4349-92f2-46fa03f02d30 38290 0 2020-01-24 17:31:37 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-tgjll kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-tgjll"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.25.9/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.225.128 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://4204f234-cdde-565a-8623-6c3c5457e3cc,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229202944 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124345344 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:26 +0000 UTC,LastTransitionTime:2020-01-24 20:20:26 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:39:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:39:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:39:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:39:51 +0000 UTC,LastTransitionTime:2020-01-24 17:44:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-tgjll,},NodeAddress{Type:ExternalIP,Address:10.193.25.9,},NodeAddress{Type:InternalIP,Address:10.193.25.9,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bb2c77d8141443fd8503de52dbae52e8,SystemUUID:34F20442-DECD-5A56-8623-6C3C5457E3CC,BootID:849ca1bb-d08e-417e-bf12-485a70ffc44a,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:0ae49fdc22b3d81e85b98ea4587628bb78f49eaf5c14cdb47a13d8918d62cc00 gcr.io/google-containers/conformance:v1.16.3],SizeBytes:195864937,},ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 
gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:40:01.146: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:40:01.169: INFO: | |
| Logging pods the kubelet thinks are on node workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:40:01.194: INFO: kube-proxy-wp7tn started at 2020-01-24 17:31:37 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:40:01.194: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:40:01.194: INFO: calico-node-wvctw started at 2020-01-24 20:19:58 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:40:01.194: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:40:01.194: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:40:01.194: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:40:01.194: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:40:01.194: INFO: vsphere-csi-node-rqv42 started at 2020-01-24 17:44:49 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:40:01.194: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:40:01.194: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:40:01.194: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:40:01.194: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-prfkd started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:40:01.194: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:40:01.194: INFO: Container systemd-logs ready: true, restart count 0 | |
| Jan 24 20:40:01.194: INFO: server-ndc2k started at 2020-01-24 20:38:53 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:40:01.194: INFO: Container server-container-80 ready: false, restart count 0 | |
| Jan 24 20:40:01.194: INFO: Container server-container-81 ready: false, restart count 0 | |
| W0124 20:40:01.207978 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:40:01.387: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:40:01.387: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:40:01.399: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-vvrzt /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-vvrzt 200933f4-8d1c-469c-b168-43a6ccbf5d04 38289 0 2020-01-24 17:31:43 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-vvrzt kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-vvrzt"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.7.86/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.163.64 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.5.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://42048f3c-cde1-2850-a851-6c4d757659cc,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.5.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229202944 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124345344 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:23 +0000 UTC,LastTransitionTime:2020-01-24 20:20:23 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:39:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:39:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:39:50 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:39:50 +0000 UTC,LastTransitionTime:2020-01-24 17:44:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-vvrzt,},NodeAddress{Type:ExternalIP,Address:10.193.7.86,},NodeAddress{Type:InternalIP,Address:10.193.7.86,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5c2ea802c8014c7aa88cb9bbbea1711b,SystemUUID:3C8F0442-E1CD-5028-A851-6C4D757659CC,BootID:9e70f010-f55d-4f1e-9e44-645f0afe4c09,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 
k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:40:01.400: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:40:01.442: INFO: | |
| Logging pods the kubelet thinks are on node workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:40:01.462: INFO: sonobuoy started at 2020-01-24 20:21:31 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:40:01.462: INFO: Container kube-sonobuoy ready: true, restart count 0 | |
| Jan 24 20:40:01.462: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-9jn4t started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:40:01.462: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:40:01.463: INFO: Container systemd-logs ready: true, restart count 0 | |
| Jan 24 20:40:01.463: INFO: kube-proxy-5jvhn started at 2020-01-24 17:31:43 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:40:01.463: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:40:01.463: INFO: calico-node-zpwz4 started at 2020-01-24 20:19:59 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:40:01.463: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:40:01.463: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:40:01.463: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:40:01.463: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:40:01.463: INFO: vsphere-csi-node-ttr82 started at 2020-01-24 17:44:44 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:40:01.463: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:40:01.463: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:40:01.463: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| W0124 20:40:01.472770 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:40:01.661: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:40:01.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
| STEP: Destroying namespace "network-policy-9165" for this suite. | |
| Jan 24 20:40:07.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered | |
| Jan 24 20:40:07.806: INFO: namespace network-policy-9165 deletion completed in 6.130083702s | |
| • Failure [74.192 seconds] | |
| [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 | |
| NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:56 | |
| should enforce updated policy [Feature:NetworkPolicy] [It] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:684 | |
| Jan 24 20:39:59.905: Error getting container logs: the server rejected our request for an unknown reason (get pods client-a-8b5jn) | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:1421 | |
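Note that, per the error text itself, this spec was failed while fetching the client pod's container logs for diagnostics (get pods client-a-8b5jn): the API server rejected the log request, so the recorded failure is at the log-collection step in network_policy.go rather than an explicit connectivity assertion.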
| ------------------------------ | |
| SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS | |
| ------------------------------ | |
| [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client | |
| should enforce multiple egress policies with egress allow-all policy taking precedence [Feature:NetworkPolicy] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:1027 | |
| [BeforeEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
| STEP: Creating a kubernetes client | |
| Jan 24 20:40:07.811: INFO: >>> kubeConfig: /tmp/kubeconfig-418650170 | |
| STEP: Building a namespace api object, basename network-policy | |
| STEP: Waiting for a default service account to be provisioned in namespace | |
| [BeforeEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:51 | |
| [BeforeEach] NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:57 | |
| STEP: Creating a simple server that serves on ports 80 and 81. | |
| STEP: Creating a server pod server in namespace network-policy-1717 | |
| Jan 24 20:40:07.890: INFO: Created pod server-djjfd | |
| STEP: Creating a service svc-server for pod server in namespace network-policy-1717 | |
| Jan 24 20:40:07.948: INFO: Created service svc-server | |
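For orientation, the fixture created here is a two-container pod (the containers server-container-80 and server-container-81 were visible earlier on node workload-cluster-4-md-0-5c7f78dbc8-tgjll) fronted by svc-server. A minimal client-go sketch of that shape, assuming the agnhost "porter" convention (it serves the payload of SERVE_PORT_<n> on TCP port n) and a hypothetical pod-name label key:

```go
package fixtures

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// twoPortServerPod mirrors the fixture seen in this log: two containers
// (server-container-80 / server-container-81), each running `agnhost porter`,
// which serves the payload of SERVE_PORT_<n> on TCP port n.
func twoPortServerPod(namespace string) *corev1.Pod {
	container := func(port int32) corev1.Container {
		return corev1.Container{
			Name:  fmt.Sprintf("server-container-%d", port),
			Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.6", // image present on the nodes above
			Args:  []string{"porter"},
			Env:   []corev1.EnvVar{{Name: fmt.Sprintf("SERVE_PORT_%d", port), Value: "foo"}},
			Ports: []corev1.ContainerPort{{ContainerPort: port}},
		}
	}
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: "server-",
			Namespace:    namespace,
			Labels:       map[string]string{"pod-name": "server"}, // assumed label key
		},
		Spec: corev1.PodSpec{Containers: []corev1.Container{container(80), container(81)}},
	}
}
```

The framework's own helper in network_policy.go builds the equivalent object; the sketch only mirrors its observable shape.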
| STEP: Waiting for pod ready | |
| STEP: Testing pods can connect to both ports when no policy is present. | |
| STEP: Creating client pod client-can-connect-80 that should successfully connect to svc-server. | |
| Jan 24 20:40:17.994: INFO: Waiting for client-can-connect-80-lvvfp to complete. | |
| Jan 24 20:40:22.045: INFO: Waiting for client-can-connect-80-lvvfp to complete. | |
| Jan 24 20:40:22.045: INFO: Waiting up to 5m0s for pod "client-can-connect-80-lvvfp" in namespace "network-policy-1717" to be "success or failure" | |
| Jan 24 20:40:22.048: INFO: Pod "client-can-connect-80-lvvfp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 3.425945ms | |
| STEP: Saw pod success | |
| Jan 24 20:40:22.048: INFO: Pod "client-can-connect-80-lvvfp" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-can-connect-80-lvvfp | |
| STEP: Creating client pod client-can-connect-81 that should successfully connect to svc-server. | |
| Jan 24 20:40:22.082: INFO: Waiting for client-can-connect-81-qvn7x to complete. | |
| Jan 24 20:40:24.157: INFO: Waiting for client-can-connect-81-qvn7x to complete. | |
| Jan 24 20:40:24.158: INFO: Waiting up to 5m0s for pod "client-can-connect-81-qvn7x" in namespace "network-policy-1717" to be "success or failure" | |
| Jan 24 20:40:24.162: INFO: Pod "client-can-connect-81-qvn7x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 3.894826ms | |
| STEP: Saw pod success | |
| Jan 24 20:40:24.162: INFO: Pod "client-can-connect-81-qvn7x" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-can-connect-81-qvn7x | |
| [It] should enforce multiple egress policies with egress allow-all policy taking precedence [Feature:NetworkPolicy] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:1027 | |
| STEP: Creating a server pod server-b in namespace network-policy-1717 | |
| Jan 24 20:40:24.270: INFO: Created pod server-b-lps2w | |
| STEP: Creating a service svc-server-b for pod server-b in namespace network-policy-1717 | |
| Jan 24 20:40:24.372: INFO: Created service svc-server-b | |
| STEP: Waiting for pod ready | |
| STEP: Creating client-a which should be able to contact the server before applying policy. | |
| STEP: Creating client pod client-a that should successfully connect to svc-server-b. | |
| Jan 24 20:40:48.438: INFO: Waiting for client-a-jj2m7 to complete. | |
| Jan 24 20:40:52.517: INFO: Waiting for client-a-jj2m7 to complete. | |
| Jan 24 20:40:52.518: INFO: Waiting up to 5m0s for pod "client-a-jj2m7" in namespace "network-policy-1717" to be "success or failure" | |
| Jan 24 20:40:52.541: INFO: Pod "client-a-jj2m7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.5311ms | |
| STEP: Saw pod success | |
| Jan 24 20:40:52.541: INFO: Pod "client-a-jj2m7" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-a-jj2m7 | |
| STEP: Creating a network policy for the server which allows traffic only to server-a. | |
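Going by the test's name, the policy this step creates is an egress allowlist on the client. A minimal sketch of that shape, assuming the label key pod-name and the policy name allow-to-server-a (both hypothetical here):

```go
package fixtures

import (
	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// allowEgressOnlyToServerA selects the client-a pods and whitelists egress to
// the original server's pods alone (called server-a in the step text); every
// other destination is denied for client-a. Labels are illustrative assumptions.
func allowEgressOnlyToServerA(namespace string) *networkingv1.NetworkPolicy {
	return &networkingv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "allow-to-server-a", Namespace: namespace},
		Spec: networkingv1.NetworkPolicySpec{
			PodSelector: metav1.LabelSelector{MatchLabels: map[string]string{"pod-name": "client-a"}},
			PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeEgress},
			Egress: []networkingv1.NetworkPolicyEgressRule{{
				To: []networkingv1.NetworkPolicyPeer{{
					PodSelector: &metav1.LabelSelector{MatchLabels: map[string]string{"pod-name": "server"}},
				}},
			}},
		},
	}
}
```

Either policy object in these sketches would be created through the clientset's NetworkingV1().NetworkPolicies(ns).Create call (the exact signature varies by client-go version). Because a pod selected by an egress policy may then reach only the listed peers, the very next client pod, aimed at svc-server-b, is expected to fail, which is what the Running-then-Failed poll below shows. (As a general NetworkPolicy gotcha, such an egress allowlist also blocks DNS unless a rule permits it.)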
| STEP: Creating client-a which should not be able to contact server-b. | |
| STEP: Creating client pod client-a that should not be able to connect to svc-server-b. | |
| Jan 24 20:40:52.572: INFO: Waiting for client-a-qwst6 to complete. | |
| Jan 24 20:40:52.572: INFO: Waiting up to 5m0s for pod "client-a-qwst6" in namespace "network-policy-1717" to be "success or failure" | |
| Jan 24 20:40:52.640: INFO: Pod "client-a-qwst6": Phase="Pending", Reason="", readiness=false. Elapsed: 67.611258ms | |
| Jan 24 20:40:54.646: INFO: Pod "client-a-qwst6": Phase="Running", Reason="", readiness=true. Elapsed: 2.073952903s | |
| Jan 24 20:40:56.650: INFO: Pod "client-a-qwst6": Phase="Running", Reason="", readiness=true. Elapsed: 4.077997277s | |
| Jan 24 20:40:58.656: INFO: Pod "client-a-qwst6": Phase="Running", Reason="", readiness=true. Elapsed: 6.083943436s | |
| Jan 24 20:41:00.662: INFO: Pod "client-a-qwst6": Phase="Running", Reason="", readiness=true. Elapsed: 8.089594346s | |
| Jan 24 20:41:02.674: INFO: Pod "client-a-qwst6": Phase="Running", Reason="", readiness=true. Elapsed: 10.102052706s | |
| Jan 24 20:41:04.678: INFO: Pod "client-a-qwst6": Phase="Running", Reason="", readiness=true. Elapsed: 12.106081475s | |
| Jan 24 20:41:06.683: INFO: Pod "client-a-qwst6": Phase="Running", Reason="", readiness=true. Elapsed: 14.110363481s | |
| Jan 24 20:41:08.687: INFO: Pod "client-a-qwst6": Phase="Running", Reason="", readiness=true. Elapsed: 16.114280378s | |
| Jan 24 20:41:10.691: INFO: Pod "client-a-qwst6": Phase="Running", Reason="", readiness=true. Elapsed: 18.118784584s | |
| Jan 24 20:41:12.695: INFO: Pod "client-a-qwst6": Phase="Running", Reason="", readiness=true. Elapsed: 20.122891579s | |
| Jan 24 20:41:14.699: INFO: Pod "client-a-qwst6": Phase="Running", Reason="", readiness=true. Elapsed: 22.126651659s | |
| Jan 24 20:41:16.704: INFO: Pod "client-a-qwst6": Phase="Running", Reason="", readiness=true. Elapsed: 24.131638376s | |
| Jan 24 20:41:18.710: INFO: Pod "client-a-qwst6": Phase="Running", Reason="", readiness=true. Elapsed: 26.137707301s | |
| Jan 24 20:41:20.716: INFO: Pod "client-a-qwst6": Phase="Running", Reason="", readiness=true. Elapsed: 28.143364655s | |
| Jan 24 20:41:22.721: INFO: Pod "client-a-qwst6": Phase="Running", Reason="", readiness=true. Elapsed: 30.148456004s | |
| Jan 24 20:41:24.727: INFO: Pod "client-a-qwst6": Phase="Running", Reason="", readiness=true. Elapsed: 32.154195419s | |
| Jan 24 20:41:26.730: INFO: Pod "client-a-qwst6": Phase="Running", Reason="", readiness=true. Elapsed: 34.157721269s | |
| Jan 24 20:41:28.734: INFO: Pod "client-a-qwst6": Phase="Running", Reason="", readiness=true. Elapsed: 36.162113456s | |
| Jan 24 20:41:30.739: INFO: Pod "client-a-qwst6": Phase="Running", Reason="", readiness=true. Elapsed: 38.166900475s | |
| Jan 24 20:41:32.743: INFO: Pod "client-a-qwst6": Phase="Running", Reason="", readiness=true. Elapsed: 40.170940202s | |
| Jan 24 20:41:34.747: INFO: Pod "client-a-qwst6": Phase="Running", Reason="", readiness=true. Elapsed: 42.175187717s | |
| Jan 24 20:41:36.751: INFO: Pod "client-a-qwst6": Phase="Running", Reason="", readiness=true. Elapsed: 44.178850622s | |
| Jan 24 20:41:38.756: INFO: Pod "client-a-qwst6": Phase="Running", Reason="", readiness=true. Elapsed: 46.183709939s | |
| Jan 24 20:41:40.771: INFO: Pod "client-a-qwst6": Phase="Failed", Reason="", readiness=false. Elapsed: 48.199129734s | |
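This long poll is the expected shape of a negative test: the client pod keeps retrying its connection to svc-server-b, the egress policy sketched above denies it, and once the retries are exhausted the pod exits non-zero, so the test sees Phase="Failed" roughly 48 seconds in, which is exactly what "should not be able to connect" asserts.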
| STEP: Cleaning up the pod client-a-qwst6 | |
| STEP: Creating client-a which should be able to contact the server. | |
| STEP: Creating client pod client-a that should successfully connect to svc-server. | |
| Jan 24 20:41:40.833: INFO: Waiting for client-a-vvh65 to complete. | |
| Jan 24 20:41:42.844: INFO: Waiting for client-a-vvh65 to complete. | |
| Jan 24 20:41:42.844: INFO: Waiting up to 5m0s for pod "client-a-vvh65" in namespace "network-policy-1717" to be "success or failure" | |
| Jan 24 20:41:42.848: INFO: Pod "client-a-vvh65": Phase="Succeeded", Reason="", readiness=false. Elapsed: 3.114911ms | |
| STEP: Saw pod success | |
| Jan 24 20:41:42.848: INFO: Pod "client-a-vvh65" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-a-vvh65 | |
| STEP: Creating a network policy which allows traffic to all pods. | |
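This second policy is the allow-all that the test name says must take precedence. NetworkPolicies are additive (a connection is allowed if any policy allows it), so a single empty egress rule reopens every destination regardless of the narrower policy above. A sketch under the same assumptions:

```go
package fixtures

import (
	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// allowAllEgress reopens egress for every pod in the namespace: an empty
// egress rule matches all destinations, so it takes precedence over the
// narrower allow-to-server-a policy. Name and selector shape are assumptions.
func allowAllEgress(namespace string) *networkingv1.NetworkPolicy {
	return &networkingv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "allow-all-egress", Namespace: namespace},
		Spec: networkingv1.NetworkPolicySpec{
			PodSelector: metav1.LabelSelector{}, // empty selector = all pods in the namespace
			PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeEgress},
			Egress:      []networkingv1.NetworkPolicyEgressRule{{}}, // empty rule = allow everything
		},
	}
}
```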
| STEP: Creating client-a which should be able to contact server-b. | |
| STEP: Creating client pod client-a that should successfully connect to svc-server-b. | |
| Jan 24 20:41:42.898: INFO: Waiting for client-a-vrnxr to complete. | |
| Jan 24 20:41:44.911: INFO: Waiting for client-a-vrnxr to complete. | |
| Jan 24 20:41:44.911: INFO: Waiting up to 5m0s for pod "client-a-vrnxr" in namespace "network-policy-1717" to be "success or failure" | |
| Jan 24 20:41:44.915: INFO: Pod "client-a-vrnxr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 3.766519ms | |
| STEP: Saw pod success | |
| Jan 24 20:41:44.915: INFO: Pod "client-a-vrnxr" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-a-vrnxr | |
| STEP: Creating client-a which should be able to contact server-a. | |
| STEP: Creating client pod client-a that should successfully connect to svc-server. | |
| Jan 24 20:41:44.992: INFO: Waiting for client-a-8x4gr to complete. | |
| Jan 24 20:41:47.009: INFO: Waiting for client-a-8x4gr to complete. | |
| Jan 24 20:41:47.009: INFO: Waiting up to 5m0s for pod "client-a-8x4gr" in namespace "network-policy-1717" to be "success or failure" | |
| Jan 24 20:41:47.013: INFO: Pod "client-a-8x4gr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 3.624262ms | |
| STEP: Saw pod success | |
| Jan 24 20:41:47.013: INFO: Pod "client-a-8x4gr" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-a-8x4gr | |
| STEP: Cleaning up the policy. | |
| STEP: Cleaning up the policy. | |
| STEP: Cleaning up the server. | |
| STEP: Cleaning up the server's service. | |
| [AfterEach] NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:75 | |
| STEP: Cleaning up the server. | |
| STEP: Cleaning up the server's service. | |
| [AfterEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
| Jan 24 20:41:47.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
| STEP: Destroying namespace "network-policy-1717" for this suite. | |
| Jan 24 20:42:01.368: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered | |
| Jan 24 20:42:01.559: INFO: namespace network-policy-1717 deletion completed in 14.208897431s | |
| • [SLOW TEST:113.748 seconds] | |
| [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 | |
| NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:56 | |
| should enforce multiple egress policies with egress allow-all policy taking precedence [Feature:NetworkPolicy] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:1027 | |
| ------------------------------ | |
| SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS | |
| ------------------------------ | |
| [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client | |
| should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:158 | |
| [BeforeEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
| STEP: Creating a kubernetes client | |
| Jan 24 20:42:01.561: INFO: >>> kubeConfig: /tmp/kubeconfig-418650170 | |
| STEP: Building a namespace api object, basename network-policy | |
| STEP: Waiting for a default service account to be provisioned in namespace | |
| [BeforeEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:51 | |
| [BeforeEach] NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:57 | |
| STEP: Creating a simple server that serves on ports 80 and 81. | |
| STEP: Creating a server pod server in namespace network-policy-1511 | |
| Jan 24 20:42:01.703: INFO: Created pod server-mkr7b | |
| STEP: Creating a service svc-server for pod server in namespace network-policy-1511 | |
| Jan 24 20:42:01.864: INFO: Created service svc-server | |
| STEP: Waiting for pod ready | |
| STEP: Testing pods can connect to both ports when no policy is present. | |
| STEP: Creating client pod client-can-connect-80 that should successfully connect to svc-server. | |
| Jan 24 20:42:15.890: INFO: Waiting for client-can-connect-80-6gdsz to complete. | |
| Jan 24 20:42:17.906: INFO: Waiting for client-can-connect-80-6gdsz to complete. | |
| Jan 24 20:42:17.906: INFO: Waiting up to 5m0s for pod "client-can-connect-80-6gdsz" in namespace "network-policy-1511" to be "success or failure" | |
| Jan 24 20:42:17.910: INFO: Pod "client-can-connect-80-6gdsz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036971ms | |
| STEP: Saw pod success | |
| Jan 24 20:42:17.910: INFO: Pod "client-can-connect-80-6gdsz" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-can-connect-80-6gdsz | |
| STEP: Creating client pod client-can-connect-81 that should successfully connect to svc-server. | |
| Jan 24 20:42:17.942: INFO: Waiting for client-can-connect-81-pmndx to complete. | |
| Jan 24 20:42:19.955: INFO: Waiting for client-can-connect-81-pmndx to complete. | |
| Jan 24 20:42:19.955: INFO: Waiting up to 5m0s for pod "client-can-connect-81-pmndx" in namespace "network-policy-1511" to be "success or failure" | |
| Jan 24 20:42:19.958: INFO: Pod "client-can-connect-81-pmndx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.711617ms | |
| STEP: Saw pod success | |
| Jan 24 20:42:19.958: INFO: Pod "client-can-connect-81-pmndx" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-can-connect-81-pmndx | |
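The repeated `Waiting up to 5m0s for pod ... to be "success or failure"` lines above are the framework polling the client pod's phase until it reaches a terminal state: Succeeded means the probe connected, Failed means it could not. A minimal client-go sketch of that wait, matching the roughly 2-second cadence visible in the timestamps (the kubeconfig path, namespace, and pod name are placeholders, not from this run):

```go
// Poll a pod's phase until it is terminal (Succeeded or Failed) or a timeout hits.
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForTerminalPhase(cs kubernetes.Interface, ns, name string) (corev1.PodPhase, error) {
	var phase corev1.PodPhase
	// ~2s poll interval with a 5m ceiling, as in the log above.
	err := wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		phase = pod.Status.Phase
		return phase == corev1.PodSucceeded || phase == corev1.PodFailed, nil
	})
	return phase, err
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	phase, err := waitForTerminalPhase(cs, "network-policy-1511", "client-can-connect-80-6gdsz")
	fmt.Println(phase, err)
}
```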
| [It] should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:158 | |
| Jan 24 20:42:20.114: INFO: Waiting for server to come up. | |
| STEP: Creating a network policy for the server which allows traffic from namespace-b. | |
| STEP: Creating client pod client-a that should not be able to connect to svc-server. | |
| Jan 24 20:42:20.130: INFO: Waiting for client-a-vs87b to complete. | |
| Jan 24 20:42:20.130: INFO: Waiting up to 5m0s for pod "client-a-vs87b" in namespace "network-policy-1511" to be "success or failure" | |
| Jan 24 20:42:20.142: INFO: Pod "client-a-vs87b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.526417ms | |
| Jan 24 20:42:22.151: INFO: Pod "client-a-vs87b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020883102s | |
| Jan 24 20:42:24.156: INFO: Pod "client-a-vs87b": Phase="Running", Reason="", readiness=true. Elapsed: 4.025532042s | |
| Jan 24 20:42:26.160: INFO: Pod "client-a-vs87b": Phase="Running", Reason="", readiness=true. Elapsed: 6.029319863s | |
| Jan 24 20:42:28.164: INFO: Pod "client-a-vs87b": Phase="Running", Reason="", readiness=true. Elapsed: 8.033715467s | |
| Jan 24 20:42:30.168: INFO: Pod "client-a-vs87b": Phase="Running", Reason="", readiness=true. Elapsed: 10.03770556s | |
| Jan 24 20:42:32.172: INFO: Pod "client-a-vs87b": Phase="Running", Reason="", readiness=true. Elapsed: 12.041751919s | |
| Jan 24 20:42:34.177: INFO: Pod "client-a-vs87b": Phase="Running", Reason="", readiness=true. Elapsed: 14.046268811s | |
| Jan 24 20:42:36.181: INFO: Pod "client-a-vs87b": Phase="Running", Reason="", readiness=true. Elapsed: 16.050698048s | |
| Jan 24 20:42:38.186: INFO: Pod "client-a-vs87b": Phase="Running", Reason="", readiness=true. Elapsed: 18.055312655s | |
| Jan 24 20:42:40.190: INFO: Pod "client-a-vs87b": Phase="Running", Reason="", readiness=true. Elapsed: 20.059932765s | |
| Jan 24 20:42:42.195: INFO: Pod "client-a-vs87b": Phase="Running", Reason="", readiness=true. Elapsed: 22.064420634s | |
| Jan 24 20:42:44.199: INFO: Pod "client-a-vs87b": Phase="Running", Reason="", readiness=true. Elapsed: 24.068823082s | |
| Jan 24 20:42:46.205: INFO: Pod "client-a-vs87b": Phase="Running", Reason="", readiness=true. Elapsed: 26.074346622s | |
| Jan 24 20:42:48.209: INFO: Pod "client-a-vs87b": Phase="Running", Reason="", readiness=true. Elapsed: 28.078897819s | |
| Jan 24 20:42:50.214: INFO: Pod "client-a-vs87b": Phase="Running", Reason="", readiness=true. Elapsed: 30.083689223s | |
| Jan 24 20:42:52.219: INFO: Pod "client-a-vs87b": Phase="Running", Reason="", readiness=true. Elapsed: 32.088033999s | |
| Jan 24 20:42:54.225: INFO: Pod "client-a-vs87b": Phase="Running", Reason="", readiness=true. Elapsed: 34.094509824s | |
| Jan 24 20:42:56.229: INFO: Pod "client-a-vs87b": Phase="Running", Reason="", readiness=true. Elapsed: 36.0989238s | |
| Jan 24 20:42:58.233: INFO: Pod "client-a-vs87b": Phase="Running", Reason="", readiness=true. Elapsed: 38.10290004s | |
| Jan 24 20:43:00.238: INFO: Pod "client-a-vs87b": Phase="Running", Reason="", readiness=true. Elapsed: 40.107196713s | |
| Jan 24 20:43:02.243: INFO: Pod "client-a-vs87b": Phase="Running", Reason="", readiness=true. Elapsed: 42.111993647s | |
| Jan 24 20:43:04.247: INFO: Pod "client-a-vs87b": Phase="Running", Reason="", readiness=true. Elapsed: 44.116377814s | |
| Jan 24 20:43:06.251: INFO: Pod "client-a-vs87b": Phase="Running", Reason="", readiness=true. Elapsed: 46.120858116s | |
| Jan 24 20:43:08.258: INFO: Pod "client-a-vs87b": Phase="Failed", Reason="", readiness=false. Elapsed: 48.127048663s | |
| STEP: Cleaning up the pod client-a-vs87b | |
| STEP: Creating client pod client-b that should successfully connect to svc-server. | |
| Jan 24 20:43:08.298: INFO: Waiting for client-b-rq5ql to complete. | |
| Jan 24 20:43:56.327: INFO: Waiting for client-b-rq5ql to complete. | |
| Jan 24 20:43:56.327: INFO: Waiting up to 5m0s for pod "client-b-rq5ql" in namespace "network-policy-b-1442" to be "success or failure" | |
| Jan 24 20:43:56.331: INFO: Pod "client-b-rq5ql": Phase="Failed", Reason="", readiness=false. Elapsed: 3.957177ms | |
| Jan 24 20:43:56.336: FAIL: Error getting container logs: the server could not find the requested resource (get pods client-b-rq5ql) | |
| STEP: Cleaning up the pod client-b-rq5ql | |
| STEP: Cleaning up the policy. | |
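To recap the failure above: the policy created for this test should admit traffic from namespace-b, yet client-b, the pod expected to connect, exited with Phase="Failed", and the follow-up attempt to read its logs got "the server could not find the requested resource". For reference, an ingress policy of the shape this test creates: it selects the server pod and admits only peers whose namespace matches a NamespaceSelector, which is why client-a (same namespace) must be blocked while client-b (namespace-b) must get through. Label key/value and object names below are illustrative, not from this run:

```go
// Sketch of a NamespaceSelector-based ingress policy for the server pod.
package main

import (
	"encoding/json"
	"fmt"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	policy := networkingv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "allow-ns-b-via-namespace-selector"},
		Spec: networkingv1.NetworkPolicySpec{
			// Select the server pod; all other ingress to it is then denied.
			PodSelector: metav1.LabelSelector{MatchLabels: map[string]string{"pod-name": "server"}},
			PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeIngress},
			Ingress: []networkingv1.NetworkPolicyIngressRule{{
				From: []networkingv1.NetworkPolicyPeer{{
					// Only pods in namespaces carrying this label may connect.
					NamespaceSelector: &metav1.LabelSelector{
						MatchLabels: map[string]string{"ns-name": "network-policy-b"},
					},
				}},
			}},
		},
	}
	b, _ := json.MarshalIndent(policy, "", "  ")
	fmt.Println(string(b))
}
```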
| [AfterEach] NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:75 | |
| STEP: Cleaning up the server. | |
| STEP: Cleaning up the server's service. | |
| [AfterEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
| STEP: Collecting events from namespace "network-policy-1511". | |
| STEP: Found 21 events. | |
| Jan 24 20:43:56.593: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-a-vs87b: {default-scheduler } Scheduled: Successfully assigned network-policy-1511/client-a-vs87b to workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:43:56.593: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-80-6gdsz: {default-scheduler } Scheduled: Successfully assigned network-policy-1511/client-can-connect-80-6gdsz to workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:43:56.593: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-81-pmndx: {default-scheduler } Scheduled: Successfully assigned network-policy-1511/client-can-connect-81-pmndx to workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:43:56.593: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for server-mkr7b: {default-scheduler } Scheduled: Successfully assigned network-policy-1511/server-mkr7b to workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:43:56.593: INFO: At 2020-01-24 20:42:02 +0000 UTC - event for server-mkr7b: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Created: Created container server-container-80 | |
| Jan 24 20:43:56.593: INFO: At 2020-01-24 20:42:02 +0000 UTC - event for server-mkr7b: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.6" already present on machine | |
| Jan 24 20:43:56.593: INFO: At 2020-01-24 20:42:03 +0000 UTC - event for server-mkr7b: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.6" already present on machine | |
| Jan 24 20:43:56.593: INFO: At 2020-01-24 20:42:03 +0000 UTC - event for server-mkr7b: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Started: Started container server-container-81 | |
| Jan 24 20:43:56.593: INFO: At 2020-01-24 20:42:03 +0000 UTC - event for server-mkr7b: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Created: Created container server-container-81 | |
| Jan 24 20:43:56.593: INFO: At 2020-01-24 20:42:03 +0000 UTC - event for server-mkr7b: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Started: Started container server-container-80 | |
| Jan 24 20:43:56.593: INFO: At 2020-01-24 20:42:16 +0000 UTC - event for client-can-connect-80-6gdsz: {kubelet workload-cluster-4-md-0-5c7f78dbc8-vvrzt} Created: Created container client-can-connect-80-container | |
| Jan 24 20:43:56.593: INFO: At 2020-01-24 20:42:16 +0000 UTC - event for client-can-connect-80-6gdsz: {kubelet workload-cluster-4-md-0-5c7f78dbc8-vvrzt} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine | |
| Jan 24 20:43:56.593: INFO: At 2020-01-24 20:42:17 +0000 UTC - event for client-can-connect-80-6gdsz: {kubelet workload-cluster-4-md-0-5c7f78dbc8-vvrzt} Started: Started container client-can-connect-80-container | |
| Jan 24 20:43:56.593: INFO: At 2020-01-24 20:42:18 +0000 UTC - event for client-can-connect-81-pmndx: {kubelet workload-cluster-4-md-0-5c7f78dbc8-dj56d} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine | |
| Jan 24 20:43:56.594: INFO: At 2020-01-24 20:42:19 +0000 UTC - event for client-can-connect-81-pmndx: {kubelet workload-cluster-4-md-0-5c7f78dbc8-dj56d} Started: Started container client-can-connect-81-container | |
| Jan 24 20:43:56.594: INFO: At 2020-01-24 20:42:19 +0000 UTC - event for client-can-connect-81-pmndx: {kubelet workload-cluster-4-md-0-5c7f78dbc8-dj56d} Created: Created container client-can-connect-81-container | |
| Jan 24 20:43:56.595: INFO: At 2020-01-24 20:42:21 +0000 UTC - event for client-a-vs87b: {kubelet workload-cluster-4-md-0-5c7f78dbc8-vvrzt} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine | |
| Jan 24 20:43:56.595: INFO: At 2020-01-24 20:42:22 +0000 UTC - event for client-a-vs87b: {kubelet workload-cluster-4-md-0-5c7f78dbc8-vvrzt} Started: Started container client-a-container | |
| Jan 24 20:43:56.596: INFO: At 2020-01-24 20:42:22 +0000 UTC - event for client-a-vs87b: {kubelet workload-cluster-4-md-0-5c7f78dbc8-vvrzt} Created: Created container client-a-container | |
| Jan 24 20:43:56.596: INFO: At 2020-01-24 20:43:56 +0000 UTC - event for server-mkr7b: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Killing: Stopping container server-container-80 | |
| Jan 24 20:43:56.596: INFO: At 2020-01-24 20:43:56 +0000 UTC - event for server-mkr7b: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Killing: Stopping container server-container-81 | |
| Jan 24 20:43:56.603: INFO: POD NODE PHASE GRACE CONDITIONS | |
| Jan 24 20:43:56.604: INFO: server-mkr7b workload-cluster-4-md-0-5c7f78dbc8-tgjll Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 20:42:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 20:42:14 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 20:42:14 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 20:42:01 +0000 UTC }] | |
| Jan 24 20:43:56.604: INFO: | |
| Jan 24 20:43:56.676: INFO: | |
| Logging node info for node workload-cluster-4-controlplane-0 | |
| Jan 24 20:43:56.681: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-controlplane-0 /api/v1/nodes/workload-cluster-4-controlplane-0 82506459-dd0f-48e7-bd9d-75efde1111f6 39106 0 2020-01-24 17:18:10 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-controlplane-0 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-controlplane-0"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.28.186/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.210.0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://4204e6d6-9184-b271-f650-0dabd9f75516,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229202944 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124345344 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:50 +0000 UTC,LastTransitionTime:2020-01-24 20:20:50 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:43:05 +0000 UTC,LastTransitionTime:2020-01-24 17:18:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:43:05 +0000 UTC,LastTransitionTime:2020-01-24 17:18:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:43:05 +0000 UTC,LastTransitionTime:2020-01-24 17:18:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:43:05 +0000 UTC,LastTransitionTime:2020-01-24 17:44:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-controlplane-0,},NodeAddress{Type:ExternalIP,Address:10.193.28.186,},NodeAddress{Type:InternalIP,Address:10.193.28.186,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:80b9ac3a81a84f4e8f29dac5e41ffd1f,SystemUUID:D6E60442-8491-71B2-F650-0DABD9F75516,BootID:0411b150-3c1b-4216-a7bc-d4a359e6753d,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/syncer@sha256:fc80ec77a2ab4b58ddfa259a938f6d741933566011d56e5ffcc8680cc83538fe gcr.io/cloud-provider-vsphere/csi/release/syncer:v1.0.1],SizeBytes:38454608,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner:v1.2.1],SizeBytes:18924297,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/cpi/release/manager@sha256:64de5c7f10e55703142383fade40886091528ca505f00c98d57e27f10f04fc03 
gcr.io/cloud-provider-vsphere/cpi/release/manager:v1.1.0],SizeBytes:16201394,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher:v1.1.1],SizeBytes:15526843,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:43:56.683: INFO: | |
| Logging kubelet events for node workload-cluster-4-controlplane-0 | |
| Jan 24 20:43:56.706: INFO: | |
| Logging pods the kubelet thinks are on node workload-cluster-4-controlplane-0 | |
| Jan 24 20:43:56.730: INFO: kube-controller-manager-workload-cluster-4-controlplane-0 started at 2020-01-24 17:17:56 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:43:56.730: INFO: Container kube-controller-manager ready: true, restart count 2 | |
| Jan 24 20:43:56.730: INFO: vsphere-cloud-controller-manager-xgbwj started at 2020-01-24 17:19:42 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:43:56.730: INFO: Container vsphere-cloud-controller-manager ready: true, restart count 0 | |
| Jan 24 20:43:56.730: INFO: vsphere-csi-controller-0 started at 2020-01-24 17:44:46 +0000 UTC (0+5 container statuses recorded) | |
| Jan 24 20:43:56.730: INFO: Container csi-attacher ready: true, restart count 0 | |
| Jan 24 20:43:56.730: INFO: Container csi-provisioner ready: true, restart count 0 | |
| Jan 24 20:43:56.730: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:43:56.730: INFO: Container vsphere-csi-controller ready: true, restart count 0 | |
| Jan 24 20:43:56.730: INFO: Container vsphere-syncer ready: true, restart count 0 | |
| Jan 24 20:43:56.730: INFO: vsphere-csi-node-bvs7m started at 2020-01-24 17:44:44 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:43:56.730: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:43:56.730: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:43:56.730: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:43:56.730: INFO: calico-node-dtwvg started at 2020-01-24 20:19:59 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:43:56.730: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:43:56.730: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:43:56.730: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:43:56.730: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:43:56.730: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-t9xfs started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:43:56.730: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:43:56.730: INFO: Container systemd-logs ready: true, restart count 0 | |
| Jan 24 20:43:56.730: INFO: etcd-workload-cluster-4-controlplane-0 started at 2020-01-24 17:17:56 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:43:56.730: INFO: Container etcd ready: true, restart count 0 | |
| Jan 24 20:43:56.730: INFO: kube-apiserver-workload-cluster-4-controlplane-0 started at 2020-01-24 17:17:56 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:43:56.730: INFO: Container kube-apiserver ready: true, restart count 0 | |
| Jan 24 20:43:56.730: INFO: kube-scheduler-workload-cluster-4-controlplane-0 started at 2020-01-24 17:17:56 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:43:56.730: INFO: Container kube-scheduler ready: true, restart count 1 | |
| Jan 24 20:43:56.730: INFO: kube-proxy-ng9fc started at 2020-01-24 17:19:39 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:43:56.731: INFO: Container kube-proxy ready: true, restart count 0 | |
| W0124 20:43:56.738854 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:43:56.920: INFO: | |
| Latency metrics for node workload-cluster-4-controlplane-0 | |
| Jan 24 20:43:56.920: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:43:56.924: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-dj56d /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-dj56d e3b9736e-3d53-4225-9f32-9b5ed43a351d 39226 0 2020-01-24 17:31:41 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-dj56d kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-dj56d"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.22.156/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.222.0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://42045096-e00b-ae55-1d85-007f6b8efaee,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229211136 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124353536 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:50 +0000 UTC,LastTransitionTime:2020-01-24 20:20:50 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:43:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:41 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:43:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:41 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:43:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:41 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:43:51 +0000 UTC,LastTransitionTime:2020-01-24 17:44:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-dj56d,},NodeAddress{Type:ExternalIP,Address:10.193.22.156,},NodeAddress{Type:InternalIP,Address:10.193.22.156,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b52db3bea2844f019e98985de63db412,SystemUUID:96500442-0BE0-55AE-1D85-007F6B8EFAEE,BootID:4ec1b10c-fc30-41e5-9d16-834bd0f5cf13,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[docker.io/jayunit100/kube-controllers@sha256:80d3bff453f163c62d0dadf44d5056053144eceaf0d05fe8c90b861aa4d5d602 docker.io/jayunit100/kube-controllers:tkg2],SizeBytes:21160877,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 
gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:43:56.925: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:43:56.938: INFO: | |
| Logging pods the kubelet thinks are on node workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:43:56.966: INFO: coredns-5644d7b6d9-m4lts started at 2020-01-24 17:44:44 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:43:56.966: INFO: Container coredns ready: true, restart count 0 | |
| Jan 24 20:43:56.966: INFO: coredns-5644d7b6d9-ph9m8 started at 2020-01-24 17:44:44 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:43:56.966: INFO: Container coredns ready: true, restart count 0 | |
| Jan 24 20:43:56.966: INFO: kube-proxy-9cjbv started at 2020-01-24 17:31:41 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:43:56.966: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:43:56.966: INFO: calico-node-8t722 started at 2020-01-24 20:19:58 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:43:56.966: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:43:56.966: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:43:56.966: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:43:56.966: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:43:56.966: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-25557 started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:43:56.966: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:43:56.966: INFO: Container systemd-logs ready: true, restart count 0 | |
| Jan 24 20:43:56.966: INFO: vsphere-csi-node-tkb9r started at 2020-01-24 17:44:43 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:43:56.966: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:43:56.966: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:43:56.966: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| W0124 20:43:56.974662 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:43:57.094: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:43:57.095: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-l8q2f | |
| Jan 24 20:43:57.114: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-l8q2f /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-l8q2f 3be5c565-a115-4d45-a1be-f70db9c1b166 39227 0 2020-01-24 17:31:43 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-l8q2f kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-l8q2f"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.1.57/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.184.192 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://4204aa91-067b-70ca-7b40-e384413568c2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229202944 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124345344 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:23 +0000 UTC,LastTransitionTime:2020-01-24 20:20:23 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:43:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:43:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:43:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:43:51 +0000 UTC,LastTransitionTime:2020-01-24 17:44:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-l8q2f,},NodeAddress{Type:ExternalIP,Address:10.193.1.57,},NodeAddress{Type:InternalIP,Address:10.193.1.57,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:45e1a30e9fcd470795575df43a4210d5,SystemUUID:91AA0442-7B06-CA70-7B40-E384413568C2,BootID:3aae672c-d64c-4549-8c6f-2c6720e8ef80,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:0ae49fdc22b3d81e85b98ea4587628bb78f49eaf5c14cdb47a13d8918d62cc00 gcr.io/google-containers/conformance:v1.16.3],SizeBytes:195864937,},ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 
gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:43:57.115: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-l8q2f | |
| Jan 24 20:43:57.123: INFO: | |
| Logging pods the kubelet thinks are on node workload-cluster-4-md-0-5c7f78dbc8-l8q2f | |
| Jan 24 20:43:57.144: INFO: kube-proxy-pw7c7 started at 2020-01-24 17:31:43 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:43:57.144: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:43:57.144: INFO: calico-node-rqffr started at 2020-01-24 20:19:59 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:43:57.144: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:43:57.144: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:43:57.144: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:43:57.144: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:43:57.144: INFO: vsphere-csi-node-bml6x started at 2020-01-24 17:44:44 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:43:57.144: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:43:57.144: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:43:57.144: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:43:57.144: INFO: sonobuoy-e2e-job-60496bb95c8b4e15 started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:43:57.144: INFO: Container e2e ready: true, restart count 0 | |
| Jan 24 20:43:57.144: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:43:57.144: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-pvvzm started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:43:57.144: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:43:57.144: INFO: Container systemd-logs ready: true, restart count 0 | |
| W0124 20:43:57.164177 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:43:57.289: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-l8q2f | |
| Jan 24 20:43:57.289: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:43:57.294: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-rnz88 /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-rnz88 6a50990d-c3aa-408e-89a9-b96a59028428 39145 0 2020-01-24 17:31:34 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-rnz88 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-rnz88"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.19.181/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.158.1 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://42041b55-5b6a-2e5f-e33f-ad3ffa7f7b6e,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229211136 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124353536 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:37 +0000 UTC,LastTransitionTime:2020-01-24 20:20:37 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:43:14 +0000 UTC,LastTransitionTime:2020-01-24 17:31:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:43:14 +0000 UTC,LastTransitionTime:2020-01-24 17:31:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:43:14 +0000 UTC,LastTransitionTime:2020-01-24 17:31:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:43:14 +0000 UTC,LastTransitionTime:2020-01-24 17:44:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-rnz88,},NodeAddress{Type:ExternalIP,Address:10.193.19.181,},NodeAddress{Type:InternalIP,Address:10.193.19.181,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:89e27edae47c4afabb7c067d037286ec,SystemUUID:551B0442-6A5B-5F2E-E33F-AD3FFA7F7B6E,BootID:d2a8e20a-f6ad-4b68-b732-e0d7bdbcc492,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:0ae49fdc22b3d81e85b98ea4587628bb78f49eaf5c14cdb47a13d8918d62cc00 gcr.io/google-containers/conformance:v1.16.3],SizeBytes:195864937,},ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:4a7190a4731cf3340c39ab33abf845baf6c81d911c965a879fffc1552e1a1938 
docker.io/calico/kube-controllers:v3.10.3],SizeBytes:21175514,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:43:57.295: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:43:57.302: INFO: | |
| Logging pods the kubelet thinks are on node workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:43:57.409: INFO: kube-proxy-hwjc4 started at 2020-01-24 17:31:34 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:43:57.409: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:43:57.409: INFO: calico-node-5g8mr started at 2020-01-24 20:19:58 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:43:57.409: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:43:57.409: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:43:57.409: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:43:57.409: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:43:57.409: INFO: vsphere-csi-node-6nzwf started at 2020-01-24 17:44:44 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:43:57.409: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:43:57.409: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:43:57.409: INFO: Container vsphere-csi-node ready: false, restart count 11 | |
| Jan 24 20:43:57.409: INFO: calico-kube-controllers-7489ff5b7c-sq5gk started at 2020-01-24 20:20:00 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:43:57.409: INFO: Container calico-kube-controllers ready: true, restart count 0 | |
| Jan 24 20:43:57.410: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-zjw2s started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:43:57.410: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:43:57.410: INFO: Container systemd-logs ready: true, restart count 0 | |
| W0124 20:43:57.418205 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:43:57.508: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:43:57.508: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:43:57.516: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-tgjll /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-tgjll efb8ee46-7fce-4349-92f2-46fa03f02d30 39225 0 2020-01-24 17:31:37 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-tgjll kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-tgjll"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.25.9/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.225.128 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://4204f234-cdde-565a-8623-6c3c5457e3cc,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229202944 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124345344 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:26 +0000 UTC,LastTransitionTime:2020-01-24 20:20:26 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:43:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:43:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:43:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:43:51 +0000 UTC,LastTransitionTime:2020-01-24 17:44:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-tgjll,},NodeAddress{Type:ExternalIP,Address:10.193.25.9,},NodeAddress{Type:InternalIP,Address:10.193.25.9,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bb2c77d8141443fd8503de52dbae52e8,SystemUUID:34F20442-DECD-5A56-8623-6C3C5457E3CC,BootID:849ca1bb-d08e-417e-bf12-485a70ffc44a,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:0ae49fdc22b3d81e85b98ea4587628bb78f49eaf5c14cdb47a13d8918d62cc00 gcr.io/google-containers/conformance:v1.16.3],SizeBytes:195864937,},ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 
gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:43:57.516: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:43:57.523: INFO: | |
| Logging pods the kubelet thinks is on node workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:43:57.547: INFO: calico-node-wvctw started at 2020-01-24 20:19:58 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:43:57.547: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:43:57.547: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:43:57.547: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:43:57.547: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:43:57.547: INFO: vsphere-csi-node-rqv42 started at 2020-01-24 17:44:49 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:43:57.547: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:43:57.547: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:43:57.547: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:43:57.547: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-prfkd started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:43:57.547: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:43:57.547: INFO: Container systemd-logs ready: true, restart count 0 | |
| Jan 24 20:43:57.547: INFO: server-mkr7b started at 2020-01-24 20:42:01 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:43:57.547: INFO: Container server-container-80 ready: false, restart count 0 | |
| Jan 24 20:43:57.547: INFO: Container server-container-81 ready: false, restart count 0 | |
| Jan 24 20:43:57.547: INFO: kube-proxy-wp7tn started at 2020-01-24 17:31:37 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:43:57.547: INFO: Container kube-proxy ready: true, restart count 0 | |
| W0124 20:43:57.554641 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:43:57.683: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:43:57.683: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:43:57.714: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-vvrzt /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-vvrzt 200933f4-8d1c-469c-b168-43a6ccbf5d04 39224 0 2020-01-24 17:31:43 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-vvrzt kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-vvrzt"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.7.86/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.163.64 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.5.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://42048f3c-cde1-2850-a851-6c4d757659cc,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.5.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229202944 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124345344 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:23 +0000 UTC,LastTransitionTime:2020-01-24 20:20:23 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:43:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:43:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:43:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:43:51 +0000 UTC,LastTransitionTime:2020-01-24 17:44:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-vvrzt,},NodeAddress{Type:ExternalIP,Address:10.193.7.86,},NodeAddress{Type:InternalIP,Address:10.193.7.86,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5c2ea802c8014c7aa88cb9bbbea1711b,SystemUUID:3C8F0442-E1CD-5028-A851-6C4D757659CC,BootID:9e70f010-f55d-4f1e-9e44-645f0afe4c09,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 
k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:43:57.715: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:43:57.728: INFO: | |
| Logging pods the kubelet thinks is on node workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:43:57.746: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-9jn4t started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:43:57.746: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:43:57.746: INFO: Container systemd-logs ready: true, restart count 0 | |
| Jan 24 20:43:57.746: INFO: kube-proxy-5jvhn started at 2020-01-24 17:31:43 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:43:57.746: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:43:57.746: INFO: calico-node-zpwz4 started at 2020-01-24 20:19:59 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:43:57.746: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:43:57.746: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:43:57.746: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:43:57.746: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:43:57.746: INFO: vsphere-csi-node-ttr82 started at 2020-01-24 17:44:44 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:43:57.746: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:43:57.746: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:43:57.746: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:43:57.746: INFO: sonobuoy started at 2020-01-24 20:21:31 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:43:57.746: INFO: Container kube-sonobuoy ready: true, restart count 0 | |
| W0124 20:43:57.752513 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:43:57.858: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:43:57.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
| STEP: Destroying namespace "network-policy-1511" for this suite. | |
| Jan 24 20:44:09.882: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered | |
| Jan 24 20:44:09.987: INFO: namespace network-policy-1511 deletion completed in 12.122869467s | |
| STEP: Destroying namespace "network-policy-b-1442" for this suite. | |
| Jan 24 20:44:16.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered | |
| Jan 24 20:44:16.187: INFO: namespace network-policy-b-1442 deletion completed in 6.199323409s | |
| • Failure [134.625 seconds] | |
| [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 | |
| NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:56 | |
| should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy] [It] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:158 | |
| Jan 24 20:43:56.336: Error getting container logs: the server could not find the requested resource (get pods client-b-rq5ql) | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:1421 | |
| ------------------------------ | |
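The failure above is not a connectivity failure: client-b-rq5ql evidently ran to completion, and the spec then aborted while fetching its container logs. "the server could not find the requested resource (get pods client-b-rq5ql)" is the apiserver's 404 for a pod-log GET naming a pod object it cannot find, which is what you get if the pod was already deleted or if the request targets the wrong namespace; in this spec the client pod lives in the secondary namespace (network-policy-b-1442) while the framework's primary namespace is network-policy-1511, so a namespace mix-up in the log fetch is a plausible culprit, though the log alone cannot confirm it. A minimal sketch of the call involved, assuming client-go >= 0.18 signatures (the kubeconfig wiring and namespace choice are illustrative):

package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from a kubeconfig path, as the suite does.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// GetLogs issues GET /api/v1/namespaces/{ns}/pods/{name}/log. If no pod
	// object with that name exists in {ns} -- deleted, or wrong namespace --
	// the apiserver answers 404: "the server could not find the requested
	// resource (get pods client-b-rq5ql)", the exact error string above.
	raw, err := cs.CoreV1().
		Pods("network-policy-b-1442"). // illustrative; the failing fetch may have used the other namespace
		GetLogs("client-b-rq5ql", &corev1.PodLogOptions{}).
		Do(context.TODO()).
		Raw()
	if err != nil {
		fmt.Printf("Error getting container logs: %v\n", err)
		return
	}
	fmt.Println(string(raw))
}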
| SSSSSSSSSSSSSSSSSSSSSSSSSSSS | |
| ------------------------------ | |
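Each spec opens with the same baseline check, "Testing pods can connect to both ports when no policy is present.", by launching short-lived client-can-connect-80 / client-can-connect-81 pods whose exit status encodes reachability; the pod events later in this log show they run docker.io/library/busybox:1.29. A hand-rolled analogue of such a probe pod, assuming a simple nc probe (this log does not show the suite's exact command):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// connectProbePod builds a one-shot pod that exits 0 iff the service port
// answers. With RestartPolicy=Never the pod phase itself (Succeeded/Failed)
// becomes the verdict the suite later polls for.
func connectProbePod(ns, svc string, port int) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: "client-can-connect-", // the suite's pods carry a similar generated suffix
			Namespace:    ns,
		},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", fmt.Sprintf("nc -z -w 5 %s %d", svc, port)}, // illustrative probe command
			}},
		},
	}
}

func main() {
	fmt.Printf("%+v\n", connectProbePod("network-policy-3218", "svc-server", 80))
}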
| [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client | |
| should enforce egress policy allowing traffic to a server in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:869 | |
| [BeforeEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
| STEP: Creating a kubernetes client | |
| Jan 24 20:44:16.191: INFO: >>> kubeConfig: /tmp/kubeconfig-418650170 | |
| STEP: Building a namespace api object, basename network-policy | |
| STEP: Waiting for a default service account to be provisioned in namespace | |
| [BeforeEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:51 | |
| [BeforeEach] NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:57 | |
| STEP: Creating a simple server that serves on port 80 and 81. | |
| STEP: Creating a server pod server in namespace network-policy-3218 | |
| Jan 24 20:44:16.286: INFO: Created pod server-tfg9n | |
| STEP: Creating a service svc-server for pod server in namespace network-policy-3218 | |
| Jan 24 20:44:16.443: INFO: Created service svc-server | |
| STEP: Waiting for pod ready | |
| STEP: Testing pods can connect to both ports when no policy is present. | |
| STEP: Creating client pod client-can-connect-80 that should successfully connect to svc-server. | |
| Jan 24 20:44:26.467: INFO: Waiting for client-can-connect-80-vnqf5 to complete. | |
| Jan 24 20:44:28.479: INFO: Waiting for client-can-connect-80-vnqf5 to complete. | |
| Jan 24 20:44:28.479: INFO: Waiting up to 5m0s for pod "client-can-connect-80-vnqf5" in namespace "network-policy-3218" to be "success or failure" | |
| Jan 24 20:44:28.483: INFO: Pod "client-can-connect-80-vnqf5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 3.219245ms | |
| STEP: Saw pod success | |
| Jan 24 20:44:28.483: INFO: Pod "client-can-connect-80-vnqf5" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-can-connect-80-vnqf5 | |
| STEP: Creating client pod client-can-connect-81 that should successfully connect to svc-server. | |
| Jan 24 20:44:28.533: INFO: Waiting for client-can-connect-81-xl6gt to complete. | |
| Jan 24 20:44:32.551: INFO: Waiting for client-can-connect-81-xl6gt to complete. | |
| Jan 24 20:44:32.551: INFO: Waiting up to 5m0s for pod "client-can-connect-81-xl6gt" in namespace "network-policy-3218" to be "success or failure" | |
| Jan 24 20:44:32.554: INFO: Pod "client-can-connect-81-xl6gt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 3.329281ms | |
| STEP: Saw pod success | |
| Jan 24 20:44:32.555: INFO: Pod "client-can-connect-81-xl6gt" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-can-connect-81-xl6gt | |
| [It] should enforce egress policy allowing traffic to a server in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:869 | |
| STEP: Creating a server pod ns-b-server-a in namespace network-policy-b-1865 | |
| Jan 24 20:44:32.659: INFO: Created pod ns-b-server-a-mf7vf | |
| STEP: Creating a service svc-ns-b-server-a for pod ns-b-server-a in namespace network-policy-b-1865 | |
| Jan 24 20:44:32.791: INFO: Created service svc-ns-b-server-a | |
| STEP: Creating a server pod ns-b-server-b in namespace network-policy-b-1865 | |
| Jan 24 20:44:32.817: INFO: Created pod ns-b-server-b-m8vf7 | |
| STEP: Creating a service svc-ns-b-server-b for pod ns-b-server-b in namespace network-policy-b-1865 | |
| Jan 24 20:44:32.880: INFO: Created service svc-ns-b-server-b | |
| Jan 24 20:44:32.882: INFO: Waiting for servers to come up. | |
| STEP: Creating a network policy for the server which allows traffic only to a server in different namespace. | |
| STEP: Creating client-a, in 'namespace-a', which should be able to contact the server-a in namespace-b. | |
| STEP: Creating client pod client-a that should successfully connect to svc-ns-b-server-a. | |
| Jan 24 20:44:34.977: INFO: Waiting for client-a-gxf82 to complete. | |
| Jan 24 20:44:44.997: INFO: Waiting for client-a-gxf82 to complete. | |
| Jan 24 20:44:44.997: INFO: Waiting up to 5m0s for pod "client-a-gxf82" in namespace "network-policy-3218" to be "success or failure" | |
| Jan 24 20:44:45.002: INFO: Pod "client-a-gxf82": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028232ms | |
| STEP: Saw pod success | |
| Jan 24 20:44:45.002: INFO: Pod "client-a-gxf82" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-a-gxf82 | |
| STEP: Creating client-a, in 'namespace-a', which should not be able to contact the server-b in namespace-b. | |
| STEP: Creating client pod client-a that should not be able to connect to svc-ns-b-server-b. | |
| Jan 24 20:44:45.066: INFO: Waiting for client-a-htk8t to complete. | |
| Jan 24 20:44:45.066: INFO: Waiting up to 5m0s for pod "client-a-htk8t" in namespace "network-policy-3218" to be "success or failure" | |
| Jan 24 20:44:45.085: INFO: Pod "client-a-htk8t": Phase="Pending", Reason="", readiness=false. Elapsed: 18.609291ms | |
| Jan 24 20:44:47.089: INFO: Pod "client-a-htk8t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022973739s | |
| Jan 24 20:44:49.093: INFO: Pod "client-a-htk8t": Phase="Running", Reason="", readiness=true. Elapsed: 4.027307887s | |
| Jan 24 20:44:51.117: INFO: Pod "client-a-htk8t": Phase="Running", Reason="", readiness=true. Elapsed: 6.051118618s | |
| Jan 24 20:44:53.122: INFO: Pod "client-a-htk8t": Phase="Running", Reason="", readiness=true. Elapsed: 8.055846231s | |
| Jan 24 20:44:55.126: INFO: Pod "client-a-htk8t": Phase="Running", Reason="", readiness=true. Elapsed: 10.060155417s | |
| Jan 24 20:44:57.132: INFO: Pod "client-a-htk8t": Phase="Running", Reason="", readiness=true. Elapsed: 12.065739949s | |
| Jan 24 20:44:59.136: INFO: Pod "client-a-htk8t": Phase="Running", Reason="", readiness=true. Elapsed: 14.070252655s | |
| Jan 24 20:45:01.141: INFO: Pod "client-a-htk8t": Phase="Running", Reason="", readiness=true. Elapsed: 16.075039591s | |
| Jan 24 20:45:03.145: INFO: Pod "client-a-htk8t": Phase="Running", Reason="", readiness=true. Elapsed: 18.079426913s | |
| Jan 24 20:45:05.150: INFO: Pod "client-a-htk8t": Phase="Running", Reason="", readiness=true. Elapsed: 20.08356098s | |
| Jan 24 20:45:07.160: INFO: Pod "client-a-htk8t": Phase="Running", Reason="", readiness=true. Elapsed: 22.094073003s | |
| Jan 24 20:45:09.165: INFO: Pod "client-a-htk8t": Phase="Running", Reason="", readiness=true. Elapsed: 24.098851239s | |
| Jan 24 20:45:11.170: INFO: Pod "client-a-htk8t": Phase="Running", Reason="", readiness=true. Elapsed: 26.103647498s | |
| Jan 24 20:45:13.174: INFO: Pod "client-a-htk8t": Phase="Running", Reason="", readiness=true. Elapsed: 28.107936342s | |
| Jan 24 20:45:15.179: INFO: Pod "client-a-htk8t": Phase="Running", Reason="", readiness=true. Elapsed: 30.112614198s | |
| Jan 24 20:45:17.183: INFO: Pod "client-a-htk8t": Phase="Running", Reason="", readiness=true. Elapsed: 32.116529085s | |
| Jan 24 20:45:19.187: INFO: Pod "client-a-htk8t": Phase="Running", Reason="", readiness=true. Elapsed: 34.121011186s | |
| Jan 24 20:45:21.192: INFO: Pod "client-a-htk8t": Phase="Running", Reason="", readiness=true. Elapsed: 36.125591103s | |
| Jan 24 20:45:23.196: INFO: Pod "client-a-htk8t": Phase="Running", Reason="", readiness=true. Elapsed: 38.130098383s | |
| Jan 24 20:45:25.201: INFO: Pod "client-a-htk8t": Phase="Running", Reason="", readiness=true. Elapsed: 40.135066483s | |
| Jan 24 20:45:27.225: INFO: Pod "client-a-htk8t": Phase="Running", Reason="", readiness=true. Elapsed: 42.159178388s | |
| Jan 24 20:45:29.230: INFO: Pod "client-a-htk8t": Phase="Running", Reason="", readiness=true. Elapsed: 44.163882823s | |
| Jan 24 20:45:31.234: INFO: Pod "client-a-htk8t": Phase="Running", Reason="", readiness=true. Elapsed: 46.168283775s | |
| Jan 24 20:45:33.239: INFO: Pod "client-a-htk8t": Phase="Failed", Reason="", readiness=false. Elapsed: 48.173034981s | |
| STEP: Cleaning up the pod client-a-htk8t | |
| STEP: Creating client-a, in 'namespace-a', which should not be able to contact the server in namespace-a. | |
| STEP: Creating client pod client-a that should not be able to connect to svc-server. | |
| Jan 24 20:45:33.277: INFO: Waiting for client-a-qmdvf to complete. | |
| Jan 24 20:45:33.278: INFO: Waiting up to 5m0s for pod "client-a-qmdvf" in namespace "network-policy-3218" to be "success or failure" | |
| Jan 24 20:45:33.287: INFO: Pod "client-a-qmdvf": Phase="Pending", Reason="", readiness=false. Elapsed: 9.884997ms | |
| Jan 24 20:45:35.292: INFO: Pod "client-a-qmdvf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014351977s | |
| Jan 24 20:45:37.296: INFO: Pod "client-a-qmdvf": Phase="Running", Reason="", readiness=true. Elapsed: 4.018682541s | |
| Jan 24 20:45:39.300: INFO: Pod "client-a-qmdvf": Phase="Running", Reason="", readiness=true. Elapsed: 6.022280844s | |
| Jan 24 20:45:41.305: INFO: Pod "client-a-qmdvf": Phase="Running", Reason="", readiness=true. Elapsed: 8.02700342s | |
| Jan 24 20:45:43.309: INFO: Pod "client-a-qmdvf": Phase="Running", Reason="", readiness=true. Elapsed: 10.031200601s | |
| Jan 24 20:45:45.318: INFO: Pod "client-a-qmdvf": Phase="Running", Reason="", readiness=true. Elapsed: 12.040295857s | |
| Jan 24 20:45:47.321: INFO: Pod "client-a-qmdvf": Phase="Running", Reason="", readiness=true. Elapsed: 14.043913276s | |
| Jan 24 20:45:49.327: INFO: Pod "client-a-qmdvf": Phase="Running", Reason="", readiness=true. Elapsed: 16.048956942s | |
| Jan 24 20:45:51.331: INFO: Pod "client-a-qmdvf": Phase="Running", Reason="", readiness=true. Elapsed: 18.052971266s | |
| Jan 24 20:45:53.335: INFO: Pod "client-a-qmdvf": Phase="Running", Reason="", readiness=true. Elapsed: 20.057338821s | |
| Jan 24 20:45:55.340: INFO: Pod "client-a-qmdvf": Phase="Running", Reason="", readiness=true. Elapsed: 22.06230174s | |
| Jan 24 20:45:57.344: INFO: Pod "client-a-qmdvf": Phase="Running", Reason="", readiness=true. Elapsed: 24.066701046s | |
| Jan 24 20:45:59.348: INFO: Pod "client-a-qmdvf": Phase="Running", Reason="", readiness=true. Elapsed: 26.070474023s | |
| Jan 24 20:46:01.352: INFO: Pod "client-a-qmdvf": Phase="Running", Reason="", readiness=true. Elapsed: 28.074050776s | |
| Jan 24 20:46:03.356: INFO: Pod "client-a-qmdvf": Phase="Running", Reason="", readiness=true. Elapsed: 30.07823154s | |
| Jan 24 20:46:05.360: INFO: Pod "client-a-qmdvf": Phase="Running", Reason="", readiness=true. Elapsed: 32.081972885s | |
| Jan 24 20:46:07.363: INFO: Pod "client-a-qmdvf": Phase="Running", Reason="", readiness=true. Elapsed: 34.085903896s | |
| Jan 24 20:46:09.368: INFO: Pod "client-a-qmdvf": Phase="Running", Reason="", readiness=true. Elapsed: 36.090048999s | |
| Jan 24 20:46:11.372: INFO: Pod "client-a-qmdvf": Phase="Running", Reason="", readiness=true. Elapsed: 38.094085246s | |
| Jan 24 20:46:13.377: INFO: Pod "client-a-qmdvf": Phase="Running", Reason="", readiness=true. Elapsed: 40.099221169s | |
| Jan 24 20:46:15.380: INFO: Pod "client-a-qmdvf": Phase="Running", Reason="", readiness=true. Elapsed: 42.102607776s | |
| Jan 24 20:46:17.384: INFO: Pod "client-a-qmdvf": Phase="Running", Reason="", readiness=true. Elapsed: 44.106623995s | |
| Jan 24 20:46:19.391: INFO: Pod "client-a-qmdvf": Phase="Running", Reason="", readiness=true. Elapsed: 46.113863618s | |
| Jan 24 20:46:21.395: INFO: Pod "client-a-qmdvf": Phase="Failed", Reason="", readiness=false. Elapsed: 48.117681712s | |
| STEP: Cleaning up the pod client-a-qmdvf | |
| STEP: Cleaning up the policy. | |
| STEP: Cleaning up the server. | |
| STEP: Cleaning up the server's service. | |
| STEP: Cleaning up the server. | |
| STEP: Cleaning up the server's service. | |
| [AfterEach] NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:75 | |
| STEP: Cleaning up the server. | |
| STEP: Cleaning up the server's service. | |
| [AfterEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
| Jan 24 20:46:21.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
| STEP: Destroying namespace "network-policy-3218" for this suite. | |
| Jan 24 20:46:27.696: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered | |
| Jan 24 20:46:27.797: INFO: namespace network-policy-3218 deletion completed in 6.135918588s | |
| STEP: Destroying namespace "network-policy-b-1865" for this suite. | |
| Jan 24 20:46:39.810: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered | |
| Jan 24 20:46:39.912: INFO: namespace network-policy-b-1865 deletion completed in 12.114782852s | |
| • [SLOW TEST:143.721 seconds] | |
| [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 | |
| NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:56 | |
| should enforce egress policy allowing traffic to a server in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:869 | |
| ------------------------------ | |
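This spec passed, and the pass/fail pattern above is exactly what a combined-peer egress policy produces: client-a reaches ns-b-server-a (Succeeded at 20:44:45), is blocked from ns-b-server-b (runs ~48s of retries, then Failed at 20:45:33), and is blocked even from the server in its own namespace (Failed at 20:46:21). A single NetworkPolicyPeer carrying both a PodSelector and a NamespaceSelector is an AND: only pods matching the pod selector inside namespaces matching the namespace selector are reachable, and all other egress from the selected pods is denied. A sketch of that policy shape; label keys and values here are illustrative, not the suite's actual ones:

package main

import (
	"fmt"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Egress from the selected client pods is allowed only to ns-b-server-a
	// in namespace-b; everything else (server-b, same-namespace server) is cut off.
	pol := &networkingv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "allow-egress-to-server-a-in-ns-b"},
		Spec: networkingv1.NetworkPolicySpec{
			PodSelector: metav1.LabelSelector{ // which pods the policy applies to
				MatchLabels: map[string]string{"pod-name": "client-a"},
			},
			PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeEgress},
			Egress: []networkingv1.NetworkPolicyEgressRule{{
				To: []networkingv1.NetworkPolicyPeer{{
					// Both selectors in ONE peer: namespace match AND pod match.
					NamespaceSelector: &metav1.LabelSelector{
						MatchLabels: map[string]string{"ns-name": "network-policy-b"},
					},
					PodSelector: &metav1.LabelSelector{
						MatchLabels: map[string]string{"pod-name": "ns-b-server-a"},
					},
				}},
			}},
		},
	}
	fmt.Printf("%+v\n", pol)
}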
| SSSSSSSSSS | |
| ------------------------------ | |
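The long runs of 'Phase="Running" ... Elapsed: ...' lines in the spec above are the framework polling the pod object roughly every two seconds, for up to 5m0s, until it reaches a terminal phase; for a "should not be able to connect" client, the expected outcome is the Failed phase once the in-pod probe exhausts its retries (~48s in both runs above). A minimal re-creation of that loop, again assuming client-go >= 0.18 signatures:

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForTerminalPhase polls a pod every 2s (matching the ~2s spacing of the
// Elapsed lines above) until it is Succeeded or Failed, or 5m0s passes.
func waitForTerminalPhase(cs kubernetes.Interface, ns, name string) (corev1.PodPhase, error) {
	var phase corev1.PodPhase
	err := wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		phase = pod.Status.Phase
		fmt.Printf("Pod %q: Phase=%q\n", name, phase)
		return phase == corev1.PodSucceeded || phase == corev1.PodFailed, nil
	})
	return phase, err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	phase, err := waitForTerminalPhase(cs, "network-policy-3218", "client-a-qmdvf") // names taken from the log above
	fmt.Println(phase, err)
}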
| [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client | |
| should support allow-all policy [Feature:NetworkPolicy] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:543 | |
| [BeforeEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
| STEP: Creating a kubernetes client | |
| Jan 24 20:46:39.913: INFO: >>> kubeConfig: /tmp/kubeconfig-418650170 | |
| STEP: Building a namespace api object, basename network-policy | |
| STEP: Waiting for a default service account to be provisioned in namespace | |
| [BeforeEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:51 | |
| [BeforeEach] NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:57 | |
| STEP: Creating a simple server that serves on port 80 and 81. | |
| STEP: Creating a server pod server in namespace network-policy-6006 | |
| Jan 24 20:46:39.979: INFO: Created pod server-6vwcx | |
| STEP: Creating a service svc-server for pod server in namespace network-policy-6006 | |
| Jan 24 20:46:40.016: INFO: Created service svc-server | |
| STEP: Waiting for pod ready | |
| STEP: Testing pods can connect to both ports when no policy is present. | |
| STEP: Creating client pod client-can-connect-80 that should successfully connect to svc-server. | |
| Jan 24 20:46:52.040: INFO: Waiting for client-can-connect-80-f8m96 to complete. | |
| Jan 24 20:46:54.057: INFO: Waiting for client-can-connect-80-f8m96 to complete. | |
| Jan 24 20:46:54.057: INFO: Waiting up to 5m0s for pod "client-can-connect-80-f8m96" in namespace "network-policy-6006" to be "success or failure" | |
| Jan 24 20:46:54.060: INFO: Pod "client-can-connect-80-f8m96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.899054ms | |
| STEP: Saw pod success | |
| Jan 24 20:46:54.060: INFO: Pod "client-can-connect-80-f8m96" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-can-connect-80-f8m96 | |
| STEP: Creating client pod client-can-connect-81 that should successfully connect to svc-server. | |
| Jan 24 20:46:54.078: INFO: Waiting for client-can-connect-81-gbqjw to complete. | |
| Jan 24 20:46:56.098: INFO: Waiting for client-can-connect-81-gbqjw to complete. | |
| Jan 24 20:46:56.098: INFO: Waiting up to 5m0s for pod "client-can-connect-81-gbqjw" in namespace "network-policy-6006" to be "success or failure" | |
| Jan 24 20:46:56.102: INFO: Pod "client-can-connect-81-gbqjw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 3.619898ms | |
| STEP: Saw pod success | |
| Jan 24 20:46:56.102: INFO: Pod "client-can-connect-81-gbqjw" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-can-connect-81-gbqjw | |
| [It] should support allow-all policy [Feature:NetworkPolicy] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:543 | |
| STEP: Creating a network policy which allows all traffic. | |
| STEP: Testing pods can connect to both ports when an 'allow-all' policy is present. | |
| STEP: Creating client pod client-a that should successfully connect to svc-server. | |
| Jan 24 20:46:56.142: INFO: Waiting for client-a-sht4p to complete. | |
| Jan 24 20:46:58.154: INFO: Waiting for client-a-sht4p to complete. | |
| Jan 24 20:46:58.155: INFO: Waiting up to 5m0s for pod "client-a-sht4p" in namespace "network-policy-6006" to be "success or failure" | |
| Jan 24 20:46:58.158: INFO: Pod "client-a-sht4p": Phase="Succeeded", Reason="", readiness=false. Elapsed: 3.216235ms | |
| STEP: Saw pod success | |
| Jan 24 20:46:58.158: INFO: Pod "client-a-sht4p" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-a-sht4p | |
| STEP: Creating client pod client-b that should successfully connect to svc-server. | |
| Jan 24 20:46:58.190: INFO: Waiting for client-b-x77qw to complete. | |
| Jan 24 20:47:00.205: INFO: Waiting for client-b-x77qw to complete. | |
| Jan 24 20:47:00.205: INFO: Waiting up to 5m0s for pod "client-b-x77qw" in namespace "network-policy-6006" to be "success or failure" | |
| Jan 24 20:47:00.208: INFO: Pod "client-b-x77qw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 3.177096ms | |
| STEP: Saw pod success | |
| Jan 24 20:47:00.208: INFO: Pod "client-b-x77qw" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-b-x77qw | |
| STEP: Cleaning up the policy. | |
| [AfterEach] NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:75 | |
| STEP: Cleaning up the server. | |
| STEP: Cleaning up the server's service. | |
| [AfterEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
| Jan 24 20:47:00.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
| STEP: Destroying namespace "network-policy-6006" for this suite. | |
| Jan 24 20:47:12.332: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered | |
| Jan 24 20:47:12.429: INFO: namespace network-policy-6006 deletion completed in 12.113095275s | |
| • [SLOW TEST:32.516 seconds] | |
| [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 | |
| NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:56 | |
| should support allow-all policy [Feature:NetworkPolicy] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:543 | |
| ------------------------------ | |
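For reference, an "allow-all" ingress policy of the kind this passing spec exercises is just an empty pod selector (select every pod in the namespace) plus a single empty ingress rule (admit everything). A sketch; the policy name is illustrative:

package main

import (
	"fmt"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	allowAll := &networkingv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "allow-all"},
		Spec: networkingv1.NetworkPolicySpec{
			// Empty PodSelector: the policy applies to every pod in the namespace.
			PodSelector: metav1.LabelSelector{},
			// One empty ingress rule: all sources and all ports are admitted.
			Ingress: []networkingv1.NetworkPolicyIngressRule{{}},
		},
	}
	fmt.Printf("%+v\n", allowAll)
}

Note the difference from an empty Ingress list, which would deny all ingress to the selected pods; the one-empty-rule form is what lets both client-a and client-b succeed above.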
| SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS | |
| ------------------------------ | |
| [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client | |
| should enforce policy based on NamespaceSelector with MatchExpressions[Feature:NetworkPolicy] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:242 | |
| [BeforeEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
| STEP: Creating a kubernetes client | |
| Jan 24 20:47:12.440: INFO: >>> kubeConfig: /tmp/kubeconfig-418650170 | |
| STEP: Building a namespace api object, basename network-policy | |
| STEP: Waiting for a default service account to be provisioned in namespace | |
| [BeforeEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:51 | |
| [BeforeEach] NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:57 | |
| STEP: Creating a simple server that serves on port 80 and 81. | |
| STEP: Creating a server pod server in namespace network-policy-905 | |
| Jan 24 20:47:12.624: INFO: Created pod server-cmg42 | |
| STEP: Creating a service svc-server for pod server in namespace network-policy-905 | |
| Jan 24 20:47:12.664: INFO: Created service svc-server | |
| STEP: Waiting for pod ready | |
| STEP: Testing pods can connect to both ports when no policy is present. | |
| STEP: Creating client pod client-can-connect-80 that should successfully connect to svc-server. | |
| Jan 24 20:47:24.702: INFO: Waiting for client-can-connect-80-7pzhq to complete. | |
| Jan 24 20:47:28.717: INFO: Waiting for client-can-connect-80-7pzhq to complete. | |
| Jan 24 20:47:28.717: INFO: Waiting up to 5m0s for pod "client-can-connect-80-7pzhq" in namespace "network-policy-905" to be "success or failure" | |
| Jan 24 20:47:28.722: INFO: Pod "client-can-connect-80-7pzhq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.251469ms | |
| STEP: Saw pod success | |
| Jan 24 20:47:28.722: INFO: Pod "client-can-connect-80-7pzhq" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-can-connect-80-7pzhq | |
| STEP: Creating client pod client-can-connect-81 that should successfully connect to svc-server. | |
| Jan 24 20:47:28.750: INFO: Waiting for client-can-connect-81-5bp2m to complete. | |
| Jan 24 20:47:30.776: INFO: Waiting for client-can-connect-81-5bp2m to complete. | |
| Jan 24 20:47:30.776: INFO: Waiting up to 5m0s for pod "client-can-connect-81-5bp2m" in namespace "network-policy-905" to be "success or failure" | |
| Jan 24 20:47:30.782: INFO: Pod "client-can-connect-81-5bp2m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 5.547142ms | |
| STEP: Saw pod success | |
| Jan 24 20:47:30.782: INFO: Pod "client-can-connect-81-5bp2m" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-can-connect-81-5bp2m | |
| [It] should enforce policy based on NamespaceSelector with MatchExpressions[Feature:NetworkPolicy] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:242 | |
| STEP: Creating a network policy for the server which allows traffic from ns different than namespace-a. | |
| STEP: Creating client pod client-a that should not be able to connect to svc-server. | |
| Jan 24 20:47:30.977: INFO: Waiting for client-a-hg5rk to complete. | |
| Jan 24 20:47:30.977: INFO: Waiting up to 5m0s for pod "client-a-hg5rk" in namespace "network-policy-c-6876" to be "success or failure" | |
| Jan 24 20:47:30.984: INFO: Pod "client-a-hg5rk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.845532ms | |
| Jan 24 20:47:32.988: INFO: Pod "client-a-hg5rk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011007393s | |
| STEP: Saw pod success | |
| Jan 24 20:47:32.989: INFO: Pod "client-a-hg5rk" satisfied condition "success or failure" | |
| Jan 24 20:47:32.992: FAIL: Error getting container logs: the server could not find the requested resource (get pods client-a-hg5rk) | |
| STEP: Cleaning up the pod client-a-hg5rk | |
| STEP: Cleaning up the policy. | |
| [AfterEach] NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:75 | |
| STEP: Cleaning up the server. | |
| STEP: Cleaning up the server's service. | |
| [AfterEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
| STEP: Collecting events from namespace "network-policy-905". | |
| STEP: Found 17 events. | |
| Jan 24 20:47:33.163: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-80-7pzhq: {default-scheduler } Scheduled: Successfully assigned network-policy-905/client-can-connect-80-7pzhq to workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:47:33.163: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-81-5bp2m: {default-scheduler } Scheduled: Successfully assigned network-policy-905/client-can-connect-81-5bp2m to workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:47:33.163: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for server-cmg42: {default-scheduler } Scheduled: Successfully assigned network-policy-905/server-cmg42 to workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:47:33.163: INFO: At 2020-01-24 20:47:13 +0000 UTC - event for server-cmg42: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Started: Started container server-container-80 | |
| Jan 24 20:47:33.163: INFO: At 2020-01-24 20:47:13 +0000 UTC - event for server-cmg42: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Created: Created container server-container-80 | |
| Jan 24 20:47:33.163: INFO: At 2020-01-24 20:47:13 +0000 UTC - event for server-cmg42: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.6" already present on machine | |
| Jan 24 20:47:33.163: INFO: At 2020-01-24 20:47:13 +0000 UTC - event for server-cmg42: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.6" already present on machine | |
| Jan 24 20:47:33.163: INFO: At 2020-01-24 20:47:14 +0000 UTC - event for server-cmg42: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Started: Started container server-container-81 | |
| Jan 24 20:47:33.163: INFO: At 2020-01-24 20:47:14 +0000 UTC - event for server-cmg42: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Created: Created container server-container-81 | |
| Jan 24 20:47:33.163: INFO: At 2020-01-24 20:47:25 +0000 UTC - event for client-can-connect-80-7pzhq: {kubelet workload-cluster-4-md-0-5c7f78dbc8-vvrzt} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine | |
| Jan 24 20:47:33.164: INFO: At 2020-01-24 20:47:25 +0000 UTC - event for client-can-connect-80-7pzhq: {kubelet workload-cluster-4-md-0-5c7f78dbc8-vvrzt} Created: Created container client-can-connect-80-container | |
| Jan 24 20:47:33.164: INFO: At 2020-01-24 20:47:25 +0000 UTC - event for client-can-connect-80-7pzhq: {kubelet workload-cluster-4-md-0-5c7f78dbc8-vvrzt} Started: Started container client-can-connect-80-container | |
| Jan 24 20:47:33.164: INFO: At 2020-01-24 20:47:29 +0000 UTC - event for client-can-connect-81-5bp2m: {kubelet workload-cluster-4-md-0-5c7f78dbc8-vvrzt} Started: Started container client-can-connect-81-container | |
| Jan 24 20:47:33.164: INFO: At 2020-01-24 20:47:29 +0000 UTC - event for client-can-connect-81-5bp2m: {kubelet workload-cluster-4-md-0-5c7f78dbc8-vvrzt} Created: Created container client-can-connect-81-container | |
| Jan 24 20:47:33.164: INFO: At 2020-01-24 20:47:29 +0000 UTC - event for client-can-connect-81-5bp2m: {kubelet workload-cluster-4-md-0-5c7f78dbc8-vvrzt} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine | |
| Jan 24 20:47:33.164: INFO: At 2020-01-24 20:47:33 +0000 UTC - event for server-cmg42: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Killing: Stopping container server-container-80 | |
| Jan 24 20:47:33.164: INFO: At 2020-01-24 20:47:33 +0000 UTC - event for server-cmg42: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Killing: Stopping container server-container-81 | |
| Jan 24 20:47:33.171: INFO: POD NODE PHASE GRACE CONDITIONS | |
| Jan 24 20:47:33.171: INFO: server-cmg42 workload-cluster-4-md-0-5c7f78dbc8-tgjll Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 20:47:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 20:47:24 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 20:47:24 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 20:47:12 +0000 UTC }] | |
| Jan 24 20:47:33.173: INFO: | |
| Jan 24 20:47:33.184: INFO: | |
| Logging node info for node workload-cluster-4-controlplane-0 | |
| Jan 24 20:47:33.192: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-controlplane-0 /api/v1/nodes/workload-cluster-4-controlplane-0 82506459-dd0f-48e7-bd9d-75efde1111f6 40100 0 2020-01-24 17:18:10 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-controlplane-0 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-controlplane-0"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.28.186/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.210.0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://4204e6d6-9184-b271-f650-0dabd9f75516,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229202944 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124345344 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:50 +0000 UTC,LastTransitionTime:2020-01-24 20:20:50 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:47:05 +0000 UTC,LastTransitionTime:2020-01-24 17:18:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:47:05 +0000 UTC,LastTransitionTime:2020-01-24 17:18:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:47:05 +0000 UTC,LastTransitionTime:2020-01-24 17:18:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:47:05 +0000 UTC,LastTransitionTime:2020-01-24 17:44:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-controlplane-0,},NodeAddress{Type:ExternalIP,Address:10.193.28.186,},NodeAddress{Type:InternalIP,Address:10.193.28.186,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:80b9ac3a81a84f4e8f29dac5e41ffd1f,SystemUUID:D6E60442-8491-71B2-F650-0DABD9F75516,BootID:0411b150-3c1b-4216-a7bc-d4a359e6753d,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/syncer@sha256:fc80ec77a2ab4b58ddfa259a938f6d741933566011d56e5ffcc8680cc83538fe gcr.io/cloud-provider-vsphere/csi/release/syncer:v1.0.1],SizeBytes:38454608,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner:v1.2.1],SizeBytes:18924297,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/cpi/release/manager@sha256:64de5c7f10e55703142383fade40886091528ca505f00c98d57e27f10f04fc03 
gcr.io/cloud-provider-vsphere/cpi/release/manager:v1.1.0],SizeBytes:16201394,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher:v1.1.1],SizeBytes:15526843,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:47:33.193: INFO: | |
| Logging kubelet events for node workload-cluster-4-controlplane-0 | |
| Jan 24 20:47:33.202: INFO: | |
| Logging pods the kubelet thinks is on node workload-cluster-4-controlplane-0 | |
| Jan 24 20:47:33.219: INFO: kube-controller-manager-workload-cluster-4-controlplane-0 started at 2020-01-24 17:17:56 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:47:33.219: INFO: Container kube-controller-manager ready: true, restart count 2 | |
| Jan 24 20:47:33.219: INFO: vsphere-cloud-controller-manager-xgbwj started at 2020-01-24 17:19:42 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:47:33.219: INFO: Container vsphere-cloud-controller-manager ready: true, restart count 0 | |
| Jan 24 20:47:33.219: INFO: vsphere-csi-controller-0 started at 2020-01-24 17:44:46 +0000 UTC (0+5 container statuses recorded) | |
| Jan 24 20:47:33.219: INFO: Container csi-attacher ready: true, restart count 0 | |
| Jan 24 20:47:33.219: INFO: Container csi-provisioner ready: true, restart count 0 | |
| Jan 24 20:47:33.219: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:47:33.219: INFO: Container vsphere-csi-controller ready: true, restart count 0 | |
| Jan 24 20:47:33.219: INFO: Container vsphere-syncer ready: true, restart count 0 | |
| Jan 24 20:47:33.219: INFO: etcd-workload-cluster-4-controlplane-0 started at 2020-01-24 17:17:56 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:47:33.219: INFO: Container etcd ready: true, restart count 0 | |
| Jan 24 20:47:33.219: INFO: kube-apiserver-workload-cluster-4-controlplane-0 started at 2020-01-24 17:17:56 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:47:33.219: INFO: Container kube-apiserver ready: true, restart count 0 | |
| Jan 24 20:47:33.219: INFO: kube-scheduler-workload-cluster-4-controlplane-0 started at 2020-01-24 17:17:56 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:47:33.219: INFO: Container kube-scheduler ready: true, restart count 1 | |
| Jan 24 20:47:33.219: INFO: kube-proxy-ng9fc started at 2020-01-24 17:19:39 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:47:33.219: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:47:33.219: INFO: vsphere-csi-node-bvs7m started at 2020-01-24 17:44:44 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:47:33.219: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:47:33.219: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:47:33.219: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:47:33.219: INFO: calico-node-dtwvg started at 2020-01-24 20:19:59 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:47:33.219: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:47:33.219: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:47:33.219: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:47:33.219: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:47:33.219: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-t9xfs started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:47:33.219: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:47:33.219: INFO: Container systemd-logs ready: true, restart count 0 | |
| W0124 20:47:33.225864 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:47:33.354: INFO: | |
| Latency metrics for node workload-cluster-4-controlplane-0 | |
| Jan 24 20:47:33.354: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:47:33.357: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-dj56d /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-dj56d e3b9736e-3d53-4225-9f32-9b5ed43a351d 39967 0 2020-01-24 17:31:41 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-dj56d kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-dj56d"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.22.156/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.222.0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://42045096-e00b-ae55-1d85-007f6b8efaee,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229211136 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124353536 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:50 +0000 UTC,LastTransitionTime:2020-01-24 20:20:50 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:46:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:41 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:46:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:41 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:46:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:41 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:46:51 +0000 UTC,LastTransitionTime:2020-01-24 17:44:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-dj56d,},NodeAddress{Type:ExternalIP,Address:10.193.22.156,},NodeAddress{Type:InternalIP,Address:10.193.22.156,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b52db3bea2844f019e98985de63db412,SystemUUID:96500442-0BE0-55AE-1D85-007F6B8EFAEE,BootID:4ec1b10c-fc30-41e5-9d16-834bd0f5cf13,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[docker.io/jayunit100/kube-controllers@sha256:80d3bff453f163c62d0dadf44d5056053144eceaf0d05fe8c90b861aa4d5d602 docker.io/jayunit100/kube-controllers:tkg2],SizeBytes:21160877,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 
gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:47:33.358: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:47:33.364: INFO: | |
| Logging pods the kubelet thinks is on node workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:47:33.430: INFO: vsphere-csi-node-tkb9r started at 2020-01-24 17:44:43 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:47:33.430: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:47:33.430: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:47:33.430: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:47:33.430: INFO: coredns-5644d7b6d9-m4lts started at 2020-01-24 17:44:44 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:47:33.430: INFO: Container coredns ready: true, restart count 0 | |
| Jan 24 20:47:33.430: INFO: coredns-5644d7b6d9-ph9m8 started at 2020-01-24 17:44:44 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:47:33.430: INFO: Container coredns ready: true, restart count 0 | |
| Jan 24 20:47:33.430: INFO: kube-proxy-9cjbv started at 2020-01-24 17:31:41 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:47:33.430: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:47:33.430: INFO: calico-node-8t722 started at 2020-01-24 20:19:58 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:47:33.430: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:47:33.430: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:47:33.430: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:47:33.430: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:47:33.430: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-25557 started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:47:33.430: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:47:33.430: INFO: Container systemd-logs ready: true, restart count 0 | |
| W0124 20:47:33.437486 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:47:33.639: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:47:33.639: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-l8q2f | |
| Jan 24 20:47:33.643: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-l8q2f /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-l8q2f 3be5c565-a115-4d45-a1be-f70db9c1b166 39970 0 2020-01-24 17:31:43 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-l8q2f kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-l8q2f"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.1.57/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.184.192 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://4204aa91-067b-70ca-7b40-e384413568c2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229202944 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124345344 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:23 +0000 UTC,LastTransitionTime:2020-01-24 20:20:23 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:46:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:46:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:46:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:46:51 +0000 UTC,LastTransitionTime:2020-01-24 17:44:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-l8q2f,},NodeAddress{Type:ExternalIP,Address:10.193.1.57,},NodeAddress{Type:InternalIP,Address:10.193.1.57,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:45e1a30e9fcd470795575df43a4210d5,SystemUUID:91AA0442-7B06-CA70-7B40-E384413568C2,BootID:3aae672c-d64c-4549-8c6f-2c6720e8ef80,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:0ae49fdc22b3d81e85b98ea4587628bb78f49eaf5c14cdb47a13d8918d62cc00 gcr.io/google-containers/conformance:v1.16.3],SizeBytes:195864937,},ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 
gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:47:33.644: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-l8q2f | |
| Jan 24 20:47:33.650: INFO: | |
| Logging pods the kubelet thinks is on node workload-cluster-4-md-0-5c7f78dbc8-l8q2f | |
| Jan 24 20:47:33.667: INFO: kube-proxy-pw7c7 started at 2020-01-24 17:31:43 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:47:33.667: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:47:33.667: INFO: calico-node-rqffr started at 2020-01-24 20:19:59 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:47:33.667: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:47:33.667: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:47:33.667: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:47:33.667: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:47:33.667: INFO: vsphere-csi-node-bml6x started at 2020-01-24 17:44:44 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:47:33.667: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:47:33.667: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:47:33.667: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:47:33.667: INFO: sonobuoy-e2e-job-60496bb95c8b4e15 started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:47:33.668: INFO: Container e2e ready: true, restart count 0 | |
| Jan 24 20:47:33.668: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:47:33.668: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-pvvzm started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:47:33.668: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:47:33.668: INFO: Container systemd-logs ready: true, restart count 0 | |
| W0124 20:47:33.672494 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:47:33.765: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-l8q2f | |
| Jan 24 20:47:33.765: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:47:33.769: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-rnz88 /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-rnz88 6a50990d-c3aa-408e-89a9-b96a59028428 40154 0 2020-01-24 17:31:34 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-rnz88 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-rnz88"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.19.181/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.158.1 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://42041b55-5b6a-2e5f-e33f-ad3ffa7f7b6e,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229211136 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124353536 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:37 +0000 UTC,LastTransitionTime:2020-01-24 20:20:37 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:47:14 +0000 UTC,LastTransitionTime:2020-01-24 17:31:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:47:14 +0000 UTC,LastTransitionTime:2020-01-24 17:31:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:47:14 +0000 UTC,LastTransitionTime:2020-01-24 17:31:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:47:14 +0000 UTC,LastTransitionTime:2020-01-24 17:44:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-rnz88,},NodeAddress{Type:ExternalIP,Address:10.193.19.181,},NodeAddress{Type:InternalIP,Address:10.193.19.181,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:89e27edae47c4afabb7c067d037286ec,SystemUUID:551B0442-6A5B-5F2E-E33F-AD3FFA7F7B6E,BootID:d2a8e20a-f6ad-4b68-b732-e0d7bdbcc492,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:0ae49fdc22b3d81e85b98ea4587628bb78f49eaf5c14cdb47a13d8918d62cc00 gcr.io/google-containers/conformance:v1.16.3],SizeBytes:195864937,},ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:4a7190a4731cf3340c39ab33abf845baf6c81d911c965a879fffc1552e1a1938 
docker.io/calico/kube-controllers:v3.10.3],SizeBytes:21175514,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:47:33.769: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:47:33.778: INFO: | |
| Logging pods the kubelet thinks is on node workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:47:33.797: INFO: calico-kube-controllers-7489ff5b7c-sq5gk started at 2020-01-24 20:20:00 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:47:33.798: INFO: Container calico-kube-controllers ready: true, restart count 0 | |
| Jan 24 20:47:33.798: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-zjw2s started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:47:33.798: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:47:33.798: INFO: Container systemd-logs ready: true, restart count 0 | |
| Jan 24 20:47:33.798: INFO: kube-proxy-hwjc4 started at 2020-01-24 17:31:34 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:47:33.798: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:47:33.798: INFO: calico-node-5g8mr started at 2020-01-24 20:19:58 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:47:33.799: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:47:33.799: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:47:33.799: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:47:33.799: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:47:33.799: INFO: vsphere-csi-node-6nzwf started at 2020-01-24 17:44:44 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:47:33.799: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:47:33.799: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:47:33.799: INFO: Container vsphere-csi-node ready: false, restart count 13 | |
| W0124 20:47:33.806100 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:47:33.916: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:47:33.916: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:47:33.921: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-tgjll /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-tgjll efb8ee46-7fce-4349-92f2-46fa03f02d30 39966 0 2020-01-24 17:31:37 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-tgjll kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-tgjll"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.25.9/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.225.128 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://4204f234-cdde-565a-8623-6c3c5457e3cc,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229202944 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124345344 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:26 +0000 UTC,LastTransitionTime:2020-01-24 20:20:26 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:46:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:46:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:46:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:46:51 +0000 UTC,LastTransitionTime:2020-01-24 17:44:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-tgjll,},NodeAddress{Type:ExternalIP,Address:10.193.25.9,},NodeAddress{Type:InternalIP,Address:10.193.25.9,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bb2c77d8141443fd8503de52dbae52e8,SystemUUID:34F20442-DECD-5A56-8623-6C3C5457E3CC,BootID:849ca1bb-d08e-417e-bf12-485a70ffc44a,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:0ae49fdc22b3d81e85b98ea4587628bb78f49eaf5c14cdb47a13d8918d62cc00 gcr.io/google-containers/conformance:v1.16.3],SizeBytes:195864937,},ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 
gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:47:33.922: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:47:33.928: INFO: | |
| Logging pods the kubelet thinks is on node workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:47:33.950: INFO: vsphere-csi-node-rqv42 started at 2020-01-24 17:44:49 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:47:33.950: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:47:33.950: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:47:33.950: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:47:33.950: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-prfkd started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:47:33.950: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:47:33.950: INFO: Container systemd-logs ready: true, restart count 0 | |
| Jan 24 20:47:33.950: INFO: server-cmg42 started at 2020-01-24 20:47:12 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:47:33.950: INFO: Container server-container-80 ready: true, restart count 0 | |
| Jan 24 20:47:33.950: INFO: Container server-container-81 ready: true, restart count 0 | |
| Jan 24 20:47:33.950: INFO: kube-proxy-wp7tn started at 2020-01-24 17:31:37 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:47:33.950: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:47:33.950: INFO: calico-node-wvctw started at 2020-01-24 20:19:58 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:47:33.950: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:47:33.950: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:47:33.950: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:47:33.950: INFO: Container calico-node ready: true, restart count 0 | |
| W0124 20:47:33.955082 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:47:34.143: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:47:34.143: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:47:34.147: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-vvrzt /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-vvrzt 200933f4-8d1c-469c-b168-43a6ccbf5d04 39965 0 2020-01-24 17:31:43 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-vvrzt kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-vvrzt"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.7.86/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.163.64 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.5.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://42048f3c-cde1-2850-a851-6c4d757659cc,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.5.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229202944 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124345344 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:23 +0000 UTC,LastTransitionTime:2020-01-24 20:20:23 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:46:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:46:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:46:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:46:51 +0000 UTC,LastTransitionTime:2020-01-24 17:44:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-vvrzt,},NodeAddress{Type:ExternalIP,Address:10.193.7.86,},NodeAddress{Type:InternalIP,Address:10.193.7.86,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5c2ea802c8014c7aa88cb9bbbea1711b,SystemUUID:3C8F0442-E1CD-5028-A851-6C4D757659CC,BootID:9e70f010-f55d-4f1e-9e44-645f0afe4c09,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 
k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:47:34.148: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:47:34.155: INFO: | |
| Logging pods the kubelet thinks is on node workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:47:34.178: INFO: kube-proxy-5jvhn started at 2020-01-24 17:31:43 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:47:34.178: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:47:34.178: INFO: calico-node-zpwz4 started at 2020-01-24 20:19:59 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:47:34.178: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:47:34.178: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:47:34.178: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:47:34.178: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:47:34.178: INFO: vsphere-csi-node-ttr82 started at 2020-01-24 17:44:44 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:47:34.178: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:47:34.178: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:47:34.178: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:47:34.178: INFO: sonobuoy started at 2020-01-24 20:21:31 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:47:34.178: INFO: Container kube-sonobuoy ready: true, restart count 0 | |
| Jan 24 20:47:34.178: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-9jn4t started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:47:34.178: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:47:34.178: INFO: Container systemd-logs ready: true, restart count 0 | |
| W0124 20:47:34.184723 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:47:34.300: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:47:34.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
| STEP: Destroying namespace "network-policy-905" for this suite. | |
| Jan 24 20:47:40.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered | |
| Jan 24 20:47:40.465: INFO: namespace network-policy-905 deletion completed in 6.15738703s | |
| STEP: Destroying namespace "network-policy-b-233" for this suite. | |
| Jan 24 20:47:46.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered | |
| Jan 24 20:47:46.589: INFO: namespace network-policy-b-233 deletion completed in 6.123973838s | |
| STEP: Destroying namespace "network-policy-c-6876" for this suite. | |
| Jan 24 20:47:52.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered | |
| Jan 24 20:47:52.717: INFO: namespace network-policy-c-6876 deletion completed in 6.127992507s | |
| • Failure [40.277 seconds] | |
| [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 | |
| NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:56 | |
| should enforce policy based on NamespaceSelector with MatchExpressions[Feature:NetworkPolicy] [It] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:242 | |
| Jan 24 20:47:32.992: Error getting container logs: the server could not find the requested resource (get pods client-a-hg5rk) | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:1458 | |
| ------------------------------ | |
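For reference on what the failing spec above exercises: "NamespaceSelector with MatchExpressions" means the policy's ingress peer selects source namespaces with a set-based label expression rather than a plain matchLabels map. A minimal sketch of such a policy using the Go API types; the label key and values are illustrative, not the harness's generated ones:

    package main

    import (
        "encoding/json"
        "fmt"

        networkingv1 "k8s.io/api/networking/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        policy := networkingv1.NetworkPolicy{
            ObjectMeta: metav1.ObjectMeta{Name: "allow-ns-b-match-expressions"},
            Spec: networkingv1.NetworkPolicySpec{
                // Apply to the server pod the harness creates.
                PodSelector: metav1.LabelSelector{
                    MatchLabels: map[string]string{"pod-name": "server"},
                },
                Ingress: []networkingv1.NetworkPolicyIngressRule{{
                    From: []networkingv1.NetworkPolicyPeer{{
                        // Admit traffic only from namespaces whose ns-name
                        // label is In the listed set -- the MatchExpressions form.
                        NamespaceSelector: &metav1.LabelSelector{
                            MatchExpressions: []metav1.LabelSelectorRequirement{{
                                Key:      "ns-name",
                                Operator: metav1.LabelSelectorOpIn,
                                Values:   []string{"network-policy-b"},
                            }},
                        },
                    }},
                }},
            },
        }
        out, _ := json.MarshalIndent(policy, "", "  ")
        fmt.Println(string(out))
    }

The reported error ("Error getting container logs ... get pods client-a-hg5rk") appears to come from the framework's post-failure log-collection step rather than from the connectivity assertion itself, which suggests the client pod was already deleted by the time its logs were fetched.
------------------------------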
| SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS | |
| ------------------------------ | |
| [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client | |
| should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:383 | |
| [BeforeEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
| STEP: Creating a kubernetes client | |
| Jan 24 20:47:52.731: INFO: >>> kubeConfig: /tmp/kubeconfig-418650170 | |
| STEP: Building a namespace api object, basename network-policy | |
| STEP: Waiting for a default service account to be provisioned in namespace | |
| [BeforeEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:51 | |
| [BeforeEach] NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:57 | |
| STEP: Creating a simple server that serves on port 80 and 81. | |
| STEP: Creating a server pod server in namespace network-policy-75 | |
| Jan 24 20:47:52.778: INFO: Created pod server-bh2zk | |
| STEP: Creating a service svc-server for pod server in namespace network-policy-75 | |
| Jan 24 20:47:52.815: INFO: Created service svc-server | |
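------------------------------
The "simple server" being created here (visible in the kubelet pod dumps above as server-container-80 / server-container-81) is a two-container pod with one listener per port, fronted by the svc-server ClusterIP service. A rough stand-in for that pod in Go API types; the agnhost "porter" args and SERVE_PORT_* env are assumptions based on the agnhost:2.6 image listed in the node info, not copied from the harness source:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // listener builds one server container; names and ports follow the log above.
    func listener(port int32) corev1.Container {
        return corev1.Container{
            Name:  fmt.Sprintf("server-container-%d", port),
            Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.6",
            Args:  []string{"porter"}, // assumption: agnhost's simple per-port responder
            Env: []corev1.EnvVar{{
                Name:  fmt.Sprintf("SERVE_PORT_%d", port),
                Value: "foo",
            }},
            Ports: []corev1.ContainerPort{{ContainerPort: port}},
        }
    }

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name:   "server",
                Labels: map[string]string{"pod-name": "server"},
            },
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{listener(80), listener(81)},
            },
        }
        fmt.Printf("%+v\n", pod)
    }
------------------------------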
| STEP: Waiting for pod ready | |
| STEP: Testing pods can connect to both ports when no policy is present. | |
| STEP: Creating client pod client-can-connect-80 that should successfully connect to svc-server. | |
| Jan 24 20:48:04.840: INFO: Waiting for client-can-connect-80-z5b7x to complete. | |
| Jan 24 20:48:06.849: INFO: Waiting for client-can-connect-80-z5b7x to complete. | |
| Jan 24 20:48:06.849: INFO: Waiting up to 5m0s for pod "client-can-connect-80-z5b7x" in namespace "network-policy-75" to be "success or failure" | |
| Jan 24 20:48:06.852: INFO: Pod "client-can-connect-80-z5b7x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 3.146836ms | |
| STEP: Saw pod success | |
| Jan 24 20:48:06.852: INFO: Pod "client-can-connect-80-z5b7x" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-can-connect-80-z5b7x | |
| STEP: Creating client pod client-can-connect-81 that should successfully connect to svc-server. | |
| Jan 24 20:48:06.884: INFO: Waiting for client-can-connect-81-d2sj4 to complete. | |
| Jan 24 20:48:08.902: INFO: Waiting for client-can-connect-81-d2sj4 to complete. | |
| Jan 24 20:48:08.902: INFO: Waiting up to 5m0s for pod "client-can-connect-81-d2sj4" in namespace "network-policy-75" to be "success or failure" | |
| Jan 24 20:48:08.906: INFO: Pod "client-can-connect-81-d2sj4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 3.152611ms | |
| STEP: Saw pod success | |
| Jan 24 20:48:08.906: INFO: Pod "client-can-connect-81-d2sj4" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-can-connect-81-d2sj4 | |
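------------------------------
The repeated 'Waiting up to 5m0s for pod ... to be "success or failure"' lines are the framework polling the client pod's phase until it terminates. A minimal equivalent with client-go (a recent release; pre-0.18 clientsets omit the context argument), standing in for the framework's own helper:

    package main

    import (
        "context"
        "fmt"
        "os"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitSuccessOrFailure polls the pod until it reaches a terminal phase.
    func waitSuccessOrFailure(cs kubernetes.Interface, ns, name string, timeout time.Duration) (corev1.PodPhase, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return "", err
            }
            if p := pod.Status.Phase; p == corev1.PodSucceeded || p == corev1.PodFailed {
                return p, nil
            }
            time.Sleep(2 * time.Second) // matches the ~2s polling cadence in the log
        }
        return "", fmt.Errorf("timed out waiting on pod %s/%s", ns, name)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        phase, err := waitSuccessOrFailure(kubernetes.NewForConfigOrDie(cfg),
            "network-policy-75", "client-can-connect-80-z5b7x", 5*time.Minute)
        fmt.Println(phase, err)
    }

A client pod that exits 0 lands in Succeeded (the "Saw pod success" lines above); a nonzero exit lands in Failed, which the negative checks later in this spec rely on.
------------------------------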
| [It] should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:383 | |
| Jan 24 20:48:09.011: INFO: Waiting for server to come up. | |
| STEP: Creating client-a, in server's namespace, which should be able to contact the server. | |
| STEP: Creating client pod client-a that should successfully connect to svc-server. | |
| Jan 24 20:48:09.034: INFO: Waiting for client-a-rz2z9 to complete. | |
| Jan 24 20:48:11.064: INFO: Waiting for client-a-rz2z9 to complete. | |
| Jan 24 20:48:11.064: INFO: Waiting up to 5m0s for pod "client-a-rz2z9" in namespace "network-policy-75" to be "success or failure" | |
| Jan 24 20:48:11.067: INFO: Pod "client-a-rz2z9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 3.352089ms | |
| STEP: Saw pod success | |
| Jan 24 20:48:11.067: INFO: Pod "client-a-rz2z9" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-a-rz2z9 | |
| STEP: Creating client-b, in server's namespace, which should be able to contact the server. | |
| STEP: Creating client pod client-b that should successfully connect to svc-server. | |
| Jan 24 20:48:11.122: INFO: Waiting for client-b-bzz68 to complete. | |
| Jan 24 20:48:13.133: INFO: Waiting for client-b-bzz68 to complete. | |
| Jan 24 20:48:13.133: INFO: Waiting up to 5m0s for pod "client-b-bzz68" in namespace "network-policy-75" to be "success or failure" | |
| Jan 24 20:48:13.136: INFO: Pod "client-b-bzz68": Phase="Succeeded", Reason="", readiness=false. Elapsed: 3.008234ms | |
| STEP: Saw pod success | |
| Jan 24 20:48:13.136: INFO: Pod "client-b-bzz68" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-b-bzz68 | |
| STEP: Creating client-a, not in server's namespace, which should be able to contact the server. | |
| STEP: Creating client pod client-a that should successfully connect to svc-server. | |
| Jan 24 20:48:13.159: INFO: Waiting for client-a-hc77t to complete. | |
| Jan 24 20:48:15.174: INFO: Waiting for client-a-hc77t to complete. | |
| Jan 24 20:48:15.174: INFO: Waiting up to 5m0s for pod "client-a-hc77t" in namespace "network-policy-b-3230" to be "success or failure" | |
| Jan 24 20:48:15.177: INFO: Pod "client-a-hc77t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 3.018701ms | |
| STEP: Saw pod success | |
| Jan 24 20:48:15.177: INFO: Pod "client-a-hc77t" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-a-hc77t | |
| STEP: Creating client-b, not in server's namespace, which should be able to contact the server. | |
| STEP: Creating client pod client-b that should successfully connect to svc-server. | |
| Jan 24 20:48:15.219: INFO: Waiting for client-b-hxr2g to complete. | |
| Jan 24 20:48:17.233: INFO: Waiting for client-b-hxr2g to complete. | |
| Jan 24 20:48:17.233: INFO: Waiting up to 5m0s for pod "client-b-hxr2g" in namespace "network-policy-b-3230" to be "success or failure" | |
| Jan 24 20:48:17.238: INFO: Pod "client-b-hxr2g": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.235125ms | |
| STEP: Saw pod success | |
| Jan 24 20:48:17.238: INFO: Pod "client-b-hxr2g" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-b-hxr2g | |
| STEP: Creating a network policy for the server which allows traffic only from client-a in namespace-b. | |
| STEP: Creating client-a, in server's namespace, which should not be able to contact the server. | |
| STEP: Creating client pod client-a that should not be able to connect to svc-server. | |
| Jan 24 20:48:17.286: INFO: Waiting for client-a-v8ln7 to complete. | |
| Jan 24 20:48:17.286: INFO: Waiting up to 5m0s for pod "client-a-v8ln7" in namespace "network-policy-75" to be "success or failure" | |
| Jan 24 20:48:17.295: INFO: Pod "client-a-v8ln7": Phase="Pending", Reason="", readiness=false. Elapsed: 9.296063ms | |
| Jan 24 20:48:19.311: INFO: Pod "client-a-v8ln7": Phase="Running", Reason="", readiness=true. Elapsed: 2.024780835s | |
| Jan 24 20:48:21.315: INFO: Pod "client-a-v8ln7": Phase="Running", Reason="", readiness=true. Elapsed: 4.028787408s | |
| Jan 24 20:48:23.319: INFO: Pod "client-a-v8ln7": Phase="Running", Reason="", readiness=true. Elapsed: 6.033168593s | |
| Jan 24 20:48:25.324: INFO: Pod "client-a-v8ln7": Phase="Running", Reason="", readiness=true. Elapsed: 8.038280627s | |
| Jan 24 20:48:27.329: INFO: Pod "client-a-v8ln7": Phase="Running", Reason="", readiness=true. Elapsed: 10.042739967s | |
| Jan 24 20:48:29.333: INFO: Pod "client-a-v8ln7": Phase="Running", Reason="", readiness=true. Elapsed: 12.046809053s | |
| Jan 24 20:48:31.337: INFO: Pod "client-a-v8ln7": Phase="Running", Reason="", readiness=true. Elapsed: 14.050821192s | |
| Jan 24 20:48:33.341: INFO: Pod "client-a-v8ln7": Phase="Running", Reason="", readiness=true. Elapsed: 16.054839637s | |
| Jan 24 20:48:35.345: INFO: Pod "client-a-v8ln7": Phase="Running", Reason="", readiness=true. Elapsed: 18.059152825s | |
| Jan 24 20:48:37.349: INFO: Pod "client-a-v8ln7": Phase="Running", Reason="", readiness=true. Elapsed: 20.062947048s | |
| Jan 24 20:48:39.354: INFO: Pod "client-a-v8ln7": Phase="Running", Reason="", readiness=true. Elapsed: 22.067704496s | |
| Jan 24 20:48:41.358: INFO: Pod "client-a-v8ln7": Phase="Running", Reason="", readiness=true. Elapsed: 24.071377278s | |
| Jan 24 20:48:43.362: INFO: Pod "client-a-v8ln7": Phase="Running", Reason="", readiness=true. Elapsed: 26.075758547s | |
| Jan 24 20:48:45.366: INFO: Pod "client-a-v8ln7": Phase="Running", Reason="", readiness=true. Elapsed: 28.079781974s | |
| Jan 24 20:48:47.370: INFO: Pod "client-a-v8ln7": Phase="Running", Reason="", readiness=true. Elapsed: 30.084243942s | |
| Jan 24 20:48:49.374: INFO: Pod "client-a-v8ln7": Phase="Running", Reason="", readiness=true. Elapsed: 32.088085009s | |
| Jan 24 20:48:51.382: INFO: Pod "client-a-v8ln7": Phase="Running", Reason="", readiness=true. Elapsed: 34.095782297s | |
| Jan 24 20:48:53.386: INFO: Pod "client-a-v8ln7": Phase="Running", Reason="", readiness=true. Elapsed: 36.099384067s | |
| Jan 24 20:48:55.390: INFO: Pod "client-a-v8ln7": Phase="Running", Reason="", readiness=true. Elapsed: 38.103673511s | |
| Jan 24 20:48:57.394: INFO: Pod "client-a-v8ln7": Phase="Running", Reason="", readiness=true. Elapsed: 40.107837995s | |
| Jan 24 20:48:59.398: INFO: Pod "client-a-v8ln7": Phase="Running", Reason="", readiness=true. Elapsed: 42.111592968s | |
| Jan 24 20:49:01.403: INFO: Pod "client-a-v8ln7": Phase="Running", Reason="", readiness=true. Elapsed: 44.11655533s | |
| Jan 24 20:49:03.407: INFO: Pod "client-a-v8ln7": Phase="Running", Reason="", readiness=true. Elapsed: 46.120593276s | |
| Jan 24 20:49:05.411: INFO: Pod "client-a-v8ln7": Phase="Failed", Reason="", readiness=false. Elapsed: 48.125060769s | |
| STEP: Cleaning up the pod client-a-v8ln7 | |
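
The wait loop above polls the pod every couple of seconds until it reaches a terminal phase, bounded by the 5m0s ceiling ("success or failure"); the same pattern repeats for client-b below. A minimal sketch of that loop using apimachinery's wait helpers and a recent client-go (the two-second interval and the function name are my assumptions; newer releases prefer context-based variants such as wait.PollUntilContextTimeout):

    package e2esketch

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitForTerminalPhase polls a pod until it is Succeeded or Failed,
    // mirroring the "success or failure" condition in the log above.
    func waitForTerminalPhase(cs *kubernetes.Clientset, ns, name string) (corev1.PodPhase, error) {
    	var phase corev1.PodPhase
    	err := wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
    		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    		if err != nil {
    			return false, err // give up on API errors
    		}
    		phase = pod.Status.Phase
    		// Pending/Running are not terminal; keep polling until timeout.
    		return phase == corev1.PodSucceeded || phase == corev1.PodFailed, nil
    	})
    	return phase, err
    }
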
| STEP: Creating client-b, in server's namespace, which should not be able to contact the server. | |
| STEP: Creating client pod client-b that should not be able to connect to svc-server. | |
| Jan 24 20:49:05.444: INFO: Waiting for client-b-zt68p to complete. | |
| Jan 24 20:49:05.444: INFO: Waiting up to 5m0s for pod "client-b-zt68p" in namespace "network-policy-75" to be "success or failure" | |
| Jan 24 20:49:05.471: INFO: Pod "client-b-zt68p": Phase="Pending", Reason="", readiness=false. Elapsed: 26.712959ms | |
| Jan 24 20:49:07.475: INFO: Pod "client-b-zt68p": Phase="Running", Reason="", readiness=true. Elapsed: 2.0308318s | |
| Jan 24 20:49:09.479: INFO: Pod "client-b-zt68p": Phase="Running", Reason="", readiness=true. Elapsed: 4.034335034s | |
| Jan 24 20:49:11.483: INFO: Pod "client-b-zt68p": Phase="Running", Reason="", readiness=true. Elapsed: 6.038570101s | |
| Jan 24 20:49:13.487: INFO: Pod "client-b-zt68p": Phase="Running", Reason="", readiness=true. Elapsed: 8.042320081s | |
| Jan 24 20:49:15.492: INFO: Pod "client-b-zt68p": Phase="Running", Reason="", readiness=true. Elapsed: 10.048208808s | |
| Jan 24 20:49:17.497: INFO: Pod "client-b-zt68p": Phase="Running", Reason="", readiness=true. Elapsed: 12.053073358s | |
| Jan 24 20:49:19.503: INFO: Pod "client-b-zt68p": Phase="Running", Reason="", readiness=true. Elapsed: 14.058404607s | |
| Jan 24 20:49:21.506: INFO: Pod "client-b-zt68p": Phase="Running", Reason="", readiness=true. Elapsed: 16.061902989s | |
| Jan 24 20:49:23.510: INFO: Pod "client-b-zt68p": Phase="Running", Reason="", readiness=true. Elapsed: 18.066109187s | |
| Jan 24 20:49:25.517: INFO: Pod "client-b-zt68p": Phase="Running", Reason="", readiness=true. Elapsed: 20.072864769s | |
| Jan 24 20:49:27.522: INFO: Pod "client-b-zt68p": Phase="Running", Reason="", readiness=true. Elapsed: 22.078261195s | |
| Jan 24 20:49:29.526: INFO: Pod "client-b-zt68p": Phase="Running", Reason="", readiness=true. Elapsed: 24.082082401s | |
| Jan 24 20:49:31.531: INFO: Pod "client-b-zt68p": Phase="Running", Reason="", readiness=true. Elapsed: 26.086632104s | |
| Jan 24 20:49:33.535: INFO: Pod "client-b-zt68p": Phase="Running", Reason="", readiness=true. Elapsed: 28.09070242s | |
| Jan 24 20:49:35.539: INFO: Pod "client-b-zt68p": Phase="Running", Reason="", readiness=true. Elapsed: 30.0952679s | |
| Jan 24 20:49:37.544: INFO: Pod "client-b-zt68p": Phase="Running", Reason="", readiness=true. Elapsed: 32.099315087s | |
| Jan 24 20:49:39.548: INFO: Pod "client-b-zt68p": Phase="Running", Reason="", readiness=true. Elapsed: 34.103506341s | |
| Jan 24 20:49:41.553: INFO: Pod "client-b-zt68p": Phase="Running", Reason="", readiness=true. Elapsed: 36.108881153s | |
| Jan 24 20:49:43.557: INFO: Pod "client-b-zt68p": Phase="Running", Reason="", readiness=true. Elapsed: 38.1131619s | |
| Jan 24 20:49:45.561: INFO: Pod "client-b-zt68p": Phase="Running", Reason="", readiness=true. Elapsed: 40.116975718s | |
| Jan 24 20:49:47.565: INFO: Pod "client-b-zt68p": Phase="Running", Reason="", readiness=true. Elapsed: 42.120702708s | |
| Jan 24 20:49:49.569: INFO: Pod "client-b-zt68p": Phase="Running", Reason="", readiness=true. Elapsed: 44.124612477s | |
| Jan 24 20:49:51.573: INFO: Pod "client-b-zt68p": Phase="Running", Reason="", readiness=true. Elapsed: 46.128786624s | |
| Jan 24 20:49:53.577: INFO: Pod "client-b-zt68p": Phase="Failed", Reason="", readiness=false. Elapsed: 48.133020274s | |
| STEP: Cleaning up the pod client-b-zt68p | |
| STEP: Creating client-a, not in server's namespace, which should be able to contact the server. | |
| STEP: Creating client pod client-a that should successfully connect to svc-server. | |
| Jan 24 20:49:53.609: INFO: Waiting for client-a-j6746 to complete. | |
| Jan 24 20:50:41.621: INFO: Waiting for client-a-j6746 to complete. | |
| Jan 24 20:50:41.621: INFO: Waiting up to 5m0s for pod "client-a-j6746" in namespace "network-policy-b-3230" to be "success or failure" | |
| Jan 24 20:50:41.626: INFO: Pod "client-a-j6746": Phase="Failed", Reason="", readiness=false. Elapsed: 5.191941ms | |
| Jan 24 20:50:41.630: FAIL: Error getting container logs: the server could not find the requested resource (get pods client-a-j6746) | |
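
The FAIL above comes from the framework trying to read the client pod's container logs after the pod object has already gone away. For reference, fetching pod logs through client-go looks roughly like the sketch below (the function name and package are mine); requesting logs for a deleted pod yields exactly the kind of "could not find the requested resource" error seen here.

    package e2esketch

    import (
    	"context"
    	"io"

    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // podLogs returns the logs of a pod's default container; if the pod has
    // already been deleted, the API server returns a not-found-style error
    // like the one in the log line above.
    func podLogs(cs *kubernetes.Clientset, ns, pod string) (string, error) {
    	req := cs.CoreV1().Pods(ns).GetLogs(pod, &corev1.PodLogOptions{})
    	rc, err := req.Stream(context.TODO())
    	if err != nil {
    		return "", err
    	}
    	defer rc.Close()
    	b, err := io.ReadAll(rc)
    	return string(b), err
    }
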
| STEP: Cleaning up the pod client-a-j6746 | |
| STEP: Cleaning up the policy. | |
| [AfterEach] NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:75 | |
| STEP: Cleaning up the server. | |
| STEP: Cleaning up the server's service. | |
| [AfterEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
| STEP: Collecting events from namespace "network-policy-75". | |
| STEP: Found 33 events. | |
| Jan 24 20:50:41.804: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-a-rz2z9: {default-scheduler } Scheduled: Successfully assigned network-policy-75/client-a-rz2z9 to workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:50:41.804: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-a-v8ln7: {default-scheduler } Scheduled: Successfully assigned network-policy-75/client-a-v8ln7 to workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:50:41.804: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-b-bzz68: {default-scheduler } Scheduled: Successfully assigned network-policy-75/client-b-bzz68 to workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:50:41.804: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-b-zt68p: {default-scheduler } Scheduled: Successfully assigned network-policy-75/client-b-zt68p to workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:50:41.804: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-80-z5b7x: {default-scheduler } Scheduled: Successfully assigned network-policy-75/client-can-connect-80-z5b7x to workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:50:41.804: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-81-d2sj4: {default-scheduler } Scheduled: Successfully assigned network-policy-75/client-can-connect-81-d2sj4 to workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:50:41.804: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for server-bh2zk: {default-scheduler } Scheduled: Successfully assigned network-policy-75/server-bh2zk to workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:50:41.805: INFO: At 2020-01-24 20:47:53 +0000 UTC - event for server-bh2zk: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.6" already present on machine | |
| Jan 24 20:50:41.805: INFO: At 2020-01-24 20:47:53 +0000 UTC - event for server-bh2zk: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Created: Created container server-container-80 | |
| Jan 24 20:50:41.805: INFO: At 2020-01-24 20:47:54 +0000 UTC - event for server-bh2zk: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Started: Started container server-container-81 | |
| Jan 24 20:50:41.805: INFO: At 2020-01-24 20:47:54 +0000 UTC - event for server-bh2zk: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.6" already present on machine | |
| Jan 24 20:50:41.805: INFO: At 2020-01-24 20:47:54 +0000 UTC - event for server-bh2zk: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Created: Created container server-container-81 | |
| Jan 24 20:50:41.805: INFO: At 2020-01-24 20:47:54 +0000 UTC - event for server-bh2zk: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Started: Started container server-container-80 | |
| Jan 24 20:50:41.805: INFO: At 2020-01-24 20:48:05 +0000 UTC - event for client-can-connect-80-z5b7x: {kubelet workload-cluster-4-md-0-5c7f78dbc8-dj56d} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine | |
| Jan 24 20:50:41.805: INFO: At 2020-01-24 20:48:06 +0000 UTC - event for client-can-connect-80-z5b7x: {kubelet workload-cluster-4-md-0-5c7f78dbc8-dj56d} Started: Started container client-can-connect-80-container | |
| Jan 24 20:50:41.805: INFO: At 2020-01-24 20:48:06 +0000 UTC - event for client-can-connect-80-z5b7x: {kubelet workload-cluster-4-md-0-5c7f78dbc8-dj56d} Created: Created container client-can-connect-80-container | |
| Jan 24 20:50:41.806: INFO: At 2020-01-24 20:48:07 +0000 UTC - event for client-can-connect-81-d2sj4: {kubelet workload-cluster-4-md-0-5c7f78dbc8-rnz88} Created: Created container client-can-connect-81-container | |
| Jan 24 20:50:41.806: INFO: At 2020-01-24 20:48:07 +0000 UTC - event for client-can-connect-81-d2sj4: {kubelet workload-cluster-4-md-0-5c7f78dbc8-rnz88} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine | |
| Jan 24 20:50:41.806: INFO: At 2020-01-24 20:48:08 +0000 UTC - event for client-can-connect-81-d2sj4: {kubelet workload-cluster-4-md-0-5c7f78dbc8-rnz88} Started: Started container client-can-connect-81-container | |
| Jan 24 20:50:41.806: INFO: At 2020-01-24 20:48:09 +0000 UTC - event for client-a-rz2z9: {kubelet workload-cluster-4-md-0-5c7f78dbc8-dj56d} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine | |
| Jan 24 20:50:41.806: INFO: At 2020-01-24 20:48:10 +0000 UTC - event for client-a-rz2z9: {kubelet workload-cluster-4-md-0-5c7f78dbc8-dj56d} Started: Started container client-a-container | |
| Jan 24 20:50:41.806: INFO: At 2020-01-24 20:48:10 +0000 UTC - event for client-a-rz2z9: {kubelet workload-cluster-4-md-0-5c7f78dbc8-dj56d} Created: Created container client-a-container | |
| Jan 24 20:50:41.806: INFO: At 2020-01-24 20:48:12 +0000 UTC - event for client-b-bzz68: {kubelet workload-cluster-4-md-0-5c7f78dbc8-rnz88} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine | |
| Jan 24 20:50:41.807: INFO: At 2020-01-24 20:48:12 +0000 UTC - event for client-b-bzz68: {kubelet workload-cluster-4-md-0-5c7f78dbc8-rnz88} Created: Created container client-b-container | |
| Jan 24 20:50:41.807: INFO: At 2020-01-24 20:48:12 +0000 UTC - event for client-b-bzz68: {kubelet workload-cluster-4-md-0-5c7f78dbc8-rnz88} Started: Started container client-b-container | |
| Jan 24 20:50:41.807: INFO: At 2020-01-24 20:48:18 +0000 UTC - event for client-a-v8ln7: {kubelet workload-cluster-4-md-0-5c7f78dbc8-vvrzt} Started: Started container client-a-container | |
| Jan 24 20:50:41.807: INFO: At 2020-01-24 20:48:18 +0000 UTC - event for client-a-v8ln7: {kubelet workload-cluster-4-md-0-5c7f78dbc8-vvrzt} Created: Created container client-a-container | |
| Jan 24 20:50:41.807: INFO: At 2020-01-24 20:48:18 +0000 UTC - event for client-a-v8ln7: {kubelet workload-cluster-4-md-0-5c7f78dbc8-vvrzt} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine | |
| Jan 24 20:50:41.807: INFO: At 2020-01-24 20:49:06 +0000 UTC - event for client-b-zt68p: {kubelet workload-cluster-4-md-0-5c7f78dbc8-vvrzt} Started: Started container client-b-container | |
| Jan 24 20:50:41.807: INFO: At 2020-01-24 20:49:06 +0000 UTC - event for client-b-zt68p: {kubelet workload-cluster-4-md-0-5c7f78dbc8-vvrzt} Created: Created container client-b-container | |
| Jan 24 20:50:41.807: INFO: At 2020-01-24 20:49:06 +0000 UTC - event for client-b-zt68p: {kubelet workload-cluster-4-md-0-5c7f78dbc8-vvrzt} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine | |
| Jan 24 20:50:41.807: INFO: At 2020-01-24 20:50:41 +0000 UTC - event for server-bh2zk: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Killing: Stopping container server-container-80 | |
| Jan 24 20:50:41.807: INFO: At 2020-01-24 20:50:41 +0000 UTC - event for server-bh2zk: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Killing: Stopping container server-container-81 | |
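
The event dump above is simply a namespaced list of core/v1 Events. A sketch of how such a dump can be produced with client-go follows; the format string approximates, rather than reproduces, the framework's output.

    package e2esketch

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // dumpEvents prints every event in a namespace, roughly in the
    // "At <time> - event for <object>: {<source>} <reason>: <message>"
    // shape used by the e2e framework above.
    func dumpEvents(cs *kubernetes.Clientset, ns string) error {
    	events, err := cs.CoreV1().Events(ns).List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	fmt.Printf("Found %d events.\n", len(events.Items))
    	for _, e := range events.Items {
    		fmt.Printf("At %v - event for %s: {%s} %s: %s\n",
    			e.FirstTimestamp, e.InvolvedObject.Name, e.Source.Component, e.Reason, e.Message)
    	}
    	return nil
    }
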
| Jan 24 20:50:41.821: INFO: POD NODE PHASE GRACE CONDITIONS | |
| Jan 24 20:50:41.821: INFO: server-bh2zk workload-cluster-4-md-0-5c7f78dbc8-tgjll Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 20:47:52 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 20:48:03 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 20:48:03 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 20:47:52 +0000 UTC }] | |
| Jan 24 20:50:41.821: INFO: | |
| Jan 24 20:50:41.870: INFO: | |
| Logging node info for node workload-cluster-4-controlplane-0 | |
| Jan 24 20:50:41.874: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-controlplane-0 /api/v1/nodes/workload-cluster-4-controlplane-0 82506459-dd0f-48e7-bd9d-75efde1111f6 40832 0 2020-01-24 17:18:10 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-controlplane-0 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-controlplane-0"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.28.186/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.210.0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://4204e6d6-9184-b271-f650-0dabd9f75516,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229202944 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124345344 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:50 +0000 UTC,LastTransitionTime:2020-01-24 20:20:50 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:50:05 +0000 UTC,LastTransitionTime:2020-01-24 17:18:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:50:05 +0000 UTC,LastTransitionTime:2020-01-24 17:18:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:50:05 +0000 UTC,LastTransitionTime:2020-01-24 17:18:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:50:05 +0000 UTC,LastTransitionTime:2020-01-24 17:44:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-controlplane-0,},NodeAddress{Type:ExternalIP,Address:10.193.28.186,},NodeAddress{Type:InternalIP,Address:10.193.28.186,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:80b9ac3a81a84f4e8f29dac5e41ffd1f,SystemUUID:D6E60442-8491-71B2-F650-0DABD9F75516,BootID:0411b150-3c1b-4216-a7bc-d4a359e6753d,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/syncer@sha256:fc80ec77a2ab4b58ddfa259a938f6d741933566011d56e5ffcc8680cc83538fe gcr.io/cloud-provider-vsphere/csi/release/syncer:v1.0.1],SizeBytes:38454608,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner:v1.2.1],SizeBytes:18924297,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/cpi/release/manager@sha256:64de5c7f10e55703142383fade40886091528ca505f00c98d57e27f10f04fc03 gcr.io/cloud-provider-vsphere/cpi/release/manager:v1.1.0],SizeBytes:16201394,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher:v1.1.1],SizeBytes:15526843,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
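
Each "Node Info" entry above is a raw dump of a core/v1 Node object; the conditions buried inside it (NetworkUnavailable, MemoryPressure, DiskPressure, PIDPressure, Ready) can be pulled out far more readably with a small client-go loop. A sketch, with the function name my own:

    package e2esketch

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // dumpNodeConditions lists every node and prints one line per condition,
    // a compact alternative to the full &Node{...} dumps in this log.
    func dumpNodeConditions(cs *kubernetes.Clientset) error {
    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, n := range nodes.Items {
    		for _, c := range n.Status.Conditions {
    			fmt.Printf("%s\t%s=%s\t%s: %s\n", n.Name, c.Type, c.Status, c.Reason, c.Message)
    		}
    	}
    	return nil
    }
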
| Jan 24 20:50:41.874: INFO: | |
| Logging kubelet events for node workload-cluster-4-controlplane-0 | |
| Jan 24 20:50:41.880: INFO: | |
| Logging pods the kubelet thinks are on node workload-cluster-4-controlplane-0 | |
| Jan 24 20:50:41.899: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-t9xfs started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:50:41.899: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:50:41.899: INFO: Container systemd-logs ready: true, restart count 0 | |
| Jan 24 20:50:41.899: INFO: etcd-workload-cluster-4-controlplane-0 started at 2020-01-24 17:17:56 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:50:41.899: INFO: Container etcd ready: true, restart count 0 | |
| Jan 24 20:50:41.899: INFO: kube-apiserver-workload-cluster-4-controlplane-0 started at 2020-01-24 17:17:56 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:50:41.899: INFO: Container kube-apiserver ready: true, restart count 0 | |
| Jan 24 20:50:41.899: INFO: kube-scheduler-workload-cluster-4-controlplane-0 started at 2020-01-24 17:17:56 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:50:41.899: INFO: Container kube-scheduler ready: true, restart count 1 | |
| Jan 24 20:50:41.899: INFO: kube-proxy-ng9fc started at 2020-01-24 17:19:39 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:50:41.899: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:50:41.899: INFO: vsphere-csi-node-bvs7m started at 2020-01-24 17:44:44 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:50:41.899: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:50:41.899: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:50:41.899: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:50:41.899: INFO: calico-node-dtwvg started at 2020-01-24 20:19:59 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:50:41.900: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:50:41.900: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:50:41.900: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:50:41.900: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:50:41.900: INFO: kube-controller-manager-workload-cluster-4-controlplane-0 started at 2020-01-24 17:17:56 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:50:41.900: INFO: Container kube-controller-manager ready: true, restart count 2 | |
| Jan 24 20:50:41.900: INFO: vsphere-cloud-controller-manager-xgbwj started at 2020-01-24 17:19:42 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:50:41.900: INFO: Container vsphere-cloud-controller-manager ready: true, restart count 0 | |
| Jan 24 20:50:41.900: INFO: vsphere-csi-controller-0 started at 2020-01-24 17:44:46 +0000 UTC (0+5 container statuses recorded) | |
| Jan 24 20:50:41.900: INFO: Container csi-attacher ready: true, restart count 0 | |
| Jan 24 20:50:41.900: INFO: Container csi-provisioner ready: true, restart count 0 | |
| Jan 24 20:50:41.900: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:50:41.900: INFO: Container vsphere-csi-controller ready: true, restart count 0 | |
| Jan 24 20:50:41.900: INFO: Container vsphere-syncer ready: true, restart count 0 | |
| W0124 20:50:41.908175 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:50:42.110: INFO: | |
| Latency metrics for node workload-cluster-4-controlplane-0 | |
| Jan 24 20:50:42.110: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:50:42.116: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-dj56d /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-dj56d e3b9736e-3d53-4225-9f32-9b5ed43a351d 40781 0 2020-01-24 17:31:41 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-dj56d kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-dj56d"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.22.156/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.222.0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://42045096-e00b-ae55-1d85-007f6b8efaee,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229211136 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124353536 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:50 +0000 UTC,LastTransitionTime:2020-01-24 20:20:50 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:49:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:41 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:49:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:41 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:49:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:41 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:49:51 +0000 UTC,LastTransitionTime:2020-01-24 17:44:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-dj56d,},NodeAddress{Type:ExternalIP,Address:10.193.22.156,},NodeAddress{Type:InternalIP,Address:10.193.22.156,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b52db3bea2844f019e98985de63db412,SystemUUID:96500442-0BE0-55AE-1D85-007F6B8EFAEE,BootID:4ec1b10c-fc30-41e5-9d16-834bd0f5cf13,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[docker.io/jayunit100/kube-controllers@sha256:80d3bff453f163c62d0dadf44d5056053144eceaf0d05fe8c90b861aa4d5d602 docker.io/jayunit100/kube-controllers:tkg2],SizeBytes:21160877,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:50:42.117: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:50:42.124: INFO: | |
| Logging pods the kubelet thinks are on node workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:50:42.147: INFO: coredns-5644d7b6d9-m4lts started at 2020-01-24 17:44:44 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:50:42.147: INFO: Container coredns ready: true, restart count 0 | |
| Jan 24 20:50:42.147: INFO: coredns-5644d7b6d9-ph9m8 started at 2020-01-24 17:44:44 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:50:42.147: INFO: Container coredns ready: true, restart count 0 | |
| Jan 24 20:50:42.147: INFO: kube-proxy-9cjbv started at 2020-01-24 17:31:41 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:50:42.147: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:50:42.147: INFO: calico-node-8t722 started at 2020-01-24 20:19:58 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:50:42.147: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:50:42.147: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:50:42.147: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:50:42.147: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:50:42.147: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-25557 started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:50:42.147: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:50:42.147: INFO: Container systemd-logs ready: true, restart count 0 | |
| Jan 24 20:50:42.147: INFO: vsphere-csi-node-tkb9r started at 2020-01-24 17:44:43 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:50:42.147: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:50:42.147: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:50:42.147: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| W0124 20:50:42.154492 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:50:42.268: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:50:42.268: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-l8q2f | |
| Jan 24 20:50:42.272: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-l8q2f /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-l8q2f 3be5c565-a115-4d45-a1be-f70db9c1b166 40782 0 2020-01-24 17:31:43 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-l8q2f kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-l8q2f"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.1.57/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.184.192 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://4204aa91-067b-70ca-7b40-e384413568c2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229202944 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124345344 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:23 +0000 UTC,LastTransitionTime:2020-01-24 20:20:23 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:49:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:49:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:49:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:49:51 +0000 UTC,LastTransitionTime:2020-01-24 17:44:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-l8q2f,},NodeAddress{Type:ExternalIP,Address:10.193.1.57,},NodeAddress{Type:InternalIP,Address:10.193.1.57,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:45e1a30e9fcd470795575df43a4210d5,SystemUUID:91AA0442-7B06-CA70-7B40-E384413568C2,BootID:3aae672c-d64c-4549-8c6f-2c6720e8ef80,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:0ae49fdc22b3d81e85b98ea4587628bb78f49eaf5c14cdb47a13d8918d62cc00 gcr.io/google-containers/conformance:v1.16.3],SizeBytes:195864937,},ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:50:42.273: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-l8q2f | |
| Jan 24 20:50:42.279: INFO: | |
| Logging pods the kubelet thinks are on node workload-cluster-4-md-0-5c7f78dbc8-l8q2f | |
| Jan 24 20:50:42.297: INFO: kube-proxy-pw7c7 started at 2020-01-24 17:31:43 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:50:42.297: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:50:42.297: INFO: calico-node-rqffr started at 2020-01-24 20:19:59 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:50:42.297: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:50:42.297: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:50:42.297: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:50:42.297: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:50:42.297: INFO: vsphere-csi-node-bml6x started at 2020-01-24 17:44:44 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:50:42.297: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:50:42.297: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:50:42.297: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:50:42.297: INFO: sonobuoy-e2e-job-60496bb95c8b4e15 started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:50:42.297: INFO: Container e2e ready: true, restart count 0 | |
| Jan 24 20:50:42.297: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:50:42.297: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-pvvzm started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:50:42.297: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:50:42.297: INFO: Container systemd-logs ready: true, restart count 0 | |
| W0124 20:50:42.302766 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:50:42.392: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-l8q2f | |
| Jan 24 20:50:42.392: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:50:42.396: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-rnz88 /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-rnz88 6a50990d-c3aa-408e-89a9-b96a59028428 40854 0 2020-01-24 17:31:34 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-rnz88 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-rnz88"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.19.181/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.158.1 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://42041b55-5b6a-2e5f-e33f-ad3ffa7f7b6e,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229211136 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124353536 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:37 +0000 UTC,LastTransitionTime:2020-01-24 20:20:37 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:50:14 +0000 UTC,LastTransitionTime:2020-01-24 17:31:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:50:14 +0000 UTC,LastTransitionTime:2020-01-24 17:31:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:50:14 +0000 UTC,LastTransitionTime:2020-01-24 17:31:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:50:14 +0000 UTC,LastTransitionTime:2020-01-24 17:44:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-rnz88,},NodeAddress{Type:ExternalIP,Address:10.193.19.181,},NodeAddress{Type:InternalIP,Address:10.193.19.181,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:89e27edae47c4afabb7c067d037286ec,SystemUUID:551B0442-6A5B-5F2E-E33F-AD3FFA7F7B6E,BootID:d2a8e20a-f6ad-4b68-b732-e0d7bdbcc492,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:0ae49fdc22b3d81e85b98ea4587628bb78f49eaf5c14cdb47a13d8918d62cc00 gcr.io/google-containers/conformance:v1.16.3],SizeBytes:195864937,},ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:4a7190a4731cf3340c39ab33abf845baf6c81d911c965a879fffc1552e1a1938 docker.io/calico/kube-controllers:v3.10.3],SizeBytes:21175514,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:50:42.396: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:50:42.402: INFO: | |
| Logging pods the kubelet thinks are on node workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:50:42.423: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-zjw2s started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:50:42.423: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:50:42.423: INFO: Container systemd-logs ready: true, restart count 0 | |
| Jan 24 20:50:42.423: INFO: kube-proxy-hwjc4 started at 2020-01-24 17:31:34 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:50:42.423: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:50:42.423: INFO: calico-node-5g8mr started at 2020-01-24 20:19:58 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:50:42.423: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:50:42.423: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:50:42.423: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:50:42.423: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:50:42.423: INFO: vsphere-csi-node-6nzwf started at 2020-01-24 17:44:44 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:50:42.423: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:50:42.423: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:50:42.423: INFO: Container vsphere-csi-node ready: false, restart count 13 | |
| Jan 24 20:50:42.423: INFO: calico-kube-controllers-7489ff5b7c-sq5gk started at 2020-01-24 20:20:00 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:50:42.423: INFO: Container calico-kube-controllers ready: true, restart count 0 | |
| W0124 20:50:42.429450 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:50:42.529: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:50:42.530: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:50:42.534: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-tgjll /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-tgjll efb8ee46-7fce-4349-92f2-46fa03f02d30 40780 0 2020-01-24 17:31:37 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-tgjll kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-tgjll"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.25.9/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.225.128 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://4204f234-cdde-565a-8623-6c3c5457e3cc,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229202944 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124345344 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:26 +0000 UTC,LastTransitionTime:2020-01-24 20:20:26 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:49:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:49:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:49:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:49:51 +0000 UTC,LastTransitionTime:2020-01-24 17:44:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-tgjll,},NodeAddress{Type:ExternalIP,Address:10.193.25.9,},NodeAddress{Type:InternalIP,Address:10.193.25.9,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bb2c77d8141443fd8503de52dbae52e8,SystemUUID:34F20442-DECD-5A56-8623-6C3C5457E3CC,BootID:849ca1bb-d08e-417e-bf12-485a70ffc44a,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:0ae49fdc22b3d81e85b98ea4587628bb78f49eaf5c14cdb47a13d8918d62cc00 gcr.io/google-containers/conformance:v1.16.3],SizeBytes:195864937,},ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:50:42.535: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:50:42.544: INFO: | |
| Logging pods the kubelet thinks are on node workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:50:42.564: INFO: kube-proxy-wp7tn started at 2020-01-24 17:31:37 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:50:42.564: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:50:42.564: INFO: calico-node-wvctw started at 2020-01-24 20:19:58 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:50:42.564: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:50:42.564: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:50:42.564: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:50:42.564: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:50:42.564: INFO: vsphere-csi-node-rqv42 started at 2020-01-24 17:44:49 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:50:42.564: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:50:42.564: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:50:42.564: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:50:42.564: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-prfkd started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:50:42.564: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:50:42.564: INFO: Container systemd-logs ready: true, restart count 0 | |
| Jan 24 20:50:42.564: INFO: server-bh2zk started at 2020-01-24 20:47:52 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:50:42.564: INFO: Container server-container-80 ready: false, restart count 0 | |
| Jan 24 20:50:42.564: INFO: Container server-container-81 ready: false, restart count 0 | |
| W0124 20:50:42.571391 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:50:42.667: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:50:42.667: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:50:42.673: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-vvrzt /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-vvrzt 200933f4-8d1c-469c-b168-43a6ccbf5d04 40778 0 2020-01-24 17:31:43 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-vvrzt kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-vvrzt"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.7.86/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.163.64 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.5.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://42048f3c-cde1-2850-a851-6c4d757659cc,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.5.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229202944 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124345344 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:23 +0000 UTC,LastTransitionTime:2020-01-24 20:20:23 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:49:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:49:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:49:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:49:51 +0000 UTC,LastTransitionTime:2020-01-24 17:44:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-vvrzt,},NodeAddress{Type:ExternalIP,Address:10.193.7.86,},NodeAddress{Type:InternalIP,Address:10.193.7.86,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5c2ea802c8014c7aa88cb9bbbea1711b,SystemUUID:3C8F0442-E1CD-5028-A851-6C4D757659CC,BootID:9e70f010-f55d-4f1e-9e44-645f0afe4c09,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 
k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:50:42.673: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:50:42.681: INFO: | |
| Logging pods the kubelet thinks are on node workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:50:42.699: INFO: calico-node-zpwz4 started at 2020-01-24 20:19:59 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:50:42.699: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:50:42.699: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:50:42.699: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:50:42.699: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:50:42.699: INFO: vsphere-csi-node-ttr82 started at 2020-01-24 17:44:44 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:50:42.699: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:50:42.699: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:50:42.699: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:50:42.699: INFO: sonobuoy started at 2020-01-24 20:21:31 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:50:42.699: INFO: Container kube-sonobuoy ready: true, restart count 0 | |
| Jan 24 20:50:42.699: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-9jn4t started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:50:42.699: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:50:42.699: INFO: Container systemd-logs ready: true, restart count 0 | |
| Jan 24 20:50:42.699: INFO: kube-proxy-5jvhn started at 2020-01-24 17:31:43 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:50:42.699: INFO: Container kube-proxy ready: true, restart count 0 | |
| W0124 20:50:42.704740 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:50:42.801: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:50:42.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
| STEP: Destroying namespace "network-policy-75" for this suite. | |
| Jan 24 20:50:48.824: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered | |
| Jan 24 20:50:48.911: INFO: namespace network-policy-75 deletion completed in 6.104292014s | |
| STEP: Destroying namespace "network-policy-b-3230" for this suite. | |
| Jan 24 20:50:54.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered | |
| Jan 24 20:50:55.016: INFO: namespace network-policy-b-3230 deletion completed in 6.105099688s | |
| • Failure [182.285 seconds] | |
| [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 | |
| NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:56 | |
| should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy] [It] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:383 | |
| Jan 24 20:50:41.630: Error getting container logs: the server could not find the requested resource (get pods client-a-j6746) | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:1421 | |
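The failing spec above exercises a single `from` peer that combines a namespaceSelector with a podSelector, which the NetworkPolicy API treats as a logical AND: only pods matching the podSelector, inside namespaces matching the namespaceSelector, may reach the server. Note that the error actually reported is about retrieving the client pod's container logs (get pods client-a-j6746), not the connectivity probe itself. A minimal sketch of the policy shape under test, with hypothetical label names and values:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ns-b-client-b        # hypothetical name
spec:
  podSelector:
    matchLabels:
      pod-name: server             # hypothetical server label
  ingress:
  - from:
    # One peer carrying BOTH selectors: the source pod must match the
    # podSelector AND live in a namespace matching the namespaceSelector.
    - namespaceSelector:
        matchLabels:
          ns-name: ns-b            # hypothetical namespace label
      podSelector:
        matchLabels:
          pod-name: client-b       # hypothetical client label
```

Splitting the two selectors into two separate peers under the same `from` changes the semantics to OR, which is the variant exercised by a later spec in this run.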
| ------------------------------ | |
| SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS | |
| ------------------------------ | |
| [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client | |
| should enforce policies to check ingress and egress policies can be controlled independently based on PodSelector [Feature:NetworkPolicy] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:1270 | |
| [BeforeEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
| STEP: Creating a kubernetes client | |
| Jan 24 20:50:55.032: INFO: >>> kubeConfig: /tmp/kubeconfig-418650170 | |
| STEP: Building a namespace api object, basename network-policy | |
| STEP: Waiting for a default service account to be provisioned in namespace | |
| [BeforeEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:51 | |
| [BeforeEach] NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:57 | |
| STEP: Creating a simple server that serves on ports 80 and 81. | |
| STEP: Creating a server pod server in namespace network-policy-9326 | |
| Jan 24 20:50:55.095: INFO: Created pod server-qgk8t | |
| STEP: Creating a service svc-server for pod server in namespace network-policy-9326 | |
| Jan 24 20:50:55.133: INFO: Created service svc-server | |
| STEP: Waiting for pod ready | |
| STEP: Testing pods can connect to both ports when no policy is present. | |
| STEP: Creating client pod client-can-connect-80 that should successfully connect to svc-server. | |
| Jan 24 20:51:07.161: INFO: Waiting for client-can-connect-80-hz6sk to complete. | |
| Jan 24 20:51:11.178: INFO: Waiting for client-can-connect-80-hz6sk to complete. | |
| Jan 24 20:51:11.178: INFO: Waiting up to 5m0s for pod "client-can-connect-80-hz6sk" in namespace "network-policy-9326" to be "success or failure" | |
| Jan 24 20:51:11.181: INFO: Pod "client-can-connect-80-hz6sk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 3.581083ms | |
| STEP: Saw pod success | |
| Jan 24 20:51:11.181: INFO: Pod "client-can-connect-80-hz6sk" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-can-connect-80-hz6sk | |
| STEP: Creating client pod client-can-connect-81 that should successfully connect to svc-server. | |
| Jan 24 20:51:11.200: INFO: Waiting for client-can-connect-81-vrgrh to complete. | |
| Jan 24 20:51:13.213: INFO: Waiting for client-can-connect-81-vrgrh to complete. | |
| Jan 24 20:51:13.213: INFO: Waiting up to 5m0s for pod "client-can-connect-81-vrgrh" in namespace "network-policy-9326" to be "success or failure" | |
| Jan 24 20:51:13.217: INFO: Pod "client-can-connect-81-vrgrh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 3.291444ms | |
| STEP: Saw pod success | |
| Jan 24 20:51:13.217: INFO: Pod "client-can-connect-81-vrgrh" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-can-connect-81-vrgrh | |
| [It] should enforce policies to check ingress and egress policies can be controlled independently based on PodSelector [Feature:NetworkPolicy] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:1270 | |
| STEP: Creating a server pod pod-a in namespace network-policy-9326 | |
| Jan 24 20:51:13.242: INFO: Created pod pod-a-757q4 | |
| STEP: Creating a service svc-pod-a for pod pod-a in namespace network-policy-9326 | |
| Jan 24 20:51:13.273: INFO: Created service svc-pod-a | |
| STEP: Waiting for pod-a to be ready | |
| STEP: Creating client pod-b which should be able to contact the server pod-a. | |
| STEP: Creating client pod pod-b that should successfully connect to svc-pod-a. | |
| Jan 24 20:51:21.301: INFO: Waiting for pod-b-rpq8s to complete. | |
| Jan 24 20:51:25.310: INFO: Waiting for pod-b-rpq8s to complete. | |
| Jan 24 20:51:25.310: INFO: Waiting up to 5m0s for pod "pod-b-rpq8s" in namespace "network-policy-9326" to be "success or failure" | |
| Jan 24 20:51:25.314: INFO: Pod "pod-b-rpq8s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 3.266298ms | |
| STEP: Saw pod success | |
| Jan 24 20:51:25.314: INFO: Pod "pod-b-rpq8s" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod pod-b-rpq8s | |
| STEP: Cleaning up the server. | |
| STEP: Cleaning up the server's service. | |
| STEP: Creating a server pod pod-b in namespace network-policy-9326 | |
| Jan 24 20:51:25.413: INFO: Created pod pod-b-q96gq | |
| STEP: Creating a service svc-pod-b for pod pod-b in namespace network-policy-9326 | |
| Jan 24 20:51:25.463: INFO: Created service svc-pod-b | |
| STEP: Waiting for pod-b to be ready | |
| STEP: Creating client pod-a which should be able to contact the server pod-b. | |
| STEP: Creating client pod pod-a that should successfully connect to svc-pod-b. | |
| Jan 24 20:51:37.484: INFO: Waiting for pod-a-w5pqb to complete. | |
| Jan 24 20:51:39.494: INFO: Waiting for pod-a-w5pqb to complete. | |
| Jan 24 20:51:39.494: INFO: Waiting up to 5m0s for pod "pod-a-w5pqb" in namespace "network-policy-9326" to be "success or failure" | |
| Jan 24 20:51:39.498: INFO: Pod "pod-a-w5pqb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 3.895747ms | |
| STEP: Saw pod success | |
| Jan 24 20:51:39.498: INFO: Pod "pod-a-w5pqb" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod pod-a-w5pqb | |
| STEP: Creating a network policy for pod-a which allows Egress traffic to pod-b. | |
| STEP: Creating a network policy for pod-a that denies traffic from pod-b. | |
| STEP: Creating client pod-a which should be able to contact the server pod-b. | |
| STEP: Creating client pod pod-a that should successfully connect to svc-pod-b. | |
| Jan 24 20:51:39.541: INFO: Waiting for pod-a-q4xx6 to complete. | |
| Jan 24 20:51:41.570: INFO: Waiting for pod-a-q4xx6 to complete. | |
| Jan 24 20:51:41.570: INFO: Waiting up to 5m0s for pod "pod-a-q4xx6" in namespace "network-policy-9326" to be "success or failure" | |
| Jan 24 20:51:41.579: INFO: Pod "pod-a-q4xx6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.782367ms | |
| STEP: Saw pod success | |
| Jan 24 20:51:41.580: INFO: Pod "pod-a-q4xx6" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod pod-a-q4xx6 | |
| STEP: Cleaning up the server. | |
| STEP: Cleaning up the server's service. | |
| STEP: Creating a server pod pod-a in namespace network-policy-9326 | |
| Jan 24 20:51:41.687: INFO: Created pod pod-a-z6kbr | |
| STEP: Creating a service svc-pod-a for pod pod-a in namespace network-policy-9326 | |
| Jan 24 20:51:41.731: INFO: Created service svc-pod-a | |
| STEP: Waiting for pod-a to be ready | |
| STEP: Creating client pod-b which should be able to contact the server pod-a. | |
| STEP: Creating client pod pod-b that should not be able to connect to svc-pod-a. | |
| Jan 24 20:51:51.792: INFO: Waiting for pod-b-ln295 to complete. | |
| Jan 24 20:51:51.792: INFO: Waiting up to 5m0s for pod "pod-b-ln295" in namespace "network-policy-9326" to be "success or failure" | |
| Jan 24 20:51:51.843: INFO: Pod "pod-b-ln295": Phase="Pending", Reason="", readiness=false. Elapsed: 50.814562ms | |
| Jan 24 20:51:53.847: INFO: Pod "pod-b-ln295": Phase="Running", Reason="", readiness=true. Elapsed: 2.054769355s | |
| Jan 24 20:51:55.851: INFO: Pod "pod-b-ln295": Phase="Running", Reason="", readiness=true. Elapsed: 4.058660234s | |
| Jan 24 20:51:57.855: INFO: Pod "pod-b-ln295": Phase="Running", Reason="", readiness=true. Elapsed: 6.063385308s | |
| Jan 24 20:51:59.860: INFO: Pod "pod-b-ln295": Phase="Running", Reason="", readiness=true. Elapsed: 8.067673378s | |
| Jan 24 20:52:01.865: INFO: Pod "pod-b-ln295": Phase="Running", Reason="", readiness=true. Elapsed: 10.072850362s | |
| Jan 24 20:52:03.871: INFO: Pod "pod-b-ln295": Phase="Running", Reason="", readiness=true. Elapsed: 12.078962705s | |
| Jan 24 20:52:05.876: INFO: Pod "pod-b-ln295": Phase="Running", Reason="", readiness=true. Elapsed: 14.083847891s | |
| Jan 24 20:52:07.880: INFO: Pod "pod-b-ln295": Phase="Running", Reason="", readiness=true. Elapsed: 16.087870725s | |
| Jan 24 20:52:09.888: INFO: Pod "pod-b-ln295": Phase="Running", Reason="", readiness=true. Elapsed: 18.096432658s | |
| Jan 24 20:52:11.893: INFO: Pod "pod-b-ln295": Phase="Running", Reason="", readiness=true. Elapsed: 20.101100961s | |
| Jan 24 20:52:13.897: INFO: Pod "pod-b-ln295": Phase="Running", Reason="", readiness=true. Elapsed: 22.105037663s | |
| Jan 24 20:52:15.901: INFO: Pod "pod-b-ln295": Phase="Running", Reason="", readiness=true. Elapsed: 24.109118041s | |
| Jan 24 20:52:17.905: INFO: Pod "pod-b-ln295": Phase="Running", Reason="", readiness=true. Elapsed: 26.112964303s | |
| Jan 24 20:52:19.932: INFO: Pod "pod-b-ln295": Phase="Running", Reason="", readiness=true. Elapsed: 28.139545946s | |
| Jan 24 20:52:21.936: INFO: Pod "pod-b-ln295": Phase="Running", Reason="", readiness=true. Elapsed: 30.143587338s | |
| Jan 24 20:52:23.940: INFO: Pod "pod-b-ln295": Phase="Running", Reason="", readiness=true. Elapsed: 32.147697518s | |
| Jan 24 20:52:25.944: INFO: Pod "pod-b-ln295": Phase="Running", Reason="", readiness=true. Elapsed: 34.151826637s | |
| Jan 24 20:52:27.948: INFO: Pod "pod-b-ln295": Phase="Running", Reason="", readiness=true. Elapsed: 36.155468161s | |
| Jan 24 20:52:29.952: INFO: Pod "pod-b-ln295": Phase="Running", Reason="", readiness=true. Elapsed: 38.160305358s | |
| Jan 24 20:52:31.957: INFO: Pod "pod-b-ln295": Phase="Running", Reason="", readiness=true. Elapsed: 40.164992581s | |
| Jan 24 20:52:33.961: INFO: Pod "pod-b-ln295": Phase="Running", Reason="", readiness=true. Elapsed: 42.169161606s | |
| Jan 24 20:52:35.965: INFO: Pod "pod-b-ln295": Phase="Running", Reason="", readiness=true. Elapsed: 44.172690337s | |
| Jan 24 20:52:37.969: INFO: Pod "pod-b-ln295": Phase="Running", Reason="", readiness=true. Elapsed: 46.176595917s | |
| Jan 24 20:52:39.974: INFO: Pod "pod-b-ln295": Phase="Failed", Reason="", readiness=false. Elapsed: 48.181760538s | |
| STEP: Cleaning up the pod pod-b-ln295 | |
| STEP: Cleaning up the server. | |
| STEP: Cleaning up the server's service. | |
| STEP: Cleaning up the policy. | |
| STEP: Cleaning up the policy. | |
| [AfterEach] NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:75 | |
| STEP: Cleaning up the server. | |
| STEP: Cleaning up the server's service. | |
| [AfterEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
| Jan 24 20:52:40.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
| STEP: Destroying namespace "network-policy-9326" for this suite. | |
| Jan 24 20:52:46.211: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered | |
| Jan 24 20:52:46.307: INFO: namespace network-policy-9326 deletion completed in 6.130220366s | |
| • [SLOW TEST:111.276 seconds] | |
| [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 | |
| NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:56 | |
| should enforce policies to check ingress and egress policies can be controlled independently based on PodSelector [Feature:NetworkPolicy] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:1270 | |
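This spec passes: the steps above attach two policies to pod-a, one allowing egress to pod-b and one denying ingress, which is why pod-a can still reach pod-b while pod-b's final connection attempt to pod-a ends in Phase Failed as expected. A sketch of a two-policy pair that produces the observed behavior, with hypothetical names and labels (not the test's exact manifests):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-pod-a-egress-to-pod-b   # hypothetical name
spec:
  podSelector:
    matchLabels:
      pod-name: pod-a                 # hypothetical labels throughout
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          pod-name: pod-b
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-ingress-to-pod-a         # hypothetical name
spec:
  podSelector:
    matchLabels:
      pod-name: pod-a
  policyTypes:
  - Ingress
  # No ingress rules: all ingress to pod-a is denied, while pod-a's
  # egress to pod-b stays allowed by the policy above.
```

Because ingress and egress are evaluated independently, the deny on the ingress side never interferes with the egress allow, which is exactly the property this spec verifies.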
| ------------------------------ | |
| SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS | |
| ------------------------------ | |
| [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client | |
| should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:290 | |
| [BeforeEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
| STEP: Creating a kubernetes client | |
| Jan 24 20:52:46.327: INFO: >>> kubeConfig: /tmp/kubeconfig-418650170 | |
| STEP: Building a namespace api object, basename network-policy | |
| STEP: Waiting for a default service account to be provisioned in namespace | |
| [BeforeEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:51 | |
| [BeforeEach] NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:57 | |
| STEP: Creating a simple server that serves on ports 80 and 81. | |
| STEP: Creating a server pod server in namespace network-policy-8451 | |
| Jan 24 20:52:46.373: INFO: Created pod server-dzn76 | |
| STEP: Creating a service svc-server for pod server in namespace network-policy-8451 | |
| Jan 24 20:52:46.403: INFO: Created service svc-server | |
| STEP: Waiting for pod ready | |
| STEP: Testing pods can connect to both ports when no policy is present. | |
| STEP: Creating client pod client-can-connect-80 that should successfully connect to svc-server. | |
| Jan 24 20:52:52.422: INFO: Waiting for client-can-connect-80-x8wg8 to complete. | |
| Jan 24 20:52:56.437: INFO: Waiting for client-can-connect-80-x8wg8 to complete. | |
| Jan 24 20:52:56.437: INFO: Waiting up to 5m0s for pod "client-can-connect-80-x8wg8" in namespace "network-policy-8451" to be "success or failure" | |
| Jan 24 20:52:56.442: INFO: Pod "client-can-connect-80-x8wg8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.269965ms | |
| STEP: Saw pod success | |
| Jan 24 20:52:56.442: INFO: Pod "client-can-connect-80-x8wg8" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-can-connect-80-x8wg8 | |
| STEP: Creating client pod client-can-connect-81 that should successfully connect to svc-server. | |
| Jan 24 20:52:56.473: INFO: Waiting for client-can-connect-81-998sv to complete. | |
| Jan 24 20:52:58.484: INFO: Waiting for client-can-connect-81-998sv to complete. | |
| Jan 24 20:52:58.484: INFO: Waiting up to 5m0s for pod "client-can-connect-81-998sv" in namespace "network-policy-8451" to be "success or failure" | |
| Jan 24 20:52:58.488: INFO: Pod "client-can-connect-81-998sv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024959ms | |
| STEP: Saw pod success | |
| Jan 24 20:52:58.488: INFO: Pod "client-can-connect-81-998sv" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-can-connect-81-998sv | |
| [It] should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:290 | |
| STEP: Creating a network policy for the server which allows traffic from client-b or namespace-b. | |
| STEP: Creating client pod client-a that should successfully connect to svc-server. | |
| Jan 24 20:52:58.574: INFO: Waiting for client-a-p6df2 to complete. | |
| Jan 24 20:53:46.584: INFO: Waiting for client-a-p6df2 to complete. | |
| Jan 24 20:53:46.584: INFO: Waiting up to 5m0s for pod "client-a-p6df2" in namespace "network-policy-b-3051" to be "success or failure" | |
| Jan 24 20:53:46.587: INFO: Pod "client-a-p6df2": Phase="Failed", Reason="", readiness=false. Elapsed: 3.023164ms | |
| Jan 24 20:53:46.591: FAIL: Error getting container logs: the server could not find the requested resource (get pods client-a-p6df2) | |
| STEP: Cleaning up the pod client-a-p6df2 | |
| STEP: Cleaning up the policy. | |
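This spec is the OR counterpart of the AND case earlier in the run: two separate peers are listed under one `from`, so traffic is admitted if the source matches either the podSelector or the namespaceSelector. client-a is created in the second namespace (network-policy-b-3051), so it is expected to connect via the namespaceSelector branch; instead the pod ends in Phase Failed, and the follow-up attempt to read its container logs errors out, which is the failure actually reported above. A sketch of the OR-shaped policy, with hypothetical label names and values:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-client-b-or-ns-b       # hypothetical name
spec:
  podSelector:
    matchLabels:
      pod-name: server               # hypothetical labels throughout
  ingress:
  - from:
    # Two peers under one `from` entry: a source is allowed if it
    # matches EITHER peer (logical OR), unlike the single combined
    # peer used in the AND case.
    - podSelector:
        matchLabels:
          pod-name: client-b
    - namespaceSelector:
        matchLabels:
          ns-name: ns-b
```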
| [AfterEach] NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:75 | |
| STEP: Cleaning up the server. | |
| STEP: Cleaning up the server's service. | |
| [AfterEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
| STEP: Collecting events from namespace "network-policy-8451". | |
| STEP: Found 17 events. | |
| Jan 24 20:53:46.687: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-80-x8wg8: {default-scheduler } Scheduled: Successfully assigned network-policy-8451/client-can-connect-80-x8wg8 to workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:53:46.687: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-81-998sv: {default-scheduler } Scheduled: Successfully assigned network-policy-8451/client-can-connect-81-998sv to workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:53:46.687: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for server-dzn76: {default-scheduler } Scheduled: Successfully assigned network-policy-8451/server-dzn76 to workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:53:46.688: INFO: At 2020-01-24 20:52:47 +0000 UTC - event for server-dzn76: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Started: Started container server-container-80 | |
| Jan 24 20:53:46.688: INFO: At 2020-01-24 20:52:47 +0000 UTC - event for server-dzn76: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Created: Created container server-container-80 | |
| Jan 24 20:53:46.688: INFO: At 2020-01-24 20:52:47 +0000 UTC - event for server-dzn76: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Started: Started container server-container-81 | |
| Jan 24 20:53:46.688: INFO: At 2020-01-24 20:52:47 +0000 UTC - event for server-dzn76: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Created: Created container server-container-81 | |
| Jan 24 20:53:46.688: INFO: At 2020-01-24 20:52:47 +0000 UTC - event for server-dzn76: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.6" already present on machine | |
| Jan 24 20:53:46.688: INFO: At 2020-01-24 20:52:47 +0000 UTC - event for server-dzn76: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.6" already present on machine | |
| Jan 24 20:53:46.688: INFO: At 2020-01-24 20:52:53 +0000 UTC - event for client-can-connect-80-x8wg8: {kubelet workload-cluster-4-md-0-5c7f78dbc8-rnz88} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine | |
| Jan 24 20:53:46.688: INFO: At 2020-01-24 20:52:53 +0000 UTC - event for client-can-connect-80-x8wg8: {kubelet workload-cluster-4-md-0-5c7f78dbc8-rnz88} Created: Created container client-can-connect-80-container | |
| Jan 24 20:53:46.688: INFO: At 2020-01-24 20:52:53 +0000 UTC - event for client-can-connect-80-x8wg8: {kubelet workload-cluster-4-md-0-5c7f78dbc8-rnz88} Started: Started container client-can-connect-80-container | |
| Jan 24 20:53:46.688: INFO: At 2020-01-24 20:52:57 +0000 UTC - event for client-can-connect-81-998sv: {kubelet workload-cluster-4-md-0-5c7f78dbc8-rnz88} Started: Started container client-can-connect-81-container | |
| Jan 24 20:53:46.688: INFO: At 2020-01-24 20:52:57 +0000 UTC - event for client-can-connect-81-998sv: {kubelet workload-cluster-4-md-0-5c7f78dbc8-rnz88} Created: Created container client-can-connect-81-container | |
| Jan 24 20:53:46.689: INFO: At 2020-01-24 20:52:57 +0000 UTC - event for client-can-connect-81-998sv: {kubelet workload-cluster-4-md-0-5c7f78dbc8-rnz88} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine | |
| Jan 24 20:53:46.690: INFO: At 2020-01-24 20:53:46 +0000 UTC - event for server-dzn76: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Killing: Stopping container server-container-81 | |
| Jan 24 20:53:46.691: INFO: At 2020-01-24 20:53:46 +0000 UTC - event for server-dzn76: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Killing: Stopping container server-container-80 | |
| Jan 24 20:53:46.702: INFO: POD NODE PHASE GRACE CONDITIONS | |
| Jan 24 20:53:46.702: INFO: server-dzn76 workload-cluster-4-md-0-5c7f78dbc8-tgjll Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 20:52:46 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 20:52:50 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 20:52:50 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 20:52:46 +0000 UTC }] | |
| Jan 24 20:53:46.702: INFO: | |
| Jan 24 20:53:46.717: INFO: | |
| Logging node info for node workload-cluster-4-controlplane-0 | |
| Jan 24 20:53:46.725: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-controlplane-0 /api/v1/nodes/workload-cluster-4-controlplane-0 82506459-dd0f-48e7-bd9d-75efde1111f6 41688 0 2020-01-24 17:18:10 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-controlplane-0 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-controlplane-0"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.28.186/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.210.0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://4204e6d6-9184-b271-f650-0dabd9f75516,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229202944 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124345344 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:50 +0000 UTC,LastTransitionTime:2020-01-24 20:20:50 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:53:05 +0000 UTC,LastTransitionTime:2020-01-24 17:18:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:53:05 +0000 UTC,LastTransitionTime:2020-01-24 17:18:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:53:05 +0000 UTC,LastTransitionTime:2020-01-24 17:18:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:53:05 +0000 UTC,LastTransitionTime:2020-01-24 17:44:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-controlplane-0,},NodeAddress{Type:ExternalIP,Address:10.193.28.186,},NodeAddress{Type:InternalIP,Address:10.193.28.186,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:80b9ac3a81a84f4e8f29dac5e41ffd1f,SystemUUID:D6E60442-8491-71B2-F650-0DABD9F75516,BootID:0411b150-3c1b-4216-a7bc-d4a359e6753d,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/syncer@sha256:fc80ec77a2ab4b58ddfa259a938f6d741933566011d56e5ffcc8680cc83538fe gcr.io/cloud-provider-vsphere/csi/release/syncer:v1.0.1],SizeBytes:38454608,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner:v1.2.1],SizeBytes:18924297,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/cpi/release/manager@sha256:64de5c7f10e55703142383fade40886091528ca505f00c98d57e27f10f04fc03 
gcr.io/cloud-provider-vsphere/cpi/release/manager:v1.1.0],SizeBytes:16201394,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher:v1.1.1],SizeBytes:15526843,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:53:46.726: INFO: | |
| Logging kubelet events for node workload-cluster-4-controlplane-0 | |
| Jan 24 20:53:46.736: INFO: | |
| Logging pods the kubelet thinks are on node workload-cluster-4-controlplane-0 | |
| Jan 24 20:53:46.754: INFO: vsphere-csi-node-bvs7m started at 2020-01-24 17:44:44 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:53:46.754: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:53:46.754: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:53:46.754: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:53:46.754: INFO: calico-node-dtwvg started at 2020-01-24 20:19:59 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:53:46.754: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:53:46.754: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:53:46.754: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:53:46.754: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:53:46.754: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-t9xfs started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:53:46.754: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:53:46.754: INFO: Container systemd-logs ready: true, restart count 0 | |
| Jan 24 20:53:46.754: INFO: etcd-workload-cluster-4-controlplane-0 started at 2020-01-24 17:17:56 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:53:46.755: INFO: Container etcd ready: true, restart count 0 | |
| Jan 24 20:53:46.755: INFO: kube-apiserver-workload-cluster-4-controlplane-0 started at 2020-01-24 17:17:56 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:53:46.755: INFO: Container kube-apiserver ready: true, restart count 0 | |
| Jan 24 20:53:46.755: INFO: kube-scheduler-workload-cluster-4-controlplane-0 started at 2020-01-24 17:17:56 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:53:46.756: INFO: Container kube-scheduler ready: true, restart count 1 | |
| Jan 24 20:53:46.756: INFO: kube-proxy-ng9fc started at 2020-01-24 17:19:39 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:53:46.756: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:53:46.756: INFO: kube-controller-manager-workload-cluster-4-controlplane-0 started at 2020-01-24 17:17:56 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:53:46.757: INFO: Container kube-controller-manager ready: true, restart count 2 | |
| Jan 24 20:53:46.757: INFO: vsphere-cloud-controller-manager-xgbwj started at 2020-01-24 17:19:42 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:53:46.757: INFO: Container vsphere-cloud-controller-manager ready: true, restart count 0 | |
| Jan 24 20:53:46.757: INFO: vsphere-csi-controller-0 started at 2020-01-24 17:44:46 +0000 UTC (0+5 container statuses recorded) | |
| Jan 24 20:53:46.757: INFO: Container csi-attacher ready: true, restart count 0 | |
| Jan 24 20:53:46.757: INFO: Container csi-provisioner ready: true, restart count 0 | |
| Jan 24 20:53:46.757: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:53:46.757: INFO: Container vsphere-csi-controller ready: true, restart count 0 | |
| Jan 24 20:53:46.757: INFO: Container vsphere-syncer ready: true, restart count 0 | |
| W0124 20:53:46.763433 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:53:46.919: INFO: | |
| Latency metrics for node workload-cluster-4-controlplane-0 | |
| Jan 24 20:53:46.919: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:53:46.923: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-dj56d /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-dj56d e3b9736e-3d53-4225-9f32-9b5ed43a351d 41606 0 2020-01-24 17:31:41 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-dj56d kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-dj56d"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.22.156/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.222.0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://42045096-e00b-ae55-1d85-007f6b8efaee,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229211136 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124353536 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:50 +0000 UTC,LastTransitionTime:2020-01-24 20:20:50 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:52:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:41 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:52:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:41 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:52:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:41 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:52:51 +0000 UTC,LastTransitionTime:2020-01-24 17:44:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-dj56d,},NodeAddress{Type:ExternalIP,Address:10.193.22.156,},NodeAddress{Type:InternalIP,Address:10.193.22.156,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b52db3bea2844f019e98985de63db412,SystemUUID:96500442-0BE0-55AE-1D85-007F6B8EFAEE,BootID:4ec1b10c-fc30-41e5-9d16-834bd0f5cf13,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[docker.io/jayunit100/kube-controllers@sha256:80d3bff453f163c62d0dadf44d5056053144eceaf0d05fe8c90b861aa4d5d602 docker.io/jayunit100/kube-controllers:tkg2],SizeBytes:21160877,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 
gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:53:46.923: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:53:46.930: INFO: | |
| Logging pods the kubelet thinks are on node workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:53:46.948: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-25557 started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:53:46.949: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:53:46.949: INFO: Container systemd-logs ready: true, restart count 0 | |
| Jan 24 20:53:46.949: INFO: vsphere-csi-node-tkb9r started at 2020-01-24 17:44:43 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:53:46.950: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:53:46.950: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:53:46.951: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:53:46.951: INFO: coredns-5644d7b6d9-m4lts started at 2020-01-24 17:44:44 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:53:46.951: INFO: Container coredns ready: true, restart count 0 | |
| Jan 24 20:53:46.951: INFO: coredns-5644d7b6d9-ph9m8 started at 2020-01-24 17:44:44 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:53:46.952: INFO: Container coredns ready: true, restart count 0 | |
| Jan 24 20:53:46.952: INFO: kube-proxy-9cjbv started at 2020-01-24 17:31:41 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:53:46.952: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:53:46.952: INFO: calico-node-8t722 started at 2020-01-24 20:19:58 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:53:46.952: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:53:46.953: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:53:46.953: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:53:46.953: INFO: Container calico-node ready: true, restart count 0 | |
| W0124 20:53:46.958395 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:53:47.045: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:53:47.045: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-l8q2f | |
| Jan 24 20:53:47.049: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-l8q2f /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-l8q2f 3be5c565-a115-4d45-a1be-f70db9c1b166 41607 0 2020-01-24 17:31:43 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-l8q2f kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-l8q2f"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.1.57/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.184.192 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://4204aa91-067b-70ca-7b40-e384413568c2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229202944 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124345344 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:23 +0000 UTC,LastTransitionTime:2020-01-24 20:20:23 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:52:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:52:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:52:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:52:51 +0000 UTC,LastTransitionTime:2020-01-24 17:44:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-l8q2f,},NodeAddress{Type:ExternalIP,Address:10.193.1.57,},NodeAddress{Type:InternalIP,Address:10.193.1.57,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:45e1a30e9fcd470795575df43a4210d5,SystemUUID:91AA0442-7B06-CA70-7B40-E384413568C2,BootID:3aae672c-d64c-4549-8c6f-2c6720e8ef80,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:0ae49fdc22b3d81e85b98ea4587628bb78f49eaf5c14cdb47a13d8918d62cc00 gcr.io/google-containers/conformance:v1.16.3],SizeBytes:195864937,},ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 
gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:53:47.050: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-l8q2f | |
| Jan 24 20:53:47.056: INFO: | |
| Logging pods the kubelet thinks are on node workload-cluster-4-md-0-5c7f78dbc8-l8q2f | |
| Jan 24 20:53:47.079: INFO: kube-proxy-pw7c7 started at 2020-01-24 17:31:43 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:53:47.079: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:53:47.079: INFO: calico-node-rqffr started at 2020-01-24 20:19:59 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:53:47.079: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:53:47.079: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:53:47.079: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:53:47.079: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:53:47.079: INFO: vsphere-csi-node-bml6x started at 2020-01-24 17:44:44 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:53:47.079: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:53:47.079: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:53:47.079: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:53:47.079: INFO: sonobuoy-e2e-job-60496bb95c8b4e15 started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:53:47.079: INFO: Container e2e ready: true, restart count 0 | |
| Jan 24 20:53:47.079: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:53:47.079: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-pvvzm started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:53:47.079: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:53:47.079: INFO: Container systemd-logs ready: true, restart count 0 | |
| W0124 20:53:47.088351 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:53:47.190: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-l8q2f | |
| Jan 24 20:53:47.190: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:53:47.193: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-rnz88 /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-rnz88 6a50990d-c3aa-408e-89a9-b96a59028428 41712 0 2020-01-24 17:31:34 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-rnz88 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-rnz88"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.19.181/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.158.1 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://42041b55-5b6a-2e5f-e33f-ad3ffa7f7b6e,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229211136 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124353536 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:37 +0000 UTC,LastTransitionTime:2020-01-24 20:20:37 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:53:15 +0000 UTC,LastTransitionTime:2020-01-24 17:31:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:53:15 +0000 UTC,LastTransitionTime:2020-01-24 17:31:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:53:15 +0000 UTC,LastTransitionTime:2020-01-24 17:31:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:53:15 +0000 UTC,LastTransitionTime:2020-01-24 17:44:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-rnz88,},NodeAddress{Type:ExternalIP,Address:10.193.19.181,},NodeAddress{Type:InternalIP,Address:10.193.19.181,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:89e27edae47c4afabb7c067d037286ec,SystemUUID:551B0442-6A5B-5F2E-E33F-AD3FFA7F7B6E,BootID:d2a8e20a-f6ad-4b68-b732-e0d7bdbcc492,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:0ae49fdc22b3d81e85b98ea4587628bb78f49eaf5c14cdb47a13d8918d62cc00 gcr.io/google-containers/conformance:v1.16.3],SizeBytes:195864937,},ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:4a7190a4731cf3340c39ab33abf845baf6c81d911c965a879fffc1552e1a1938 
docker.io/calico/kube-controllers:v3.10.3],SizeBytes:21175514,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:53:47.194: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:53:47.200: INFO: | |
| Logging pods the kubelet thinks are on node workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:53:47.218: INFO: kube-proxy-hwjc4 started at 2020-01-24 17:31:34 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:53:47.218: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:53:47.218: INFO: calico-node-5g8mr started at 2020-01-24 20:19:58 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:53:47.218: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:53:47.218: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:53:47.218: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:53:47.218: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:53:47.218: INFO: vsphere-csi-node-6nzwf started at 2020-01-24 17:44:44 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:53:47.218: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:53:47.218: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:53:47.218: INFO: Container vsphere-csi-node ready: false, restart count 15 | |
| Jan 24 20:53:47.218: INFO: calico-kube-controllers-7489ff5b7c-sq5gk started at 2020-01-24 20:20:00 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:53:47.218: INFO: Container calico-kube-controllers ready: true, restart count 0 | |
| Jan 24 20:53:47.218: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-zjw2s started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:53:47.218: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:53:47.218: INFO: Container systemd-logs ready: true, restart count 0 | |
| W0124 20:53:47.224254 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:53:47.333: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:53:47.333: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:53:47.337: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-tgjll /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-tgjll efb8ee46-7fce-4349-92f2-46fa03f02d30 41605 0 2020-01-24 17:31:37 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-tgjll kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-tgjll"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.25.9/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.225.128 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://4204f234-cdde-565a-8623-6c3c5457e3cc,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229202944 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124345344 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:26 +0000 UTC,LastTransitionTime:2020-01-24 20:20:26 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:52:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:52:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:52:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:52:51 +0000 UTC,LastTransitionTime:2020-01-24 17:44:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-tgjll,},NodeAddress{Type:ExternalIP,Address:10.193.25.9,},NodeAddress{Type:InternalIP,Address:10.193.25.9,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bb2c77d8141443fd8503de52dbae52e8,SystemUUID:34F20442-DECD-5A56-8623-6C3C5457E3CC,BootID:849ca1bb-d08e-417e-bf12-485a70ffc44a,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:0ae49fdc22b3d81e85b98ea4587628bb78f49eaf5c14cdb47a13d8918d62cc00 gcr.io/google-containers/conformance:v1.16.3],SizeBytes:195864937,},ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 
gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:53:47.338: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:53:47.344: INFO: | |
| Logging pods the kubelet thinks are on node workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:53:47.362: INFO: kube-proxy-wp7tn started at 2020-01-24 17:31:37 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:53:47.362: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:53:47.362: INFO: calico-node-wvctw started at 2020-01-24 20:19:58 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:53:47.362: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:53:47.362: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:53:47.362: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:53:47.362: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:53:47.362: INFO: vsphere-csi-node-rqv42 started at 2020-01-24 17:44:49 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:53:47.362: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:53:47.362: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:53:47.362: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:53:47.362: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-prfkd started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:53:47.362: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:53:47.362: INFO: Container systemd-logs ready: true, restart count 0 | |
| Jan 24 20:53:47.362: INFO: server-dzn76 started at 2020-01-24 20:52:46 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:53:47.362: INFO: Container server-container-80 ready: true, restart count 0 | |
| Jan 24 20:53:47.362: INFO: Container server-container-81 ready: true, restart count 0 | |
| W0124 20:53:47.369194 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:53:47.450: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:53:47.450: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:53:47.454: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-vvrzt /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-vvrzt 200933f4-8d1c-469c-b168-43a6ccbf5d04 41604 0 2020-01-24 17:31:43 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-vvrzt kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-vvrzt"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.7.86/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.163.64 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.5.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://42048f3c-cde1-2850-a851-6c4d757659cc,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.5.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229202944 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124345344 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:23 +0000 UTC,LastTransitionTime:2020-01-24 20:20:23 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:52:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:52:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:52:51 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:52:51 +0000 UTC,LastTransitionTime:2020-01-24 17:44:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-vvrzt,},NodeAddress{Type:ExternalIP,Address:10.193.7.86,},NodeAddress{Type:InternalIP,Address:10.193.7.86,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5c2ea802c8014c7aa88cb9bbbea1711b,SystemUUID:3C8F0442-E1CD-5028-A851-6C4D757659CC,BootID:9e70f010-f55d-4f1e-9e44-645f0afe4c09,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 
k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
| Jan 24 20:53:47.455: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:53:47.460: INFO: | |
| Logging pods the kubelet thinks are on node workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:53:47.481: INFO: vsphere-csi-node-ttr82 started at 2020-01-24 17:44:44 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:53:47.481: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:53:47.481: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:53:47.481: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:53:47.481: INFO: sonobuoy started at 2020-01-24 20:21:31 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:53:47.481: INFO: Container kube-sonobuoy ready: true, restart count 0 | |
| Jan 24 20:53:47.481: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-9jn4t started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:53:47.481: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:53:47.481: INFO: Container systemd-logs ready: true, restart count 0 | |
| Jan 24 20:53:47.481: INFO: kube-proxy-5jvhn started at 2020-01-24 17:31:43 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:53:47.481: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:53:47.481: INFO: calico-node-zpwz4 started at 2020-01-24 20:19:59 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:53:47.481: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:53:47.481: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:53:47.481: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:53:47.481: INFO: Container calico-node ready: true, restart count 0 | |
| W0124 20:53:47.486643 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:53:47.608: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:53:47.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
| STEP: Destroying namespace "network-policy-8451" for this suite. | |
| Jan 24 20:53:59.624: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered | |
| Jan 24 20:53:59.717: INFO: namespace network-policy-8451 deletion completed in 12.104418343s | |
| STEP: Destroying namespace "network-policy-b-3051" for this suite. | |
| Jan 24 20:54:05.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered | |
| Jan 24 20:54:05.829: INFO: namespace network-policy-b-3051 deletion completed in 6.111509159s | |
| • Failure [79.502 seconds] | |
| [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 | |
| NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:56 | |
| should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy] [It] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:290 | |
| Jan 24 20:53:46.591: Error getting container logs: the server could not find the requested resource (get pods client-a-p6df2) | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:1421 | |
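The failure above is in log collection rather than policy enforcement: the framework could not fetch the finished client pod's logs ("could not find the requested resource"). For reference, a hypothetical manual retrieval of the same logs would look like this, assuming the pod ran in the network-policy-8451 namespace destroyed above and the fixture were still alive:

kubectl -n network-policy-8451 logs client-a-p6df2 --all-containers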
| ------------------------------ | |
| SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS | |
| ------------------------------ | |
| [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client | |
| should enforce multiple ingress policies with ingress allow-all policy taking precedence [Feature:NetworkPolicy] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:960 | |
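This test exercises the additive semantics of NetworkPolicy: every policy that selects a pod contributes allowed flows, so a blanket allow-all ingress policy admits traffic even when a stricter policy also selects the same pods. A minimal sketch of such an allow-all policy (not the e2e fixture itself; the name is illustrative and the namespace is the one this test creates below):

kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-ingress          # hypothetical name
  namespace: network-policy-6177
spec:
  podSelector: {}                  # selects every pod in the namespace
  ingress:
  - {}                             # a single empty rule allows all ingress
EOF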
| [BeforeEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 | |
| STEP: Creating a kubernetes client | |
| Jan 24 20:54:05.834: INFO: >>> kubeConfig: /tmp/kubeconfig-418650170 | |
| STEP: Building a namespace api object, basename network-policy | |
| STEP: Waiting for a default service account to be provisioned in namespace | |
| [BeforeEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:51 | |
| [BeforeEach] NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:57 | |
| STEP: Creating a simple server that serves on ports 80 and 81. | |
| STEP: Creating a server pod server in namespace network-policy-6177 | |
| Jan 24 20:54:05.908: INFO: Created pod server-4r5l8 | |
| STEP: Creating a service svc-server for pod server in namespace network-policy-6177 | |
| Jan 24 20:54:05.939: INFO: Created service svc-server | |
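The server fixture created above can be approximated as follows; this is a sketch under assumptions, not the suite's exact spec. The image and container names come from the events later in this log, the pod-name label is assumed, and agnhost's netexec mode stands in for whatever listener the e2e test actually runs on ports 80 and 81:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: server
  namespace: network-policy-6177
  labels:
    pod-name: server               # assumed label key/value
spec:
  containers:
  - name: server-container-80
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.6
    args: ["netexec", "--http-port=80"]   # simple HTTP listener on 80
    ports:
    - containerPort: 80
  - name: server-container-81
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.6
    args: ["netexec", "--http-port=81"]   # and a second one on 81
    ports:
    - containerPort: 81
---
apiVersion: v1
kind: Service
metadata:
  name: svc-server
  namespace: network-policy-6177
spec:
  selector:
    pod-name: server               # routes to the pod above
  ports:
  - name: port-80
    port: 80
    targetPort: 80
  - name: port-81
    port: 81
    targetPort: 81
EOF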
| STEP: Waiting for pod ready | |
| STEP: Testing pods can connect to both ports when no policy is present. | |
| STEP: Creating client pod client-can-connect-80 that should successfully connect to svc-server. | |
| Jan 24 20:54:15.968: INFO: Waiting for client-can-connect-80-jb7gv to complete. | |
| Jan 24 20:54:17.985: INFO: Waiting for client-can-connect-80-jb7gv to complete. | |
| Jan 24 20:54:17.985: INFO: Waiting up to 5m0s for pod "client-can-connect-80-jb7gv" in namespace "network-policy-6177" to be "success or failure" | |
| Jan 24 20:54:17.989: INFO: Pod "client-can-connect-80-jb7gv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 3.831539ms | |
| STEP: Saw pod success | |
| Jan 24 20:54:17.989: INFO: Pod "client-can-connect-80-jb7gv" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-can-connect-80-jb7gv | |
| STEP: Creating client pod client-can-connect-81 that should successfully connect to svc-server. | |
| Jan 24 20:54:18.039: INFO: Waiting for client-can-connect-81-pj768 to complete. | |
| Jan 24 20:54:20.062: INFO: Waiting for client-can-connect-81-pj768 to complete. | |
| Jan 24 20:54:20.062: INFO: Waiting up to 5m0s for pod "client-can-connect-81-pj768" in namespace "network-policy-6177" to be "success or failure" | |
| Jan 24 20:54:20.067: INFO: Pod "client-can-connect-81-pj768": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.583248ms | |
| STEP: Saw pod success | |
| Jan 24 20:54:20.067: INFO: Pod "client-can-connect-81-pj768" satisfied condition "success or failure" | |
| STEP: Cleaning up the pod client-can-connect-81-pj768 | |
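Each "client-can-connect" pod is a short-lived probe whose exit code drives the "success or failure" wait above: exit 0 if svc-server answers on the target port, non-zero otherwise. A rough equivalent (assumed command line; the suite's exact wget arguments may differ), using the busybox:1.29 image the events below show:

kubectl run client-can-connect-80 --restart=Never \
  -n network-policy-6177 --image=docker.io/library/busybox:1.29 \
  -- /bin/sh -c 'wget -T 8 -qO- http://svc-server.network-policy-6177:80'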
| [It] should enforce multiple ingress policies with ingress allow-all policy taking precedence [Feature:NetworkPolicy] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:960 | |
| STEP: Creating a network policy for the server which allows traffic only from client-b. | |
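A sketch of the kind of policy this step applies (selectors and labels are assumptions; only the intent, "the server accepts ingress solely from client-b", comes from the log). With this in place, client-a's probe should time out and fail while client-b's should succeed:

kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-client-b-only        # hypothetical name
  namespace: network-policy-6177
spec:
  podSelector:
    matchLabels:
      pod-name: server             # assumed server label
  ingress:
  - from:
    - podSelector:
        matchLabels:
          pod-name: client-b       # assumed client-b label
EOF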
| STEP: Creating client-a which should not be able to contact the server. | |
| STEP: Creating client pod client-a that should not be able to connect to svc-server. | |
| Jan 24 20:54:20.102: INFO: Waiting for client-a-8pt8j to complete. | |
| Jan 24 20:54:20.102: INFO: Waiting up to 5m0s for pod "client-a-8pt8j" in namespace "network-policy-6177" to be "success or failure" | |
| Jan 24 20:54:20.115: INFO: Pod "client-a-8pt8j": Phase="Pending", Reason="", readiness=false. Elapsed: 12.65376ms | |
| Jan 24 20:54:22.148: INFO: Pod "client-a-8pt8j": Phase="Running", Reason="", readiness=true. Elapsed: 2.045874678s | |
| Jan 24 20:54:24.152: INFO: Pod "client-a-8pt8j": Phase="Running", Reason="", readiness=true. Elapsed: 4.049692072s | |
| Jan 24 20:54:26.157: INFO: Pod "client-a-8pt8j": Phase="Running", Reason="", readiness=true. Elapsed: 6.054767974s | |
| Jan 24 20:54:28.161: INFO: Pod "client-a-8pt8j": Phase="Running", Reason="", readiness=true. Elapsed: 8.059191673s | |
| Jan 24 20:54:30.165: INFO: Pod "client-a-8pt8j": Phase="Running", Reason="", readiness=true. Elapsed: 10.063128775s | |
| Jan 24 20:54:32.177: INFO: Pod "client-a-8pt8j": Phase="Running", Reason="", readiness=true. Elapsed: 12.074724774s | |
| Jan 24 20:54:34.181: INFO: Pod "client-a-8pt8j": Phase="Running", Reason="", readiness=true. Elapsed: 14.078917699s | |
| Jan 24 20:54:36.185: INFO: Pod "client-a-8pt8j": Phase="Running", Reason="", readiness=true. Elapsed: 16.083081543s | |
| Jan 24 20:54:38.190: INFO: Pod "client-a-8pt8j": Phase="Running", Reason="", readiness=true. Elapsed: 18.087829746s | |
| Jan 24 20:54:40.195: INFO: Pod "client-a-8pt8j": Phase="Running", Reason="", readiness=true. Elapsed: 20.092395175s | |
| Jan 24 20:54:42.199: INFO: Pod "client-a-8pt8j": Phase="Running", Reason="", readiness=true. Elapsed: 22.096358039s | |
| Jan 24 20:54:44.204: INFO: Pod "client-a-8pt8j": Phase="Running", Reason="", readiness=true. Elapsed: 24.101616461s | |
| Jan 24 20:54:46.209: INFO: Pod "client-a-8pt8j": Phase="Running", Reason="", readiness=true. Elapsed: 26.106428102s | |
| Jan 24 20:54:48.213: INFO: Pod "client-a-8pt8j": Phase="Running", Reason="", readiness=true. Elapsed: 28.111141789s | |
| Jan 24 20:54:50.217: INFO: Pod "client-a-8pt8j": Phase="Running", Reason="", readiness=true. Elapsed: 30.11521466s | |
| Jan 24 20:54:52.222: INFO: Pod "client-a-8pt8j": Phase="Running", Reason="", readiness=true. Elapsed: 32.119683895s | |
| Jan 24 20:54:54.226: INFO: Pod "client-a-8pt8j": Phase="Running", Reason="", readiness=true. Elapsed: 34.123825856s | |
| Jan 24 20:54:56.230: INFO: Pod "client-a-8pt8j": Phase="Running", Reason="", readiness=true. Elapsed: 36.128156644s | |
| Jan 24 20:54:58.235: INFO: Pod "client-a-8pt8j": Phase="Running", Reason="", readiness=true. Elapsed: 38.132921311s | |
| Jan 24 20:55:00.239: INFO: Pod "client-a-8pt8j": Phase="Running", Reason="", readiness=true. Elapsed: 40.136701589s | |
| Jan 24 20:55:02.247: INFO: Pod "client-a-8pt8j": Phase="Running", Reason="", readiness=true. Elapsed: 42.144423557s | |
| Jan 24 20:55:04.250: INFO: Pod "client-a-8pt8j": Phase="Running", Reason="", readiness=true. Elapsed: 44.148052291s | |
| Jan 24 20:55:06.254: INFO: Pod "client-a-8pt8j": Phase="Running", Reason="", readiness=true. Elapsed: 46.152099894s | |
| Jan 24 20:55:08.301: INFO: Pod "client-a-8pt8j": Phase="Failed", Reason="", readiness=false. Elapsed: 48.199287714s | |
| STEP: Cleaning up the pod client-a-8pt8j | |
| STEP: Creating client-b which should be able to contact the server. | |
| STEP: Creating client pod client-b that should successfully connect to svc-server. | |
| Jan 24 20:55:08.354: INFO: Waiting for client-b-hh6l4 to complete. | |
| Jan 24 20:55:56.379: INFO: Waiting for client-b-hh6l4 to complete. | |
| Jan 24 20:55:56.379: INFO: Waiting up to 5m0s for pod "client-b-hh6l4" in namespace "network-policy-6177" to be "success or failure" | |
| Jan 24 20:55:56.383: INFO: Pod "client-b-hh6l4": Phase="Failed", Reason="", readiness=false. Elapsed: 3.474777ms | |
| Jan 24 20:55:56.387: FAIL: Error getting container logs: the server rejected our request for an unknown reason (get pods client-b-hh6l4) | |
| STEP: Cleaning up the pod client-b-hh6l4 | |
| STEP: Cleaning up the policy. | |
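client-b was supposed to be admitted by the policy yet ended in Failed, and the follow-up log fetch was rejected, so the FAIL above reports a log-collection error rather than the connectivity result itself. Hypothetical triage commands for a run like this, while the fixture still exists:

kubectl -n network-policy-6177 describe pod client-b-hh6l4
kubectl -n network-policy-6177 get events --field-selector involvedObject.name=client-b-hh6l4
kubectl -n network-policy-6177 logs client-b-hh6l4 --all-containers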
| [AfterEach] NetworkPolicy between server and client | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/network_policy.go:75 | |
| STEP: Cleaning up the server. | |
| STEP: Cleaning up the server's service. | |
| [AfterEach] [sig-network] NetworkPolicy [LinuxOnly] | |
| /workspace/anago-v1.16.3-beta.0.56+b3cbbae08ec52a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 | |
| STEP: Collecting events from namespace "network-policy-6177". | |
| STEP: Found 25 events. | |
| Jan 24 20:55:56.502: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-a-8pt8j: {default-scheduler } Scheduled: Successfully assigned network-policy-6177/client-a-8pt8j to workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:55:56.502: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-b-hh6l4: {default-scheduler } Scheduled: Successfully assigned network-policy-6177/client-b-hh6l4 to workload-cluster-4-md-0-5c7f78dbc8-vvrzt | |
| Jan 24 20:55:56.503: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-80-jb7gv: {default-scheduler } Scheduled: Successfully assigned network-policy-6177/client-can-connect-80-jb7gv to workload-cluster-4-md-0-5c7f78dbc8-rnz88 | |
| Jan 24 20:55:56.503: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-81-pj768: {default-scheduler } Scheduled: Successfully assigned network-policy-6177/client-can-connect-81-pj768 to workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:55:56.503: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for server-4r5l8: {default-scheduler } Scheduled: Successfully assigned network-policy-6177/server-4r5l8 to workload-cluster-4-md-0-5c7f78dbc8-tgjll | |
| Jan 24 20:55:56.503: INFO: At 2020-01-24 20:54:06 +0000 UTC - event for server-4r5l8: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Created: Created container server-container-80 | |
| Jan 24 20:55:56.504: INFO: At 2020-01-24 20:54:06 +0000 UTC - event for server-4r5l8: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.6" already present on machine | |
| Jan 24 20:55:56.504: INFO: At 2020-01-24 20:54:07 +0000 UTC - event for server-4r5l8: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Started: Started container server-container-81 | |
| Jan 24 20:55:56.505: INFO: At 2020-01-24 20:54:07 +0000 UTC - event for server-4r5l8: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Created: Created container server-container-81 | |
| Jan 24 20:55:56.506: INFO: At 2020-01-24 20:54:07 +0000 UTC - event for server-4r5l8: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.6" already present on machine | |
| Jan 24 20:55:56.506: INFO: At 2020-01-24 20:54:07 +0000 UTC - event for server-4r5l8: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Started: Started container server-container-80 | |
| Jan 24 20:55:56.506: INFO: At 2020-01-24 20:54:16 +0000 UTC - event for client-can-connect-80-jb7gv: {kubelet workload-cluster-4-md-0-5c7f78dbc8-rnz88} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine | |
| Jan 24 20:55:56.509: INFO: At 2020-01-24 20:54:17 +0000 UTC - event for client-can-connect-80-jb7gv: {kubelet workload-cluster-4-md-0-5c7f78dbc8-rnz88} Created: Created container client-can-connect-80-container | |
| Jan 24 20:55:56.509: INFO: At 2020-01-24 20:54:17 +0000 UTC - event for client-can-connect-80-jb7gv: {kubelet workload-cluster-4-md-0-5c7f78dbc8-rnz88} Started: Started container client-can-connect-80-container | |
| Jan 24 20:55:56.512: INFO: At 2020-01-24 20:54:19 +0000 UTC - event for client-can-connect-81-pj768: {kubelet workload-cluster-4-md-0-5c7f78dbc8-dj56d} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine | |
| Jan 24 20:55:56.512: INFO: At 2020-01-24 20:54:19 +0000 UTC - event for client-can-connect-81-pj768: {kubelet workload-cluster-4-md-0-5c7f78dbc8-dj56d} Created: Created container client-can-connect-81-container | |
| Jan 24 20:55:56.512: INFO: At 2020-01-24 20:54:19 +0000 UTC - event for client-can-connect-81-pj768: {kubelet workload-cluster-4-md-0-5c7f78dbc8-dj56d} Started: Started container client-can-connect-81-container | |
| Jan 24 20:55:56.512: INFO: At 2020-01-24 20:54:21 +0000 UTC - event for client-a-8pt8j: {kubelet workload-cluster-4-md-0-5c7f78dbc8-dj56d} Created: Created container client-a-container | |
| Jan 24 20:55:56.513: INFO: At 2020-01-24 20:54:21 +0000 UTC - event for client-a-8pt8j: {kubelet workload-cluster-4-md-0-5c7f78dbc8-dj56d} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine | |
| Jan 24 20:55:56.513: INFO: At 2020-01-24 20:54:21 +0000 UTC - event for client-a-8pt8j: {kubelet workload-cluster-4-md-0-5c7f78dbc8-dj56d} Started: Started container client-a-container | |
| Jan 24 20:55:56.513: INFO: At 2020-01-24 20:55:09 +0000 UTC - event for client-b-hh6l4: {kubelet workload-cluster-4-md-0-5c7f78dbc8-vvrzt} Started: Started container client-b-container | |
| Jan 24 20:55:56.513: INFO: At 2020-01-24 20:55:09 +0000 UTC - event for client-b-hh6l4: {kubelet workload-cluster-4-md-0-5c7f78dbc8-vvrzt} Created: Created container client-b-container | |
| Jan 24 20:55:56.513: INFO: At 2020-01-24 20:55:09 +0000 UTC - event for client-b-hh6l4: {kubelet workload-cluster-4-md-0-5c7f78dbc8-vvrzt} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine | |
| Jan 24 20:55:56.515: INFO: At 2020-01-24 20:55:56 +0000 UTC - event for server-4r5l8: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Killing: Stopping container server-container-80 | |
| Jan 24 20:55:56.516: INFO: At 2020-01-24 20:55:56 +0000 UTC - event for server-4r5l8: {kubelet workload-cluster-4-md-0-5c7f78dbc8-tgjll} Killing: Stopping container server-container-81 | |
| Jan 24 20:55:56.520: INFO: POD NODE PHASE GRACE CONDITIONS | |
| Jan 24 20:55:56.520: INFO: server-4r5l8 workload-cluster-4-md-0-5c7f78dbc8-tgjll Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 20:54:05 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 20:54:14 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 20:54:14 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 20:54:05 +0000 UTC }] | |
| Jan 24 20:55:56.520: INFO: | |
| Jan 24 20:55:56.528: INFO: | |
| Logging node info for node workload-cluster-4-controlplane-0 | |
| Jan 24 20:55:56.531: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-controlplane-0 /api/v1/nodes/workload-cluster-4-controlplane-0 82506459-dd0f-48e7-bd9d-75efde1111f6 42087 0 2020-01-24 17:18:10 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-controlplane-0 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-controlplane-0"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.28.186/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.210.0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://4204e6d6-9184-b271-f650-0dabd9f75516,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229202944 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124345344 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:50 +0000 UTC,LastTransitionTime:2020-01-24 20:20:50 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:55:05 +0000 UTC,LastTransitionTime:2020-01-24 17:18:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:55:05 +0000 UTC,LastTransitionTime:2020-01-24 17:18:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:55:05 +0000 UTC,LastTransitionTime:2020-01-24 17:18:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:55:05 +0000 UTC,LastTransitionTime:2020-01-24 17:44:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-controlplane-0,},NodeAddress{Type:ExternalIP,Address:10.193.28.186,},NodeAddress{Type:InternalIP,Address:10.193.28.186,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:80b9ac3a81a84f4e8f29dac5e41ffd1f,SystemUUID:D6E60442-8491-71B2-F650-0DABD9F75516,BootID:0411b150-3c1b-4216-a7bc-d4a359e6753d,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/syncer@sha256:fc80ec77a2ab4b58ddfa259a938f6d741933566011d56e5ffcc8680cc83538fe gcr.io/cloud-provider-vsphere/csi/release/syncer:v1.0.1],SizeBytes:38454608,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner:v1.2.1],SizeBytes:18924297,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/cpi/release/manager@sha256:64de5c7f10e55703142383fade40886091528ca505f00c98d57e27f10f04fc03 
gcr.io/cloud-provider-vsphere/cpi/release/manager:v1.1.0],SizeBytes:16201394,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher:v1.1.1],SizeBytes:15526843,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
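The block above is the raw Go-syntax dump of the v1.Node object for the control-plane node. The same information reads far more easily straight from the cluster, using the same kubeconfig as the sonobuoy invocation:

    kubectl --kubeconfig ./out/workload-cluster-4/kubeconfig describe node workload-cluster-4-controlplane-0
    # or the raw object, closest to what the framework printed:
    kubectl --kubeconfig ./out/workload-cluster-4/kubeconfig get node workload-cluster-4-controlplane-0 -o yaml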
| Jan 24 20:55:56.531: INFO: | |
| Logging kubelet events for node workload-cluster-4-controlplane-0 | |
| Jan 24 20:55:56.537: INFO: | |
| Logging pods the kubelet thinks is on node workload-cluster-4-controlplane-0 | |
| Jan 24 20:55:56.569: INFO: kube-apiserver-workload-cluster-4-controlplane-0 started at 2020-01-24 17:17:56 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:55:56.569: INFO: Container kube-apiserver ready: true, restart count 0 | |
| Jan 24 20:55:56.569: INFO: kube-scheduler-workload-cluster-4-controlplane-0 started at 2020-01-24 17:17:56 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:55:56.569: INFO: Container kube-scheduler ready: true, restart count 1 | |
| Jan 24 20:55:56.569: INFO: kube-proxy-ng9fc started at 2020-01-24 17:19:39 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:55:56.569: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:55:56.569: INFO: vsphere-csi-node-bvs7m started at 2020-01-24 17:44:44 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:55:56.569: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:55:56.569: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:55:56.569: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:55:56.569: INFO: calico-node-dtwvg started at 2020-01-24 20:19:59 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:55:56.569: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:55:56.569: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:55:56.569: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:55:56.569: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:55:56.569: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-t9xfs started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:55:56.569: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:55:56.569: INFO: Container systemd-logs ready: true, restart count 0 | |
| Jan 24 20:55:56.569: INFO: etcd-workload-cluster-4-controlplane-0 started at 2020-01-24 17:17:56 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:55:56.569: INFO: Container etcd ready: true, restart count 0 | |
| Jan 24 20:55:56.569: INFO: vsphere-cloud-controller-manager-xgbwj started at 2020-01-24 17:19:42 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:55:56.569: INFO: Container vsphere-cloud-controller-manager ready: true, restart count 0 | |
| Jan 24 20:55:56.569: INFO: vsphere-csi-controller-0 started at 2020-01-24 17:44:46 +0000 UTC (0+5 container statuses recorded) | |
| Jan 24 20:55:56.569: INFO: Container csi-attacher ready: true, restart count 0 | |
| Jan 24 20:55:56.569: INFO: Container csi-provisioner ready: true, restart count 0 | |
| Jan 24 20:55:56.569: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:55:56.569: INFO: Container vsphere-csi-controller ready: true, restart count 0 | |
| Jan 24 20:55:56.569: INFO: Container vsphere-syncer ready: true, restart count 0 | |
| Jan 24 20:55:56.569: INFO: kube-controller-manager-workload-cluster-4-controlplane-0 started at 2020-01-24 17:17:56 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:55:56.569: INFO: Container kube-controller-manager ready: true, restart count 2 | |
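The per-node pod inventory above comes from the e2e framework's node-dump helper; roughly the same listing can be reproduced with a field selector:

    kubectl --kubeconfig ./out/workload-cluster-4/kubeconfig get pods --all-namespaces \
      --field-selector spec.nodeName=workload-cluster-4-controlplane-0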
| W0124 20:55:56.575210 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
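This warning appears to be a naming artifact rather than a real problem: in the v1.16 e2e framework, the metrics grabber decides whether a master is "registered" by matching node names against conventional master-name patterns, and "workload-cluster-4-controlplane-0" does not match, even though the node exists and carries the node-role.kubernetes.io/master taint. A quick check that the control-plane node is in fact registered:

    kubectl --kubeconfig ./out/workload-cluster-4/kubeconfig get nodes -l node-role.kubernetes.io/master -o name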
| Jan 24 20:55:56.724: INFO: | |
| Latency metrics for node workload-cluster-4-controlplane-0 | |
| Jan 24 20:55:56.724: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:55:56.728: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-dj56d /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-dj56d e3b9736e-3d53-4225-9f32-9b5ed43a351d 42209 0 2020-01-24 17:31:41 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-dj56d kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-dj56d"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.22.156/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.222.0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://42045096-e00b-ae55-1d85-007f6b8efaee,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229211136 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124353536 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:50 +0000 UTC,LastTransitionTime:2020-01-24 20:20:50 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:55:52 +0000 UTC,LastTransitionTime:2020-01-24 17:31:41 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:55:52 +0000 UTC,LastTransitionTime:2020-01-24 17:31:41 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:55:52 +0000 UTC,LastTransitionTime:2020-01-24 17:31:41 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:55:52 +0000 UTC,LastTransitionTime:2020-01-24 17:44:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-dj56d,},NodeAddress{Type:ExternalIP,Address:10.193.22.156,},NodeAddress{Type:InternalIP,Address:10.193.22.156,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b52db3bea2844f019e98985de63db412,SystemUUID:96500442-0BE0-55AE-1D85-007F6B8EFAEE,BootID:4ec1b10c-fc30-41e5-9d16-834bd0f5cf13,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[docker.io/jayunit100/kube-controllers@sha256:80d3bff453f163c62d0dadf44d5056053144eceaf0d05fe8c90b861aa4d5d602 docker.io/jayunit100/kube-controllers:tkg2],SizeBytes:21160877,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 
gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:14124020,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:6939423,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:6690548,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:f23c709f991553b75a9a8c6f156c4f61f47097424d6e5b0e6e9319da98a86185 docker.io/calico/pod2daemon-flexvol:v3.10.3],SizeBytes:4908661,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} | |
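One detail worth flagging in the image lists: this cluster runs patched Calico components from docker.io/jayunit100/* (node:tkg3, cni-plugin:tkg2, pod2daemon:tkg2) alongside the stock calico/* images, which is relevant when triaging the NetworkPolicy enforcement failures in this run. To list the images a given node has pulled:

    kubectl --kubeconfig ./out/workload-cluster-4/kubeconfig get node workload-cluster-4-md-0-5c7f78dbc8-dj56d \
      -o jsonpath='{range .status.images[*]}{.names}{"\n"}{end}'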
| Jan 24 20:55:56.729: INFO: | |
| Logging kubelet events for node workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:55:56.736: INFO: | |
| Logging pods the kubelet thinks is on node workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:55:56.757: INFO: kube-proxy-9cjbv started at 2020-01-24 17:31:41 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:55:56.757: INFO: Container kube-proxy ready: true, restart count 0 | |
| Jan 24 20:55:56.757: INFO: calico-node-8t722 started at 2020-01-24 20:19:58 +0000 UTC (3+1 container statuses recorded) | |
| Jan 24 20:55:56.757: INFO: Init container upgrade-ipam ready: true, restart count 0 | |
| Jan 24 20:55:56.757: INFO: Init container install-cni ready: true, restart count 0 | |
| Jan 24 20:55:56.757: INFO: Init container flexvol-driver ready: true, restart count 0 | |
| Jan 24 20:55:56.757: INFO: Container calico-node ready: true, restart count 0 | |
| Jan 24 20:55:56.757: INFO: sonobuoy-systemd-logs-daemon-set-6ed29df5401c40af-25557 started at 2020-01-24 20:21:45 +0000 UTC (0+2 container statuses recorded) | |
| Jan 24 20:55:56.757: INFO: Container sonobuoy-worker ready: true, restart count 0 | |
| Jan 24 20:55:56.757: INFO: Container systemd-logs ready: true, restart count 0 | |
| Jan 24 20:55:56.757: INFO: vsphere-csi-node-tkb9r started at 2020-01-24 17:44:43 +0000 UTC (0+3 container statuses recorded) | |
| Jan 24 20:55:56.757: INFO: Container liveness-probe ready: true, restart count 0 | |
| Jan 24 20:55:56.757: INFO: Container node-driver-registrar ready: true, restart count 0 | |
| Jan 24 20:55:56.757: INFO: Container vsphere-csi-node ready: true, restart count 0 | |
| Jan 24 20:55:56.757: INFO: coredns-5644d7b6d9-m4lts started at 2020-01-24 17:44:44 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:55:56.758: INFO: Container coredns ready: true, restart count 0 | |
| Jan 24 20:55:56.758: INFO: coredns-5644d7b6d9-ph9m8 started at 2020-01-24 17:44:44 +0000 UTC (0+1 container statuses recorded) | |
| Jan 24 20:55:56.758: INFO: Container coredns ready: true, restart count 0 | |
| W0124 20:55:56.765309 22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. | |
| Jan 24 20:55:56.848: INFO: | |
| Latency metrics for node workload-cluster-4-md-0-5c7f78dbc8-dj56d | |
| Jan 24 20:55:56.848: INFO: | |
| Logging node info for node workload-cluster-4-md-0-5c7f78dbc8-l8q2f | |
| Jan 24 20:55:56.852: INFO: Node Info: &Node{ObjectMeta:{workload-cluster-4-md-0-5c7f78dbc8-l8q2f /api/v1/nodes/workload-cluster-4-md-0-5c7f78dbc8-l8q2f 3be5c565-a115-4d45-a1be-f70db9c1b166 42210 0 2020-01-24 17:31:43 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:vsphere-vm.cpu-2.mem-9gb.os-linux beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:workload-cluster-4-md-0-5c7f78dbc8-l8q2f kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi.vsphere.vmware.com":"workload-cluster-4-md-0-5c7f78dbc8-l8q2f"} kubeadm.alpha.kubernetes.io/cri-socket:/var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.193.1.57/19 projectcalico.org/IPv4IPIPTunnelAddr:192.168.184.192 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://4204aa91-067b-70ca-7b40-e384413568c2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{158398345216 0} {<nil>} 154685884Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10229202944 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{142558510459 0} {<nil>} 142558510459 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{10124345344 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-24 20:20:23 +0000 UTC,LastTransitionTime:2020-01-24 20:20:23 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-24 20:55:52 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-24 20:55:52 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-24 20:55:52 +0000 UTC,LastTransitionTime:2020-01-24 17:31:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-24 20:55:52 +0000 UTC,LastTransitionTime:2020-01-24 17:44:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:workload-cluster-4-md-0-5c7f78dbc8-l8q2f,},NodeAddress{Type:ExternalIP,Address:10.193.1.57,},NodeAddress{Type:InternalIP,Address:10.193.1.57,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:45e1a30e9fcd470795575df43a4210d5,SystemUUID:91AA0442-7B06-CA70-7B40-E384413568C2,BootID:3aae672c-d64c-4549-8c6f-2c6720e8ef80,KernelVersion:4.15.0-72-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.3.0,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:0ae49fdc22b3d81e85b98ea4587628bb78f49eaf5c14cdb47a13d8918d62cc00 gcr.io/google-containers/conformance:v1.16.3],SizeBytes:195864937,},ContainerImage{Names:[gcr.io/heptio-images/sonobuoy-plugin-systemd-logs@sha256:004b11e6f2096bc72684f1d84ebaac28ed9edff96ee4ef8d5b9a75471acd1109 gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest],SizeBytes:121664884,},ContainerImage{Names:[docker.io/jayunit100/node@sha256:a66b00d1618f2909b8db4674e32bf90fbaa4db483c3d1021eda17b4ae1f49418 docker.io/jayunit100/node:tkg3],SizeBytes:88169132,},ContainerImage{Names:[docker.io/calico/node@sha256:9438de2ad7e4426e324c15ab20c5167b2ce168ebc21de84eca37825dbd434d19 docker.io/calico/node:v3.10.3],SizeBytes:87935755,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:85499339,},ContainerImage{Names:[gcr.io/cloud-provider-vsphere/csi/release/driver@sha256:fae6806f5423a0099cdf60cf53cff474b228ee4846a242d025e4833a66f91b3f gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1],SizeBytes:75110484,},ContainerImage{Names:[docker.io/jayunit100/cni-plugin@sha256:4a917896921a035fad4c60708289901b4963235e3397ed74460093bd3023bf29 docker.io/jayunit100/cni-plugin:tkg2],SizeBytes:57289165,},ContainerImage{Names:[docker.io/calico/cni@sha256:dd9840a37c296f42da6affc4c2e5285af772536c58f5eab0feafac9a1fceb48d docker.io/calico/cni:v3.10.3],SizeBytes:57093678,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f5a92265420ef1f553d0f4eafd4c688d7adcd6891c9378984cbf0d5df6c481a1 k8s.gcr.io/kube-apiserver:v1.16.3],SizeBytes:50504471,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:ca57d9c1ec4fb73e6b784cd8a3c7d8e9071b427a6e94ba20b3752c43adedf584 k8s.gcr.io/kube-controller-manager:v1.16.3],SizeBytes:48859981,},ContainerImage{Names:[docker.io/sonobuoy/sonobuoy@sha256:71f686fe27f2454c6648b8fbb82373bfa2b88ede4b5df623a655dd81930c6f16 docker.io/sonobuoy/sonobuoy:v0.17.1],SizeBytes:32030359,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6442ca8c88e003c354d4aab6b70c85edec1927e59441111a25d8968df5b405e9 k8s.gcr.io/kube-scheduler:v1.16.3],SizeBytes:31412214,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34 k8s.gcr.io/kube-proxy:v1.16.3],SizeBytes:30889499,},ContainerImage{Names:[docker.io/jayunit100/pod2daemon@sha256:06934883f117a793d895c2d1ed75cd8d341f41e5d85dbe3d373dec6b4b221123 docker.io/jayunit100/pod2daemon:tkg2],SizeBytes:29893197,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 
gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00 |