Created February 29, 2016 19:04
e2e log Feb.28
This file has been truncated.
Client Version: version.Info{Major:"", Minor:"", GitVersion:"v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a", GitCommit:"f5e2032ea2f1cd79187eaac32e22e37655c9c900", GitTreeState:"dirty"}
+++ Staging server tars to Google Storage: gs://kubernetes-staging-9c9cb47be7/spotter-kube-rkt-devel
+++ kubernetes-server-linux-amd64.tar.gz already staged ('rm /home/spotter/gocode/src/k8s.io/y-kubernetes/hack/e2e-internal/../../cluster/gce/../../cluster/../cluster/../cluster/../cluster/gce/../../cluster/../_output/release-tars/kubernetes-server-linux-amd64.tar.gz.uploaded.sha1' to force)
+++ kubernetes-salt.tar.gz already staged ('rm /home/spotter/gocode/src/k8s.io/y-kubernetes/hack/e2e-internal/../../cluster/gce/../../cluster/../cluster/../cluster/../cluster/gce/../../cluster/../_output/release-tars/kubernetes-salt.tar.gz.uploaded.sha1' to force)
+++ kubernetes-manifests.tar.gz already staged ('rm /home/spotter/gocode/src/k8s.io/y-kubernetes/hack/e2e-internal/../../cluster/gce/../../cluster/../cluster/../cluster/../cluster/gce/../../cluster/../_output/release-tars/kubernetes-manifests.tar.gz.uploaded.sha1' to force)
Starting master and configuring firewalls
NAME ZONE SIZE_GB TYPE STATUS
spotter-kube-rkt-master-pd us-east1-b 20 pd-ssd READY
NAME NETWORK SRC_RANGES RULES SRC_TAGS TARGET_TAGS
spotter-kube-rkt-master-https e2e 0.0.0.0/0 tcp:443 spotter-kube-rkt-master
Generating certs for alternate-names: IP:104.196.32.11,IP:10.0.0.1,DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster.local,DNS:spotter-kube-rkt-master
NAME NETWORK SRC_RANGES RULES SRC_TAGS TARGET_TAGS
spotter-kube-rkt-minion-all e2e 10.245.0.0/16 tcp,udp,icmp,esp,ah,sctp spotter-kube-rkt-minion
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
spotter-kube-rkt-master us-east1-b n1-standard-2 10.240.0.2 104.196.32.11 RUNNING
Creating minions.
NAME ZONE BASE_INSTANCE_NAME SIZE TARGET_SIZE INSTANCE_TEMPLATE AUTOSCALED
spotter-kube-rkt-minion-group us-east1-b spotter-kube-rkt-minion 3 spotter-kube-rkt-minion-template
Waiting for group to become stable, current operations: creating: 3
Waiting for group to become stable, current operations: creating: 3
Waiting for group to become stable, current operations: creating: 3
Waiting for group to become stable, current operations: creating: 3
Waiting for group to become stable, current operations: creating: 3
Waiting for group to become stable, current operations: creating: 3
Group is stable
Using master: spotter-kube-rkt-master (external IP: 104.196.32.11)
Waiting up to 300 seconds for cluster initialization.
This will continually check to see if the API for kubernetes is reachable.
This may time out if there was some uncaught error during start up.
Kubernetes cluster created.
cluster "coreos-gce-testing_spotter-kube-rkt" set.
user "coreos-gce-testing_spotter-kube-rkt" set.
context "coreos-gce-testing_spotter-kube-rkt" set.
switched to context "coreos-gce-testing_spotter-kube-rkt".
user "coreos-gce-testing_spotter-kube-rkt-basic-auth" set.
Wrote config for coreos-gce-testing_spotter-kube-rkt to /home/spotter/.kube/config
Kubernetes cluster is running. The master is running at:
https://104.196.32.11
The user name and password to use is located in /home/spotter/.kube/config.
Detected 4 ready nodes, found 4 nodes out of expected 3. Found more nodes than expected, your cluster may not behave correctly.
Found 4 node(s).
NAME STATUS AGE
spotter-kube-rkt-master Ready,SchedulingDisabled 20s
spotter-kube-rkt-minion-8b1u Ready 11s
spotter-kube-rkt-minion-yii0 Ready 20s
spotter-kube-rkt-minion-yo39 Ready 20s
Validate output:
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
Cluster validation succeeded
Kubernetes master is running at https://104.196.32.11
NAME NETWORK SRC_RANGES RULES SRC_TAGS TARGET_TAGS
spotter-kube-rkt-minion-spotter-kube-rkt-http-alt e2e 0.0.0.0/0 tcp:80,tcp:8080 spotter-kube-rkt-minion
allowed:
- IPProtocol: tcp
  ports:
  - '80'
- IPProtocol: tcp
  ports:
  - '8080'
creationTimestamp: '2016-02-28T19:26:25.893-08:00'
description: ''
id: '9176346562479916174'
kind: compute#firewall
name: spotter-kube-rkt-minion-spotter-kube-rkt-http-alt
network: https://www.googleapis.com/compute/v1/projects/coreos-gce-testing/global/networks/e2e
selfLink: https://www.googleapis.com/compute/v1/projects/coreos-gce-testing/global/firewalls/spotter-kube-rkt-minion-spotter-kube-rkt-http-alt
sourceRanges:
- 0.0.0.0/0
targetTags:
- spotter-kube-rkt-minion
NAME NETWORK SRC_RANGES RULES SRC_TAGS TARGET_TAGS
spotter-kube-rkt-minion-spotter-kube-rkt-nodeports e2e 0.0.0.0/0 tcp:30000-32767,udp:30000-32767 spotter-kube-rkt-minion
allowed:
- IPProtocol: tcp
  ports:
  - 30000-32767
- IPProtocol: udp
  ports:
  - 30000-32767
creationTimestamp: '2016-02-28T19:27:06.200-08:00'
description: ''
id: '4578206471472153701'
kind: compute#firewall
name: spotter-kube-rkt-minion-spotter-kube-rkt-nodeports
network: https://www.googleapis.com/compute/v1/projects/coreos-gce-testing/global/networks/e2e
selfLink: https://www.googleapis.com/compute/v1/projects/coreos-gce-testing/global/firewalls/spotter-kube-rkt-minion-spotter-kube-rkt-nodeports
sourceRanges:
- 0.0.0.0/0
targetTags:
- spotter-kube-rkt-minion
Client Version: version.Info{Major:"", Minor:"", GitVersion:"v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a", GitCommit:"f5e2032ea2f1cd79187eaac32e22e37655c9c900", GitTreeState:"dirty"}
Server Version: version.Info{Major:"", Minor:"", GitVersion:"v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a", GitCommit:"f5e2032ea2f1cd79187eaac32e22e37655c9c900", GitTreeState:"dirty"}
Setting up for KUBERNETES_PROVIDER="gce".
Feb 28 19:27:37.075: INFO: Fetching cloud provider for "gce"
I0228 19:27:37.075240 11176 gce.go:262] Using DefaultTokenSource &oauth2.reuseTokenSource{new:(*oauth2.tokenRefresher)(0xc208291ad0), mu:sync.Mutex{state:0, sema:0x0}, t:(*oauth2.Token)(0xc208336240)}
I0228 19:27:38.007311 11176 e2e.go:287] Starting e2e run "611b0c6e-de94-11e5-a1fb-54ee75510eb4" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1456716457 - Will randomize all specs
Will run 171 of 237 specs
Feb 28 19:27:38.011: INFO: >>> testContext.KubeConfig: /home/spotter/.kube/config
Feb 28 19:27:38.136: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 28 19:27:38.657: INFO: 7 / 7 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 28 19:27:38.657: INFO: expected 3 pod replicas in namespace 'kube-system', 3 are Running and Ready.
Kubectl client Proxy server
should support --unix-socket=/path [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1080
[BeforeEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:83
STEP: Creating a kubernetes client
Feb 28 19:27:38.657: INFO: >>> testContext.KubeConfig: /home/spotter/.kube/config
STEP: Building a namespace api object
Feb 28 19:27:38.746: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-7zyt5
Feb 28 19:27:38.832: INFO: Service account default in ns e2e-tests-kubectl-7zyt5 with secrets found. (86.462337ms)
STEP: Waiting for a default service account to be provisioned in namespace
Feb 28 19:27:38.832: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-7zyt5
Feb 28 19:27:38.915: INFO: Service account default in ns e2e-tests-kubectl-7zyt5 with secrets found. (82.787274ms)
[BeforeEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:113
[It] should support --unix-socket=/path [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1080
STEP: Starting the proxy
Feb 28 19:27:38.915: INFO: Asynchronously running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix944377321/test'
STEP: retrieving proxy /api/ output
[AfterEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:84
Feb 28 19:27:39.480: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-7zyt5" for this suite.
•
------------------------------
Services
should be able to change the type and ports of a service
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:716
[BeforeEach] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:83
STEP: Creating a kubernetes client
Feb 28 19:27:39.816: INFO: >>> testContext.KubeConfig: /home/spotter/.kube/config
STEP: Building a namespace api object
Feb 28 19:27:39.907: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-ud58i
Feb 28 19:27:39.991: INFO: Service account default in ns e2e-tests-services-ud58i had 0 secrets, ignoring for 2s: <nil>
Feb 28 19:27:42.074: INFO: Service account default in ns e2e-tests-services-ud58i with secrets found. (2.166847091s)
STEP: Waiting for a default service account to be provisioned in namespace
Feb 28 19:27:42.074: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-ud58i
Feb 28 19:27:42.158: INFO: Service account default in ns e2e-tests-services-ud58i with secrets found. (83.916342ms)
[BeforeEach] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:73
Feb 28 19:27:42.158: INFO: >>> testContext.KubeConfig: /home/spotter/.kube/config
[It] should be able to change the type and ports of a service
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:716
Feb 28 19:27:42.160: INFO: namespace for TCP test: e2e-tests-services-ud58i
STEP: creating a second namespace
Feb 28 19:27:42.248: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-ywrz7
Feb 28 19:27:42.331: INFO: Service account default in ns e2e-tests-services-ywrz7 with secrets found. (83.085133ms)
Feb 28 19:27:42.331: INFO: namespace for UDP test: e2e-tests-services-ywrz7
STEP: creating a TCP service mutability-test with type=ClusterIP in namespace e2e-tests-services-ud58i
STEP: creating a UDP service mutability-test with type=ClusterIP in namespace e2e-tests-services-ywrz7
STEP: verifying that TCP and UDP use the same port
Feb 28 19:27:42.597: INFO: service port (TCP and UDP): 80
STEP: creating a pod to be part of the TCP service mutability-test
Feb 28 19:27:42.687: INFO: Waiting up to 2m0s for 1 pods to be created
Feb 28 19:27:42.769: INFO: Found all 1 pods
Feb 28 19:27:42.769: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [mutability-test-qykz9]
Feb 28 19:27:42.769: INFO: Waiting up to 2m0s for pod mutability-test-qykz9 status to be running and ready
Feb 28 19:27:42.854: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Pending", readiness: false) (84.164628ms elapsed)
Feb 28 19:27:44.943: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Pending", readiness: false) (2.173557438s elapsed)
Feb 28 19:27:47.030: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Pending", readiness: false) (4.261016493s elapsed)
Feb 28 19:27:49.111: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Pending", readiness: false) (6.34128686s elapsed)
Feb 28 19:27:51.193: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Pending", readiness: false) (8.423722347s elapsed)
Feb 28 19:27:53.279: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (10.509615545s elapsed)
Feb 28 19:27:55.364: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (12.594622808s elapsed)
Feb 28 19:27:57.449: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (14.679910745s elapsed)
Feb 28 19:27:59.536: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (16.766808446s elapsed)
Feb 28 19:28:01.620: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (18.850347546s elapsed)
Feb 28 19:28:03.704: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (20.934155487s elapsed)
Feb 28 19:28:05.788: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (23.018145264s elapsed)
Feb 28 19:28:07.869: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (25.099216239s elapsed)
Feb 28 19:28:09.955: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (27.185198252s elapsed)
Feb 28 19:28:12.039: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (29.270000969s elapsed)
Feb 28 19:28:14.120: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (31.350871258s elapsed)
Feb 28 19:28:16.201: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (33.432060387s elapsed)
Feb 28 19:28:18.287: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (35.517586028s elapsed)
Feb 28 19:28:20.374: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (37.604864368s elapsed)
Feb 28 19:28:22.462: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (39.692095376s elapsed)
Feb 28 19:28:24.551: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (41.781768721s elapsed)
Feb 28 19:28:26.635: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (43.865442075s elapsed)
Feb 28 19:28:28.720: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (45.950859396s elapsed)
Feb 28 19:28:30.801: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (48.032030711s elapsed)
Feb 28 19:28:32.886: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (50.117016862s elapsed)
Feb 28 19:28:34.970: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (52.200702794s elapsed)
Feb 28 19:28:37.051: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (54.28172818s elapsed)
Feb 28 19:28:39.137: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (56.36760451s elapsed)
Feb 28 19:28:41.221: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (58.451543442s elapsed)
Feb 28 19:28:43.301: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (1m0.531508701s elapsed)
Feb 28 19:28:45.384: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (1m2.614116022s elapsed)
Feb 28 19:28:47.466: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (1m4.696241893s elapsed)
Feb 28 19:28:49.549: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (1m6.779938282s elapsed)
Feb 28 19:28:51.635: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (1m8.865548196s elapsed)
Feb 28 19:28:53.722: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (1m10.953063052s elapsed)
Feb 28 19:28:55.807: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (1m13.03772263s elapsed)
Feb 28 19:28:57.888: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (1m15.118966072s elapsed)
Feb 28 19:28:59.974: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (1m17.20413423s elapsed)
Feb 28 19:29:02.058: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (1m19.288632968s elapsed)
Feb 28 19:29:04.143: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (1m21.373510774s elapsed)
Feb 28 19:29:06.227: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (1m23.457384661s elapsed)
Feb 28 19:29:08.311: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (1m25.541978677s elapsed)
Feb 28 19:29:10.396: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (1m27.626676973s elapsed)
Feb 28 19:29:12.476: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (1m29.706950122s elapsed)
Feb 28 19:29:14.560: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (1m31.790194593s elapsed)
Feb 28 19:29:16.644: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (1m33.874609061s elapsed)
Feb 28 19:29:18.726: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (1m35.956836597s elapsed)
Feb 28 19:29:20.810: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (1m38.040756339s elapsed)
Feb 28 19:29:22.894: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (1m40.124284462s elapsed)
Feb 28 19:29:24.982: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (1m42.212162841s elapsed)
Feb 28 19:29:27.065: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (1m44.295365447s elapsed)
Feb 28 19:29:29.153: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (1m46.383893577s elapsed)
Feb 28 19:29:31.240: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (1m48.470796129s elapsed)
Feb 28 19:29:33.326: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (1m50.556698911s elapsed)
Feb 28 19:29:35.412: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (1m52.642441264s elapsed)
Feb 28 19:29:37.491: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (1m54.721202585s elapsed)
Feb 28 19:29:39.575: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (1m56.805958508s elapsed)
Feb 28 19:29:41.659: INFO: Waiting for pod mutability-test-qykz9 in namespace 'e2e-tests-services-ud58i' status to be 'running and ready'(found phase: "Running", readiness: false) (1m58.889435994s elapsed)
Feb 28 19:29:43.659: INFO: Pod mutability-test-qykz9 failed to be running and ready.
Feb 28 19:29:43.659: INFO: Wanted all 1 pods to be running and ready. Result: false. Pods: [mutability-test-qykz9]
Feb 28 19:29:43.659: FAIL: Failed waiting for pods to be running: Timeout waiting for 1 pods to be ready
[AfterEach] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:84
STEP: Collecting events from namespace "e2e-tests-services-ud58i".
Feb 28 19:29:43.824: INFO: At 2016-02-28 19:27:42 -0800 PST - event for mutability-test: {replication-controller } SuccessfulCreate: Created pod: mutability-test-qykz9
Feb 28 19:29:43.824: INFO: At 2016-02-28 19:27:42 -0800 PST - event for mutability-test-qykz9: {default-scheduler } Scheduled: Successfully assigned mutability-test-qykz9 to spotter-kube-rkt-minion-8b1u
Feb 28 19:29:43.824: INFO: At 2016-02-28 19:27:42 -0800 PST - event for mutability-test-qykz9: {kubelet spotter-kube-rkt-minion-8b1u} Pulling: pulling image "gcr.io/google_containers/netexec:1.4"
Feb 28 19:29:43.824: INFO: At 2016-02-28 19:27:47 -0800 PST - event for mutability-test-qykz9: {kubelet spotter-kube-rkt-minion-8b1u} Pulled: Successfully pulled image "gcr.io/google_containers/netexec:1.4"
Feb 28 19:29:43.824: INFO: At 2016-02-28 19:27:50 -0800 PST - event for mutability-test-qykz9: {kubelet spotter-kube-rkt-minion-8b1u} Created: Created with rkt id c3784fdb
Feb 28 19:29:43.824: INFO: At 2016-02-28 19:27:51 -0800 PST - event for mutability-test-qykz9: {kubelet spotter-kube-rkt-minion-8b1u} Started: Started with rkt id c3784fdb
Feb 28 19:29:43.824: INFO: At 2016-02-28 19:27:51 -0800 PST - event for mutability-test-qykz9: {kubelet spotter-kube-rkt-minion-8b1u} Pulled: Container image "gcr.io/google_containers/netexec:1.4" already present on machine
Feb 28 19:29:43.824: INFO: At 2016-02-28 19:27:54 -0800 PST - event for mutability-test-qykz9: {kubelet spotter-kube-rkt-minion-8b1u} Created: Created with rkt id 6fa3ee09
Feb 28 19:29:43.824: INFO: At 2016-02-28 19:27:54 -0800 PST - event for mutability-test-qykz9: {kubelet spotter-kube-rkt-minion-8b1u} Started: Started with rkt id 6fa3ee09
Feb 28 19:29:43.824: INFO: At 2016-02-28 19:29:13 -0800 PST - event for mutability-test-qykz9: {kubelet spotter-kube-rkt-minion-8b1u} Created: Created with rkt id 379de49f
Feb 28 19:29:43.824: INFO: At 2016-02-28 19:29:13 -0800 PST - event for mutability-test-qykz9: {kubelet spotter-kube-rkt-minion-8b1u} Started: Started with rkt id 379de49f
Feb 28 19:29:43.824: INFO: At 2016-02-28 19:29:16 -0800 PST - event for mutability-test-qykz9: {kubelet spotter-kube-rkt-minion-8b1u} Started: Started with rkt id 3f499740
Feb 28 19:29:43.824: INFO: At 2016-02-28 19:29:16 -0800 PST - event for mutability-test-qykz9: {kubelet spotter-kube-rkt-minion-8b1u} Created: Created with rkt id 3f499740
Feb 28 19:29:43.824: INFO: At 2016-02-28 19:29:20 -0800 PST - event for mutability-test-qykz9: {kubelet spotter-kube-rkt-minion-8b1u} Created: Created with rkt id f669d599
Feb 28 19:29:43.824: INFO: At 2016-02-28 19:29:20 -0800 PST - event for mutability-test-qykz9: {kubelet spotter-kube-rkt-minion-8b1u} Started: Started with rkt id f669d599
Feb 28 19:29:43.824: INFO: At 2016-02-28 19:29:24 -0800 PST - event for mutability-test-qykz9: {kubelet spotter-kube-rkt-minion-8b1u} Created: Created with rkt id 63c71963
Feb 28 19:29:43.824: INFO: At 2016-02-28 19:29:24 -0800 PST - event for mutability-test-qykz9: {kubelet spotter-kube-rkt-minion-8b1u} Started: Started with rkt id 63c71963
Feb 28 19:29:43.993: INFO: POD NODE PHASE GRACE CONDITIONS
Feb 28 19:29:43.993: INFO: mutability-test-qykz9 spotter-kube-rkt-minion-8b1u Running [{Ready False 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:27:42 -0800 PST ContainersNotReady containers with unready status: [netexec]}]
Feb 28 19:29:43.993: INFO: etcd-server-events-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:25:46 -0800 PST }]
Feb 28 19:29:43.993: INFO: etcd-server-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:25:44 -0800 PST }]
Feb 28 19:29:43.993: INFO: kube-apiserver-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:25:59 -0800 PST }]
Feb 28 19:29:43.993: INFO: kube-controller-manager-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:26:19 -0800 PST }]
Feb 28 19:29:43.993: INFO: kube-dns-v10-ucm9e spotter-kube-rkt-minion-yii0 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:27:01 -0800 PST }]
Feb 28 19:29:43.993: INFO: kube-scheduler-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:25:59 -0800 PST }]
Feb 28 19:29:43.993: INFO: kubernetes-dashboard-v0.1.0-xx4kj spotter-kube-rkt-minion-8b1u Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:26:43 -0800 PST }]
Feb 28 19:29:43.993: INFO: l7-lb-controller-vg83c spotter-kube-rkt-minion-yo39 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:27:05 -0800 PST }]
Feb 28 19:29:43.993: INFO:
Feb 28 19:29:44.077: INFO:
Logging node info for node spotter-kube-rkt-master
Feb 28 19:29:44.164: INFO: Node Info: &{{ } {spotter-kube-rkt-master /api/v1/nodes/spotter-kube-rkt-master 28a99cb5-de94-11e5-9ed9-42010af00002 200 0 2016-02-28 19:26:02 -0800 PST <nil> <nil> map[failure-domain.beta.kubernetes.io/region:us-east1 failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-master beta.kubernetes.io/instance-type:n1-standard-2] map[]} {10.245.0.0/24 1057893855773431411 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-master true} {map[pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI}] map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] [{OutOfDisk False 2016-02-28 19:29:42 -0800 PST 2016-02-28 19:26:02 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 19:29:42 -0800 PST 2016-02-28 19:26:02 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.2} {ExternalIP 104.196.32.11}] {{10250}} {18f8f8ae3bfd63d33ca6152c840ac772 18F8F8AE-3BFD-63D3-3CA6-152C840AC772 85ba1a68-a60b-4788-8d0d-31581d8a1dbc 4.4.1-coreos CoreOS 960.0.0 (Coeur Rouge) docker://1.10.0 v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[gcr.io/google_containers/kube-apiserver:e75cb1e68585bef333c384799aa55b36] 62275978} {[gcr.io/google_containers/kube-controller-manager:5ceda9f0fcc85cf5112fd4eab97edf76] 55494410} {[gcr.io/google_containers/kube-scheduler:34f7eca4d44271cb192f0c32a79fb31f] 36991674} {[python:2.7-slim-pyyaml] 205046931} {[gcr.io/google_containers/pause:2.0] 350164} {[gcr.io/google_containers/kube-registry-proxy:0.3] 151205002} {[gcr.io/google_containers/etcd:2.0.12] 15265152}]}}
Feb 28 19:29:44.164: INFO:
Logging kubelet events for node spotter-kube-rkt-master
Feb 28 19:29:44.250: INFO:
Logging pods the kubelet thinks is on node spotter-kube-rkt-master
Feb 28 19:29:44.431: INFO: Unable to retrieve kubelet pods for node spotter-kube-rkt-master
Feb 28 19:29:44.431: INFO:
Logging node info for node spotter-kube-rkt-minion-8b1u
Feb 28 19:29:44.514: INFO: Node Info: &{{ } {spotter-kube-rkt-minion-8b1u /api/v1/nodes/spotter-kube-rkt-minion-8b1u 2e5bf2ae-de94-11e5-9ed9-42010af00002 199 0 2016-02-28 19:26:11 -0800 PST <nil> <nil> map[beta.kubernetes.io/instance-type:n1-standard-2 failure-domain.beta.kubernetes.io/region:us-east1 failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-minion-8b1u] map[]} {10.245.1.0/24 9623675355699996544 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-minion-8b1u false} {map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] [{OutOfDisk False 2016-02-28 19:29:42 -0800 PST 2016-02-28 19:26:11 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 19:29:42 -0800 PST 2016-02-28 19:26:11 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.5} {ExternalIP 104.196.7.86}] {{10250}} {374318fe80761a3e4e3d72f72bd62802 374318FE-8076-1A3E-4E3D-72F72BD62802 b4070cab-7518-489e-9e3e-c5003667ca85 4.4.1-coreos CoreOS 960.0.0 (Coeur Rouge) rkt://1.0.0+git9003f4a-dirty v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[coreos.com/rkt/stage1-coreos:1.0.0+git9003f4a-dirty] 79951360} {[gcr.io/google_containers/netexec:1.4] 7513088} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v0.1.0] 35423744}]}}
Feb 28 19:29:44.514: INFO:
Logging kubelet events for node spotter-kube-rkt-minion-8b1u
Feb 28 19:29:44.599: INFO:
Logging pods the kubelet thinks is on node spotter-kube-rkt-minion-8b1u
Feb 28 19:29:44.884: INFO: kubernetes-dashboard-v0.1.0-xx4kj started at <nil> (0 container statuses recorded)
Feb 28 19:29:45.165: INFO:
Latency metrics for node spotter-kube-rkt-minion-8b1u
Feb 28 19:29:45.165: INFO:
Logging node info for node spotter-kube-rkt-minion-yii0
Feb 28 19:29:45.251: INFO: Node Info: &{{ } {spotter-kube-rkt-minion-yii0 /api/v1/nodes/spotter-kube-rkt-minion-yii0 28de6c1f-de94-11e5-9ed9-42010af00002 202 0 2016-02-28 19:26:02 -0800 PST <nil> <nil> map[failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-minion-yii0 beta.kubernetes.io/instance-type:n1-standard-2 failure-domain.beta.kubernetes.io/region:us-east1] map[]} {10.245.3.0/24 9944607117135381947 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-minion-yii0 false} {map[memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI}] map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] [{OutOfDisk False 2016-02-28 19:29:43 -0800 PST 2016-02-28 19:26:02 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 19:29:43 -0800 PST 2016-02-28 19:26:02 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.3} {ExternalIP 104.196.102.116}] {{10250}} {b38c5389751fa1f6a57a3e0bc169499a B38C5389-751F-A1F6-A57A-3E0BC169499A 301ef7e3-2066-4c44-95be-e4e1565c746a 4.4.1-coreos CoreOS 960.0.0 (Coeur Rouge) rkt://1.0.0+git9003f4a-dirty v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[gcr.io/google_containers/exechealthz:1.0] 7314944} {[gcr.io/google_containers/skydns:2015-10-13-8c72f8c] 41955328} {[gcr.io/google_containers/kube2sky:1.12] 24685056} {[gcr.io/google_containers/etcd:2.0.9] 12827136} {[coreos.com/rkt/stage1-coreos:1.0.0+git9003f4a-dirty] 79951360}]}} | |
Feb 28 19:29:45.251: INFO: | |
Logging kubelet events for node spotter-kube-rkt-minion-yii0 | |
Feb 28 19:29:45.333: INFO: | |
Logging pods the kubelet thinks is on node spotter-kube-rkt-minion-yii0 | |
Feb 28 19:29:45.615: INFO: kube-dns-v10-ucm9e started at <nil> (0 container statuses recorded) | |
Feb 28 19:29:45.891: INFO: | |
Latency metrics for node spotter-kube-rkt-minion-yii0 | |
Feb 28 19:29:45.891: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.99 Latency:21.864916s} | |
Feb 28 19:29:45.891: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.5 Latency:21.864916s} | |
Feb 28 19:29:45.891: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.9 Latency:21.864916s} | |
Feb 28 19:29:45.891: INFO: | |
Logging node info for node spotter-kube-rkt-minion-yo39 | |
Feb 28 19:29:45.975: INFO: Node Info: &{{ } {spotter-kube-rkt-minion-yo39 /api/v1/nodes/spotter-kube-rkt-minion-yo39 28d9f8e6-de94-11e5-9ed9-42010af00002 201 0 2016-02-28 19:26:02 -0800 PST <nil> <nil> map[beta.kubernetes.io/instance-type:n1-standard-2 failure-domain.beta.kubernetes.io/region:us-east1 failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-minion-yo39] map[]} {10.245.2.0/24 5812756469575456144 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-minion-yo39 false} {map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] map[pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI}] [{OutOfDisk False 2016-02-28 19:29:43 -0800 PST 2016-02-28 19:26:02 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 19:29:43 -0800 PST 2016-02-28 19:26:02 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.4} {ExternalIP 104.196.107.90}] {{10250}} {d67f51f98b40c228849e17dc0b2cff8f D67F51F9-8B40-C228-849E-17DC0B2CFF8F ba43a3b7-0093-4146-a294-411b5877c396 4.4.1-coreos CoreOS 960.0.0 (Coeur Rouge) rkt://1.0.0+git9003f4a-dirty v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[gcr.io/google_containers/glbc:0.5.1] 203040256} {[gcr.io/google_containers/defaultbackend:1.0] 7729152} {[coreos.com/rkt/stage1-coreos:1.0.0+git9003f4a-dirty] 79951360}]}} | |
Feb 28 19:29:45.975: INFO: | |
Logging kubelet events for node spotter-kube-rkt-minion-yo39 | |
Feb 28 19:29:46.065: INFO: | |
Logging pods the kubelet thinks is on node spotter-kube-rkt-minion-yo39 | |
Feb 28 19:29:46.329: INFO: l7-lb-controller-vg83c started at <nil> (0 container statuses recorded) | |
Feb 28 19:29:46.603: INFO: | |
Latency metrics for node spotter-kube-rkt-minion-yo39 | |
Feb 28 19:29:46.603: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.5 Latency:33.185358s} | |
Feb 28 19:29:46.603: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.99 Latency:33.185358s} | |
Feb 28 19:29:46.603: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.9 Latency:33.185358s} | |
Feb 28 19:29:46.604: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-services-ud58i" for this suite. | |
STEP: Destroying namespace "e2e-tests-services-ywrz7" for this suite. | |
• Failure [137.546 seconds] | |
Services | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:902 | |
should be able to change the type and ports of a service [It] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:716 | |
Feb 28 19:29:43.659: Failed waiting for pods to be running: Timeout waiting for 1 pods to be ready | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1742 | |
------------------------------ | |
ConfigMap | |
updates should be reflected in volume [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/configmap.go:272 | |
[BeforeEach] ConfigMap | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:83 | |
STEP: Creating a kubernetes client | |
Feb 28 19:29:57.362: INFO: >>> testContext.KubeConfig: /home/spotter/.kube/config | |
STEP: Building a namespace api object | |
Feb 28 19:29:57.452: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-configmap-sml9n | |
Feb 28 19:29:57.536: INFO: Service account default in ns e2e-tests-configmap-sml9n with secrets found. (83.836946ms) | |
STEP: Waiting for a default service account to be provisioned in namespace | |
Feb 28 19:29:57.536: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-configmap-sml9n | |
Feb 28 19:29:57.617: INFO: Service account default in ns e2e-tests-configmap-sml9n with secrets found. (80.402718ms) | |
[It] updates should be reflected in volume [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/configmap.go:272 | |
STEP: Creating configMap with name configmap-test-upd-b4e6150c-de94-11e5-a1fb-54ee75510eb4 | |
STEP: Creating the pod | |
Feb 28 19:29:57.791: INFO: Waiting up to 5m0s for pod pod-configmaps-b4f30724-de94-11e5-a1fb-54ee75510eb4 status to be running | |
Feb 28 19:29:57.878: INFO: Waiting for pod pod-configmaps-b4f30724-de94-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-configmap-sml9n' status to be 'running'(found phase: "Pending", readiness: false) (86.596135ms elapsed) | |
Feb 28 19:29:59.964: INFO: Waiting for pod pod-configmaps-b4f30724-de94-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-configmap-sml9n' status to be 'running'(found phase: "Pending", readiness: false) (2.172495616s elapsed) | |
Feb 28 19:30:02.050: INFO: Waiting for pod pod-configmaps-b4f30724-de94-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-configmap-sml9n' status to be 'running'(found phase: "Pending", readiness: false) (4.25898168s elapsed) | |
Feb 28 19:30:04.137: INFO: Found pod 'pod-configmaps-b4f30724-de94-11e5-a1fb-54ee75510eb4' on node 'spotter-kube-rkt-minion-8b1u' | |
STEP: Updating configmap configmap-test-upd-b4e6150c-de94-11e5-a1fb-54ee75510eb4 | |
STEP: waiting to observe update in volume | |
STEP: Deleting the pod | |
STEP: Cleaning up the configMap | |
[AfterEach] ConfigMap | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:84 | |
Feb 28 19:31:13.642: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-configmap-sml9n" for this suite. | |
• [SLOW TEST:81.696 seconds] | |
ConfigMap | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/configmap.go:333 | |
updates should be reflected in volume [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/configmap.go:272 | |
------------------------------ | |
Daemon set | |
should run and stop simple daemon | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:139 | |
[BeforeEach] Daemon set | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:83 | |
STEP: Creating a kubernetes client | |
Feb 28 19:31:19.059: INFO: >>> testContext.KubeConfig: /home/spotter/.kube/config | |
STEP: Building a namespace api object | |
Feb 28 19:31:19.149: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-daemonsets-wvxh7 | |
Feb 28 19:31:19.235: INFO: Service account default in ns e2e-tests-daemonsets-wvxh7 with secrets found. (85.612758ms) | |
STEP: Waiting for a default service account to be provisioned in namespace | |
Feb 28 19:31:19.235: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-daemonsets-wvxh7 | |
Feb 28 19:31:19.318: INFO: Service account default in ns e2e-tests-daemonsets-wvxh7 with secrets found. (82.782653ms) | |
[BeforeEach] Daemon set | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:82 | |
[It] should run and stop simple daemon | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:139 | |
Feb 28 19:31:25.670: INFO: Creating simple daemon set daemon-set | |
STEP: Check that daemon pods launch on every node of the cluster. | |
Feb 28 19:31:27.930: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1} | |
Feb 28 19:31:29.932: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:31:31.928: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1} | |
Feb 28 19:31:33.934: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:31:35.929: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1} | |
Feb 28 19:31:37.934: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:31:39.933: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1} | |
Feb 28 19:31:41.930: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:31:43.930: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:31:45.935: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1} | |
Feb 28 19:31:47.935: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:31:49.937: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:31:51.933: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:31:53.940: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1} | |
Feb 28 19:31:55.930: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1} | |
Feb 28 19:31:57.928: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1} | |
Feb 28 19:31:59.936: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:32:01.928: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:32:03.931: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1} | |
Feb 28 19:32:05.931: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1} | |
Feb 28 19:32:07.929: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1} | |
Feb 28 19:32:09.930: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1} | |
Feb 28 19:32:11.927: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1} | |
Feb 28 19:32:13.943: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1} | |
Feb 28 19:32:15.933: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1} | |
Feb 28 19:32:17.929: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:32:19.932: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:32:21.929: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1} | |
Feb 28 19:32:23.923: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1} | |
Feb 28 19:32:25.930: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:32:27.937: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:32:29.934: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:32:31.932: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1} | |
Feb 28 19:32:33.934: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:32:35.931: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:32:37.937: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:32:39.928: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:32:41.930: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:32:43.928: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:32:45.932: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1} | |
Feb 28 19:32:47.930: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:32:49.928: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:32:51.930: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1} | |
Feb 28 19:32:53.935: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:32:55.939: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:32:57.930: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:32:59.943: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:33:01.936: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:33:03.933: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:33:05.932: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:33:07.946: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:33:09.928: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1} | |
Feb 28 19:33:11.926: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:33:13.927: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1} | |
Feb 28 19:33:15.924: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1} | |
Feb 28 19:33:17.924: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1} | |
Feb 28 19:33:19.935: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:33:21.925: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1} | |
Feb 28 19:33:23.930: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1} | |
Feb 28 19:33:25.935: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1} | |
Feb 28 19:33:27.929: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:33:29.926: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:33:31.926: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1} | |
Feb 28 19:33:33.933: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1} | |
Feb 28 19:33:35.924: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:33:37.931: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:33:39.931: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:33:41.927: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:33:43.934: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1} | |
Feb 28 19:33:45.926: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:33:47.930: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1} | |
Feb 28 19:33:49.928: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:33:51.929: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:33:53.930: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1} | |
Feb 28 19:33:55.926: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1} | |
Feb 28 19:33:57.930: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1} | |
Feb 28 19:33:59.928: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1} | |
Feb 28 19:34:01.926: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:34:03.927: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1} | |
Feb 28 19:34:05.928: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:34:07.932: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:34:09.929: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:34:11.928: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:34:13.923: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:34:15.930: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1} | |
Feb 28 19:34:17.927: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:34:19.931: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1} | |
Feb 28 19:34:21.929: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1} | |
Feb 28 19:34:23.930: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:34:25.935: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1} | |
Feb 28 19:34:27.938: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:34:29.931: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1} | |
Feb 28 19:34:31.926: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1}
Feb 28 19:34:33.934: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1}
Feb 28 19:34:35.928: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1}
Feb 28 19:34:37.931: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1}
Feb 28 19:34:39.923: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1}
Feb 28 19:34:41.931: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1}
Feb 28 19:34:43.926: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1}
Feb 28 19:34:45.927: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1}
Feb 28 19:34:47.933: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1}
Feb 28 19:34:49.931: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1}
Feb 28 19:34:51.934: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1}
Feb 28 19:34:53.935: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1}
Feb 28 19:34:55.931: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1}
Feb 28 19:34:57.938: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1}
Feb 28 19:34:59.931: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1}
Feb 28 19:35:01.926: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1}
Feb 28 19:35:03.932: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1}
Feb 28 19:35:05.942: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1}
Feb 28 19:35:07.924: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1}
Feb 28 19:35:09.932: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1}
Feb 28 19:35:11.926: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1}
Feb 28 19:35:13.925: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1}
Feb 28 19:35:15.930: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1}
Feb 28 19:35:17.932: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1}
Feb 28 19:35:19.931: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1}
Feb 28 19:35:21.932: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1}
Feb 28 19:35:23.931: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1}
Feb 28 19:35:25.937: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1}
Feb 28 19:35:27.938: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1}
Feb 28 19:35:29.929: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1}
Feb 28 19:35:31.936: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1}
Feb 28 19:35:33.926: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1}
Feb 28 19:35:35.936: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1}
Feb 28 19:35:37.928: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1}
Feb 28 19:35:39.928: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1}
Feb 28 19:35:41.928: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1}
Feb 28 19:35:43.931: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1}
Feb 28 19:35:45.928: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1}
Feb 28 19:35:47.930: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1}
Feb 28 19:35:49.936: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1}
Feb 28 19:35:51.928: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1}
Feb 28 19:35:53.926: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1}
Feb 28 19:35:55.933: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1}
Feb 28 19:35:57.930: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1}
Feb 28 19:35:59.929: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1}
Feb 28 19:36:01.927: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1}
Feb 28 19:36:03.927: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1}
Feb 28 19:36:05.933: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1}
Feb 28 19:36:07.936: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1}
Feb 28 19:36:09.934: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1}
Feb 28 19:36:11.927: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1}
Feb 28 19:36:13.929: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1}
Feb 28 19:36:15.932: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1, "spotter-kube-rkt-minion-8b1u":1}
Feb 28 19:36:17.926: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1}
Feb 28 19:36:19.927: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1}
Feb 28 19:36:21.925: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1}
Feb 28 19:36:23.928: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1}
Feb 28 19:36:25.933: INFO: nodesToPodCount: map[string]int{"spotter-kube-rkt-minion-8b1u":1, "spotter-kube-rkt-minion-yo39":1, "spotter-kube-rkt-master":1, "spotter-kube-rkt-minion-yii0":1}
Feb 28 19:36:25.933: INFO: Check that reaper kills all daemon pods for daemon-set
Feb 28 19:36:30.400: INFO: nodesToPodCount: map[string]int{}
[AfterEach] Daemon set
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:67
Feb 28 19:36:30.481: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"extensions/v1beta1","metadata":{"selfLink":"/apis/extensions/v1beta1/namespaces/e2e-tests-daemonsets-wvxh7/daemonsets","resourceVersion":"435"},"items":[]}
Feb 28 19:36:30.566: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-wvxh7/pods","resourceVersion":"435"},"items":[]}
[AfterEach] Daemon set
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:84
STEP: Collecting events from namespace "e2e-tests-daemonsets-wvxh7".
Feb 28 19:36:37.078: INFO: At 2016-02-28 19:31:25 -0800 PST - event for daemon-set: {daemon-set } SuccessfulCreate: Created pod: daemon-set-veq70
Feb 28 19:36:37.078: INFO: At 2016-02-28 19:31:25 -0800 PST - event for daemon-set: {daemon-set } SuccessfulCreate: Created pod: daemon-set-ixewi
Feb 28 19:36:37.078: INFO: At 2016-02-28 19:31:25 -0800 PST - event for daemon-set: {daemon-set } SuccessfulCreate: Created pod: daemon-set-iqpuy
Feb 28 19:36:37.078: INFO: At 2016-02-28 19:31:25 -0800 PST - event for daemon-set: {daemon-set } SuccessfulCreate: Created pod: daemon-set-irggu
Feb 28 19:36:37.078: INFO: At 2016-02-28 19:31:25 -0800 PST - event for daemon-set-iqpuy: {kubelet spotter-kube-rkt-minion-8b1u} Pulling: pulling image "gcr.io/google_containers/serve_hostname:1.1"
Feb 28 19:36:37.078: INFO: At 2016-02-28 19:31:25 -0800 PST - event for daemon-set-irggu: {kubelet spotter-kube-rkt-minion-yo39} Pulling: pulling image "gcr.io/google_containers/serve_hostname:1.1"
Feb 28 19:36:37.078: INFO: At 2016-02-28 19:31:25 -0800 PST - event for daemon-set-veq70: {kubelet spotter-kube-rkt-minion-yii0} Pulling: pulling image "gcr.io/google_containers/serve_hostname:1.1"
Feb 28 19:36:37.078: INFO: At 2016-02-28 19:31:26 -0800 PST - event for daemon-set-ixewi: {kubelet spotter-kube-rkt-master} Pulling: pulling image "gcr.io/google_containers/serve_hostname:1.1"
Feb 28 19:36:37.078: INFO: At 2016-02-28 19:31:26 -0800 PST - event for daemon-set-ixewi: {kubelet spotter-kube-rkt-master} Pulled: Successfully pulled image "gcr.io/google_containers/serve_hostname:1.1"
Feb 28 19:36:37.078: INFO: At 2016-02-28 19:31:26 -0800 PST - event for daemon-set-ixewi: {kubelet spotter-kube-rkt-master} Created: Created container with docker id d51eeac7db28
Feb 28 19:36:37.078: INFO: At 2016-02-28 19:31:27 -0800 PST - event for daemon-set-ixewi: {kubelet spotter-kube-rkt-master} Started: Started container with docker id d51eeac7db28
Feb 28 19:36:37.078: INFO: At 2016-02-28 19:31:28 -0800 PST - event for daemon-set-iqpuy: {kubelet spotter-kube-rkt-minion-8b1u} Pulled: Successfully pulled image "gcr.io/google_containers/serve_hostname:1.1"
Feb 28 19:36:37.078: INFO: At 2016-02-28 19:31:28 -0800 PST - event for daemon-set-irggu: {kubelet spotter-kube-rkt-minion-yo39} Pulled: Successfully pulled image "gcr.io/google_containers/serve_hostname:1.1"
Feb 28 19:36:37.078: INFO: At 2016-02-28 19:31:28 -0800 PST - event for daemon-set-veq70: {kubelet spotter-kube-rkt-minion-yii0} Pulled: Successfully pulled image "gcr.io/google_containers/serve_hostname:1.1"
Feb 28 19:36:37.078: INFO: At 2016-02-28 19:31:31 -0800 PST - event for daemon-set-iqpuy: {kubelet spotter-kube-rkt-minion-8b1u} Started: Started with rkt id 83d504b5
Feb 28 19:36:37.078: INFO: At 2016-02-28 19:31:31 -0800 PST - event for daemon-set-iqpuy: {kubelet spotter-kube-rkt-minion-8b1u} Created: Created with rkt id 83d504b5
Feb 28 19:36:37.078: INFO: At 2016-02-28 19:31:31 -0800 PST - event for daemon-set-irggu: {kubelet spotter-kube-rkt-minion-yo39} Started: Started with rkt id 6b546394
Feb 28 19:36:37.078: INFO: At 2016-02-28 19:31:31 -0800 PST - event for daemon-set-irggu: {kubelet spotter-kube-rkt-minion-yo39} Created: Created with rkt id 6b546394
Feb 28 19:36:37.078: INFO: At 2016-02-28 19:31:31 -0800 PST - event for daemon-set-veq70: {kubelet spotter-kube-rkt-minion-yii0} Created: Created with rkt id eea8f20c
Feb 28 19:36:37.078: INFO: At 2016-02-28 19:31:31 -0800 PST - event for daemon-set-veq70: {kubelet spotter-kube-rkt-minion-yii0} Started: Started with rkt id eea8f20c
Feb 28 19:36:37.078: INFO: At 2016-02-28 19:36:26 -0800 PST - event for daemon-set: {daemon-set } SuccessfulDelete: Deleted pod: daemon-set-veq70
Feb 28 19:36:37.078: INFO: At 2016-02-28 19:36:26 -0800 PST - event for daemon-set: {daemon-set } SuccessfulDelete: Deleted pod: daemon-set-iqpuy
Feb 28 19:36:37.078: INFO: At 2016-02-28 19:36:26 -0800 PST - event for daemon-set: {daemon-set } SuccessfulDelete: Deleted pod: daemon-set-ixewi
Feb 28 19:36:37.078: INFO: At 2016-02-28 19:36:26 -0800 PST - event for daemon-set: {daemon-set } SuccessfulDelete: Deleted pod: daemon-set-irggu
Feb 28 19:36:37.078: INFO: At 2016-02-28 19:36:26 -0800 PST - event for daemon-set-iqpuy: {kubelet spotter-kube-rkt-minion-8b1u} Killing: Killing with rkt id 83d504b5
Feb 28 19:36:37.078: INFO: At 2016-02-28 19:36:26 -0800 PST - event for daemon-set-irggu: {kubelet spotter-kube-rkt-minion-yo39} Killing: Killing with rkt id 6b546394
Feb 28 19:36:37.078: INFO: At 2016-02-28 19:36:26 -0800 PST - event for daemon-set-ixewi: {kubelet spotter-kube-rkt-master} Killing: Killing container with docker id d51eeac7db28: Need to kill pod.
Feb 28 19:36:37.078: INFO: At 2016-02-28 19:36:26 -0800 PST - event for daemon-set-veq70: {kubelet spotter-kube-rkt-minion-yii0} Killing: Killing with rkt id eea8f20c
Feb 28 19:36:37.168: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Feb 28 19:36:37.168: INFO: etcd-server-events-kubernetes-master-spotter-kube-rkt-master  spotter-kube-rkt-master  Running  [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:25:46 -0800 PST }]
Feb 28 19:36:37.168: INFO: etcd-server-kubernetes-master-spotter-kube-rkt-master  spotter-kube-rkt-master  Running  [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:25:44 -0800 PST }]
Feb 28 19:36:37.168: INFO: kube-apiserver-kubernetes-master-spotter-kube-rkt-master  spotter-kube-rkt-master  Running  [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:25:59 -0800 PST }]
Feb 28 19:36:37.168: INFO: kube-controller-manager-kubernetes-master-spotter-kube-rkt-master  spotter-kube-rkt-master  Running  [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:26:19 -0800 PST }]
Feb 28 19:36:37.168: INFO: kube-dns-v10-ucm9e  spotter-kube-rkt-minion-yii0  Running  [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:27:01 -0800 PST }]
Feb 28 19:36:37.168: INFO: kube-scheduler-kubernetes-master-spotter-kube-rkt-master  spotter-kube-rkt-master  Running  [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:25:59 -0800 PST }]
Feb 28 19:36:37.168: INFO: kubernetes-dashboard-v0.1.0-xx4kj  spotter-kube-rkt-minion-8b1u  Running  [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:26:43 -0800 PST }]
Feb 28 19:36:37.168: INFO: l7-lb-controller-vg83c  spotter-kube-rkt-minion-yo39  Running  [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:27:05 -0800 PST }]
Feb 28 19:36:37.168: INFO:
Feb 28 19:36:37.256: INFO:
Logging node info for node spotter-kube-rkt-master
Feb 28 19:36:37.345: INFO: Node Info: &{{ } {spotter-kube-rkt-master /api/v1/nodes/spotter-kube-rkt-master 28a99cb5-de94-11e5-9ed9-42010af00002 436 0 2016-02-28 19:26:02 -0800 PST <nil> <nil> map[kubernetes.io/hostname:spotter-kube-rkt-master beta.kubernetes.io/instance-type:n1-standard-2 failure-domain.beta.kubernetes.io/region:us-east1 failure-domain.beta.kubernetes.io/zone:us-east1-b] map[]} {10.245.0.0/24 1057893855773431411 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-master true} {map[pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI}] map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] [{OutOfDisk False 2016-02-28 19:36:33 -0800 PST 2016-02-28 19:26:02 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 19:36:33 -0800 PST 2016-02-28 19:26:02 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.2} {ExternalIP 104.196.32.11}] {{10250}} {18f8f8ae3bfd63d33ca6152c840ac772 18F8F8AE-3BFD-63D3-3CA6-152C840AC772 85ba1a68-a60b-4788-8d0d-31581d8a1dbc 4.4.1-coreos CoreOS 960.0.0 (Coeur Rouge) docker://1.10.0 v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[gcr.io/google_containers/kube-apiserver:e75cb1e68585bef333c384799aa55b36] 62275978} {[gcr.io/google_containers/kube-controller-manager:5ceda9f0fcc85cf5112fd4eab97edf76] 55494410} {[gcr.io/google_containers/kube-scheduler:34f7eca4d44271cb192f0c32a79fb31f] 36991674} {[python:2.7-slim-pyyaml] 205046931} {[gcr.io/google_containers/pause:2.0] 350164} {[gcr.io/google_containers/kube-registry-proxy:0.3] 151205002} {[gcr.io/google_containers/etcd:2.0.12] 15265152} {[gcr.io/google_containers/serve_hostname:1.1] 4522409}]}}
Feb 28 19:36:37.345: INFO:
Logging kubelet events for node spotter-kube-rkt-master
Feb 28 19:36:37.431: INFO:
Logging pods the kubelet thinks is on node spotter-kube-rkt-master
Feb 28 19:36:37.599: INFO: Unable to retrieve kubelet pods for node spotter-kube-rkt-master
Feb 28 19:36:37.599: INFO:
Logging node info for node spotter-kube-rkt-minion-8b1u
Feb 28 19:36:37.691: INFO: Node Info: &{{ } {spotter-kube-rkt-minion-8b1u /api/v1/nodes/spotter-kube-rkt-minion-8b1u 2e5bf2ae-de94-11e5-9ed9-42010af00002 437 0 2016-02-28 19:26:11 -0800 PST <nil> <nil> map[failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-minion-8b1u beta.kubernetes.io/instance-type:n1-standard-2 failure-domain.beta.kubernetes.io/region:us-east1] map[]} {10.245.1.0/24 9623675355699996544 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-minion-8b1u false} {map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] map[memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI}] [{OutOfDisk False 2016-02-28 19:36:33 -0800 PST 2016-02-28 19:26:11 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 19:36:33 -0800 PST 2016-02-28 19:26:11 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.5} {ExternalIP 104.196.7.86}] {{10250}} {374318fe80761a3e4e3d72f72bd62802 374318FE-8076-1A3E-4E3D-72F72BD62802 b4070cab-7518-489e-9e3e-c5003667ca85 4.4.1-coreos CoreOS 960.0.0 (Coeur Rouge) rkt://1.0.0+git9003f4a-dirty v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[gcr.io/google_containers/serve_hostname:1.1] 4529152} {[coreos.com/rkt/stage1-coreos:1.0.0+git9003f4a-dirty] 79951360} {[gcr.io/google_containers/mounttest:0.6] 2090496} {[gcr.io/google_containers/netexec:1.4] 7513088} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v0.1.0] 35423744}]}}
Feb 28 19:36:37.691: INFO:
Logging kubelet events for node spotter-kube-rkt-minion-8b1u
Feb 28 19:36:37.777: INFO:
Logging pods the kubelet thinks is on node spotter-kube-rkt-minion-8b1u
Feb 28 19:36:38.031: INFO: kubernetes-dashboard-v0.1.0-xx4kj started at <nil> (0 container statuses recorded)
Feb 28 19:36:38.315: INFO:
Latency metrics for node spotter-kube-rkt-minion-8b1u
Feb 28 19:36:38.315: INFO:
Logging node info for node spotter-kube-rkt-minion-yii0
Feb 28 19:36:38.402: INFO: Node Info: &{{ } {spotter-kube-rkt-minion-yii0 /api/v1/nodes/spotter-kube-rkt-minion-yii0 28de6c1f-de94-11e5-9ed9-42010af00002 439 0 2016-02-28 19:26:02 -0800 PST <nil> <nil> map[beta.kubernetes.io/instance-type:n1-standard-2 failure-domain.beta.kubernetes.io/region:us-east1 failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-minion-yii0] map[]} {10.245.3.0/24 9944607117135381947 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-minion-yii0 false} {map[memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI}] map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] [{OutOfDisk False 2016-02-28 19:36:34 -0800 PST 2016-02-28 19:26:02 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 19:36:34 -0800 PST 2016-02-28 19:26:02 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.3} {ExternalIP 104.196.102.116}] {{10250}} {b38c5389751fa1f6a57a3e0bc169499a B38C5389-751F-A1F6-A57A-3E0BC169499A 301ef7e3-2066-4c44-95be-e4e1565c746a 4.4.1-coreos CoreOS 960.0.0 (Coeur Rouge) rkt://1.0.0+git9003f4a-dirty v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[gcr.io/google_containers/serve_hostname:1.1] 4529152} {[coreos.com/rkt/stage1-coreos:1.0.0+git9003f4a-dirty] 79951360} {[gcr.io/google_containers/exechealthz:1.0] 7314944} {[gcr.io/google_containers/skydns:2015-10-13-8c72f8c] 41955328} {[gcr.io/google_containers/kube2sky:1.12] 24685056} {[gcr.io/google_containers/etcd:2.0.9] 12827136}]}}
Feb 28 19:36:38.402: INFO:
Logging kubelet events for node spotter-kube-rkt-minion-yii0
Feb 28 19:36:38.493: INFO:
Logging pods the kubelet thinks is on node spotter-kube-rkt-minion-yii0
Feb 28 19:36:38.764: INFO: kube-dns-v10-ucm9e started at <nil> (0 container statuses recorded)
Feb 28 19:36:39.039: INFO:
Latency metrics for node spotter-kube-rkt-minion-yii0
Feb 28 19:36:39.039: INFO:
Logging node info for node spotter-kube-rkt-minion-yo39
Feb 28 19:36:39.121: INFO: Node Info: &{{ } {spotter-kube-rkt-minion-yo39 /api/v1/nodes/spotter-kube-rkt-minion-yo39 28d9f8e6-de94-11e5-9ed9-42010af00002 438 0 2016-02-28 19:26:02 -0800 PST <nil> <nil> map[failure-domain.beta.kubernetes.io/region:us-east1 failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-minion-yo39 beta.kubernetes.io/instance-type:n1-standard-2] map[]} {10.245.2.0/24 5812756469575456144 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-minion-yo39 false} {map[pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI}] map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] [{OutOfDisk False 2016-02-28 19:36:33 -0800 PST 2016-02-28 19:26:02 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 19:36:33 -0800 PST 2016-02-28 19:26:02 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.4} {ExternalIP 104.196.107.90}] {{10250}} {d67f51f98b40c228849e17dc0b2cff8f D67F51F9-8B40-C228-849E-17DC0B2CFF8F ba43a3b7-0093-4146-a294-411b5877c396 4.4.1-coreos CoreOS 960.0.0 (Coeur Rouge) rkt://1.0.0+git9003f4a-dirty v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[gcr.io/google_containers/serve_hostname:1.1] 4529152} {[coreos.com/rkt/stage1-coreos:1.0.0+git9003f4a-dirty] 79951360} {[gcr.io/google_containers/glbc:0.5.1] 203040256} {[gcr.io/google_containers/defaultbackend:1.0] 7729152}]}}
Feb 28 19:36:39.121: INFO:
Logging kubelet events for node spotter-kube-rkt-minion-yo39
Feb 28 19:36:39.205: INFO:
Logging pods the kubelet thinks is on node spotter-kube-rkt-minion-yo39
Feb 28 19:36:39.463: INFO: l7-lb-controller-vg83c started at <nil> (0 container statuses recorded)
Feb 28 19:36:39.746: INFO:
Latency metrics for node spotter-kube-rkt-minion-yo39
Feb 28 19:36:39.746: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-wvxh7" for this suite.
• Failure [326.105 seconds]
Daemon set
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:194
  should run and stop simple daemon [It]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:139
  error waiting for daemon pod to start
  Expected error:
      <*errors.errorString | 0xc20802a8b0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  not to have occurred
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:123
------------------------------
Docker Containers
  should be able to override the image's default arguments (docker cmd) [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:50
[BeforeEach] Docker Containers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:83
STEP: Creating a kubernetes client
Feb 28 19:36:45.163: INFO: >>> testContext.KubeConfig: /home/spotter/.kube/config
STEP: Building a namespace api object
Feb 28 19:36:45.248: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-containers-g3kpl
Feb 28 19:36:45.332: INFO: Service account default in ns e2e-tests-containers-g3kpl had 0 secrets, ignoring for 2s: <nil>
Feb 28 19:36:47.415: INFO: Service account default in ns e2e-tests-containers-g3kpl with secrets found. (2.167290698s)
STEP: Waiting for a default service account to be provisioned in namespace
Feb 28 19:36:47.415: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-containers-g3kpl
Feb 28 19:36:47.499: INFO: Service account default in ns e2e-tests-containers-g3kpl with secrets found. (83.333314ms)
[BeforeEach] Docker Containers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:35
[It] should be able to override the image's default arguments (docker cmd) [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:50
STEP: Creating a pod to test override arguments
Feb 28 19:36:47.588: INFO: Waiting up to 5m0s for pod client-containers-a9351fd9-de95-11e5-a1fb-54ee75510eb4 status to be success or failure
Feb 28 19:36:47.671: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-a9351fd9-de95-11e5-a1fb-54ee75510eb4' in namespace 'e2e-tests-containers-g3kpl' so far
Feb 28 19:36:47.671: INFO: Waiting for pod client-containers-a9351fd9-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-containers-g3kpl' status to be 'success or failure'(found phase: "Pending", readiness: false) (83.654743ms elapsed)
Feb 28 19:36:49.767: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-a9351fd9-de95-11e5-a1fb-54ee75510eb4' in namespace 'e2e-tests-containers-g3kpl' so far
Feb 28 19:36:49.767: INFO: Waiting for pod client-containers-a9351fd9-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-containers-g3kpl' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.179319781s elapsed)
Feb 28 19:36:51.849: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-a9351fd9-de95-11e5-a1fb-54ee75510eb4' in namespace 'e2e-tests-containers-g3kpl' so far
Feb 28 19:36:51.850: INFO: Waiting for pod client-containers-a9351fd9-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-containers-g3kpl' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.261822618s elapsed)
Feb 28 19:36:53.933: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-a9351fd9-de95-11e5-a1fb-54ee75510eb4' in namespace 'e2e-tests-containers-g3kpl' so far
Feb 28 19:36:53.933: INFO: Waiting for pod client-containers-a9351fd9-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-containers-g3kpl' status to be 'success or failure'(found phase: "Running", readiness: true) (6.345367208s elapsed)
Feb 28 19:36:56.018: INFO: Unexpected error occurred: pod 'client-containers-a9351fd9-de95-11e5-a1fb-54ee75510eb4' terminated with failure: &{ExitCode:52 Signal:0 Reason:Error Message: StartedAt:2016-02-28 19:36:47 -0800 PST FinishedAt:0001-01-01 00:00:00 +0000 UTC ContainerID:rkt://8487e813-bb53-48e0-96d4-0fcdbaa9b36e:test-container}
[AfterEach] Docker Containers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:84
STEP: Collecting events from namespace "e2e-tests-containers-g3kpl".
Feb 28 19:36:56.193: INFO: At 2016-02-28 19:36:47 -0800 PST - event for client-containers-a9351fd9-de95-11e5-a1fb-54ee75510eb4: {kubelet spotter-kube-rkt-minion-8b1u} Pulling: pulling image "gcr.io/google_containers/eptest:0.1"
Feb 28 19:36:56.193: INFO: At 2016-02-28 19:36:47 -0800 PST - event for client-containers-a9351fd9-de95-11e5-a1fb-54ee75510eb4: {default-scheduler } Scheduled: Successfully assigned client-containers-a9351fd9-de95-11e5-a1fb-54ee75510eb4 to spotter-kube-rkt-minion-8b1u
Feb 28 19:36:56.193: INFO: At 2016-02-28 19:36:50 -0800 PST - event for client-containers-a9351fd9-de95-11e5-a1fb-54ee75510eb4: {kubelet spotter-kube-rkt-minion-8b1u} Pulled: Successfully pulled image "gcr.io/google_containers/eptest:0.1"
Feb 28 19:36:56.193: INFO: At 2016-02-28 19:36:53 -0800 PST - event for client-containers-a9351fd9-de95-11e5-a1fb-54ee75510eb4: {kubelet spotter-kube-rkt-minion-8b1u} Created: Created with rkt id 8487e813
Feb 28 19:36:56.193: INFO: At 2016-02-28 19:36:53 -0800 PST - event for client-containers-a9351fd9-de95-11e5-a1fb-54ee75510eb4: {kubelet spotter-kube-rkt-minion-8b1u} Started: Started with rkt id 8487e813
Feb 28 19:36:56.361: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Feb 28 19:36:56.361: INFO: etcd-server-events-kubernetes-master-spotter-kube-rkt-master  spotter-kube-rkt-master  Running  [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:25:46 -0800 PST }]
Feb 28 19:36:56.361: INFO: etcd-server-kubernetes-master-spotter-kube-rkt-master  spotter-kube-rkt-master  Running  [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:25:44 -0800 PST }]
Feb 28 19:36:56.361: INFO: kube-apiserver-kubernetes-master-spotter-kube-rkt-master  spotter-kube-rkt-master  Running  [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:25:59 -0800 PST }]
Feb 28 19:36:56.361: INFO: kube-controller-manager-kubernetes-master-spotter-kube-rkt-master  spotter-kube-rkt-master  Running  [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:26:19 -0800 PST }]
Feb 28 19:36:56.361: INFO: kube-dns-v10-ucm9e  spotter-kube-rkt-minion-yii0  Running  [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:27:01 -0800 PST }]
Feb 28 19:36:56.361: INFO: kube-scheduler-kubernetes-master-spotter-kube-rkt-master  spotter-kube-rkt-master  Running  [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:25:59 -0800 PST }]
Feb 28 19:36:56.361: INFO: kubernetes-dashboard-v0.1.0-xx4kj  spotter-kube-rkt-minion-8b1u  Running  [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:26:43 -0800 PST }]
Feb 28 19:36:56.361: INFO: l7-lb-controller-vg83c  spotter-kube-rkt-minion-yo39  Running  [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:27:05 -0800 PST }]
Feb 28 19:36:56.361: INFO:
Feb 28 19:36:56.448: INFO:
Logging node info for node spotter-kube-rkt-master
Feb 28 19:36:56.531: INFO: Node Info: &{{ } {spotter-kube-rkt-master /api/v1/nodes/spotter-kube-rkt-master 28a99cb5-de94-11e5-9ed9-42010af00002 456 0 2016-02-28 19:26:02 -0800 PST <nil> <nil> map[failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-master beta.kubernetes.io/instance-type:n1-standard-2 failure-domain.beta.kubernetes.io/region:us-east1] map[]} {10.245.0.0/24 1057893855773431411 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-master true} {map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] map[pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI}] [{OutOfDisk False 2016-02-28 19:36:53 -0800 PST 2016-02-28 19:26:02 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 19:36:53 -0800 PST 2016-02-28 19:26:02 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.2} {ExternalIP 104.196.32.11}] {{10250}} {18f8f8ae3bfd63d33ca6152c840ac772 18F8F8AE-3BFD-63D3-3CA6-152C840AC772 85ba1a68-a60b-4788-8d0d-31581d8a1dbc 4.4.1-coreos CoreOS 960.0.0 (Coeur Rouge) docker://1.10.0 v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[gcr.io/google_containers/kube-apiserver:e75cb1e68585bef333c384799aa55b36] 62275978} {[gcr.io/google_containers/kube-controller-manager:5ceda9f0fcc85cf5112fd4eab97edf76] 55494410} {[gcr.io/google_containers/kube-scheduler:34f7eca4d44271cb192f0c32a79fb31f] 36991674} {[python:2.7-slim-pyyaml] 205046931} {[gcr.io/google_containers/pause:2.0] 350164} {[gcr.io/google_containers/kube-registry-proxy:0.3] 151205002} {[gcr.io/google_containers/etcd:2.0.12] 15265152} {[gcr.io/google_containers/serve_hostname:1.1] 4522409}]}} | |
Feb 28 19:36:56.532: INFO: | |
Logging kubelet events for node spotter-kube-rkt-master | |
Feb 28 19:36:56.617: INFO: | |
Logging pods the kubelet thinks are on node spotter-kube-rkt-master
Feb 28 19:36:56.787: INFO: Unable to retrieve kubelet pods for node spotter-kube-rkt-master | |
Feb 28 19:36:56.787: INFO: | |
Logging node info for node spotter-kube-rkt-minion-8b1u | |
Feb 28 19:36:56.870: INFO: Node Info: &{{ } {spotter-kube-rkt-minion-8b1u /api/v1/nodes/spotter-kube-rkt-minion-8b1u 2e5bf2ae-de94-11e5-9ed9-42010af00002 457 0 2016-02-28 19:26:11 -0800 PST <nil> <nil> map[beta.kubernetes.io/instance-type:n1-standard-2 failure-domain.beta.kubernetes.io/region:us-east1 failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-minion-8b1u] map[]} {10.245.1.0/24 9623675355699996544 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-minion-8b1u false} {map[memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI}] map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] [{OutOfDisk False 2016-02-28 19:36:53 -0800 PST 2016-02-28 19:26:11 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 19:36:53 -0800 PST 2016-02-28 19:26:11 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.5} {ExternalIP 104.196.7.86}] {{10250}} {374318fe80761a3e4e3d72f72bd62802 374318FE-8076-1A3E-4E3D-72F72BD62802 b4070cab-7518-489e-9e3e-c5003667ca85 4.4.1-coreos CoreOS 960.0.0 (Coeur Rouge) rkt://1.0.0+git9003f4a-dirty v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[gcr.io/google_containers/eptest:0.1] 2977792} {[coreos.com/rkt/stage1-coreos:1.0.0+git9003f4a-dirty] 79951360} {[gcr.io/google_containers/serve_hostname:1.1] 4529152} {[gcr.io/google_containers/mounttest:0.6] 2090496} {[gcr.io/google_containers/netexec:1.4] 7513088} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v0.1.0] 35423744}]}} | |
Feb 28 19:36:56.870: INFO: | |
Logging kubelet events for node spotter-kube-rkt-minion-8b1u | |
Feb 28 19:36:56.951: INFO: | |
Logging pods the kubelet thinks are on node spotter-kube-rkt-minion-8b1u
Feb 28 19:36:57.215: INFO: kubernetes-dashboard-v0.1.0-xx4kj started at <nil> (0 container statuses recorded) | |
Feb 28 19:36:57.529: INFO: | |
Latency metrics for node spotter-kube-rkt-minion-8b1u | |
Feb 28 19:36:57.529: INFO: | |
Logging node info for node spotter-kube-rkt-minion-yii0 | |
Feb 28 19:36:57.617: INFO: Node Info: &{{ } {spotter-kube-rkt-minion-yii0 /api/v1/nodes/spotter-kube-rkt-minion-yii0 28de6c1f-de94-11e5-9ed9-42010af00002 460 0 2016-02-28 19:26:02 -0800 PST <nil> <nil> map[beta.kubernetes.io/instance-type:n1-standard-2 failure-domain.beta.kubernetes.io/region:us-east1 failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-minion-yii0] map[]} {10.245.3.0/24 9944607117135381947 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-minion-yii0 false} {map[pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI}] map[memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI}] [{OutOfDisk False 2016-02-28 19:36:54 -0800 PST 2016-02-28 19:26:02 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 19:36:54 -0800 PST 2016-02-28 19:26:02 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.3} {ExternalIP 104.196.102.116}] {{10250}} {b38c5389751fa1f6a57a3e0bc169499a B38C5389-751F-A1F6-A57A-3E0BC169499A 301ef7e3-2066-4c44-95be-e4e1565c746a 4.4.1-coreos CoreOS 960.0.0 (Coeur Rouge) rkt://1.0.0+git9003f4a-dirty v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[gcr.io/google_containers/serve_hostname:1.1] 4529152} {[coreos.com/rkt/stage1-coreos:1.0.0+git9003f4a-dirty] 79951360} {[gcr.io/google_containers/exechealthz:1.0] 7314944} {[gcr.io/google_containers/skydns:2015-10-13-8c72f8c] 41955328} {[gcr.io/google_containers/kube2sky:1.12] 24685056} {[gcr.io/google_containers/etcd:2.0.9] 12827136}]}} | |
Feb 28 19:36:57.617: INFO: | |
Logging kubelet events for node spotter-kube-rkt-minion-yii0 | |
Feb 28 19:36:57.706: INFO: | |
Logging pods the kubelet thinks are on node spotter-kube-rkt-minion-yii0
Feb 28 19:36:57.961: INFO: kube-dns-v10-ucm9e started at <nil> (0 container statuses recorded) | |
Feb 28 19:36:58.254: INFO: | |
Latency metrics for node spotter-kube-rkt-minion-yii0 | |
Feb 28 19:36:58.254: INFO: | |
Logging node info for node spotter-kube-rkt-minion-yo39 | |
Feb 28 19:36:58.336: INFO: Node Info: &{{ } {spotter-kube-rkt-minion-yo39 /api/v1/nodes/spotter-kube-rkt-minion-yo39 28d9f8e6-de94-11e5-9ed9-42010af00002 459 0 2016-02-28 19:26:02 -0800 PST <nil> <nil> map[failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-minion-yo39 beta.kubernetes.io/instance-type:n1-standard-2 failure-domain.beta.kubernetes.io/region:us-east1] map[]} {10.245.2.0/24 5812756469575456144 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-minion-yo39 false} {map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] [{OutOfDisk False 2016-02-28 19:36:53 -0800 PST 2016-02-28 19:26:02 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 19:36:53 -0800 PST 2016-02-28 19:26:02 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.4} {ExternalIP 104.196.107.90}] {{10250}} {d67f51f98b40c228849e17dc0b2cff8f D67F51F9-8B40-C228-849E-17DC0B2CFF8F ba43a3b7-0093-4146-a294-411b5877c396 4.4.1-coreos CoreOS 960.0.0 (Coeur Rouge) rkt://1.0.0+git9003f4a-dirty v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[gcr.io/google_containers/serve_hostname:1.1] 4529152} {[coreos.com/rkt/stage1-coreos:1.0.0+git9003f4a-dirty] 79951360} {[gcr.io/google_containers/glbc:0.5.1] 203040256} {[gcr.io/google_containers/defaultbackend:1.0] 7729152}]}} | |
Feb 28 19:36:58.336: INFO: | |
Logging kubelet events for node spotter-kube-rkt-minion-yo39 | |
Feb 28 19:36:58.419: INFO: | |
Logging pods the kubelet thinks are on node spotter-kube-rkt-minion-yo39
Feb 28 19:36:58.669: INFO: l7-lb-controller-vg83c started at <nil> (0 container statuses recorded) | |
Feb 28 19:36:58.947: INFO: | |
Latency metrics for node spotter-kube-rkt-minion-yo39 | |
Feb 28 19:36:58.947: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-containers-g3kpl" for this suite. | |
• Failure [19.206 seconds] | |
Docker Containers | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:72 | |
should be able to override the image's default arguments (docker cmd) [Conformance] [It] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:50 | |
Expected error: | |
<*errors.errorString | 0xc208326b60>: { | |
s: "pod 'client-containers-a9351fd9-de95-11e5-a1fb-54ee75510eb4' terminated with failure: &{ExitCode:52 Signal:0 Reason:Error Message: StartedAt:2016-02-28 19:36:47 -0800 PST FinishedAt:0001-01-01 00:00:00 +0000 UTC ContainerID:rkt://8487e813-bb53-48e0-96d4-0fcdbaa9b36e:test-container}", | |
} | |
pod 'client-containers-a9351fd9-de95-11e5-a1fb-54ee75510eb4' terminated with failure: &{ExitCode:52 Signal:0 Reason:Error Message: StartedAt:2016-02-28 19:36:47 -0800 PST FinishedAt:0001-01-01 00:00:00 +0000 UTC ContainerID:rkt://8487e813-bb53-48e0-96d4-0fcdbaa9b36e:test-container} | |
not to have occurred | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/util.go:1455 | |
------------------------------ | |
Proxy version v1 | |
should proxy logs on node [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:56 | |
[BeforeEach] version v1 | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:83 | |
STEP: Creating a kubernetes client | |
Feb 28 19:37:04.369: INFO: >>> testContext.KubeConfig: /home/spotter/.kube/config | |
STEP: Building a namespace api object | |
Feb 28 19:37:04.457: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-r4tnj | |
Feb 28 19:37:04.541: INFO: Service account default in ns e2e-tests-proxy-r4tnj had 0 secrets, ignoring for 2s: <nil> | |
Feb 28 19:37:06.622: INFO: Service account default in ns e2e-tests-proxy-r4tnj with secrets found. (2.165173879s) | |
STEP: Waiting for a default service account to be provisioned in namespace | |
Feb 28 19:37:06.622: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-r4tnj | |
Feb 28 19:37:06.707: INFO: Service account default in ns e2e-tests-proxy-r4tnj with secrets found. (85.005347ms) | |
[It] should proxy logs on node [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:56 | |
Feb 28 19:37:06.876: INFO: /api/v1/proxy/nodes/spotter-kube-rkt-minion-8b1u/logs/: <pre> | |
<a href="lastlog">lastlog</a> | |
<a href="wtmp">wtmp</a> | |
<a href="journal/">journal/</a> | |
<a hr... (200; 82.025413ms) | |
Feb 28 19:37:06.962: INFO: /api/v1/proxy/nodes/spotter-kube-rkt-minion-8b1u/logs/: <pre> | |
<a href="lastlog">lastlog</a> | |
<a href="wtmp">wtmp</a> | |
<a href="journal/">journal/</a> | |
<a hr... (200; 85.967839ms) | |
Feb 28 19:37:07.046: INFO: /api/v1/proxy/nodes/spotter-kube-rkt-minion-8b1u/logs/: <pre> | |
<a href="lastlog">lastlog</a> | |
<a href="wtmp">wtmp</a> | |
<a href="journal/">journal/</a> | |
<a hr... (200; 84.002177ms) | |
Feb 28 19:37:07.131: INFO: /api/v1/proxy/nodes/spotter-kube-rkt-minion-8b1u/logs/: <pre> | |
<a href="lastlog">lastlog</a> | |
<a href="wtmp">wtmp</a> | |
<a href="journal/">journal/</a> | |
<a hr... (200; 84.244277ms) | |
Feb 28 19:37:07.216: INFO: /api/v1/proxy/nodes/spotter-kube-rkt-minion-8b1u/logs/: <pre> | |
<a href="lastlog">lastlog</a> | |
<a href="wtmp">wtmp</a> | |
<a href="journal/">journal/</a> | |
<a hr... (200; 84.821621ms) | |
Feb 28 19:37:07.300: INFO: /api/v1/proxy/nodes/spotter-kube-rkt-minion-8b1u/logs/: <pre> | |
<a href="lastlog">lastlog</a> | |
<a href="wtmp">wtmp</a> | |
<a href="journal/">journal/</a> | |
<a hr... (200; 83.882109ms) | |
Feb 28 19:37:07.384: INFO: /api/v1/proxy/nodes/spotter-kube-rkt-minion-8b1u/logs/: <pre> | |
<a href="lastlog">lastlog</a> | |
<a href="wtmp">wtmp</a> | |
<a href="journal/">journal/</a> | |
<a hr... (200; 84.69203ms) | |
Feb 28 19:37:07.467: INFO: /api/v1/proxy/nodes/spotter-kube-rkt-minion-8b1u/logs/: <pre> | |
<a href="lastlog">lastlog</a> | |
<a href="wtmp">wtmp</a> | |
<a href="journal/">journal/</a> | |
<a hr... (200; 82.600082ms) | |
Feb 28 19:37:07.552: INFO: /api/v1/proxy/nodes/spotter-kube-rkt-minion-8b1u/logs/: <pre> | |
<a href="lastlog">lastlog</a> | |
<a href="wtmp">wtmp</a> | |
<a href="journal/">journal/</a> | |
<a hr... (200; 85.409902ms) | |
Feb 28 19:37:07.639: INFO: /api/v1/proxy/nodes/spotter-kube-rkt-minion-8b1u/logs/: <pre> | |
<a href="lastlog">lastlog</a> | |
<a href="wtmp">wtmp</a> | |
<a href="journal/">journal/</a> | |
<a hr... (200; 86.760984ms) | |
Feb 28 19:37:07.721: INFO: /api/v1/proxy/nodes/spotter-kube-rkt-minion-8b1u/logs/: <pre> | |
<a href="lastlog">lastlog</a> | |
<a href="wtmp">wtmp</a> | |
<a href="journal/">journal/</a> | |
<a hr... (200; 81.417686ms) | |
Feb 28 19:37:07.807: INFO: /api/v1/proxy/nodes/spotter-kube-rkt-minion-8b1u/logs/: <pre> | |
<a href="lastlog">lastlog</a> | |
<a href="wtmp">wtmp</a> | |
<a href="journal/">journal/</a> | |
<a hr... (200; 86.906975ms) | |
Feb 28 19:37:07.889: INFO: /api/v1/proxy/nodes/spotter-kube-rkt-minion-8b1u/logs/: <pre> | |
<a href="lastlog">lastlog</a> | |
<a href="wtmp">wtmp</a> | |
<a href="journal/">journal/</a> | |
<a hr... (200; 81.597603ms) | |
Feb 28 19:37:07.975: INFO: /api/v1/proxy/nodes/spotter-kube-rkt-minion-8b1u/logs/: <pre> | |
<a href="lastlog">lastlog</a> | |
<a href="wtmp">wtmp</a> | |
<a href="journal/">journal/</a> | |
<a hr... (200; 85.36572ms) | |
Feb 28 19:37:08.059: INFO: /api/v1/proxy/nodes/spotter-kube-rkt-minion-8b1u/logs/: <pre> | |
<a href="lastlog">lastlog</a> | |
<a href="wtmp">wtmp</a> | |
<a href="journal/">journal/</a> | |
<a hr... (200; 84.503715ms) | |
W0228 19:37:08.171154 11176 request.go:627] Throttling request took 111.554074ms, request: https://104.196.32.11/api/v1/proxy/nodes/spotter-kube-rkt-minion-8b1u/logs/ | |
Feb 28 19:37:08.256: INFO: /api/v1/proxy/nodes/spotter-kube-rkt-minion-8b1u/logs/: <pre> | |
<a href="lastlog">lastlog</a> | |
<a href="wtmp">wtmp</a> | |
<a href="journal/">journal/</a> | |
<a hr... (200; 196.727059ms) | |
W0228 19:37:08.371125 11176 request.go:627] Throttling request took 114.759436ms, request: https://104.196.32.11/api/v1/proxy/nodes/spotter-kube-rkt-minion-8b1u/logs/ | |
Feb 28 19:37:08.451: INFO: /api/v1/proxy/nodes/spotter-kube-rkt-minion-8b1u/logs/: <pre> | |
<a href="lastlog">lastlog</a> | |
<a href="wtmp">wtmp</a> | |
<a href="journal/">journal/</a> | |
<a hr... (200; 195.514191ms) | |
W0228 19:37:08.571087 11176 request.go:627] Throttling request took 119.171698ms, request: https://104.196.32.11/api/v1/proxy/nodes/spotter-kube-rkt-minion-8b1u/logs/ | |
Feb 28 19:37:08.655: INFO: /api/v1/proxy/nodes/spotter-kube-rkt-minion-8b1u/logs/: <pre> | |
<a href="lastlog">lastlog</a> | |
<a href="wtmp">wtmp</a> | |
<a href="journal/">journal/</a> | |
<a hr... (200; 204.051351ms) | |
W0228 19:37:08.771141 11176 request.go:627] Throttling request took 115.136726ms, request: https://104.196.32.11/api/v1/proxy/nodes/spotter-kube-rkt-minion-8b1u/logs/ | |
Feb 28 19:37:08.859: INFO: /api/v1/proxy/nodes/spotter-kube-rkt-minion-8b1u/logs/: <pre> | |
<a href="lastlog">lastlog</a> | |
<a href="wtmp">wtmp</a> | |
<a href="journal/">journal/</a> | |
<a hr... (200; 203.939371ms) | |
W0228 19:37:08.971116 11176 request.go:627] Throttling request took 111.147015ms, request: https://104.196.32.11/api/v1/proxy/nodes/spotter-kube-rkt-minion-8b1u/logs/ | |
Feb 28 19:37:09.056: INFO: /api/v1/proxy/nodes/spotter-kube-rkt-minion-8b1u/logs/: <pre> | |
<a href="lastlog">lastlog</a> | |
<a href="wtmp">wtmp</a> | |
<a href="journal/">journal/</a> | |
<a hr... (200; 196.089238ms) | |
[AfterEach] version v1 | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:84 | |
Feb 28 19:37:09.056: INFO: Waiting up to 1m0s for all nodes to be ready | |
W0228 19:37:09.171102 11176 request.go:627] Throttling request took 114.939343ms, request: https://104.196.32.11/api/v1/nodes | |
STEP: Destroying namespace "e2e-tests-proxy-r4tnj" for this suite. | |
W0228 19:37:09.371080 11176 request.go:627] Throttling request took 112.868594ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-proxy-r4tnj | |
W0228 19:37:09.571140 11176 request.go:627] Throttling request took 116.150026ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-proxy-r4tnj | |
W0228 19:37:09.771143 11176 request.go:627] Throttling request took 120.174282ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-proxy-r4tnj/pods | |
• [SLOW TEST:5.485 seconds] | |
Proxy | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:40 | |
version v1 | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:39 | |
should proxy logs on node [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:56 | |
------------------------------ | |
Pod Disks | |
should schedule a pod w/ a RW PD, remove it, then schedule it on another host [Slow] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:117 | |
[BeforeEach] Pod Disks | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:83 | |
STEP: Creating a kubernetes client | |
Feb 28 19:37:09.854: INFO: >>> testContext.KubeConfig: /home/spotter/.kube/config | |
STEP: Building a namespace api object | |
Feb 28 19:37:09.942: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pod-disks-5e6h6 | |
Feb 28 19:37:10.027: INFO: Service account default in ns e2e-tests-pod-disks-5e6h6 with secrets found. (84.789658ms) | |
STEP: Waiting for a default service account to be provisioned in namespace | |
Feb 28 19:37:10.027: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pod-disks-5e6h6 | |
Feb 28 19:37:10.107: INFO: Service account default in ns e2e-tests-pod-disks-5e6h6 with secrets found. (80.061344ms) | |
[BeforeEach] Pod Disks | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:64 | |
[It] should schedule a pod w/ a RW PD, remove it, then schedule it on another host [Slow] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:117 | |
STEP: creating PD | |
Feb 28 19:37:17.089: INFO: Successfully created a new PD: "spotter-kube-rkt-b6bca2da-de95-11e5-a1fb-54ee75510eb4". | |
STEP: submitting host0Pod to kubernetes | |
Feb 28 19:37:17.183: INFO: Waiting up to 15m0s for pod pd-test-bad843c4-de95-11e5-a1fb-54ee75510eb4 status to be running | |
Feb 28 19:37:17.266: INFO: Waiting for pod pd-test-bad843c4-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (83.822058ms elapsed) | |
Feb 28 19:37:19.361: INFO: Waiting for pod pd-test-bad843c4-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (2.178447771s elapsed) | |
Feb 28 19:37:21.440: INFO: Waiting for pod pd-test-bad843c4-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (4.257488575s elapsed) | |
Feb 28 19:37:23.526: INFO: Waiting for pod pd-test-bad843c4-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (6.34307229s elapsed) | |
Feb 28 19:37:25.615: INFO: Waiting for pod pd-test-bad843c4-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (8.431922041s elapsed) | |
Feb 28 19:37:27.698: INFO: Waiting for pod pd-test-bad843c4-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (10.515055633s elapsed) | |
Feb 28 19:37:29.789: INFO: Waiting for pod pd-test-bad843c4-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (12.60607697s elapsed) | |
Feb 28 19:37:31.874: INFO: Waiting for pod pd-test-bad843c4-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (14.691265419s elapsed) | |
Feb 28 19:37:33.961: INFO: Waiting for pod pd-test-bad843c4-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (16.77800408s elapsed) | |
Feb 28 19:37:36.043: INFO: Waiting for pod pd-test-bad843c4-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (18.860571645s elapsed) | |
Feb 28 19:37:38.135: INFO: Waiting for pod pd-test-bad843c4-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (20.95242106s elapsed) | |
Feb 28 19:37:40.220: INFO: Waiting for pod pd-test-bad843c4-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (23.037430382s elapsed) | |
Feb 28 19:37:42.308: INFO: Waiting for pod pd-test-bad843c4-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (25.125202052s elapsed) | |
Feb 28 19:37:44.399: INFO: Waiting for pod pd-test-bad843c4-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (27.21594823s elapsed) | |
Feb 28 19:37:46.480: INFO: Waiting for pod pd-test-bad843c4-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (29.297423643s elapsed) | |
Feb 28 19:37:48.572: INFO: Waiting for pod pd-test-bad843c4-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (31.388959209s elapsed) | |
Feb 28 19:37:50.661: INFO: Waiting for pod pd-test-bad843c4-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (33.478346693s elapsed) | |
Feb 28 19:37:52.746: INFO: Waiting for pod pd-test-bad843c4-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (35.563144318s elapsed) | |
Feb 28 19:37:54.831: INFO: Found pod 'pd-test-bad843c4-de95-11e5-a1fb-54ee75510eb4' on node 'spotter-kube-rkt-minion-8b1u' | |
STEP: writing a file in the container | |
Feb 28 19:37:54.831: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config exec --namespace=e2e-tests-pod-disks-5e6h6 pd-test-bad843c4-de95-11e5-a1fb-54ee75510eb4 -c=mycontainer -- /bin/sh -c echo '7966716116149108126' > '/testpd1/tracker'' | |
Feb 28 19:37:56.486: INFO: Wrote value: 7966716116149108126 | |
STEP: deleting host0Pod | |
STEP: submitting host1Pod to kubernetes | |
Feb 28 19:37:56.667: INFO: Waiting up to 15m0s for pod pd-test-bad8440a-de95-11e5-a1fb-54ee75510eb4 status to be running | |
Feb 28 19:37:56.750: INFO: Waiting for pod pd-test-bad8440a-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (82.48147ms elapsed) | |
Feb 28 19:37:58.838: INFO: Waiting for pod pd-test-bad8440a-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (2.170494182s elapsed) | |
Feb 28 19:38:00.925: INFO: Waiting for pod pd-test-bad8440a-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (4.257920441s elapsed) | |
Feb 28 19:38:03.011: INFO: Waiting for pod pd-test-bad8440a-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (6.343911967s elapsed) | |
Feb 28 19:38:05.095: INFO: Waiting for pod pd-test-bad8440a-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (8.428123336s elapsed) | |
Feb 28 19:38:07.186: INFO: Waiting for pod pd-test-bad8440a-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (10.518421438s elapsed) | |
Feb 28 19:38:09.281: INFO: Waiting for pod pd-test-bad8440a-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (12.613270327s elapsed) | |
Feb 28 19:38:11.364: INFO: Waiting for pod pd-test-bad8440a-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (14.696433476s elapsed) | |
Feb 28 19:38:13.448: INFO: Waiting for pod pd-test-bad8440a-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (16.78096715s elapsed) | |
Feb 28 19:38:15.542: INFO: Waiting for pod pd-test-bad8440a-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (18.874916971s elapsed) | |
Feb 28 19:38:17.635: INFO: Waiting for pod pd-test-bad8440a-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (20.967919278s elapsed) | |
Feb 28 19:38:19.719: INFO: Waiting for pod pd-test-bad8440a-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (23.051295263s elapsed) | |
Feb 28 19:38:21.808: INFO: Waiting for pod pd-test-bad8440a-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (25.141033559s elapsed) | |
Feb 28 19:38:23.890: INFO: Waiting for pod pd-test-bad8440a-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (27.222784712s elapsed) | |
Feb 28 19:38:25.976: INFO: Waiting for pod pd-test-bad8440a-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (29.308720726s elapsed) | |
Feb 28 19:38:28.060: INFO: Waiting for pod pd-test-bad8440a-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (31.392977111s elapsed) | |
Feb 28 19:38:30.146: INFO: Waiting for pod pd-test-bad8440a-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (33.47875312s elapsed) | |
Feb 28 19:38:32.237: INFO: Waiting for pod pd-test-bad8440a-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (35.569318325s elapsed) | |
Feb 28 19:38:34.317: INFO: Waiting for pod pd-test-bad8440a-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (37.649947696s elapsed) | |
Feb 28 19:38:36.411: INFO: Waiting for pod pd-test-bad8440a-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (39.743580604s elapsed) | |
Feb 28 19:38:38.497: INFO: Waiting for pod pd-test-bad8440a-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (41.829623389s elapsed) | |
Feb 28 19:38:40.583: INFO: Waiting for pod pd-test-bad8440a-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (43.915459865s elapsed) | |
Feb 28 19:38:42.673: INFO: Waiting for pod pd-test-bad8440a-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (46.005699997s elapsed) | |
Feb 28 19:38:44.765: INFO: Waiting for pod pd-test-bad8440a-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (48.097648778s elapsed) | |
Feb 28 19:38:46.849: INFO: Waiting for pod pd-test-bad8440a-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (50.181258599s elapsed) | |
Feb 28 19:38:48.935: INFO: Waiting for pod pd-test-bad8440a-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (52.267234133s elapsed) | |
Feb 28 19:38:51.025: INFO: Waiting for pod pd-test-bad8440a-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (54.35769255s elapsed) | |
Feb 28 19:38:53.118: INFO: Waiting for pod pd-test-bad8440a-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (56.451122539s elapsed) | |
Feb 28 19:38:55.205: INFO: Waiting for pod pd-test-bad8440a-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (58.537480241s elapsed) | |
Feb 28 19:38:57.288: INFO: Waiting for pod pd-test-bad8440a-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (1m0.621037054s elapsed) | |
Feb 28 19:38:59.373: INFO: Waiting for pod pd-test-bad8440a-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (1m2.706113584s elapsed) | |
Feb 28 19:39:01.465: INFO: Waiting for pod pd-test-bad8440a-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (1m4.797721675s elapsed) | |
Feb 28 19:39:03.550: INFO: Waiting for pod pd-test-bad8440a-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (1m6.883032029s elapsed) | |
Feb 28 19:39:05.638: INFO: Waiting for pod pd-test-bad8440a-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (1m8.970571462s elapsed) | |
Feb 28 19:39:07.726: INFO: Waiting for pod pd-test-bad8440a-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (1m11.05854454s elapsed) | |
Feb 28 19:39:09.810: INFO: Waiting for pod pd-test-bad8440a-de95-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-5e6h6' status to be 'running'(found phase: "Pending", readiness: false) (1m13.143048686s elapsed) | |
Feb 28 19:39:11.904: INFO: Found pod 'pd-test-bad8440a-de95-11e5-a1fb-54ee75510eb4' on node 'spotter-kube-rkt-minion-yii0' | |
STEP: reading a file in the container | |
Feb 28 19:39:11.904: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config exec --namespace=e2e-tests-pod-disks-5e6h6 pd-test-bad8440a-de95-11e5-a1fb-54ee75510eb4 -c=mycontainer -- cat /testpd1/tracker' | |
Feb 28 19:39:13.607: INFO: Read value: 7966716116149108126 | |
STEP: deleting host1Pod | |
STEP: cleaning up PD-RW test environment | |
E0228 19:39:17.861605 11176 gce.go:405] GCE operation failed: googleapi: Error 400: Invalid value for field 'disk': 'spotter-kube-rkt-b6bca2da-de95-11e5-a1fb-54ee75510eb4'. | |
STEP: Waiting for PD "spotter-kube-rkt-b6bca2da-de95-11e5-a1fb-54ee75510eb4" to detach from "spotter-kube-rkt-minion-8b1u" | |
Feb 28 19:39:18.078: INFO: GCE PD "spotter-kube-rkt-b6bca2da-de95-11e5-a1fb-54ee75510eb4" appears to have successfully detached from "spotter-kube-rkt-minion-8b1u". | |
E0228 19:39:22.315633 11176 gce.go:405] GCE operation failed: googleapi: Error 400: Invalid value for field 'disk': 'spotter-kube-rkt-b6bca2da-de95-11e5-a1fb-54ee75510eb4'. | |
STEP: Waiting for PD "spotter-kube-rkt-b6bca2da-de95-11e5-a1fb-54ee75510eb4" to detach from "spotter-kube-rkt-minion-yii0" | |
Feb 28 19:39:22.539: INFO: GCE PD "spotter-kube-rkt-b6bca2da-de95-11e5-a1fb-54ee75510eb4" appears to have successfully detached from "spotter-kube-rkt-minion-yii0". | |
STEP: Deleting PD "spotter-kube-rkt-b6bca2da-de95-11e5-a1fb-54ee75510eb4" | |
Feb 28 19:39:22.901: INFO: Error deleting PD "spotter-kube-rkt-b6bca2da-de95-11e5-a1fb-54ee75510eb4": googleapi: Error 400: The disk resource 'spotter-kube-rkt-b6bca2da-de95-11e5-a1fb-54ee75510eb4' is already being used by 'spotter-kube-rkt-minion-yii0', resourceInUseByAnotherResource | |
Feb 28 19:39:22.901: INFO: Couldn't delete PD "spotter-kube-rkt-b6bca2da-de95-11e5-a1fb-54ee75510eb4". Sleeping 5 seconds (googleapi: Error 400: The disk resource 'spotter-kube-rkt-b6bca2da-de95-11e5-a1fb-54ee75510eb4' is already being used by 'spotter-kube-rkt-minion-yii0', resourceInUseByAnotherResource) | |
Feb 28 19:39:28.464: INFO: Error deleting PD "spotter-kube-rkt-b6bca2da-de95-11e5-a1fb-54ee75510eb4": googleapi: Error 400: The disk resource 'spotter-kube-rkt-b6bca2da-de95-11e5-a1fb-54ee75510eb4' is already being used by 'spotter-kube-rkt-minion-yii0', resourceInUseByAnotherResource | |
Feb 28 19:39:28.464: INFO: Couldn't delete PD "spotter-kube-rkt-b6bca2da-de95-11e5-a1fb-54ee75510eb4". Sleeping 5 seconds (googleapi: Error 400: The disk resource 'spotter-kube-rkt-b6bca2da-de95-11e5-a1fb-54ee75510eb4' is already being used by 'spotter-kube-rkt-minion-yii0', resourceInUseByAnotherResource) | |
Feb 28 19:39:33.822: INFO: Error deleting PD "spotter-kube-rkt-b6bca2da-de95-11e5-a1fb-54ee75510eb4": googleapi: Error 400: The disk resource 'spotter-kube-rkt-b6bca2da-de95-11e5-a1fb-54ee75510eb4' is already being used by 'spotter-kube-rkt-minion-yii0', resourceInUseByAnotherResource | |
Feb 28 19:39:33.823: INFO: Couldn't delete PD "spotter-kube-rkt-b6bca2da-de95-11e5-a1fb-54ee75510eb4". Sleeping 5 seconds (googleapi: Error 400: The disk resource 'spotter-kube-rkt-b6bca2da-de95-11e5-a1fb-54ee75510eb4' is already being used by 'spotter-kube-rkt-minion-yii0', resourceInUseByAnotherResource) | |
Feb 28 19:39:44.544: INFO: Successfully deleted PD "spotter-kube-rkt-b6bca2da-de95-11e5-a1fb-54ee75510eb4". | |
[AfterEach] Pod Disks | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:84 | |
Feb 28 19:39:44.545: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-pod-disks-5e6h6" for this suite. | |
• [SLOW TEST:160.111 seconds] | |
Pod Disks | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:267 | |
should schedule a pod w/ a RW PD, remove it, then schedule it on another host [Slow] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:117 | |
------------------------------ | |
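The PD cleanup above shows a delete-with-retry pattern: three "Couldn't delete PD … Sleeping 5 seconds" failures with `resourceInUseByAnotherResource` while the disk was still detaching, then success. A minimal sketch of that pattern, with a simulated delete call standing in for the real GCE cloud-provider API (the error value and `simulate` helper are illustrative, not the framework's actual code):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// Stand-in for the GCE "resourceInUseByAnotherResource" error seen in the
// log while the disk was still attached to a node.
var errInUse = errors.New("googleapi: Error 400: resourceInUseByAnotherResource")

// deleteWithRetry retries del until it succeeds or attempts run out,
// sleeping between tries -- mirroring the log's repeated
// "Couldn't delete PD ... Sleeping 5 seconds" lines before success.
func deleteWithRetry(del func() error, attempts int, sleep time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = del(); err == nil {
			return nil
		}
		fmt.Printf("Couldn't delete PD. Sleeping (%v)\n", err)
		time.Sleep(sleep)
	}
	return err
}

// simulate fails the first three deletes (disk still attached), then
// succeeds, as in the log. Returns how many calls were made.
func simulate() (int, error) {
	calls := 0
	err := deleteWithRetry(func() error {
		calls++
		if calls <= 3 {
			return errInUse
		}
		return nil
	}, 10, time.Millisecond)
	return calls, err
}

func main() {
	calls, err := simulate()
	fmt.Println("calls:", calls, "err:", err)
}
```

As in the log, the loop treats the in-use error as transient rather than fatal, which is why the test eventually reports "Successfully deleted PD".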
KubeletManagedEtcHosts | |
should test kubelet managed /etc/hosts file | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_etc_hosts.go:55 | |
[BeforeEach] KubeletManagedEtcHosts | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:83 | |
STEP: Creating a kubernetes client | |
Feb 28 19:39:49.965: INFO: >>> testContext.KubeConfig: /home/spotter/.kube/config | |
STEP: Building a namespace api object | |
Feb 28 19:39:50.056: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-e2e-kubelet-etc-hosts-zn5wl | |
Feb 28 19:39:50.142: INFO: Service account default in ns e2e-tests-e2e-kubelet-etc-hosts-zn5wl had 0 secrets, ignoring for 2s: <nil> | |
Feb 28 19:39:52.226: INFO: Service account default in ns e2e-tests-e2e-kubelet-etc-hosts-zn5wl with secrets found. (2.169246947s) | |
STEP: Waiting for a default service account to be provisioned in namespace | |
Feb 28 19:39:52.226: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-e2e-kubelet-etc-hosts-zn5wl | |
Feb 28 19:39:52.310: INFO: Service account default in ns e2e-tests-e2e-kubelet-etc-hosts-zn5wl with secrets found. (84.770947ms) | |
[It] should test kubelet managed /etc/hosts file | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_etc_hosts.go:55 | |
STEP: Setting up the test | |
STEP: Creating hostNetwork=false pod | |
Feb 28 19:39:52.405: INFO: Waiting up to 5m0s for pod test-pod status to be running | |
Feb 28 19:39:52.490: INFO: Waiting for pod test-pod in namespace 'e2e-tests-e2e-kubelet-etc-hosts-zn5wl' status to be 'running'(found phase: "Pending", readiness: false) (85.082683ms elapsed) | |
Feb 28 19:39:54.573: INFO: Waiting for pod test-pod in namespace 'e2e-tests-e2e-kubelet-etc-hosts-zn5wl' status to be 'running'(found phase: "Pending", readiness: false) (2.167723269s elapsed) | |
Feb 28 19:39:56.658: INFO: Waiting for pod test-pod in namespace 'e2e-tests-e2e-kubelet-etc-hosts-zn5wl' status to be 'running'(found phase: "Pending", readiness: false) (4.253337747s elapsed) | |
Feb 28 19:39:58.746: INFO: Waiting for pod test-pod in namespace 'e2e-tests-e2e-kubelet-etc-hosts-zn5wl' status to be 'running'(found phase: "Pending", readiness: false) (6.340717154s elapsed) | |
Feb 28 19:40:00.830: INFO: Waiting for pod test-pod in namespace 'e2e-tests-e2e-kubelet-etc-hosts-zn5wl' status to be 'running'(found phase: "Pending", readiness: false) (8.425438616s elapsed) | |
Feb 28 19:40:02.913: INFO: Found pod 'test-pod' on node 'spotter-kube-rkt-minion-yo39' | |
STEP: Creating hostNetwork=true pod | |
Feb 28 19:40:03.087: INFO: Waiting up to 5m0s for pod test-host-network-pod status to be running | |
Feb 28 19:40:03.171: INFO: Waiting for pod test-host-network-pod in namespace 'e2e-tests-e2e-kubelet-etc-hosts-zn5wl' status to be 'running'(found phase: "Pending", readiness: false) (83.566718ms elapsed) | |
Feb 28 19:40:05.256: INFO: Waiting for pod test-host-network-pod in namespace 'e2e-tests-e2e-kubelet-etc-hosts-zn5wl' status to be 'running'(found phase: "Pending", readiness: false) (2.168352842s elapsed) | |
Feb 28 19:40:07.339: INFO: Found pod 'test-host-network-pod' on node 'spotter-kube-rkt-minion-8b1u' | |
STEP: Running the test | |
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false | |
Feb 28 19:40:07.421: INFO: Asynchronously running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config exec --namespace=e2e-tests-e2e-kubelet-etc-hosts-zn5wl test-pod -c busybox-1 cat /etc/hosts' | |
Feb 28 19:40:07.422: INFO: reading from `kubectl exec` command's stdout | |
Feb 28 19:40:09.065: FAIL: /etc/hosts file should be kubelet managed, but is not: "# Generated by rkt\n\n127.0.0.1\tlocalhost\n::1\tlocalhost ip6-localhost ip6-loopback\nfe00::0\tip6-localnet\nfe00::0\tip6-mcastprefix\nfe00::1\tip6-allnodes\nfe00::2\tip6-allrouters\n" | |
[AfterEach] KubeletManagedEtcHosts | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:84 | |
STEP: Collecting events from namespace "e2e-tests-e2e-kubelet-etc-hosts-zn5wl". | |
Feb 28 19:40:09.152: INFO: At 2016-02-28 19:39:52 -0800 PST - event for test-pod: {kubelet spotter-kube-rkt-minion-yo39} Pulling: pulling image "gcr.io/google_containers/netexec:1.4" | |
Feb 28 19:40:09.152: INFO: At 2016-02-28 19:39:52 -0800 PST - event for test-pod: {default-scheduler } Scheduled: Successfully assigned test-pod to spotter-kube-rkt-minion-yo39 | |
Feb 28 19:40:09.152: INFO: At 2016-02-28 19:39:57 -0800 PST - event for test-pod: {kubelet spotter-kube-rkt-minion-yo39} Pulled: Container image "gcr.io/google_containers/netexec:1.4" already present on machine | |
Feb 28 19:40:09.152: INFO: At 2016-02-28 19:39:57 -0800 PST - event for test-pod: {kubelet spotter-kube-rkt-minion-yo39} Pulled: Successfully pulled image "gcr.io/google_containers/netexec:1.4" | |
Feb 28 19:40:09.152: INFO: At 2016-02-28 19:40:00 -0800 PST - event for test-pod: {kubelet spotter-kube-rkt-minion-yo39} Started: Started with rkt id 5507a42e | |
Feb 28 19:40:09.152: INFO: At 2016-02-28 19:40:00 -0800 PST - event for test-pod: {kubelet spotter-kube-rkt-minion-yo39} Created: Created with rkt id 5507a42e | |
Feb 28 19:40:09.152: INFO: At 2016-02-28 19:40:03 -0800 PST - event for test-host-network-pod: {default-scheduler } Scheduled: Successfully assigned test-host-network-pod to spotter-kube-rkt-minion-8b1u | |
Feb 28 19:40:09.152: INFO: At 2016-02-28 19:40:03 -0800 PST - event for test-host-network-pod: {kubelet spotter-kube-rkt-minion-8b1u} Pulled: Container image "gcr.io/google_containers/netexec:1.4" already present on machine | |
Feb 28 19:40:09.152: INFO: At 2016-02-28 19:40:05 -0800 PST - event for test-host-network-pod: {kubelet spotter-kube-rkt-minion-8b1u} Created: Created with rkt id 0f4a93c7 | |
Feb 28 19:40:09.152: INFO: At 2016-02-28 19:40:05 -0800 PST - event for test-host-network-pod: {kubelet spotter-kube-rkt-minion-8b1u} Started: Started with rkt id 0f4a93c7 | |
Feb 28 19:40:09.320: INFO: POD NODE PHASE GRACE CONDITIONS | |
Feb 28 19:40:09.320: INFO: test-host-network-pod spotter-kube-rkt-minion-8b1u Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:40:06 -0800 PST }] | |
Feb 28 19:40:09.320: INFO: test-pod spotter-kube-rkt-minion-yo39 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:40:01 -0800 PST }] | |
Feb 28 19:40:09.320: INFO: etcd-server-events-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:25:46 -0800 PST }] | |
Feb 28 19:40:09.320: INFO: etcd-server-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:25:44 -0800 PST }] | |
Feb 28 19:40:09.320: INFO: kube-apiserver-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:25:59 -0800 PST }] | |
Feb 28 19:40:09.320: INFO: kube-controller-manager-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:26:19 -0800 PST }] | |
Feb 28 19:40:09.320: INFO: kube-dns-v10-ucm9e spotter-kube-rkt-minion-yii0 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:27:01 -0800 PST }] | |
Feb 28 19:40:09.320: INFO: kube-scheduler-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:25:59 -0800 PST }] | |
Feb 28 19:40:09.320: INFO: kubernetes-dashboard-v0.1.0-xx4kj spotter-kube-rkt-minion-8b1u Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:26:43 -0800 PST }] | |
Feb 28 19:40:09.320: INFO: l7-lb-controller-vg83c spotter-kube-rkt-minion-yo39 Running [{Ready False 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:27:05 -0800 PST ContainersNotReady containers with unready status: [default-http-backend l7-lb-controller]}] | |
Feb 28 19:40:09.320: INFO: | |
Feb 28 19:40:09.404: INFO: | |
Logging node info for node spotter-kube-rkt-master | |
Feb 28 19:40:09.485: INFO: Node Info: &{{ } {spotter-kube-rkt-master /api/v1/nodes/spotter-kube-rkt-master 28a99cb5-de94-11e5-9ed9-42010af00002 577 0 2016-02-28 19:26:02 -0800 PST <nil> <nil> map[beta.kubernetes.io/instance-type:n1-standard-2 failure-domain.beta.kubernetes.io/region:us-east1 failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-master] map[]} {10.245.0.0/24 1057893855773431411 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-master true} {map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] [{OutOfDisk False 2016-02-28 19:40:03 -0800 PST 2016-02-28 19:26:02 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 19:40:03 -0800 PST 2016-02-28 19:26:02 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.2} {ExternalIP 104.196.32.11}] {{10250}} {18f8f8ae3bfd63d33ca6152c840ac772 18F8F8AE-3BFD-63D3-3CA6-152C840AC772 85ba1a68-a60b-4788-8d0d-31581d8a1dbc 4.4.1-coreos CoreOS 960.0.0 (Coeur Rouge) docker://1.10.0 v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[gcr.io/google_containers/kube-apiserver:e75cb1e68585bef333c384799aa55b36] 62275978} {[gcr.io/google_containers/kube-controller-manager:5ceda9f0fcc85cf5112fd4eab97edf76] 55494410} {[gcr.io/google_containers/kube-scheduler:34f7eca4d44271cb192f0c32a79fb31f] 36991674} {[python:2.7-slim-pyyaml] 205046931} {[gcr.io/google_containers/pause:2.0] 350164} {[gcr.io/google_containers/kube-registry-proxy:0.3] 151205002} {[gcr.io/google_containers/etcd:2.0.12] 15265152} {[gcr.io/google_containers/serve_hostname:1.1] 4522409}]}} | |
Feb 28 19:40:09.485: INFO: | |
Logging kubelet events for node spotter-kube-rkt-master | |
Feb 28 19:40:09.571: INFO: | |
Logging pods the kubelet thinks is on node spotter-kube-rkt-master | |
Feb 28 19:40:09.738: INFO: Unable to retrieve kubelet pods for node spotter-kube-rkt-master | |
Feb 28 19:40:09.738: INFO: | |
Logging node info for node spotter-kube-rkt-minion-8b1u | |
Feb 28 19:40:09.820: INFO: Node Info: &{{ } {spotter-kube-rkt-minion-8b1u /api/v1/nodes/spotter-kube-rkt-minion-8b1u 2e5bf2ae-de94-11e5-9ed9-42010af00002 578 0 2016-02-28 19:26:11 -0800 PST <nil> <nil> map[kubernetes.io/hostname:spotter-kube-rkt-minion-8b1u beta.kubernetes.io/instance-type:n1-standard-2 failure-domain.beta.kubernetes.io/region:us-east1 failure-domain.beta.kubernetes.io/zone:us-east1-b] map[]} {10.245.1.0/24 9623675355699996544 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-minion-8b1u false} {map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] map[memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI}] [{OutOfDisk False 2016-02-28 19:40:04 -0800 PST 2016-02-28 19:26:11 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 19:40:04 -0800 PST 2016-02-28 19:26:11 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.5} {ExternalIP 104.196.7.86}] {{10250}} {374318fe80761a3e4e3d72f72bd62802 374318FE-8076-1A3E-4E3D-72F72BD62802 b4070cab-7518-489e-9e3e-c5003667ca85 4.4.1-coreos CoreOS 960.0.0 (Coeur Rouge) rkt://1.0.0+git9003f4a-dirty v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[gcr.io/google_containers/busybox:1.24] 1315840} {[coreos.com/rkt/stage1-coreos:1.0.0+git9003f4a-dirty] 79951360} {[gcr.io/google_containers/eptest:0.1] 2977792} {[gcr.io/google_containers/serve_hostname:1.1] 4529152} {[gcr.io/google_containers/mounttest:0.6] 2090496} {[gcr.io/google_containers/netexec:1.4] 7513088} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v0.1.0] 35423744}]}} | |
Feb 28 19:40:09.820: INFO: | |
Logging kubelet events for node spotter-kube-rkt-minion-8b1u | |
Feb 28 19:40:09.906: INFO: | |
Logging pods the kubelet thinks is on node spotter-kube-rkt-minion-8b1u | |
Feb 28 19:40:10.157: INFO: test-host-network-pod started at <nil> (0 container statuses recorded) | |
Feb 28 19:40:10.157: INFO: kubernetes-dashboard-v0.1.0-xx4kj started at <nil> (0 container statuses recorded) | |
Feb 28 19:40:10.459: INFO: | |
Latency metrics for node spotter-kube-rkt-minion-8b1u | |
Feb 28 19:40:10.459: INFO: | |
Logging node info for node spotter-kube-rkt-minion-yii0 | |
Feb 28 19:40:10.546: INFO: Node Info: &{{ } {spotter-kube-rkt-minion-yii0 /api/v1/nodes/spotter-kube-rkt-minion-yii0 28de6c1f-de94-11e5-9ed9-42010af00002 559 0 2016-02-28 19:26:02 -0800 PST <nil> <nil> map[beta.kubernetes.io/instance-type:n1-standard-2 failure-domain.beta.kubernetes.io/region:us-east1 failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-minion-yii0] map[]} {10.245.3.0/24 9944607117135381947 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-minion-yii0 false} {map[memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI}] map[pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI}] [{OutOfDisk False 2016-02-28 19:39:44 -0800 PST 2016-02-28 19:26:02 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 19:39:44 -0800 PST 2016-02-28 19:26:02 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.3} {ExternalIP 104.196.102.116}] {{10250}} {b38c5389751fa1f6a57a3e0bc169499a B38C5389-751F-A1F6-A57A-3E0BC169499A 301ef7e3-2066-4c44-95be-e4e1565c746a 4.4.1-coreos CoreOS 960.0.0 (Coeur Rouge) rkt://1.0.0+git9003f4a-dirty v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[gcr.io/google_containers/busybox:1.24] 1315840} {[coreos.com/rkt/stage1-coreos:1.0.0+git9003f4a-dirty] 79951360} {[gcr.io/google_containers/serve_hostname:1.1] 4529152} {[gcr.io/google_containers/exechealthz:1.0] 7314944} {[gcr.io/google_containers/skydns:2015-10-13-8c72f8c] 41955328} {[gcr.io/google_containers/kube2sky:1.12] 24685056} {[gcr.io/google_containers/etcd:2.0.9] 12827136}]}} | |
Feb 28 19:40:10.546: INFO: | |
Logging kubelet events for node spotter-kube-rkt-minion-yii0 | |
Feb 28 19:40:10.634: INFO: | |
Logging pods the kubelet thinks is on node spotter-kube-rkt-minion-yii0 | |
Feb 28 19:40:17.812: INFO: Unable to retrieve kubelet pods for node spotter-kube-rkt-minion-yii0 | |
Feb 28 19:40:17.812: INFO: | |
Logging node info for node spotter-kube-rkt-minion-yo39 | |
Feb 28 19:40:17.896: INFO: Node Info: &{{ } {spotter-kube-rkt-minion-yo39 /api/v1/nodes/spotter-kube-rkt-minion-yo39 28d9f8e6-de94-11e5-9ed9-42010af00002 583 0 2016-02-28 19:26:02 -0800 PST <nil> <nil> map[failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-minion-yo39 beta.kubernetes.io/instance-type:n1-standard-2 failure-domain.beta.kubernetes.io/region:us-east1] map[]} {10.245.2.0/24 5812756469575456144 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-minion-yo39 false} {map[memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI}] map[memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI}] [{OutOfDisk False 2016-02-28 19:40:16 -0800 PST 2016-02-28 19:39:36 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 19:40:16 -0800 PST 2016-02-28 19:39:36 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.4} {ExternalIP 104.196.107.90}] {{10250}} {d67f51f98b40c228849e17dc0b2cff8f D67F51F9-8B40-C228-849E-17DC0B2CFF8F 37634038-5d12-46d7-8197-fc5e74a7aef8 4.4.1-coreos CoreOS 970.1.0 (Coeur Rouge) rkt://1.0.0+git9003f4a-dirty v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[gcr.io/google_containers/netexec:1.4] 7513088} {[coreos.com/rkt/stage1-coreos:1.0.0+git9003f4a-dirty] 79951360} {[gcr.io/google_containers/serve_hostname:1.1] 4529152} {[gcr.io/google_containers/glbc:0.5.1] 203040256} {[gcr.io/google_containers/defaultbackend:1.0] 7729152}]}} | |
Feb 28 19:40:17.896: INFO: | |
Logging kubelet events for node spotter-kube-rkt-minion-yo39 | |
Feb 28 19:40:17.981: INFO: | |
Logging pods the kubelet thinks is on node spotter-kube-rkt-minion-yo39 | |
Feb 28 19:40:18.253: INFO: test-pod started at <nil> (0 container statuses recorded) | |
Feb 28 19:40:18.608: INFO: | |
Latency metrics for node spotter-kube-rkt-minion-yo39 | |
Feb 28 19:40:18.608: INFO: {Operation:update Method:pod_worker_latency_microseconds Quantile:0.99 Latency:10.12277s} | |
Feb 28 19:40:18.609: INFO: {Operation:update Method:pod_worker_latency_microseconds Quantile:0.5 Latency:10.12277s} | |
Feb 28 19:40:18.609: INFO: {Operation:update Method:pod_worker_latency_microseconds Quantile:0.9 Latency:10.12277s} | |
Feb 28 19:40:18.609: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-zn5wl" for this suite. | |
• Failure [34.075 seconds] | |
KubeletManagedEtcHosts | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_etc_hosts.go:56 | |
should test kubelet managed /etc/hosts file [It] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_etc_hosts.go:55 | |
Feb 28 19:40:09.065: /etc/hosts file should be kubelet managed, but is not: "# Generated by rkt\n\n127.0.0.1\tlocalhost\n::1\tlocalhost ip6-localhost ip6-loopback\nfe00::0\tip6-localnet\nfe00::0\tip6-mcastprefix\nfe00::1\tip6-allnodes\nfe00::2\tip6-allrouters\n" | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_etc_hosts.go:114 | |
------------------------------ | |
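The failure above is a content check: the test read `/etc/hosts` from the container and found a file beginning "# Generated by rkt" instead of one written by the kubelet. A sketch of that check is below; the kubelet marker string is an assumption based on the kubelet's managed-hosts header, not taken from this log:

```go
package main

import (
	"fmt"
	"strings"
)

// isKubeletManaged reports whether an /etc/hosts body looks like it was
// written by the kubelet rather than by the container runtime. The marker
// prefix is an assumed kubelet header; the failing pod in the log instead
// had a file beginning "# Generated by rkt".
func isKubeletManaged(etcHosts string) bool {
	return strings.HasPrefix(etcHosts, "# Kubernetes-managed hosts file")
}

func main() {
	// The rkt-generated file from the log's failure message.
	rktHosts := "# Generated by rkt\n\n127.0.0.1\tlocalhost\n"
	fmt.Println("kubelet-managed:", isKubeletManaged(rktHosts))
}
```

Under this check, the rkt runtime's own hosts file fails for a hostNetwork=false pod, which is exactly the FAIL recorded at 19:40:09.065.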
Kubectl client Update Demo | |
should create and stop a replication controller [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:128 | |
[BeforeEach] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:83 | |
STEP: Creating a kubernetes client | |
Feb 28 19:40:24.041: INFO: >>> testContext.KubeConfig: /home/spotter/.kube/config | |
STEP: Building a namespace api object | |
Feb 28 19:40:24.130: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-s4as6 | |
Feb 28 19:40:24.219: INFO: Service account default in ns e2e-tests-kubectl-s4as6 with secrets found. (88.998438ms) | |
STEP: Waiting for a default service account to be provisioned in namespace | |
Feb 28 19:40:24.219: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-s4as6 | |
Feb 28 19:40:24.305: INFO: Service account default in ns e2e-tests-kubectl-s4as6 with secrets found. (85.899656ms) | |
[BeforeEach] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:113 | |
[BeforeEach] Update Demo | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:121 | |
[It] should create and stop a replication controller [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:128 | |
STEP: creating a replication controller | |
Feb 28 19:40:24.305: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config create -f /home/spotter/gocode/src/k8s.io/y-kubernetes/docs/user-guide/update-demo/nautilus-rc.yaml --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:40:25.072: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" created\n" | |
Feb 28 19:40:25.072: INFO: stderr: "" | |
STEP: waiting for all containers in name=update-demo pods to come up. | |
Feb 28 19:40:25.073: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:40:25.729: INFO: stdout: "update-demo-nautilus-qljwd update-demo-nautilus-rm9s6 " | |
Feb 28 19:40:25.729: INFO: stderr: "" | |
Feb 28 19:40:25.729: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods update-demo-nautilus-qljwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:40:26.397: INFO: stdout: "" | |
Feb 28 19:40:26.397: INFO: stderr: "" | |
Feb 28 19:40:26.397: INFO: update-demo-nautilus-qljwd is created but not running | |
Feb 28 19:40:31.397: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:40:32.058: INFO: stdout: "update-demo-nautilus-qljwd update-demo-nautilus-rm9s6 " | |
Feb 28 19:40:32.058: INFO: stderr: "" | |
Feb 28 19:40:32.058: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods update-demo-nautilus-qljwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:40:32.700: INFO: stdout: "" | |
Feb 28 19:40:32.700: INFO: stderr: "" | |
Feb 28 19:40:32.700: INFO: update-demo-nautilus-qljwd is created but not running | |
Feb 28 19:40:37.700: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:40:38.367: INFO: stdout: "update-demo-nautilus-qljwd update-demo-nautilus-rm9s6 " | |
Feb 28 19:40:38.367: INFO: stderr: "" | |
Feb 28 19:40:38.367: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods update-demo-nautilus-qljwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:40:39.006: INFO: stdout: "" | |
Feb 28 19:40:39.006: INFO: stderr: "" | |
Feb 28 19:40:39.006: INFO: update-demo-nautilus-qljwd is created but not running | |
Feb 28 19:40:44.006: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:40:44.657: INFO: stdout: "update-demo-nautilus-qljwd update-demo-nautilus-rm9s6 " | |
Feb 28 19:40:44.657: INFO: stderr: "" | |
Feb 28 19:40:44.657: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods update-demo-nautilus-qljwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:40:45.309: INFO: stdout: "" | |
Feb 28 19:40:45.309: INFO: stderr: "" | |
Feb 28 19:40:45.309: INFO: update-demo-nautilus-qljwd is created but not running | |
Feb 28 19:40:50.309: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:40:50.967: INFO: stdout: "update-demo-nautilus-qljwd update-demo-nautilus-rm9s6 " | |
Feb 28 19:40:50.967: INFO: stderr: "" | |
Feb 28 19:40:50.967: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods update-demo-nautilus-qljwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:40:51.614: INFO: stdout: "" | |
Feb 28 19:40:51.614: INFO: stderr: "" | |
Feb 28 19:40:51.614: INFO: update-demo-nautilus-qljwd is created but not running | |
Feb 28 19:40:56.614: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:40:57.353: INFO: stdout: "update-demo-nautilus-qljwd update-demo-nautilus-rm9s6 " | |
Feb 28 19:40:57.353: INFO: stderr: "" | |
Feb 28 19:40:57.353: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods update-demo-nautilus-qljwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:40:58.006: INFO: stdout: "" | |
Feb 28 19:40:58.006: INFO: stderr: "" | |
Feb 28 19:40:58.006: INFO: update-demo-nautilus-qljwd is created but not running | |
Feb 28 19:41:03.006: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:41:03.654: INFO: stdout: "update-demo-nautilus-qljwd update-demo-nautilus-rm9s6 " | |
Feb 28 19:41:03.654: INFO: stderr: "" | |
Feb 28 19:41:03.655: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods update-demo-nautilus-qljwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:41:04.310: INFO: stdout: "" | |
Feb 28 19:41:04.310: INFO: stderr: "" | |
Feb 28 19:41:04.310: INFO: update-demo-nautilus-qljwd is created but not running | |
Feb 28 19:41:09.310: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:41:09.967: INFO: stdout: "update-demo-nautilus-qljwd update-demo-nautilus-rm9s6 " | |
Feb 28 19:41:09.967: INFO: stderr: "" | |
Feb 28 19:41:09.967: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods update-demo-nautilus-qljwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:41:10.623: INFO: stdout: "" | |
Feb 28 19:41:10.623: INFO: stderr: "" | |
Feb 28 19:41:10.623: INFO: update-demo-nautilus-qljwd is created but not running | |
Feb 28 19:41:15.623: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:41:16.275: INFO: stdout: "update-demo-nautilus-qljwd update-demo-nautilus-rm9s6 " | |
Feb 28 19:41:16.275: INFO: stderr: "" | |
Feb 28 19:41:16.275: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods update-demo-nautilus-qljwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:41:16.937: INFO: stdout: "" | |
Feb 28 19:41:16.937: INFO: stderr: "" | |
Feb 28 19:41:16.937: INFO: update-demo-nautilus-qljwd is created but not running | |
Feb 28 19:41:21.937: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:41:22.595: INFO: stdout: "update-demo-nautilus-qljwd update-demo-nautilus-rm9s6 " | |
Feb 28 19:41:22.595: INFO: stderr: "" | |
Feb 28 19:41:22.595: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods update-demo-nautilus-qljwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:41:23.249: INFO: stdout: "" | |
Feb 28 19:41:23.249: INFO: stderr: "" | |
Feb 28 19:41:23.249: INFO: update-demo-nautilus-qljwd is created but not running | |
Feb 28 19:41:28.249: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:41:28.900: INFO: stdout: "update-demo-nautilus-qljwd update-demo-nautilus-rm9s6 " | |
Feb 28 19:41:28.900: INFO: stderr: "" | |
Feb 28 19:41:28.900: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods update-demo-nautilus-qljwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:41:29.556: INFO: stdout: "" | |
Feb 28 19:41:29.556: INFO: stderr: "" | |
Feb 28 19:41:29.556: INFO: update-demo-nautilus-qljwd is created but not running | |
Feb 28 19:41:34.556: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:41:35.210: INFO: stdout: "update-demo-nautilus-qljwd update-demo-nautilus-rm9s6 " | |
Feb 28 19:41:35.211: INFO: stderr: "" | |
Feb 28 19:41:35.211: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods update-demo-nautilus-qljwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:41:35.861: INFO: stdout: "" | |
Feb 28 19:41:35.861: INFO: stderr: "" | |
Feb 28 19:41:35.861: INFO: update-demo-nautilus-qljwd is created but not running | |
Feb 28 19:41:40.862: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:41:41.502: INFO: stdout: "update-demo-nautilus-qljwd update-demo-nautilus-rm9s6 " | |
Feb 28 19:41:41.502: INFO: stderr: "" | |
Feb 28 19:41:41.502: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods update-demo-nautilus-qljwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:41:42.140: INFO: stdout: "" | |
Feb 28 19:41:42.140: INFO: stderr: "" | |
Feb 28 19:41:42.140: INFO: update-demo-nautilus-qljwd is created but not running | |
Feb 28 19:41:47.141: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:41:47.799: INFO: stdout: "update-demo-nautilus-qljwd update-demo-nautilus-rm9s6 " | |
Feb 28 19:41:47.799: INFO: stderr: "" | |
Feb 28 19:41:47.799: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods update-demo-nautilus-qljwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:41:48.471: INFO: stdout: "" | |
Feb 28 19:41:48.471: INFO: stderr: "" | |
Feb 28 19:41:48.471: INFO: update-demo-nautilus-qljwd is created but not running | |
Feb 28 19:41:53.471: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:41:54.121: INFO: stdout: "update-demo-nautilus-qljwd update-demo-nautilus-rm9s6 " | |
Feb 28 19:41:54.121: INFO: stderr: "" | |
Feb 28 19:41:54.122: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods update-demo-nautilus-qljwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:41:54.780: INFO: stdout: "" | |
Feb 28 19:41:54.780: INFO: stderr: "" | |
Feb 28 19:41:54.780: INFO: update-demo-nautilus-qljwd is created but not running | |
Feb 28 19:41:59.780: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:42:00.444: INFO: stdout: "update-demo-nautilus-qljwd update-demo-nautilus-rm9s6 " | |
Feb 28 19:42:00.444: INFO: stderr: "" | |
Feb 28 19:42:00.444: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods update-demo-nautilus-qljwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:42:01.110: INFO: stdout: "" | |
Feb 28 19:42:01.110: INFO: stderr: "" | |
Feb 28 19:42:01.110: INFO: update-demo-nautilus-qljwd is created but not running | |
Feb 28 19:42:06.111: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:42:06.756: INFO: stdout: "update-demo-nautilus-qljwd update-demo-nautilus-rm9s6 " | |
Feb 28 19:42:06.756: INFO: stderr: "" | |
Feb 28 19:42:06.756: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods update-demo-nautilus-qljwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:42:07.398: INFO: stdout: "" | |
Feb 28 19:42:07.398: INFO: stderr: "" | |
Feb 28 19:42:07.398: INFO: update-demo-nautilus-qljwd is created but not running | |
Feb 28 19:42:12.398: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:42:13.048: INFO: stdout: "update-demo-nautilus-qljwd update-demo-nautilus-rm9s6 " | |
Feb 28 19:42:13.048: INFO: stderr: "" | |
Feb 28 19:42:13.048: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods update-demo-nautilus-qljwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:42:13.714: INFO: stdout: "" | |
Feb 28 19:42:13.714: INFO: stderr: "" | |
Feb 28 19:42:13.714: INFO: update-demo-nautilus-qljwd is created but not running | |
Feb 28 19:42:18.714: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:42:19.351: INFO: stdout: "update-demo-nautilus-qljwd update-demo-nautilus-rm9s6 " | |
Feb 28 19:42:19.351: INFO: stderr: "" | |
Feb 28 19:42:19.351: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods update-demo-nautilus-qljwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:42:20.001: INFO: stdout: "" | |
Feb 28 19:42:20.001: INFO: stderr: "" | |
Feb 28 19:42:20.001: INFO: update-demo-nautilus-qljwd is created but not running | |
Feb 28 19:42:25.002: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:42:25.644: INFO: stdout: "update-demo-nautilus-qljwd update-demo-nautilus-rm9s6 " | |
Feb 28 19:42:25.644: INFO: stderr: "" | |
Feb 28 19:42:25.644: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods update-demo-nautilus-qljwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:42:26.294: INFO: stdout: "" | |
Feb 28 19:42:26.294: INFO: stderr: "" | |
Feb 28 19:42:26.294: INFO: update-demo-nautilus-qljwd is created but not running | |
Feb 28 19:42:31.294: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:42:31.944: INFO: stdout: "update-demo-nautilus-qljwd update-demo-nautilus-rm9s6 " | |
Feb 28 19:42:31.944: INFO: stderr: "" | |
Feb 28 19:42:31.944: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods update-demo-nautilus-qljwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:42:32.596: INFO: stdout: "" | |
Feb 28 19:42:32.596: INFO: stderr: "" | |
Feb 28 19:42:32.596: INFO: update-demo-nautilus-qljwd is created but not running | |
Feb 28 19:42:37.596: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:42:38.240: INFO: stdout: "update-demo-nautilus-qljwd update-demo-nautilus-rm9s6 " | |
Feb 28 19:42:38.240: INFO: stderr: "" | |
Feb 28 19:42:38.240: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods update-demo-nautilus-qljwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:42:38.892: INFO: stdout: "" | |
Feb 28 19:42:38.892: INFO: stderr: "" | |
Feb 28 19:42:38.892: INFO: update-demo-nautilus-qljwd is created but not running | |
Feb 28 19:42:43.892: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:42:44.541: INFO: stdout: "update-demo-nautilus-qljwd update-demo-nautilus-rm9s6 " | |
Feb 28 19:42:44.541: INFO: stderr: "" | |
Feb 28 19:42:44.541: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods update-demo-nautilus-qljwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:42:45.191: INFO: stdout: "" | |
Feb 28 19:42:45.191: INFO: stderr: "" | |
Feb 28 19:42:45.191: INFO: update-demo-nautilus-qljwd is created but not running | |
Feb 28 19:42:50.192: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:42:50.850: INFO: stdout: "update-demo-nautilus-qljwd update-demo-nautilus-rm9s6 " | |
Feb 28 19:42:50.850: INFO: stderr: "" | |
Feb 28 19:42:50.850: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods update-demo-nautilus-qljwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:42:51.504: INFO: stdout: "" | |
Feb 28 19:42:51.504: INFO: stderr: "" | |
Feb 28 19:42:51.504: INFO: update-demo-nautilus-qljwd is created but not running | |
Feb 28 19:42:56.504: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:42:57.158: INFO: stdout: "update-demo-nautilus-qljwd update-demo-nautilus-rm9s6 " | |
Feb 28 19:42:57.158: INFO: stderr: "" | |
Feb 28 19:42:57.158: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods update-demo-nautilus-qljwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:42:57.826: INFO: stdout: "" | |
Feb 28 19:42:57.826: INFO: stderr: "" | |
Feb 28 19:42:57.827: INFO: update-demo-nautilus-qljwd is created but not running | |
Feb 28 19:43:02.827: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:43:03.478: INFO: stdout: "update-demo-nautilus-qljwd update-demo-nautilus-rm9s6 " | |
Feb 28 19:43:03.478: INFO: stderr: "" | |
Feb 28 19:43:03.478: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods update-demo-nautilus-qljwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:43:04.137: INFO: stdout: "" | |
Feb 28 19:43:04.137: INFO: stderr: "" | |
Feb 28 19:43:04.137: INFO: update-demo-nautilus-qljwd is created but not running | |
Feb 28 19:43:09.137: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:43:09.798: INFO: stdout: "update-demo-nautilus-qljwd update-demo-nautilus-rm9s6 " | |
Feb 28 19:43:09.798: INFO: stderr: "" | |
Feb 28 19:43:09.798: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods update-demo-nautilus-qljwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:43:10.463: INFO: stdout: "" | |
Feb 28 19:43:10.463: INFO: stderr: "" | |
Feb 28 19:43:10.463: INFO: update-demo-nautilus-qljwd is created but not running | |
Feb 28 19:43:15.463: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:43:16.145: INFO: stdout: "update-demo-nautilus-qljwd update-demo-nautilus-rm9s6 " | |
Feb 28 19:43:16.145: INFO: stderr: "" | |
Feb 28 19:43:16.145: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods update-demo-nautilus-qljwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:43:16.797: INFO: stdout: "" | |
Feb 28 19:43:16.797: INFO: stderr: "" | |
Feb 28 19:43:16.797: INFO: update-demo-nautilus-qljwd is created but not running | |
Feb 28 19:43:21.797: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:43:22.447: INFO: stdout: "update-demo-nautilus-qljwd update-demo-nautilus-rm9s6 " | |
Feb 28 19:43:22.447: INFO: stderr: "" | |
Feb 28 19:43:22.447: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods update-demo-nautilus-qljwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:43:23.103: INFO: stdout: "" | |
Feb 28 19:43:23.103: INFO: stderr: "" | |
Feb 28 19:43:23.103: INFO: update-demo-nautilus-qljwd is created but not running | |
Feb 28 19:43:28.103: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:43:28.749: INFO: stdout: "update-demo-nautilus-qljwd update-demo-nautilus-rm9s6 " | |
Feb 28 19:43:28.749: INFO: stderr: "" | |
Feb 28 19:43:28.749: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods update-demo-nautilus-qljwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:43:29.404: INFO: stdout: "" | |
Feb 28 19:43:29.404: INFO: stderr: "" | |
Feb 28 19:43:29.404: INFO: update-demo-nautilus-qljwd is created but not running | |
Feb 28 19:43:34.404: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:43:35.047: INFO: stdout: "update-demo-nautilus-qljwd update-demo-nautilus-rm9s6 " | |
Feb 28 19:43:35.047: INFO: stderr: "" | |
Feb 28 19:43:35.047: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods update-demo-nautilus-qljwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:43:35.715: INFO: stdout: "" | |
Feb 28 19:43:35.715: INFO: stderr: "" | |
Feb 28 19:43:35.715: INFO: update-demo-nautilus-qljwd is created but not running | |
Feb 28 19:43:40.715: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:43:41.373: INFO: stdout: "update-demo-nautilus-qljwd update-demo-nautilus-rm9s6 " | |
Feb 28 19:43:41.373: INFO: stderr: "" | |
Feb 28 19:43:41.374: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods update-demo-nautilus-qljwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:43:42.028: INFO: stdout: "" | |
Feb 28 19:43:42.028: INFO: stderr: "" | |
Feb 28 19:43:42.028: INFO: update-demo-nautilus-qljwd is created but not running | |
Feb 28 19:43:47.028: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:43:47.675: INFO: stdout: "update-demo-nautilus-qljwd update-demo-nautilus-rm9s6 " | |
Feb 28 19:43:47.675: INFO: stderr: "" | |
Feb 28 19:43:47.675: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods update-demo-nautilus-qljwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:43:48.323: INFO: stdout: "" | |
Feb 28 19:43:48.323: INFO: stderr: "" | |
Feb 28 19:43:48.323: INFO: update-demo-nautilus-qljwd is created but not running | |
Feb 28 19:43:53.323: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:43:53.977: INFO: stdout: "update-demo-nautilus-qljwd update-demo-nautilus-rm9s6 " | |
Feb 28 19:43:53.977: INFO: stderr: "" | |
Feb 28 19:43:53.977: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods update-demo-nautilus-qljwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:43:54.625: INFO: stdout: "" | |
Feb 28 19:43:54.625: INFO: stderr: "" | |
Feb 28 19:43:54.625: INFO: update-demo-nautilus-qljwd is created but not running | |
Feb 28 19:43:59.625: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:44:00.279: INFO: stdout: "update-demo-nautilus-qljwd update-demo-nautilus-rm9s6 " | |
Feb 28 19:44:00.279: INFO: stderr: "" | |
Feb 28 19:44:00.279: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods update-demo-nautilus-qljwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:44:00.933: INFO: stdout: "" | |
Feb 28 19:44:00.933: INFO: stderr: "" | |
Feb 28 19:44:00.933: INFO: update-demo-nautilus-qljwd is created but not running | |
Feb 28 19:44:05.933: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:44:06.590: INFO: stdout: "update-demo-nautilus-qljwd update-demo-nautilus-rm9s6 " | |
Feb 28 19:44:06.590: INFO: stderr: "" | |
Feb 28 19:44:06.590: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods update-demo-nautilus-qljwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:44:07.246: INFO: stdout: "" | |
Feb 28 19:44:07.246: INFO: stderr: "" | |
Feb 28 19:44:07.246: INFO: update-demo-nautilus-qljwd is created but not running | |
Feb 28 19:44:12.246: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:44:12.911: INFO: stdout: "update-demo-nautilus-qljwd update-demo-nautilus-rm9s6 " | |
Feb 28 19:44:12.911: INFO: stderr: "" | |
Feb 28 19:44:12.912: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods update-demo-nautilus-qljwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:44:13.584: INFO: stdout: "" | |
Feb 28 19:44:13.584: INFO: stderr: "" | |
Feb 28 19:44:13.584: INFO: update-demo-nautilus-qljwd is created but not running | |
Feb 28 19:44:18.584: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:44:19.229: INFO: stdout: "update-demo-nautilus-qljwd update-demo-nautilus-rm9s6 " | |
Feb 28 19:44:19.229: INFO: stderr: "" | |
Feb 28 19:44:19.229: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods update-demo-nautilus-qljwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:44:19.892: INFO: stdout: "" | |
Feb 28 19:44:19.892: INFO: stderr: "" | |
Feb 28 19:44:19.892: INFO: update-demo-nautilus-qljwd is created but not running | |
Feb 28 19:44:24.892: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:44:25.544: INFO: stdout: "update-demo-nautilus-qljwd update-demo-nautilus-rm9s6 " | |
Feb 28 19:44:25.544: INFO: stderr: "" | |
Feb 28 19:44:25.545: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods update-demo-nautilus-qljwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:44:26.193: INFO: stdout: "" | |
Feb 28 19:44:26.193: INFO: stderr: "" | |
Feb 28 19:44:26.193: INFO: update-demo-nautilus-qljwd is created but not running | |
Feb 28 19:44:31.193: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:44:31.837: INFO: stdout: "update-demo-nautilus-qljwd update-demo-nautilus-rm9s6 " | |
Feb 28 19:44:31.837: INFO: stderr: "" | |
Feb 28 19:44:31.837: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods update-demo-nautilus-qljwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:44:32.483: INFO: stdout: "" | |
Feb 28 19:44:32.483: INFO: stderr: "" | |
Feb 28 19:44:32.483: INFO: update-demo-nautilus-qljwd is created but not running | |
Feb 28 19:44:37.483: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:44:38.124: INFO: stdout: "update-demo-nautilus-qljwd update-demo-nautilus-rm9s6 " | |
Feb 28 19:44:38.124: INFO: stderr: "" | |
Feb 28 19:44:38.124: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods update-demo-nautilus-qljwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:44:38.775: INFO: stdout: "" | |
Feb 28 19:44:38.775: INFO: stderr: "" | |
Feb 28 19:44:38.775: INFO: update-demo-nautilus-qljwd is created but not running | |
Feb 28 19:44:43.775: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:44:44.432: INFO: stdout: "update-demo-nautilus-qljwd update-demo-nautilus-rm9s6 " | |
Feb 28 19:44:44.432: INFO: stderr: "" | |
Feb 28 19:44:44.432: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods update-demo-nautilus-qljwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:44:45.086: INFO: stdout: "" | |
Feb 28 19:44:45.086: INFO: stderr: "" | |
Feb 28 19:44:45.086: INFO: update-demo-nautilus-qljwd is created but not running | |
Feb 28 19:44:50.087: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:44:50.739: INFO: stdout: "update-demo-nautilus-qljwd update-demo-nautilus-rm9s6 " | |
Feb 28 19:44:50.740: INFO: stderr: "" | |
Feb 28 19:44:50.740: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods update-demo-nautilus-qljwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:44:51.403: INFO: stdout: "" | |
Feb 28 19:44:51.403: INFO: stderr: "" | |
Feb 28 19:44:51.403: INFO: update-demo-nautilus-qljwd is created but not running | |
Feb 28 19:44:56.403: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:44:57.058: INFO: stdout: "update-demo-nautilus-qljwd update-demo-nautilus-rm9s6 " | |
Feb 28 19:44:57.058: INFO: stderr: "" | |
Feb 28 19:44:57.059: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods update-demo-nautilus-qljwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:44:57.719: INFO: stdout: "" | |
Feb 28 19:44:57.719: INFO: stderr: "" | |
Feb 28 19:44:57.719: INFO: update-demo-nautilus-qljwd is created but not running | |
Feb 28 19:45:02.719: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:45:03.369: INFO: stdout: "update-demo-nautilus-qljwd update-demo-nautilus-rm9s6 " | |
Feb 28 19:45:03.369: INFO: stderr: "" | |
Feb 28 19:45:03.369: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods update-demo-nautilus-qljwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:45:04.033: INFO: stdout: "" | |
Feb 28 19:45:04.033: INFO: stderr: "" | |
Feb 28 19:45:04.033: INFO: update-demo-nautilus-qljwd is created but not running | |
Feb 28 19:45:09.033: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:45:09.678: INFO: stdout: "update-demo-nautilus-qljwd update-demo-nautilus-rm9s6 " | |
Feb 28 19:45:09.678: INFO: stderr: "" | |
Feb 28 19:45:09.678: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods update-demo-nautilus-qljwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:45:10.329: INFO: stdout: "" | |
Feb 28 19:45:10.329: INFO: stderr: "" | |
Feb 28 19:45:10.329: INFO: update-demo-nautilus-qljwd is created but not running | |
Feb 28 19:45:15.329: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:45:15.983: INFO: stdout: "update-demo-nautilus-qljwd update-demo-nautilus-rm9s6 " | |
Feb 28 19:45:15.983: INFO: stderr: "" | |
Feb 28 19:45:15.983: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods update-demo-nautilus-qljwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:45:16.634: INFO: stdout: "" | |
Feb 28 19:45:16.635: INFO: stderr: "" | |
Feb 28 19:45:16.635: INFO: update-demo-nautilus-qljwd is created but not running | |
Feb 28 19:45:21.635: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:45:22.284: INFO: stdout: "update-demo-nautilus-qljwd update-demo-nautilus-rm9s6 " | |
Feb 28 19:45:22.284: INFO: stderr: "" | |
Feb 28 19:45:22.284: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods update-demo-nautilus-qljwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:45:22.936: INFO: stdout: "" | |
Feb 28 19:45:22.936: INFO: stderr: "" | |
Feb 28 19:45:22.936: INFO: update-demo-nautilus-qljwd is created but not running | |
Feb 28 19:45:27.936: FAIL: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state | |
STEP: using delete to clean up resources | |
Feb 28 19:45:27.937: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config delete --grace-period=0 -f /home/spotter/gocode/src/k8s.io/y-kubernetes/docs/user-guide/update-demo/nautilus-rc.yaml --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:45:31.116: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" deleted\n" | |
Feb 28 19:45:31.116: INFO: stderr: "" | |
Feb 28 19:45:31.116: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-s4as6' | |
Feb 28 19:45:31.842: INFO: stdout: "" | |
Feb 28 19:45:31.842: INFO: stderr: "" | |
Feb 28 19:45:31.842: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-s4as6 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' | |
Feb 28 19:45:32.487: INFO: stdout: "" | |
Feb 28 19:45:32.487: INFO: stderr: "" | |
[AfterEach] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:84 | |
STEP: Collecting events from namespace "e2e-tests-kubectl-s4as6". | |
Feb 28 19:45:32.739: INFO: At 2016-02-28 19:40:25 -0800 PST - event for update-demo-nautilus: {replication-controller } SuccessfulCreate: Created pod: update-demo-nautilus-rm9s6 | |
Feb 28 19:45:32.739: INFO: At 2016-02-28 19:40:25 -0800 PST - event for update-demo-nautilus: {replication-controller } SuccessfulCreate: Created pod: update-demo-nautilus-qljwd | |
Feb 28 19:45:32.739: INFO: At 2016-02-28 19:40:25 -0800 PST - event for update-demo-nautilus-qljwd: {kubelet spotter-kube-rkt-minion-8b1u} Pulling: pulling image "gcr.io/google_containers/update-demo:nautilus" | |
Feb 28 19:45:32.739: INFO: At 2016-02-28 19:40:25 -0800 PST - event for update-demo-nautilus-qljwd: {default-scheduler } Scheduled: Successfully assigned update-demo-nautilus-qljwd to spotter-kube-rkt-minion-8b1u | |
Feb 28 19:45:32.739: INFO: At 2016-02-28 19:40:25 -0800 PST - event for update-demo-nautilus-rm9s6: {kubelet spotter-kube-rkt-minion-yo39} Pulling: pulling image "gcr.io/google_containers/update-demo:nautilus" | |
Feb 28 19:45:32.739: INFO: At 2016-02-28 19:40:25 -0800 PST - event for update-demo-nautilus-rm9s6: {default-scheduler } Scheduled: Successfully assigned update-demo-nautilus-rm9s6 to spotter-kube-rkt-minion-yo39 | |
Feb 28 19:45:32.739: INFO: At 2016-02-28 19:40:27 -0800 PST - event for update-demo-nautilus-qljwd: {kubelet spotter-kube-rkt-minion-8b1u} Pulled: Successfully pulled image "gcr.io/google_containers/update-demo:nautilus" | |
Feb 28 19:45:32.739: INFO: At 2016-02-28 19:40:27 -0800 PST - event for update-demo-nautilus-rm9s6: {kubelet spotter-kube-rkt-minion-yo39} Pulled: Successfully pulled image "gcr.io/google_containers/update-demo:nautilus" | |
Feb 28 19:45:32.739: INFO: At 2016-02-28 19:40:30 -0800 PST - event for update-demo-nautilus-qljwd: {kubelet spotter-kube-rkt-minion-8b1u} Started: Started with rkt id 486218db | |
Feb 28 19:45:32.739: INFO: At 2016-02-28 19:40:30 -0800 PST - event for update-demo-nautilus-qljwd: {kubelet spotter-kube-rkt-minion-8b1u} Created: Created with rkt id 486218db | |
Feb 28 19:45:32.739: INFO: At 2016-02-28 19:40:30 -0800 PST - event for update-demo-nautilus-rm9s6: {kubelet spotter-kube-rkt-minion-yo39} Created: Created with rkt id 88f3bd49 | |
Feb 28 19:45:32.739: INFO: At 2016-02-28 19:40:30 -0800 PST - event for update-demo-nautilus-rm9s6: {kubelet spotter-kube-rkt-minion-yo39} Started: Started with rkt id 88f3bd49 | |
Feb 28 19:45:32.739: INFO: At 2016-02-28 19:40:31 -0800 PST - event for update-demo-nautilus-rm9s6: {kubelet spotter-kube-rkt-minion-yo39} Pulled: Container image "gcr.io/google_containers/update-demo:nautilus" already present on machine | |
Feb 28 19:45:32.739: INFO: At 2016-02-28 19:40:32 -0800 PST - event for update-demo-nautilus-qljwd: {kubelet spotter-kube-rkt-minion-8b1u} Pulled: Container image "gcr.io/google_containers/update-demo:nautilus" already present on machine | |
Feb 28 19:45:32.739: INFO: At 2016-02-28 19:40:34 -0800 PST - event for update-demo-nautilus-rm9s6: {kubelet spotter-kube-rkt-minion-yo39} Started: Started with rkt id a4a266f5 | |
Feb 28 19:45:32.739: INFO: At 2016-02-28 19:40:34 -0800 PST - event for update-demo-nautilus-rm9s6: {kubelet spotter-kube-rkt-minion-yo39} Created: Created with rkt id a4a266f5 | |
Feb 28 19:45:32.739: INFO: At 2016-02-28 19:40:35 -0800 PST - event for update-demo-nautilus-qljwd: {kubelet spotter-kube-rkt-minion-8b1u} Created: Created with rkt id 832713a3 | |
Feb 28 19:45:32.739: INFO: At 2016-02-28 19:40:35 -0800 PST - event for update-demo-nautilus-qljwd: {kubelet spotter-kube-rkt-minion-8b1u} Started: Started with rkt id 832713a3 | |
Feb 28 19:45:32.739: INFO: At 2016-02-28 19:40:39 -0800 PST - event for update-demo-nautilus-qljwd: {kubelet spotter-kube-rkt-minion-8b1u} Started: Started with rkt id 5b5b08d3 | |
Feb 28 19:45:32.739: INFO: At 2016-02-28 19:40:39 -0800 PST - event for update-demo-nautilus-qljwd: {kubelet spotter-kube-rkt-minion-8b1u} Created: Created with rkt id 5b5b08d3 | |
Feb 28 19:45:32.739: INFO: At 2016-02-28 19:40:40 -0800 PST - event for update-demo-nautilus-rm9s6: {kubelet spotter-kube-rkt-minion-yo39} Created: Created with rkt id 42570204 | |
Feb 28 19:45:32.739: INFO: At 2016-02-28 19:40:40 -0800 PST - event for update-demo-nautilus-rm9s6: {kubelet spotter-kube-rkt-minion-yo39} Started: Started with rkt id 42570204 | |
Feb 28 19:45:32.739: INFO: At 2016-02-28 19:41:40 -0800 PST - event for update-demo-nautilus-rm9s6: {kubelet spotter-kube-rkt-minion-yo39} Created: Created with rkt id 43b85579 | |
Feb 28 19:45:32.740: INFO: At 2016-02-28 19:41:40 -0800 PST - event for update-demo-nautilus-rm9s6: {kubelet spotter-kube-rkt-minion-yo39} Started: Started with rkt id 43b85579 | |
Feb 28 19:45:32.740: INFO: At 2016-02-28 19:41:43 -0800 PST - event for update-demo-nautilus-rm9s6: {kubelet spotter-kube-rkt-minion-yo39} Started: Started with rkt id 7934fd98 | |
Feb 28 19:45:32.740: INFO: At 2016-02-28 19:41:43 -0800 PST - event for update-demo-nautilus-rm9s6: {kubelet spotter-kube-rkt-minion-yo39} Created: Created with rkt id 7934fd98 | |
Feb 28 19:45:32.740: INFO: At 2016-02-28 19:41:46 -0800 PST - event for update-demo-nautilus-rm9s6: {kubelet spotter-kube-rkt-minion-yo39} Started: Started with rkt id aa2d219f | |
Feb 28 19:45:32.740: INFO: At 2016-02-28 19:41:46 -0800 PST - event for update-demo-nautilus-rm9s6: {kubelet spotter-kube-rkt-minion-yo39} Created: Created with rkt id aa2d219f | |
Feb 28 19:45:32.740: INFO: At 2016-02-28 19:41:49 -0800 PST - event for update-demo-nautilus-rm9s6: {kubelet spotter-kube-rkt-minion-yo39} Started: Started with rkt id 733f903f | |
Feb 28 19:45:32.740: INFO: At 2016-02-28 19:41:49 -0800 PST - event for update-demo-nautilus-rm9s6: {kubelet spotter-kube-rkt-minion-yo39} Created: Created with rkt id 733f903f | |
Feb 28 19:45:32.740: INFO: At 2016-02-28 19:41:54 -0800 PST - event for update-demo-nautilus-rm9s6: {kubelet spotter-kube-rkt-minion-yo39} Created: Created with rkt id 88f73f67 | |
Feb 28 19:45:32.740: INFO: At 2016-02-28 19:41:54 -0800 PST - event for update-demo-nautilus-rm9s6: {kubelet spotter-kube-rkt-minion-yo39} Started: Started with rkt id 88f73f67 | |
Feb 28 19:45:32.740: INFO: At 2016-02-28 19:41:59 -0800 PST - event for update-demo-nautilus-qljwd: {kubelet spotter-kube-rkt-minion-8b1u} Started: Started with rkt id b8ac4555 | |
Feb 28 19:45:32.740: INFO: At 2016-02-28 19:41:59 -0800 PST - event for update-demo-nautilus-qljwd: {kubelet spotter-kube-rkt-minion-8b1u} Created: Created with rkt id b8ac4555 | |
Feb 28 19:45:32.740: INFO: At 2016-02-28 19:42:02 -0800 PST - event for update-demo-nautilus-qljwd: {kubelet spotter-kube-rkt-minion-8b1u} Created: Created with rkt id 5146441f | |
Feb 28 19:45:32.740: INFO: At 2016-02-28 19:42:02 -0800 PST - event for update-demo-nautilus-qljwd: {kubelet spotter-kube-rkt-minion-8b1u} Started: Started with rkt id 5146441f | |
Feb 28 19:45:32.740: INFO: At 2016-02-28 19:43:14 -0800 PST - event for update-demo-nautilus-rm9s6: {kubelet spotter-kube-rkt-minion-yo39} Started: Started with rkt id 4c39e76a | |
Feb 28 19:45:32.740: INFO: At 2016-02-28 19:43:14 -0800 PST - event for update-demo-nautilus-rm9s6: {kubelet spotter-kube-rkt-minion-yo39} Created: Created with rkt id 4c39e76a | |
Feb 28 19:45:32.740: INFO: At 2016-02-28 19:43:29 -0800 PST - event for update-demo-nautilus-qljwd: {kubelet spotter-kube-rkt-minion-8b1u} Started: Started with rkt id c8e837aa | |
Feb 28 19:45:32.740: INFO: At 2016-02-28 19:43:29 -0800 PST - event for update-demo-nautilus-qljwd: {kubelet spotter-kube-rkt-minion-8b1u} Created: Created with rkt id c8e837aa | |
Feb 28 19:45:32.740: INFO: At 2016-02-28 19:43:32 -0800 PST - event for update-demo-nautilus-qljwd: {kubelet spotter-kube-rkt-minion-8b1u} Created: Created with rkt id 56d99271 | |
Feb 28 19:45:32.740: INFO: At 2016-02-28 19:43:32 -0800 PST - event for update-demo-nautilus-qljwd: {kubelet spotter-kube-rkt-minion-8b1u} Started: Started with rkt id 56d99271 | |
Feb 28 19:45:32.740: INFO: At 2016-02-28 19:43:40 -0800 PST - event for update-demo-nautilus-rm9s6: {kubelet spotter-kube-rkt-minion-yo39} Created: (events with common reason combined) | |
Feb 28 19:45:32.740: INFO: At 2016-02-28 19:43:40 -0800 PST - event for update-demo-nautilus-rm9s6: {kubelet spotter-kube-rkt-minion-yo39} Started: (events with common reason combined) | |
Feb 28 19:45:32.740: INFO: At 2016-02-28 19:44:57 -0800 PST - event for update-demo-nautilus-qljwd: {kubelet spotter-kube-rkt-minion-8b1u} Started: Started with rkt id 1e3e0463 | |
Feb 28 19:45:32.740: INFO: At 2016-02-28 19:44:57 -0800 PST - event for update-demo-nautilus-qljwd: {kubelet spotter-kube-rkt-minion-8b1u} Created: Created with rkt id 1e3e0463 | |
Feb 28 19:45:32.740: INFO: At 2016-02-28 19:45:17 -0800 PST - event for update-demo-nautilus-qljwd: {kubelet spotter-kube-rkt-minion-8b1u} Started: Started with rkt id f9dd95bf | |
Feb 28 19:45:32.740: INFO: At 2016-02-28 19:45:17 -0800 PST - event for update-demo-nautilus-qljwd: {kubelet spotter-kube-rkt-minion-8b1u} Created: Created with rkt id f9dd95bf | |
Feb 28 19:45:32.740: INFO: At 2016-02-28 19:45:21 -0800 PST - event for update-demo-nautilus-qljwd: {kubelet spotter-kube-rkt-minion-8b1u} Started: (events with common reason combined) | |
Feb 28 19:45:32.740: INFO: At 2016-02-28 19:45:21 -0800 PST - event for update-demo-nautilus-qljwd: {kubelet spotter-kube-rkt-minion-8b1u} Created: (events with common reason combined) | |
Feb 28 19:45:32.740: INFO: At 2016-02-28 19:45:29 -0800 PST - event for update-demo-nautilus: {replication-controller } SuccessfulDelete: Deleted pod: update-demo-nautilus-rm9s6 | |
Feb 28 19:45:32.740: INFO: At 2016-02-28 19:45:30 -0800 PST - event for update-demo-nautilus: {replication-controller } SuccessfulDelete: Deleted pod: update-demo-nautilus-qljwd | |
Feb 28 19:45:32.834: INFO: POD NODE PHASE GRACE CONDITIONS | |
Feb 28 19:45:32.834: INFO: etcd-server-events-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:25:46 -0800 PST }] | |
Feb 28 19:45:32.834: INFO: etcd-server-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:25:44 -0800 PST }] | |
Feb 28 19:45:32.834: INFO: kube-apiserver-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:25:59 -0800 PST }] | |
Feb 28 19:45:32.834: INFO: kube-controller-manager-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:26:19 -0800 PST }] | |
Feb 28 19:45:32.834: INFO: kube-dns-v10-ucm9e spotter-kube-rkt-minion-yii0 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:40:53 -0800 PST }] | |
Feb 28 19:45:32.834: INFO: kube-scheduler-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:25:59 -0800 PST }] | |
Feb 28 19:45:32.834: INFO: kubernetes-dashboard-v0.1.0-xx4kj spotter-kube-rkt-minion-8b1u Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:26:43 -0800 PST }] | |
Feb 28 19:45:32.834: INFO: l7-lb-controller-vg83c spotter-kube-rkt-minion-yo39 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:42:06 -0800 PST }] | |
Feb 28 19:45:32.834: INFO: | |
Feb 28 19:45:32.918: INFO: | |
Logging node info for node spotter-kube-rkt-master | |
Feb 28 19:45:33.000: INFO: Node Info: &{{ } {spotter-kube-rkt-master /api/v1/nodes/spotter-kube-rkt-master 28a99cb5-de94-11e5-9ed9-42010af00002 750 0 2016-02-28 19:26:02 -0800 PST <nil> <nil> map[failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-master beta.kubernetes.io/instance-type:n1-standard-2 failure-domain.beta.kubernetes.io/region:us-east1] map[]} {10.245.0.0/24 1057893855773431411 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-master true} {map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] [{OutOfDisk False 2016-02-28 19:45:24 -0800 PST 2016-02-28 19:26:02 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 19:45:24 -0800 PST 2016-02-28 19:26:02 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.2} {ExternalIP 104.196.32.11}] {{10250}} {18f8f8ae3bfd63d33ca6152c840ac772 18F8F8AE-3BFD-63D3-3CA6-152C840AC772 85ba1a68-a60b-4788-8d0d-31581d8a1dbc 4.4.1-coreos CoreOS 960.0.0 (Coeur Rouge) docker://1.10.0 v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[gcr.io/google_containers/kube-apiserver:e75cb1e68585bef333c384799aa55b36] 62275978} {[gcr.io/google_containers/kube-controller-manager:5ceda9f0fcc85cf5112fd4eab97edf76] 55494410} {[gcr.io/google_containers/kube-scheduler:34f7eca4d44271cb192f0c32a79fb31f] 36991674} {[python:2.7-slim-pyyaml] 205046931} {[gcr.io/google_containers/pause:2.0] 350164} {[gcr.io/google_containers/kube-registry-proxy:0.3] 151205002} {[gcr.io/google_containers/etcd:2.0.12] 15265152} {[gcr.io/google_containers/serve_hostname:1.1] 4522409}]}} | |
Feb 28 19:45:33.000: INFO: | |
Logging kubelet events for node spotter-kube-rkt-master | |
Feb 28 19:45:33.084: INFO: | |
Logging pods the kubelet thinks are on node spotter-kube-rkt-master
Feb 28 19:45:33.248: INFO: Unable to retrieve kubelet pods for node spotter-kube-rkt-master | |
Feb 28 19:45:33.248: INFO: | |
Logging node info for node spotter-kube-rkt-minion-8b1u | |
Feb 28 19:45:33.329: INFO: Node Info: &{{ } {spotter-kube-rkt-minion-8b1u /api/v1/nodes/spotter-kube-rkt-minion-8b1u 2e5bf2ae-de94-11e5-9ed9-42010af00002 751 0 2016-02-28 19:26:11 -0800 PST <nil> <nil> map[beta.kubernetes.io/instance-type:n1-standard-2 failure-domain.beta.kubernetes.io/region:us-east1 failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-minion-8b1u] map[]} {10.245.1.0/24 9623675355699996544 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-minion-8b1u false} {map[pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI}] map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] [{OutOfDisk False 2016-02-28 19:45:25 -0800 PST 2016-02-28 19:26:11 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 19:45:25 -0800 PST 2016-02-28 19:26:11 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.5} {ExternalIP 104.196.7.86}] {{10250}} {374318fe80761a3e4e3d72f72bd62802 374318FE-8076-1A3E-4E3D-72F72BD62802 b4070cab-7518-489e-9e3e-c5003667ca85 4.4.1-coreos CoreOS 960.0.0 (Coeur Rouge) rkt://1.0.0+git9003f4a-dirty v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[coreos.com/rkt/stage1-coreos:1.0.0+git9003f4a-dirty] 79951360} {[gcr.io/google_containers/update-demo:nautilus] 28160} {[gcr.io/google_containers/busybox:1.24] 1315840} {[gcr.io/google_containers/eptest:0.1] 2977792} {[gcr.io/google_containers/serve_hostname:1.1] 4529152} {[gcr.io/google_containers/mounttest:0.6] 2090496} {[gcr.io/google_containers/netexec:1.4] 7513088} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v0.1.0] 35423744}]}} | |
Feb 28 19:45:33.329: INFO: | |
Logging kubelet events for node spotter-kube-rkt-minion-8b1u | |
Feb 28 19:45:33.414: INFO: | |
Logging pods the kubelet thinks are on node spotter-kube-rkt-minion-8b1u | |
Feb 28 19:45:33.677: INFO: kubernetes-dashboard-v0.1.0-xx4kj started at <nil> (0 container statuses recorded) | |
Feb 28 19:45:33.976: INFO: | |
Latency metrics for node spotter-kube-rkt-minion-8b1u | |
Feb 28 19:45:33.976: INFO: | |
Logging node info for node spotter-kube-rkt-minion-yii0 | |
Feb 28 19:45:34.064: INFO: Node Info: &{{ } {spotter-kube-rkt-minion-yii0 /api/v1/nodes/spotter-kube-rkt-minion-yii0 28de6c1f-de94-11e5-9ed9-42010af00002 761 0 2016-02-28 19:26:02 -0800 PST <nil> <nil> map[failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-minion-yii0 beta.kubernetes.io/instance-type:n1-standard-2 failure-domain.beta.kubernetes.io/region:us-east1] map[]} {10.245.3.0/24 9944607117135381947 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-minion-yii0 false} {map[memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI}] map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] [{OutOfDisk False 2016-02-28 19:45:33 -0800 PST 2016-02-28 19:40:42 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 19:45:33 -0800 PST 2016-02-28 19:40:42 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.3} {ExternalIP 104.196.102.116}] {{10250}} {b38c5389751fa1f6a57a3e0bc169499a B38C5389-751F-A1F6-A57A-3E0BC169499A c09d0613-ea30-4bb2-a090-591baae2e0d3 4.4.1-coreos CoreOS 970.1.0 (Coeur Rouge) rkt://1.0.0+git9003f4a-dirty v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[coreos.com/rkt/stage1-coreos:1.0.0+git9003f4a-dirty] 79951360} {[gcr.io/google_containers/busybox:1.24] 1315840} {[gcr.io/google_containers/serve_hostname:1.1] 4529152} {[gcr.io/google_containers/exechealthz:1.0] 7314944} {[gcr.io/google_containers/skydns:2015-10-13-8c72f8c] 41955328} {[gcr.io/google_containers/kube2sky:1.12] 24685056} {[gcr.io/google_containers/etcd:2.0.9] 12827136}]}} | |
Feb 28 19:45:34.064: INFO: | |
Logging kubelet events for node spotter-kube-rkt-minion-yii0 | |
Feb 28 19:45:34.148: INFO: | |
Logging pods the kubelet thinks are on node spotter-kube-rkt-minion-yii0 | |
Feb 28 19:45:34.434: INFO: kube-dns-v10-ucm9e started at <nil> (0 container statuses recorded) | |
Feb 28 19:45:34.716: INFO: | |
Latency metrics for node spotter-kube-rkt-minion-yii0 | |
Feb 28 19:45:34.716: INFO: | |
Logging node info for node spotter-kube-rkt-minion-yo39 | |
Feb 28 19:45:34.797: INFO: Node Info: &{{ } {spotter-kube-rkt-minion-yo39 /api/v1/nodes/spotter-kube-rkt-minion-yo39 28d9f8e6-de94-11e5-9ed9-42010af00002 752 0 2016-02-28 19:26:02 -0800 PST <nil> <nil> map[failure-domain.beta.kubernetes.io/region:us-east1 failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-minion-yo39 beta.kubernetes.io/instance-type:n1-standard-2] map[]} {10.245.2.0/24 5812756469575456144 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-minion-yo39 false} {map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] [{OutOfDisk False 2016-02-28 19:45:27 -0800 PST 2016-02-28 19:39:36 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 19:45:27 -0800 PST 2016-02-28 19:39:36 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.4} {ExternalIP 104.196.107.90}] {{10250}} {d67f51f98b40c228849e17dc0b2cff8f D67F51F9-8B40-C228-849E-17DC0B2CFF8F 37634038-5d12-46d7-8197-fc5e74a7aef8 4.4.1-coreos CoreOS 970.1.0 (Coeur Rouge) rkt://1.0.0+git9003f4a-dirty v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[coreos.com/rkt/stage1-coreos:1.0.0+git9003f4a-dirty] 79951360} {[gcr.io/google_containers/update-demo:nautilus] 28160} {[gcr.io/google_containers/netexec:1.4] 7513088} {[gcr.io/google_containers/serve_hostname:1.1] 4529152} {[gcr.io/google_containers/glbc:0.5.1] 203040256} {[gcr.io/google_containers/defaultbackend:1.0] 7729152}]}} | |
Feb 28 19:45:34.797: INFO: | |
Logging kubelet events for node spotter-kube-rkt-minion-yo39 | |
Feb 28 19:45:34.890: INFO: | |
Logging pods the kubelet thinks are on node spotter-kube-rkt-minion-yo39 | |
Feb 28 19:45:35.146: INFO: l7-lb-controller-vg83c started at <nil> (0 container statuses recorded) | |
Feb 28 19:45:35.436: INFO: | |
Latency metrics for node spotter-kube-rkt-minion-yo39 | |
Feb 28 19:45:35.436: INFO: {Operation:sync Method:pod_worker_latency_microseconds Quantile:0.99 Latency:11.691354s} | |
Feb 28 19:45:35.436: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-kubectl-s4as6" for this suite. | |
• Failure [316.837 seconds] | |
Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1082 | |
Update Demo | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:153 | |
should create and stop a replication controller [Conformance] [It] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:128 | |
Feb 28 19:45:27.936: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/util.go:1287 | |
------------------------------ | |
Pods | |
should cap back-off at MaxContainerBackOff [Slow] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1084 | |
[BeforeEach] Pods | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:83 | |
STEP: Creating a kubernetes client | |
Feb 28 19:45:40.878: INFO: >>> testContext.KubeConfig: /home/spotter/.kube/config | |
STEP: Building a namespace api object | |
Feb 28 19:45:40.971: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-26jij | |
Feb 28 19:45:41.052: INFO: Service account default in ns e2e-tests-pods-26jij with secrets found. (80.573284ms) | |
STEP: Waiting for a default service account to be provisioned in namespace | |
Feb 28 19:45:41.052: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-26jij | |
Feb 28 19:45:41.139: INFO: Service account default in ns e2e-tests-pods-26jij with secrets found. (87.008326ms) | |
[It] should cap back-off at MaxContainerBackOff [Slow] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1084 | |
STEP: submitting the pod to kubernetes | |
Feb 28 19:45:41.232: INFO: Waiting up to 5m0s for pod back-off-cap status to be running | |
Feb 28 19:45:41.313: INFO: Waiting for pod back-off-cap in namespace 'e2e-tests-pods-26jij' status to be 'running'(found phase: "Pending", readiness: false) (80.948315ms elapsed) | |
Feb 28 19:45:43.404: INFO: Waiting for pod back-off-cap in namespace 'e2e-tests-pods-26jij' status to be 'running'(found phase: "Pending", readiness: false) (2.172569039s elapsed) | |
Feb 28 19:45:45.494: INFO: Found pod 'back-off-cap' on node 'spotter-kube-rkt-minion-8b1u' | |
STEP: verifying the pod is in kubernetes | |
STEP: getting restart delay when capped | |
Feb 28 20:00:22.165: INFO: getRestartDelay: finishedAt=0001-01-01 00:00:00 +0000 UTC restartedAt=2016-02-28 20:00:18 -0800 PST (2562047h47m16.854775807s) | |
Feb 28 20:00:31.961: INFO: getRestartDelay: finishedAt=0001-01-01 00:00:00 +0000 UTC restartedAt=2016-02-28 20:00:27 -0800 PST (2562047h47m16.854775807s) | |
Feb 28 20:03:22.629: INFO: getRestartDelay: finishedAt=0001-01-01 00:00:00 +0000 UTC restartedAt=2016-02-28 20:03:18 -0800 PST (2562047h47m16.854775807s) | |
Feb 28 20:03:22.629: FAIL: expected 5m0s back-off got=2562047h47m16.854775807s in delay1 | |
STEP: deleting the pod | |
[AfterEach] Pods | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:84 | |
STEP: Collecting events from namespace "e2e-tests-pods-26jij". | |
Feb 28 20:03:22.889: INFO: At 2016-02-28 19:45:41 -0800 PST - event for back-off-cap: {kubelet spotter-kube-rkt-minion-8b1u} Pulled: Container image "gcr.io/google_containers/busybox:1.24" already present on machine | |
Feb 28 20:03:22.889: INFO: At 2016-02-28 19:45:41 -0800 PST - event for back-off-cap: {default-scheduler } Scheduled: Successfully assigned back-off-cap to spotter-kube-rkt-minion-8b1u | |
Feb 28 20:03:22.889: INFO: At 2016-02-28 19:45:44 -0800 PST - event for back-off-cap: {kubelet spotter-kube-rkt-minion-8b1u} Created: Created with rkt id a16183e8 | |
Feb 28 20:03:22.889: INFO: At 2016-02-28 19:45:44 -0800 PST - event for back-off-cap: {kubelet spotter-kube-rkt-minion-8b1u} Started: Started with rkt id a16183e8 | |
Feb 28 20:03:22.889: INFO: At 2016-02-28 19:45:52 -0800 PST - event for back-off-cap: {kubelet spotter-kube-rkt-minion-8b1u} Created: Created with rkt id e9b0a93b | |
Feb 28 20:03:22.889: INFO: At 2016-02-28 19:45:52 -0800 PST - event for back-off-cap: {kubelet spotter-kube-rkt-minion-8b1u} Started: Started with rkt id e9b0a93b | |
Feb 28 20:03:22.889: INFO: At 2016-02-28 19:46:02 -0800 PST - event for back-off-cap: {kubelet spotter-kube-rkt-minion-8b1u} Created: Created with rkt id e1269ea7 | |
Feb 28 20:03:22.889: INFO: At 2016-02-28 19:46:02 -0800 PST - event for back-off-cap: {kubelet spotter-kube-rkt-minion-8b1u} Started: Started with rkt id e1269ea7 | |
Feb 28 20:03:22.889: INFO: At 2016-02-28 19:47:16 -0800 PST - event for back-off-cap: {kubelet spotter-kube-rkt-minion-8b1u} Created: Created with rkt id ebb87e7b | |
Feb 28 20:03:22.889: INFO: At 2016-02-28 19:47:16 -0800 PST - event for back-off-cap: {kubelet spotter-kube-rkt-minion-8b1u} Started: Started with rkt id ebb87e7b | |
Feb 28 20:03:22.889: INFO: At 2016-02-28 19:47:24 -0800 PST - event for back-off-cap: {kubelet spotter-kube-rkt-minion-8b1u} Created: Created with rkt id 71249695 | |
Feb 28 20:03:22.889: INFO: At 2016-02-28 19:47:24 -0800 PST - event for back-off-cap: {kubelet spotter-kube-rkt-minion-8b1u} Started: Started with rkt id 71249695 | |
Feb 28 20:03:22.889: INFO: At 2016-02-28 19:48:55 -0800 PST - event for back-off-cap: {kubelet spotter-kube-rkt-minion-8b1u} Created: Created with rkt id 53e5f944 | |
Feb 28 20:03:22.889: INFO: At 2016-02-28 19:48:55 -0800 PST - event for back-off-cap: {kubelet spotter-kube-rkt-minion-8b1u} Started: Started with rkt id 53e5f944 | |
Feb 28 20:03:22.889: INFO: At 2016-02-28 19:49:18 -0800 PST - event for back-off-cap: {kubelet spotter-kube-rkt-minion-8b1u} Started: Started with rkt id 04204813 | |
Feb 28 20:03:22.889: INFO: At 2016-02-28 19:49:18 -0800 PST - event for back-off-cap: {kubelet spotter-kube-rkt-minion-8b1u} Created: Created with rkt id 04204813 | |
Feb 28 20:03:22.889: INFO: At 2016-02-28 19:50:18 -0800 PST - event for back-off-cap: {kubelet spotter-kube-rkt-minion-8b1u} Created: Created with rkt id 19b972c1 | |
Feb 28 20:03:22.889: INFO: At 2016-02-28 19:50:18 -0800 PST - event for back-off-cap: {kubelet spotter-kube-rkt-minion-8b1u} Started: Started with rkt id 19b972c1 | |
Feb 28 20:03:22.889: INFO: At 2016-02-28 19:50:27 -0800 PST - event for back-off-cap: {kubelet spotter-kube-rkt-minion-8b1u} Started: Started with rkt id e7457771 | |
Feb 28 20:03:22.889: INFO: At 2016-02-28 19:50:27 -0800 PST - event for back-off-cap: {kubelet spotter-kube-rkt-minion-8b1u} Created: Created with rkt id e7457771 | |
Feb 28 20:03:22.889: INFO: At 2016-02-28 19:50:36 -0800 PST - event for back-off-cap: {kubelet spotter-kube-rkt-minion-8b1u} Created: (events with common reason combined) | |
Feb 28 20:03:22.889: INFO: At 2016-02-28 19:50:36 -0800 PST - event for back-off-cap: {kubelet spotter-kube-rkt-minion-8b1u} Started: (events with common reason combined) | |
Feb 28 20:03:22.889: INFO: At 2016-02-28 20:02:20 -0800 PST - event for back-off-cap: {kubelet spotter-kube-rkt-minion-8b1u} Failed: Failed to create rkt container with error: failed to run [prepare --quiet --pod-manifest /tmp/manifest-back-off-cap-231417597]: exit status 1 | |
stdout: | |
stderr: image: using image from file /opt/rkt/stage1-coreos.aci | |
prepare: cannot acquire lock: resource temporarily unavailable | |
Feb 28 20:03:22.889: INFO: At 2016-02-28 20:02:20 -0800 PST - event for back-off-cap: {kubelet spotter-kube-rkt-minion-8b1u} FailedSync: Error syncing pod, skipping: failed to SyncPod: failed to run [prepare --quiet --pod-manifest /tmp/manifest-back-off-cap-231417597]: exit status 1 | |
stdout: | |
stderr: image: using image from file /opt/rkt/stage1-coreos.aci | |
prepare: cannot acquire lock: resource temporarily unavailable | |
Feb 28 20:03:22.889: INFO: At 2016-02-28 20:03:22 -0800 PST - event for back-off-cap: {kubelet spotter-kube-rkt-minion-8b1u} Killing: Killing with rkt id e6d7f3c2 | |
Feb 28 20:03:22.982: INFO: POD NODE PHASE GRACE CONDITIONS | |
Feb 28 20:03:22.982: INFO: etcd-server-events-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:25:46 -0800 PST }] | |
Feb 28 20:03:22.982: INFO: etcd-server-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:25:44 -0800 PST }] | |
Feb 28 20:03:22.982: INFO: kube-apiserver-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:25:59 -0800 PST }] | |
Feb 28 20:03:22.982: INFO: kube-controller-manager-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:26:19 -0800 PST }] | |
Feb 28 20:03:22.982: INFO: kube-dns-v10-ucm9e spotter-kube-rkt-minion-yii0 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:40:53 -0800 PST }] | |
Feb 28 20:03:22.982: INFO: kube-scheduler-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:25:59 -0800 PST }] | |
Feb 28 20:03:22.982: INFO: kubernetes-dashboard-v0.1.0-xx4kj spotter-kube-rkt-minion-8b1u Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:26:43 -0800 PST }] | |
Feb 28 20:03:22.982: INFO: l7-lb-controller-vg83c spotter-kube-rkt-minion-yo39 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:42:06 -0800 PST }] | |
Feb 28 20:03:22.982: INFO: | |
Feb 28 20:03:23.070: INFO: | |
Logging node info for node spotter-kube-rkt-master | |
Feb 28 20:03:23.153: INFO: Node Info: &{{ } {spotter-kube-rkt-master /api/v1/nodes/spotter-kube-rkt-master 28a99cb5-de94-11e5-9ed9-42010af00002 1227 0 2016-02-28 19:26:02 -0800 PST <nil> <nil> map[kubernetes.io/hostname:spotter-kube-rkt-master beta.kubernetes.io/instance-type:n1-standard-2 failure-domain.beta.kubernetes.io/region:us-east1 failure-domain.beta.kubernetes.io/zone:us-east1-b] map[]} {10.245.0.0/24 1057893855773431411 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-master true} {map[memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI}] map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] [{OutOfDisk False 2016-02-28 20:03:16 -0800 PST 2016-02-28 19:26:02 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 20:03:16 -0800 PST 2016-02-28 19:26:02 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.2} {ExternalIP 104.196.32.11}] {{10250}} {18f8f8ae3bfd63d33ca6152c840ac772 18F8F8AE-3BFD-63D3-3CA6-152C840AC772 85ba1a68-a60b-4788-8d0d-31581d8a1dbc 4.4.1-coreos CoreOS 960.0.0 (Coeur Rouge) docker://1.10.0 v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[gcr.io/google_containers/kube-apiserver:e75cb1e68585bef333c384799aa55b36] 62275978} {[gcr.io/google_containers/kube-controller-manager:5ceda9f0fcc85cf5112fd4eab97edf76] 55494410} {[gcr.io/google_containers/kube-scheduler:34f7eca4d44271cb192f0c32a79fb31f] 36991674} {[python:2.7-slim-pyyaml] 205046931} {[gcr.io/google_containers/pause:2.0] 350164} {[gcr.io/google_containers/kube-registry-proxy:0.3] 151205002} {[gcr.io/google_containers/etcd:2.0.12] 15265152} {[gcr.io/google_containers/serve_hostname:1.1] 4522409}]}} | |
Feb 28 20:03:23.153: INFO: | |
Logging kubelet events for node spotter-kube-rkt-master | |
Feb 28 20:03:23.236: INFO: | |
Logging pods the kubelet thinks are on node spotter-kube-rkt-master | |
Feb 28 20:03:23.404: INFO: Unable to retrieve kubelet pods for node spotter-kube-rkt-master | |
Feb 28 20:03:23.404: INFO: | |
Logging node info for node spotter-kube-rkt-minion-8b1u | |
Feb 28 20:03:23.490: INFO: Node Info: &{{ } {spotter-kube-rkt-minion-8b1u /api/v1/nodes/spotter-kube-rkt-minion-8b1u 2e5bf2ae-de94-11e5-9ed9-42010af00002 1230 0 2016-02-28 19:26:11 -0800 PST <nil> <nil> map[failure-domain.beta.kubernetes.io/region:us-east1 failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-minion-8b1u beta.kubernetes.io/instance-type:n1-standard-2] map[]} {10.245.1.0/24 9623675355699996544 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-minion-8b1u false} {map[pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI}] map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] [{OutOfDisk False 2016-02-28 20:03:20 -0800 PST 2016-02-28 19:26:11 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 20:03:20 -0800 PST 2016-02-28 19:26:11 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.5} {ExternalIP 104.196.7.86}] {{10250}} {374318fe80761a3e4e3d72f72bd62802 374318FE-8076-1A3E-4E3D-72F72BD62802 b4070cab-7518-489e-9e3e-c5003667ca85 4.4.1-coreos CoreOS 960.0.0 (Coeur Rouge) rkt://1.0.0+git9003f4a-dirty v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[coreos.com/rkt/stage1-coreos:1.0.0+git9003f4a-dirty] 79951360} {[gcr.io/google_containers/update-demo:nautilus] 28160} {[gcr.io/google_containers/busybox:1.24] 1315840} {[gcr.io/google_containers/eptest:0.1] 2977792} {[gcr.io/google_containers/serve_hostname:1.1] 4529152} {[gcr.io/google_containers/mounttest:0.6] 2090496} {[gcr.io/google_containers/netexec:1.4] 7513088} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v0.1.0] 35423744}]}} | |
Feb 28 20:03:23.491: INFO: | |
Logging kubelet events for node spotter-kube-rkt-minion-8b1u | |
Feb 28 20:03:23.608: INFO: | |
Logging pods the kubelet thinks are on node spotter-kube-rkt-minion-8b1u | |
Feb 28 20:03:23.857: INFO: kubernetes-dashboard-v0.1.0-xx4kj started at <nil> (0 container statuses recorded) | |
Feb 28 20:03:23.857: INFO: back-off-cap started at <nil> (0 container statuses recorded) | |
Feb 28 20:03:24.178: INFO: | |
Latency metrics for node spotter-kube-rkt-minion-8b1u | |
Feb 28 20:03:24.178: INFO: | |
Logging node info for node spotter-kube-rkt-minion-yii0 | |
Feb 28 20:03:24.266: INFO: Node Info: &{{ } {spotter-kube-rkt-minion-yii0 /api/v1/nodes/spotter-kube-rkt-minion-yii0 28de6c1f-de94-11e5-9ed9-42010af00002 1226 0 2016-02-28 19:26:02 -0800 PST <nil> <nil> map[beta.kubernetes.io/instance-type:n1-standard-2 failure-domain.beta.kubernetes.io/region:us-east1 failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-minion-yii0] map[]} {10.245.3.0/24 9944607117135381947 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-minion-yii0 false} {map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] [{OutOfDisk False 2016-02-28 20:03:15 -0800 PST 2016-02-28 19:40:42 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 20:03:15 -0800 PST 2016-02-28 19:40:42 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.3} {ExternalIP 104.196.102.116}] {{10250}} {b38c5389751fa1f6a57a3e0bc169499a B38C5389-751F-A1F6-A57A-3E0BC169499A c09d0613-ea30-4bb2-a090-591baae2e0d3 4.4.1-coreos CoreOS 970.1.0 (Coeur Rouge) rkt://1.0.0+git9003f4a-dirty v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[coreos.com/rkt/stage1-coreos:1.0.0+git9003f4a-dirty] 79951360} {[gcr.io/google_containers/busybox:1.24] 1315840} {[gcr.io/google_containers/serve_hostname:1.1] 4529152} {[gcr.io/google_containers/exechealthz:1.0] 7314944} {[gcr.io/google_containers/skydns:2015-10-13-8c72f8c] 41955328} {[gcr.io/google_containers/kube2sky:1.12] 24685056} {[gcr.io/google_containers/etcd:2.0.9] 12827136}]}} | |
Feb 28 20:03:24.266: INFO: | |
Logging kubelet events for node spotter-kube-rkt-minion-yii0 | |
Feb 28 20:03:24.352: INFO: | |
Logging pods the kubelet thinks are on node spotter-kube-rkt-minion-yii0 | |
Feb 28 20:03:24.622: INFO: kube-dns-v10-ucm9e started at <nil> (0 container statuses recorded) | |
Feb 28 20:03:24.897: INFO: | |
Latency metrics for node spotter-kube-rkt-minion-yii0 | |
Feb 28 20:03:24.897: INFO: | |
Logging node info for node spotter-kube-rkt-minion-yo39 | |
Feb 28 20:03:24.982: INFO: Node Info: &{{ } {spotter-kube-rkt-minion-yo39 /api/v1/nodes/spotter-kube-rkt-minion-yo39 28d9f8e6-de94-11e5-9ed9-42010af00002 1229 0 2016-02-28 19:26:02 -0800 PST <nil> <nil> map[beta.kubernetes.io/instance-type:n1-standard-2 failure-domain.beta.kubernetes.io/region:us-east1 failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-minion-yo39] map[]} {10.245.2.0/24 5812756469575456144 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-minion-yo39 false} {map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] map[pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI}] [{OutOfDisk False 2016-02-28 20:03:19 -0800 PST 2016-02-28 19:39:36 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 20:03:19 -0800 PST 2016-02-28 19:39:36 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.4} {ExternalIP 104.196.107.90}] {{10250}} {d67f51f98b40c228849e17dc0b2cff8f D67F51F9-8B40-C228-849E-17DC0B2CFF8F 37634038-5d12-46d7-8197-fc5e74a7aef8 4.4.1-coreos CoreOS 970.1.0 (Coeur Rouge) rkt://1.0.0+git9003f4a-dirty v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[coreos.com/rkt/stage1-coreos:1.0.0+git9003f4a-dirty] 79951360} {[gcr.io/google_containers/update-demo:nautilus] 28160} {[gcr.io/google_containers/netexec:1.4] 7513088} {[gcr.io/google_containers/serve_hostname:1.1] 4529152} {[gcr.io/google_containers/glbc:0.5.1] 203040256} {[gcr.io/google_containers/defaultbackend:1.0] 7729152}]}} | |
Feb 28 20:03:24.982: INFO: | |
Logging kubelet events for node spotter-kube-rkt-minion-yo39 | |
Feb 28 20:03:25.070: INFO: | |
Logging pods the kubelet thinks are on node spotter-kube-rkt-minion-yo39 | |
Feb 28 20:03:25.331: INFO: l7-lb-controller-vg83c started at <nil> (0 container statuses recorded) | |
Feb 28 20:03:25.630: INFO: | |
Latency metrics for node spotter-kube-rkt-minion-yo39 | |
Feb 28 20:03:25.630: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-pods-26jij" for this suite. | |
• Failure [1070.169 seconds] | |
Pods | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1263 | |
should cap back-off at MaxContainerBackOff [Slow] [It] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1084 | |
Feb 28 20:03:22.629: expected 5m0s back-off got=2562047h47m16.854775807s in delay1 | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1072 | |
------------------------------ | |
Mesos | |
starts static pods on every node in the mesos cluster | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/mesos.go:73 | |
[BeforeEach] Mesos | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:83 | |
STEP: Creating a kubernetes client | |
Feb 28 20:03:31.047: INFO: >>> testContext.KubeConfig: /home/spotter/.kube/config | |
STEP: Building a namespace api object | |
Feb 28 20:03:31.141: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-i139i | |
Feb 28 20:03:31.224: INFO: Service account default in ns e2e-tests-pods-i139i had 0 secrets, ignoring for 2s: <nil> | |
Feb 28 20:03:33.308: INFO: Service account default in ns e2e-tests-pods-i139i with secrets found. (2.166976673s) | |
STEP: Waiting for a default service account to be provisioned in namespace | |
Feb 28 20:03:33.308: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-i139i | |
Feb 28 20:03:33.391: INFO: Service account default in ns e2e-tests-pods-i139i with secrets found. (83.17715ms) | |
[BeforeEach] Mesos | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/mesos.go:41 | |
Feb 28 20:03:33.392: SKIP: Only supported for providers [mesos/docker] (not gce) | |
[AfterEach] Mesos | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:84 | |
Feb 28 20:03:33.392: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-pods-i139i" for this suite. | |
S [SKIPPING] in Spec Setup (BeforeEach) [2.690 seconds] | |
Mesos | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/mesos.go:119 | |
starts static pods on every node in the mesos cluster [BeforeEach] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/mesos.go:73 | |
Feb 28 20:03:33.392: Only supported for providers [mesos/docker] (not gce) | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/util.go:301 | |
------------------------------ | |
Kubectl client Kubectl api-versions | |
should check if v1 is in available api versions [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:517 | |
[BeforeEach] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:83 | |
STEP: Creating a kubernetes client | |
Feb 28 20:03:33.737: INFO: >>> testContext.KubeConfig: /home/spotter/.kube/config | |
STEP: Building a namespace api object | |
Feb 28 20:03:33.825: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-6jad2 | |
Feb 28 20:03:33.911: INFO: Service account default in ns e2e-tests-kubectl-6jad2 had 0 secrets, ignoring for 2s: <nil> | |
Feb 28 20:03:35.993: INFO: Service account default in ns e2e-tests-kubectl-6jad2 with secrets found. (2.167586606s) | |
STEP: Waiting for a default service account to be provisioned in namespace | |
Feb 28 20:03:35.993: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-6jad2 | |
Feb 28 20:03:36.077: INFO: Service account default in ns e2e-tests-kubectl-6jad2 with secrets found. (83.532559ms) | |
[BeforeEach] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:113 | |
[It] should check if v1 is in available api versions [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:517 | |
STEP: validating api versions | |
Feb 28 20:03:36.077: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config api-versions' | |
Feb 28 20:03:36.806: INFO: stdout: "autoscaling/v1\nbatch/v1\nextensions/v1beta1\nv1\n" | |
Feb 28 20:03:36.806: INFO: stderr: "" | |
[AfterEach] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:84 | |
Feb 28 20:03:36.807: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-kubectl-6jad2" for this suite. | |
• | |
------------------------------ | |
MetricsGrabber | |
should grab all metrics from API server. | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/metrics_grabber_test.go:100 | |
[BeforeEach] MetricsGrabber | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:83 | |
STEP: Creating a kubernetes client | |
Feb 28 20:03:37.142: INFO: >>> testContext.KubeConfig: /home/spotter/.kube/config | |
STEP: Building a namespace api object | |
Feb 28 20:03:37.235: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-metrics-grabber-qts8g | |
Feb 28 20:03:37.315: INFO: Service account default in ns e2e-tests-metrics-grabber-qts8g had 0 secrets, ignoring for 2s: <nil> | |
Feb 28 20:03:39.399: INFO: Service account default in ns e2e-tests-metrics-grabber-qts8g with secrets found. (2.163744488s) | |
STEP: Waiting for a default service account to be provisioned in namespace | |
Feb 28 20:03:39.399: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-metrics-grabber-qts8g | |
Feb 28 20:03:39.482: INFO: Service account default in ns e2e-tests-metrics-grabber-qts8g with secrets found. (83.265247ms) | |
[BeforeEach] MetricsGrabber | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/metrics_grabber_test.go:90 | |
[It] should grab all metrics from API server. | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/metrics_grabber_test.go:100 | |
STEP: Connecting to /metrics endpoint | |
[AfterEach] MetricsGrabber | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:84 | |
Feb 28 20:03:39.702: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-metrics-grabber-qts8g" for this suite. | |
•SS | |
------------------------------ | |
Pod Disks | |
should schedule a pod w/two RW PDs both mounted to one container, write to PD, verify contents, delete pod, recreate pod, verify contents, and repeat in rapid succession [Slow] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:266 | |
[BeforeEach] Pod Disks | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:83 | |
STEP: Creating a kubernetes client | |
Feb 28 20:03:40.046: INFO: >>> testContext.KubeConfig: /home/spotter/.kube/config | |
STEP: Building a namespace api object | |
Feb 28 20:03:40.135: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pod-disks-qglfs | |
Feb 28 20:03:40.219: INFO: Service account default in ns e2e-tests-pod-disks-qglfs had 0 secrets, ignoring for 2s: <nil> | |
Feb 28 20:03:42.303: INFO: Service account default in ns e2e-tests-pod-disks-qglfs with secrets found. (2.168382888s) | |
STEP: Waiting for a default service account to be provisioned in namespace | |
Feb 28 20:03:42.303: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pod-disks-qglfs | |
Feb 28 20:03:42.387: INFO: Service account default in ns e2e-tests-pod-disks-qglfs with secrets found. (83.937305ms) | |
[BeforeEach] Pod Disks | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:64 | |
[It] should schedule a pod w/two RW PDs both mounted to one container, write to PD, verify contents, delete pod, recreate pod, verify contents, and repeat in rapid succession [Slow] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:266 | |
STEP: creating PD1 | |
Feb 28 20:03:47.771: INFO: Successfully created a new PD: "spotter-kube-rkt-6bcec342-de99-11e5-a1fb-54ee75510eb4". | |
STEP: creating PD2 | |
Feb 28 20:03:51.455: INFO: Successfully created a new PD: "spotter-kube-rkt-6ef71f20-de99-11e5-a1fb-54ee75510eb4". | |
Feb 28 20:03:51.455: INFO: PD Read/Writer Iteration #0 | |
STEP: submitting host0Pod to kubernetes | |
Feb 28 20:03:51.549: INFO: Waiting up to 15m0s for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 status to be running | |
Feb 28 20:03:51.633: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (84.222675ms elapsed) | |
Feb 28 20:03:53.713: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (2.163846281s elapsed) | |
Feb 28 20:03:55.799: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (4.249483483s elapsed) | |
Feb 28 20:03:57.885: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (6.335355222s elapsed) | |
Feb 28 20:03:59.973: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (8.423523693s elapsed) | |
Feb 28 20:04:02.057: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (10.508147347s elapsed) | |
Feb 28 20:04:04.140: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (12.590594327s elapsed) | |
Feb 28 20:04:06.225: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (14.675328138s elapsed) | |
Feb 28 20:04:08.308: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (16.759163424s elapsed) | |
Feb 28 20:04:10.391: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (18.842111527s elapsed) | |
Feb 28 20:04:12.478: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (20.928875808s elapsed) | |
Feb 28 20:04:14.563: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (23.014271458s elapsed) | |
Feb 28 20:04:16.651: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (25.101968371s elapsed) | |
Feb 28 20:04:18.733: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (27.183421282s elapsed) | |
Feb 28 20:04:20.817: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (29.267962113s elapsed) | |
Feb 28 20:04:22.901: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (31.351859149s elapsed) | |
Feb 28 20:04:24.985: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (33.435498982s elapsed) | |
Feb 28 20:04:27.069: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (35.520259601s elapsed) | |
Feb 28 20:04:29.157: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (37.607313456s elapsed) | |
Feb 28 20:04:31.238: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (39.688573563s elapsed) | |
Feb 28 20:04:33.326: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (41.776823305s elapsed) | |
Feb 28 20:04:35.413: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (43.863301755s elapsed) | |
Feb 28 20:04:37.498: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (45.948348281s elapsed) | |
Feb 28 20:04:39.580: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (48.031008016s elapsed) | |
Feb 28 20:04:41.665: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (50.115608799s elapsed) | |
Feb 28 20:04:43.747: INFO: Found pod 'pd-test-71292688-de99-11e5-a1fb-54ee75510eb4' on node 'spotter-kube-rkt-minion-8b1u' | |
STEP: writing a file in the container | |
Feb 28 20:04:43.747: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config exec --namespace=e2e-tests-pod-disks-qglfs pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 -c=mycontainer -- /bin/sh -c echo '3287282315050839767' > '/testpd1/tracker0'' | |
Feb 28 20:04:45.431: INFO: Wrote value: "3287282315050839767" to PD1 ("spotter-kube-rkt-6bcec342-de99-11e5-a1fb-54ee75510eb4") from pod "pd-test-71292688-de99-11e5-a1fb-54ee75510eb4" container "mycontainer" | |
STEP: writing a file in the container | |
Feb 28 20:04:45.431: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config exec --namespace=e2e-tests-pod-disks-qglfs pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 -c=mycontainer -- /bin/sh -c echo '8455666949293004623' > '/testpd2/tracker0'' | |
Feb 28 20:04:47.094: INFO: Wrote value: "8455666949293004623" to PD2 ("spotter-kube-rkt-6ef71f20-de99-11e5-a1fb-54ee75510eb4") from pod "pd-test-71292688-de99-11e5-a1fb-54ee75510eb4" container "mycontainer" | |
STEP: reading a file in the container | |
Feb 28 20:04:47.094: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config exec --namespace=e2e-tests-pod-disks-qglfs pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 -c=mycontainer -- cat /testpd1/tracker0' | |
Feb 28 20:04:48.762: INFO: Read file "/testpd1/tracker0" with content: 3287282315050839767 | |
STEP: reading a file in the container | |
Feb 28 20:04:48.762: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config exec --namespace=e2e-tests-pod-disks-qglfs pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 -c=mycontainer -- cat /testpd2/tracker0' | |
Feb 28 20:04:50.412: INFO: Read file "/testpd2/tracker0" with content: 8455666949293004623 | |
STEP: deleting host0Pod | |
Feb 28 20:04:50.501: INFO: PD Read/Writer Iteration #1 | |
STEP: submitting host0Pod to kubernetes | |
Feb 28 20:04:50.590: INFO: Waiting up to 15m0s for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 status to be running | |
Feb 28 20:04:50.675: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (84.831137ms elapsed) | |
Feb 28 20:04:52.762: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (2.171699876s elapsed) | |
Feb 28 20:04:54.849: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (4.258809035s elapsed) | |
Feb 28 20:04:56.933: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (6.343236297s elapsed) | |
Feb 28 20:04:59.017: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (8.427289741s elapsed) | |
Feb 28 20:05:01.104: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (10.51388732s elapsed) | |
Feb 28 20:05:03.193: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (12.602499443s elapsed) | |
Feb 28 20:05:05.288: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (14.697946192s elapsed) | |
Feb 28 20:05:07.372: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (16.782180264s elapsed) | |
Feb 28 20:05:09.457: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (18.867400381s elapsed) | |
Feb 28 20:05:11.544: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (20.954109316s elapsed) | |
Feb 28 20:05:13.631: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (23.040502282s elapsed) | |
Feb 28 20:05:15.719: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (25.128989442s elapsed) | |
Feb 28 20:05:17.804: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (27.213996411s elapsed) | |
Feb 28 20:05:19.887: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (29.297425861s elapsed) | |
Feb 28 20:05:21.969: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (31.378454787s elapsed) | |
Feb 28 20:05:24.055: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (33.465450971s elapsed) | |
Feb 28 20:05:26.139: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (35.548836148s elapsed) | |
Feb 28 20:05:28.223: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (37.633177727s elapsed) | |
Feb 28 20:05:30.310: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (39.72041677s elapsed) | |
Feb 28 20:05:32.392: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (41.801926745s elapsed) | |
Feb 28 20:05:34.478: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (43.888281737s elapsed) | |
Feb 28 20:05:36.568: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (45.977869122s elapsed) | |
Feb 28 20:05:38.655: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (48.064515795s elapsed) | |
Feb 28 20:05:40.739: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (50.148605163s elapsed) | |
Feb 28 20:05:42.831: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (52.241120985s elapsed) | |
Feb 28 20:05:44.917: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (54.326893439s elapsed) | |
Feb 28 20:05:47.002: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (56.411547998s elapsed) | |
Feb 28 20:05:49.084: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (58.493709473s elapsed) | |
Feb 28 20:05:51.169: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m0.57876794s elapsed) | |
Feb 28 20:05:53.255: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m2.665425713s elapsed) | |
Feb 28 20:05:55.338: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m4.747470667s elapsed) | |
Feb 28 20:05:57.421: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m6.831011435s elapsed) | |
Feb 28 20:05:59.509: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m8.918603353s elapsed) | |
Feb 28 20:06:01.593: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m11.002810743s elapsed) | |
Feb 28 20:06:03.678: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m13.087896345s elapsed) | |
Feb 28 20:06:05.763: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m15.172547529s elapsed) | |
Feb 28 20:06:07.848: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m17.257653261s elapsed) | |
Feb 28 20:06:09.933: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m19.342522503s elapsed) | |
Feb 28 20:06:12.022: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m21.432429469s elapsed) | |
Feb 28 20:06:14.108: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m23.518046918s elapsed) | |
Feb 28 20:06:16.200: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m25.610170535s elapsed) | |
Feb 28 20:06:18.281: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m27.691264272s elapsed) | |
Feb 28 20:06:20.365: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m29.775361543s elapsed) | |
Feb 28 20:06:22.451: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m31.860787908s elapsed) | |
Feb 28 20:06:24.537: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m33.947354234s elapsed) | |
Feb 28 20:06:26.638: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m36.048234359s elapsed) | |
Feb 28 20:06:28.725: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m38.134464079s elapsed) | |
Feb 28 20:06:30.815: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m40.22450481s elapsed) | |
Feb 28 20:06:32.901: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m42.311368914s elapsed) | |
Feb 28 20:06:34.985: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m44.394911295s elapsed) | |
Feb 28 20:06:37.070: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m46.480437769s elapsed) | |
Feb 28 20:06:39.153: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m48.562515849s elapsed) | |
Feb 28 20:06:41.240: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m50.650372999s elapsed) | |
Feb 28 20:06:43.324: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m52.733773161s elapsed) | |
Feb 28 20:06:45.409: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m54.818794619s elapsed) | |
Feb 28 20:06:47.496: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m56.905766474s elapsed) | |
Feb 28 20:06:49.579: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m58.988782517s elapsed) | |
Feb 28 20:06:51.664: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (2m1.074202064s elapsed) | |
Feb 28 20:06:53.750: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (2m3.159936256s elapsed) | |
Feb 28 20:06:55.837: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (2m5.247338862s elapsed) | |
Feb 28 20:06:57.921: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (2m7.331449663s elapsed) | |
Feb 28 20:07:00.005: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (2m9.414703258s elapsed) | |
Feb 28 20:07:02.091: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (2m11.501342753s elapsed) | |
Feb 28 20:07:04.175: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (2m13.585068558s elapsed) | |
Feb 28 20:07:06.266: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (2m15.675709313s elapsed) | |
Feb 28 20:07:08.349: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (2m17.758815773s elapsed) | |
Feb 28 20:07:10.434: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (2m19.844196378s elapsed) | |
Feb 28 20:07:12.516: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (2m21.926147408s elapsed) | |
Feb 28 20:07:14.601: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (2m24.01069809s elapsed) | |
Feb 28 20:07:16.683: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (2m26.093294069s elapsed) | |
Feb 28 20:07:18.770: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (2m28.179877541s elapsed) | |
Feb 28 20:07:20.855: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (2m30.264794774s elapsed) | |
Feb 28 20:07:22.937: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (2m32.346637744s elapsed) | |
Feb 28 20:07:25.023: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (2m34.433346661s elapsed) | |
Feb 28 20:07:27.110: INFO: Found pod 'pd-test-71292688-de99-11e5-a1fb-54ee75510eb4' on node 'spotter-kube-rkt-minion-8b1u' | |
STEP: reading a file in the container | |
Feb 28 20:07:27.110: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config exec --namespace=e2e-tests-pod-disks-qglfs pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 -c=mycontainer -- cat /testpd1/tracker0' | |
Feb 28 20:07:28.763: INFO: Read file "/testpd1/tracker0" with content: 3287282315050839767 | |
STEP: reading a file in the container | |
Feb 28 20:07:28.763: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config exec --namespace=e2e-tests-pod-disks-qglfs pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 -c=mycontainer -- cat /testpd2/tracker0' | |
Feb 28 20:07:30.407: INFO: Read file "/testpd2/tracker0" with content: 8455666949293004623 | |
STEP: writing a file in the container | |
Feb 28 20:07:30.407: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config exec --namespace=e2e-tests-pod-disks-qglfs pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 -c=mycontainer -- /bin/sh -c echo '5674185573265005150' > '/testpd1/tracker1'' | |
Feb 28 20:07:32.038: INFO: Wrote value: "5674185573265005150" to PD1 ("spotter-kube-rkt-6bcec342-de99-11e5-a1fb-54ee75510eb4") from pod "pd-test-71292688-de99-11e5-a1fb-54ee75510eb4" container "mycontainer" | |
STEP: writing a file in the container | |
Feb 28 20:07:32.038: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config exec --namespace=e2e-tests-pod-disks-qglfs pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 -c=mycontainer -- /bin/sh -c echo '4717334516107809904' > '/testpd2/tracker1'' | |
Feb 28 20:07:33.683: INFO: Wrote value: "4717334516107809904" to PD2 ("spotter-kube-rkt-6ef71f20-de99-11e5-a1fb-54ee75510eb4") from pod "pd-test-71292688-de99-11e5-a1fb-54ee75510eb4" container "mycontainer" | |
STEP: reading a file in the container | |
Feb 28 20:07:33.683: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config exec --namespace=e2e-tests-pod-disks-qglfs pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 -c=mycontainer -- cat /testpd2/tracker1' | |
Feb 28 20:07:35.324: INFO: Read file "/testpd2/tracker1" with content: 4717334516107809904 | |
STEP: reading a file in the container | |
Feb 28 20:07:35.324: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config exec --namespace=e2e-tests-pod-disks-qglfs pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 -c=mycontainer -- cat /testpd1/tracker0' | |
Feb 28 20:07:36.965: INFO: Read file "/testpd1/tracker0" with content: 3287282315050839767 | |
STEP: reading a file in the container | |
Feb 28 20:07:36.965: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config exec --namespace=e2e-tests-pod-disks-qglfs pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 -c=mycontainer -- cat /testpd2/tracker0' | |
Feb 28 20:07:38.618: INFO: Read file "/testpd2/tracker0" with content: 8455666949293004623 | |
STEP: reading a file in the container | |
Feb 28 20:07:38.618: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config exec --namespace=e2e-tests-pod-disks-qglfs pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 -c=mycontainer -- cat /testpd1/tracker1' | |
Feb 28 20:07:40.264: INFO: Read file "/testpd1/tracker1" with content: 5674185573265005150 | |
STEP: deleting host0Pod | |
Feb 28 20:07:40.354: INFO: PD Read/Writer Iteration #2 | |
STEP: submitting host0Pod to kubernetes | |
Feb 28 20:07:40.446: INFO: Waiting up to 15m0s for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 status to be running | |
Feb 28 20:07:40.531: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (85.668508ms elapsed)
Feb 28 20:07:42.616: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (2.169907049s elapsed)
Feb 28 20:07:44.708: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (4.262294785s elapsed)
Feb 28 20:07:46.792: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (6.346024602s elapsed)
Feb 28 20:07:48.880: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (8.434042553s elapsed)
Feb 28 20:07:50.963: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (10.51692728s elapsed)
Feb 28 20:07:53.055: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (12.609645822s elapsed)
Feb 28 20:07:55.139: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (14.693819912s elapsed)
Feb 28 20:07:57.226: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (16.780336262s elapsed)
Feb 28 20:07:59.312: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (18.866211325s elapsed)
Feb 28 20:08:01.396: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (20.950486987s elapsed)
Feb 28 20:08:03.493: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (23.046976699s elapsed)
Feb 28 20:08:05.577: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (25.131732328s elapsed)
Feb 28 20:08:07.660: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (27.214277627s elapsed)
Feb 28 20:08:09.742: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (29.296284211s elapsed)
Feb 28 20:08:11.833: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (31.387450368s elapsed)
Feb 28 20:08:13.915: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (33.469432861s elapsed)
Feb 28 20:08:16.002: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (35.556519261s elapsed)
Feb 28 20:08:18.089: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (37.643372959s elapsed)
Feb 28 20:08:20.176: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (39.730830162s elapsed)
Feb 28 20:08:22.260: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (41.81401595s elapsed)
Feb 28 20:08:24.364: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (43.918245816s elapsed)
Feb 28 20:08:26.451: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (46.005169484s elapsed)
Feb 28 20:08:28.536: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (48.089987137s elapsed)
Feb 28 20:08:30.623: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (50.177789883s elapsed)
Feb 28 20:08:32.713: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (52.266982795s elapsed)
Feb 28 20:08:34.799: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (54.353370912s elapsed)
Feb 28 20:08:36.884: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (56.438483035s elapsed)
Feb 28 20:08:38.968: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (58.522062857s elapsed)
Feb 28 20:08:41.052: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m0.606669989s elapsed)
Feb 28 20:08:43.142: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m2.696093385s elapsed)
Feb 28 20:08:45.227: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m4.781741461s elapsed)
Feb 28 20:08:47.323: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m6.877035198s elapsed)
Feb 28 20:08:49.409: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m8.962869745s elapsed)
Feb 28 20:08:51.494: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m11.048498838s elapsed)
Feb 28 20:08:53.583: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m13.137059259s elapsed)
Feb 28 20:08:55.665: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m15.219570019s elapsed)
Feb 28 20:08:57.749: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m17.30306116s elapsed)
Feb 28 20:08:59.832: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m19.386803127s elapsed)
Feb 28 20:09:01.913: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m21.467775792s elapsed)
Feb 28 20:09:03.996: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m23.549857111s elapsed)
Feb 28 20:09:06.078: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m25.632331301s elapsed)
Feb 28 20:09:08.161: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m27.715795941s elapsed)
Feb 28 20:09:10.247: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m29.800847085s elapsed)
Feb 28 20:09:12.330: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m31.884715832s elapsed)
Feb 28 20:09:14.418: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m33.972008849s elapsed)
Feb 28 20:09:16.506: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m36.060341406s elapsed)
Feb 28 20:09:18.591: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m38.145382196s elapsed)
Feb 28 20:09:20.673: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m40.227835367s elapsed)
Feb 28 20:09:22.760: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m42.314703134s elapsed)
Feb 28 20:09:24.843: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m44.397584941s elapsed)
Feb 28 20:09:26.926: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m46.480168049s elapsed)
Feb 28 20:09:29.013: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m48.567161604s elapsed)
Feb 28 20:09:31.100: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m50.654521299s elapsed)
Feb 28 20:09:33.186: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m52.740758993s elapsed)
Feb 28 20:09:35.284: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m54.838663096s elapsed)
Feb 28 20:09:37.369: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m56.923789388s elapsed)
Feb 28 20:09:39.455: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (1m59.009497257s elapsed)
Feb 28 20:09:41.551: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (2m1.105080346s elapsed)
Feb 28 20:09:43.638: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (2m3.192520751s elapsed)
Feb 28 20:09:45.724: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (2m5.278314316s elapsed)
Feb 28 20:09:47.810: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (2m7.36469798s elapsed)
Feb 28 20:09:49.896: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (2m9.450605986s elapsed)
Feb 28 20:09:51.984: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (2m11.53830264s elapsed)
Feb 28 20:09:54.065: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (2m13.619703933s elapsed)
Feb 28 20:09:56.150: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (2m15.704140252s elapsed)
Feb 28 20:09:58.236: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (2m17.790376493s elapsed)
Feb 28 20:10:00.320: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (2m19.873944564s elapsed)
Feb 28 20:10:02.403: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (2m21.957445057s elapsed)
Feb 28 20:10:04.490: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (2m24.044074705s elapsed)
Feb 28 20:10:06.578: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (2m26.132084416s elapsed)
Feb 28 20:10:08.657: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (2m28.211123915s elapsed)
Feb 28 20:10:10.741: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (2m30.295449324s elapsed)
Feb 28 20:10:12.826: INFO: Waiting for pod pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pod-disks-qglfs' status to be 'running'(found phase: "Pending", readiness: false) (2m32.380677799s elapsed)
Feb 28 20:10:14.908: INFO: Found pod 'pd-test-71292688-de99-11e5-a1fb-54ee75510eb4' on node 'spotter-kube-rkt-minion-8b1u'
STEP: reading a file in the container
Feb 28 20:10:14.908: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config exec --namespace=e2e-tests-pod-disks-qglfs pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 -c=mycontainer -- cat /testpd1/tracker0'
Feb 28 20:10:16.553: INFO: Read file "/testpd1/tracker0" with content: 3287282315050839767
STEP: reading a file in the container
Feb 28 20:10:16.553: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config exec --namespace=e2e-tests-pod-disks-qglfs pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 -c=mycontainer -- cat /testpd2/tracker0'
Feb 28 20:10:18.181: INFO: Read file "/testpd2/tracker0" with content: 8455666949293004623
STEP: reading a file in the container
Feb 28 20:10:18.181: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config exec --namespace=e2e-tests-pod-disks-qglfs pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 -c=mycontainer -- cat /testpd1/tracker1'
Feb 28 20:10:19.831: INFO: Read file "/testpd1/tracker1" with content: 5674185573265005150
STEP: reading a file in the container
Feb 28 20:10:19.831: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config exec --namespace=e2e-tests-pod-disks-qglfs pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 -c=mycontainer -- cat /testpd2/tracker1'
Feb 28 20:10:21.466: INFO: Read file "/testpd2/tracker1" with content: 4717334516107809904
STEP: writing a file in the container
Feb 28 20:10:21.466: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config exec --namespace=e2e-tests-pod-disks-qglfs pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 -c=mycontainer -- /bin/sh -c echo '3577370774921728789' > '/testpd1/tracker2''
Feb 28 20:10:23.131: INFO: Wrote value: "3577370774921728789" to PD1 ("spotter-kube-rkt-6bcec342-de99-11e5-a1fb-54ee75510eb4") from pod "pd-test-71292688-de99-11e5-a1fb-54ee75510eb4" container "mycontainer"
STEP: writing a file in the container
Feb 28 20:10:23.131: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config exec --namespace=e2e-tests-pod-disks-qglfs pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 -c=mycontainer -- /bin/sh -c echo '6340120684542354492' > '/testpd2/tracker2''
Feb 28 20:10:24.812: INFO: Wrote value: "6340120684542354492" to PD2 ("spotter-kube-rkt-6ef71f20-de99-11e5-a1fb-54ee75510eb4") from pod "pd-test-71292688-de99-11e5-a1fb-54ee75510eb4" container "mycontainer"
STEP: reading a file in the container
Feb 28 20:10:24.812: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config exec --namespace=e2e-tests-pod-disks-qglfs pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 -c=mycontainer -- cat /testpd1/tracker0'
Feb 28 20:10:26.449: INFO: Read file "/testpd1/tracker0" with content: 3287282315050839767
STEP: reading a file in the container
Feb 28 20:10:26.449: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config exec --namespace=e2e-tests-pod-disks-qglfs pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 -c=mycontainer -- cat /testpd2/tracker0'
Feb 28 20:10:28.074: INFO: Read file "/testpd2/tracker0" with content: 8455666949293004623
STEP: reading a file in the container
Feb 28 20:10:28.074: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config exec --namespace=e2e-tests-pod-disks-qglfs pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 -c=mycontainer -- cat /testpd1/tracker1'
Feb 28 20:10:29.722: INFO: Read file "/testpd1/tracker1" with content: 5674185573265005150
STEP: reading a file in the container
Feb 28 20:10:29.722: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config exec --namespace=e2e-tests-pod-disks-qglfs pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 -c=mycontainer -- cat /testpd2/tracker1'
Feb 28 20:10:31.370: INFO: Read file "/testpd2/tracker1" with content: 4717334516107809904
STEP: reading a file in the container
Feb 28 20:10:31.370: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config exec --namespace=e2e-tests-pod-disks-qglfs pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 -c=mycontainer -- cat /testpd1/tracker2'
Feb 28 20:10:33.036: INFO: Read file "/testpd1/tracker2" with content: 3577370774921728789
STEP: reading a file in the container
Feb 28 20:10:33.036: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config exec --namespace=e2e-tests-pod-disks-qglfs pd-test-71292688-de99-11e5-a1fb-54ee75510eb4 -c=mycontainer -- cat /testpd2/tracker2'
Feb 28 20:10:34.691: INFO: Read file "/testpd2/tracker2" with content: 6340120684542354492
STEP: deleting host0Pod
STEP: cleaning up PD-RW test environment
E0228 20:10:41.335254 11176 gce.go:405] GCE operation failed: googleapi: Error 400: Invalid value for field 'disk': 'spotter-kube-rkt-6bcec342-de99-11e5-a1fb-54ee75510eb4'.
STEP: Waiting for PD "spotter-kube-rkt-6bcec342-de99-11e5-a1fb-54ee75510eb4" to detach from "spotter-kube-rkt-minion-8b1u"
Feb 28 20:10:42.688: INFO: GCE PD "spotter-kube-rkt-6bcec342-de99-11e5-a1fb-54ee75510eb4" appears to have successfully detached from "spotter-kube-rkt-minion-8b1u".
STEP: Deleting PD "spotter-kube-rkt-6bcec342-de99-11e5-a1fb-54ee75510eb4"
Feb 28 20:10:43.852: INFO: Error deleting PD "spotter-kube-rkt-6bcec342-de99-11e5-a1fb-54ee75510eb4": googleapi: Error 400: The disk resource 'spotter-kube-rkt-6bcec342-de99-11e5-a1fb-54ee75510eb4' is already being used by 'spotter-kube-rkt-minion-8b1u', resourceInUseByAnotherResource
Feb 28 20:10:43.852: INFO: Couldn't delete PD "spotter-kube-rkt-6bcec342-de99-11e5-a1fb-54ee75510eb4". Sleeping 5 seconds (googleapi: Error 400: The disk resource 'spotter-kube-rkt-6bcec342-de99-11e5-a1fb-54ee75510eb4' is already being used by 'spotter-kube-rkt-minion-8b1u', resourceInUseByAnotherResource)
Feb 28 20:10:49.964: INFO: Error deleting PD "spotter-kube-rkt-6bcec342-de99-11e5-a1fb-54ee75510eb4": googleapi: Error 400: The disk resource 'spotter-kube-rkt-6bcec342-de99-11e5-a1fb-54ee75510eb4' is already being used by 'spotter-kube-rkt-minion-8b1u', resourceInUseByAnotherResource
Feb 28 20:10:49.964: INFO: Couldn't delete PD "spotter-kube-rkt-6bcec342-de99-11e5-a1fb-54ee75510eb4". Sleeping 5 seconds (googleapi: Error 400: The disk resource 'spotter-kube-rkt-6bcec342-de99-11e5-a1fb-54ee75510eb4' is already being used by 'spotter-kube-rkt-minion-8b1u', resourceInUseByAnotherResource)
Feb 28 20:10:56.067: INFO: Error deleting PD "spotter-kube-rkt-6bcec342-de99-11e5-a1fb-54ee75510eb4": googleapi: Error 400: The disk resource 'spotter-kube-rkt-6bcec342-de99-11e5-a1fb-54ee75510eb4' is already being used by 'spotter-kube-rkt-minion-8b1u', resourceInUseByAnotherResource
Feb 28 20:10:56.067: INFO: Couldn't delete PD "spotter-kube-rkt-6bcec342-de99-11e5-a1fb-54ee75510eb4". Sleeping 5 seconds (googleapi: Error 400: The disk resource 'spotter-kube-rkt-6bcec342-de99-11e5-a1fb-54ee75510eb4' is already being used by 'spotter-kube-rkt-minion-8b1u', resourceInUseByAnotherResource)
Feb 28 20:11:02.205: INFO: Error deleting PD "spotter-kube-rkt-6bcec342-de99-11e5-a1fb-54ee75510eb4": googleapi: Error 400: The disk resource 'spotter-kube-rkt-6bcec342-de99-11e5-a1fb-54ee75510eb4' is already being used by 'spotter-kube-rkt-minion-8b1u', resourceInUseByAnotherResource
Feb 28 20:11:02.205: INFO: Couldn't delete PD "spotter-kube-rkt-6bcec342-de99-11e5-a1fb-54ee75510eb4". Sleeping 5 seconds (googleapi: Error 400: The disk resource 'spotter-kube-rkt-6bcec342-de99-11e5-a1fb-54ee75510eb4' is already being used by 'spotter-kube-rkt-minion-8b1u', resourceInUseByAnotherResource)
Feb 28 20:11:12.093: INFO: Successfully deleted PD "spotter-kube-rkt-6bcec342-de99-11e5-a1fb-54ee75510eb4".
E0228 20:11:17.854317 11176 gce.go:405] GCE operation failed: googleapi: Error 400: Invalid value for field 'disk': 'spotter-kube-rkt-6ef71f20-de99-11e5-a1fb-54ee75510eb4'.
STEP: Waiting for PD "spotter-kube-rkt-6ef71f20-de99-11e5-a1fb-54ee75510eb4" to detach from "spotter-kube-rkt-minion-8b1u"
Feb 28 20:11:19.099: INFO: GCE PD "spotter-kube-rkt-6ef71f20-de99-11e5-a1fb-54ee75510eb4" appears to have successfully detached from "spotter-kube-rkt-minion-8b1u".
STEP: Deleting PD "spotter-kube-rkt-6ef71f20-de99-11e5-a1fb-54ee75510eb4"
Feb 28 20:11:24.008: INFO: Successfully deleted PD "spotter-kube-rkt-6ef71f20-de99-11e5-a1fb-54ee75510eb4".
[AfterEach] Pod Disks
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:84
Feb 28 20:11:24.008: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-disks-qglfs" for this suite.
• [SLOW TEST:469.390 seconds]
Pod Disks
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:267
should schedule a pod w/two RW PDs both mounted to one container, write to PD, verify contents, delete pod, recreate pod, verify contents, and repeat in rapid succession [Slow]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:266
------------------------------
Mesos
schedules pods annotated with roles on correct slaves
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/mesos.go:118
[BeforeEach] Mesos
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:83
STEP: Creating a kubernetes client
Feb 28 20:11:29.435: INFO: >>> testContext.KubeConfig: /home/spotter/.kube/config
STEP: Building a namespace api object
Feb 28 20:11:29.524: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-819q5
Feb 28 20:11:29.607: INFO: Service account default in ns e2e-tests-pods-819q5 had 0 secrets, ignoring for 2s: <nil>
Feb 28 20:11:31.692: INFO: Service account default in ns e2e-tests-pods-819q5 with secrets found. (2.167481023s)
STEP: Waiting for a default service account to be provisioned in namespace
Feb 28 20:11:31.692: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-819q5
Feb 28 20:11:31.772: INFO: Service account default in ns e2e-tests-pods-819q5 with secrets found. (79.848032ms)
[BeforeEach] Mesos
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/mesos.go:41
Feb 28 20:11:31.772: SKIP: Only supported for providers [mesos/docker] (not gce)
[AfterEach] Mesos
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:84
Feb 28 20:11:31.772: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-819q5" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [7.762 seconds]
Mesos
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/mesos.go:119
schedules pods annotated with roles on correct slaves [BeforeEach]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/mesos.go:118
Feb 28 20:11:31.772: Only supported for providers [mesos/docker] (not gce)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/util.go:301
------------------------------
Pods
should contain environment variables for services [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:638
[BeforeEach] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:83
STEP: Creating a kubernetes client
Feb 28 20:11:37.197: INFO: >>> testContext.KubeConfig: /home/spotter/.kube/config
STEP: Building a namespace api object
Feb 28 20:11:37.286: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-utq0h
Feb 28 20:11:37.376: INFO: Service account default in ns e2e-tests-pods-utq0h with secrets found. (89.572605ms)
STEP: Waiting for a default service account to be provisioned in namespace
Feb 28 20:11:37.376: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-utq0h
Feb 28 20:11:37.464: INFO: Service account default in ns e2e-tests-pods-utq0h with secrets found. (88.354129ms)
[It] should contain environment variables for services [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:638
Feb 28 20:11:37.558: INFO: Waiting up to 5m0s for pod server-envvars-86ec8348-de9a-11e5-a1fb-54ee75510eb4 status to be running
Feb 28 20:11:37.649: INFO: Waiting for pod server-envvars-86ec8348-de9a-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pods-utq0h' status to be 'running'(found phase: "Pending", readiness: false) (91.532701ms elapsed)
Feb 28 20:11:39.735: INFO: Waiting for pod server-envvars-86ec8348-de9a-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pods-utq0h' status to be 'running'(found phase: "Pending", readiness: false) (2.176872847s elapsed)
Feb 28 20:11:41.819: INFO: Found pod 'server-envvars-86ec8348-de9a-11e5-a1fb-54ee75510eb4' on node 'spotter-kube-rkt-minion-8b1u'
STEP: Creating a pod to test service env
Feb 28 20:11:42.005: INFO: Waiting up to 5m0s for pod client-envvars-89935af5-de9a-11e5-a1fb-54ee75510eb4 status to be success or failure
Feb 28 20:11:42.087: INFO: Nil State.Terminated for container 'env3cont' in pod 'client-envvars-89935af5-de9a-11e5-a1fb-54ee75510eb4' in namespace 'e2e-tests-pods-utq0h' so far
Feb 28 20:11:42.087: INFO: Waiting for pod client-envvars-89935af5-de9a-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pods-utq0h' status to be 'success or failure'(found phase: "Pending", readiness: false) (81.984103ms elapsed)
Feb 28 20:11:44.167: INFO: Nil State.Terminated for container 'env3cont' in pod 'client-envvars-89935af5-de9a-11e5-a1fb-54ee75510eb4' in namespace 'e2e-tests-pods-utq0h' so far
Feb 28 20:11:44.168: INFO: Waiting for pod client-envvars-89935af5-de9a-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pods-utq0h' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.162373648s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node spotter-kube-rkt-minion-8b1u pod client-envvars-89935af5-de9a-11e5-a1fb-54ee75510eb4 container env3cont: <nil>
STEP: Successfully fetched pod logs:KUBERNETES_PORT=tcp://10.0.0.1:443
KUBERNETES_SERVICE_PORT=443
FOOSERVICE_PORT_8765_TCP_PORT=8765
USER=root
FOOSERVICE_PORT_8765_TCP_PROTO=tcp
AC_APP_NAME=env3cont
SHLVL=1
HOME=/root
FOOSERVICE_PORT_8765_TCP=tcp://10.0.105.153:8765
LOGNAME=root
KUBERNETES_PORT_443_TCP_ADDR=10.0.0.1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
SHELL=/bin/sh
FOOSERVICE_SERVICE_HOST=10.0.105.153
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.0.0.1:443
PWD=/
KUBERNETES_SERVICE_HOST=10.0.0.1
FOOSERVICE_PORT=tcp://10.0.105.153:8765
FOOSERVICE_SERVICE_PORT=8765
FOOSERVICE_PORT_8765_TCP_ADDR=10.0.105.153
[AfterEach] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:84
Feb 28 20:11:46.709: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-utq0h" for this suite.
• [SLOW TEST:14.935 seconds]
Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1263
should contain environment variables for services [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:638
------------------------------
Kubectl client Guestbook application
should create and stop a working application [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:172
[BeforeEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:83
STEP: Creating a kubernetes client
Feb 28 20:11:52.132: INFO: >>> testContext.KubeConfig: /home/spotter/.kube/config
STEP: Building a namespace api object
Feb 28 20:11:52.219: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-i6gxc
Feb 28 20:11:52.301: INFO: Get service account default in ns e2e-tests-kubectl-i6gxc failed, ignoring for 2s: serviceaccounts "default" not found
Feb 28 20:11:54.383: INFO: Service account default in ns e2e-tests-kubectl-i6gxc with secrets found. (2.163820743s)
STEP: Waiting for a default service account to be provisioned in namespace
Feb 28 20:11:54.383: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-i6gxc
Feb 28 20:11:54.466: INFO: Service account default in ns e2e-tests-kubectl-i6gxc with secrets found. (83.418835ms)
[BeforeEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:113
[BeforeEach] Guestbook application
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:160
[It] should create and stop a working application [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:172
STEP: creating all guestbook components
Feb 28 20:11:54.551: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config create -f /home/spotter/gocode/src/k8s.io/y-kubernetes/examples/guestbook --namespace=e2e-tests-kubectl-i6gxc'
Feb 28 20:11:55.969: INFO: stdout: "replicationcontroller \"frontend\" created\nservice \"frontend\" created\nreplicationcontroller \"redis-master\" created\nservice \"redis-master\" created\nreplicationcontroller \"redis-slave\" created\nservice \"redis-slave\" created\n"
Feb 28 20:11:55.969: INFO: stderr: ""
STEP: validating guestbook app
Feb 28 20:11:55.969: INFO: Waiting for frontend to serve content.
Feb 28 20:11:56.134: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:12:01.298: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:12:06.466: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:12:11.629: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:12:16.804: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:12:21.967: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:12:27.132: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:12:32.298: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:12:37.472: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:12:42.634: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:12:47.801: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:12:52.968: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:12:58.138: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:13:03.301: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:13:08.459: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:13:13.624: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:13:18.790: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:13:23.953: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:13:29.118: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:13:34.288: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:13:39.454: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:13:44.616: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:13:49.782: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:13:54.950: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:14:00.116: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:14:05.281: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:14:10.450: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:14:15.620: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:14:20.783: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:14:25.952: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:14:31.120: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:14:36.289: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:14:41.459: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:14:46.623: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:14:51.785: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:14:56.952: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:15:02.119: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:15:07.288: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:15:12.452: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:15:17.615: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:15:22.786: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:15:27.958: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:15:33.132: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:15:38.298: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:15:43.465: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:15:48.629: INFO: Failed to get response from guestbook. err: an error on the server has prevented the request from succeeding (get services frontend), response:
Feb 28 20:15:53.790: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:15:58.955: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:16:04.120: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:16:09.284: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:16:14.452: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:16:19.616: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:16:24.785: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:16:29.959: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:16:35.127: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:16:40.292: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:16:45.457: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:16:50.627: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:16:55.790: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:17:00.953: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:17:06.118: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:17:11.284: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:17:16.452: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:17:21.617: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:17:26.782: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:17:31.948: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:17:37.112: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:17:42.274: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:17:47.437: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:17:52.603: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:17:57.770: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:18:02.934: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:18:08.103: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:18:13.276: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:18:18.446: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:18:23.608: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:18:28.774: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:18:33.941: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:18:39.099: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:18:44.265: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:18:49.436: INFO: Failed to get response from guestbook. err: an error on the server has prevented the request from succeeding (get services frontend), response:
Feb 28 20:18:54.603: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:18:59.765: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:19:04.928: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:19:10.097: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:19:15.262: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:19:20.433: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:19:25.598: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:19:30.764: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:19:35.933: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:19:41.100: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:19:46.266: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:19:51.432: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:19:56.598: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:20:01.759: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:20:06.923: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:20:12.087: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:20:17.249: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:20:22.420: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:20:27.587: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:20:32.771: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:20:37.941: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:20:43.106: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:20:48.278: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:20:53.446: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:20:58.613: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:21:03.782: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:21:08.957: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:21:14.121: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:21:19.280: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:21:24.445: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:21:29.611: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:21:34.775: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:21:39.944: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:21:45.109: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:21:50.276: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:21:55.443: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Feb 28 20:22:00.443: FAIL: Frontend service did not start serving content in 600 seconds.
STEP: using delete to clean up resources
Feb 28 20:22:00.443: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config delete --grace-period=0 -f /home/spotter/gocode/src/k8s.io/y-kubernetes/examples/guestbook --namespace=e2e-tests-kubectl-i6gxc'
Feb 28 20:22:09.355: INFO: stdout: "replicationcontroller \"frontend\" deleted\nservice \"frontend\" deleted\nreplicationcontroller \"redis-master\" deleted\nservice \"redis-master\" deleted\nreplicationcontroller \"redis-slave\" deleted\nservice \"redis-slave\" deleted\n"
Feb 28 20:22:09.355: INFO: stderr: ""
Feb 28 20:22:09.355: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get rc,svc -l app=guestbook,tier=frontend --no-headers --namespace=e2e-tests-kubectl-i6gxc'
Feb 28 20:22:10.091: INFO: stdout: ""
Feb 28 20:22:10.091: INFO: stderr: ""
Feb 28 20:22:10.091: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -l app=guestbook,tier=frontend --namespace=e2e-tests-kubectl-i6gxc -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 28 20:22:10.735: INFO: stdout: ""
Feb 28 20:22:10.735: INFO: stderr: ""
Feb 28 20:22:10.736: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get rc,svc -l app=redis,role=master --no-headers --namespace=e2e-tests-kubectl-i6gxc'
Feb 28 20:22:11.479: INFO: stdout: ""
Feb 28 20:22:11.479: INFO: stderr: ""
Feb 28 20:22:11.479: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -l app=redis,role=master --namespace=e2e-tests-kubectl-i6gxc -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 28 20:22:12.134: INFO: stdout: ""
Feb 28 20:22:12.134: INFO: stderr: ""
Feb 28 20:22:12.134: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get rc,svc -l app=redis,role=slave --no-headers --namespace=e2e-tests-kubectl-i6gxc'
Feb 28 20:22:12.896: INFO: stdout: ""
Feb 28 20:22:12.896: INFO: stderr: ""
Feb 28 20:22:12.896: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -l app=redis,role=slave --namespace=e2e-tests-kubectl-i6gxc -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 28 20:22:13.541: INFO: stdout: ""
Feb 28 20:22:13.541: INFO: stderr: ""
[AfterEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:84
STEP: Collecting events from namespace "e2e-tests-kubectl-i6gxc".
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:11:54 -0800 PST - event for frontend-6yr3g: {kubelet spotter-kube-rkt-minion-yo39} Pulling: pulling image "gcr.io/google_samples/gb-frontend:v4"
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:11:55 -0800 PST - event for frontend: {replication-controller } SuccessfulCreate: Created pod: frontend-5p3bn
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:11:55 -0800 PST - event for frontend: {replication-controller } SuccessfulCreate: Created pod: frontend-6yr3g
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:11:55 -0800 PST - event for frontend: {replication-controller } SuccessfulCreate: Created pod: frontend-b7zrx
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:11:55 -0800 PST - event for frontend-5p3bn: {kubelet spotter-kube-rkt-minion-8b1u} Pulling: pulling image "gcr.io/google_samples/gb-frontend:v4"
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:11:55 -0800 PST - event for frontend-5p3bn: {default-scheduler } Scheduled: Successfully assigned frontend-5p3bn to spotter-kube-rkt-minion-8b1u
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:11:55 -0800 PST - event for frontend-6yr3g: {default-scheduler } Scheduled: Successfully assigned frontend-6yr3g to spotter-kube-rkt-minion-yo39
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:11:55 -0800 PST - event for frontend-b7zrx: {default-scheduler } Scheduled: Successfully assigned frontend-b7zrx to spotter-kube-rkt-minion-yii0
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:11:55 -0800 PST - event for frontend-b7zrx: {kubelet spotter-kube-rkt-minion-yii0} Pulling: pulling image "gcr.io/google_samples/gb-frontend:v4"
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:11:55 -0800 PST - event for redis-master: {replication-controller } SuccessfulCreate: Created pod: redis-master-d97p7
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:11:55 -0800 PST - event for redis-master-d97p7: {default-scheduler } Scheduled: Successfully assigned redis-master-d97p7 to spotter-kube-rkt-minion-8b1u
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:11:55 -0800 PST - event for redis-slave: {replication-controller } SuccessfulCreate: Created pod: redis-slave-lande
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:11:55 -0800 PST - event for redis-slave: {replication-controller } SuccessfulCreate: Created pod: redis-slave-4kzo7
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:11:55 -0800 PST - event for redis-slave-4kzo7: {default-scheduler } Scheduled: Successfully assigned redis-slave-4kzo7 to spotter-kube-rkt-minion-yo39
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:11:55 -0800 PST - event for redis-slave-lande: {default-scheduler } Scheduled: Successfully assigned redis-slave-lande to spotter-kube-rkt-minion-8b1u
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:12:57 -0800 PST - event for frontend-6yr3g: {kubelet spotter-kube-rkt-minion-yo39} Pulled: Successfully pulled image "gcr.io/google_samples/gb-frontend:v4"
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:12:57 -0800 PST - event for redis-slave-4kzo7: {kubelet spotter-kube-rkt-minion-yo39} Pulling: pulling image "gcr.io/google_samples/gb-redisslave:v1"
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:12:59 -0800 PST - event for frontend-5p3bn: {kubelet spotter-kube-rkt-minion-8b1u} Pulled: Successfully pulled image "gcr.io/google_samples/gb-frontend:v4"
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:12:59 -0800 PST - event for redis-master-d97p7: {kubelet spotter-kube-rkt-minion-8b1u} Pulling: pulling image "redis"
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:13:00 -0800 PST - event for frontend-b7zrx: {kubelet spotter-kube-rkt-minion-yii0} Failed: Failed to pull image "gcr.io/google_samples/gb-frontend:v4": failed to run [fetch docker://gcr.io/google_samples/gb-frontend:v4]: exit status 1
stdout:
stderr: image: remote fetching from URL "docker://gcr.io/google_samples/gb-frontend:v4"
Downloading 77e39ee82117: 0 B/51.4 MB
Downloading 77e39ee82117: 585 B/51.4 MB
Downloading 77e39ee82117: 16.8 MB/51.4 MB
Downloading 77e39ee82117: 51.4 MB/51.4 MB
Downloading 5eb1402f0414: 0 B/32 B
Downloading 5eb1402f0414: 32 B/32 B
Downloading 9875148deea6: 0 B/8.72 MB
Downloading 9875148deea6: 587 B/8.72 MB
Downloading 9875148deea6: 8.72 MB/8.72 MB
Downloading 84afe4c7837f: 0 B/69.3 MB
Downloading 84afe4c7837f: 585 B/69.3 MB
Downloading 84afe4c7837f: 32.3 MB/69.3 MB
Downloading 84afe4c7837f: 64.8 MB/69.3 MB
Downloading 84afe4c7837f: 69.3 MB/69.3 MB
Downloading 712c316b6968: 0 B/32 B
Downloading 712c316b6968: 32 B/32 B
Downloading 28039777256d: 0 B/179 B
Downloading 28039777256d: 179 B/179 B
Downloading ad4328ae804a: 0 B/2.84 MB
Downloading ad4328ae804a: 587 B/2.84 MB
Downloading ad4328ae804a: 2.84 MB/2.84 MB
Downloading b142f2174b5e: 0 B/326 B
Downloading b142f2174b5e: 326 B/326 B
Downloading ffdd177b39f2: 0 B/425 B
Downloading ffdd177b39f2: 425 B/425 B
Downloading 811710e15889: 0 B/3.36 KB
Downloading 811710e15889: 3.31 KB/3.36 KB
Downloading 811710e15889: 3.36 KB/3.36 KB
Downloading 110c69a010cb: 0 B/865 B
Downloading 110c69a010cb: 865 B/865 B
Downloading 7e542773f358: 0 B/32 B
Downloading 7e542773f358: 32 B/32 B
Downloading f2bca23b1438: 0 B/32 B
Downloading f2bca23b1438: 32 B/32 B
Downloading b17828ca5684: 0 B/32 B
Downloading b17828ca5684: 32 B/32 B
Downloading 1a390283c088: 0 B/7.58 KB
Downloading 1a390283c088: 3.31 KB/7.58 KB
Downloading 1a390283c088: 7.58 KB/7.58 KB
Downloading b5e5700e2e7c: 0 B/32 B
Downloading b5e5700e2e7c: 32 B/32 B
Downloading cba21160070c: 0 B/32 B
Downloading cba21160070c: 32 B/32 B
Downloading 95bdb0420a10: 0 B/32 B
Downloading 95bdb0420a10: 32 B/32 B
Downloading 06bebfc5c372: 0 B/32.1 MB
Downloading 06bebfc5c372: 3.3 KB/32.1 MB
Downloading 06bebfc5c372: 32.1 MB/32.1 MB
Downloading 6f4ecdc07386: 0 B/1.6 KB
Downloading 6f4ecdc07386: 593 B/1.6 KB
Downloading 6f4ecdc07386: 1.6 KB/1.6 KB
Downloading 6fd5cfc7dc52: 0 B/291 B
Downloading 6fd5cfc7dc52: 291 B/291 B
Downloading 687bc5b4af21: 0 B/32 B
Downloading 687bc5b4af21: 32 B/32 B
Downloading 58a8d12ac53e: 0 B/32 B
Downloading 58a8d12ac53e: 32 B/32 B
Downloading 4d7fa46fa09d: 0 B/32 B
Downloading 4d7fa46fa09d: 32 B/32 B
Downloading 214af5356748: 0 B/9.38 MB
Downloading 214af5356748: 587 B/9.38 MB
Downloading 214af5356748: 9.38 MB/9.38 MB
Downloading ee83cbd0620c: 0 B/5.65 MB
Downloading ee83cbd0620c: 3.3 KB/5.65 MB
Downloading ee83cbd0620c: 5.65 MB/5.65 MB
Downloading 315a8fde694e: 0 B/1.07 KB
Downloading 315a8fde694e: 593 B/1.07 KB
Downloading 315a8fde694e: 1.07 KB/1.07 KB
Downloading 6d58be200c5e: 0 B/449 KB
Downloading 6d58be200c5e: 581 B/449 KB
Downloading 6d58be200c5e: 449 KB/449 KB
Downloading e9bae97bee81: 0 B/610 B
Downloading e9bae97bee81: 595 B/610 B
Downloading e9bae97bee81: 610 B/610 B
Downloading 862e2f3425c1: 0 B/560 B
Downloading 862e2f3425c1: 560 B/560 B
Downloading 96e118be539d: 0 B/639 B
Downloading 96e118be539d: 595 B/639 B
Downloading 96e118be539d: 639 B/639 B
fetch: cannot acquire lock: resource temporarily unavailable
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:13:05 -0800 PST - event for frontend-b7zrx: {kubelet spotter-kube-rkt-minion-yii0} Started: Started with rkt id b38b68c1
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:13:05 -0800 PST - event for frontend-b7zrx: {kubelet spotter-kube-rkt-minion-yii0} Created: Created with rkt id b38b68c1
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:13:19 -0800 PST - event for redis-slave-4kzo7: {kubelet spotter-kube-rkt-minion-yo39} Pulled: Successfully pulled image "gcr.io/google_samples/gb-redisslave:v1"
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:13:23 -0800 PST - event for redis-master-d97p7: {kubelet spotter-kube-rkt-minion-8b1u} Pulled: Successfully pulled image "redis"
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:13:23 -0800 PST - event for redis-slave-lande: {kubelet spotter-kube-rkt-minion-8b1u} Pulling: pulling image "gcr.io/google_samples/gb-redisslave:v1"
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:13:30 -0800 PST - event for frontend-6yr3g: {kubelet spotter-kube-rkt-minion-yo39} Created: Created with rkt id 2c2c2c45
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:13:30 -0800 PST - event for frontend-6yr3g: {kubelet spotter-kube-rkt-minion-yo39} Started: Started with rkt id 2c2c2c45
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:13:31 -0800 PST - event for redis-slave-4kzo7: {kubelet spotter-kube-rkt-minion-yo39} Started: Started with rkt id 61e9416d
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:13:31 -0800 PST - event for redis-slave-4kzo7: {kubelet spotter-kube-rkt-minion-yo39} Created: Created with rkt id 61e9416d
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:13:32 -0800 PST - event for frontend-6yr3g: {kubelet spotter-kube-rkt-minion-yo39} Pulled: Container image "gcr.io/google_samples/gb-frontend:v4" already present on machine
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:13:36 -0800 PST - event for frontend-5p3bn: {kubelet spotter-kube-rkt-minion-8b1u} Started: Started with rkt id 4d2bd8bc
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:13:36 -0800 PST - event for frontend-5p3bn: {kubelet spotter-kube-rkt-minion-8b1u} Created: Created with rkt id 4d2bd8bc
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:13:38 -0800 PST - event for frontend-5p3bn: {kubelet spotter-kube-rkt-minion-8b1u} Pulled: Container image "gcr.io/google_samples/gb-frontend:v4" already present on machine
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:13:39 -0800 PST - event for frontend-6yr3g: {kubelet spotter-kube-rkt-minion-yo39} Created: Created with rkt id 6b13fb5e
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:13:39 -0800 PST - event for frontend-6yr3g: {kubelet spotter-kube-rkt-minion-yo39} Started: Started with rkt id 6b13fb5e
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:13:39 -0800 PST - event for redis-master-d97p7: {kubelet spotter-kube-rkt-minion-8b1u} Started: Started with rkt id 30f9b4dd
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:13:39 -0800 PST - event for redis-master-d97p7: {kubelet spotter-kube-rkt-minion-8b1u} Created: Created with rkt id 30f9b4dd
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:13:42 -0800 PST - event for frontend-6yr3g: {kubelet spotter-kube-rkt-minion-yo39} Failed: Failed to create rkt container with error: failed to run [prepare --quiet --pod-manifest /tmp/manifest-frontend-6yr3g-141077074]: exit status 1
stdout:
stderr: image: using image from file /opt/rkt/stage1-coreos.aci
prepare: error setting up stage0: cannot acquire lock: resource temporarily unavailable
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:13:42 -0800 PST - event for frontend-6yr3g: {kubelet spotter-kube-rkt-minion-yo39} FailedSync: Error syncing pod, skipping: failed to SyncPod: failed to run [prepare --quiet --pod-manifest /tmp/manifest-frontend-6yr3g-141077074]: exit status 1
stdout:
stderr: image: using image from file /opt/rkt/stage1-coreos.aci
prepare: error setting up stage0: cannot acquire lock: resource temporarily unavailable
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:13:48 -0800 PST - event for redis-slave-lande: {kubelet spotter-kube-rkt-minion-8b1u} Pulled: Successfully pulled image "gcr.io/google_samples/gb-redisslave:v1"
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:13:51 -0800 PST - event for frontend-5p3bn: {kubelet spotter-kube-rkt-minion-8b1u} Started: Started with rkt id b9e747db
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:13:51 -0800 PST - event for frontend-5p3bn: {kubelet spotter-kube-rkt-minion-8b1u} Created: Created with rkt id b9e747db
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:13:57 -0800 PST - event for frontend-5p3bn: {kubelet spotter-kube-rkt-minion-8b1u} Failed: Failed to create rkt container with error: failed to run [prepare --quiet --pod-manifest /tmp/manifest-frontend-5p3bn-969652553]: exit status 1
stdout:
stderr: image: using image from file /opt/rkt/stage1-coreos.aci
prepare: error setting up stage0: cannot acquire lock: resource temporarily unavailable
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:13:57 -0800 PST - event for frontend-5p3bn: {kubelet spotter-kube-rkt-minion-8b1u} FailedSync: Error syncing pod, skipping: failed to SyncPod: failed to run [prepare --quiet --pod-manifest /tmp/manifest-frontend-5p3bn-969652553]: exit status 1
stdout:
stderr: image: using image from file /opt/rkt/stage1-coreos.aci
prepare: error setting up stage0: cannot acquire lock: resource temporarily unavailable
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:13:58 -0800 PST - event for redis-slave-lande: {kubelet spotter-kube-rkt-minion-8b1u} Started: Started with rkt id 1228da03
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:13:58 -0800 PST - event for redis-slave-lande: {kubelet spotter-kube-rkt-minion-8b1u} Created: Created with rkt id 1228da03
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:14:04 -0800 PST - event for frontend-6yr3g: {kubelet spotter-kube-rkt-minion-yo39} Started: Started with rkt id 26d2a10a
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:14:04 -0800 PST - event for frontend-6yr3g: {kubelet spotter-kube-rkt-minion-yo39} Created: Created with rkt id 26d2a10a | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:14:16 -0800 PST - event for frontend-5p3bn: {kubelet spotter-kube-rkt-minion-8b1u} Started: Started with rkt id 93639001 | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:14:16 -0800 PST - event for frontend-5p3bn: {kubelet spotter-kube-rkt-minion-8b1u} Created: Created with rkt id 93639001 | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:14:19 -0800 PST - event for frontend-b7zrx: {kubelet spotter-kube-rkt-minion-yii0} Pulled: Container image "gcr.io/google_samples/gb-frontend:v4" already present on machine | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:14:43 -0800 PST - event for frontend-b7zrx: {kubelet spotter-kube-rkt-minion-yii0} Created: Created with rkt id e66cd701 | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:15:24 -0800 PST - event for frontend-6yr3g: {kubelet spotter-kube-rkt-minion-yo39} Created: Created with rkt id 16a784bc | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:15:24 -0800 PST - event for frontend-6yr3g: {kubelet spotter-kube-rkt-minion-yo39} Started: Started with rkt id 16a784bc | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:15:38 -0800 PST - event for frontend-5p3bn: {kubelet spotter-kube-rkt-minion-8b1u} Created: Created with rkt id 07e3df10 | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:15:38 -0800 PST - event for frontend-5p3bn: {kubelet spotter-kube-rkt-minion-8b1u} Started: Started with rkt id 07e3df10 | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:15:46 -0800 PST - event for frontend-5p3bn: {kubelet spotter-kube-rkt-minion-8b1u} Created: Created with rkt id a7b71c47 | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:15:46 -0800 PST - event for frontend-5p3bn: {kubelet spotter-kube-rkt-minion-8b1u} Started: Started with rkt id a7b71c47 | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:15:47 -0800 PST - event for frontend-6yr3g: {kubelet spotter-kube-rkt-minion-yo39} Created: Created with rkt id f1f41477 | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:15:47 -0800 PST - event for frontend-6yr3g: {kubelet spotter-kube-rkt-minion-yo39} Started: Started with rkt id f1f41477 | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:15:54 -0800 PST - event for frontend-5p3bn: {kubelet spotter-kube-rkt-minion-8b1u} Started: Started with rkt id 7ce95cc3 | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:15:54 -0800 PST - event for frontend-5p3bn: {kubelet spotter-kube-rkt-minion-8b1u} Created: Created with rkt id 7ce95cc3 | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:15:55 -0800 PST - event for frontend-6yr3g: {kubelet spotter-kube-rkt-minion-yo39} Created: Created with rkt id 5f6fef49 | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:15:55 -0800 PST - event for frontend-6yr3g: {kubelet spotter-kube-rkt-minion-yo39} Started: Started with rkt id 5f6fef49 | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:16:13 -0800 PST - event for frontend-b7zrx: {kubelet spotter-kube-rkt-minion-yii0} Started: Started with rkt id e66cd701 | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:17:05 -0800 PST - event for frontend-5p3bn: {kubelet spotter-kube-rkt-minion-8b1u} Started: Started with rkt id 01ea0267 | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:17:05 -0800 PST - event for frontend-5p3bn: {kubelet spotter-kube-rkt-minion-8b1u} Created: Created with rkt id 01ea0267 | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:17:27 -0800 PST - event for frontend-5p3bn: {kubelet spotter-kube-rkt-minion-8b1u} Started: Started with rkt id fc1299d7 | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:17:27 -0800 PST - event for frontend-5p3bn: {kubelet spotter-kube-rkt-minion-8b1u} Created: Created with rkt id fc1299d7 | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:17:32 -0800 PST - event for frontend-6yr3g: {kubelet spotter-kube-rkt-minion-yo39} Started: Started with rkt id d521f02a | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:17:32 -0800 PST - event for frontend-6yr3g: {kubelet spotter-kube-rkt-minion-yo39} Created: Created with rkt id d521f02a | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:17:36 -0800 PST - event for frontend-5p3bn: {kubelet spotter-kube-rkt-minion-8b1u} Created: Created with rkt id 442dbd1d | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:17:36 -0800 PST - event for frontend-5p3bn: {kubelet spotter-kube-rkt-minion-8b1u} Started: Started with rkt id 442dbd1d | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:17:39 -0800 PST - event for frontend-b7zrx: {kubelet spotter-kube-rkt-minion-yii0} Created: Created with rkt id b3f8a575 | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:17:39 -0800 PST - event for frontend-b7zrx: {kubelet spotter-kube-rkt-minion-yii0} Started: Started with rkt id b3f8a575 | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:17:47 -0800 PST - event for frontend-b7zrx: {kubelet spotter-kube-rkt-minion-yii0} Created: Created with rkt id 7be61150 | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:17:47 -0800 PST - event for frontend-b7zrx: {kubelet spotter-kube-rkt-minion-yii0} Started: Started with rkt id 7be61150 | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:17:48 -0800 PST - event for frontend-6yr3g: {kubelet spotter-kube-rkt-minion-yo39} Started: Started with rkt id 66cf570b | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:17:48 -0800 PST - event for frontend-6yr3g: {kubelet spotter-kube-rkt-minion-yo39} Created: Created with rkt id 66cf570b | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:18:47 -0800 PST - event for frontend-6yr3g: {kubelet spotter-kube-rkt-minion-yo39} Started: Started with rkt id 867755af | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:18:47 -0800 PST - event for frontend-6yr3g: {kubelet spotter-kube-rkt-minion-yo39} Created: Created with rkt id 867755af | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:18:51 -0800 PST - event for frontend-b7zrx: {kubelet spotter-kube-rkt-minion-yii0} Created: Created with rkt id 70559e7d | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:18:51 -0800 PST - event for frontend-b7zrx: {kubelet spotter-kube-rkt-minion-yii0} Started: Started with rkt id 70559e7d | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:18:52 -0800 PST - event for frontend-5p3bn: {kubelet spotter-kube-rkt-minion-8b1u} Started: (events with common reason combined) | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:18:52 -0800 PST - event for frontend-5p3bn: {kubelet spotter-kube-rkt-minion-8b1u} Created: (events with common reason combined) | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:18:56 -0800 PST - event for frontend-6yr3g: {kubelet spotter-kube-rkt-minion-yo39} Started: (events with common reason combined) | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:18:56 -0800 PST - event for frontend-6yr3g: {kubelet spotter-kube-rkt-minion-yo39} Created: (events with common reason combined) | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:19:52 -0800 PST - event for frontend-b7zrx: {kubelet spotter-kube-rkt-minion-yii0} Started: Started with rkt id 36a3032d | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:19:52 -0800 PST - event for frontend-b7zrx: {kubelet spotter-kube-rkt-minion-yii0} Created: Created with rkt id 36a3032d | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:20:51 -0800 PST - event for frontend-b7zrx: {kubelet spotter-kube-rkt-minion-yii0} Created: Created with rkt id e3ce3988 | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:20:51 -0800 PST - event for frontend-b7zrx: {kubelet spotter-kube-rkt-minion-yii0} Started: Started with rkt id e3ce3988 | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:21:00 -0800 PST - event for frontend-b7zrx: {kubelet spotter-kube-rkt-minion-yii0} Started: Started with rkt id 5f141a0c | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:21:00 -0800 PST - event for frontend-b7zrx: {kubelet spotter-kube-rkt-minion-yii0} Created: Created with rkt id 5f141a0c | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:21:51 -0800 PST - event for frontend-5p3bn: {kubelet spotter-kube-rkt-minion-8b1u} Failed: Failed to create rkt container with error: failed to run [prepare --quiet --pod-manifest /tmp/manifest-frontend-5p3bn-302775088]: exit status 1 | |
stdout: | |
stderr: image: using image from file /opt/rkt/stage1-coreos.aci | |
prepare: error setting up stage0: cannot acquire lock: resource temporarily unavailable | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:21:51 -0800 PST - event for frontend-5p3bn: {kubelet spotter-kube-rkt-minion-8b1u} FailedSync: Error syncing pod, skipping: failed to SyncPod: failed to run [prepare --quiet --pod-manifest /tmp/manifest-frontend-5p3bn-302775088]: exit status 1 | |
stdout: | |
stderr: image: using image from file /opt/rkt/stage1-coreos.aci | |
prepare: error setting up stage0: cannot acquire lock: resource temporarily unavailable | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:22:02 -0800 PST - event for frontend: {replication-controller } SuccessfulDelete: Deleted pod: frontend-6yr3g | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:22:02 -0800 PST - event for frontend: {replication-controller } SuccessfulDelete: Deleted pod: frontend-5p3bn | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:22:02 -0800 PST - event for frontend: {replication-controller } SuccessfulDelete: Deleted pod: frontend-b7zrx | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:22:03 -0800 PST - event for frontend: {service-controller } DeletingLoadBalancer: Deleting load balancer | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:22:04 -0800 PST - event for frontend: {service-controller } DeletedLoadBalancer: Deleted load balancer | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:22:05 -0800 PST - event for redis-master: {replication-controller } SuccessfulDelete: Deleted pod: redis-master-d97p7 | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:22:05 -0800 PST - event for redis-master-d97p7: {kubelet spotter-kube-rkt-minion-8b1u} Killing: Killing with rkt id 30f9b4dd | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:22:06 -0800 PST - event for redis-master: {service-controller } DeletedLoadBalancer: Deleted load balancer | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:22:06 -0800 PST - event for redis-master: {service-controller } DeletingLoadBalancer: Deleting load balancer | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:22:07 -0800 PST - event for redis-slave: {replication-controller } SuccessfulDelete: Deleted pod: redis-slave-4kzo7 | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:22:07 -0800 PST - event for redis-slave: {replication-controller } SuccessfulDelete: Deleted pod: redis-slave-lande | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:22:07 -0800 PST - event for redis-slave-4kzo7: {kubelet spotter-kube-rkt-minion-yo39} Killing: Killing with rkt id 61e9416d | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:22:07 -0800 PST - event for redis-slave-lande: {kubelet spotter-kube-rkt-minion-8b1u} Killing: Killing with rkt id 1228da03 | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:22:08 -0800 PST - event for frontend-b7zrx: {kubelet spotter-kube-rkt-minion-yii0} Created: Created with rkt id b684a0ec | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:22:08 -0800 PST - event for frontend-b7zrx: {kubelet spotter-kube-rkt-minion-yii0} Started: Started with rkt id b684a0ec | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:22:09 -0800 PST - event for redis-slave: {service-controller } DeletingLoadBalancer: Deleting load balancer | |
Feb 28 20:22:13.819: INFO: At 2016-02-28 20:22:09 -0800 PST - event for redis-slave: {service-controller } DeletedLoadBalancer: Deleted load balancer | |
Feb 28 20:22:13.906: INFO: POD NODE PHASE GRACE CONDITIONS | |
Feb 28 20:22:13.906: INFO: etcd-server-events-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:25:46 -0800 PST }] | |
Feb 28 20:22:13.906: INFO: etcd-server-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:25:44 -0800 PST }] | |
Feb 28 20:22:13.906: INFO: kube-apiserver-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:25:59 -0800 PST }] | |
Feb 28 20:22:13.906: INFO: kube-controller-manager-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:26:19 -0800 PST }] | |
Feb 28 20:22:13.906: INFO: kube-dns-v10-ucm9e spotter-kube-rkt-minion-yii0 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:40:53 -0800 PST }] | |
Feb 28 20:22:13.906: INFO: kube-scheduler-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:25:59 -0800 PST }] | |
Feb 28 20:22:13.906: INFO: kubernetes-dashboard-v0.1.0-xx4kj spotter-kube-rkt-minion-8b1u Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:26:43 -0800 PST }] | |
Feb 28 20:22:13.906: INFO: l7-lb-controller-vg83c spotter-kube-rkt-minion-yo39 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:42:06 -0800 PST }] | |
Feb 28 20:22:13.906: INFO: | |
Feb 28 20:22:13.990: INFO: | |
Logging node info for node spotter-kube-rkt-master | |
Feb 28 20:22:14.075: INFO: Node Info: &{{ } {spotter-kube-rkt-master /api/v1/nodes/spotter-kube-rkt-master 28a99cb5-de94-11e5-9ed9-42010af00002 1890 0 2016-02-28 19:26:02 -0800 PST <nil> <nil> map[kubernetes.io/hostname:spotter-kube-rkt-master beta.kubernetes.io/instance-type:n1-standard-2 failure-domain.beta.kubernetes.io/region:us-east1 failure-domain.beta.kubernetes.io/zone:us-east1-b] map[]} {10.245.0.0/24 1057893855773431411 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-master true} {map[memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI}] map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] [{OutOfDisk False 2016-02-28 20:22:08 -0800 PST 2016-02-28 19:26:02 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 20:22:08 -0800 PST 2016-02-28 19:26:02 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.2} {ExternalIP 104.196.32.11}] {{10250}} {18f8f8ae3bfd63d33ca6152c840ac772 18F8F8AE-3BFD-63D3-3CA6-152C840AC772 85ba1a68-a60b-4788-8d0d-31581d8a1dbc 4.4.1-coreos CoreOS 960.0.0 (Coeur Rouge) docker://1.10.0 v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[gcr.io/google_containers/kube-apiserver:e75cb1e68585bef333c384799aa55b36] 62275978} {[gcr.io/google_containers/kube-controller-manager:5ceda9f0fcc85cf5112fd4eab97edf76] 55494410} {[gcr.io/google_containers/kube-scheduler:34f7eca4d44271cb192f0c32a79fb31f] 36991674} {[python:2.7-slim-pyyaml] 205046931} {[gcr.io/google_containers/pause:2.0] 350164} {[gcr.io/google_containers/kube-registry-proxy:0.3] 151205002} {[gcr.io/google_containers/etcd:2.0.12] 15265152} {[gcr.io/google_containers/serve_hostname:1.1] 4522409}]}} | |
Feb 28 20:22:14.075: INFO: | |
Logging kubelet events for node spotter-kube-rkt-master | |
Feb 28 20:22:14.163: INFO: | |
Logging pods the kubelet thinks are on node spotter-kube-rkt-master
Feb 28 20:22:14.322: INFO: Unable to retrieve kubelet pods for node spotter-kube-rkt-master | |
Feb 28 20:22:14.322: INFO: | |
Logging node info for node spotter-kube-rkt-minion-8b1u | |
Feb 28 20:22:14.409: INFO: Node Info: &{{ } {spotter-kube-rkt-minion-8b1u /api/v1/nodes/spotter-kube-rkt-minion-8b1u 2e5bf2ae-de94-11e5-9ed9-42010af00002 1875 0 2016-02-28 19:26:11 -0800 PST <nil> <nil> map[kubernetes.io/hostname:spotter-kube-rkt-minion-8b1u beta.kubernetes.io/instance-type:n1-standard-2 failure-domain.beta.kubernetes.io/region:us-east1 failure-domain.beta.kubernetes.io/zone:us-east1-b] map[]} {10.245.1.0/24 9623675355699996544 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-minion-8b1u false} {map[memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI}] map[memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI}] [{OutOfDisk False 2016-02-28 20:22:05 -0800 PST 2016-02-28 19:26:11 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 20:22:05 -0800 PST 2016-02-28 19:26:11 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.5} {ExternalIP 104.196.7.86}] {{10250}} {374318fe80761a3e4e3d72f72bd62802 374318FE-8076-1A3E-4E3D-72F72BD62802 b4070cab-7518-489e-9e3e-c5003667ca85 4.4.1-coreos CoreOS 960.0.0 (Coeur Rouge) rkt://1.0.0+git9003f4a-dirty v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[coreos.com/rkt/stage1-coreos:1.0.0+git9003f4a-dirty] 79951360} {[gcr.io/google_samples/gb-redisslave:v1] 113454592} {[registry-1.docker.io/library/redis:latest] 152067072} {[gcr.io/google_samples/gb-frontend:v4] 513876992} {[gcr.io/google_containers/update-demo:nautilus] 28160} {[gcr.io/google_containers/busybox:1.24] 1315840} {[gcr.io/google_containers/eptest:0.1] 2977792} {[gcr.io/google_containers/serve_hostname:1.1] 4529152} {[gcr.io/google_containers/mounttest:0.6] 2090496} {[gcr.io/google_containers/netexec:1.4] 7513088} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v0.1.0] 35423744}]}} | |
Feb 28 20:22:14.409: INFO: | |
Logging kubelet events for node spotter-kube-rkt-minion-8b1u | |
Feb 28 20:22:14.498: INFO: | |
Logging pods the kubelet thinks are on node spotter-kube-rkt-minion-8b1u
Feb 28 20:22:14.769: INFO: kubernetes-dashboard-v0.1.0-xx4kj started at <nil> (0 container statuses recorded) | |
Feb 28 20:22:15.100: INFO: | |
Latency metrics for node spotter-kube-rkt-minion-8b1u | |
Feb 28 20:22:15.100: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.9 Latency:1m43.46578s} | |
Feb 28 20:22:15.100: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.99 Latency:1m43.46578s} | |
Feb 28 20:22:15.100: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.5 Latency:1m41.57352s} | |
Feb 28 20:22:15.100: INFO: | |
Logging node info for node spotter-kube-rkt-minion-yii0 | |
Feb 28 20:22:15.191: INFO: Node Info: &{{ } {spotter-kube-rkt-minion-yii0 /api/v1/nodes/spotter-kube-rkt-minion-yii0 28de6c1f-de94-11e5-9ed9-42010af00002 1895 0 2016-02-28 19:26:02 -0800 PST <nil> <nil> map[failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-minion-yii0 beta.kubernetes.io/instance-type:n1-standard-2 failure-domain.beta.kubernetes.io/region:us-east1] map[]} {10.245.3.0/24 9944607117135381947 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-minion-yii0 false} {map[memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI}] map[memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI}] [{OutOfDisk False 2016-02-28 20:22:09 -0800 PST 2016-02-28 19:40:42 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 20:22:09 -0800 PST 2016-02-28 19:40:42 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.3} {ExternalIP 104.196.102.116}] {{10250}} {b38c5389751fa1f6a57a3e0bc169499a B38C5389-751F-A1F6-A57A-3E0BC169499A c09d0613-ea30-4bb2-a090-591baae2e0d3 4.4.1-coreos CoreOS 970.1.0 (Coeur Rouge) rkt://1.0.0+git9003f4a-dirty v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[coreos.com/rkt/stage1-coreos:1.0.0+git9003f4a-dirty] 79951360} {[gcr.io/google_samples/gb-frontend:v4] 513876992} {[gcr.io/google_containers/busybox:1.24] 1315840} {[gcr.io/google_containers/serve_hostname:1.1] 4529152} {[gcr.io/google_containers/exechealthz:1.0] 7314944} {[gcr.io/google_containers/skydns:2015-10-13-8c72f8c] 41955328} {[gcr.io/google_containers/kube2sky:1.12] 24685056} {[gcr.io/google_containers/etcd:2.0.9] 12827136}]}} | |
Feb 28 20:22:15.191: INFO: | |
Logging kubelet events for node spotter-kube-rkt-minion-yii0 | |
Feb 28 20:22:15.282: INFO: | |
Logging pods the kubelet thinks are on node spotter-kube-rkt-minion-yii0
Feb 28 20:22:15.566: INFO: kube-dns-v10-ucm9e started at <nil> (0 container statuses recorded) | |
Feb 28 20:22:15.884: INFO: | |
Latency metrics for node spotter-kube-rkt-minion-yii0 | |
Feb 28 20:22:15.884: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:5m44.381562s} | |
Feb 28 20:22:15.884: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:5m44.381562s} | |
Feb 28 20:22:15.884: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.5 Latency:5m44.381562s} | |
Feb 28 20:22:15.884: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.9 Latency:1m10.034059s} | |
Feb 28 20:22:15.884: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.5 Latency:1m10.034059s} | |
Feb 28 20:22:15.884: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.99 Latency:1m10.034059s} | |
Feb 28 20:22:15.884: INFO: | |
Logging node info for node spotter-kube-rkt-minion-yo39 | |
Feb 28 20:22:15.967: INFO: Node Info: &{{ } {spotter-kube-rkt-minion-yo39 /api/v1/nodes/spotter-kube-rkt-minion-yo39 28d9f8e6-de94-11e5-9ed9-42010af00002 1901 0 2016-02-28 19:26:02 -0800 PST <nil> <nil> map[failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-minion-yo39 beta.kubernetes.io/instance-type:n1-standard-2 failure-domain.beta.kubernetes.io/region:us-east1] map[]} {10.245.2.0/24 5812756469575456144 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-minion-yo39 false} {map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] map[pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI}] [{OutOfDisk False 2016-02-28 20:22:15 -0800 PST 2016-02-28 19:39:36 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 20:22:15 -0800 PST 2016-02-28 19:39:36 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.4} {ExternalIP 104.196.107.90}] {{10250}} {d67f51f98b40c228849e17dc0b2cff8f D67F51F9-8B40-C228-849E-17DC0B2CFF8F 37634038-5d12-46d7-8197-fc5e74a7aef8 4.4.1-coreos CoreOS 970.1.0 (Coeur Rouge) rkt://1.0.0+git9003f4a-dirty v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[coreos.com/rkt/stage1-coreos:1.0.0+git9003f4a-dirty] 79951360} {[gcr.io/google_samples/gb-redisslave:v1] 113454592} {[gcr.io/google_samples/gb-frontend:v4] 513876992} {[gcr.io/google_containers/update-demo:nautilus] 28160} {[gcr.io/google_containers/netexec:1.4] 7513088} {[gcr.io/google_containers/serve_hostname:1.1] 4529152} {[gcr.io/google_containers/glbc:0.5.1] 203040256} {[gcr.io/google_containers/defaultbackend:1.0] 7729152}]}} | |
Feb 28 20:22:15.967: INFO: | |
Logging kubelet events for node spotter-kube-rkt-minion-yo39 | |
Feb 28 20:22:16.054: INFO: | |
Logging pods the kubelet thinks are on node spotter-kube-rkt-minion-yo39
Feb 28 20:22:16.324: INFO: l7-lb-controller-vg83c started at <nil> (0 container statuses recorded) | |
Feb 28 20:22:16.632: INFO: | |
Latency metrics for node spotter-kube-rkt-minion-yo39 | |
Feb 28 20:22:16.633: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-kubectl-i6gxc" for this suite. | |
• Failure [629.928 seconds] | |
Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1082 | |
Guestbook application | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:173 | |
should create and stop a working application [Conformance] [It] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:172 | |
Feb 28 20:22:00.443: Frontend service did not start serving content in 600 seconds. | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1170 | |
------------------------------ | |
S | |
------------------------------ | |
Kubectl client Kubectl expose | |
should create services for rc [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:722 | |
[BeforeEach] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:83 | |
STEP: Creating a kubernetes client | |
Feb 28 20:22:22.060: INFO: >>> testContext.KubeConfig: /home/spotter/.kube/config | |
STEP: Building a namespace api object | |
Feb 28 20:22:22.146: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-pq1gy | |
Feb 28 20:22:22.228: INFO: Service account default in ns e2e-tests-kubectl-pq1gy with secrets found. (81.886035ms) | |
STEP: Waiting for a default service account to be provisioned in namespace | |
Feb 28 20:22:22.228: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-pq1gy | |
Feb 28 20:22:22.312: INFO: Service account default in ns e2e-tests-kubectl-pq1gy with secrets found. (84.172237ms) | |
[BeforeEach] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:113 | |
[It] should create services for rc [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:722 | |
STEP: creating Redis RC | |
Feb 28 20:22:22.312: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config create -f /home/spotter/gocode/src/k8s.io/y-kubernetes/examples/guestbook-go/redis-master-controller.json --namespace=e2e-tests-kubectl-pq1gy' | |
Feb 28 20:22:23.081: INFO: stdout: "replicationcontroller \"redis-master\" created\n" | |
Feb 28 20:22:23.081: INFO: stderr: "" | |
Feb 28 20:22:23.163: INFO: Waiting up to 5m0s for pod redis-master-j2a35 status to be running | |
Feb 28 20:22:23.245: INFO: Waiting for pod redis-master-j2a35 in namespace 'e2e-tests-kubectl-pq1gy' status to be 'running'(found phase: "Pending", readiness: false) (82.386684ms elapsed) | |
Feb 28 20:22:25.324: INFO: Waiting for pod redis-master-j2a35 in namespace 'e2e-tests-kubectl-pq1gy' status to be 'running'(found phase: "Pending", readiness: false) (2.161369176s elapsed) | |
Feb 28 20:22:27.407: INFO: Waiting for pod redis-master-j2a35 in namespace 'e2e-tests-kubectl-pq1gy' status to be 'running'(found phase: "Pending", readiness: false) (4.244676953s elapsed) | |
Feb 28 20:22:29.493: INFO: Found pod 'redis-master-j2a35' on node 'spotter-kube-rkt-minion-8b1u' | |
Feb 28 20:22:29.493: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config log redis-master-j2a35 redis-master --namespace=e2e-tests-kubectl-pq1gy' | |
Feb 28 20:22:30.425: INFO: stdout: "4:C 29 Feb 04:22:27.475 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf\n4:M 29 Feb 04:22:27.476 # You requested maxclients of 10000 requiring at least 10032 max file descriptors.\n4:M 29 Feb 04:22:27.476 # Redis can't set maximum open files to 10032 because of OS error: Operation not permitted.\n4:M 29 Feb 04:22:27.476 # Current maximum open files is 4096. maxclients has been reduced to 4064 to compensate for low ulimit. If you need higher maxclients increase 'ulimit -n'.\n_._\n_.-``__ ''-._\n_.-`` `. `_. ''-._ Redis 3.0.7 (00000000/0) 64 bit\n.-`` .-```. ```\\/ _.,_ ''-._\n( ' , .-` | `, ) Running in standalone mode\n|`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n| `-._ `._ / _.-' | PID: 4\n`-._ `-._ `-./ _.-' _.-'\n|`-._`-._ `-.__.-' _.-'_.-'|\n| `-._`-._ _.-'_.-' | http://redis.io\n`-._ `-._`-.__.-'_.-' _.-'\n|`-._`-._ `-.__.-' _.-'_.-'|\n| `-._`-._ _.-'_.-' |\n`-._ `-._`-.__.-'_.-' _.-'\n`-._ `-.__.-' _.-'\n`-._ _.-'\n`-.__.-'\n4:M 29 Feb 04:22:27.477 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n4:M 29 Feb 04:22:27.477 # Server started, Redis version 3.0.7\n4:M 29 Feb 04:22:27.477 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n4:M 29 Feb 04:22:27.477 * The server is now ready to accept connections on port 6379\n" | |
Feb 28 20:22:30.425: INFO: stderr: ""
STEP: exposing RC
Feb 28 20:22:30.425: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-pq1gy'
Feb 28 20:22:31.165: INFO: stdout: "service \"rm2\" exposed\n"
Feb 28 20:22:31.165: INFO: stderr: ""
Feb 28 20:22:31.246: INFO: Service rm2 in namespace e2e-tests-kubectl-pq1gy found.
STEP: exposing service
Feb 28 20:22:33.409: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-pq1gy'
Feb 28 20:22:34.176: INFO: stdout: "service \"rm3\" exposed\n"
Feb 28 20:22:34.176: INFO: stderr: ""
Feb 28 20:22:34.260: INFO: Service rm3 in namespace e2e-tests-kubectl-pq1gy found.
[AfterEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:84
Feb 28 20:22:36.432: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-pq1gy" for this suite.
• [SLOW TEST:19.788 seconds]
Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1082
  Kubectl expose
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:723
    should create services for rc [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:722
------------------------------
S
------------------------------
Pods
  should be restarted with a docker exec "cat /tmp/health" liveness probe [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:665
[BeforeEach] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:83
STEP: Creating a kubernetes client
Feb 28 20:22:41.849: INFO: >>> testContext.KubeConfig: /home/spotter/.kube/config
STEP: Building a namespace api object
Feb 28 20:22:41.938: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-17obj
Feb 28 20:22:42.024: INFO: Service account default in ns e2e-tests-pods-17obj with secrets found. (86.533693ms)
STEP: Waiting for a default service account to be provisioned in namespace
Feb 28 20:22:42.025: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-17obj
Feb 28 20:22:42.110: INFO: Service account default in ns e2e-tests-pods-17obj with secrets found. (85.199975ms)
[It] should be restarted with a docker exec "cat /tmp/health" liveness probe [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:665
STEP: Creating pod liveness-exec in namespace e2e-tests-pods-17obj
Feb 28 20:22:42.200: INFO: Waiting up to 5m0s for pod liveness-exec status to be !pending
Feb 28 20:22:42.289: INFO: Waiting for pod liveness-exec in namespace 'e2e-tests-pods-17obj' status to be '!pending'(found phase: "Pending", readiness: false) (88.921614ms elapsed)
Feb 28 20:22:44.376: INFO: Waiting for pod liveness-exec in namespace 'e2e-tests-pods-17obj' status to be '!pending'(found phase: "Pending", readiness: false) (2.175935091s elapsed)
Feb 28 20:22:46.457: INFO: Saw pod 'liveness-exec' in namespace 'e2e-tests-pods-17obj' out of pending state (found '"Running"')
Feb 28 20:22:46.457: INFO: Started pod liveness-exec in namespace e2e-tests-pods-17obj
STEP: checking the pod's current state and verifying that restartCount is present
Feb 28 20:22:46.539: INFO: Initial restart count of pod liveness-exec is 0
Feb 28 20:23:07.509: INFO: Restart count of pod e2e-tests-pods-17obj/liveness-exec is now 1 (20.969793761s elapsed)
STEP: deleting the pod
[AfterEach] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:84
Feb 28 20:23:07.596: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-17obj" for this suite.
• [SLOW TEST:26.088 seconds]
Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1263
  should be restarted with a docker exec "cat /tmp/health" liveness probe [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:665
------------------------------
S
------------------------------
Kubectl client Kubectl run rc
  should create an rc from an image [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:902
[BeforeEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:83
STEP: Creating a kubernetes client
Feb 28 20:23:07.937: INFO: >>> testContext.KubeConfig: /home/spotter/.kube/config
STEP: Building a namespace api object
Feb 28 20:23:08.029: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-pqto6
Feb 28 20:23:08.130: INFO: Service account default in ns e2e-tests-kubectl-pqto6 with secrets found. (101.552479ms)
STEP: Waiting for a default service account to be provisioned in namespace
Feb 28 20:23:08.130: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-pqto6
Feb 28 20:23:08.213: INFO: Service account default in ns e2e-tests-kubectl-pqto6 with secrets found. (82.811078ms)
[BeforeEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:113
[BeforeEach] Kubectl run rc
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:870
[It] should create an rc from an image [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:902
STEP: running the image nginx
Feb 28 20:23:08.213: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config run e2e-test-nginx-rc --image=nginx --generator=run/v1 --namespace=e2e-tests-kubectl-pqto6'
Feb 28 20:23:08.864: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" created\n"
Feb 28 20:23:08.864: INFO: stderr: ""
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
[AfterEach] Kubectl run rc
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:874
Feb 28 20:23:09.035: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-pqto6'
Feb 28 20:23:12.179: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
Feb 28 20:23:12.179: INFO: stderr: ""
[AfterEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:84
Feb 28 20:23:12.179: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-pqto6" for this suite.
• [SLOW TEST:9.673 seconds]
Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1082
  Kubectl run rc
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:904
    should create an rc from an image [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:902
------------------------------
S
------------------------------
PrivilegedPod
  should test privileged pod
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/privileged.go:66
[BeforeEach] PrivilegedPod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:83
STEP: Creating a kubernetes client
Feb 28 20:23:17.610: INFO: >>> testContext.KubeConfig: /home/spotter/.kube/config
STEP: Building a namespace api object
Feb 28 20:23:17.697: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-e2e-privilegedpod-vgzkx
Feb 28 20:23:17.781: INFO: Service account default in ns e2e-tests-e2e-privilegedpod-vgzkx had 0 secrets, ignoring for 2s: <nil>
Feb 28 20:23:19.865: INFO: Service account default in ns e2e-tests-e2e-privilegedpod-vgzkx with secrets found. (2.168232987s)
STEP: Waiting for a default service account to be provisioned in namespace
Feb 28 20:23:19.865: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-e2e-privilegedpod-vgzkx
Feb 28 20:23:19.951: INFO: Service account default in ns e2e-tests-e2e-privilegedpod-vgzkx with secrets found. (85.81945ms)
[It] should test privileged pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/privileged.go:66
Feb 28 20:23:20.040: INFO: Waiting up to 5m0s for pod hostexec status to be running
Feb 28 20:23:20.121: INFO: Waiting for pod hostexec in namespace 'e2e-tests-e2e-privilegedpod-vgzkx' status to be 'running'(found phase: "Pending", readiness: false) (80.618439ms elapsed)
Feb 28 20:23:22.214: INFO: Waiting for pod hostexec in namespace 'e2e-tests-e2e-privilegedpod-vgzkx' status to be 'running'(found phase: "Pending", readiness: false) (2.17421348s elapsed)
Feb 28 20:23:24.314: INFO: Waiting for pod hostexec in namespace 'e2e-tests-e2e-privilegedpod-vgzkx' status to be 'running'(found phase: "Pending", readiness: false) (4.274023978s elapsed)
Feb 28 20:23:26.413: INFO: Waiting for pod hostexec in namespace 'e2e-tests-e2e-privilegedpod-vgzkx' status to be 'running'(found phase: "Pending", readiness: false) (6.372940448s elapsed)
Feb 28 20:23:28.492: INFO: Waiting for pod hostexec in namespace 'e2e-tests-e2e-privilegedpod-vgzkx' status to be 'running'(found phase: "Pending", readiness: false) (8.452222507s elapsed)
Feb 28 20:23:30.579: INFO: Found pod 'hostexec' on node 'spotter-kube-rkt-minion-8b1u'
STEP: Creating a privileged pod
Feb 28 20:23:30.678: INFO: Waiting up to 5m0s for pod privileged-pod status to be running
Feb 28 20:23:30.761: INFO: Waiting for pod privileged-pod in namespace 'e2e-tests-e2e-privilegedpod-vgzkx' status to be 'running'(found phase: "Pending", readiness: false) (83.174918ms elapsed)
Feb 28 20:23:32.842: INFO: Waiting for pod privileged-pod in namespace 'e2e-tests-e2e-privilegedpod-vgzkx' status to be 'running'(found phase: "Pending", readiness: false) (2.163669875s elapsed)
Feb 28 20:23:34.926: INFO: Found pod 'privileged-pod' on node 'spotter-kube-rkt-minion-yo39'
STEP: Executing privileged command on privileged container
STEP: Exec-ing into container over http. Running command:curl -q 'http://10.245.2.3:8080/shell?shellCommand=ip+link+add+dummy1+type+dummy'
Feb 28 20:23:35.013: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config exec --namespace=e2e-tests-e2e-privilegedpod-vgzkx hostexec -- /bin/sh -c curl -q 'http://10.245.2.3:8080/shell?shellCommand=ip+link+add+dummy1+type+dummy''
Feb 28 20:23:36.646: INFO: stdout: "{}"
Feb 28 20:23:36.646: INFO: stderr: " % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n\r 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\r100 2 100 2 0 0 102 0 --:--:-- --:--:-- --:--:-- 105\n"
Feb 28 20:23:36.646: INFO: Deserialized output is {}
STEP: Executing privileged command on non-privileged container
STEP: Exec-ing into container over http. Running command:curl -q 'http://10.245.2.3:9090/shell?shellCommand=ip+link+add+dummy1+type+dummy'
Feb 28 20:23:36.647: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config exec --namespace=e2e-tests-e2e-privilegedpod-vgzkx hostexec -- /bin/sh -c curl -q 'http://10.245.2.3:9090/shell?shellCommand=ip+link+add+dummy1+type+dummy''
Feb 28 20:23:38.306: INFO: stdout: "{\"error\":\"exit status 2\",\"output\":\"RTNETLINK answers: File exists\\n\"}"
Feb 28 20:23:38.306: INFO: stderr: " % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n\r 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\r100 69 100 69 0 0 18100 0 --:--:-- --:--:-- --:--:-- 23000\n"
Feb 28 20:23:38.306: INFO: Deserialized output is {"error":"exit status 2","output":"RTNETLINK answers: File exists\n"}
[AfterEach] PrivilegedPod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:84
Feb 28 20:23:38.306: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-privilegedpod-vgzkx" for this suite.
• [SLOW TEST:21.056 seconds]
PrivilegedPod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/privileged.go:67
  should test privileged pod
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/privileged.go:66
------------------------------
Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController
  Should scale from 5 pods to 3 pods and from 3 to 1
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:62
[BeforeEach] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:83
STEP: Creating a kubernetes client
Feb 28 20:23:38.666: INFO: >>> testContext.KubeConfig: /home/spotter/.kube/config
STEP: Building a namespace api object
Feb 28 20:23:38.761: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-horizontal-pod-autoscaling-llnqx
Feb 28 20:23:38.850: INFO: Service account default in ns e2e-tests-horizontal-pod-autoscaling-llnqx with secrets found. (89.22373ms)
STEP: Waiting for a default service account to be provisioned in namespace
Feb 28 20:23:38.850: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-horizontal-pod-autoscaling-llnqx
Feb 28 20:23:38.935: INFO: Service account default in ns e2e-tests-horizontal-pod-autoscaling-llnqx with secrets found. (84.441818ms)
[It] Should scale from 5 pods to 3 pods and from 3 to 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:62
STEP: Running consuming RC rc via replicationController with 5 replicas
STEP: creating replication controller rc in namespace e2e-tests-horizontal-pod-autoscaling-llnqx
Feb 28 20:23:39.116: INFO: Created replication controller with name: rc, namespace: e2e-tests-horizontal-pod-autoscaling-llnqx, replica count: 5
Feb 28 20:23:49.116: INFO: rc Pods: 5 out of 5 created, 0 running, 5 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Feb 28 20:23:59.116: INFO: rc Pods: 5 out of 5 created, 0 running, 5 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Feb 28 20:24:09.116: INFO: rc Pods: 5 out of 5 created, 4 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Feb 28 20:24:19.117: INFO: rc Pods: 5 out of 5 created, 5 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Feb 28 20:24:29.117: INFO: RC rc: consume 400 millicores in total
Feb 28 20:24:29.117: INFO: RC rc: consume 400 millicores in total
Feb 28 20:24:29.117: INFO: RC rc: consume 0 MB in total
Feb 28 20:24:29.117: INFO: RC rc: consume 0 MB in total
Feb 28 20:24:29.117: INFO: RC rc: sending 20 requests to consume 20 millicores each and 1 request to consume 0 millicores
Feb 28 20:24:29.117: INFO: RC rc: sending 0 requests to consume 100 MB each and 1 request to consume 0 MB
W0228 20:24:29.667866   11176 request.go:627] Throttling request took 77.661443ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:24:29.867901   11176 request.go:627] Throttling request took 277.50367ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:24:30.067886   11176 request.go:627] Throttling request took 476.010933ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/replicationcontrollers/rc
W0228 20:24:30.267900   11176 request.go:627] Throttling request took 669.543867ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:24:30.467882   11176 request.go:627] Throttling request took 864.632157ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:24:30.667945   11176 request.go:627] Throttling request took 1.052910213s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:24:30.867888   11176 request.go:627] Throttling request took 1.247899935s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:24:31.067886   11176 request.go:627] Throttling request took 1.437925991s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:24:31.267897   11176 request.go:627] Throttling request took 1.622974898s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:24:31.467911   11176 request.go:627] Throttling request took 1.817618721s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:24:31.667945   11176 request.go:627] Throttling request took 2.002410306s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:24:31.867905   11176 request.go:627] Throttling request took 1.715710768s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/replicationcontrollers/rc
Feb 28 20:24:31.948: INFO: replicationController: current replicas number 5 waiting to be 3
Feb 28 20:24:52.115: INFO: replicationController: current replicas number 5 waiting to be 3
Feb 28 20:24:59.117: INFO: RC rc: sending 20 requests to consume 20 millicores each and 1 request to consume 0 millicores
Feb 28 20:24:59.117: INFO: RC rc: sending 0 requests to consume 100 MB each and 1 request to consume 0 MB
W0228 20:24:59.867899   11176 request.go:627] Throttling request took 244.132395ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:25:00.067889   11176 request.go:627] Throttling request took 443.927743ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:25:00.267865   11176 request.go:627] Throttling request took 643.82123ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:25:00.467935   11176 request.go:627] Throttling request took 843.794386ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:25:00.667999   11176 request.go:627] Throttling request took 1.043790668s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:25:00.867845   11176 request.go:627] Throttling request took 1.243566596s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:25:01.067848   11176 request.go:627] Throttling request took 1.443494247s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:25:30.267863   11176 request.go:627] Throttling request took 639.228118ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:25:30.467901   11176 request.go:627] Throttling request took 839.196487ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:25:30.667899   11176 request.go:627] Throttling request took 1.039128535s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:25:30.867884   11176 request.go:627] Throttling request took 1.239047354s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:25:31.067888   11176 request.go:627] Throttling request took 1.43898865s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
Feb 28 20:25:32.459: INFO: replicationController: current replicas number 5 waiting to be 3
Feb 28 20:25:52.632: INFO: replicationController: current replicas number 5 waiting to be 3
Feb 28 20:25:59.118: INFO: RC rc: sending 20 requests to consume 20 millicores each and 1 request to consume 0 millicores
Feb 28 20:25:59.118: INFO: RC rc: sending 0 requests to consume 100 MB each and 1 request to consume 0 MB
W0228 20:25:59.667896   11176 request.go:627] Throttling request took 56.589557ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:25:59.867911   11176 request.go:627] Throttling request took 256.472303ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:26:00.067887   11176 request.go:627] Throttling request took 456.384358ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:26:00.267932   11176 request.go:627] Throttling request took 656.326536ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:26:00.467876   11176 request.go:627] Throttling request took 856.205518ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:26:00.667884   11176 request.go:627] Throttling request took 1.056133882s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:26:00.867940   11176 request.go:627] Throttling request took 1.255800477s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:26:01.067853   11176 request.go:627] Throttling request took 1.45563556s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
Feb 28 20:26:12.806: INFO: replicationController: current replicas number 5 waiting to be 3
Feb 28 20:26:29.118: INFO: RC rc: sending 20 requests to consume 20 millicores each and 1 request to consume 0 millicores
Feb 28 20:26:29.118: INFO: RC rc: sending 0 requests to consume 100 MB each and 1 request to consume 0 MB
W0228 20:26:30.286898   11176 request.go:627] Throttling request took 211.910189ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:26:30.467887   11176 request.go:627] Throttling request took 392.511027ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:26:30.667932   11176 request.go:627] Throttling request took 592.421916ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:26:30.867852   11176 request.go:627] Throttling request took 791.833787ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:26:31.067846   11176 request.go:627] Throttling request took 991.734372ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
Feb 28 20:26:32.969: INFO: replicationController: current replicas number 5 waiting to be 3
Feb 28 20:26:53.136: INFO: replicationController: current replicas number 5 waiting to be 3
Feb 28 20:26:59.118: INFO: RC rc: sending 20 requests to consume 20 millicores each and 1 request to consume 0 millicores
Feb 28 20:26:59.118: INFO: RC rc: sending 0 requests to consume 100 MB each and 1 request to consume 0 MB
W0228 20:26:59.667866   11176 request.go:627] Throttling request took 72.645058ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:26:59.867874   11176 request.go:627] Throttling request took 272.556246ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:27:00.067855   11176 request.go:627] Throttling request took 472.290536ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:27:00.267847   11176 request.go:627] Throttling request took 672.082046ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:27:00.467866   11176 request.go:627] Throttling request took 872.028761ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:27:00.667897   11176 request.go:627] Throttling request took 1.071977419s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:27:00.867935   11176 request.go:627] Throttling request took 1.271935619s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:27:01.067941   11176 request.go:627] Throttling request took 1.471866518s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
Feb 28 20:27:13.306: INFO: replicationController: current replicas number 5 waiting to be 3
Feb 28 20:27:29.118: INFO: RC rc: sending 20 requests to consume 20 millicores each and 1 request to consume 0 millicores
Feb 28 20:27:29.118: INFO: RC rc: sending 0 requests to consume 100 MB each and 1 request to consume 0 MB
W0228 20:27:29.667900   11176 request.go:627] Throttling request took 78.736412ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:27:29.867892   11176 request.go:627] Throttling request took 278.653088ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:27:30.067873   11176 request.go:627] Throttling request took 476.042737ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:27:30.267919   11176 request.go:627] Throttling request took 675.958485ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:27:30.467908   11176 request.go:627] Throttling request took 874.699483ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:27:30.667909   11176 request.go:627] Throttling request took 1.070160288s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:27:30.867896   11176 request.go:627] Throttling request took 1.269927577s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:27:31.067889   11176 request.go:627] Throttling request took 1.468671643s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
Feb 28 20:27:33.474: INFO: replicationController: current replicas number 5 waiting to be 3
Feb 28 20:27:53.648: INFO: replicationController: current replicas number 5 waiting to be 3
Feb 28 20:27:59.118: INFO: RC rc: sending 20 requests to consume 20 millicores each and 1 request to consume 0 millicores
Feb 28 20:27:59.119: INFO: RC rc: sending 0 requests to consume 100 MB each and 1 request to consume 0 MB
W0228 20:27:59.667844   11176 request.go:627] Throttling request took 78.308889ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:27:59.867944   11176 request.go:627] Throttling request took 278.126925ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:28:00.067894   11176 request.go:627] Throttling request took 477.936008ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:28:00.267888   11176 request.go:627] Throttling request took 677.860244ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:28:00.467900   11176 request.go:627] Throttling request took 877.806392ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:28:00.667908   11176 request.go:627] Throttling request took 1.076934026s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:28:00.867913   11176 request.go:627] Throttling request took 1.275510744s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:28:01.067897   11176 request.go:627] Throttling request took 1.471285124s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
Feb 28 20:28:13.820: INFO: replicationController: current replicas number 5 waiting to be 3
Feb 28 20:28:29.119: INFO: RC rc: sending 20 requests to consume 20 millicores each and 1 request to consume 0 millicores
Feb 28 20:28:29.119: INFO: RC rc: sending 0 requests to consume 100 MB each and 1 request to consume 0 MB
W0228 20:28:29.667840   11176 request.go:627] Throttling request took 82.661504ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:28:29.867876   11176 request.go:627] Throttling request took 282.538607ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:28:30.067902   11176 request.go:627] Throttling request took 482.313168ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:28:30.267909   11176 request.go:627] Throttling request took 682.22268ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20
W0228 20:28:30.467945 11176 request.go:627] Throttling request took 882.186166ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:28:30.667902 11176 request.go:627] Throttling request took 1.082029143s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:28:30.867895 11176 request.go:627] Throttling request took 1.2784841s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:28:31.067932 11176 request.go:627] Throttling request took 1.464504936s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
Feb 28 20:28:33.987: INFO: replicationController: current replicas number 5 waiting to be 3 | |
Feb 28 20:28:54.157: INFO: replicationController: current replicas number 5 waiting to be 3 | |
Feb 28 20:28:59.119: INFO: RC rc: sending 20 requests to consume 20 millicores each and 1 request to consume 0 millicores | |
Feb 28 20:28:59.119: INFO: RC rc: sending 0 requests to consume 100 MB each and 1 request to consume 0 MB | |
W0228 20:28:59.667875 11176 request.go:627] Throttling request took 78.281776ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:28:59.867898 11176 request.go:627] Throttling request took 278.089046ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:29:00.067891 11176 request.go:627] Throttling request took 475.979359ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:29:00.267903 11176 request.go:627] Throttling request took 675.883824ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:29:00.467886 11176 request.go:627] Throttling request took 875.551132ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:29:00.669916 11176 request.go:627] Throttling request took 1.075418589s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:29:00.867874 11176 request.go:627] Throttling request took 1.270793607s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:29:01.067866 11176 request.go:627] Throttling request took 1.470655643s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
Feb 28 20:29:14.319: INFO: replicationController: current replicas number 5 waiting to be 3 | |
Feb 28 20:29:29.119: INFO: RC rc: sending 20 requests to consume 20 millicores each and 1 request to consume 0 millicores | |
Feb 28 20:29:29.119: INFO: RC rc: sending 0 requests to consume 100 MB each and 1 request to consume 0 MB | |
W0228 20:29:29.667877 11176 request.go:627] Throttling request took 75.289863ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:29:29.867868 11176 request.go:627] Throttling request took 275.145938ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:29:30.067890 11176 request.go:627] Throttling request took 474.998754ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:29:30.267939 11176 request.go:627] Throttling request took 674.307012ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:29:30.467894 11176 request.go:627] Throttling request took 864.835725ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:29:30.667953 11176 request.go:627] Throttling request took 1.064805959s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:29:30.867942 11176 request.go:627] Throttling request took 1.264563696s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:29:31.067931 11176 request.go:627] Throttling request took 1.464485238s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
Feb 28 20:29:34.486: INFO: replicationController: current replicas number 5 waiting to be 3 | |
Feb 28 20:29:54.665: INFO: replicationController: current replicas number 5 waiting to be 3 | |
Feb 28 20:29:59.119: INFO: RC rc: sending 20 requests to consume 20 millicores each and 1 request to consume 0 millicores | |
Feb 28 20:29:59.119: INFO: RC rc: sending 0 requests to consume 100 MB each and 1 request to consume 0 MB | |
W0228 20:29:59.667895 11176 request.go:627] Throttling request took 74.517978ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:29:59.867907 11176 request.go:627] Throttling request took 274.226783ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:30:00.067911 11176 request.go:627] Throttling request took 473.723589ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:30:00.267895 11176 request.go:627] Throttling request took 638.338715ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:30:00.467903 11176 request.go:627] Throttling request took 828.522624ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:30:00.667892 11176 request.go:627] Throttling request took 1.028416841s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:30:00.867889 11176 request.go:627] Throttling request took 1.228336663s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:30:01.067896 11176 request.go:627] Throttling request took 1.428266807s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
Feb 28 20:30:14.829: INFO: replicationController: current replicas number 5 waiting to be 3 | |
Feb 28 20:30:29.119: INFO: RC rc: sending 20 requests to consume 20 millicores each and 1 request to consume 0 millicores | |
Feb 28 20:30:29.120: INFO: RC rc: sending 0 requests to consume 100 MB each and 1 request to consume 0 MB | |
W0228 20:30:29.667879 11176 request.go:627] Throttling request took 54.750463ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:30:29.867889 11176 request.go:627] Throttling request took 233.26945ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:30:30.067946 11176 request.go:627] Throttling request took 433.235405ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:30:30.267896 11176 request.go:627] Throttling request took 633.114141ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:30:30.467907 11176 request.go:627] Throttling request took 833.052246ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:30:30.667922 11176 request.go:627] Throttling request took 1.032979728s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:30:30.867881 11176 request.go:627] Throttling request took 1.232884771s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:30:31.067892 11176 request.go:627] Throttling request took 1.432814768s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
Feb 28 20:30:34.994: INFO: replicationController: current replicas number 5 waiting to be 3 | |
Feb 28 20:30:55.171: INFO: replicationController: current replicas number 5 waiting to be 3 | |
Feb 28 20:30:59.120: INFO: RC rc: sending 20 requests to consume 20 millicores each and 1 request to consume 0 millicores | |
Feb 28 20:30:59.120: INFO: RC rc: sending 0 requests to consume 100 MB each and 1 request to consume 0 MB | |
W0228 20:30:59.667886 11176 request.go:627] Throttling request took 77.165054ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:30:59.867874 11176 request.go:627] Throttling request took 276.96698ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:31:00.067855 11176 request.go:627] Throttling request took 476.311191ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:31:00.267853 11176 request.go:627] Throttling request took 676.136743ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:31:00.467901 11176 request.go:627] Throttling request took 875.985203ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:31:00.667905 11176 request.go:627] Throttling request took 1.075904566s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:31:00.867889 11176 request.go:627] Throttling request took 1.26698145s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:31:01.067903 11176 request.go:627] Throttling request took 1.45736588s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
Feb 28 20:31:15.338: INFO: replicationController: current replicas number 5 waiting to be 3 | |
Feb 28 20:31:29.120: INFO: RC rc: sending 20 requests to consume 20 millicores each and 1 request to consume 0 millicores | |
Feb 28 20:31:29.120: INFO: RC rc: sending 0 requests to consume 100 MB each and 1 request to consume 0 MB | |
W0228 20:31:29.867915 11176 request.go:627] Throttling request took 244.956261ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:31:30.067886 11176 request.go:627] Throttling request took 444.784177ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:31:30.267893 11176 request.go:627] Throttling request took 644.716264ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:31:30.467877 11176 request.go:627] Throttling request took 844.063423ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:31:30.667885 11176 request.go:627] Throttling request took 1.043428066s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:31:30.867877 11176 request.go:627] Throttling request took 1.233359019s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:31:31.067895 11176 request.go:627] Throttling request took 1.433047534s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
Feb 28 20:31:35.502: INFO: replicationController: current replicas number 5 waiting to be 3 | |
Feb 28 20:31:55.668: INFO: replicationController: current replicas number 5 waiting to be 3 | |
Feb 28 20:31:59.120: INFO: RC rc: sending 20 requests to consume 20 millicores each and 1 request to consume 0 millicores | |
Feb 28 20:31:59.120: INFO: RC rc: sending 0 requests to consume 100 MB each and 1 request to consume 0 MB | |
W0228 20:31:59.667853 11176 request.go:627] Throttling request took 52.987093ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:31:59.867904 11176 request.go:627] Throttling request took 252.938421ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:32:00.067892 11176 request.go:627] Throttling request took 452.859745ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:32:00.267960 11176 request.go:627] Throttling request took 652.848419ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:32:00.467883 11176 request.go:627] Throttling request took 852.720012ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:32:00.667875 11176 request.go:627] Throttling request took 1.052636793s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:32:00.867868 11176 request.go:627] Throttling request took 1.252549791s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:32:01.067875 11176 request.go:627] Throttling request took 1.45251776s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
Feb 28 20:32:15.837: INFO: replicationController: current replicas number 5 waiting to be 3 | |
Feb 28 20:32:29.120: INFO: RC rc: sending 20 requests to consume 20 millicores each and 1 request to consume 0 millicores | |
Feb 28 20:32:29.121: INFO: RC rc: sending 0 requests to consume 100 MB each and 1 request to consume 0 MB | |
W0228 20:32:29.667848 11176 request.go:627] Throttling request took 77.667717ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:32:29.867868 11176 request.go:627] Throttling request took 277.280159ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:32:30.067883 11176 request.go:627] Throttling request took 477.181943ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:32:30.267882 11176 request.go:627] Throttling request took 676.535816ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:32:30.467900 11176 request.go:627] Throttling request took 876.455889ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:32:30.667893 11176 request.go:627] Throttling request took 1.075012345s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:32:30.867874 11176 request.go:627] Throttling request took 1.270139133s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:32:31.067890 11176 request.go:627] Throttling request took 1.462222179s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
Feb 28 20:32:36.004: INFO: replicationController: current replicas number 5 waiting to be 3 | |
Feb 28 20:32:56.173: INFO: replicationController: current replicas number 5 waiting to be 3 | |
Feb 28 20:32:59.120: INFO: RC rc: sending 20 requests to consume 20 millicores each and 1 request to consume 0 millicores | |
Feb 28 20:32:59.122: INFO: RC rc: sending 0 requests to consume 100 MB each and 1 request to consume 0 MB | |
W0228 20:32:59.667863 11176 request.go:627] Throttling request took 82.075548ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:32:59.867895 11176 request.go:627] Throttling request took 282.016685ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:33:00.067887 11176 request.go:627] Throttling request took 481.182623ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:33:00.267899 11176 request.go:627] Throttling request took 680.926758ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:33:00.467942 11176 request.go:627] Throttling request took 880.702875ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:33:00.667943 11176 request.go:627] Throttling request took 1.080118712s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:33:00.867876 11176 request.go:627] Throttling request took 1.258617849s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:33:01.067884 11176 request.go:627] Throttling request took 1.458531999s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
Feb 28 20:33:16.343: INFO: replicationController: current replicas number 5 waiting to be 3 | |
Feb 28 20:33:29.121: INFO: RC rc: sending 20 requests to consume 20 millicores each and 1 request to consume 0 millicores | |
Feb 28 20:33:29.122: INFO: RC rc: sending 0 requests to consume 100 MB each and 1 request to consume 0 MB | |
W0228 20:33:29.667873 11176 request.go:627] Throttling request took 79.115062ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:33:29.867854 11176 request.go:627] Throttling request took 279.006694ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:33:30.067889 11176 request.go:627] Throttling request took 478.967652ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:33:30.267877 11176 request.go:627] Throttling request took 671.345326ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:33:30.467895 11176 request.go:627] Throttling request took 849.366324ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:33:30.667880 11176 request.go:627] Throttling request took 1.049253672s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:33:30.867931 11176 request.go:627] Throttling request took 1.249236s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:33:31.067887 11176 request.go:627] Throttling request took 1.449117001s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
Feb 28 20:33:36.506: INFO: replicationController: current replicas number 5 waiting to be 3 | |
Feb 28 20:33:56.683: INFO: replicationController: current replicas number 5 waiting to be 3 | |
Feb 28 20:33:59.121: INFO: RC rc: sending 20 requests to consume 20 millicores each and 1 request to consume 0 millicores | |
Feb 28 20:33:59.122: INFO: RC rc: sending 0 requests to consume 100 MB each and 1 request to consume 0 MB | |
W0228 20:33:59.687681 11176 request.go:627] Throttling request took 92.526531ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:33:59.867871 11176 request.go:627] Throttling request took 272.632322ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:34:00.067876 11176 request.go:627] Throttling request took 472.534403ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:34:00.267882 11176 request.go:627] Throttling request took 672.468318ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:34:00.467899 11176 request.go:627] Throttling request took 872.420514ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:34:00.667881 11176 request.go:627] Throttling request took 1.072318687s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:34:00.867869 11176 request.go:627] Throttling request took 1.271345102s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:34:01.067861 11176 request.go:627] Throttling request took 1.471173863s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
Feb 28 20:34:16.851: INFO: replicationController: current replicas number 5 waiting to be 3 | |
Feb 28 20:34:29.121: INFO: RC rc: sending 20 requests to consume 20 millicores each and 1 request to consume 0 millicores | |
Feb 28 20:34:29.123: INFO: RC rc: sending 0 requests to consume 100 MB each and 1 request to consume 0 MB | |
W0228 20:34:29.667844 11176 request.go:627] Throttling request took 58.518806ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:34:29.867863 11176 request.go:627] Throttling request took 258.40579ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:34:30.067936 11176 request.go:627] Throttling request took 458.251242ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:34:30.267963 11176 request.go:627] Throttling request took 658.065173ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:34:30.467896 11176 request.go:627] Throttling request took 857.98164ms, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:34:30.667898 11176 request.go:627] Throttling request took 1.057922543s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:34:30.867881 11176 request.go:627] Throttling request took 1.243138714s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
W0228 20:34:31.067882 11176 request.go:627] Throttling request took 1.43812418s, request: https://104.196.32.11/api/v1/proxy/namespaces/e2e-tests-horizontal-pod-autoscaling-llnqx/services/rc/ConsumeCPU?durationSec=30&millicores=20 | |
Feb 28 20:34:36.851: FAIL: timeout waiting 10m0s for pods size to be 3 | |
STEP: Removing consuming RC rc | |
STEP: deleting replication controller rc in namespace e2e-tests-horizontal-pod-autoscaling-llnqx | |
Feb 28 20:34:49.561: INFO: Deleting RC rc took: 2.625558057s | |
Feb 28 20:34:49.643: INFO: Terminating RC rc pods took: 82.62494ms | |
[AfterEach] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:84 | |
STEP: Collecting events from namespace "e2e-tests-horizontal-pod-autoscaling-llnqx". | |
Feb 28 20:34:49.923: INFO: At 2016-02-28 20:23:39 -0800 PST - event for rc: {replication-controller } SuccessfulCreate: Created pod: rc-ln05w | |
Feb 28 20:34:49.923: INFO: At 2016-02-28 20:23:39 -0800 PST - event for rc: {replication-controller } SuccessfulCreate: Created pod: rc-1n1tj | |
Feb 28 20:34:49.923: INFO: At 2016-02-28 20:23:39 -0800 PST - event for rc: {replication-controller } SuccessfulCreate: Created pod: rc-r8bhn | |
Feb 28 20:34:49.923: INFO: At 2016-02-28 20:23:39 -0800 PST - event for rc: {replication-controller } SuccessfulCreate: Created pod: rc-mwmpt | |
Feb 28 20:34:49.923: INFO: At 2016-02-28 20:23:39 -0800 PST - event for rc: {replication-controller } SuccessfulCreate: Created pod: rc-tsv8y | |
Feb 28 20:34:49.923: INFO: At 2016-02-28 20:23:39 -0800 PST - event for rc-1n1tj: {kubelet spotter-kube-rkt-minion-8b1u} Pulling: pulling image "gcr.io/google_containers/resource_consumer:beta2" | |
Feb 28 20:34:49.923: INFO: At 2016-02-28 20:23:39 -0800 PST - event for rc-1n1tj: {default-scheduler } Scheduled: Successfully assigned rc-1n1tj to spotter-kube-rkt-minion-8b1u | |
Feb 28 20:34:49.923: INFO: At 2016-02-28 20:23:39 -0800 PST - event for rc-ln05w: {default-scheduler } Scheduled: Successfully assigned rc-ln05w to spotter-kube-rkt-minion-yo39 | |
Feb 28 20:34:49.923: INFO: At 2016-02-28 20:23:39 -0800 PST - event for rc-mwmpt: {default-scheduler } Scheduled: Successfully assigned rc-mwmpt to spotter-kube-rkt-minion-yo39 | |
Feb 28 20:34:49.923: INFO: At 2016-02-28 20:23:39 -0800 PST - event for rc-mwmpt: {kubelet spotter-kube-rkt-minion-yo39} Pulling: pulling image "gcr.io/google_containers/resource_consumer:beta2" | |
Feb 28 20:34:49.923: INFO: At 2016-02-28 20:23:39 -0800 PST - event for rc-r8bhn: {default-scheduler } Scheduled: Successfully assigned rc-r8bhn to spotter-kube-rkt-minion-yii0 | |
Feb 28 20:34:49.923: INFO: At 2016-02-28 20:23:39 -0800 PST - event for rc-r8bhn: {kubelet spotter-kube-rkt-minion-yii0} Pulling: pulling image "gcr.io/google_containers/resource_consumer:beta2" | |
Feb 28 20:34:49.923: INFO: At 2016-02-28 20:23:39 -0800 PST - event for rc-tsv8y: {default-scheduler } Scheduled: Successfully assigned rc-tsv8y to spotter-kube-rkt-minion-8b1u | |
Feb 28 20:34:49.923: INFO: At 2016-02-28 20:23:53 -0800 PST - event for rc-1n1tj: {kubelet spotter-kube-rkt-minion-8b1u} Pulled: Successfully pulled image "gcr.io/google_containers/resource_consumer:beta2" | |
Feb 28 20:34:49.923: INFO: At 2016-02-28 20:23:53 -0800 PST - event for rc-ln05w: {kubelet spotter-kube-rkt-minion-yo39} Pulled: Successfully pulled image "gcr.io/google_containers/resource_consumer:beta2" | |
Feb 28 20:34:49.923: INFO: At 2016-02-28 20:23:53 -0800 PST - event for rc-ln05w: {kubelet spotter-kube-rkt-minion-yo39} Pulling: pulling image "gcr.io/google_containers/resource_consumer:beta2" | |
Feb 28 20:34:49.923: INFO: At 2016-02-28 20:23:53 -0800 PST - event for rc-mwmpt: {kubelet spotter-kube-rkt-minion-yo39} Pulled: Successfully pulled image "gcr.io/google_containers/resource_consumer:beta2" | |
Feb 28 20:34:49.923: INFO: At 2016-02-28 20:23:53 -0800 PST - event for rc-r8bhn: {kubelet spotter-kube-rkt-minion-yii0} Pulled: Successfully pulled image "gcr.io/google_containers/resource_consumer:beta2" | |
Feb 28 20:34:49.923: INFO: At 2016-02-28 20:23:53 -0800 PST - event for rc-tsv8y: {kubelet spotter-kube-rkt-minion-8b1u} Pulled: Successfully pulled image "gcr.io/google_containers/resource_consumer:beta2" | |
Feb 28 20:34:49.923: INFO: At 2016-02-28 20:23:53 -0800 PST - event for rc-tsv8y: {kubelet spotter-kube-rkt-minion-8b1u} Pulling: pulling image "gcr.io/google_containers/resource_consumer:beta2" | |
Feb 28 20:34:49.923: INFO: At 2016-02-28 20:23:55 -0800 PST - event for rc-r8bhn: {kubelet spotter-kube-rkt-minion-yii0} Failed: Failed to create rkt container with error: failed to run [prepare --quiet --pod-manifest /tmp/manifest-rc-r8bhn-382442246]: exit status 1 | |
stdout: | |
stderr: image: using image from file /opt/rkt/stage1-coreos.aci | |
prepare: error setting up stage0: cannot acquire lock: resource temporarily unavailable | |
Feb 28 20:34:49.923: INFO: At 2016-02-28 20:23:55 -0800 PST - event for rc-r8bhn: {kubelet spotter-kube-rkt-minion-yii0} FailedSync: Error syncing pod, skipping: failed to SyncPod: failed to run [prepare --quiet --pod-manifest /tmp/manifest-rc-r8bhn-382442246]: exit status 1 | |
stdout: | |
stderr: image: using image from file /opt/rkt/stage1-coreos.aci | |
prepare: error setting up stage0: cannot acquire lock: resource temporarily unavailable | |
Feb 28 20:34:49.923: INFO: At 2016-02-28 20:24:05 -0800 PST - event for rc-1n1tj: {kubelet spotter-kube-rkt-minion-8b1u} Created: Created with rkt id bc3825fc | |
Feb 28 20:34:49.923: INFO: At 2016-02-28 20:24:05 -0800 PST - event for rc-1n1tj: {kubelet spotter-kube-rkt-minion-8b1u} Started: Started with rkt id bc3825fc | |
Feb 28 20:34:49.923: INFO: At 2016-02-28 20:24:05 -0800 PST - event for rc-ln05w: {kubelet spotter-kube-rkt-minion-yo39} Started: Started with rkt id 920befe5 | |
Feb 28 20:34:49.923: INFO: At 2016-02-28 20:24:05 -0800 PST - event for rc-ln05w: {kubelet spotter-kube-rkt-minion-yo39} Created: Created with rkt id 920befe5 | |
Feb 28 20:34:49.923: INFO: At 2016-02-28 20:24:05 -0800 PST - event for rc-mwmpt: {kubelet spotter-kube-rkt-minion-yo39} Started: Started with rkt id 13d598b9 | |
Feb 28 20:34:49.923: INFO: At 2016-02-28 20:24:05 -0800 PST - event for rc-mwmpt: {kubelet spotter-kube-rkt-minion-yo39} Created: Created with rkt id 13d598b9 | |
Feb 28 20:34:49.923: INFO: At 2016-02-28 20:24:05 -0800 PST - event for rc-tsv8y: {kubelet spotter-kube-rkt-minion-8b1u} Created: Created with rkt id 925610fd | |
Feb 28 20:34:49.923: INFO: At 2016-02-28 20:24:05 -0800 PST - event for rc-tsv8y: {kubelet spotter-kube-rkt-minion-8b1u} Started: Started with rkt id 925610fd | |
Feb 28 20:34:49.923: INFO: At 2016-02-28 20:24:07 -0800 PST - event for rc-r8bhn: {kubelet spotter-kube-rkt-minion-yii0} Pulled: Container image "gcr.io/google_containers/resource_consumer:beta2" already present on machine | |
Feb 28 20:34:49.923: INFO: At 2016-02-28 20:24:16 -0800 PST - event for rc-r8bhn: {kubelet spotter-kube-rkt-minion-yii0} Created: Created with rkt id c820d96e | |
Feb 28 20:34:49.923: INFO: At 2016-02-28 20:24:16 -0800 PST - event for rc-r8bhn: {kubelet spotter-kube-rkt-minion-yii0} Started: Started with rkt id c820d96e | |
Feb 28 20:34:49.923: INFO: At 2016-02-28 20:24:51 -0800 PST - event for rc: {horizontal-pod-autoscaler } FailedComputeReplicas: failed to get cpu utilization: failed to get CPU consumption and request: metrics obtained for 0/5 of pods | |
Feb 28 20:34:49.923: INFO: At 2016-02-28 20:24:51 -0800 PST - event for rc: {horizontal-pod-autoscaler } FailedGetMetrics: failed to get CPU consumption and request: metrics obtained for 0/5 of pods | |
Feb 28 20:34:49.923: INFO: At 2016-02-28 20:34:48 -0800 PST - event for rc: {replication-controller } SuccessfulDelete: Deleted pod: rc-mwmpt | |
Feb 28 20:34:49.923: INFO: At 2016-02-28 20:34:48 -0800 PST - event for rc: {replication-controller } SuccessfulDelete: Deleted pod: rc-ln05w | |
Feb 28 20:34:49.923: INFO: At 2016-02-28 20:34:48 -0800 PST - event for rc: {replication-controller } SuccessfulDelete: Deleted pod: rc-1n1tj | |
Feb 28 20:34:49.923: INFO: At 2016-02-28 20:34:48 -0800 PST - event for rc: {replication-controller } SuccessfulDelete: Deleted pod: rc-r8bhn | |
Feb 28 20:34:49.923: INFO: At 2016-02-28 20:34:48 -0800 PST - event for rc: {replication-controller } SuccessfulDelete: Deleted pod: rc-tsv8y | |
Feb 28 20:34:49.923: INFO: At 2016-02-28 20:34:48 -0800 PST - event for rc-1n1tj: {kubelet spotter-kube-rkt-minion-8b1u} Killing: Killing with rkt id bc3825fc | |
Feb 28 20:34:49.923: INFO: At 2016-02-28 20:34:48 -0800 PST - event for rc-ln05w: {kubelet spotter-kube-rkt-minion-yo39} Killing: Killing with rkt id 920befe5 | |
Feb 28 20:34:49.923: INFO: At 2016-02-28 20:34:48 -0800 PST - event for rc-mwmpt: {kubelet spotter-kube-rkt-minion-yo39} Killing: Killing with rkt id 13d598b9 | |
Feb 28 20:34:49.923: INFO: At 2016-02-28 20:34:48 -0800 PST - event for rc-r8bhn: {kubelet spotter-kube-rkt-minion-yii0} Killing: Killing with rkt id c820d96e | |
Feb 28 20:34:49.923: INFO: At 2016-02-28 20:34:48 -0800 PST - event for rc-tsv8y: {kubelet spotter-kube-rkt-minion-8b1u} Killing: Killing with rkt id 925610fd | |
Feb 28 20:34:49.923: INFO: At 2016-02-28 20:34:49 -0800 PST - event for rc: {service-controller } DeletingLoadBalancer: Deleting load balancer | |
Feb 28 20:34:50.013: INFO: POD NODE PHASE GRACE CONDITIONS | |
Feb 28 20:34:50.013: INFO: etcd-server-events-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:25:46 -0800 PST }] | |
Feb 28 20:34:50.013: INFO: etcd-server-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:25:44 -0800 PST }] | |
Feb 28 20:34:50.013: INFO: kube-apiserver-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:25:59 -0800 PST }] | |
Feb 28 20:34:50.013: INFO: kube-controller-manager-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:26:19 -0800 PST }] | |
Feb 28 20:34:50.013: INFO: kube-dns-v10-ucm9e spotter-kube-rkt-minion-yii0 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:40:53 -0800 PST }] | |
Feb 28 20:34:50.013: INFO: kube-scheduler-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:25:59 -0800 PST }] | |
Feb 28 20:34:50.013: INFO: kubernetes-dashboard-v0.1.0-xx4kj spotter-kube-rkt-minion-8b1u Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:26:43 -0800 PST }] | |
Feb 28 20:34:50.013: INFO: l7-lb-controller-vg83c spotter-kube-rkt-minion-yo39 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:42:06 -0800 PST }] | |
Feb 28 20:34:50.013: INFO: | |
Feb 28 20:34:50.096: INFO: | |
Logging node info for node spotter-kube-rkt-master | |
Feb 28 20:34:50.186: INFO: Node Info: &{{ } {spotter-kube-rkt-master /api/v1/nodes/spotter-kube-rkt-master 28a99cb5-de94-11e5-9ed9-42010af00002 2351 0 2016-02-28 19:26:02 -0800 PST <nil> <nil> map[beta.kubernetes.io/instance-type:n1-standard-2 failure-domain.beta.kubernetes.io/region:us-east1 failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-master] map[]} {10.245.0.0/24 1057893855773431411 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-master true} {map[memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI}] map[pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI}] [{OutOfDisk False 2016-02-28 20:34:49 -0800 PST 2016-02-28 19:26:02 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 20:34:49 -0800 PST 2016-02-28 19:26:02 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.2} {ExternalIP 104.196.32.11}] {{10250}} {18f8f8ae3bfd63d33ca6152c840ac772 18F8F8AE-3BFD-63D3-3CA6-152C840AC772 85ba1a68-a60b-4788-8d0d-31581d8a1dbc 4.4.1-coreos CoreOS 960.0.0 (Coeur Rouge) docker://1.10.0 v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[gcr.io/google_containers/kube-apiserver:e75cb1e68585bef333c384799aa55b36] 62275978} {[gcr.io/google_containers/kube-controller-manager:5ceda9f0fcc85cf5112fd4eab97edf76] 55494410} {[gcr.io/google_containers/kube-scheduler:34f7eca4d44271cb192f0c32a79fb31f] 36991674} {[python:2.7-slim-pyyaml] 205046931} {[gcr.io/google_containers/pause:2.0] 350164} {[gcr.io/google_containers/kube-registry-proxy:0.3] 151205002} {[gcr.io/google_containers/etcd:2.0.12] 15265152} {[gcr.io/google_containers/serve_hostname:1.1] 4522409}]}} | |
Feb 28 20:34:50.186: INFO: | |
Logging kubelet events for node spotter-kube-rkt-master | |
Feb 28 20:34:50.273: INFO: | |
Logging pods the kubelet thinks is on node spotter-kube-rkt-master | |
Feb 28 20:34:50.443: INFO: Unable to retrieve kubelet pods for node spotter-kube-rkt-master | |
Feb 28 20:34:50.443: INFO: | |
Logging node info for node spotter-kube-rkt-minion-8b1u | |
Feb 28 20:34:50.523: INFO: Node Info: &{{ } {spotter-kube-rkt-minion-8b1u /api/v1/nodes/spotter-kube-rkt-minion-8b1u 2e5bf2ae-de94-11e5-9ed9-42010af00002 2343 0 2016-02-28 19:26:11 -0800 PST <nil> <nil> map[beta.kubernetes.io/instance-type:n1-standard-2 failure-domain.beta.kubernetes.io/region:us-east1 failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-minion-8b1u] map[]} {10.245.1.0/24 9623675355699996544 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-minion-8b1u false} {map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] [{OutOfDisk False 2016-02-28 20:34:48 -0800 PST 2016-02-28 19:26:11 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 20:34:48 -0800 PST 2016-02-28 19:26:11 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.5} {ExternalIP 104.196.7.86}] {{10250}} {374318fe80761a3e4e3d72f72bd62802 374318FE-8076-1A3E-4E3D-72F72BD62802 b4070cab-7518-489e-9e3e-c5003667ca85 4.4.1-coreos CoreOS 960.0.0 (Coeur Rouge) rkt://1.0.0+git9003f4a-dirty v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[gcr.io/google_containers/resource_consumer:beta2] 138319872} {[coreos.com/rkt/stage1-coreos:1.0.0+git9003f4a-dirty] 79951360} {[gcr.io/google_containers/hostexec:1.2] 14018048} {[registry-1.docker.io/library/nginx:latest] 136025600} {[gcr.io/google_samples/gb-redisslave:v1] 113454592} {[registry-1.docker.io/library/redis:latest] 152067072} {[gcr.io/google_samples/gb-frontend:v4] 513876992} {[gcr.io/google_containers/update-demo:nautilus] 28160} {[gcr.io/google_containers/busybox:1.24] 1315840} {[gcr.io/google_containers/eptest:0.1] 2977792} {[gcr.io/google_containers/serve_hostname:1.1] 4529152} {[gcr.io/google_containers/mounttest:0.6] 2090496} {[gcr.io/google_containers/netexec:1.4] 7513088} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v0.1.0] 35423744}]}} | |
Feb 28 20:34:50.523: INFO: | |
Logging kubelet events for node spotter-kube-rkt-minion-8b1u | |
Feb 28 20:34:50.609: INFO: | |
Logging pods the kubelet thinks is on node spotter-kube-rkt-minion-8b1u | |
Feb 28 20:34:50.867: INFO: kubernetes-dashboard-v0.1.0-xx4kj started at <nil> (0 container statuses recorded) | |
Feb 28 20:34:51.249: INFO: | |
Latency metrics for node spotter-kube-rkt-minion-8b1u | |
Feb 28 20:34:51.249: INFO: | |
Logging node info for node spotter-kube-rkt-minion-yii0 | |
Feb 28 20:34:51.333: INFO: Node Info: &{{ } {spotter-kube-rkt-minion-yii0 /api/v1/nodes/spotter-kube-rkt-minion-yii0 28de6c1f-de94-11e5-9ed9-42010af00002 2327 0 2016-02-28 19:26:02 -0800 PST <nil> <nil> map[failure-domain.beta.kubernetes.io/region:us-east1 failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-minion-yii0 beta.kubernetes.io/instance-type:n1-standard-2] map[]} {10.245.3.0/24 9944607117135381947 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-minion-yii0 false} {map[pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI}] map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] [{OutOfDisk False 2016-02-28 20:34:41 -0800 PST 2016-02-28 19:40:42 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 20:34:41 -0800 PST 2016-02-28 19:40:42 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.3} {ExternalIP 104.196.102.116}] {{10250}} {b38c5389751fa1f6a57a3e0bc169499a B38C5389-751F-A1F6-A57A-3E0BC169499A c09d0613-ea30-4bb2-a090-591baae2e0d3 4.4.1-coreos CoreOS 970.1.0 (Coeur Rouge) rkt://1.0.0+git9003f4a-dirty v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[gcr.io/google_containers/resource_consumer:beta2] 138319872} {[coreos.com/rkt/stage1-coreos:1.0.0+git9003f4a-dirty] 79951360} {[gcr.io/google_samples/gb-frontend:v4] 513876992} {[gcr.io/google_containers/busybox:1.24] 1315840} {[gcr.io/google_containers/serve_hostname:1.1] 4529152} {[gcr.io/google_containers/exechealthz:1.0] 7314944} {[gcr.io/google_containers/skydns:2015-10-13-8c72f8c] 41955328} {[gcr.io/google_containers/kube2sky:1.12] 24685056} {[gcr.io/google_containers/etcd:2.0.9] 12827136}]}} | |
Feb 28 20:34:51.333: INFO: | |
Logging kubelet events for node spotter-kube-rkt-minion-yii0 | |
Feb 28 20:34:51.421: INFO: | |
Logging pods the kubelet thinks is on node spotter-kube-rkt-minion-yii0 | |
Feb 28 20:34:51.689: INFO: kube-dns-v10-ucm9e started at <nil> (0 container statuses recorded) | |
Feb 28 20:34:51.980: INFO: | |
Latency metrics for node spotter-kube-rkt-minion-yii0 | |
Feb 28 20:34:51.980: INFO: | |
Logging node info for node spotter-kube-rkt-minion-yo39 | |
Feb 28 20:34:52.066: INFO: Node Info: &{{ } {spotter-kube-rkt-minion-yo39 /api/v1/nodes/spotter-kube-rkt-minion-yo39 28d9f8e6-de94-11e5-9ed9-42010af00002 2328 0 2016-02-28 19:26:02 -0800 PST <nil> <nil> map[beta.kubernetes.io/instance-type:n1-standard-2 failure-domain.beta.kubernetes.io/region:us-east1 failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-minion-yo39] map[]} {10.245.2.0/24 5812756469575456144 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-minion-yo39 false} {map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] [{OutOfDisk False 2016-02-28 20:34:47 -0800 PST 2016-02-28 19:39:36 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 20:34:47 -0800 PST 2016-02-28 19:39:36 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.4} {ExternalIP 104.196.107.90}] {{10250}} {d67f51f98b40c228849e17dc0b2cff8f D67F51F9-8B40-C228-849E-17DC0B2CFF8F 37634038-5d12-46d7-8197-fc5e74a7aef8 4.4.1-coreos CoreOS 970.1.0 (Coeur Rouge) rkt://1.0.0+git9003f4a-dirty v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[gcr.io/google_containers/resource_consumer:beta2] 138319872} {[coreos.com/rkt/stage1-coreos:1.0.0+git9003f4a-dirty] 79951360} {[gcr.io/google_samples/gb-redisslave:v1] 113454592} {[gcr.io/google_samples/gb-frontend:v4] 513876992} {[gcr.io/google_containers/update-demo:nautilus] 28160} {[gcr.io/google_containers/netexec:1.4] 7513088} {[gcr.io/google_containers/serve_hostname:1.1] 4529152} {[gcr.io/google_containers/glbc:0.5.1] 203040256} {[gcr.io/google_containers/defaultbackend:1.0] 7729152}]}} | |
Feb 28 20:34:52.066: INFO: | |
Logging kubelet events for node spotter-kube-rkt-minion-yo39 | |
Feb 28 20:34:52.152: INFO: | |
Logging pods the kubelet thinks is on node spotter-kube-rkt-minion-yo39 | |
Feb 28 20:34:52.416: INFO: l7-lb-controller-vg83c started at <nil> (0 container statuses recorded) | |
Feb 28 20:34:52.728: INFO: | |
Latency metrics for node spotter-kube-rkt-minion-yo39 | |
Feb 28 20:34:52.728: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-horizontal-pod-autoscaling-llnqx" for this suite. | |
• Failure [679.485 seconds] | |
Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:64 | |
ReplicationController | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:63 | |
Should scale from 5 pods to 3 pods and from 3 to 1 [It] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:62 | |
Feb 28 20:34:36.851: timeout waiting 10m0s for pods size to be 3 | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:247 | |
------------------------------ | |
Etcd failure [Disruptive] | |
should recover from SIGKILL | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/etcd_failure.go:65 | |
[BeforeEach] Etcd failure [Disruptive] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:83 | |
STEP: Creating a kubernetes client | |
Feb 28 20:34:58.151: INFO: >>> testContext.KubeConfig: /home/spotter/.kube/config | |
STEP: Building a namespace api object | |
Feb 28 20:34:58.242: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-etcd-failure-0hktr | |
Feb 28 20:34:58.326: INFO: Service account default in ns e2e-tests-etcd-failure-0hktr with secrets found. (83.857445ms) | |
STEP: Waiting for a default service account to be provisioned in namespace | |
Feb 28 20:34:58.326: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-etcd-failure-0hktr | |
Feb 28 20:34:58.414: INFO: Service account default in ns e2e-tests-etcd-failure-0hktr with secrets found. (87.813565ms) | |
[BeforeEach] Etcd failure [Disruptive] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/etcd_failure.go:49 | |
STEP: creating replication controller baz in namespace e2e-tests-etcd-failure-0hktr | |
Feb 28 20:34:58.505: INFO: Created replication controller with name: baz, namespace: e2e-tests-etcd-failure-0hktr, replica count: 1 | |
Feb 28 20:35:08.506: INFO: baz Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
[It] should recover from SIGKILL | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/etcd_failure.go:65 | |
STEP: failing etcd | |
STEP: assert that the pre-existing replication controller recovers | |
STEP: deleting pods from existing replication controller | |
Feb 28 20:35:31.442: INFO: apiserver has recovered | |
STEP: waiting for replication controller to recover | |
STEP: Creating replication controller my-hostname-basic-e0133e0e-de9d-11e5-a1fb-54ee75510eb4 | |
Feb 28 20:35:35.699: INFO: Pod name my-hostname-basic-e0133e0e-de9d-11e5-a1fb-54ee75510eb4: Found 2 pods out of 2 | |
STEP: Ensuring each pod is running | |
Feb 28 20:35:35.699: INFO: Waiting up to 5m0s for pod my-hostname-basic-e0133e0e-de9d-11e5-a1fb-54ee75510eb4-e431n status to be running | |
Feb 28 20:35:35.785: INFO: Waiting for pod my-hostname-basic-e0133e0e-de9d-11e5-a1fb-54ee75510eb4-e431n in namespace 'e2e-tests-etcd-failure-0hktr' status to be 'running'(found phase: "Pending", readiness: false) (85.491183ms elapsed) | |
Feb 28 20:35:37.868: INFO: Waiting for pod my-hostname-basic-e0133e0e-de9d-11e5-a1fb-54ee75510eb4-e431n in namespace 'e2e-tests-etcd-failure-0hktr' status to be 'running'(found phase: "Pending", readiness: false) (2.168668003s elapsed) | |
Feb 28 20:35:39.956: INFO: Found pod 'my-hostname-basic-e0133e0e-de9d-11e5-a1fb-54ee75510eb4-e431n' on node 'spotter-kube-rkt-minion-8b1u' | |
Feb 28 20:35:39.956: INFO: Waiting up to 5m0s for pod my-hostname-basic-e0133e0e-de9d-11e5-a1fb-54ee75510eb4-fpxol status to be running | |
Feb 28 20:35:40.037: INFO: Found pod 'my-hostname-basic-e0133e0e-de9d-11e5-a1fb-54ee75510eb4-fpxol' on node 'spotter-kube-rkt-minion-yo39' | |
STEP: Trying to dial each unique pod | |
Feb 28 20:35:45.369: INFO: Controller my-hostname-basic-e0133e0e-de9d-11e5-a1fb-54ee75510eb4: Got expected result from replica 1 [my-hostname-basic-e0133e0e-de9d-11e5-a1fb-54ee75510eb4-e431n]: "my-hostname-basic-e0133e0e-de9d-11e5-a1fb-54ee75510eb4-e431n", 1 of 2 required successes so far | |
Feb 28 20:35:45.625: INFO: Controller my-hostname-basic-e0133e0e-de9d-11e5-a1fb-54ee75510eb4: Got expected result from replica 2 [my-hostname-basic-e0133e0e-de9d-11e5-a1fb-54ee75510eb4-fpxol]: "my-hostname-basic-e0133e0e-de9d-11e5-a1fb-54ee75510eb4-fpxol", 2 of 2 required successes so far | |
STEP: deleting replication controller my-hostname-basic-e0133e0e-de9d-11e5-a1fb-54ee75510eb4 in namespace e2e-tests-etcd-failure-0hktr | |
Feb 28 20:35:48.317: INFO: Deleting RC my-hostname-basic-e0133e0e-de9d-11e5-a1fb-54ee75510eb4 took: 2.605598031s | |
Feb 28 20:35:48.400: INFO: Terminating RC my-hostname-basic-e0133e0e-de9d-11e5-a1fb-54ee75510eb4 pods took: 83.322075ms | |
[AfterEach] Etcd failure [Disruptive] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:84 | |
Feb 28 20:35:48.400: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-etcd-failure-0hktr" for this suite. | |
• [SLOW TEST:55.671 seconds] | |
Etcd failure [Disruptive] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/etcd_failure.go:66 | |
should recover from SIGKILL | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/etcd_failure.go:65 | |
------------------------------ | |
Job | |
should keep restarting failed pods | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:111 | |
[BeforeEach] Job | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:83 | |
STEP: Creating a kubernetes client | |
Feb 28 20:35:53.822: INFO: >>> testContext.KubeConfig: /home/spotter/.kube/config | |
STEP: Building a namespace api object | |
Feb 28 20:35:53.910: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-job-bon1f | |
Feb 28 20:35:53.993: INFO: Service account default in ns e2e-tests-job-bon1f had 0 secrets, ignoring for 2s: <nil> | |
Feb 28 20:35:56.082: INFO: Service account default in ns e2e-tests-job-bon1f with secrets found. (2.171791874s) | |
STEP: Waiting for a default service account to be provisioned in namespace | |
Feb 28 20:35:56.082: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-job-bon1f | |
Feb 28 20:35:56.162: INFO: Service account default in ns e2e-tests-job-bon1f with secrets found. (80.127182ms) | |
[It] should keep restarting failed pods | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:111 | |
STEP: Creating a job | |
STEP: Ensuring job shows many failures | |
[AfterEach] Job | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:84 | |
Feb 28 20:36:14.341: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-job-bon1f" for this suite. | |
• [SLOW TEST:25.960 seconds] | |
Job | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198 | |
should keep restarting failed pods | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:111 | |
------------------------------ | |
S | |
------------------------------ | |
Namespaces [Serial] | |
should delete fast enough (90 percent of 100 namespaces in 150 seconds) | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/namespace.go:110 | |
[BeforeEach] Namespaces [Serial] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:83 | |
STEP: Creating a kubernetes client | |
Feb 28 20:36:19.782: INFO: >>> testContext.KubeConfig: /home/spotter/.kube/config | |
STEP: Building a namespace api object | |
Feb 28 20:36:19.873: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-namespaces-07b91 | |
Feb 28 20:36:19.956: INFO: Service account default in ns e2e-tests-namespaces-07b91 with secrets found. (83.378798ms) | |
STEP: Waiting for a default service account to be provisioned in namespace | |
Feb 28 20:36:19.956: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-namespaces-07b91 | |
Feb 28 20:36:20.040: INFO: Service account default in ns e2e-tests-namespaces-07b91 with secrets found. (83.603669ms) | |
[It] should delete fast enough (90 percent of 100 namespaces in 150 seconds) | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/namespace.go:110 | |
STEP: Creating testing namespaces | |
Feb 28 20:36:20.129: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-0-3ujbm | |
W0228 20:36:20.183530 11176 request.go:627] Throttling request took 142.883515ms, request: https://104.196.32.11/api/v1/namespaces | |
Feb 28 20:36:20.289: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-1-rig2d | |
W0228 20:36:20.383480 11176 request.go:627] Throttling request took 342.804129ms, request: https://104.196.32.11/api/v1/namespaces | |
Feb 28 20:36:20.386: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-2-yyem9 | |
Feb 28 20:36:20.404: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-4-r06ht | |
Feb 28 20:36:20.410: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-6-ixeqj | |
Feb 28 20:36:20.421: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-5-sh3eu | |
Feb 28 20:36:20.443: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-3-ucbc9 | |
Feb 28 20:36:20.443: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-7-ufgv1 | |
Feb 28 20:36:20.452: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-8-n3wc0 | |
Feb 28 20:36:20.470: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-9-p91vh | |
W0228 20:36:20.583513 11176 request.go:627] Throttling request took 542.804527ms, request: https://104.196.32.11/api/v1/namespaces | |
Feb 28 20:36:20.669: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-10-45l7v | |
W0228 20:36:20.783511 11176 request.go:627] Throttling request took 742.77235ms, request: https://104.196.32.11/api/v1/namespaces | |
Feb 28 20:36:20.879: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-11-jeaw2 | |
W0228 20:36:20.983573 11176 request.go:627] Throttling request took 942.799543ms, request: https://104.196.32.11/api/v1/namespaces | |
Feb 28 20:36:21.070: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-12-4a2b5 | |
W0228 20:36:21.183571 11176 request.go:627] Throttling request took 1.142771512s, request: https://104.196.32.11/api/v1/namespaces | |
Feb 28 20:36:21.271: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-13-oap7q | |
W0228 20:36:21.383573 11176 request.go:627] Throttling request took 1.34274621s, request: https://104.196.32.11/api/v1/namespaces | |
Feb 28 20:36:21.471: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-14-4jjw7
W0228 20:36:21.583607   11176 request.go:627] Throttling request took 1.542751994s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:21.670: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-15-mk7x8
W0228 20:36:21.783571   11176 request.go:627] Throttling request took 1.742690034s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:21.871: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-16-39mv6
W0228 20:36:21.983607   11176 request.go:627] Throttling request took 1.942698817s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:22.071: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-17-ynjk9
W0228 20:36:22.183554   11176 request.go:627] Throttling request took 2.14261899s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:22.273: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-18-8tir5
W0228 20:36:22.383566   11176 request.go:627] Throttling request took 2.342601163s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:22.470: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-19-38vis
W0228 20:36:22.583568   11176 request.go:627] Throttling request took 2.542576033s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:22.676: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-20-767jt
W0228 20:36:22.783606   11176 request.go:627] Throttling request took 2.742587868s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:22.869: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-21-5q9m6
W0228 20:36:22.983550   11176 request.go:627] Throttling request took 2.942515792s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:23.067: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-22-ijefx
W0228 20:36:23.183546   11176 request.go:627] Throttling request took 3.142474147s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:23.270: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-23-h9g5i
W0228 20:36:23.383568   11176 request.go:627] Throttling request took 3.342465153s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:23.472: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-24-kp5xd
W0228 20:36:23.583542   11176 request.go:627] Throttling request took 3.542415688s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:23.672: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-25-ih94t
W0228 20:36:23.783564   11176 request.go:627] Throttling request took 3.742396898s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:23.867: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-26-5e2vy
W0228 20:36:23.983521   11176 request.go:627] Throttling request took 3.942348562s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:24.074: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-27-1r7v2
W0228 20:36:24.183550   11176 request.go:627] Throttling request took 4.142353202s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:24.266: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-28-y69yr
W0228 20:36:24.383559   11176 request.go:627] Throttling request took 4.342335175s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:24.469: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-29-mcupa
W0228 20:36:24.583562   11176 request.go:627] Throttling request took 4.542299936s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:24.671: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-30-ol6ab
W0228 20:36:24.783555   11176 request.go:627] Throttling request took 4.742267636s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:24.869: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-31-7q1ap
W0228 20:36:24.983559   11176 request.go:627] Throttling request took 4.942244685s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:25.072: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-32-iske8
W0228 20:36:25.183523   11176 request.go:627] Throttling request took 5.142189729s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:25.272: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-33-8ch77
W0228 20:36:25.383533   11176 request.go:627] Throttling request took 5.342172661s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:25.470: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-34-r6rfj
W0228 20:36:25.583509   11176 request.go:627] Throttling request took 5.542119614s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:25.671: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-35-p2u4m
W0228 20:36:25.783541   11176 request.go:627] Throttling request took 5.742125981s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:25.889: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-36-b48pb
W0228 20:36:25.983537   11176 request.go:627] Throttling request took 5.94209619s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:26.072: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-37-jdogo
W0228 20:36:26.183522   11176 request.go:627] Throttling request took 6.1420512s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:26.300: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-38-mysen
W0228 20:36:26.383524   11176 request.go:627] Throttling request took 6.34202584s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:26.470: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-39-oeqxa
W0228 20:36:26.583541   11176 request.go:627] Throttling request took 6.54201781s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:26.672: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-40-d6kmf
W0228 20:36:26.783514   11176 request.go:627] Throttling request took 6.741887491s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:26.880: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-41-6amic
W0228 20:36:26.983520   11176 request.go:627] Throttling request took 6.941854258s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:27.126: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-42-f0fyn
W0228 20:36:27.183502   11176 request.go:627] Throttling request took 7.141804745s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:27.277: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-43-nxvar
W0228 20:36:27.383555   11176 request.go:627] Throttling request took 7.341827584s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:27.476: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-44-v2vzb
W0228 20:36:27.583545   11176 request.go:627] Throttling request took 7.541790631s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:27.672: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-45-uda0p
W0228 20:36:27.783555   11176 request.go:627] Throttling request took 7.741773033s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:27.881: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-46-6arhu
W0228 20:36:27.983568   11176 request.go:627] Throttling request took 7.941758653s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:28.092: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-47-0sufj
W0228 20:36:28.183521   11176 request.go:627] Throttling request took 8.141683961s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:28.282: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-48-ngrnu
W0228 20:36:28.383558   11176 request.go:627] Throttling request took 8.341693626s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:28.500: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-49-yd6kh
W0228 20:36:28.583545   11176 request.go:627] Throttling request took 8.541653942s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:28.689: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-50-ulg90
W0228 20:36:28.783540   11176 request.go:627] Throttling request took 8.741619902s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:28.894: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-51-b3551
W0228 20:36:28.983536   11176 request.go:627] Throttling request took 8.941590074s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:29.078: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-52-dbg3j
W0228 20:36:29.183550   11176 request.go:627] Throttling request took 9.141577331s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:29.297: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-53-t5a3c
W0228 20:36:29.383546   11176 request.go:627] Throttling request took 9.341544705s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:29.495: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-54-z27ue
W0228 20:36:29.583529   11176 request.go:627] Throttling request took 9.541502994s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:29.682: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-55-sac9d
W0228 20:36:29.783516   11176 request.go:627] Throttling request took 9.741461489s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:29.910: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-56-sqrnk
W0228 20:36:29.983520   11176 request.go:627] Throttling request took 9.941432697s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:30.090: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-57-tutea
W0228 20:36:30.183563   11176 request.go:627] Throttling request took 10.141445634s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:30.276: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-58-gn1rl
W0228 20:36:30.383546   11176 request.go:627] Throttling request took 10.341405632s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:30.469: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-59-mo8qe
W0228 20:36:30.583550   11176 request.go:627] Throttling request took 10.541382833s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:30.673: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-60-pfg1a
W0228 20:36:30.783597   11176 request.go:627] Throttling request took 10.741392801s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:30.882: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-61-4udxp
W0228 20:36:30.983603   11176 request.go:627] Throttling request took 10.941381083s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:31.073: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-62-ql1pv
W0228 20:36:31.183539   11176 request.go:627] Throttling request took 11.141290162s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:31.270: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-63-wgxex
W0228 20:36:31.383604   11176 request.go:627] Throttling request took 11.341327598s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:31.470: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-64-k6in1
W0228 20:36:31.583556   11176 request.go:627] Throttling request took 11.541241503s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:31.672: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-65-zfk6c
W0228 20:36:31.783567   11176 request.go:627] Throttling request took 11.741235521s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:31.880: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-66-0uy91
W0228 20:36:31.983592   11176 request.go:627] Throttling request took 11.941233109s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:32.074: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-67-8r4g2
W0228 20:36:32.183541   11176 request.go:627] Throttling request took 12.141154645s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:32.269: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-68-okc6n
W0228 20:36:32.383604   11176 request.go:627] Throttling request took 12.34119033s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:32.470: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-69-6tgj7
W0228 20:36:32.583556   11176 request.go:627] Throttling request took 12.541103035s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:32.672: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-70-tit85
W0228 20:36:32.783535   11176 request.go:627] Throttling request took 12.741068522s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:32.874: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-71-k1vfi
W0228 20:36:32.983516   11176 request.go:627] Throttling request took 12.941019092s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:33.072: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-72-joicx
W0228 20:36:33.183528   11176 request.go:627] Throttling request took 13.141007408s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:33.273: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-73-kzsdy
W0228 20:36:33.383555   11176 request.go:627] Throttling request took 13.341007373s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:33.473: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-74-rapip
W0228 20:36:33.583556   11176 request.go:627] Throttling request took 13.540979788s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:33.671: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-75-1s4p2
W0228 20:36:33.783556   11176 request.go:627] Throttling request took 13.740951549s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:33.867: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-76-hjvbf
W0228 20:36:33.983540   11176 request.go:627] Throttling request took 13.940900492s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:34.075: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-77-h6gra
W0228 20:36:34.183513   11176 request.go:627] Throttling request took 14.140842039s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:34.272: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-78-yizce
W0228 20:36:34.383552   11176 request.go:627] Throttling request took 14.340857453s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:34.472: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-79-70vux
W0228 20:36:34.583559   11176 request.go:627] Throttling request took 14.540838017s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:34.673: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-80-8gm1o
W0228 20:36:34.783560   11176 request.go:627] Throttling request took 14.740812779s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:34.875: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-81-a5nt0
W0228 20:36:34.983546   11176 request.go:627] Throttling request took 14.940771681s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:35.074: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-82-njyom
W0228 20:36:35.183540   11176 request.go:627] Throttling request took 15.140739631s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:35.269: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-83-c6nma
W0228 20:36:35.383558   11176 request.go:627] Throttling request took 15.340731624s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:35.475: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-84-8wrbc
W0228 20:36:35.583543   11176 request.go:627] Throttling request took 15.540688676s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:35.671: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-85-rtaz5
W0228 20:36:35.783531   11176 request.go:627] Throttling request took 15.740651353s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:35.877: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-86-y5ah8
W0228 20:36:35.983533   11176 request.go:627] Throttling request took 15.940625882s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:36.075: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-87-b1w4z
W0228 20:36:36.183507   11176 request.go:627] Throttling request took 16.140569296s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:36.270: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-88-59imz
W0228 20:36:36.383550   11176 request.go:627] Throttling request took 16.340586169s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:36.473: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-89-9tgd0
W0228 20:36:36.583558   11176 request.go:627] Throttling request took 16.54056744s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:36.671: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-90-3aoot
W0228 20:36:36.783596   11176 request.go:627] Throttling request took 16.740577515s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:36.874: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-91-6tejb
W0228 20:36:36.983534   11176 request.go:627] Throttling request took 16.940477324s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:37.071: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-92-ktyxn
W0228 20:36:37.183550   11176 request.go:627] Throttling request took 17.140464695s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:37.272: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-93-mbyp8
W0228 20:36:37.383555   11176 request.go:627] Throttling request took 17.34044183s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:37.471: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-94-a1qtx
W0228 20:36:37.583523   11176 request.go:627] Throttling request took 17.540384161s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:37.680: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-95-7b4ea
W0228 20:36:37.783535   11176 request.go:627] Throttling request took 17.74036711s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:37.877: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-96-wd3te
W0228 20:36:37.983550   11176 request.go:627] Throttling request took 17.940337293s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:38.072: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-97-g734h
W0228 20:36:38.183552   11176 request.go:627] Throttling request took 18.140295884s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:38.275: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-98-qnnqz
W0228 20:36:38.383558   11176 request.go:627] Throttling request took 18.34025473s, request: https://104.196.32.11/api/v1/namespaces
Feb 28 20:36:38.471: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-99-xycg2
W0228 20:36:38.583558   11176 request.go:627] Throttling request took 18.453913373s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-0-3ujbm/serviceaccounts/default
Feb 28 20:36:38.668: INFO: Service account default in ns e2e-tests-nslifetest-0-3ujbm with secrets found. (18.539267253s)
W0228 20:36:38.783596   11176 request.go:627] Throttling request took 18.494264282s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-1-rig2d/serviceaccounts/default
Feb 28 20:36:38.867: INFO: Service account default in ns e2e-tests-nslifetest-1-rig2d with secrets found. (18.577875882s)
W0228 20:36:38.983533   11176 request.go:627] Throttling request took 18.596571334s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-2-yyem9/serviceaccounts/default
Feb 28 20:36:39.067: INFO: Service account default in ns e2e-tests-nslifetest-2-yyem9 with secrets found. (18.680753624s)
W0228 20:36:39.183577   11176 request.go:627] Throttling request took 18.778796762s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-4-r06ht/serviceaccounts/default
Feb 28 20:36:39.268: INFO: Service account default in ns e2e-tests-nslifetest-4-r06ht with secrets found. (18.863940364s)
W0228 20:36:39.383576   11176 request.go:627] Throttling request took 18.973305691s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-6-ixeqj/serviceaccounts/default
Feb 28 20:36:39.469: INFO: Service account default in ns e2e-tests-nslifetest-6-ixeqj with secrets found. (19.058940183s)
W0228 20:36:39.583557   11176 request.go:627] Throttling request took 19.161787518s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-5-sh3eu/serviceaccounts/default
Feb 28 20:36:39.667: INFO: Service account default in ns e2e-tests-nslifetest-5-sh3eu with secrets found. (19.246031337s)
W0228 20:36:39.783551   11176 request.go:627] Throttling request took 19.340316056s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-3-ucbc9/serviceaccounts/default
Feb 28 20:36:39.864: INFO: Service account default in ns e2e-tests-nslifetest-3-ucbc9 with secrets found. (19.421392153s)
W0228 20:36:39.983575   11176 request.go:627] Throttling request took 19.539530553s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-7-ufgv1/serviceaccounts/default
Feb 28 20:36:40.085: INFO: Service account default in ns e2e-tests-nslifetest-7-ufgv1 with secrets found. (19.641563599s)
W0228 20:36:40.183566   11176 request.go:627] Throttling request took 19.730686167s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-8-n3wc0/serviceaccounts/default
Feb 28 20:36:40.271: INFO: Service account default in ns e2e-tests-nslifetest-8-n3wc0 with secrets found. (19.818460325s)
W0228 20:36:40.383534   11176 request.go:627] Throttling request took 19.912748243s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-9-p91vh/serviceaccounts/default
Feb 28 20:36:40.472: INFO: Service account default in ns e2e-tests-nslifetest-9-p91vh with secrets found. (20.001976234s)
W0228 20:36:40.583554   11176 request.go:627] Throttling request took 19.913786522s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-10-45l7v/serviceaccounts/default
Feb 28 20:36:40.672: INFO: Service account default in ns e2e-tests-nslifetest-10-45l7v with secrets found. (20.002540012s)
W0228 20:36:40.783560   11176 request.go:627] Throttling request took 19.904267189s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-11-jeaw2/serviceaccounts/default
Feb 28 20:36:40.868: INFO: Service account default in ns e2e-tests-nslifetest-11-jeaw2 with secrets found. (19.989648103s)
W0228 20:36:40.983547   11176 request.go:627] Throttling request took 19.912627219s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-12-4a2b5/serviceaccounts/default
Feb 28 20:36:41.064: INFO: Service account default in ns e2e-tests-nslifetest-12-4a2b5 with secrets found. (19.993194935s)
W0228 20:36:41.183561   11176 request.go:627] Throttling request took 19.91200983s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-13-oap7q/serviceaccounts/default
Feb 28 20:36:41.270: INFO: Service account default in ns e2e-tests-nslifetest-13-oap7q with secrets found. (19.998559667s)
W0228 20:36:41.383561   11176 request.go:627] Throttling request took 19.912439837s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-14-4jjw7/serviceaccounts/default
Feb 28 20:36:41.469: INFO: Service account default in ns e2e-tests-nslifetest-14-4jjw7 with secrets found. (19.99872291s)
W0228 20:36:41.583551   11176 request.go:627] Throttling request took 19.913245509s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-15-mk7x8/serviceaccounts/default
Feb 28 20:36:41.667: INFO: Service account default in ns e2e-tests-nslifetest-15-mk7x8 with secrets found. (19.996739292s)
W0228 20:36:41.783519   11176 request.go:627] Throttling request took 19.912116368s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-16-39mv6/serviceaccounts/default
Feb 28 20:36:41.866: INFO: Service account default in ns e2e-tests-nslifetest-16-39mv6 with secrets found. (19.995107859s)
W0228 20:36:41.983500   11176 request.go:627] Throttling request took 19.911627031s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-17-ynjk9/serviceaccounts/default
Feb 28 20:36:42.073: INFO: Service account default in ns e2e-tests-nslifetest-17-ynjk9 with secrets found. (20.002102837s)
W0228 20:36:42.183554   11176 request.go:627] Throttling request took 19.909504372s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-18-8tir5/serviceaccounts/default
Feb 28 20:36:42.269: INFO: Service account default in ns e2e-tests-nslifetest-18-8tir5 with secrets found. (19.995097839s)
W0228 20:36:42.383544   11176 request.go:627] Throttling request took 19.913252253s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-19-38vis/serviceaccounts/default
Feb 28 20:36:42.467: INFO: Service account default in ns e2e-tests-nslifetest-19-38vis with secrets found. (19.997397454s)
W0228 20:36:42.583551   11176 request.go:627] Throttling request took 19.906935679s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-20-767jt/serviceaccounts/default
Feb 28 20:36:42.665: INFO: Service account default in ns e2e-tests-nslifetest-20-767jt with secrets found. (19.988623515s)
W0228 20:36:42.783509   11176 request.go:627] Throttling request took 19.91382346s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-21-5q9m6/serviceaccounts/default
Feb 28 20:36:42.866: INFO: Service account default in ns e2e-tests-nslifetest-21-5q9m6 with secrets found. (19.996868578s)
W0228 20:36:42.983554   11176 request.go:627] Throttling request took 19.916201621s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-22-ijefx/serviceaccounts/default
Feb 28 20:36:43.066: INFO: Service account default in ns e2e-tests-nslifetest-22-ijefx with secrets found. (19.999399315s)
W0228 20:36:43.183576   11176 request.go:627] Throttling request took 19.913439253s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-23-h9g5i/serviceaccounts/default
Feb 28 20:36:43.268: INFO: Service account default in ns e2e-tests-nslifetest-23-h9g5i with secrets found. (19.998593469s)
W0228 20:36:43.383574   11176 request.go:627] Throttling request took 19.91112471s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-24-kp5xd/serviceaccounts/default
Feb 28 20:36:43.464: INFO: Service account default in ns e2e-tests-nslifetest-24-kp5xd with secrets found. (19.992234915s)
W0228 20:36:43.583570   11176 request.go:627] Throttling request took 19.911063815s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-25-ih94t/serviceaccounts/default
Feb 28 20:36:43.670: INFO: Service account default in ns e2e-tests-nslifetest-25-ih94t with secrets found. (19.997863553s)
W0228 20:36:43.783606   11176 request.go:627] Throttling request took 19.916526653s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-26-5e2vy/serviceaccounts/default
Feb 28 20:36:43.868: INFO: Service account default in ns e2e-tests-nslifetest-26-5e2vy with secrets found. (20.001936997s)
W0228 20:36:43.983549   11176 request.go:627] Throttling request took 19.908671117s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-27-1r7v2/serviceaccounts/default
Feb 28 20:36:44.064: INFO: Service account default in ns e2e-tests-nslifetest-27-1r7v2 with secrets found. (19.989363178s)
W0228 20:36:44.183548   11176 request.go:627] Throttling request took 19.916581973s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-28-y69yr/serviceaccounts/default
Feb 28 20:36:44.267: INFO: Service account default in ns e2e-tests-nslifetest-28-y69yr with secrets found. (20.00027136s)
W0228 20:36:44.383550   11176 request.go:627] Throttling request took 19.913548473s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-29-mcupa/serviceaccounts/default
Feb 28 20:36:44.473: INFO: Service account default in ns e2e-tests-nslifetest-29-mcupa with secrets found. (20.003071504s)
W0228 20:36:44.583555   11176 request.go:627] Throttling request took 19.911671362s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-30-ol6ab/serviceaccounts/default
Feb 28 20:36:44.662: INFO: Service account default in ns e2e-tests-nslifetest-30-ol6ab with secrets found. (19.990184216s)
W0228 20:36:44.783556   11176 request.go:627] Throttling request took 19.913648903s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-31-7q1ap/serviceaccounts/default
Feb 28 20:36:44.862: INFO: Service account default in ns e2e-tests-nslifetest-31-7q1ap with secrets found. (19.992557746s)
W0228 20:36:44.983578   11176 request.go:627] Throttling request took 19.910649874s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-32-iske8/serviceaccounts/default
Feb 28 20:36:45.068: INFO: Service account default in ns e2e-tests-nslifetest-32-iske8 with secrets found. (19.996078752s)
W0228 20:36:45.183557   11176 request.go:627] Throttling request took 19.910585707s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-33-8ch77/serviceaccounts/default
Feb 28 20:36:45.272: INFO: Service account default in ns e2e-tests-nslifetest-33-8ch77 with secrets found. (19.999617381s)
W0228 20:36:45.383566   11176 request.go:627] Throttling request took 19.912991848s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-34-r6rfj/serviceaccounts/default
Feb 28 20:36:45.467: INFO: Service account default in ns e2e-tests-nslifetest-34-r6rfj with secrets found. (19.997007684s)
W0228 20:36:45.583577   11176 request.go:627] Throttling request took 19.911564946s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-35-p2u4m/serviceaccounts/default
Feb 28 20:36:45.666: INFO: Service account default in ns e2e-tests-nslifetest-35-p2u4m with secrets found. (19.994982297s)
W0228 20:36:45.783562   11176 request.go:627] Throttling request took 19.894016668s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-36-b48pb/serviceaccounts/default
Feb 28 20:36:45.870: INFO: Service account default in ns e2e-tests-nslifetest-36-b48pb with secrets found. (19.980554731s)
W0228 20:36:45.983557   11176 request.go:627] Throttling request took 19.910980999s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-37-jdogo/serviceaccounts/default
Feb 28 20:36:46.064: INFO: Service account default in ns e2e-tests-nslifetest-37-jdogo with secrets found. (19.991657141s)
W0228 20:36:46.183601   11176 request.go:627] Throttling request took 19.88323259s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-38-mysen/serviceaccounts/default
Feb 28 20:36:46.268: INFO: Service account default in ns e2e-tests-nslifetest-38-mysen with secrets found. (19.967975369s)
W0228 20:36:46.383553   11176 request.go:627] Throttling request took 19.913035922s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-39-oeqxa/serviceaccounts/default
Feb 28 20:36:46.466: INFO: Service account default in ns e2e-tests-nslifetest-39-oeqxa with secrets found. (19.996105474s)
W0228 20:36:46.583561   11176 request.go:627] Throttling request took 19.910718971s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-40-d6kmf/serviceaccounts/default
Feb 28 20:36:46.667: INFO: Service account default in ns e2e-tests-nslifetest-40-d6kmf with secrets found. (19.994424324s)
W0228 20:36:46.783567   11176 request.go:627] Throttling request took 19.903253208s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-41-6amic/serviceaccounts/default
Feb 28 20:36:46.863: INFO: Service account default in ns e2e-tests-nslifetest-41-6amic with secrets found. (19.983480187s)
W0228 20:36:46.983600 11176 request.go:627] Throttling request took 19.857283014s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-42-f0fyn/serviceaccounts/default | |
Feb 28 20:36:47.067: INFO: Service account default in ns e2e-tests-nslifetest-42-f0fyn with secrets found. (19.941061307s) | |
W0228 20:36:47.183547 11176 request.go:627] Throttling request took 19.906394892s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-43-nxvar/serviceaccounts/default | |
Feb 28 20:36:47.269: INFO: Service account default in ns e2e-tests-nslifetest-43-nxvar with secrets found. (19.992506568s) | |
W0228 20:36:47.383547 11176 request.go:627] Throttling request took 19.907261763s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-44-v2vzb/serviceaccounts/default | |
Feb 28 20:36:47.471: INFO: Service account default in ns e2e-tests-nslifetest-44-v2vzb with secrets found. (19.995270979s) | |
W0228 20:36:47.583546 11176 request.go:627] Throttling request took 19.910841145s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-45-uda0p/serviceaccounts/default | |
Feb 28 20:36:47.664: INFO: Service account default in ns e2e-tests-nslifetest-45-uda0p with secrets found. (19.992041672s) | |
W0228 20:36:47.783571 11176 request.go:627] Throttling request took 19.901698879s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-46-6arhu/serviceaccounts/default | |
Feb 28 20:36:47.866: INFO: Service account default in ns e2e-tests-nslifetest-46-6arhu with secrets found. (19.984703255s) | |
W0228 20:36:47.983558 11176 request.go:627] Throttling request took 19.89142612s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-47-0sufj/serviceaccounts/default | |
Feb 28 20:36:48.066: INFO: Service account default in ns e2e-tests-nslifetest-47-0sufj with secrets found. (19.973925344s) | |
W0228 20:36:48.183551 11176 request.go:627] Throttling request took 19.900511744s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-48-ngrnu/serviceaccounts/default | |
Feb 28 20:36:48.264: INFO: Service account default in ns e2e-tests-nslifetest-48-ngrnu with secrets found. (19.981609443s) | |
W0228 20:36:48.383565 11176 request.go:627] Throttling request took 19.883231231s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-49-yd6kh/serviceaccounts/default | |
Feb 28 20:36:48.468: INFO: Service account default in ns e2e-tests-nslifetest-49-yd6kh with secrets found. (19.968397425s) | |
W0228 20:36:48.583559 11176 request.go:627] Throttling request took 19.894360107s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-50-ulg90/serviceaccounts/default | |
Feb 28 20:36:48.668: INFO: Service account default in ns e2e-tests-nslifetest-50-ulg90 with secrets found. (19.978921226s) | |
W0228 20:36:48.783567 11176 request.go:627] Throttling request took 19.888551365s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-51-b3551/serviceaccounts/default | |
Feb 28 20:36:48.869: INFO: Service account default in ns e2e-tests-nslifetest-51-b3551 with secrets found. (19.97405139s) | |
W0228 20:36:48.983550 11176 request.go:627] Throttling request took 19.905209639s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-52-dbg3j/serviceaccounts/default | |
Feb 28 20:36:49.068: INFO: Service account default in ns e2e-tests-nslifetest-52-dbg3j with secrets found. (19.99067103s) | |
W0228 20:36:49.183607 11176 request.go:627] Throttling request took 19.885872006s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-53-t5a3c/serviceaccounts/default | |
Feb 28 20:36:49.267: INFO: Service account default in ns e2e-tests-nslifetest-53-t5a3c with secrets found. (19.970206994s) | |
W0228 20:36:49.383569 11176 request.go:627] Throttling request took 19.888331573s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-54-z27ue/serviceaccounts/default | |
Feb 28 20:36:49.466: INFO: Service account default in ns e2e-tests-nslifetest-54-z27ue with secrets found. (19.970990726s) | |
W0228 20:36:49.583613 11176 request.go:627] Throttling request took 19.901425228s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-55-sac9d/serviceaccounts/default | |
Feb 28 20:36:49.665: INFO: Service account default in ns e2e-tests-nslifetest-55-sac9d with secrets found. (19.982885318s) | |
W0228 20:36:49.783543 11176 request.go:627] Throttling request took 19.873044766s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-56-sqrnk/serviceaccounts/default | |
Feb 28 20:36:49.865: INFO: Service account default in ns e2e-tests-nslifetest-56-sqrnk with secrets found. (19.955321254s) | |
W0228 20:36:49.983521 11176 request.go:627] Throttling request took 19.893437401s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-57-tutea/serviceaccounts/default | |
Feb 28 20:36:50.067: INFO: Service account default in ns e2e-tests-nslifetest-57-tutea with secrets found. (19.977373447s) | |
W0228 20:36:50.183568 11176 request.go:627] Throttling request took 19.906687522s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-58-gn1rl/serviceaccounts/default | |
Feb 28 20:36:50.266: INFO: Service account default in ns e2e-tests-nslifetest-58-gn1rl with secrets found. (19.989428071s) | |
W0228 20:36:50.383557 11176 request.go:627] Throttling request took 19.914190323s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-59-mo8qe/serviceaccounts/default | |
Feb 28 20:36:50.466: INFO: Service account default in ns e2e-tests-nslifetest-59-mo8qe with secrets found. (19.997645136s) | |
W0228 20:36:50.583557 11176 request.go:627] Throttling request took 19.909861467s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-60-pfg1a/serviceaccounts/default | |
Feb 28 20:36:50.667: INFO: Service account default in ns e2e-tests-nslifetest-60-pfg1a with secrets found. (19.993718778s) | |
W0228 20:36:50.783563 11176 request.go:627] Throttling request took 19.90108633s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-61-4udxp/serviceaccounts/default | |
Feb 28 20:36:50.870: INFO: Service account default in ns e2e-tests-nslifetest-61-4udxp with secrets found. (19.988318259s) | |
W0228 20:36:50.983565 11176 request.go:627] Throttling request took 19.909673094s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-62-ql1pv/serviceaccounts/default | |
Feb 28 20:36:51.072: INFO: Service account default in ns e2e-tests-nslifetest-62-ql1pv with secrets found. (19.99898865s) | |
W0228 20:36:51.183557 11176 request.go:627] Throttling request took 19.91311434s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-63-wgxex/serviceaccounts/default | |
Feb 28 20:36:51.268: INFO: Service account default in ns e2e-tests-nslifetest-63-wgxex with secrets found. (19.998251846s) | |
W0228 20:36:51.383524 11176 request.go:627] Throttling request took 19.912829658s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-64-k6in1/serviceaccounts/default | |
Feb 28 20:36:51.468: INFO: Service account default in ns e2e-tests-nslifetest-64-k6in1 with secrets found. (19.998213304s) | |
W0228 20:36:51.583506 11176 request.go:627] Throttling request took 19.911213624s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-65-zfk6c/serviceaccounts/default | |
Feb 28 20:36:51.664: INFO: Service account default in ns e2e-tests-nslifetest-65-zfk6c with secrets found. (19.992222961s) | |
W0228 20:36:51.783550 11176 request.go:627] Throttling request took 19.903487958s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-66-0uy91/serviceaccounts/default | |
Feb 28 20:36:51.863: INFO: Service account default in ns e2e-tests-nslifetest-66-0uy91 with secrets found. (19.98381581s) | |
W0228 20:36:51.983520 11176 request.go:627] Throttling request took 19.908533074s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-67-8r4g2/serviceaccounts/default | |
Feb 28 20:36:52.067: INFO: Service account default in ns e2e-tests-nslifetest-67-8r4g2 with secrets found. (19.992435158s) | |
W0228 20:36:52.183534 11176 request.go:627] Throttling request took 19.91446625s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-68-okc6n/serviceaccounts/default | |
Feb 28 20:36:52.265: INFO: Service account default in ns e2e-tests-nslifetest-68-okc6n with secrets found. (19.996048117s) | |
W0228 20:36:52.383559 11176 request.go:627] Throttling request took 19.912692252s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-69-6tgj7/serviceaccounts/default | |
Feb 28 20:36:52.467: INFO: Service account default in ns e2e-tests-nslifetest-69-6tgj7 with secrets found. (19.996689028s) | |
W0228 20:36:52.583569 11176 request.go:627] Throttling request took 19.91114603s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-70-tit85/serviceaccounts/default | |
Feb 28 20:36:52.669: INFO: Service account default in ns e2e-tests-nslifetest-70-tit85 with secrets found. (19.996719718s) | |
W0228 20:36:52.783569 11176 request.go:627] Throttling request took 19.909387317s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-71-k1vfi/serviceaccounts/default | |
Feb 28 20:36:52.868: INFO: Service account default in ns e2e-tests-nslifetest-71-k1vfi with secrets found. (19.993864749s) | |
W0228 20:36:52.983612 11176 request.go:627] Throttling request took 19.911138897s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-72-joicx/serviceaccounts/default | |
Feb 28 20:36:53.067: INFO: Service account default in ns e2e-tests-nslifetest-72-joicx with secrets found. (19.994580813s) | |
W0228 20:36:53.183554 11176 request.go:627] Throttling request took 19.910459284s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-73-kzsdy/serviceaccounts/default | |
Feb 28 20:36:53.271: INFO: Service account default in ns e2e-tests-nslifetest-73-kzsdy with secrets found. (19.998243818s) | |
W0228 20:36:53.383559 11176 request.go:627] Throttling request took 19.910271949s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-74-rapip/serviceaccounts/default | |
Feb 28 20:36:53.468: INFO: Service account default in ns e2e-tests-nslifetest-74-rapip with secrets found. (19.994968769s) | |
W0228 20:36:53.583562 11176 request.go:627] Throttling request took 19.911612333s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-75-1s4p2/serviceaccounts/default | |
Feb 28 20:36:53.664: INFO: Service account default in ns e2e-tests-nslifetest-75-1s4p2 with secrets found. (19.992778066s) | |
W0228 20:36:53.783576 11176 request.go:627] Throttling request took 19.91572414s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-76-hjvbf/serviceaccounts/default | |
Feb 28 20:36:53.872: INFO: Service account default in ns e2e-tests-nslifetest-76-hjvbf with secrets found. (20.004927869s) | |
W0228 20:36:53.983536 11176 request.go:627] Throttling request took 19.907908093s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-77-h6gra/serviceaccounts/default | |
Feb 28 20:36:54.067: INFO: Service account default in ns e2e-tests-nslifetest-77-h6gra with secrets found. (19.992088031s) | |
W0228 20:36:54.183558 11176 request.go:627] Throttling request took 19.911329699s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-78-yizce/serviceaccounts/default | |
Feb 28 20:36:54.269: INFO: Service account default in ns e2e-tests-nslifetest-78-yizce with secrets found. (19.99773021s) | |
W0228 20:36:54.383591 11176 request.go:627] Throttling request took 19.911012811s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-79-70vux/serviceaccounts/default | |
Feb 28 20:36:54.466: INFO: Service account default in ns e2e-tests-nslifetest-79-70vux with secrets found. (19.993709802s) | |
W0228 20:36:54.583527 11176 request.go:627] Throttling request took 19.910199871s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-80-8gm1o/serviceaccounts/default | |
Feb 28 20:36:54.667: INFO: Service account default in ns e2e-tests-nslifetest-80-8gm1o with secrets found. (19.994088604s) | |
W0228 20:36:54.783536 11176 request.go:627] Throttling request took 19.908332486s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-81-a5nt0/serviceaccounts/default | |
Feb 28 20:36:54.866: INFO: Service account default in ns e2e-tests-nslifetest-81-a5nt0 with secrets found. (19.991227921s) | |
W0228 20:36:54.983603 11176 request.go:627] Throttling request took 19.909385286s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-82-njyom/serviceaccounts/default | |
Feb 28 20:36:55.064: INFO: Service account default in ns e2e-tests-nslifetest-82-njyom with secrets found. (19.989946966s) | |
W0228 20:36:55.183607 11176 request.go:627] Throttling request took 19.914134081s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-83-c6nma/serviceaccounts/default | |
Feb 28 20:36:55.266: INFO: Service account default in ns e2e-tests-nslifetest-83-c6nma with secrets found. (19.996772525s) | |
W0228 20:36:55.383553 11176 request.go:627] Throttling request took 19.908259683s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-84-8wrbc/serviceaccounts/default | |
Feb 28 20:36:55.469: INFO: Service account default in ns e2e-tests-nslifetest-84-8wrbc with secrets found. (19.99405475s) | |
W0228 20:36:55.583553 11176 request.go:627] Throttling request took 19.912277909s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-85-rtaz5/serviceaccounts/default | |
Feb 28 20:36:55.666: INFO: Service account default in ns e2e-tests-nslifetest-85-rtaz5 with secrets found. (19.995402435s) | |
W0228 20:36:55.783573 11176 request.go:627] Throttling request took 19.905720743s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-86-y5ah8/serviceaccounts/default | |
Feb 28 20:36:55.867: INFO: Service account default in ns e2e-tests-nslifetest-86-y5ah8 with secrets found. (19.989830051s) | |
W0228 20:36:55.983572 11176 request.go:627] Throttling request took 19.907901679s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-87-b1w4z/serviceaccounts/default | |
Feb 28 20:36:56.064: INFO: Service account default in ns e2e-tests-nslifetest-87-b1w4z with secrets found. (19.989326206s) | |
W0228 20:36:56.183545 11176 request.go:627] Throttling request took 19.912532346s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-88-59imz/serviceaccounts/default | |
Feb 28 20:36:56.268: INFO: Service account default in ns e2e-tests-nslifetest-88-59imz with secrets found. (19.997605474s) | |
W0228 20:36:56.383547 11176 request.go:627] Throttling request took 19.910136079s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-89-9tgd0/serviceaccounts/default | |
Feb 28 20:36:56.468: INFO: Service account default in ns e2e-tests-nslifetest-89-9tgd0 with secrets found. (19.995570065s) | |
W0228 20:36:56.583565 11176 request.go:627] Throttling request took 19.912142248s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-90-3aoot/serviceaccounts/default | |
Feb 28 20:36:56.666: INFO: Service account default in ns e2e-tests-nslifetest-90-3aoot with secrets found. (19.994786047s) | |
W0228 20:36:56.783566 11176 request.go:627] Throttling request took 19.909313234s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-91-6tejb/serviceaccounts/default | |
Feb 28 20:36:56.867: INFO: Service account default in ns e2e-tests-nslifetest-91-6tejb with secrets found. (19.993138334s) | |
W0228 20:36:56.983578 11176 request.go:627] Throttling request took 19.912305055s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-92-ktyxn/serviceaccounts/default | |
Feb 28 20:36:57.061: INFO: Service account default in ns e2e-tests-nslifetest-92-ktyxn with secrets found. (19.990509857s) | |
W0228 20:36:57.183543 11176 request.go:627] Throttling request took 19.911298677s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-93-mbyp8/serviceaccounts/default | |
Feb 28 20:36:57.266: INFO: Service account default in ns e2e-tests-nslifetest-93-mbyp8 with secrets found. (19.994748144s) | |
W0228 20:36:57.383597 11176 request.go:627] Throttling request took 19.912426253s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-94-a1qtx/serviceaccounts/default | |
Feb 28 20:36:57.465: INFO: Service account default in ns e2e-tests-nslifetest-94-a1qtx with secrets found. (19.994401214s) | |
W0228 20:36:57.583584 11176 request.go:627] Throttling request took 19.90348036s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-95-7b4ea/serviceaccounts/default | |
Feb 28 20:36:57.665: INFO: Service account default in ns e2e-tests-nslifetest-95-7b4ea with secrets found. (19.985231269s) | |
W0228 20:36:57.783526 11176 request.go:627] Throttling request took 19.90631875s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-96-wd3te/serviceaccounts/default | |
Feb 28 20:36:57.869: INFO: Service account default in ns e2e-tests-nslifetest-96-wd3te with secrets found. (19.992034466s) | |
W0228 20:36:57.983559 11176 request.go:627] Throttling request took 19.911138088s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-97-g734h/serviceaccounts/default | |
Feb 28 20:36:58.067: INFO: Service account default in ns e2e-tests-nslifetest-97-g734h with secrets found. (19.994929265s) | |
W0228 20:36:58.183574 11176 request.go:627] Throttling request took 19.907677319s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-98-qnnqz/serviceaccounts/default | |
Feb 28 20:36:58.266: INFO: Service account default in ns e2e-tests-nslifetest-98-qnnqz with secrets found. (19.99069899s) | |
W0228 20:36:58.383608 11176 request.go:627] Throttling request took 19.911676025s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-99-xycg2/serviceaccounts/default | |
Feb 28 20:36:58.466: INFO: Service account default in ns e2e-tests-nslifetest-99-xycg2 with secrets found. (19.994704271s) | |
STEP: Waiting 10 seconds | |
STEP: Deleting namespaces | |
Feb 28 20:37:08.728: INFO: namespace : e2e-tests-nslifetest-0-3ujbm api call to delete is complete | |
Feb 28 20:37:08.730: INFO: namespace : e2e-tests-nslifetest-1-rig2d api call to delete is complete | |
W0228 20:37:08.783586 11176 request.go:627] Throttling request took 140.312404ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-18-8tir5 | |
Feb 28 20:37:08.898: INFO: namespace : e2e-tests-nslifetest-10-45l7v api call to delete is complete | |
Feb 28 20:37:08.901: INFO: namespace : e2e-tests-nslifetest-11-jeaw2 api call to delete is complete | |
W0228 20:37:08.983492 11176 request.go:627] Throttling request took 340.208501ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-19-38vis | |
Feb 28 20:37:08.989: INFO: namespace : e2e-tests-nslifetest-13-oap7q api call to delete is complete | |
Feb 28 20:37:08.991: INFO: namespace : e2e-tests-nslifetest-12-4a2b5 api call to delete is complete | |
Feb 28 20:37:09.004: INFO: namespace : e2e-tests-nslifetest-14-4jjw7 api call to delete is complete | |
Feb 28 20:37:09.010: INFO: namespace : e2e-tests-nslifetest-17-ynjk9 api call to delete is complete | |
Feb 28 20:37:09.023: INFO: namespace : e2e-tests-nslifetest-15-mk7x8 api call to delete is complete | |
Feb 28 20:37:09.024: INFO: namespace : e2e-tests-nslifetest-16-39mv6 api call to delete is complete | |
Feb 28 20:37:09.041: INFO: namespace : e2e-tests-nslifetest-18-8tir5 api call to delete is complete | |
Feb 28 20:37:09.071: INFO: namespace : e2e-tests-nslifetest-19-38vis api call to delete is complete | |
W0228 20:37:09.183544 11176 request.go:627] Throttling request took 540.240931ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-2-yyem9 | |
Feb 28 20:37:09.271: INFO: namespace : e2e-tests-nslifetest-2-yyem9 api call to delete is complete | |
W0228 20:37:09.383548 11176 request.go:627] Throttling request took 740.247793ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-20-767jt | |
Feb 28 20:37:09.471: INFO: namespace : e2e-tests-nslifetest-20-767jt api call to delete is complete | |
W0228 20:37:09.583545 11176 request.go:627] Throttling request took 940.239105ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-21-5q9m6 | |
Feb 28 20:37:09.679: INFO: namespace : e2e-tests-nslifetest-21-5q9m6 api call to delete is complete | |
W0228 20:37:09.783555 11176 request.go:627] Throttling request took 1.14024057s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-22-ijefx | |
Feb 28 20:37:09.874: INFO: namespace : e2e-tests-nslifetest-22-ijefx api call to delete is complete | |
W0228 20:37:09.983533 11176 request.go:627] Throttling request took 1.340214765s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-23-h9g5i | |
Feb 28 20:37:10.071: INFO: namespace : e2e-tests-nslifetest-23-h9g5i api call to delete is complete | |
W0228 20:37:10.183611 11176 request.go:627] Throttling request took 1.540269676s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-24-kp5xd | |
Feb 28 20:37:10.266: INFO: namespace : e2e-tests-nslifetest-24-kp5xd api call to delete is complete | |
W0228 20:37:10.383518 11176 request.go:627] Throttling request took 1.740186843s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-25-ih94t | |
Feb 28 20:37:10.469: INFO: namespace : e2e-tests-nslifetest-25-ih94t api call to delete is complete | |
W0228 20:37:10.583533 11176 request.go:627] Throttling request took 1.940191748s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-26-5e2vy | |
Feb 28 20:37:10.671: INFO: namespace : e2e-tests-nslifetest-26-5e2vy api call to delete is complete | |
W0228 20:37:10.783557 11176 request.go:627] Throttling request took 2.140204954s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-27-1r7v2 | |
Feb 28 20:37:10.870: INFO: namespace : e2e-tests-nslifetest-27-1r7v2 api call to delete is complete | |
W0228 20:37:10.983555 11176 request.go:627] Throttling request took 2.340196798s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-28-y69yr | |
Feb 28 20:37:11.068: INFO: namespace : e2e-tests-nslifetest-28-y69yr api call to delete is complete | |
W0228 20:37:11.183554 11176 request.go:627] Throttling request took 2.540186866s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-29-mcupa | |
Feb 28 20:37:11.271: INFO: namespace : e2e-tests-nslifetest-29-mcupa api call to delete is complete | |
W0228 20:37:11.383532 11176 request.go:627] Throttling request took 2.740161089s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-3-ucbc9 | |
Feb 28 20:37:11.472: INFO: namespace : e2e-tests-nslifetest-3-ucbc9 api call to delete is complete | |
W0228 20:37:11.583573 11176 request.go:627] Throttling request took 2.940194534s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-30-ol6ab | |
Feb 28 20:37:11.669: INFO: namespace : e2e-tests-nslifetest-30-ol6ab api call to delete is complete | |
W0228 20:37:11.783564 11176 request.go:627] Throttling request took 3.140179879s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-31-7q1ap | |
Feb 28 20:37:11.870: INFO: namespace : e2e-tests-nslifetest-31-7q1ap api call to delete is complete | |
W0228 20:37:11.983552 11176 request.go:627] Throttling request took 3.340162357s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-32-iske8 | |
Feb 28 20:37:12.070: INFO: namespace : e2e-tests-nslifetest-32-iske8 api call to delete is complete | |
W0228 20:37:12.183556 11176 request.go:627] Throttling request took 3.540159383s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-33-8ch77 | |
Feb 28 20:37:12.268: INFO: namespace : e2e-tests-nslifetest-33-8ch77 api call to delete is complete | |
W0228 20:37:12.383565 11176 request.go:627] Throttling request took 3.740153379s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-34-r6rfj | |
Feb 28 20:37:12.469: INFO: namespace : e2e-tests-nslifetest-34-r6rfj api call to delete is complete | |
W0228 20:37:12.583533 11176 request.go:627] Throttling request took 3.940120605s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-35-p2u4m | |
Feb 28 20:37:12.667: INFO: namespace : e2e-tests-nslifetest-35-p2u4m api call to delete is complete | |
W0228 20:37:12.783566 11176 request.go:627] Throttling request took 4.140148715s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-36-b48pb | |
Feb 28 20:37:12.868: INFO: namespace : e2e-tests-nslifetest-36-b48pb api call to delete is complete | |
W0228 20:37:12.983546 11176 request.go:627] Throttling request took 4.340116211s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-37-jdogo | |
Feb 28 20:37:13.072: INFO: namespace : e2e-tests-nslifetest-37-jdogo api call to delete is complete | |
W0228 20:37:13.183559 11176 request.go:627] Throttling request took 4.540122606s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-38-mysen | |
Feb 28 20:37:13.277: INFO: namespace : e2e-tests-nslifetest-38-mysen api call to delete is complete | |
W0228 20:37:13.383548 11176 request.go:627] Throttling request took 4.740096855s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-39-oeqxa | |
Feb 28 20:37:13.477: INFO: namespace : e2e-tests-nslifetest-39-oeqxa api call to delete is complete | |
W0228 20:37:13.583530 11176 request.go:627] Throttling request took 4.940076052s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-4-r06ht | |
Feb 28 20:37:13.674: INFO: namespace : e2e-tests-nslifetest-4-r06ht api call to delete is complete | |
W0228 20:37:13.783513 11176 request.go:627] Throttling request took 5.140056864s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-40-d6kmf | |
Feb 28 20:37:13.868: INFO: namespace : e2e-tests-nslifetest-40-d6kmf api call to delete is complete | |
W0228 20:37:13.983545 11176 request.go:627] Throttling request took 5.340081393s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-41-6amic | |
Feb 28 20:37:14.071: INFO: namespace : e2e-tests-nslifetest-41-6amic api call to delete is complete | |
W0228 20:37:14.183560 11176 request.go:627] Throttling request took 5.540090596s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-42-f0fyn | |
Feb 28 20:37:14.270: INFO: namespace : e2e-tests-nslifetest-42-f0fyn api call to delete is complete | |
W0228 20:37:14.383516 11176 request.go:627] Throttling request took 5.740041286s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-43-nxvar | |
Feb 28 20:37:14.466: INFO: namespace : e2e-tests-nslifetest-43-nxvar api call to delete is complete | |
W0228 20:37:14.583601 11176 request.go:627] Throttling request took 5.940116075s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-44-v2vzb | |
Feb 28 20:37:14.670: INFO: namespace : e2e-tests-nslifetest-44-v2vzb api call to delete is complete | |
W0228 20:37:14.783560 11176 request.go:627] Throttling request took 6.140070352s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-45-uda0p | |
Feb 28 20:37:14.869: INFO: namespace : e2e-tests-nslifetest-45-uda0p api call to delete is complete | |
W0228 20:37:14.983595 11176 request.go:627] Throttling request took 6.340100118s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-46-6arhu | |
Feb 28 20:37:15.071: INFO: namespace : e2e-tests-nslifetest-46-6arhu api call to delete is complete | |
W0228 20:37:15.183571 11176 request.go:627] Throttling request took 6.540070739s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-47-0sufj
Feb 28 20:37:15.272: INFO: namespace : e2e-tests-nslifetest-47-0sufj api call to delete is complete
W0228 20:37:15.383601 11176 request.go:627] Throttling request took 6.740083736s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-48-ngrnu
Feb 28 20:37:15.466: INFO: namespace : e2e-tests-nslifetest-48-ngrnu api call to delete is complete
W0228 20:37:15.583553 11176 request.go:627] Throttling request took 6.940039848s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-49-yd6kh
Feb 28 20:37:15.667: INFO: namespace : e2e-tests-nslifetest-49-yd6kh api call to delete is complete
W0228 20:37:15.783543 11176 request.go:627] Throttling request took 7.14002257s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-5-sh3eu
Feb 28 20:37:15.871: INFO: namespace : e2e-tests-nslifetest-5-sh3eu api call to delete is complete
W0228 20:37:15.983543 11176 request.go:627] Throttling request took 7.340017263s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-50-ulg90
Feb 28 20:37:16.068: INFO: namespace : e2e-tests-nslifetest-50-ulg90 api call to delete is complete
W0228 20:37:16.183548 11176 request.go:627] Throttling request took 7.540015588s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-51-b3551
Feb 28 20:37:16.295: INFO: namespace : e2e-tests-nslifetest-51-b3551 api call to delete is complete
W0228 20:37:16.383553 11176 request.go:627] Throttling request took 7.740005591s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-52-dbg3j
Feb 28 20:37:16.466: INFO: namespace : e2e-tests-nslifetest-52-dbg3j api call to delete is complete
W0228 20:37:16.583573 11176 request.go:627] Throttling request took 7.940018875s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-53-t5a3c
Feb 28 20:37:16.673: INFO: namespace : e2e-tests-nslifetest-53-t5a3c api call to delete is complete
W0228 20:37:16.783600 11176 request.go:627] Throttling request took 8.140017525s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-54-z27ue
Feb 28 20:37:16.872: INFO: namespace : e2e-tests-nslifetest-54-z27ue api call to delete is complete
W0228 20:37:16.983538 11176 request.go:627] Throttling request took 8.339971805s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-55-sac9d
Feb 28 20:37:17.071: INFO: namespace : e2e-tests-nslifetest-55-sac9d api call to delete is complete
W0228 20:37:17.183564 11176 request.go:627] Throttling request took 8.539992461s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-56-sqrnk
Feb 28 20:37:17.273: INFO: namespace : e2e-tests-nslifetest-56-sqrnk api call to delete is complete
W0228 20:37:17.383550 11176 request.go:627] Throttling request took 8.739972607s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-57-tutea
Feb 28 20:37:17.473: INFO: namespace : e2e-tests-nslifetest-57-tutea api call to delete is complete
W0228 20:37:17.583575 11176 request.go:627] Throttling request took 8.939990264s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-58-gn1rl
Feb 28 20:37:17.675: INFO: namespace : e2e-tests-nslifetest-58-gn1rl api call to delete is complete
W0228 20:37:17.783573 11176 request.go:627] Throttling request took 9.139984385s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-59-mo8qe
Feb 28 20:37:17.874: INFO: namespace : e2e-tests-nslifetest-59-mo8qe api call to delete is complete
W0228 20:37:17.983559 11176 request.go:627] Throttling request took 9.339963459s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-6-ixeqj
Feb 28 20:37:18.066: INFO: namespace : e2e-tests-nslifetest-6-ixeqj api call to delete is complete
W0228 20:37:18.183574 11176 request.go:627] Throttling request took 9.539971092s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-60-pfg1a
Feb 28 20:37:18.273: INFO: namespace : e2e-tests-nslifetest-60-pfg1a api call to delete is complete
W0228 20:37:18.383559 11176 request.go:627] Throttling request took 9.739951754s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-61-4udxp
Feb 28 20:37:18.475: INFO: namespace : e2e-tests-nslifetest-61-4udxp api call to delete is complete
W0228 20:37:18.583574 11176 request.go:627] Throttling request took 9.939959543s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-62-ql1pv
Feb 28 20:37:18.670: INFO: namespace : e2e-tests-nslifetest-62-ql1pv api call to delete is complete
W0228 20:37:18.783552 11176 request.go:627] Throttling request took 10.139929467s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-63-wgxex
Feb 28 20:37:18.867: INFO: namespace : e2e-tests-nslifetest-63-wgxex api call to delete is complete
W0228 20:37:18.983611 11176 request.go:627] Throttling request took 10.339983347s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-64-k6in1
Feb 28 20:37:19.070: INFO: namespace : e2e-tests-nslifetest-64-k6in1 api call to delete is complete
W0228 20:37:19.183573 11176 request.go:627] Throttling request took 10.539940013s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-65-zfk6c
Feb 28 20:37:19.268: INFO: namespace : e2e-tests-nslifetest-65-zfk6c api call to delete is complete
W0228 20:37:19.383574 11176 request.go:627] Throttling request took 10.739904381s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-66-0uy91
Feb 28 20:37:19.470: INFO: namespace : e2e-tests-nslifetest-66-0uy91 api call to delete is complete
W0228 20:37:19.583579 11176 request.go:627] Throttling request took 10.939885538s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-67-8r4g2
Feb 28 20:37:19.675: INFO: namespace : e2e-tests-nslifetest-67-8r4g2 api call to delete is complete
W0228 20:37:19.783518 11176 request.go:627] Throttling request took 11.139830185s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-68-okc6n
Feb 28 20:37:19.872: INFO: namespace : e2e-tests-nslifetest-68-okc6n api call to delete is complete
W0228 20:37:19.983563 11176 request.go:627] Throttling request took 11.339879674s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-69-6tgj7
Feb 28 20:37:20.071: INFO: namespace : e2e-tests-nslifetest-69-6tgj7 api call to delete is complete
W0228 20:37:20.183558 11176 request.go:627] Throttling request took 11.539867005s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-7-ufgv1
Feb 28 20:37:20.271: INFO: namespace : e2e-tests-nslifetest-7-ufgv1 api call to delete is complete
W0228 20:37:20.383544 11176 request.go:627] Throttling request took 11.739849057s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-70-tit85
Feb 28 20:37:20.470: INFO: namespace : e2e-tests-nslifetest-70-tit85 api call to delete is complete
W0228 20:37:20.583540 11176 request.go:627] Throttling request took 11.939835382s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-71-k1vfi
Feb 28 20:37:20.668: INFO: namespace : e2e-tests-nslifetest-71-k1vfi api call to delete is complete
W0228 20:37:20.783562 11176 request.go:627] Throttling request took 12.139839682s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-72-joicx
Feb 28 20:37:20.865: INFO: namespace : e2e-tests-nslifetest-72-joicx api call to delete is complete
W0228 20:37:20.983574 11176 request.go:627] Throttling request took 12.339845699s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-73-kzsdy
Feb 28 20:37:21.067: INFO: namespace : e2e-tests-nslifetest-73-kzsdy api call to delete is complete
W0228 20:37:21.183607 11176 request.go:627] Throttling request took 12.539872091s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-74-rapip
Feb 28 20:37:21.267: INFO: namespace : e2e-tests-nslifetest-74-rapip api call to delete is complete
W0228 20:37:21.383555 11176 request.go:627] Throttling request took 12.73981493s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-75-1s4p2
Feb 28 20:37:21.469: INFO: namespace : e2e-tests-nslifetest-75-1s4p2 api call to delete is complete
W0228 20:37:21.583554 11176 request.go:627] Throttling request took 12.939816598s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-76-hjvbf
Feb 28 20:37:21.669: INFO: namespace : e2e-tests-nslifetest-76-hjvbf api call to delete is complete
W0228 20:37:21.783524 11176 request.go:627] Throttling request took 13.1397797s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-77-h6gra
Feb 28 20:37:21.869: INFO: namespace : e2e-tests-nslifetest-77-h6gra api call to delete is complete
W0228 20:37:21.983553 11176 request.go:627] Throttling request took 13.339803665s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-78-yizce
Feb 28 20:37:22.067: INFO: namespace : e2e-tests-nslifetest-78-yizce api call to delete is complete
W0228 20:37:22.183591 11176 request.go:627] Throttling request took 13.539806918s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-79-70vux
Feb 28 20:37:22.274: INFO: namespace : e2e-tests-nslifetest-79-70vux api call to delete is complete
W0228 20:37:22.383520 11176 request.go:627] Throttling request took 13.739758315s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-8-n3wc0
Feb 28 20:37:22.471: INFO: namespace : e2e-tests-nslifetest-8-n3wc0 api call to delete is complete
W0228 20:37:22.583514 11176 request.go:627] Throttling request took 13.939748127s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-80-8gm1o
Feb 28 20:37:22.677: INFO: namespace : e2e-tests-nslifetest-80-8gm1o api call to delete is complete
W0228 20:37:22.783563 11176 request.go:627] Throttling request took 14.139789734s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-81-a5nt0
Feb 28 20:37:22.875: INFO: namespace : e2e-tests-nslifetest-81-a5nt0 api call to delete is complete
W0228 20:37:22.983561 11176 request.go:627] Throttling request took 14.339779385s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-82-njyom
Feb 28 20:37:23.072: INFO: namespace : e2e-tests-nslifetest-82-njyom api call to delete is complete
W0228 20:37:23.183538 11176 request.go:627] Throttling request took 14.539751693s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-83-c6nma
Feb 28 20:37:23.274: INFO: namespace : e2e-tests-nslifetest-83-c6nma api call to delete is complete
W0228 20:37:23.383518 11176 request.go:627] Throttling request took 14.739724891s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-84-8wrbc
Feb 28 20:37:23.466: INFO: namespace : e2e-tests-nslifetest-84-8wrbc api call to delete is complete
W0228 20:37:23.583561 11176 request.go:627] Throttling request took 14.939759796s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-85-rtaz5
Feb 28 20:37:23.667: INFO: namespace : e2e-tests-nslifetest-85-rtaz5 api call to delete is complete
W0228 20:37:23.783542 11176 request.go:627] Throttling request took 15.13973419s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-86-y5ah8
Feb 28 20:37:23.870: INFO: namespace : e2e-tests-nslifetest-86-y5ah8 api call to delete is complete
W0228 20:37:23.983560 11176 request.go:627] Throttling request took 15.339744241s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-87-b1w4z
Feb 28 20:37:24.072: INFO: namespace : e2e-tests-nslifetest-87-b1w4z api call to delete is complete
W0228 20:37:24.183544 11176 request.go:627] Throttling request took 15.539723693s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-88-59imz
Feb 28 20:37:24.273: INFO: namespace : e2e-tests-nslifetest-88-59imz api call to delete is complete
W0228 20:37:24.383537 11176 request.go:627] Throttling request took 15.739708991s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-89-9tgd0
Feb 28 20:37:24.467: INFO: namespace : e2e-tests-nslifetest-89-9tgd0 api call to delete is complete
W0228 20:37:24.583517 11176 request.go:627] Throttling request took 15.939682651s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-9-p91vh
Feb 28 20:37:24.671: INFO: namespace : e2e-tests-nslifetest-9-p91vh api call to delete is complete
W0228 20:37:24.783539 11176 request.go:627] Throttling request took 16.139699863s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-90-3aoot
Feb 28 20:37:24.868: INFO: namespace : e2e-tests-nslifetest-90-3aoot api call to delete is complete
W0228 20:37:24.983542 11176 request.go:627] Throttling request took 16.339694724s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-91-6tejb
Feb 28 20:37:25.070: INFO: namespace : e2e-tests-nslifetest-91-6tejb api call to delete is complete
W0228 20:37:25.183522 11176 request.go:627] Throttling request took 16.53967013s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-92-ktyxn
Feb 28 20:37:25.273: INFO: namespace : e2e-tests-nslifetest-92-ktyxn api call to delete is complete
W0228 20:37:25.383545 11176 request.go:627] Throttling request took 16.739688248s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-93-mbyp8
Feb 28 20:37:25.473: INFO: namespace : e2e-tests-nslifetest-93-mbyp8 api call to delete is complete
W0228 20:37:25.583546 11176 request.go:627] Throttling request took 16.939682257s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-94-a1qtx
Feb 28 20:37:25.672: INFO: namespace : e2e-tests-nslifetest-94-a1qtx api call to delete is complete
W0228 20:37:25.783542 11176 request.go:627] Throttling request took 17.139673512s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-95-7b4ea
Feb 28 20:37:25.871: INFO: namespace : e2e-tests-nslifetest-95-7b4ea api call to delete is complete
W0228 20:37:25.983543 11176 request.go:627] Throttling request took 17.339668493s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-96-wd3te
Feb 28 20:37:26.074: INFO: namespace : e2e-tests-nslifetest-96-wd3te api call to delete is complete
W0228 20:37:26.183539 11176 request.go:627] Throttling request took 17.539656552s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-97-g734h
Feb 28 20:37:26.274: INFO: namespace : e2e-tests-nslifetest-97-g734h api call to delete is complete
W0228 20:37:26.383505 11176 request.go:627] Throttling request took 17.739618189s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-98-qnnqz
Feb 28 20:37:26.467: INFO: namespace : e2e-tests-nslifetest-98-qnnqz api call to delete is complete
W0228 20:37:26.583569 11176 request.go:627] Throttling request took 17.939676287s, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-99-xycg2
Feb 28 20:37:26.664: INFO: namespace : e2e-tests-nslifetest-99-xycg2 api call to delete is complete
STEP: Waiting for namespaces to vanish
Feb 28 20:37:28.835: INFO: Remaining namespaces : 68
Feb 28 20:37:30.832: INFO: Remaining namespaces : 64
Feb 28 20:37:32.834: INFO: Remaining namespaces : 62
Feb 28 20:37:34.868: INFO: Remaining namespaces : 58
Feb 28 20:37:36.842: INFO: Remaining namespaces : 56
Feb 28 20:37:38.839: INFO: Remaining namespaces : 52
Feb 28 20:37:40.839: INFO: Remaining namespaces : 50
Feb 28 20:37:42.835: INFO: Remaining namespaces : 46
Feb 28 20:37:44.839: INFO: Remaining namespaces : 44
Feb 28 20:37:46.827: INFO: Remaining namespaces : 40
Feb 28 20:37:48.832: INFO: Remaining namespaces : 37
Feb 28 20:37:50.834: INFO: Remaining namespaces : 34
Feb 28 20:37:52.829: INFO: Remaining namespaces : 30
Feb 28 20:37:54.833: INFO: Remaining namespaces : 28
Feb 28 20:37:56.751: INFO: Remaining namespaces : 24
Feb 28 20:37:58.752: INFO: Remaining namespaces : 22
Feb 28 20:38:00.749: INFO: Remaining namespaces : 18
Feb 28 20:38:02.746: INFO: Remaining namespaces : 16
Feb 28 20:38:04.749: INFO: Remaining namespaces : 12
[AfterEach] Namespaces [Serial]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:84
Feb 28 20:38:06.754: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-07b91" for this suite.
STEP: Destroying namespace "e2e-tests-nslifetest-0-3ujbm" for this suite.
Feb 28 20:38:17.270: INFO: Namespace e2e-tests-nslifetest-0-3ujbm was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-1-rig2d" for this suite.
Feb 28 20:38:17.358: INFO: Namespace e2e-tests-nslifetest-1-rig2d was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-2-yyem9" for this suite.
Feb 28 20:38:17.445: INFO: Namespace e2e-tests-nslifetest-2-yyem9 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-4-r06ht" for this suite.
Feb 28 20:38:17.531: INFO: Namespace e2e-tests-nslifetest-4-r06ht was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-6-ixeqj" for this suite.
Feb 28 20:38:17.614: INFO: Namespace e2e-tests-nslifetest-6-ixeqj was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-5-sh3eu" for this suite.
Feb 28 20:38:17.697: INFO: Namespace e2e-tests-nslifetest-5-sh3eu was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-3-ucbc9" for this suite.
Feb 28 20:38:17.781: INFO: Namespace e2e-tests-nslifetest-3-ucbc9 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-7-ufgv1" for this suite.
Feb 28 20:38:17.866: INFO: Namespace e2e-tests-nslifetest-7-ufgv1 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-8-n3wc0" for this suite.
Feb 28 20:38:17.954: INFO: Namespace e2e-tests-nslifetest-8-n3wc0 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-9-p91vh" for this suite.
Feb 28 20:38:18.038: INFO: Namespace e2e-tests-nslifetest-9-p91vh was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-10-45l7v" for this suite.
Feb 28 20:38:18.117: INFO: Namespace e2e-tests-nslifetest-10-45l7v was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-11-jeaw2" for this suite.
Feb 28 20:38:18.201: INFO: Namespace e2e-tests-nslifetest-11-jeaw2 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-12-4a2b5" for this suite.
Feb 28 20:38:18.289: INFO: Namespace e2e-tests-nslifetest-12-4a2b5 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-13-oap7q" for this suite.
Feb 28 20:38:18.368: INFO: Namespace e2e-tests-nslifetest-13-oap7q was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-14-4jjw7" for this suite.
Feb 28 20:38:18.469: INFO: Namespace e2e-tests-nslifetest-14-4jjw7 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-15-mk7x8" for this suite.
W0228 20:38:18.583534 11176 request.go:627] Throttling request took 113.756265ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-15-mk7x8
Feb 28 20:38:18.673: INFO: Namespace e2e-tests-nslifetest-15-mk7x8 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-16-39mv6" for this suite.
W0228 20:38:18.783554 11176 request.go:627] Throttling request took 110.013925ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-16-39mv6
Feb 28 20:38:18.870: INFO: Namespace e2e-tests-nslifetest-16-39mv6 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-17-ynjk9" for this suite.
W0228 20:38:18.983538 11176 request.go:627] Throttling request took 112.693823ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-17-ynjk9
Feb 28 20:38:19.065: INFO: Namespace e2e-tests-nslifetest-17-ynjk9 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-18-8tir5" for this suite.
W0228 20:38:19.183540 11176 request.go:627] Throttling request took 118.12041ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-18-8tir5
Feb 28 20:38:19.271: INFO: Namespace e2e-tests-nslifetest-18-8tir5 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-19-38vis" for this suite.
W0228 20:38:19.383559 11176 request.go:627] Throttling request took 112.277248ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-19-38vis
Feb 28 20:38:19.463: INFO: Namespace e2e-tests-nslifetest-19-38vis was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-20-767jt" for this suite.
W0228 20:38:19.583529 11176 request.go:627] Throttling request took 120.258252ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-20-767jt
Feb 28 20:38:19.666: INFO: Namespace e2e-tests-nslifetest-20-767jt was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-21-5q9m6" for this suite.
W0228 20:38:19.783584 11176 request.go:627] Throttling request took 116.820619ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-21-5q9m6
Feb 28 20:38:19.867: INFO: Namespace e2e-tests-nslifetest-21-5q9m6 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-22-ijefx" for this suite.
W0228 20:38:19.983586 11176 request.go:627] Throttling request took 116.431113ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-22-ijefx
Feb 28 20:38:20.063: INFO: Namespace e2e-tests-nslifetest-22-ijefx was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-23-h9g5i" for this suite.
W0228 20:38:20.183586 11176 request.go:627] Throttling request took 120.513238ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-23-h9g5i
Feb 28 20:38:20.262: INFO: Namespace e2e-tests-nslifetest-23-h9g5i was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-24-kp5xd" for this suite.
W0228 20:38:20.383554 11176 request.go:627] Throttling request took 120.801859ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-24-kp5xd
Feb 28 20:38:20.467: INFO: Namespace e2e-tests-nslifetest-24-kp5xd was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-25-ih94t" for this suite.
W0228 20:38:20.583533 11176 request.go:627] Throttling request took 116.394966ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-25-ih94t
Feb 28 20:38:20.664: INFO: Namespace e2e-tests-nslifetest-25-ih94t was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-26-5e2vy" for this suite.
W0228 20:38:20.783540 11176 request.go:627] Throttling request took 118.880539ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-26-5e2vy
Feb 28 20:38:20.864: INFO: Namespace e2e-tests-nslifetest-26-5e2vy was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-27-1r7v2" for this suite.
W0228 20:38:20.983536 11176 request.go:627] Throttling request took 119.058486ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-27-1r7v2
Feb 28 20:38:21.067: INFO: Namespace e2e-tests-nslifetest-27-1r7v2 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-28-y69yr" for this suite.
W0228 20:38:21.183587 11176 request.go:627] Throttling request took 116.048256ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-28-y69yr
Feb 28 20:38:21.272: INFO: Namespace e2e-tests-nslifetest-28-y69yr was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-29-mcupa" for this suite.
W0228 20:38:21.383591 11176 request.go:627] Throttling request took 111.203278ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-29-mcupa
Feb 28 20:38:21.469: INFO: Namespace e2e-tests-nslifetest-29-mcupa was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-30-ol6ab" for this suite.
W0228 20:38:21.583597 11176 request.go:627] Throttling request took 113.584827ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-30-ol6ab
Feb 28 20:38:21.669: INFO: Namespace e2e-tests-nslifetest-30-ol6ab was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-31-7q1ap" for this suite.
W0228 20:38:21.783530 11176 request.go:627] Throttling request took 113.909086ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-31-7q1ap
Feb 28 20:38:21.865: INFO: Namespace e2e-tests-nslifetest-31-7q1ap was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-32-iske8" for this suite.
W0228 20:38:21.983553 11176 request.go:627] Throttling request took 117.700144ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-32-iske8
Feb 28 20:38:22.069: INFO: Namespace e2e-tests-nslifetest-32-iske8 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-33-8ch77" for this suite.
W0228 20:38:22.183562 11176 request.go:627] Throttling request took 114.261084ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-33-8ch77
Feb 28 20:38:22.265: INFO: Namespace e2e-tests-nslifetest-33-8ch77 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-34-r6rfj" for this suite.
W0228 20:38:22.383535 11176 request.go:627] Throttling request took 118.262272ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-34-r6rfj
Feb 28 20:38:22.467: INFO: Namespace e2e-tests-nslifetest-34-r6rfj was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-35-p2u4m" for this suite.
W0228 20:38:22.583544 11176 request.go:627] Throttling request took 116.325019ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-35-p2u4m
Feb 28 20:38:22.667: INFO: Namespace e2e-tests-nslifetest-35-p2u4m was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-36-b48pb" for this suite.
W0228 20:38:22.783560 11176 request.go:627] Throttling request took 115.578009ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-36-b48pb
Feb 28 20:38:22.866: INFO: Namespace e2e-tests-nslifetest-36-b48pb was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-37-jdogo" for this suite.
W0228 20:38:22.983528 11176 request.go:627] Throttling request took 117.209188ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-37-jdogo
Feb 28 20:38:23.071: INFO: Namespace e2e-tests-nslifetest-37-jdogo was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-38-mysen" for this suite.
W0228 20:38:23.183566 11176 request.go:627] Throttling request took 112.226439ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-38-mysen
Feb 28 20:38:23.268: INFO: Namespace e2e-tests-nslifetest-38-mysen was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-39-oeqxa" for this suite.
W0228 20:38:23.383596 11176 request.go:627] Throttling request took 115.114758ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-39-oeqxa
Feb 28 20:38:23.467: INFO: Namespace e2e-tests-nslifetest-39-oeqxa was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-40-d6kmf" for this suite.
W0228 20:38:23.583588 11176 request.go:627] Throttling request took 115.709171ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-40-d6kmf
Feb 28 20:38:23.666: INFO: Namespace e2e-tests-nslifetest-40-d6kmf was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-41-6amic" for this suite.
W0228 20:38:23.783600 11176 request.go:627] Throttling request took 117.002676ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-41-6amic
Feb 28 20:38:23.863: INFO: Namespace e2e-tests-nslifetest-41-6amic was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-42-f0fyn" for this suite.
W0228 20:38:23.983536 11176 request.go:627] Throttling request took 120.03619ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-42-f0fyn
Feb 28 20:38:24.066: INFO: Namespace e2e-tests-nslifetest-42-f0fyn was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-43-nxvar" for this suite.
W0228 20:38:24.183546 11176 request.go:627] Throttling request took 117.368968ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-43-nxvar
Feb 28 20:38:24.267: INFO: Namespace e2e-tests-nslifetest-43-nxvar was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-44-v2vzb" for this suite.
W0228 20:38:24.383590 11176 request.go:627] Throttling request took 116.132891ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-44-v2vzb
Feb 28 20:38:24.463: INFO: Namespace e2e-tests-nslifetest-44-v2vzb was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-45-uda0p" for this suite.
W0228 20:38:24.583552 11176 request.go:627] Throttling request took 119.638408ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-45-uda0p
Feb 28 20:38:24.664: INFO: Namespace e2e-tests-nslifetest-45-uda0p was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-46-6arhu" for this suite.
W0228 20:38:24.783531 11176 request.go:627] Throttling request took 119.050054ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-46-6arhu
Feb 28 20:38:24.862: INFO: Namespace e2e-tests-nslifetest-46-6arhu was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-47-0sufj" for this suite.
W0228 20:38:24.983552 11176 request.go:627] Throttling request took 121.306259ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-47-0sufj
Feb 28 20:38:25.066: INFO: Namespace e2e-tests-nslifetest-47-0sufj was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-48-ngrnu" for this suite.
W0228 20:38:25.183512 11176 request.go:627] Throttling request took 117.293403ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-48-ngrnu
Feb 28 20:38:25.265: INFO: Namespace e2e-tests-nslifetest-48-ngrnu was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-49-yd6kh" for this suite.
W0228 20:38:25.383539 11176 request.go:627] Throttling request took 118.385586ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-49-yd6kh
Feb 28 20:38:25.464: INFO: Namespace e2e-tests-nslifetest-49-yd6kh was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-50-ulg90" for this suite.
W0228 20:38:25.583593 11176 request.go:627] Throttling request took 118.935705ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-50-ulg90
Feb 28 20:38:25.664: INFO: Namespace e2e-tests-nslifetest-50-ulg90 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-51-b3551" for this suite.
W0228 20:38:25.783535 11176 request.go:627] Throttling request took 118.846947ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-51-b3551
Feb 28 20:38:25.865: INFO: Namespace e2e-tests-nslifetest-51-b3551 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-52-dbg3j" for this suite.
W0228 20:38:25.983559 11176 request.go:627] Throttling request took 118.242846ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-52-dbg3j
Feb 28 20:38:26.067: INFO: Namespace e2e-tests-nslifetest-52-dbg3j was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-53-t5a3c" for this suite.
W0228 20:38:26.183537 11176 request.go:627] Throttling request took 116.358739ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-53-t5a3c
Feb 28 20:38:26.263: INFO: Namespace e2e-tests-nslifetest-53-t5a3c was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-54-z27ue" for this suite.
W0228 20:38:26.383650 11176 request.go:627] Throttling request took 120.01087ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-54-z27ue
Feb 28 20:38:26.468: INFO: Namespace e2e-tests-nslifetest-54-z27ue was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-55-sac9d" for this suite.
W0228 20:38:26.583560 11176 request.go:627] Throttling request took 114.616359ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-55-sac9d
Feb 28 20:38:26.666: INFO: Namespace e2e-tests-nslifetest-55-sac9d was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-56-sqrnk" for this suite.
W0228 20:38:26.783589 11176 request.go:627] Throttling request took 117.143956ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-56-sqrnk
Feb 28 20:38:26.864: INFO: Namespace e2e-tests-nslifetest-56-sqrnk was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-57-tutea" for this suite.
W0228 20:38:26.983557 11176 request.go:627] Throttling request took 119.309922ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-57-tutea
Feb 28 20:38:27.070: INFO: Namespace e2e-tests-nslifetest-57-tutea was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-58-gn1rl" for this suite.
W0228 20:38:27.183555 11176 request.go:627] Throttling request took 113.097904ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-58-gn1rl
Feb 28 20:38:27.268: INFO: Namespace e2e-tests-nslifetest-58-gn1rl was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-59-mo8qe" for this suite.
W0228 20:38:27.383549 11176 request.go:627] Throttling request took 114.879387ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-59-mo8qe
Feb 28 20:38:27.466: INFO: Namespace e2e-tests-nslifetest-59-mo8qe was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-60-pfg1a" for this suite.
W0228 20:38:27.583553 11176 request.go:627] Throttling request took 116.910379ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-60-pfg1a
Feb 28 20:38:27.668: INFO: Namespace e2e-tests-nslifetest-60-pfg1a was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-61-4udxp" for this suite.
W0228 20:38:27.783517 11176 request.go:627] Throttling request took 114.879834ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-61-4udxp
Feb 28 20:38:27.864: INFO: Namespace e2e-tests-nslifetest-61-4udxp was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-62-ql1pv" for this suite.
W0228 20:38:27.983529 11176 request.go:627] Throttling request took 118.916905ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-62-ql1pv
Feb 28 20:38:28.066: INFO: Namespace e2e-tests-nslifetest-62-ql1pv was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-63-wgxex" for this suite.
W0228 20:38:28.183557 11176 request.go:627] Throttling request took 117.132732ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-63-wgxex
Feb 28 20:38:28.263: INFO: Namespace e2e-tests-nslifetest-63-wgxex was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-64-k6in1" for this suite.
W0228 20:38:28.383546 11176 request.go:627] Throttling request took 119.659211ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-64-k6in1 | |
Feb 28 20:38:28.465: INFO: Namespace e2e-tests-nslifetest-64-k6in1 was already deleted | |
STEP: Destroying namespace "e2e-tests-nslifetest-65-zfk6c" for this suite. | |
W0228 20:38:28.583625 11176 request.go:627] Throttling request took 117.838029ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-65-zfk6c | |
Feb 28 20:38:28.668: INFO: Namespace e2e-tests-nslifetest-65-zfk6c was already deleted | |
STEP: Destroying namespace "e2e-tests-nslifetest-66-0uy91" for this suite. | |
W0228 20:38:28.783537 11176 request.go:627] Throttling request took 114.719122ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-66-0uy91 | |
Feb 28 20:38:28.865: INFO: Namespace e2e-tests-nslifetest-66-0uy91 was already deleted | |
STEP: Destroying namespace "e2e-tests-nslifetest-67-8r4g2" for this suite. | |
W0228 20:38:28.983534 11176 request.go:627] Throttling request took 118.336249ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-67-8r4g2 | |
Feb 28 20:38:29.069: INFO: Namespace e2e-tests-nslifetest-67-8r4g2 was already deleted | |
STEP: Destroying namespace "e2e-tests-nslifetest-68-okc6n" for this suite. | |
W0228 20:38:29.183546 11176 request.go:627] Throttling request took 114.447772ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-68-okc6n | |
Feb 28 20:38:29.264: INFO: Namespace e2e-tests-nslifetest-68-okc6n was already deleted | |
STEP: Destroying namespace "e2e-tests-nslifetest-69-6tgj7" for this suite. | |
W0228 20:38:29.383504 11176 request.go:627] Throttling request took 118.878305ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-69-6tgj7 | |
Feb 28 20:38:29.468: INFO: Namespace e2e-tests-nslifetest-69-6tgj7 was already deleted | |
STEP: Destroying namespace "e2e-tests-nslifetest-70-tit85" for this suite. | |
W0228 20:38:29.583556 11176 request.go:627] Throttling request took 114.899834ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-70-tit85 | |
STEP: Destroying namespace "e2e-tests-nslifetest-71-k1vfi" for this suite.
W0228 20:38:29.783591 11176 request.go:627] Throttling request took 118.391326ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-71-k1vfi
Feb 28 20:38:29.868: INFO: Namespace e2e-tests-nslifetest-71-k1vfi was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-72-joicx" for this suite.
W0228 20:38:29.983563 11176 request.go:627] Throttling request took 114.551727ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-72-joicx
Feb 28 20:38:30.072: INFO: Namespace e2e-tests-nslifetest-72-joicx was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-73-kzsdy" for this suite.
W0228 20:38:30.183508 11176 request.go:627] Throttling request took 111.235604ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-73-kzsdy
Feb 28 20:38:30.268: INFO: Namespace e2e-tests-nslifetest-73-kzsdy was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-74-rapip" for this suite.
W0228 20:38:30.383566 11176 request.go:627] Throttling request took 114.687844ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-74-rapip
Feb 28 20:38:30.468: INFO: Namespace e2e-tests-nslifetest-74-rapip was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-75-1s4p2" for this suite.
W0228 20:38:30.583596 11176 request.go:627] Throttling request took 115.00428ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-75-1s4p2
Feb 28 20:38:30.671: INFO: Namespace e2e-tests-nslifetest-75-1s4p2 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-76-hjvbf" for this suite.
W0228 20:38:30.783563 11176 request.go:627] Throttling request took 112.197529ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-76-hjvbf
Feb 28 20:38:30.864: INFO: Namespace e2e-tests-nslifetest-76-hjvbf was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-77-h6gra" for this suite.
W0228 20:38:30.983552 11176 request.go:627] Throttling request took 119.054367ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-77-h6gra
Feb 28 20:38:31.064: INFO: Namespace e2e-tests-nslifetest-77-h6gra was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-78-yizce" for this suite.
W0228 20:38:31.183551 11176 request.go:627] Throttling request took 119.102735ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-78-yizce
Feb 28 20:38:31.270: INFO: Namespace e2e-tests-nslifetest-78-yizce was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-79-70vux" for this suite.
W0228 20:38:31.383593 11176 request.go:627] Throttling request took 113.234901ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-79-70vux
Feb 28 20:38:31.466: INFO: Namespace e2e-tests-nslifetest-79-70vux was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-80-8gm1o" for this suite.
W0228 20:38:31.583571 11176 request.go:627] Throttling request took 116.922892ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-80-8gm1o
Feb 28 20:38:31.668: INFO: Namespace e2e-tests-nslifetest-80-8gm1o was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-81-a5nt0" for this suite.
W0228 20:38:31.783558 11176 request.go:627] Throttling request took 114.800287ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-81-a5nt0
Feb 28 20:38:31.869: INFO: Namespace e2e-tests-nslifetest-81-a5nt0 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-82-njyom" for this suite.
W0228 20:38:31.983595 11176 request.go:627] Throttling request took 114.502604ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-82-njyom
Feb 28 20:38:32.065: INFO: Namespace e2e-tests-nslifetest-82-njyom was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-83-c6nma" for this suite.
W0228 20:38:32.183539 11176 request.go:627] Throttling request took 117.909881ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-83-c6nma
Feb 28 20:38:32.269: INFO: Namespace e2e-tests-nslifetest-83-c6nma was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-84-8wrbc" for this suite.
W0228 20:38:32.383553 11176 request.go:627] Throttling request took 114.39497ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-84-8wrbc
Feb 28 20:38:32.465: INFO: Namespace e2e-tests-nslifetest-84-8wrbc was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-85-rtaz5" for this suite.
W0228 20:38:32.583588 11176 request.go:627] Throttling request took 117.769264ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-85-rtaz5
Feb 28 20:38:32.667: INFO: Namespace e2e-tests-nslifetest-85-rtaz5 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-86-y5ah8" for this suite.
W0228 20:38:32.783565 11176 request.go:627] Throttling request took 115.656117ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-86-y5ah8
Feb 28 20:38:32.866: INFO: Namespace e2e-tests-nslifetest-86-y5ah8 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-87-b1w4z" for this suite.
W0228 20:38:32.983588 11176 request.go:627] Throttling request took 117.489526ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-87-b1w4z
Feb 28 20:38:33.064: INFO: Namespace e2e-tests-nslifetest-87-b1w4z was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-88-59imz" for this suite.
W0228 20:38:33.183599 11176 request.go:627] Throttling request took 119.539474ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-88-59imz
Feb 28 20:38:33.264: INFO: Namespace e2e-tests-nslifetest-88-59imz was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-89-9tgd0" for this suite.
W0228 20:38:33.383554 11176 request.go:627] Throttling request took 119.208854ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-89-9tgd0
Feb 28 20:38:33.464: INFO: Namespace e2e-tests-nslifetest-89-9tgd0 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-90-3aoot" for this suite.
W0228 20:38:33.583510 11176 request.go:627] Throttling request took 119.075301ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-90-3aoot
Feb 28 20:38:33.664: INFO: Namespace e2e-tests-nslifetest-90-3aoot was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-91-6tejb" for this suite.
W0228 20:38:33.783548 11176 request.go:627] Throttling request took 118.650409ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-91-6tejb
Feb 28 20:38:33.866: INFO: Namespace e2e-tests-nslifetest-91-6tejb was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-92-ktyxn" for this suite.
W0228 20:38:33.983549 11176 request.go:627] Throttling request took 116.584076ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-92-ktyxn
Feb 28 20:38:34.066: INFO: Namespace e2e-tests-nslifetest-92-ktyxn was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-93-mbyp8" for this suite.
W0228 20:38:34.183545 11176 request.go:627] Throttling request took 117.212624ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-93-mbyp8
Feb 28 20:38:34.269: INFO: Namespace e2e-tests-nslifetest-93-mbyp8 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-94-a1qtx" for this suite.
W0228 20:38:34.383530 11176 request.go:627] Throttling request took 114.056047ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-94-a1qtx
Feb 28 20:38:34.468: INFO: Namespace e2e-tests-nslifetest-94-a1qtx was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-95-7b4ea" for this suite.
W0228 20:38:34.583536 11176 request.go:627] Throttling request took 115.289663ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-95-7b4ea
Feb 28 20:38:34.670: INFO: Namespace e2e-tests-nslifetest-95-7b4ea was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-96-wd3te" for this suite.
W0228 20:38:34.783564 11176 request.go:627] Throttling request took 112.912187ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-96-wd3te
Feb 28 20:38:34.863: INFO: Namespace e2e-tests-nslifetest-96-wd3te was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-97-g734h" for this suite.
W0228 20:38:34.983550 11176 request.go:627] Throttling request took 120.157915ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-97-g734h
Feb 28 20:38:35.063: INFO: Namespace e2e-tests-nslifetest-97-g734h was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-98-qnnqz" for this suite.
W0228 20:38:35.183567 11176 request.go:627] Throttling request took 120.19075ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-98-qnnqz
Feb 28 20:38:35.263: INFO: Namespace e2e-tests-nslifetest-98-qnnqz was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-99-xycg2" for this suite.
W0228 20:38:35.383550 11176 request.go:627] Throttling request took 119.778276ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-nslifetest-99-xycg2
Feb 28 20:38:35.470: INFO: Namespace e2e-tests-nslifetest-99-xycg2 was already deleted
• [SLOW TEST:135.689 seconds]
Namespaces [Serial]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/namespace.go:115
  should delete fast enough (90 percent of 100 namespaces in 150 seconds)
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/namespace.go:110
------------------------------
Kubectl client Simple pod
  should support exec through an HTTP proxy
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:438
[BeforeEach] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:83
STEP: Creating a kubernetes client
Feb 28 20:38:35.471: INFO: >>> testContext.KubeConfig: /home/spotter/.kube/config
STEP: Building a namespace api object
Feb 28 20:38:35.563: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-13ghn
Feb 28 20:38:35.650: INFO: Service account default in ns e2e-tests-kubectl-13ghn with secrets found. (87.246937ms)
STEP: Waiting for a default service account to be provisioned in namespace
Feb 28 20:38:35.650: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-13ghn
Feb 28 20:38:35.732: INFO: Service account default in ns e2e-tests-kubectl-13ghn with secrets found. (81.329963ms)
[BeforeEach] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:113
[BeforeEach] Simple pod
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:183
STEP: creating the pod from /home/spotter/gocode/src/k8s.io/y-kubernetes/test/e2e/testing-manifests/kubectl/pod-with-readiness-probe.yaml
Feb 28 20:38:35.732: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config create -f /home/spotter/gocode/src/k8s.io/y-kubernetes/test/e2e/testing-manifests/kubectl/pod-with-readiness-probe.yaml --namespace=e2e-tests-kubectl-13ghn'
Feb 28 20:38:36.503: INFO: stdout: "pod \"nginx\" created\n"
Feb 28 20:38:36.503: INFO: stderr: ""
Feb 28 20:38:36.503: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [nginx]
Feb 28 20:38:36.503: INFO: Waiting up to 5m0s for pod nginx status to be running and ready
Feb 28 20:38:36.591: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-13ghn' status to be 'running and ready'(found phase: "Pending", readiness: false) (87.249594ms elapsed)
Feb 28 20:38:38.679: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-13ghn' status to be 'running and ready'(found phase: "Pending", readiness: false) (2.175874363s elapsed)
Feb 28 20:38:40.765: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-13ghn' status to be 'running and ready'(found phase: "Pending", readiness: false) (4.261717429s elapsed)
Feb 28 20:38:42.851: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-13ghn' status to be 'running and ready'(found phase: "Pending", readiness: false) (6.34707646s elapsed)
Feb 28 20:38:44.938: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-13ghn' status to be 'running and ready'(found phase: "Running", readiness: false) (8.434055195s elapsed)
Feb 28 20:38:47.022: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [nginx]
[It] should support exec through an HTTP proxy
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:438
STEP: Finding a static kubectl for upload
STEP: Using the kubectl in /home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/386/kubectl
Feb 28 20:38:47.022: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config create -f /home/spotter/gocode/src/k8s.io/y-kubernetes/test/images/netexec/pod.yaml --namespace=e2e-tests-kubectl-13ghn'
Feb 28 20:38:47.795: INFO: stdout: "pod \"netexec\" created\n"
Feb 28 20:38:47.795: INFO: stderr: ""
Feb 28 20:38:47.795: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [netexec]
Feb 28 20:38:47.795: INFO: Waiting up to 5m0s for pod netexec status to be running and ready
Feb 28 20:38:47.879: INFO: Waiting for pod netexec in namespace 'e2e-tests-kubectl-13ghn' status to be 'running and ready'(found phase: "Pending", readiness: false) (83.978545ms elapsed)
Feb 28 20:38:49.963: INFO: Waiting for pod netexec in namespace 'e2e-tests-kubectl-13ghn' status to be 'running and ready'(found phase: "Pending", readiness: false) (2.167789022s elapsed)
Feb 28 20:38:52.052: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [netexec]
STEP: uploading kubeconfig to netexec
STEP: uploading kubectl to netexec
STEP: Running kubectl in netexec via an HTTP proxy using https_proxy
Feb 28 20:39:02.183: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config create -f /home/spotter/gocode/src/k8s.io/y-kubernetes/test/images/goproxy/pod.yaml --namespace=e2e-tests-kubectl-13ghn'
Feb 28 20:39:02.949: INFO: stdout: "pod \"goproxy\" created\n"
Feb 28 20:39:02.949: INFO: stderr: ""
Feb 28 20:39:02.949: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [goproxy]
Feb 28 20:39:02.949: INFO: Waiting up to 5m0s for pod goproxy status to be running and ready
Feb 28 20:39:03.037: INFO: Waiting for pod goproxy in namespace 'e2e-tests-kubectl-13ghn' status to be 'running and ready'(found phase: "Pending", readiness: false) (88.608246ms elapsed)
Feb 28 20:39:05.125: INFO: Waiting for pod goproxy in namespace 'e2e-tests-kubectl-13ghn' status to be 'running and ready'(found phase: "Pending", readiness: false) (2.176623058s elapsed)
Feb 28 20:39:07.212: INFO: Waiting for pod goproxy in namespace 'e2e-tests-kubectl-13ghn' status to be 'running and ready'(found phase: "Pending", readiness: false) (4.263029176s elapsed)
Feb 28 20:39:09.301: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [goproxy]
Feb 28 20:39:09.391: INFO: About to remote exec: https_proxy=http://10.245.2.3:8080 ./uploads/upload310603448 --kubeconfig=/uploads/upload301175574 --server=https://104.196.32.11:443 --namespace=e2e-tests-kubectl-13ghn exec nginx echo running in container
Feb 28 20:39:10.630: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config log goproxy --namespace=e2e-tests-kubectl-13ghn'
Feb 28 20:39:11.618: INFO: stdout: "2016/02/29 04:39:09 [001] INFO: Running 0 CONNECT handlers\n2016/02/29 04:39:09 [001] INFO: Accepting CONNECT to 104.196.32.11:443\n2016/02/29 04:39:10 [002] INFO: Running 0 CONNECT handlers\n2016/02/29 04:39:10 [002] INFO: Accepting CONNECT to 104.196.32.11:443\n2016/02/29 04:39:10 [002] WARN: Error copying to client: read tcp 10.245.2.3:44470->104.196.32.11:443: read tcp 10.245.2.3:8080->10.240.0.5:54630: read: connection reset by peer\n"
Feb 28 20:39:11.618: INFO: stderr: ""
STEP: using delete to clean up resources
Feb 28 20:39:11.618: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config delete --grace-period=0 -f /home/spotter/gocode/src/k8s.io/y-kubernetes/test/images/goproxy/pod.yaml --namespace=e2e-tests-kubectl-13ghn'
Feb 28 20:39:12.362: INFO: stdout: "pod \"goproxy\" deleted\n"
Feb 28 20:39:12.362: INFO: stderr: ""
Feb 28 20:39:12.363: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get rc,svc -l name=goproxy --no-headers --namespace=e2e-tests-kubectl-13ghn'
Feb 28 20:39:13.089: INFO: stdout: ""
Feb 28 20:39:13.089: INFO: stderr: ""
Feb 28 20:39:13.089: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -l name=goproxy --namespace=e2e-tests-kubectl-13ghn -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 28 20:39:13.737: INFO: stdout: ""
Feb 28 20:39:13.737: INFO: stderr: ""
STEP: Running kubectl in netexec via an HTTP proxy using HTTPS_PROXY
Feb 28 20:39:13.737: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config create -f /home/spotter/gocode/src/k8s.io/y-kubernetes/test/images/goproxy/pod.yaml --namespace=e2e-tests-kubectl-13ghn'
Feb 28 20:39:14.511: INFO: stdout: "pod \"goproxy\" created\n"
Feb 28 20:39:14.511: INFO: stderr: ""
Feb 28 20:39:14.511: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [goproxy]
Feb 28 20:39:14.511: INFO: Waiting up to 5m0s for pod goproxy status to be running and ready
Feb 28 20:39:14.591: INFO: Waiting for pod goproxy in namespace 'e2e-tests-kubectl-13ghn' status to be 'running and ready'(found phase: "Pending", readiness: false) (80.028719ms elapsed)
Feb 28 20:39:16.676: INFO: Waiting for pod goproxy in namespace 'e2e-tests-kubectl-13ghn' status to be 'running and ready'(found phase: "Pending", readiness: false) (2.165682799s elapsed)
Feb 28 20:39:18.758: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [goproxy]
Feb 28 20:39:18.841: INFO: About to remote exec: HTTPS_PROXY=http://10.245.2.4:8080 ./uploads/upload310603448 --kubeconfig=/uploads/upload301175574 --server=https://104.196.32.11:443 --namespace=e2e-tests-kubectl-13ghn exec nginx echo running in container
Feb 28 20:39:20.024: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config log goproxy --namespace=e2e-tests-kubectl-13ghn'
Feb 28 20:39:20.932: INFO: stdout: "2016/02/29 04:39:09 [001] INFO: Running 0 CONNECT handlers\n2016/02/29 04:39:09 [001] INFO: Accepting CONNECT to 104.196.32.11:443\n2016/02/29 04:39:10 [002] INFO: Running 0 CONNECT handlers\n2016/02/29 04:39:10 [002] INFO: Accepting CONNECT to 104.196.32.11:443\n2016/02/29 04:39:10 [002] WARN: Error copying to client: read tcp 10.245.2.3:44470->104.196.32.11:443: read tcp 10.245.2.3:8080->10.240.0.5:54630: read: connection reset by peer\n2016/02/29 04:39:19 [001] INFO: Running 0 CONNECT handlers\n2016/02/29 04:39:19 [001] INFO: Accepting CONNECT to 104.196.32.11:443\n2016/02/29 04:39:19 [002] INFO: Running 0 CONNECT handlers\n2016/02/29 04:39:19 [002] INFO: Accepting CONNECT to 104.196.32.11:443\n"
Feb 28 20:39:20.932: INFO: stderr: ""
STEP: using delete to clean up resources
Feb 28 20:39:20.932: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config delete --grace-period=0 -f /home/spotter/gocode/src/k8s.io/y-kubernetes/test/images/goproxy/pod.yaml --namespace=e2e-tests-kubectl-13ghn'
Feb 28 20:39:21.683: INFO: stdout: "pod \"goproxy\" deleted\n"
Feb 28 20:39:21.683: INFO: stderr: ""
Feb 28 20:39:21.683: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get rc,svc -l name=goproxy --no-headers --namespace=e2e-tests-kubectl-13ghn'
Feb 28 20:39:22.412: INFO: stdout: ""
Feb 28 20:39:22.412: INFO: stderr: ""
Feb 28 20:39:22.412: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -l name=goproxy --namespace=e2e-tests-kubectl-13ghn -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 28 20:39:23.059: INFO: stdout: ""
Feb 28 20:39:23.059: INFO: stderr: ""
STEP: using delete to clean up resources
Feb 28 20:39:23.059: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config delete --grace-period=0 -f /home/spotter/gocode/src/k8s.io/y-kubernetes/test/images/netexec/pod.yaml --namespace=e2e-tests-kubectl-13ghn'
Feb 28 20:39:23.857: INFO: stdout: "pod \"netexec\" deleted\n"
Feb 28 20:39:23.857: INFO: stderr: ""
Feb 28 20:39:23.857: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get rc,svc -l name=netexec --no-headers --namespace=e2e-tests-kubectl-13ghn'
Feb 28 20:39:24.592: INFO: stdout: ""
Feb 28 20:39:24.592: INFO: stderr: ""
Feb 28 20:39:24.592: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -l name=netexec --namespace=e2e-tests-kubectl-13ghn -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 28 20:39:25.243: INFO: stdout: ""
Feb 28 20:39:25.243: INFO: stderr: ""
[AfterEach] Simple pod
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:186
STEP: using delete to clean up resources
Feb 28 20:39:25.243: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config delete --grace-period=0 -f /home/spotter/gocode/src/k8s.io/y-kubernetes/test/e2e/testing-manifests/kubectl/pod-with-readiness-probe.yaml --namespace=e2e-tests-kubectl-13ghn'
Feb 28 20:39:25.978: INFO: stdout: "pod \"nginx\" deleted\n"
Feb 28 20:39:25.978: INFO: stderr: ""
Feb 28 20:39:25.978: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-13ghn'
Feb 28 20:39:26.723: INFO: stdout: ""
Feb 28 20:39:26.723: INFO: stderr: ""
Feb 28 20:39:26.723: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-13ghn -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 28 20:39:27.375: INFO: stdout: ""
Feb 28 20:39:27.375: INFO: stderr: ""
[AfterEach] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:84
Feb 28 20:39:27.375: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-13ghn" for this suite.
• [SLOW TEST:57.330 seconds]
Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1082
  Simple pod
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:508
    should support exec through an HTTP proxy
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:438
------------------------------
Deployment
  deployment should label adopted RSs and pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:71
[BeforeEach] Deployment
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:83
STEP: Creating a kubernetes client
Feb 28 20:39:32.800: INFO: >>> testContext.KubeConfig: /home/spotter/.kube/config
STEP: Building a namespace api object
Feb 28 20:39:32.893: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-deployment-n5mt0
Feb 28 20:39:32.975: INFO: Service account default in ns e2e-tests-deployment-n5mt0 had 0 secrets, ignoring for 2s: <nil>
Feb 28 20:39:35.061: INFO: Service account default in ns e2e-tests-deployment-n5mt0 with secrets found. (2.16780449s)
STEP: Waiting for a default service account to be provisioned in namespace
Feb 28 20:39:35.061: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-deployment-n5mt0
Feb 28 20:39:35.145: INFO: Service account default in ns e2e-tests-deployment-n5mt0 with secrets found. (83.580777ms)
[It] deployment should label adopted RSs and pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:71
Feb 28 20:39:35.316: INFO: Pod name nginx: Found 3 pods out of 3
STEP: ensuring each pod is running
Feb 28 20:39:35.316: INFO: Waiting up to 5m0s for pod nginx-controller-dpwa9 status to be running
Feb 28 20:39:35.401: INFO: Waiting for pod nginx-controller-dpwa9 in namespace 'e2e-tests-deployment-n5mt0' status to be 'running'(found phase: "Pending", readiness: false) (84.630342ms elapsed)
Feb 28 20:39:37.485: INFO: Waiting for pod nginx-controller-dpwa9 in namespace 'e2e-tests-deployment-n5mt0' status to be 'running'(found phase: "Pending", readiness: false) (2.16889173s elapsed)
Feb 28 20:39:39.576: INFO: Found pod 'nginx-controller-dpwa9' on node 'spotter-kube-rkt-minion-8b1u'
Feb 28 20:39:39.576: INFO: Waiting up to 5m0s for pod nginx-controller-hmf8m status to be running
Feb 28 20:39:39.659: INFO: Waiting for pod nginx-controller-hmf8m in namespace 'e2e-tests-deployment-n5mt0' status to be 'running'(found phase: "Pending", readiness: false) (83.120309ms elapsed)
Feb 28 20:39:41.744: INFO: Waiting for pod nginx-controller-hmf8m in namespace 'e2e-tests-deployment-n5mt0' status to be 'running'(found phase: "Pending", readiness: false) (2.168513788s elapsed)
Feb 28 20:39:43.830: INFO: Waiting for pod nginx-controller-hmf8m in namespace 'e2e-tests-deployment-n5mt0' status to be 'running'(found phase: "Pending", readiness: false) (4.254524521s elapsed)
Feb 28 20:39:45.917: INFO: Waiting for pod nginx-controller-hmf8m in namespace 'e2e-tests-deployment-n5mt0' status to be 'running'(found phase: "Pending", readiness: false) (6.340723749s elapsed)
Feb 28 20:39:48.010: INFO: Waiting for pod nginx-controller-hmf8m in namespace 'e2e-tests-deployment-n5mt0' status to be 'running'(found phase: "Pending", readiness: false) (8.434530502s elapsed)
Feb 28 20:39:50.090: INFO: Waiting for pod nginx-controller-hmf8m in namespace 'e2e-tests-deployment-n5mt0' status to be 'running'(found phase: "Pending", readiness: false) (10.513960861s elapsed)
Feb 28 20:39:52.183: INFO: Waiting for pod nginx-controller-hmf8m in namespace 'e2e-tests-deployment-n5mt0' status to be 'running'(found phase: "Pending", readiness: false) (12.607181093s elapsed)
Feb 28 20:39:54.268: INFO: Waiting for pod nginx-controller-hmf8m in namespace 'e2e-tests-deployment-n5mt0' status to be 'running'(found phase: "Pending", readiness: false) (14.692481962s elapsed)
Feb 28 20:39:56.360: INFO: Waiting for pod nginx-controller-hmf8m in namespace 'e2e-tests-deployment-n5mt0' status to be 'running'(found phase: "Pending", readiness: false) (16.783751638s elapsed)
Feb 28 20:39:58.454: INFO: Found pod 'nginx-controller-hmf8m' on node 'spotter-kube-rkt-minion-yii0'
Feb 28 20:39:58.454: INFO: Waiting up to 5m0s for pod nginx-controller-n76b4 status to be running | |
Feb 28 20:39:58.549: INFO: Found pod 'nginx-controller-n76b4' on node 'spotter-kube-rkt-minion-yo39' | |
STEP: trying to dial each unique pod | |
Feb 28 20:39:58.885: INFO: Controller nginx: Got non-empty result from replica 1 [nginx-controller-dpwa9]: "<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\n body {\n width: 35em;\n margin: 0 auto;\n font-family: Tahoma, Verdana, Arial, sans-serif;\n }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href=\"http://nginx.org/\">nginx.org</a>.<br/>\nCommercial support is available at\n<a href=\"http://nginx.com/\">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n", 1 of 3 required successes so far | |
Feb 28 20:40:14.167: INFO: Controller nginx: Got non-empty result from replica 2 [nginx-controller-hmf8m]: "<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\n body {\n width: 35em;\n margin: 0 auto;\n font-family: Tahoma, Verdana, Arial, sans-serif;\n }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href=\"http://nginx.org/\">nginx.org</a>.<br/>\nCommercial support is available at\n<a href=\"http://nginx.com/\">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n", 2 of 3 required successes so far | |
Feb 28 20:40:14.416: INFO: Controller nginx: Got non-empty result from replica 3 [nginx-controller-n76b4]: "<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\n body {\n width: 35em;\n margin: 0 auto;\n font-family: Tahoma, Verdana, Arial, sans-serif;\n }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href=\"http://nginx.org/\">nginx.org</a>.<br/>\nCommercial support is available at\n<a href=\"http://nginx.com/\">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n", 3 of 3 required successes so far | |
Feb 28 20:40:14.416: INFO: Creating deployment nginx-deployment | |
Feb 28 20:40:17.866: INFO: deleting deployment nginx-deployment | |
[AfterEach] Deployment | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:84 | |
Feb 28 20:40:18.214: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-deployment-n5mt0" for this suite. | |
• [SLOW TEST:50.842 seconds] | |
Deployment | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:72 | |
deployment should label adopted RSs and pods | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:71 | |
------------------------------ | |
S | |
------------------------------ | |
KubeProxy | |
should test kube-proxy [Slow] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubeproxy.go:101 | |
[BeforeEach] KubeProxy | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:83 | |
STEP: Creating a kubernetes client | |
Feb 28 20:40:23.642: INFO: >>> testContext.KubeConfig: /home/spotter/.kube/config | |
STEP: Building a namespace api object | |
Feb 28 20:40:23.732: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-e2e-kubeproxy-on0a5 | |
Feb 28 20:40:23.819: INFO: Service account default in ns e2e-tests-e2e-kubeproxy-on0a5 with secrets found. (86.585533ms) | |
STEP: Waiting for a default service account to be provisioned in namespace | |
Feb 28 20:40:23.819: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-e2e-kubeproxy-on0a5 | |
Feb 28 20:40:23.906: INFO: Service account default in ns e2e-tests-e2e-kubeproxy-on0a5 with secrets found. (87.712659ms) | |
[It] should test kube-proxy [Slow] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubeproxy.go:101 | |
STEP: cleaning up any pre-existing namespaces used by this test | |
STEP: Setting up for the tests | |
STEP: creating a selector | |
STEP: Getting node addresses | |
STEP: Creating the service pods in kubernetes | |
Feb 28 20:40:24.442: INFO: Waiting up to 5m0s for pod netserver-0 status to be running | |
Feb 28 20:40:24.525: INFO: Waiting for pod netserver-0 in namespace 'e2e-tests-e2e-kubeproxy-on0a5' status to be 'running'(found phase: "Pending", readiness: false) (83.114686ms elapsed) | |
Feb 28 20:40:26.605: INFO: Waiting for pod netserver-0 in namespace 'e2e-tests-e2e-kubeproxy-on0a5' status to be 'running'(found phase: "Pending", readiness: false) (2.163870374s elapsed) | |
Feb 28 20:40:28.692: INFO: Found pod 'netserver-0' on node 'spotter-kube-rkt-minion-8b1u' | |
Feb 28 20:40:28.783: INFO: Waiting up to 5m0s for pod netserver-1 status to be running | |
Feb 28 20:40:28.864: INFO: Waiting for pod netserver-1 in namespace 'e2e-tests-e2e-kubeproxy-on0a5' status to be 'running'(found phase: "Pending", readiness: false) (81.263855ms elapsed) | |
Feb 28 20:40:30.957: INFO: Waiting for pod netserver-1 in namespace 'e2e-tests-e2e-kubeproxy-on0a5' status to be 'running'(found phase: "Pending", readiness: false) (2.17404808s elapsed) | |
Feb 28 20:40:33.038: INFO: Waiting for pod netserver-1 in namespace 'e2e-tests-e2e-kubeproxy-on0a5' status to be 'running'(found phase: "Pending", readiness: false) (4.255037195s elapsed) | |
Feb 28 20:40:35.118: INFO: Found pod 'netserver-1' on node 'spotter-kube-rkt-minion-yii0' | |
Feb 28 20:40:35.200: INFO: Waiting up to 5m0s for pod netserver-2 status to be running | |
Feb 28 20:40:35.286: INFO: Found pod 'netserver-2' on node 'spotter-kube-rkt-minion-yo39' | |
STEP: Creating the service on top of the pods in kubernetes | |
Feb 28 20:40:35.560: INFO: Service node-port-service in namespace e2e-tests-e2e-kubeproxy-on0a5 found. | |
STEP: Creating test pods | |
Feb 28 20:40:35.815: INFO: Waiting up to 5m0s for pod test-container-pod status to be running | |
Feb 28 20:40:35.899: INFO: Waiting for pod test-container-pod in namespace 'e2e-tests-e2e-kubeproxy-on0a5' status to be 'running'(found phase: "Pending", readiness: false) (84.500435ms elapsed) | |
Feb 28 20:40:37.981: INFO: Waiting for pod test-container-pod in namespace 'e2e-tests-e2e-kubeproxy-on0a5' status to be 'running'(found phase: "Pending", readiness: false) (2.166000219s elapsed) | |
Feb 28 20:40:40.073: INFO: Found pod 'test-container-pod' on node 'spotter-kube-rkt-minion-8b1u' | |
Feb 28 20:40:40.073: INFO: Waiting up to 5m0s for pod host-test-container-pod status to be running | |
Feb 28 20:40:40.161: INFO: Waiting for pod host-test-container-pod in namespace 'e2e-tests-e2e-kubeproxy-on0a5' status to be 'running'(found phase: "Pending", readiness: false) (88.088187ms elapsed) | |
Feb 28 20:40:42.242: INFO: Waiting for pod host-test-container-pod in namespace 'e2e-tests-e2e-kubeproxy-on0a5' status to be 'running'(found phase: "Pending", readiness: false) (2.169688109s elapsed) | |
Feb 28 20:40:44.326: INFO: Found pod 'host-test-container-pod' on node 'spotter-kube-rkt-minion-yo39' | |
STEP: TODO: Need to add hit externalIPs test | |
STEP: Hit Test with All Endpoints | |
STEP: Hitting endpoints from host and container | |
STEP: dialing(udp) endpointPodIP:endpointUdpPort from node1 | |
STEP: Dialing from node. command:for i in $(seq 1 5); do echo 'hostName' | timeout -t 3 nc -w 1 -u 10.245.1.6 8081; echo; done | grep -v '^\s*$' |sort | uniq -c | wc -l | |
Feb 28 20:40:44.493: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config exec --namespace=e2e-tests-e2e-kubeproxy-on0a5 host-test-container-pod -- /bin/sh -c for i in $(seq 1 5); do echo 'hostName' | timeout -t 3 nc -w 1 -u 10.245.1.6 8081; echo; done | grep -v '^\s*$' |sort | uniq -c | wc -l' | |
Feb 28 20:40:51.133: INFO: stdout: "1\n" | |
Feb 28 20:40:51.133: INFO: stderr: "" | |
STEP: dialing(http) endpointPodIP:endpointHttpPort from node1 | |
STEP: Dialing from node. command:for i in $(seq 1 5); do curl -s --connect-timeout 1 http://10.245.1.6:8080/hostName; echo; done | grep -v '^\s*$' |sort | uniq -c | wc -l | |
Feb 28 20:40:51.134: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config exec --namespace=e2e-tests-e2e-kubeproxy-on0a5 host-test-container-pod -- /bin/sh -c for i in $(seq 1 5); do curl -s --connect-timeout 1 http://10.245.1.6:8080/hostName; echo; done | grep -v '^\s*$' |sort | uniq -c | wc -l' | |
Feb 28 20:40:52.803: INFO: stdout: "1\n" | |
Feb 28 20:40:52.803: INFO: stderr: "" | |
STEP: dialing(udp) endpointPodIP:endpointUdpPort from test container | |
STEP: Dialing from container. Running command:curl -q 'http://10.245.1.7:8080/dial?request=hostName&protocol=udp&host=10.245.1.6&port=8081&tries=5' | |
Feb 28 20:40:52.803: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config exec --namespace=e2e-tests-e2e-kubeproxy-on0a5 host-test-container-pod -- /bin/sh -c curl -q 'http://10.245.1.7:8080/dial?request=hostName&protocol=udp&host=10.245.1.6&port=8081&tries=5'' | |
Feb 28 20:40:54.440: INFO: stdout: "{\"responses\":[\"netserver-0\",\"netserver-0\",\"netserver-0\",\"netserver-0\",\"netserver-0\"]}" | |
Feb 28 20:40:54.440: INFO: stderr: " % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n\r 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\r100 85 100 85 0 0 23862 0 --:--:-- --:--:-- --:--:-- 28333\n" | |
STEP: dialing(http) endpointPodIP:endpointHttpPort from test container | |
STEP: Dialing from container. Running command:curl -q 'http://10.245.1.7:8080/dial?request=hostName&protocol=http&host=10.245.1.6&port=8080&tries=5' | |
Feb 28 20:40:54.441: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config exec --namespace=e2e-tests-e2e-kubeproxy-on0a5 host-test-container-pod -- /bin/sh -c curl -q 'http://10.245.1.7:8080/dial?request=hostName&protocol=http&host=10.245.1.6&port=8080&tries=5'' | |
Feb 28 20:40:56.052: INFO: stdout: "{\"responses\":[\"netserver-0\",\"netserver-0\",\"netserver-0\",\"netserver-0\",\"netserver-0\"]}" | |
Feb 28 20:40:56.052: INFO: stderr: " % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n\r 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\r100 85 100 85 0 0 20930 0 --:--:-- --:--:-- --:--:-- 28333\n" | |
STEP: dialing(udp) endpointPodIP:endpointUdpPort from node1 | |
STEP: Dialing from node. command:for i in $(seq 1 5); do echo 'hostName' | timeout -t 3 nc -w 1 -u 10.245.3.6 8081; echo; done | grep -v '^\s*$' |sort | uniq -c | wc -l | |
Feb 28 20:40:56.052: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config exec --namespace=e2e-tests-e2e-kubeproxy-on0a5 host-test-container-pod -- /bin/sh -c for i in $(seq 1 5); do echo 'hostName' | timeout -t 3 nc -w 1 -u 10.245.3.6 8081; echo; done | grep -v '^\s*$' |sort | uniq -c | wc -l' | |
Feb 28 20:41:02.671: INFO: stdout: "1\n" | |
Feb 28 20:41:02.671: INFO: stderr: "" | |
STEP: dialing(http) endpointPodIP:endpointHttpPort from node1 | |
STEP: Dialing from node. command:for i in $(seq 1 5); do curl -s --connect-timeout 1 http://10.245.3.6:8080/hostName; echo; done | grep -v '^\s*$' |sort | uniq -c | wc -l | |
Feb 28 20:41:02.671: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config exec --namespace=e2e-tests-e2e-kubeproxy-on0a5 host-test-container-pod -- /bin/sh -c for i in $(seq 1 5); do curl -s --connect-timeout 1 http://10.245.3.6:8080/hostName; echo; done | grep -v '^\s*$' |sort | uniq -c | wc -l' | |
Feb 28 20:41:04.337: INFO: stdout: "1\n" | |
Feb 28 20:41:04.337: INFO: stderr: "" | |
STEP: dialing(udp) endpointPodIP:endpointUdpPort from test container | |
STEP: Dialing from container. Running command:curl -q 'http://10.245.1.7:8080/dial?request=hostName&protocol=udp&host=10.245.3.6&port=8081&tries=5' | |
Feb 28 20:41:04.337: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config exec --namespace=e2e-tests-e2e-kubeproxy-on0a5 host-test-container-pod -- /bin/sh -c curl -q 'http://10.245.1.7:8080/dial?request=hostName&protocol=udp&host=10.245.3.6&port=8081&tries=5'' | |
Feb 28 20:41:05.983: INFO: stdout: "{\"responses\":[\"netserver-1\",\"netserver-1\",\"netserver-1\",\"netserver-1\",\"netserver-1\"]}" | |
Feb 28 20:41:05.983: INFO: stderr: " % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n\r 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\r100 85 100 85 0 0 15170 0 --:--:-- --:--:-- --:--:-- 17000\n" | |
STEP: dialing(http) endpointPodIP:endpointHttpPort from test container | |
STEP: Dialing from container. Running command:curl -q 'http://10.245.1.7:8080/dial?request=hostName&protocol=http&host=10.245.3.6&port=8080&tries=5' | |
Feb 28 20:41:05.984: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config exec --namespace=e2e-tests-e2e-kubeproxy-on0a5 host-test-container-pod -- /bin/sh -c curl -q 'http://10.245.1.7:8080/dial?request=hostName&protocol=http&host=10.245.3.6&port=8080&tries=5'' | |
Feb 28 20:41:07.619: INFO: stdout: "{\"responses\":[\"netserver-1\",\"netserver-1\",\"netserver-1\",\"netserver-1\",\"netserver-1\"]}" | |
Feb 28 20:41:07.619: INFO: stderr: " % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n\r 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\r100 85 100 85 0 0 11909 0 --:--:-- --:--:-- --:--:-- 14166\n" | |
STEP: dialing(udp) endpointPodIP:endpointUdpPort from node1 | |
STEP: Dialing from node. command:for i in $(seq 1 5); do echo 'hostName' | timeout -t 3 nc -w 1 -u 10.245.2.6 8081; echo; done | grep -v '^\s*$' |sort | uniq -c | wc -l | |
Feb 28 20:41:07.619: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config exec --namespace=e2e-tests-e2e-kubeproxy-on0a5 host-test-container-pod -- /bin/sh -c for i in $(seq 1 5); do echo 'hostName' | timeout -t 3 nc -w 1 -u 10.245.2.6 8081; echo; done | grep -v '^\s*$' |sort | uniq -c | wc -l' | |
Feb 28 20:41:14.235: INFO: stdout: "1\n" | |
Feb 28 20:41:14.235: INFO: stderr: "" | |
STEP: dialing(http) endpointPodIP:endpointHttpPort from node1 | |
STEP: Dialing from node. command:for i in $(seq 1 5); do curl -s --connect-timeout 1 http://10.245.2.6:8080/hostName; echo; done | grep -v '^\s*$' |sort | uniq -c | wc -l | |
Feb 28 20:41:14.235: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config exec --namespace=e2e-tests-e2e-kubeproxy-on0a5 host-test-container-pod -- /bin/sh -c for i in $(seq 1 5); do curl -s --connect-timeout 1 http://10.245.2.6:8080/hostName; echo; done | grep -v '^\s*$' |sort | uniq -c | wc -l' | |
Feb 28 20:41:15.870: INFO: stdout: "1\n" | |
Feb 28 20:41:15.870: INFO: stderr: "" | |
STEP: dialing(udp) endpointPodIP:endpointUdpPort from test container | |
STEP: Dialing from container. Running command:curl -q 'http://10.245.1.7:8080/dial?request=hostName&protocol=udp&host=10.245.2.6&port=8081&tries=5' | |
Feb 28 20:41:15.870: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config exec --namespace=e2e-tests-e2e-kubeproxy-on0a5 host-test-container-pod -- /bin/sh -c curl -q 'http://10.245.1.7:8080/dial?request=hostName&protocol=udp&host=10.245.2.6&port=8081&tries=5'' | |
Feb 28 20:41:17.507: INFO: stdout: "{\"responses\":[\"netserver-2\",\"netserver-2\",\"netserver-2\",\"netserver-2\",\"netserver-2\"]}" | |
Feb 28 20:41:17.507: INFO: stderr: " % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n\r 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\r100 85 100 85 0 0 10211 0 --:--:-- --:--:-- --:--:-- 10625\n" | |
STEP: dialing(http) endpointPodIP:endpointHttpPort from test container | |
STEP: Dialing from container. Running command:curl -q 'http://10.245.1.7:8080/dial?request=hostName&protocol=http&host=10.245.2.6&port=8080&tries=5' | |
Feb 28 20:41:17.507: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config exec --namespace=e2e-tests-e2e-kubeproxy-on0a5 host-test-container-pod -- /bin/sh -c curl -q 'http://10.245.1.7:8080/dial?request=hostName&protocol=http&host=10.245.2.6&port=8080&tries=5'' | |
Feb 28 20:41:19.172: INFO: stdout: "{\"responses\":[\"netserver-2\",\"netserver-2\",\"netserver-2\",\"netserver-2\",\"netserver-2\"]}" | |
Feb 28 20:41:19.172: INFO: stderr: " % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n\r 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\r100 85 100 85 0 0 11971 0 --:--:-- --:--:-- --:--:-- 14166\n" | |
STEP: Hitting clusterIP from host and container | |
STEP: dialing(udp) node1 --> clusterIP:clusterUdpPort | |
STEP: Dialing from node. command:for i in $(seq 1 24); do echo 'hostName' | timeout -t 3 nc -w 1 -u 10.0.20.224 90; echo; done | grep -v '^\s*$' |sort | uniq -c | wc -l | |
Feb 28 20:41:19.172: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config exec --namespace=e2e-tests-e2e-kubeproxy-on0a5 host-test-container-pod -- /bin/sh -c for i in $(seq 1 24); do echo 'hostName' | timeout -t 3 nc -w 1 -u 10.0.20.224 90; echo; done | grep -v '^\s*$' |sort | uniq -c | wc -l' | |
Feb 28 20:41:44.829: INFO: stdout: "3\n" | |
Feb 28 20:41:44.829: INFO: stderr: "" | |
STEP: dialing(http) node1 --> clusterIP:clusterHttpPort | |
STEP: Dialing from node. command:for i in $(seq 1 24); do curl -s --connect-timeout 1 http://10.0.20.224:80/hostName; echo; done | grep -v '^\s*$' |sort | uniq -c | wc -l | |
Feb 28 20:41:44.829: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config exec --namespace=e2e-tests-e2e-kubeproxy-on0a5 host-test-container-pod -- /bin/sh -c for i in $(seq 1 24); do curl -s --connect-timeout 1 http://10.0.20.224:80/hostName; echo; done | grep -v '^\s*$' |sort | uniq -c | wc -l' | |
Feb 28 20:41:46.578: INFO: stdout: "3\n" | |
Feb 28 20:41:46.578: INFO: stderr: "" | |
STEP: dialing(udp) test container --> clusterIP:clusterUdpPort | |
STEP: Dialing from container. Running command:curl -q 'http://10.245.1.7:8080/dial?request=hostName&protocol=udp&host=10.0.20.224&port=90&tries=24' | |
Feb 28 20:41:46.578: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config exec --namespace=e2e-tests-e2e-kubeproxy-on0a5 host-test-container-pod -- /bin/sh -c curl -q 'http://10.245.1.7:8080/dial?request=hostName&protocol=udp&host=10.0.20.224&port=90&tries=24'' | |
Feb 28 20:42:28.279: INFO: stdout: "{\"errors\":[\"reading from udp connection failed. err:'read udp 10.0.20.224:90: i/o timeout'\",\"reading from udp connection failed. err:'read udp 10.0.20.224:90: i/o timeout'\",\"reading from udp connection failed. err:'read udp 10.0.20.224:90: i/o timeout'\",\"reading from udp connection failed. err:'read udp 10.0.20.224:90: i/o timeout'\",\"reading from udp connection failed. err:'read udp 10.0.20.224:90: i/o timeout'\",\"reading from udp connection failed. err:'read udp 10.0.20.224:90: i/o timeout'\",\"reading from udp connection failed. err:'read udp 10.0.20.224:90: i/o timeout'\",\"reading from udp connection failed. err:'read udp 10.0.20.224:90: i/o timeout'\"],\"responses\":[\"netserver-1\",\"netserver-2\",\"netserver-2\",\"netserver-1\",\"netserver-1\",\"netserver-1\",\"netserver-2\",\"netserver-2\",\"netserver-2\",\"netserver-1\",\"netserver-2\",\"netserver-2\",\"netserver-1\",\"netserver-1\",\"netserver-2\",\"netserver-2\"]}" | |
Feb 28 20:42:28.279: INFO: stderr: " % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n\r 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:02 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:03 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:04 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:05 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:06 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:07 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:08 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:09 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:10 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:11 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:12 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:13 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:14 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:15 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:16 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:17 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:18 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:19 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:20 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:21 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:22 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:23 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:24 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:25 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:26 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:27 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:28 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:29 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:30 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:31 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:32 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:33 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:34 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:35 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:36 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:37 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:38 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:39 --:--:-- 0\r100 898 100 898 0 0 22 0 0:00:40 0:00:40 --:--:-- 187\r100 898 100 898 0 0 22 0 0:00:40 0:00:40 --:--:-- 237\n" | |
[AfterEach] KubeProxy | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:84 | |
STEP: Collecting events from namespace "e2e-tests-e2e-kubeproxy-on0a5". | |
Feb 28 20:42:28.455: INFO: At 2016-02-28 20:40:24 -0800 PST - event for netserver-0: {kubelet spotter-kube-rkt-minion-8b1u} Pulled: Container image "gcr.io/google_containers/netexec:1.4" already present on machine | |
Feb 28 20:42:28.455: INFO: At 2016-02-28 20:40:24 -0800 PST - event for netserver-1: {kubelet spotter-kube-rkt-minion-yii0} Pulling: pulling image "gcr.io/google_containers/netexec:1.4" | |
Feb 28 20:42:28.455: INFO: At 2016-02-28 20:40:24 -0800 PST - event for netserver-2: {kubelet spotter-kube-rkt-minion-yo39} Pulled: Container image "gcr.io/google_containers/netexec:1.4" already present on machine | |
Feb 28 20:42:28.455: INFO: At 2016-02-28 20:40:27 -0800 PST - event for netserver-0: {kubelet spotter-kube-rkt-minion-8b1u} Started: Started with rkt id 7f752d4e | |
Feb 28 20:42:28.455: INFO: At 2016-02-28 20:40:27 -0800 PST - event for netserver-0: {kubelet spotter-kube-rkt-minion-8b1u} Created: Created with rkt id 7f752d4e | |
Feb 28 20:42:28.455: INFO: At 2016-02-28 20:40:27 -0800 PST - event for netserver-2: {kubelet spotter-kube-rkt-minion-yo39} Started: Started with rkt id 98c1f25a | |
Feb 28 20:42:28.455: INFO: At 2016-02-28 20:40:27 -0800 PST - event for netserver-2: {kubelet spotter-kube-rkt-minion-yo39} Created: Created with rkt id 98c1f25a | |
Feb 28 20:42:28.455: INFO: At 2016-02-28 20:40:30 -0800 PST - event for netserver-1: {kubelet spotter-kube-rkt-minion-yii0} Pulled: Successfully pulled image "gcr.io/google_containers/netexec:1.4" | |
Feb 28 20:42:28.455: INFO: At 2016-02-28 20:40:33 -0800 PST - event for netserver-1: {kubelet spotter-kube-rkt-minion-yii0} Started: Started with rkt id 3a10d179 | |
Feb 28 20:42:28.455: INFO: At 2016-02-28 20:40:33 -0800 PST - event for netserver-1: {kubelet spotter-kube-rkt-minion-yii0} Created: Created with rkt id 3a10d179 | |
Feb 28 20:42:28.455: INFO: At 2016-02-28 20:40:35 -0800 PST - event for host-test-container-pod: {kubelet spotter-kube-rkt-minion-yo39} Pulling: pulling image "gcr.io/google_containers/hostexec:1.2" | |
Feb 28 20:42:28.455: INFO: At 2016-02-28 20:40:35 -0800 PST - event for host-test-container-pod: {default-scheduler } Scheduled: Successfully assigned host-test-container-pod to spotter-kube-rkt-minion-yo39 | |
Feb 28 20:42:28.455: INFO: At 2016-02-28 20:40:35 -0800 PST - event for test-container-pod: {default-scheduler } Scheduled: Successfully assigned test-container-pod to spotter-kube-rkt-minion-8b1u | |
Feb 28 20:42:28.455: INFO: At 2016-02-28 20:40:35 -0800 PST - event for test-container-pod: {kubelet spotter-kube-rkt-minion-8b1u} Pulled: Container image "gcr.io/google_containers/netexec:1.4" already present on machine | |
Feb 28 20:42:28.455: INFO: At 2016-02-28 20:40:38 -0800 PST - event for test-container-pod: {kubelet spotter-kube-rkt-minion-8b1u} Created: Created with rkt id fe374dc3 | |
Feb 28 20:42:28.455: INFO: At 2016-02-28 20:40:38 -0800 PST - event for test-container-pod: {kubelet spotter-kube-rkt-minion-8b1u} Started: Started with rkt id fe374dc3 | |
Feb 28 20:42:28.455: INFO: At 2016-02-28 20:40:39 -0800 PST - event for host-test-container-pod: {kubelet spotter-kube-rkt-minion-yo39} Pulled: Successfully pulled image "gcr.io/google_containers/hostexec:1.2" | |
Feb 28 20:42:28.455: INFO: At 2016-02-28 20:40:42 -0800 PST - event for host-test-container-pod: {kubelet spotter-kube-rkt-minion-yo39} Started: Started with rkt id ab20552f | |
Feb 28 20:42:28.455: INFO: At 2016-02-28 20:40:42 -0800 PST - event for host-test-container-pod: {kubelet spotter-kube-rkt-minion-yo39} Created: Created with rkt id ab20552f | |
Feb 28 20:42:28.633: INFO: POD NODE PHASE GRACE CONDITIONS | |
Feb 28 20:42:28.633: INFO: host-test-container-pod spotter-kube-rkt-minion-yo39 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 20:40:43 -0800 PST }] | |
Feb 28 20:42:28.633: INFO: netserver-0 spotter-kube-rkt-minion-8b1u Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 20:40:28 -0800 PST }] | |
Feb 28 20:42:28.633: INFO: netserver-1 spotter-kube-rkt-minion-yii0 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 20:40:33 -0800 PST }] | |
Feb 28 20:42:28.633: INFO: netserver-2 spotter-kube-rkt-minion-yo39 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 20:40:27 -0800 PST }] | |
Feb 28 20:42:28.633: INFO: test-container-pod spotter-kube-rkt-minion-8b1u Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 20:40:39 -0800 PST }] | |
Feb 28 20:42:28.633: INFO: etcd-server-events-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 20:35:11 -0800 PST }] | |
Feb 28 20:42:28.633: INFO: etcd-server-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 20:35:11 -0800 PST }] | |
Feb 28 20:42:28.633: INFO: kube-apiserver-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:25:59 -0800 PST }] | |
Feb 28 20:42:28.633: INFO: kube-controller-manager-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:26:19 -0800 PST }] | |
Feb 28 20:42:28.633: INFO: kube-dns-v10-ucm9e spotter-kube-rkt-minion-yii0 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:40:53 -0800 PST }] | |
Feb 28 20:42:28.633: INFO: kube-scheduler-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:25:59 -0800 PST }] | |
Feb 28 20:42:28.633: INFO: kubernetes-dashboard-v0.1.0-xx4kj spotter-kube-rkt-minion-8b1u Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:26:43 -0800 PST }] | |
Feb 28 20:42:28.633: INFO: l7-lb-controller-vg83c spotter-kube-rkt-minion-yo39 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:42:06 -0800 PST }] | |
Feb 28 20:42:28.633: INFO: | |
Feb 28 20:42:28.715: INFO: | |
Logging node info for node spotter-kube-rkt-master | |
Feb 28 20:42:28.797: INFO: Node Info: &{{ } {spotter-kube-rkt-master /api/v1/nodes/spotter-kube-rkt-master 28a99cb5-de94-11e5-9ed9-42010af00002 3660 0 2016-02-28 19:26:02 -0800 PST <nil> <nil> map[failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-master beta.kubernetes.io/instance-type:n1-standard-2 failure-domain.beta.kubernetes.io/region:us-east1] map[]} {10.245.0.0/24 1057893855773431411 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-master true} {map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] [{OutOfDisk False 2016-02-28 20:42:20 -0800 PST 2016-02-28 19:26:02 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 20:42:20 -0800 PST 2016-02-28 19:26:02 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.2} {ExternalIP 104.196.32.11}] {{10250}} {18f8f8ae3bfd63d33ca6152c840ac772 18F8F8AE-3BFD-63D3-3CA6-152C840AC772 85ba1a68-a60b-4788-8d0d-31581d8a1dbc 4.4.1-coreos CoreOS 960.0.0 (Coeur Rouge) docker://1.10.0 v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[gcr.io/google_containers/kube-apiserver:e75cb1e68585bef333c384799aa55b36] 62275978} {[gcr.io/google_containers/kube-controller-manager:5ceda9f0fcc85cf5112fd4eab97edf76] 55494410} {[gcr.io/google_containers/kube-scheduler:34f7eca4d44271cb192f0c32a79fb31f] 36991674} {[python:2.7-slim-pyyaml] 205046931} {[gcr.io/google_containers/pause:2.0] 350164} {[gcr.io/google_containers/kube-registry-proxy:0.3] 151205002} {[gcr.io/google_containers/etcd:2.0.12] 15265152} {[gcr.io/google_containers/serve_hostname:1.1] 4522409}]}} | |
Feb 28 20:42:28.797: INFO: | |
Logging kubelet events for node spotter-kube-rkt-master | |
Feb 28 20:42:28.875: INFO: | |
Logging pods the kubelet thinks are on node spotter-kube-rkt-master | |
Feb 28 20:42:29.041: INFO: Unable to retrieve kubelet pods for node spotter-kube-rkt-master | |
Feb 28 20:42:29.041: INFO: | |
Logging node info for node spotter-kube-rkt-minion-8b1u | |
Feb 28 20:42:29.130: INFO: Node Info: &{{ } {spotter-kube-rkt-minion-8b1u /api/v1/nodes/spotter-kube-rkt-minion-8b1u 2e5bf2ae-de94-11e5-9ed9-42010af00002 3661 0 2016-02-28 19:26:11 -0800 PST <nil> <nil> map[kubernetes.io/hostname:spotter-kube-rkt-minion-8b1u beta.kubernetes.io/instance-type:n1-standard-2 failure-domain.beta.kubernetes.io/region:us-east1 failure-domain.beta.kubernetes.io/zone:us-east1-b] map[]} {10.245.1.0/24 9623675355699996544 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-minion-8b1u false} {map[memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI}] map[memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI}] [{OutOfDisk False 2016-02-28 20:42:21 -0800 PST 2016-02-28 19:26:11 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 20:42:21 -0800 PST 2016-02-28 19:26:11 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.5} {ExternalIP 104.196.7.86}] {{10250}} {374318fe80761a3e4e3d72f72bd62802 374318FE-8076-1A3E-4E3D-72F72BD62802 b4070cab-7518-489e-9e3e-c5003667ca85 4.4.1-coreos CoreOS 960.0.0 (Coeur Rouge) rkt://1.0.0+git9003f4a-dirty v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[coreos.com/rkt/stage1-coreos:1.0.0+git9003f4a-dirty] 79951360} {[registry-1.docker.io/library/nginx:latest] 136025600} {[gcr.io/google_containers/pause:2.0] 355840} {[gcr.io/google_containers/resource_consumer:beta2] 138319872} {[gcr.io/google_containers/hostexec:1.2] 14018048} {[gcr.io/google_samples/gb-redisslave:v1] 113454592} {[registry-1.docker.io/library/redis:latest] 152067072} {[gcr.io/google_samples/gb-frontend:v4] 513876992} {[gcr.io/google_containers/update-demo:nautilus] 28160} {[gcr.io/google_containers/busybox:1.24] 1315840} {[gcr.io/google_containers/eptest:0.1] 2977792} {[gcr.io/google_containers/serve_hostname:1.1] 4529152} {[gcr.io/google_containers/mounttest:0.6] 2090496} {[gcr.io/google_containers/netexec:1.4] 7513088} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v0.1.0] 35423744}]}} | |
Feb 28 20:42:29.130: INFO: | |
Logging kubelet events for node spotter-kube-rkt-minion-8b1u | |
Feb 28 20:42:29.218: INFO: | |
Logging pods the kubelet thinks are on node spotter-kube-rkt-minion-8b1u | |
Feb 28 20:42:29.474: INFO: kubernetes-dashboard-v0.1.0-xx4kj started at <nil> (0 container statuses recorded) | |
Feb 28 20:42:29.474: INFO: netserver-0 started at <nil> (0 container statuses recorded) | |
Feb 28 20:42:29.474: INFO: test-container-pod started at <nil> (0 container statuses recorded) | |
Feb 28 20:42:29.955: INFO: | |
Latency metrics for node spotter-kube-rkt-minion-8b1u | |
Feb 28 20:42:29.955: INFO: | |
Logging node info for node spotter-kube-rkt-minion-yii0 | |
Feb 28 20:42:30.040: INFO: Node Info: &{{ } {spotter-kube-rkt-minion-yii0 /api/v1/nodes/spotter-kube-rkt-minion-yii0 28de6c1f-de94-11e5-9ed9-42010af00002 3662 0 2016-02-28 19:26:02 -0800 PST <nil> <nil> map[beta.kubernetes.io/instance-type:n1-standard-2 failure-domain.beta.kubernetes.io/region:us-east1 failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-minion-yii0] map[]} {10.245.3.0/24 9944607117135381947 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-minion-yii0 false} {map[pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI}] map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] [{OutOfDisk False 2016-02-28 20:42:23 -0800 PST 2016-02-28 19:40:42 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 20:42:23 -0800 PST 2016-02-28 19:40:42 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.3} {ExternalIP 104.196.102.116}] {{10250}} {b38c5389751fa1f6a57a3e0bc169499a B38C5389-751F-A1F6-A57A-3E0BC169499A c09d0613-ea30-4bb2-a090-591baae2e0d3 4.4.1-coreos CoreOS 970.1.0 (Coeur Rouge) rkt://1.0.0+git9003f4a-dirty v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[gcr.io/google_containers/netexec:1.4] 7513088} {[coreos.com/rkt/stage1-coreos:1.0.0+git9003f4a-dirty] 79951360} {[registry-1.docker.io/library/nginx:latest] 136025600} {[gcr.io/google_containers/resource_consumer:beta2] 138319872} {[gcr.io/google_samples/gb-frontend:v4] 513876992} {[gcr.io/google_containers/busybox:1.24] 1315840} {[gcr.io/google_containers/serve_hostname:1.1] 4529152} {[gcr.io/google_containers/exechealthz:1.0] 7314944} {[gcr.io/google_containers/skydns:2015-10-13-8c72f8c] 41955328} {[gcr.io/google_containers/kube2sky:1.12] 24685056} {[gcr.io/google_containers/etcd:2.0.9] 12827136}]}} | |
Feb 28 20:42:30.040: INFO: | |
Logging kubelet events for node spotter-kube-rkt-minion-yii0 | |
Feb 28 20:42:30.123: INFO: | |
Logging pods the kubelet thinks are on node spotter-kube-rkt-minion-yii0 | |
Feb 28 20:42:30.380: INFO: netserver-1 started at <nil> (0 container statuses recorded) | |
Feb 28 20:42:30.380: INFO: kube-dns-v10-ucm9e started at <nil> (0 container statuses recorded) | |
Feb 28 20:42:30.672: INFO: | |
Latency metrics for node spotter-kube-rkt-minion-yii0 | |
Feb 28 20:42:30.672: INFO: | |
Logging node info for node spotter-kube-rkt-minion-yo39 | |
Feb 28 20:42:30.753: INFO: Node Info: &{{ } {spotter-kube-rkt-minion-yo39 /api/v1/nodes/spotter-kube-rkt-minion-yo39 28d9f8e6-de94-11e5-9ed9-42010af00002 3663 0 2016-02-28 19:26:02 -0800 PST <nil> <nil> map[kubernetes.io/hostname:spotter-kube-rkt-minion-yo39 beta.kubernetes.io/instance-type:n1-standard-2 failure-domain.beta.kubernetes.io/region:us-east1 failure-domain.beta.kubernetes.io/zone:us-east1-b] map[]} {10.245.2.0/24 5812756469575456144 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-minion-yo39 false} {map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] [{OutOfDisk False 2016-02-28 20:42:29 -0800 PST 2016-02-28 19:39:36 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 20:42:29 -0800 PST 2016-02-28 19:39:36 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.4} {ExternalIP 104.196.107.90}] {{10250}} {d67f51f98b40c228849e17dc0b2cff8f D67F51F9-8B40-C228-849E-17DC0B2CFF8F 37634038-5d12-46d7-8197-fc5e74a7aef8 4.4.1-coreos CoreOS 970.1.0 (Coeur Rouge) rkt://1.0.0+git9003f4a-dirty v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[gcr.io/google_containers/hostexec:1.2] 14018048} {[coreos.com/rkt/stage1-coreos:1.0.0+git9003f4a-dirty] 79951360} {[registry-1.docker.io/library/nginx:latest] 136025600} {[gcr.io/google_containers/goproxy:0.1] 5495808} {[gcr.io/google_containers/busybox:1.24] 1315840} {[gcr.io/google_containers/resource_consumer:beta2] 138319872} {[gcr.io/google_samples/gb-redisslave:v1] 113454592} {[gcr.io/google_samples/gb-frontend:v4] 513876992} {[gcr.io/google_containers/update-demo:nautilus] 28160} {[gcr.io/google_containers/netexec:1.4] 7513088} {[gcr.io/google_containers/serve_hostname:1.1] 4529152} {[gcr.io/google_containers/glbc:0.5.1] 203040256} {[gcr.io/google_containers/defaultbackend:1.0] 7729152}]}} | |
Feb 28 20:42:30.753: INFO: | |
Logging kubelet events for node spotter-kube-rkt-minion-yo39 | |
Feb 28 20:42:30.863: INFO: | |
Logging pods the kubelet thinks are on node spotter-kube-rkt-minion-yo39 | |
Feb 28 20:42:31.116: INFO: l7-lb-controller-vg83c started at <nil> (0 container statuses recorded) | |
Feb 28 20:42:31.116: INFO: netserver-2 started at <nil> (0 container statuses recorded) | |
Feb 28 20:42:31.116: INFO: host-test-container-pod started at <nil> (0 container statuses recorded) | |
Feb 28 20:42:31.460: INFO: | |
Latency metrics for node spotter-kube-rkt-minion-yo39 | |
Feb 28 20:42:31.460: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-e2e-kubeproxy-on0a5" for this suite. | |
• Failure [133.247 seconds] | |
KubeProxy | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubeproxy.go:102 | |
should test kube-proxy [Slow] [It] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubeproxy.go:101 | |
Response was:map[errors:[reading from udp connection failed. err:'read udp 10.0.20.224:90: i/o timeout' reading from udp connection failed. err:'read udp 10.0.20.224:90: i/o timeout' reading from udp connection failed. err:'read udp 10.0.20.224:90: i/o timeout' reading from udp connection failed. err:'read udp 10.0.20.224:90: i/o timeout' reading from udp connection failed. err:'read udp 10.0.20.224:90: i/o timeout' reading from udp connection failed. err:'read udp 10.0.20.224:90: i/o timeout' reading from udp connection failed. err:'read udp 10.0.20.224:90: i/o timeout' reading from udp connection failed. err:'read udp 10.0.20.224:90: i/o timeout'] responses:[netserver-1 netserver-2 netserver-2 netserver-1 netserver-1 netserver-1 netserver-2 netserver-2 netserver-2 netserver-1 netserver-2 netserver-2 netserver-1 netserver-1 netserver-2 netserver-2]] | |
Expected | |
<int>: 2 | |
to be == | |
<int>: 3 | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubeproxy.go:241 | |
------------------------------ | |
Proxy version v1 | |
should proxy to cadvisor using proxy subresource [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:61 | |
[BeforeEach] version v1 | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:83 | |
STEP: Creating a kubernetes client | |
Feb 28 20:42:36.890: INFO: >>> testContext.KubeConfig: /home/spotter/.kube/config | |
STEP: Building a namespace api object | |
Feb 28 20:42:36.978: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-ku8k2 | |
Feb 28 20:42:37.058: INFO: Service account default in ns e2e-tests-proxy-ku8k2 had 0 secrets, ignoring for 2s: <nil> | |
Feb 28 20:42:39.144: INFO: Service account default in ns e2e-tests-proxy-ku8k2 with secrets found. (2.166801662s) | |
STEP: Waiting for a default service account to be provisioned in namespace | |
Feb 28 20:42:39.144: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-ku8k2 | |
Feb 28 20:42:39.227: INFO: Service account default in ns e2e-tests-proxy-ku8k2 with secrets found. (82.837087ms) | |
[It] should proxy to cadvisor using proxy subresource [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:61 | |
Feb 28 20:42:39.403: INFO: /api/v1/nodes/spotter-kube-rkt-minion-8b1u:4194/proxy/containers/: | |
<html> | |
<head> | |
<title>cAdvisor - /</title> | |
<link rel="stylesheet" href="../static/... (200; 90.308156ms) | |
Feb 28 20:42:39.486: INFO: /api/v1/nodes/spotter-kube-rkt-minion-8b1u:4194/proxy/containers/: | |
<html> | |
<head> | |
<title>cAdvisor - /</title> | |
<link rel="stylesheet" href="../static/... (200; 83.222435ms) | |
Feb 28 20:42:39.571: INFO: /api/v1/nodes/spotter-kube-rkt-minion-8b1u:4194/proxy/containers/: | |
<html> | |
<head> | |
<title>cAdvisor - /</title> | |
<link rel="stylesheet" href="../static/... (200; 85.400918ms) | |
Feb 28 20:42:39.657: INFO: /api/v1/nodes/spotter-kube-rkt-minion-8b1u:4194/proxy/containers/: | |
<html> | |
<head> | |
<title>cAdvisor - /</title> | |
<link rel="stylesheet" href="../static/... (200; 85.58856ms) | |
Feb 28 20:42:39.743: INFO: /api/v1/nodes/spotter-kube-rkt-minion-8b1u:4194/proxy/containers/: | |
<html> | |
<head> | |
<title>cAdvisor - /</title> | |
<link rel="stylesheet" href="../static/... (200; 85.80036ms) | |
Feb 28 20:42:39.834: INFO: /api/v1/nodes/spotter-kube-rkt-minion-8b1u:4194/proxy/containers/: | |
<html> | |
<head> | |
<title>cAdvisor - /</title> | |
<link rel="stylesheet" href="../static/... (200; 90.945102ms) | |
Feb 28 20:42:39.921: INFO: /api/v1/nodes/spotter-kube-rkt-minion-8b1u:4194/proxy/containers/: | |
<html> | |
<head> | |
<title>cAdvisor - /</title> | |
<link rel="stylesheet" href="../static/... (200; 87.184869ms) | |
Feb 28 20:42:40.007: INFO: /api/v1/nodes/spotter-kube-rkt-minion-8b1u:4194/proxy/containers/: | |
<html> | |
<head> | |
<title>cAdvisor - /</title> | |
<link rel="stylesheet" href="../static/... (200; 85.996694ms) | |
Feb 28 20:42:40.096: INFO: /api/v1/nodes/spotter-kube-rkt-minion-8b1u:4194/proxy/containers/: | |
<html> | |
<head> | |
<title>cAdvisor - /</title> | |
<link rel="stylesheet" href="../static/... (200; 89.195884ms) | |
Feb 28 20:42:40.185: INFO: /api/v1/nodes/spotter-kube-rkt-minion-8b1u:4194/proxy/containers/: | |
<html> | |
<head> | |
<title>cAdvisor - /</title> | |
<link rel="stylesheet" href="../static/... (200; 89.145073ms) | |
Feb 28 20:42:40.271: INFO: /api/v1/nodes/spotter-kube-rkt-minion-8b1u:4194/proxy/containers/: | |
<html> | |
<head> | |
<title>cAdvisor - /</title> | |
<link rel="stylesheet" href="../static/... (200; 85.516719ms) | |
Feb 28 20:42:40.356: INFO: /api/v1/nodes/spotter-kube-rkt-minion-8b1u:4194/proxy/containers/: | |
<html> | |
<head> | |
<title>cAdvisor - /</title> | |
<link rel="stylesheet" href="../static/... (200; 84.666996ms) | |
Feb 28 20:42:40.444: INFO: /api/v1/nodes/spotter-kube-rkt-minion-8b1u:4194/proxy/containers/: | |
<html> | |
<head> | |
<title>cAdvisor - /</title> | |
<link rel="stylesheet" href="../static/... (200; 88.443672ms) | |
Feb 28 20:42:40.530: INFO: /api/v1/nodes/spotter-kube-rkt-minion-8b1u:4194/proxy/containers/: | |
<html> | |
<head> | |
<title>cAdvisor - /</title> | |
<link rel="stylesheet" href="../static/... (200; 85.501501ms) | |
Feb 28 20:42:40.623: INFO: /api/v1/nodes/spotter-kube-rkt-minion-8b1u:4194/proxy/containers/: | |
<html> | |
<head> | |
<title>cAdvisor - /</title> | |
<link rel="stylesheet" href="../static/... (200; 93.466989ms) | |
W0228 20:42:40.691773 11176 request.go:627] Throttling request took 67.957776ms, request: https://104.196.32.11/api/v1/nodes/spotter-kube-rkt-minion-8b1u:4194/proxy/containers/ | |
Feb 28 20:42:40.783: INFO: /api/v1/nodes/spotter-kube-rkt-minion-8b1u:4194/proxy/containers/: | |
<html> | |
<head> | |
<title>cAdvisor - /</title> | |
<link rel="stylesheet" href="../static/... (200; 159.658973ms) | |
W0228 20:42:40.891762 11176 request.go:627] Throttling request took 108.257493ms, request: https://104.196.32.11/api/v1/nodes/spotter-kube-rkt-minion-8b1u:4194/proxy/containers/ | |
Feb 28 20:42:40.983: INFO: /api/v1/nodes/spotter-kube-rkt-minion-8b1u:4194/proxy/containers/: | |
<html> | |
<head> | |
<title>cAdvisor - /</title> | |
<link rel="stylesheet" href="../static/... (200; 200.000532ms) | |
W0228 20:42:41.091769 11176 request.go:627] Throttling request took 108.218681ms, request: https://104.196.32.11/api/v1/nodes/spotter-kube-rkt-minion-8b1u:4194/proxy/containers/ | |
Feb 28 20:42:41.180: INFO: /api/v1/nodes/spotter-kube-rkt-minion-8b1u:4194/proxy/containers/: | |
<html> | |
<head> | |
<title>cAdvisor - /</title> | |
<link rel="stylesheet" href="../static/... (200; 196.53552ms) | |
W0228 20:42:41.291778 11176 request.go:627] Throttling request took 111.669288ms, request: https://104.196.32.11/api/v1/nodes/spotter-kube-rkt-minion-8b1u:4194/proxy/containers/ | |
Feb 28 20:42:41.376: INFO: /api/v1/nodes/spotter-kube-rkt-minion-8b1u:4194/proxy/containers/: | |
<html> | |
<head> | |
<title>cAdvisor - /</title> | |
<link rel="stylesheet" href="../static/... (200; 196.589688ms) | |
W0228 20:42:41.491740 11176 request.go:627] Throttling request took 114.981875ms, request: https://104.196.32.11/api/v1/nodes/spotter-kube-rkt-minion-8b1u:4194/proxy/containers/ | |
Feb 28 20:42:41.585: INFO: /api/v1/nodes/spotter-kube-rkt-minion-8b1u:4194/proxy/containers/: | |
<html> | |
<head> | |
<title>cAdvisor - /</title> | |
<link rel="stylesheet" href="../static/... (200; 208.302821ms) | |
[AfterEach] version v1 | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:84 | |
Feb 28 20:42:41.585: INFO: Waiting up to 1m0s for all nodes to be ready | |
W0228 20:42:41.691730 11176 request.go:627] Throttling request took 106.594392ms, request: https://104.196.32.11/api/v1/nodes | |
STEP: Destroying namespace "e2e-tests-proxy-ku8k2" for this suite. | |
W0228 20:42:41.891749 11176 request.go:627] Throttling request took 115.455052ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-proxy-ku8k2 | |
W0228 20:42:42.091769 11176 request.go:627] Throttling request took 112.141402ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-proxy-ku8k2 | |
W0228 20:42:42.291791 11176 request.go:627] Throttling request took 118.072351ms, request: https://104.196.32.11/api/v1/namespaces/e2e-tests-proxy-ku8k2/pods | |
• [SLOW TEST:5.485 seconds] | |
Proxy | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:40 | |
version v1 | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:39 | |
should proxy to cadvisor using proxy subresource [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:61 | |
------------------------------ | |
Kubectl client Kubectl run --rm job | |
should create a job from an image, then delete the job [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1029 | |
[BeforeEach] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:83 | |
STEP: Creating a kubernetes client | |
Feb 28 20:42:42.375: INFO: >>> testContext.KubeConfig: /home/spotter/.kube/config | |
STEP: Building a namespace api object | |
Feb 28 20:42:42.462: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-001cc | |
Feb 28 20:42:42.547: INFO: Service account default in ns e2e-tests-kubectl-001cc had 0 secrets, ignoring for 2s: <nil> | |
Feb 28 20:42:44.628: INFO: Service account default in ns e2e-tests-kubectl-001cc with secrets found. (2.165709143s) | |
STEP: Waiting for a default service account to be provisioned in namespace | |
Feb 28 20:42:44.628: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-001cc | |
Feb 28 20:42:44.711: INFO: Service account default in ns e2e-tests-kubectl-001cc with secrets found. (82.81083ms) | |
[BeforeEach] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:113 | |
[It] should create a job from an image, then delete the job [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1029 | |
STEP: executing a command with run --rm and attach with stdin | |
Feb 28 20:42:44.793: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config --namespace= run e2e-test-rm-busybox-job --image=busybox --rm=true --restart=Never --attach=true --stdin -- sh -c cat && echo 'stdin closed'' | |
Feb 28 20:42:55.374: INFO: stdout: "Waiting for pod default/e2e-test-rm-busybox-job-d5ix3 to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-d5ix3 to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-d5ix3 to be running, status is Pending, pod ready: false\nstdin closed\njob \"e2e-test-rm-busybox-job\" deleted\n" | |
Feb 28 20:42:55.374: INFO: stderr: "" | |
[AfterEach] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:84 | |
STEP: Collecting events from namespace "e2e-tests-kubectl-001cc". | |
Feb 28 20:42:55.638: INFO: POD NODE PHASE GRACE CONDITIONS | |
Feb 28 20:42:55.638: INFO: etcd-server-events-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 20:35:11 -0800 PST }] | |
Feb 28 20:42:55.638: INFO: etcd-server-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 20:35:11 -0800 PST }] | |
Feb 28 20:42:55.638: INFO: kube-apiserver-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:25:59 -0800 PST }] | |
Feb 28 20:42:55.638: INFO: kube-controller-manager-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:26:19 -0800 PST }] | |
Feb 28 20:42:55.638: INFO: kube-dns-v10-ucm9e spotter-kube-rkt-minion-yii0 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:40:53 -0800 PST }] | |
Feb 28 20:42:55.638: INFO: kube-scheduler-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:25:59 -0800 PST }] | |
Feb 28 20:42:55.638: INFO: kubernetes-dashboard-v0.1.0-xx4kj spotter-kube-rkt-minion-8b1u Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:26:43 -0800 PST }] | |
Feb 28 20:42:55.638: INFO: l7-lb-controller-vg83c spotter-kube-rkt-minion-yo39 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:42:06 -0800 PST }] | |
Feb 28 20:42:55.638: INFO: | |
Feb 28 20:42:55.719: INFO: | |
Logging node info for node spotter-kube-rkt-master | |
Feb 28 20:42:55.801: INFO: Node Info: &{{ } {spotter-kube-rkt-master /api/v1/nodes/spotter-kube-rkt-master 28a99cb5-de94-11e5-9ed9-42010af00002 3705 0 2016-02-28 19:26:02 -0800 PST <nil> <nil> map[failure-domain.beta.kubernetes.io/region:us-east1 failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-master beta.kubernetes.io/instance-type:n1-standard-2] map[]} {10.245.0.0/24 1057893855773431411 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-master true} {map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] map[pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI}] [{OutOfDisk False 2016-02-28 20:42:50 -0800 PST 2016-02-28 19:26:02 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 20:42:50 -0800 PST 2016-02-28 19:26:02 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.2} {ExternalIP 104.196.32.11}] {{10250}} {18f8f8ae3bfd63d33ca6152c840ac772 18F8F8AE-3BFD-63D3-3CA6-152C840AC772 85ba1a68-a60b-4788-8d0d-31581d8a1dbc 4.4.1-coreos CoreOS 960.0.0 (Coeur Rouge) docker://1.10.0 v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[gcr.io/google_containers/kube-apiserver:e75cb1e68585bef333c384799aa55b36] 62275978} {[gcr.io/google_containers/kube-controller-manager:5ceda9f0fcc85cf5112fd4eab97edf76] 55494410} {[gcr.io/google_containers/kube-scheduler:34f7eca4d44271cb192f0c32a79fb31f] 36991674} {[python:2.7-slim-pyyaml] 205046931} {[gcr.io/google_containers/pause:2.0] 350164} {[gcr.io/google_containers/kube-registry-proxy:0.3] 151205002} {[gcr.io/google_containers/etcd:2.0.12] 15265152} {[gcr.io/google_containers/serve_hostname:1.1] 4522409}]}} | |
Feb 28 20:42:55.802: INFO: | |
Logging kubelet events for node spotter-kube-rkt-master | |
Feb 28 20:42:55.888: INFO: | |
Logging pods the kubelet thinks are on node spotter-kube-rkt-master | |
Feb 28 20:42:56.056: INFO: Unable to retrieve kubelet pods for node spotter-kube-rkt-master | |
Feb 28 20:42:56.056: INFO: | |
Logging node info for node spotter-kube-rkt-minion-8b1u | |
Feb 28 20:42:56.145: INFO: Node Info: &{{ } {spotter-kube-rkt-minion-8b1u /api/v1/nodes/spotter-kube-rkt-minion-8b1u 2e5bf2ae-de94-11e5-9ed9-42010af00002 3707 0 2016-02-28 19:26:11 -0800 PST <nil> <nil> map[beta.kubernetes.io/instance-type:n1-standard-2 failure-domain.beta.kubernetes.io/region:us-east1 failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-minion-8b1u] map[]} {10.245.1.0/24 9623675355699996544 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-minion-8b1u false} {map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] map[pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI}] [{OutOfDisk False 2016-02-28 20:42:51 -0800 PST 2016-02-28 19:26:11 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 20:42:51 -0800 PST 2016-02-28 19:26:11 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.5} {ExternalIP 104.196.7.86}] {{10250}} {374318fe80761a3e4e3d72f72bd62802 374318FE-8076-1A3E-4E3D-72F72BD62802 b4070cab-7518-489e-9e3e-c5003667ca85 4.4.1-coreos CoreOS 960.0.0 (Coeur Rouge) rkt://1.0.0+git9003f4a-dirty v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[registry-1.docker.io/library/busybox:latest] 1315840} {[coreos.com/rkt/stage1-coreos:1.0.0+git9003f4a-dirty] 79951360} {[registry-1.docker.io/library/nginx:latest] 136025600} {[gcr.io/google_containers/pause:2.0] 355840} {[gcr.io/google_containers/resource_consumer:beta2] 138319872} {[gcr.io/google_containers/hostexec:1.2] 14018048} {[gcr.io/google_samples/gb-redisslave:v1] 113454592} {[registry-1.docker.io/library/redis:latest] 152067072} {[gcr.io/google_samples/gb-frontend:v4] 513876992} {[gcr.io/google_containers/update-demo:nautilus] 28160} {[gcr.io/google_containers/busybox:1.24] 1315840} {[gcr.io/google_containers/eptest:0.1] 2977792} {[gcr.io/google_containers/serve_hostname:1.1] 4529152} {[gcr.io/google_containers/mounttest:0.6] 2090496} {[gcr.io/google_containers/netexec:1.4] 7513088} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v0.1.0] 35423744}]}} | |
Feb 28 20:42:56.145: INFO: | |
Logging kubelet events for node spotter-kube-rkt-minion-8b1u | |
Feb 28 20:42:56.229: INFO: | |
Logging pods the kubelet thinks are on node spotter-kube-rkt-minion-8b1u | |
Feb 28 20:42:56.491: INFO: kubernetes-dashboard-v0.1.0-xx4kj started at <nil> (0 container statuses recorded) | |
Feb 28 20:42:57.063: INFO: | |
Latency metrics for node spotter-kube-rkt-minion-8b1u | |
Feb 28 20:42:57.063: INFO: | |
Logging node info for node spotter-kube-rkt-minion-yii0 | |
Feb 28 20:42:57.148: INFO: Node Info: &{{ } {spotter-kube-rkt-minion-yii0 /api/v1/nodes/spotter-kube-rkt-minion-yii0 28de6c1f-de94-11e5-9ed9-42010af00002 3710 0 2016-02-28 19:26:02 -0800 PST <nil> <nil> map[beta.kubernetes.io/instance-type:n1-standard-2 failure-domain.beta.kubernetes.io/region:us-east1 failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-minion-yii0] map[]} {10.245.3.0/24 9944607117135381947 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-minion-yii0 false} {map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] [{OutOfDisk False 2016-02-28 20:42:53 -0800 PST 2016-02-28 19:40:42 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 20:42:53 -0800 PST 2016-02-28 19:40:42 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.3} {ExternalIP 104.196.102.116}] {{10250}} {b38c5389751fa1f6a57a3e0bc169499a B38C5389-751F-A1F6-A57A-3E0BC169499A c09d0613-ea30-4bb2-a090-591baae2e0d3 4.4.1-coreos CoreOS 970.1.0 (Coeur Rouge) rkt://1.0.0+git9003f4a-dirty v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[gcr.io/google_containers/netexec:1.4] 7513088} {[coreos.com/rkt/stage1-coreos:1.0.0+git9003f4a-dirty] 79951360} {[registry-1.docker.io/library/nginx:latest] 136025600} {[gcr.io/google_containers/resource_consumer:beta2] 138319872} {[gcr.io/google_samples/gb-frontend:v4] 513876992} {[gcr.io/google_containers/busybox:1.24] 1315840} {[gcr.io/google_containers/serve_hostname:1.1] 4529152} {[gcr.io/google_containers/exechealthz:1.0] 7314944} {[gcr.io/google_containers/skydns:2015-10-13-8c72f8c] 41955328} {[gcr.io/google_containers/kube2sky:1.12] 24685056} {[gcr.io/google_containers/etcd:2.0.9] 12827136}]}} | |
Feb 28 20:42:57.148: INFO: | |
Logging kubelet events for node spotter-kube-rkt-minion-yii0 | |
Feb 28 20:42:57.232: INFO: | |
Logging pods the kubelet thinks are on node spotter-kube-rkt-minion-yii0 | |
Feb 28 20:42:57.476: INFO: kube-dns-v10-ucm9e started at <nil> (0 container statuses recorded) | |
Feb 28 20:42:57.768: INFO: | |
Latency metrics for node spotter-kube-rkt-minion-yii0 | |
Feb 28 20:42:57.768: INFO: | |
Logging node info for node spotter-kube-rkt-minion-yo39 | |
Feb 28 20:42:57.848: INFO: Node Info: &{{ } {spotter-kube-rkt-minion-yo39 /api/v1/nodes/spotter-kube-rkt-minion-yo39 28d9f8e6-de94-11e5-9ed9-42010af00002 3704 0 2016-02-28 19:26:02 -0800 PST <nil> <nil> map[failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-minion-yo39 beta.kubernetes.io/instance-type:n1-standard-2 failure-domain.beta.kubernetes.io/region:us-east1] map[]} {10.245.2.0/24 5812756469575456144 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-minion-yo39 false} {map[memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI}] map[pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI}] [{OutOfDisk False 2016-02-28 20:42:49 -0800 PST 2016-02-28 19:39:36 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 20:42:49 -0800 PST 2016-02-28 19:39:36 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.4} {ExternalIP 104.196.107.90}] {{10250}} {d67f51f98b40c228849e17dc0b2cff8f D67F51F9-8B40-C228-849E-17DC0B2CFF8F 37634038-5d12-46d7-8197-fc5e74a7aef8 4.4.1-coreos CoreOS 970.1.0 (Coeur Rouge) rkt://1.0.0+git9003f4a-dirty v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[gcr.io/google_containers/hostexec:1.2] 14018048} {[coreos.com/rkt/stage1-coreos:1.0.0+git9003f4a-dirty] 79951360} {[registry-1.docker.io/library/nginx:latest] 136025600} {[gcr.io/google_containers/goproxy:0.1] 5495808} {[gcr.io/google_containers/busybox:1.24] 1315840} {[gcr.io/google_containers/resource_consumer:beta2] 138319872} {[gcr.io/google_samples/gb-redisslave:v1] 113454592} {[gcr.io/google_samples/gb-frontend:v4] 513876992} {[gcr.io/google_containers/update-demo:nautilus] 28160} {[gcr.io/google_containers/netexec:1.4] 7513088} {[gcr.io/google_containers/serve_hostname:1.1] 4529152} {[gcr.io/google_containers/glbc:0.5.1] 203040256} {[gcr.io/google_containers/defaultbackend:1.0] 7729152}]}} | |
Feb 28 20:42:57.848: INFO: | |
Logging kubelet events for node spotter-kube-rkt-minion-yo39 | |
Feb 28 20:42:57.931: INFO: | |
Logging pods the kubelet thinks are on node spotter-kube-rkt-minion-yo39 | |
Feb 28 20:42:58.181: INFO: l7-lb-controller-vg83c started at <nil> (0 container statuses recorded) | |
Feb 28 20:42:58.521: INFO: | |
Latency metrics for node spotter-kube-rkt-minion-yo39 | |
Feb 28 20:42:58.521: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-kubectl-001cc" for this suite. | |
• Failure [21.571 seconds] | |
Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1082 | |
Kubectl run --rm job | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1030 | |
should create a job from an image, then delete the job [Conformance] [It] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1029 | |
Expected | |
<string>: Waiting for pod default/e2e-test-rm-busybox-job-d5ix3 to be running, status is Pending, pod ready: false | |
Waiting for pod default/e2e-test-rm-busybox-job-d5ix3 to be running, status is Pending, pod ready: false | |
Waiting for pod default/e2e-test-rm-busybox-job-d5ix3 to be running, status is Pending, pod ready: false | |
stdin closed | |
job "e2e-test-rm-busybox-job" deleted | |
to contain substring | |
<string>: abcd1234 | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1022 | |
------------------------------ | |
S | |
------------------------------ | |
ReplicationController | |
should serve a basic image on each replica with a private image | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:45 | |
[BeforeEach] ReplicationController | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:83 | |
STEP: Creating a kubernetes client | |
Feb 28 20:43:03.946: INFO: >>> testContext.KubeConfig: /home/spotter/.kube/config | |
STEP: Building a namespace api object | |
Feb 28 20:43:04.037: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-replication-controller-q20oy | |
Feb 28 20:43:04.122: INFO: Service account default in ns e2e-tests-replication-controller-q20oy with secrets found. (84.789834ms) | |
STEP: Waiting for a default service account to be provisioned in namespace | |
Feb 28 20:43:04.122: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-replication-controller-q20oy | |
Feb 28 20:43:04.205: INFO: Service account default in ns e2e-tests-replication-controller-q20oy with secrets found. (82.831974ms) | |
[It] should serve a basic image on each replica with a private image | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:45 | |
STEP: Creating replication controller my-hostname-private-eb825ccf-de9e-11e5-a1fb-54ee75510eb4 | |
Feb 28 20:43:04.402: INFO: Pod name my-hostname-private-eb825ccf-de9e-11e5-a1fb-54ee75510eb4: Found 2 pods out of 2 | |
STEP: Ensuring each pod is running | |
Feb 28 20:43:04.402: INFO: Waiting up to 5m0s for pod my-hostname-private-eb825ccf-de9e-11e5-a1fb-54ee75510eb4-r1xnz status to be running | |
Feb 28 20:43:04.489: INFO: Waiting for pod my-hostname-private-eb825ccf-de9e-11e5-a1fb-54ee75510eb4-r1xnz in namespace 'e2e-tests-replication-controller-q20oy' status to be 'running'(found phase: "Pending", readiness: false) (87.382809ms elapsed) | |
Feb 28 20:43:06.572: INFO: Waiting for pod my-hostname-private-eb825ccf-de9e-11e5-a1fb-54ee75510eb4-r1xnz in namespace 'e2e-tests-replication-controller-q20oy' status to be 'running'(found phase: "Pending", readiness: false) (2.170287636s elapsed) | |
Feb 28 20:43:08.652: INFO: Waiting for pod my-hostname-private-eb825ccf-de9e-11e5-a1fb-54ee75510eb4-r1xnz in namespace 'e2e-tests-replication-controller-q20oy' status to be 'running'(found phase: "Pending", readiness: false) (4.250613939s elapsed) | |
Feb 28 20:43:10.738: INFO: Waiting for pod my-hostname-private-eb825ccf-de9e-11e5-a1fb-54ee75510eb4-r1xnz in namespace 'e2e-tests-replication-controller-q20oy' status to be 'running'(found phase: "Pending", readiness: false) (6.336514065s elapsed) | |
Feb 28 20:43:12.823: INFO: Found pod 'my-hostname-private-eb825ccf-de9e-11e5-a1fb-54ee75510eb4-r1xnz' on node 'spotter-kube-rkt-minion-8b1u' | |
Feb 28 20:43:12.823: INFO: Waiting up to 5m0s for pod my-hostname-private-eb825ccf-de9e-11e5-a1fb-54ee75510eb4-y0187 status to be running | |
Feb 28 20:43:12.909: INFO: Found pod 'my-hostname-private-eb825ccf-de9e-11e5-a1fb-54ee75510eb4-y0187' on node 'spotter-kube-rkt-minion-yo39' | |
STEP: Trying to dial each unique pod | |
Feb 28 20:43:18.259: INFO: Controller my-hostname-private-eb825ccf-de9e-11e5-a1fb-54ee75510eb4: Got expected result from replica 1 [my-hostname-private-eb825ccf-de9e-11e5-a1fb-54ee75510eb4-r1xnz]: "my-hostname-private-eb825ccf-de9e-11e5-a1fb-54ee75510eb4-r1xnz", 1 of 2 required successes so far | |
Feb 28 20:43:18.514: INFO: Controller my-hostname-private-eb825ccf-de9e-11e5-a1fb-54ee75510eb4: Got expected result from replica 2 [my-hostname-private-eb825ccf-de9e-11e5-a1fb-54ee75510eb4-y0187]: "my-hostname-private-eb825ccf-de9e-11e5-a1fb-54ee75510eb4-y0187", 2 of 2 required successes so far | |
STEP: deleting replication controller my-hostname-private-eb825ccf-de9e-11e5-a1fb-54ee75510eb4 in namespace e2e-tests-replication-controller-q20oy | |
Feb 28 20:43:21.216: INFO: Deleting RC my-hostname-private-eb825ccf-de9e-11e5-a1fb-54ee75510eb4 took: 2.613429828s | |
Feb 28 20:43:21.300: INFO: Terminating RC my-hostname-private-eb825ccf-de9e-11e5-a1fb-54ee75510eb4 pods took: 83.845573ms | |
[AfterEach] ReplicationController | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:84 | |
Feb 28 20:43:21.300: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-replication-controller-q20oy" for this suite. | |
• [SLOW TEST:22.793 seconds] | |
ReplicationController | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:46 | |
should serve a basic image on each replica with a private image | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:45 | |
------------------------------ | |
Kubectl client Kubectl cluster-info | |
should check if Kubernetes master services is included in cluster-info [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:553 | |
[BeforeEach] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:83 | |
STEP: Creating a kubernetes client | |
Feb 28 20:43:26.739: INFO: >>> testContext.KubeConfig: /home/spotter/.kube/config | |
STEP: Building a namespace api object | |
Feb 28 20:43:26.829: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-0k2o8 | |
Feb 28 20:43:26.917: INFO: Service account default in ns e2e-tests-kubectl-0k2o8 with secrets found. (88.003023ms) | |
STEP: Waiting for a default service account to be provisioned in namespace | |
Feb 28 20:43:26.917: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-0k2o8 | |
Feb 28 20:43:27.001: INFO: Service account default in ns e2e-tests-kubectl-0k2o8 with secrets found. (83.573437ms) | |
[BeforeEach] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:113 | |
[It] should check if Kubernetes master services is included in cluster-info [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:553 | |
STEP: validating cluster-info | |
Feb 28 20:43:27.001: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config cluster-info' | |
Feb 28 20:43:27.679: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://104.196.32.11\x1b[0m\n\x1b[0;32mGLBCDefaultBackend\x1b[0m is running at \x1b[0;33mhttps://104.196.32.11/api/v1/proxy/namespaces/kube-system/services/default-http-backend\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://104.196.32.11/api/v1/proxy/namespaces/kube-system/services/kube-dns\x1b[0m\n\x1b[0;32mkubernetes-dashboard\x1b[0m is running at \x1b[0;33mhttps://104.196.32.11/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard\x1b[0m\n" | |
Feb 28 20:43:27.679: INFO: stderr: "" | |
Feb 28 20:43:27.679: FAIL: Missing Heapster in kubectl cluster-info | |
[AfterEach] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:84 | |
STEP: Collecting events from namespace "e2e-tests-kubectl-0k2o8". | |
Feb 28 20:43:27.943: INFO: POD NODE PHASE GRACE CONDITIONS | |
Feb 28 20:43:27.943: INFO: etcd-server-events-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 20:35:11 -0800 PST }] | |
Feb 28 20:43:27.943: INFO: etcd-server-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 20:35:11 -0800 PST }] | |
Feb 28 20:43:27.943: INFO: kube-apiserver-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:25:59 -0800 PST }] | |
Feb 28 20:43:27.943: INFO: kube-controller-manager-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:26:19 -0800 PST }] | |
Feb 28 20:43:27.943: INFO: kube-dns-v10-ucm9e spotter-kube-rkt-minion-yii0 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:40:53 -0800 PST }] | |
Feb 28 20:43:27.943: INFO: kube-scheduler-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:25:59 -0800 PST }] | |
Feb 28 20:43:27.943: INFO: kubernetes-dashboard-v0.1.0-xx4kj spotter-kube-rkt-minion-8b1u Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:26:43 -0800 PST }] | |
Feb 28 20:43:27.943: INFO: l7-lb-controller-vg83c spotter-kube-rkt-minion-yo39 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:42:06 -0800 PST }] | |
Feb 28 20:43:27.943: INFO: | |
Feb 28 20:43:28.026: INFO: | |
Logging node info for node spotter-kube-rkt-master | |
Feb 28 20:43:28.116: INFO: Node Info: &{{ } {spotter-kube-rkt-master /api/v1/nodes/spotter-kube-rkt-master 28a99cb5-de94-11e5-9ed9-42010af00002 3752 0 2016-02-28 19:26:02 -0800 PST <nil> <nil> map[beta.kubernetes.io/instance-type:n1-standard-2 failure-domain.beta.kubernetes.io/region:us-east1 failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-master] map[]} {10.245.0.0/24 1057893855773431411 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-master true} {map[pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI}] map[memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI}] [{OutOfDisk False 2016-02-28 20:43:20 -0800 PST 2016-02-28 19:26:02 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 20:43:20 -0800 PST 2016-02-28 19:26:02 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.2} {ExternalIP 104.196.32.11}] {{10250}} {18f8f8ae3bfd63d33ca6152c840ac772 18F8F8AE-3BFD-63D3-3CA6-152C840AC772 85ba1a68-a60b-4788-8d0d-31581d8a1dbc 4.4.1-coreos CoreOS 960.0.0 (Coeur Rouge) docker://1.10.0 v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[gcr.io/google_containers/kube-apiserver:e75cb1e68585bef333c384799aa55b36] 62275978} {[gcr.io/google_containers/kube-controller-manager:5ceda9f0fcc85cf5112fd4eab97edf76] 55494410} {[gcr.io/google_containers/kube-scheduler:34f7eca4d44271cb192f0c32a79fb31f] 36991674} {[python:2.7-slim-pyyaml] 205046931} {[gcr.io/google_containers/pause:2.0] 350164} {[gcr.io/google_containers/kube-registry-proxy:0.3] 151205002} {[gcr.io/google_containers/etcd:2.0.12] 15265152} {[gcr.io/google_containers/serve_hostname:1.1] 4522409}]}} | |
Feb 28 20:43:28.116: INFO: | |
Logging kubelet events for node spotter-kube-rkt-master | |
Feb 28 20:43:28.203: INFO: | |
Logging pods the kubelet thinks is on node spotter-kube-rkt-master | |
Feb 28 20:43:28.374: INFO: Unable to retrieve kubelet pods for node spotter-kube-rkt-master | |
Feb 28 20:43:28.374: INFO: | |
Logging node info for node spotter-kube-rkt-minion-8b1u | |
Feb 28 20:43:28.460: INFO: Node Info: &{{ } {spotter-kube-rkt-minion-8b1u /api/v1/nodes/spotter-kube-rkt-minion-8b1u 2e5bf2ae-de94-11e5-9ed9-42010af00002 3754 0 2016-02-28 19:26:11 -0800 PST <nil> <nil> map[failure-domain.beta.kubernetes.io/region:us-east1 failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-minion-8b1u beta.kubernetes.io/instance-type:n1-standard-2] map[]} {10.245.1.0/24 9623675355699996544 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-minion-8b1u false} {map[pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI}] map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] [{OutOfDisk False 2016-02-28 20:43:21 -0800 PST 2016-02-28 19:26:11 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 20:43:21 -0800 PST 2016-02-28 19:26:11 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.5} {ExternalIP 104.196.7.86}] {{10250}} {374318fe80761a3e4e3d72f72bd62802 374318FE-8076-1A3E-4E3D-72F72BD62802 b4070cab-7518-489e-9e3e-c5003667ca85 4.4.1-coreos CoreOS 960.0.0 (Coeur Rouge) rkt://1.0.0+git9003f4a-dirty v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[b.gcr.io/k8s_authenticated_test/serve_hostname:1.1] 4529152} {[coreos.com/rkt/stage1-coreos:1.0.0+git9003f4a-dirty] 79951360} {[registry-1.docker.io/library/busybox:latest] 1315840} {[registry-1.docker.io/library/nginx:latest] 136025600} {[gcr.io/google_containers/pause:2.0] 355840} {[gcr.io/google_containers/resource_consumer:beta2] 138319872} {[gcr.io/google_containers/hostexec:1.2] 14018048} {[gcr.io/google_samples/gb-redisslave:v1] 113454592} {[registry-1.docker.io/library/redis:latest] 152067072} {[gcr.io/google_samples/gb-frontend:v4] 513876992} {[gcr.io/google_containers/update-demo:nautilus] 28160} {[gcr.io/google_containers/busybox:1.24] 1315840} {[gcr.io/google_containers/eptest:0.1] 2977792} {[gcr.io/google_containers/serve_hostname:1.1] 4529152} {[gcr.io/google_containers/mounttest:0.6] 2090496} {[gcr.io/google_containers/netexec:1.4] 7513088} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v0.1.0] 35423744}]}} | |
Feb 28 20:43:28.460: INFO: | |
Logging kubelet events for node spotter-kube-rkt-minion-8b1u | |
Feb 28 20:43:28.544: INFO: | |
Logging pods the kubelet thinks is on node spotter-kube-rkt-minion-8b1u | |
Feb 28 20:43:28.815: INFO: kubernetes-dashboard-v0.1.0-xx4kj started at <nil> (0 container statuses recorded) | |
Feb 28 20:43:29.294: INFO: | |
Latency metrics for node spotter-kube-rkt-minion-8b1u | |
Feb 28 20:43:29.294: INFO: | |
Logging node info for node spotter-kube-rkt-minion-yii0 | |
Feb 28 20:43:29.375: INFO: Node Info: &{{ } {spotter-kube-rkt-minion-yii0 /api/v1/nodes/spotter-kube-rkt-minion-yii0 28de6c1f-de94-11e5-9ed9-42010af00002 3760 0 2016-02-28 19:26:02 -0800 PST <nil> <nil> map[failure-domain.beta.kubernetes.io/region:us-east1 failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-minion-yii0 beta.kubernetes.io/instance-type:n1-standard-2] map[]} {10.245.3.0/24 9944607117135381947 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-minion-yii0 false} {map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] [{OutOfDisk False 2016-02-28 20:43:23 -0800 PST 2016-02-28 19:40:42 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 20:43:23 -0800 PST 2016-02-28 19:40:42 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.3} {ExternalIP 104.196.102.116}] {{10250}} {b38c5389751fa1f6a57a3e0bc169499a B38C5389-751F-A1F6-A57A-3E0BC169499A c09d0613-ea30-4bb2-a090-591baae2e0d3 4.4.1-coreos CoreOS 970.1.0 (Coeur Rouge) rkt://1.0.0+git9003f4a-dirty v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[gcr.io/google_containers/netexec:1.4] 7513088} {[coreos.com/rkt/stage1-coreos:1.0.0+git9003f4a-dirty] 79951360} {[registry-1.docker.io/library/nginx:latest] 136025600} {[gcr.io/google_containers/resource_consumer:beta2] 138319872} {[gcr.io/google_samples/gb-frontend:v4] 513876992} {[gcr.io/google_containers/busybox:1.24] 1315840} {[gcr.io/google_containers/serve_hostname:1.1] 4529152} {[gcr.io/google_containers/exechealthz:1.0] 7314944} {[gcr.io/google_containers/skydns:2015-10-13-8c72f8c] 41955328} {[gcr.io/google_containers/kube2sky:1.12] 24685056} {[gcr.io/google_containers/etcd:2.0.9] 12827136}]}} | |
Feb 28 20:43:29.375: INFO: | |
Logging kubelet events for node spotter-kube-rkt-minion-yii0 | |
Feb 28 20:43:29.462: INFO: | |
Logging pods the kubelet thinks is on node spotter-kube-rkt-minion-yii0 | |
Feb 28 20:43:29.714: INFO: kube-dns-v10-ucm9e started at <nil> (0 container statuses recorded) | |
Feb 28 20:43:30.032: INFO: | |
Latency metrics for node spotter-kube-rkt-minion-yii0 | |
Feb 28 20:43:30.032: INFO: | |
Logging node info for node spotter-kube-rkt-minion-yo39 | |
Feb 28 20:43:30.114: INFO: Node Info: &{{ } {spotter-kube-rkt-minion-yo39 /api/v1/nodes/spotter-kube-rkt-minion-yo39 28d9f8e6-de94-11e5-9ed9-42010af00002 3765 0 2016-02-28 19:26:02 -0800 PST <nil> <nil> map[beta.kubernetes.io/instance-type:n1-standard-2 failure-domain.beta.kubernetes.io/region:us-east1 failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-minion-yo39] map[]} {10.245.2.0/24 5812756469575456144 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-minion-yo39 false} {map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] [{OutOfDisk False 2016-02-28 20:43:29 -0800 PST 2016-02-28 19:39:36 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 20:43:29 -0800 PST 2016-02-28 19:39:36 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.4} {ExternalIP 104.196.107.90}] {{10250}} {d67f51f98b40c228849e17dc0b2cff8f D67F51F9-8B40-C228-849E-17DC0B2CFF8F 37634038-5d12-46d7-8197-fc5e74a7aef8 4.4.1-coreos CoreOS 970.1.0 (Coeur Rouge) rkt://1.0.0+git9003f4a-dirty v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[b.gcr.io/k8s_authenticated_test/serve_hostname:1.1] 4529152} {[coreos.com/rkt/stage1-coreos:1.0.0+git9003f4a-dirty] 79951360} {[gcr.io/google_containers/hostexec:1.2] 14018048} {[registry-1.docker.io/library/nginx:latest] 136025600} {[gcr.io/google_containers/goproxy:0.1] 5495808} {[gcr.io/google_containers/busybox:1.24] 1315840} {[gcr.io/google_containers/resource_consumer:beta2] 138319872} {[gcr.io/google_samples/gb-redisslave:v1] 113454592} {[gcr.io/google_samples/gb-frontend:v4] 513876992} {[gcr.io/google_containers/update-demo:nautilus] 28160} {[gcr.io/google_containers/netexec:1.4] 7513088} {[gcr.io/google_containers/serve_hostname:1.1] 4529152} {[gcr.io/google_containers/glbc:0.5.1] 203040256} {[gcr.io/google_containers/defaultbackend:1.0] 7729152}]}} | |
Feb 28 20:43:30.114: INFO: | |
Logging kubelet events for node spotter-kube-rkt-minion-yo39 | |
Feb 28 20:43:30.197: INFO: | |
Logging pods the kubelet thinks is on node spotter-kube-rkt-minion-yo39 | |
Feb 28 20:43:30.455: INFO: l7-lb-controller-vg83c started at <nil> (0 container statuses recorded) | |
Feb 28 20:43:30.790: INFO: | |
Latency metrics for node spotter-kube-rkt-minion-yo39 | |
Feb 28 20:43:30.790: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-kubectl-0k2o8" for this suite. | |
• Failure [9.484 seconds] | |
Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1082 | |
Kubectl cluster-info | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:554 | |
should check if Kubernetes master services is included in cluster-info [Conformance] [It] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:553 | |
Feb 28 20:43:27.679: Missing Heapster in kubectl cluster-info | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:550 | |
------------------------------ | |
SS | |
------------------------------ | |
Pods | |
should support remote command execution over websockets | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:863 | |
[BeforeEach] Pods | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:83 | |
STEP: Creating a kubernetes client | |
Feb 28 20:43:36.223: INFO: >>> testContext.KubeConfig: /home/spotter/.kube/config | |
STEP: Building a namespace api object | |
Feb 28 20:43:36.312: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-cfdqm | |
Feb 28 20:43:36.395: INFO: Service account default in ns e2e-tests-pods-cfdqm with secrets found. (83.11183ms) | |
STEP: Waiting for a default service account to be provisioned in namespace | |
Feb 28 20:43:36.395: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-cfdqm | |
Feb 28 20:43:36.478: INFO: Service account default in ns e2e-tests-pods-cfdqm with secrets found. (82.647847ms) | |
[It] should support remote command execution over websockets | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:863 | |
Feb 28 20:43:36.478: INFO: >>> testContext.KubeConfig: /home/spotter/.kube/config | |
STEP: creating the pod | |
STEP: submitting the pod to kubernetes | |
Feb 28 20:43:36.569: INFO: Waiting up to 5m0s for pod pod-exec-websocket-febf0007-de9e-11e5-a1fb-54ee75510eb4 status to be running | |
Feb 28 20:43:36.659: INFO: Waiting for pod pod-exec-websocket-febf0007-de9e-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pods-cfdqm' status to be 'running'(found phase: "Pending", readiness: false) (89.951167ms elapsed) | |
Feb 28 20:43:38.745: INFO: Waiting for pod pod-exec-websocket-febf0007-de9e-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-pods-cfdqm' status to be 'running'(found phase: "Pending", readiness: false) (2.176631556s elapsed) | |
Feb 28 20:43:40.840: INFO: Found pod 'pod-exec-websocket-febf0007-de9e-11e5-a1fb-54ee75510eb4' on node 'spotter-kube-rkt-minion-8b1u' | |
STEP: deleting the pod | |
[AfterEach] Pods | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:84 | |
Feb 28 20:43:41.485: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-pods-cfdqm" for this suite. | |
• [SLOW TEST:10.684 seconds] | |
Pods | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1263 | |
should support remote command execution over websockets | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:863 | |
------------------------------ | |
SS | |
------------------------------ | |
DaemonRestart [Disruptive] | |
Scheduler should continue assigning pods to nodes across restart | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:295 | |
[BeforeEach] DaemonRestart [Disruptive] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:83 | |
STEP: Creating a kubernetes client | |
Feb 28 20:43:46.908: INFO: >>> testContext.KubeConfig: /home/spotter/.kube/config | |
STEP: Building a namespace api object | |
Feb 28 20:43:46.998: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-daemonrestart-oqw0j | |
Feb 28 20:43:47.081: INFO: Service account default in ns e2e-tests-daemonrestart-oqw0j with secrets found. (83.16462ms) | |
STEP: Waiting for a default service account to be provisioned in namespace | |
Feb 28 20:43:47.081: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-daemonrestart-oqw0j | |
Feb 28 20:43:47.163: INFO: Service account default in ns e2e-tests-daemonrestart-oqw0j with secrets found. (82.474426ms) | |
[BeforeEach] DaemonRestart [Disruptive] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:246 | |
STEP: creating replication controller daemonrestart10-611bb944-de94-11e5-a1fb-54ee75510eb4 in namespace e2e-tests-daemonrestart-oqw0j | |
Feb 28 20:43:47.255: INFO: Created replication controller with name: daemonrestart10-611bb944-de94-11e5-a1fb-54ee75510eb4, namespace: e2e-tests-daemonrestart-oqw0j, replica count: 10 | |
Feb 28 20:43:57.255: INFO: daemonrestart10-611bb944-de94-11e5-a1fb-54ee75510eb4 Pods: 10 out of 10 created, 5 running, 5 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Feb 28 20:44:07.255: INFO: daemonrestart10-611bb944-de94-11e5-a1fb-54ee75510eb4 Pods: 10 out of 10 created, 9 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Feb 28 20:44:17.256: INFO: daemonrestart10-611bb944-de94-11e5-a1fb-54ee75510eb4 Pods: 10 out of 10 created, 9 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Feb 28 20:44:27.256: INFO: daemonrestart10-611bb944-de94-11e5-a1fb-54ee75510eb4 Pods: 10 out of 10 created, 9 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Feb 28 20:44:37.256: INFO: daemonrestart10-611bb944-de94-11e5-a1fb-54ee75510eb4 Pods: 10 out of 10 created, 9 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Feb 28 20:44:47.256: INFO: daemonrestart10-611bb944-de94-11e5-a1fb-54ee75510eb4 Pods: 10 out of 10 created, 9 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Feb 28 20:44:57.256: INFO: daemonrestart10-611bb944-de94-11e5-a1fb-54ee75510eb4 Pods: 10 out of 10 created, 9 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Feb 28 20:45:07.257: INFO: daemonrestart10-611bb944-de94-11e5-a1fb-54ee75510eb4 Pods: 10 out of 10 created, 9 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Feb 28 20:45:17.257: INFO: daemonrestart10-611bb944-de94-11e5-a1fb-54ee75510eb4 Pods: 10 out of 10 created, 9 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Feb 28 20:45:27.257: INFO: daemonrestart10-611bb944-de94-11e5-a1fb-54ee75510eb4 Pods: 10 out of 10 created, 9 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Feb 28 20:45:37.257: INFO: daemonrestart10-611bb944-de94-11e5-a1fb-54ee75510eb4 Pods: 10 out of 10 created, 9 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Feb 28 20:45:47.257: INFO: daemonrestart10-611bb944-de94-11e5-a1fb-54ee75510eb4 Pods: 10 out of 10 created, 9 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Feb 28 20:45:57.258: INFO: daemonrestart10-611bb944-de94-11e5-a1fb-54ee75510eb4 Pods: 10 out of 10 created, 9 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Feb 28 20:46:07.258: INFO: daemonrestart10-611bb944-de94-11e5-a1fb-54ee75510eb4 Pods: 10 out of 10 created, 9 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Feb 28 20:46:17.258: INFO: daemonrestart10-611bb944-de94-11e5-a1fb-54ee75510eb4 Pods: 10 out of 10 created, 9 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Feb 28 20:46:27.258: INFO: daemonrestart10-611bb944-de94-11e5-a1fb-54ee75510eb4 Pods: 10 out of 10 created, 9 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Feb 28 20:46:37.258: INFO: daemonrestart10-611bb944-de94-11e5-a1fb-54ee75510eb4 Pods: 10 out of 10 created, 9 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Feb 28 20:46:47.259: INFO: daemonrestart10-611bb944-de94-11e5-a1fb-54ee75510eb4 Pods: 10 out of 10 created, 9 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Feb 28 20:46:57.259: INFO: daemonrestart10-611bb944-de94-11e5-a1fb-54ee75510eb4 Pods: 10 out of 10 created, 9 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Feb 28 20:47:07.259: INFO: daemonrestart10-611bb944-de94-11e5-a1fb-54ee75510eb4 Pods: 10 out of 10 created, 9 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Feb 28 20:47:17.259: INFO: daemonrestart10-611bb944-de94-11e5-a1fb-54ee75510eb4 Pods: 10 out of 10 created, 9 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Feb 28 20:47:27.259: INFO: daemonrestart10-611bb944-de94-11e5-a1fb-54ee75510eb4 Pods: 10 out of 10 created, 9 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Feb 28 20:47:37.260: INFO: daemonrestart10-611bb944-de94-11e5-a1fb-54ee75510eb4 Pods: 10 out of 10 created, 9 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Feb 28 20:47:47.260: INFO: daemonrestart10-611bb944-de94-11e5-a1fb-54ee75510eb4 Pods: 10 out of 10 created, 9 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Feb 28 20:47:57.260: INFO: daemonrestart10-611bb944-de94-11e5-a1fb-54ee75510eb4 Pods: 10 out of 10 created, 10 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
[It] Scheduler should continue assigning pods to nodes across restart | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:295 | |
Feb 28 20:47:57.260: INFO: Checking if Daemon kube-scheduler on node 104.196.32.11 is up by polling for a 200 on its /healthz endpoint | |
Feb 28 20:48:03.368: INFO: Killing Daemon kube-scheduler on node 104.196.32.11 | |
STEP: Scaling replication controller daemonrestart10-611bb944-de94-11e5-a1fb-54ee75510eb4 in namespace e2e-tests-daemonrestart-oqw0j to 15 | |
Feb 28 20:48:15.114: INFO: Checking if Daemon kube-scheduler on node 104.196.32.11 is up by polling for a 200 on its /healthz endpoint | |
STEP: Scaling replication controller daemonrestart10-611bb944-de94-11e5-a1fb-54ee75510eb4 in namespace e2e-tests-daemonrestart-oqw0j to 15 | |
[AfterEach] DaemonRestart [Disruptive] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:84 | |
Feb 28 20:48:36.542: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-daemonrestart-oqw0j" for this suite. | |
[AfterEach] DaemonRestart [Disruptive] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:250 | |
• [SLOW TEST:295.324 seconds] | |
DaemonRestart [Disruptive] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:316 | |
Scheduler should continue assigning pods to nodes across restart | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:295 | |
------------------------------ | |
Restart [Disruptive] | |
should restart all nodes and ensure all nodes and pods recover | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:116 | |
[BeforeEach] Restart [Disruptive] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:83 | |
STEP: Creating a kubernetes client | |
Feb 28 20:48:42.232: INFO: >>> testContext.KubeConfig: /home/spotter/.kube/config | |
STEP: Building a namespace api object | |
Feb 28 20:48:42.320: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-restart-7giwq | |
Feb 28 20:48:42.401: INFO: Service account default in ns e2e-tests-restart-7giwq had 0 secrets, ignoring for 2s: <nil> | |
Feb 28 20:48:44.485: INFO: Service account default in ns e2e-tests-restart-7giwq with secrets found. (2.165128144s) | |
STEP: Waiting for a default service account to be provisioned in namespace | |
Feb 28 20:48:44.485: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-restart-7giwq | |
Feb 28 20:48:44.563: INFO: Service account default in ns e2e-tests-restart-7giwq with secrets found. (77.874114ms) | |
[BeforeEach] Restart [Disruptive] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:61 | |
[It] should restart all nodes and ensure all nodes and pods recover | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:116 | |
STEP: ensuring all nodes are ready | |
Feb 28 20:48:46.918: INFO: Successfully found 3 nodes | |
Feb 28 20:48:46.918: INFO: Waiting up to 17.645198862s for node spotter-kube-rkt-minion-8b1u condition Ready to be true | |
Feb 28 20:48:46.918: INFO: Waiting up to 17.645198862s for node spotter-kube-rkt-minion-yii0 condition Ready to be true | |
Feb 28 20:48:46.918: INFO: Waiting up to 17.645198862s for node spotter-kube-rkt-minion-yo39 condition Ready to be true | |
Feb 28 20:48:47.198: INFO: Got the following nodes before restart: [spotter-kube-rkt-minion-8b1u spotter-kube-rkt-minion-yii0 spotter-kube-rkt-minion-yo39] | |
STEP: ensuring all pods are running and ready | |
Feb 28 20:48:47.198: INFO: Waiting up to 2m0s for 8 pods to be running and ready: [etcd-server-kubernetes-master-spotter-kube-rkt-master kube-apiserver-kubernetes-master-spotter-kube-rkt-master kube-controller-manager-kubernetes-master-spotter-kube-rkt-master l7-lb-controller-vg83c etcd-server-events-kubernetes-master-spotter-kube-rkt-master kube-dns-v10-ucm9e kube-scheduler-kubernetes-master-spotter-kube-rkt-master kubernetes-dashboard-v0.1.0-xx4kj] | |
Feb 28 20:48:47.198: INFO: Waiting up to 2m0s for pod etcd-server-kubernetes-master-spotter-kube-rkt-master status to be running and ready | |
Feb 28 20:48:47.198: INFO: Waiting up to 2m0s for pod kube-apiserver-kubernetes-master-spotter-kube-rkt-master status to be running and ready | |
Feb 28 20:48:47.198: INFO: Waiting up to 2m0s for pod kube-controller-manager-kubernetes-master-spotter-kube-rkt-master status to be running and ready | |
Feb 28 20:48:47.198: INFO: Waiting up to 2m0s for pod l7-lb-controller-vg83c status to be running and ready | |
Feb 28 20:48:47.198: INFO: Waiting up to 2m0s for pod etcd-server-events-kubernetes-master-spotter-kube-rkt-master status to be running and ready | |
Feb 28 20:48:47.198: INFO: Waiting up to 2m0s for pod kube-dns-v10-ucm9e status to be running and ready | |
Feb 28 20:48:47.198: INFO: Waiting up to 2m0s for pod kube-scheduler-kubernetes-master-spotter-kube-rkt-master status to be running and ready | |
Feb 28 20:48:47.198: INFO: Waiting up to 2m0s for pod kubernetes-dashboard-v0.1.0-xx4kj status to be running and ready | |
Feb 28 20:48:47.488: INFO: Wanted all 8 pods to be running and ready. Result: true. Pods: [etcd-server-kubernetes-master-spotter-kube-rkt-master kube-apiserver-kubernetes-master-spotter-kube-rkt-master kube-controller-manager-kubernetes-master-spotter-kube-rkt-master l7-lb-controller-vg83c etcd-server-events-kubernetes-master-spotter-kube-rkt-master kube-dns-v10-ucm9e kube-scheduler-kubernetes-master-spotter-kube-rkt-master kubernetes-dashboard-v0.1.0-xx4kj] | |
STEP: restarting all of the nodes | |
STEP: getting the name of the template for the managed instance group | |
Feb 28 20:48:51.488: INFO: Running gcloud [compute instance-groups managed --project=coreos-gce-testing describe --zone=us-east1-b spotter-kube-rkt-minion-group] | |
baseInstanceName: spotter-kube-rkt-minion | |
creationTimestamp: '2016-02-28T19:25:04.628-08:00' | |
currentActions: | |
abandoning: 0 | |
creating: 0 | |
deleting: 0 | |
none: 3 | |
recreating: 0 | |
refreshing: 0 | |
restarting: 0 | |
fingerprint: 42WmSpB8rSM= | |
id: '8138790142891394303' | |
instanceGroup: https://www.googleapis.com/compute/v1/projects/coreos-gce-testing/zones/us-east1-b/instanceGroups/spotter-kube-rkt-minion-group | |
instanceTemplate: https://www.googleapis.com/compute/v1/projects/coreos-gce-testing/global/instanceTemplates/spotter-kube-rkt-minion-template | |
kind: compute#instanceGroupManager | |
name: spotter-kube-rkt-minion-group | |
selfLink: https://www.googleapis.com/compute/v1/projects/coreos-gce-testing/zones/us-east1-b/instanceGroupManagers/spotter-kube-rkt-minion-group | |
targetSize: 3 | |
zone: https://www.googleapis.com/compute/v1/projects/coreos-gce-testing/zones/us-east1-b | |
Feb 28 20:48:53.431: INFO: MIG group spotter-kube-rkt-minion-group using template: spotter-kube-rkt-minion-template | |
STEP: starting the MIG rolling update to spotter-kube-rkt-minion-template | |
Feb 28 20:48:57.561: INFO: Running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s] | |
ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. | |
Feb 28 20:48:59.152: INFO: Got error running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. \n" | |
Feb 28 20:48:59.561: INFO: Running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s] | |
ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. | |
Feb 28 20:49:01.457: INFO: Got error running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. \n" | |
Feb 28 20:49:01.561: INFO: Running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s] | |
ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. | |
Feb 28 20:49:03.180: INFO: Got error running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. \n" | |
Feb 28 20:49:03.561: INFO: Running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s] | |
ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. | |
Feb 28 20:49:05.093: INFO: Got error running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. \n" | |
Feb 28 20:49:05.561: INFO: Running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s] | |
ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. | |
Feb 28 20:49:07.125: INFO: Got error running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. \n" | |
Feb 28 20:49:07.561: INFO: Running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s] | |
ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. | |
Feb 28 20:49:09.111: INFO: Got error running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. \n" | |
Feb 28 20:49:09.561: INFO: Running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s] | |
ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. | |
Feb 28 20:49:11.038: INFO: Got error running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. \n" | |
Feb 28 20:49:11.561: INFO: Running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s] | |
ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. | |
Feb 28 20:49:13.094: INFO: Got error running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. \n" | |
Feb 28 20:49:13.561: INFO: Running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s] | |
ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. | |
Feb 28 20:49:15.301: INFO: Got error running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. \n" | |
Feb 28 20:49:15.561: INFO: Running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s] | |
ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. | |
Feb 28 20:49:17.097: INFO: Got error running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. \n" | |
Feb 28 20:49:17.561: INFO: Running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s] | |
ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. | |
Feb 28 20:49:19.106: INFO: Got error running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. \n" | |
Feb 28 20:49:19.561: INFO: Running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s] | |
ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. | |
Feb 28 20:49:21.093: INFO: Got error running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. \n" | |
Feb 28 20:49:21.561: INFO: Running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s] | |
ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. | |
Feb 28 20:49:23.106: INFO: Got error running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. \n" | |
Feb 28 20:49:23.561: INFO: Running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s] | |
ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. | |
Feb 28 20:49:25.076: INFO: Got error running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. \n" | |
Feb 28 20:49:25.561: INFO: Running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s] | |
ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. | |
Feb 28 20:49:27.106: INFO: Got error running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. \n" | |
Feb 28 20:49:27.106: INFO: Running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s] | |
ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. | |
Feb 28 20:49:28.628: INFO: Got error running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. \n" | |
Feb 28 20:49:30.759: INFO: Running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s] | |
ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. | |
Feb 28 20:49:32.288: INFO: Got error running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. \n" | |
Feb 28 20:49:32.759: INFO: Running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s] | |
ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. | |
Feb 28 20:49:34.382: INFO: Got error running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. \n" | |
Feb 28 20:49:34.759: INFO: Running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s] | |
ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. | |
Feb 28 20:49:36.298: INFO: Got error running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. \n" | |
Feb 28 20:49:36.759: INFO: Running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s] | |
ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. | |
Feb 28 20:49:38.383: INFO: Got error running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. \n" | |
Feb 28 20:49:38.759: INFO: Running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s] | |
ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. | |
Feb 28 20:49:40.340: INFO: Got error running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. \n" | |
Feb 28 20:49:40.759: INFO: Running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s] | |
ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. | |
Feb 28 20:49:42.272: INFO: Got error running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. \n" | |
Feb 28 20:49:42.759: INFO: Running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s] | |
ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. | |
Feb 28 20:49:44.309: INFO: Got error running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. \n" | |
Feb 28 20:49:44.759: INFO: Running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s] | |
ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. | |
Feb 28 20:49:46.296: INFO: Got error running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. \n" | |
Feb 28 20:49:46.759: INFO: Running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s] | |
ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. | |
Feb 28 20:49:48.304: INFO: Got error running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. \n" | |
Feb 28 20:49:48.759: INFO: Running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s] | |
ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. | |
Feb 28 20:49:50.312: INFO: Got error running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. \n" | |
Feb 28 20:49:50.759: INFO: Running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s] | |
ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. | |
Feb 28 20:49:52.284: INFO: Got error running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. \n" | |
Feb 28 20:49:52.759: INFO: Running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s] | |
ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. | |
Feb 28 20:49:54.309: INFO: Got error running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. \n" | |
Feb 28 20:49:54.759: INFO: Running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s] | |
ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. | |
Feb 28 20:49:56.316: INFO: Got error running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. \n" | |
Feb 28 20:49:56.759: INFO: Running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s] | |
ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. | |
Feb 28 20:49:58.312: INFO: Got error running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. \n" | |
Feb 28 20:49:58.759: INFO: Running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s] | |
ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. | |
Feb 28 20:50:00.334: INFO: Got error running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. \n" | |
Feb 28 20:50:00.334: INFO: Running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s] | |
ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. | |
Feb 28 20:50:01.885: INFO: Got error running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. \n" | |
[AfterEach] Restart [Disruptive] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:84 | |
STEP: Collecting events from namespace "e2e-tests-restart-7giwq". | |
Feb 28 20:50:02.137: INFO: POD NODE PHASE GRACE CONDITIONS | |
Feb 28 20:50:02.137: INFO: etcd-server-events-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 20:35:11 -0800 PST }] | |
Feb 28 20:50:02.137: INFO: etcd-server-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 20:35:11 -0800 PST }] | |
Feb 28 20:50:02.137: INFO: kube-apiserver-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:25:59 -0800 PST }] | |
Feb 28 20:50:02.137: INFO: kube-controller-manager-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:26:19 -0800 PST }] | |
Feb 28 20:50:02.137: INFO: kube-dns-v10-ucm9e spotter-kube-rkt-minion-yii0 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:40:53 -0800 PST }] | |
Feb 28 20:50:02.137: INFO: kube-scheduler-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 20:48:05 -0800 PST }] | |
Feb 28 20:50:02.137: INFO: kubernetes-dashboard-v0.1.0-xx4kj spotter-kube-rkt-minion-8b1u Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:26:43 -0800 PST }] | |
Feb 28 20:50:02.137: INFO: l7-lb-controller-vg83c spotter-kube-rkt-minion-yo39 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:42:06 -0800 PST }] | |
Feb 28 20:50:02.137: INFO: | |
Feb 28 20:50:02.225: INFO: | |
Logging node info for node spotter-kube-rkt-master | |
Feb 28 20:50:02.306: INFO: Node Info: &{{ } {spotter-kube-rkt-master /api/v1/nodes/spotter-kube-rkt-master 28a99cb5-de94-11e5-9ed9-42010af00002 4039 0 2016-02-28 19:26:02 -0800 PST <nil> <nil> map[beta.kubernetes.io/instance-type:n1-standard-2 failure-domain.beta.kubernetes.io/region:us-east1 failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-master] map[]} {10.245.0.0/24 1057893855773431411 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-master true} {map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] map[memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI}] [{OutOfDisk False 2016-02-28 20:50:01 -0800 PST 2016-02-28 19:26:02 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 20:50:01 -0800 PST 2016-02-28 19:26:02 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.2} {ExternalIP 104.196.32.11}] {{10250}} {18f8f8ae3bfd63d33ca6152c840ac772 18F8F8AE-3BFD-63D3-3CA6-152C840AC772 85ba1a68-a60b-4788-8d0d-31581d8a1dbc 4.4.1-coreos CoreOS 960.0.0 (Coeur Rouge) docker://1.10.0 v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[gcr.io/google_containers/kube-apiserver:e75cb1e68585bef333c384799aa55b36] 62275978} {[gcr.io/google_containers/kube-controller-manager:5ceda9f0fcc85cf5112fd4eab97edf76] 55494410} {[gcr.io/google_containers/kube-scheduler:34f7eca4d44271cb192f0c32a79fb31f] 36991674} {[python:2.7-slim-pyyaml] 205046931} {[gcr.io/google_containers/pause:2.0] 350164} {[gcr.io/google_containers/kube-registry-proxy:0.3] 151205002} {[gcr.io/google_containers/etcd:2.0.12] 15265152} {[gcr.io/google_containers/serve_hostname:1.1] 4522409}]}} | |
Feb 28 20:50:02.306: INFO: | |
Logging kubelet events for node spotter-kube-rkt-master | |
Feb 28 20:50:02.394: INFO: | |
Logging pods the kubelet thinks is on node spotter-kube-rkt-master | |
Feb 28 20:50:02.559: INFO: Unable to retrieve kubelet pods for node spotter-kube-rkt-master | |
Feb 28 20:50:02.559: INFO: | |
Logging node info for node spotter-kube-rkt-minion-8b1u | |
Feb 28 20:50:02.644: INFO: Node Info: &{{ } {spotter-kube-rkt-minion-8b1u /api/v1/nodes/spotter-kube-rkt-minion-8b1u 2e5bf2ae-de94-11e5-9ed9-42010af00002 4036 0 2016-02-28 19:26:11 -0800 PST <nil> <nil> map[beta.kubernetes.io/instance-type:n1-standard-2 failure-domain.beta.kubernetes.io/region:us-east1 failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-minion-8b1u] map[]} {10.245.1.0/24 9623675355699996544 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-minion-8b1u false} {map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] [{OutOfDisk False 2016-02-28 20:49:53 -0800 PST 2016-02-28 19:26:11 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 20:49:53 -0800 PST 2016-02-28 19:26:11 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.5} {ExternalIP 104.196.7.86}] {{10250}} {374318fe80761a3e4e3d72f72bd62802 374318FE-8076-1A3E-4E3D-72F72BD62802 b4070cab-7518-489e-9e3e-c5003667ca85 4.4.1-coreos CoreOS 960.0.0 (Coeur Rouge) rkt://1.0.0+git9003f4a-dirty v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[coreos.com/rkt/stage1-coreos:1.0.0+git9003f4a-dirty] 79951360} {[b.gcr.io/k8s_authenticated_test/serve_hostname:1.1] 4529152} {[registry-1.docker.io/library/busybox:latest] 1315840} {[registry-1.docker.io/library/nginx:latest] 136025600} {[gcr.io/google_containers/pause:2.0] 355840} {[gcr.io/google_containers/resource_consumer:beta2] 138319872} {[gcr.io/google_containers/hostexec:1.2] 14018048} {[gcr.io/google_samples/gb-redisslave:v1] 113454592} {[registry-1.docker.io/library/redis:latest] 152067072} {[gcr.io/google_samples/gb-frontend:v4] 513876992} {[gcr.io/google_containers/update-demo:nautilus] 28160} {[gcr.io/google_containers/busybox:1.24] 1315840} {[gcr.io/google_containers/eptest:0.1] 2977792} {[gcr.io/google_containers/serve_hostname:1.1] 4529152} {[gcr.io/google_containers/mounttest:0.6] 2090496} {[gcr.io/google_containers/netexec:1.4] 7513088} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v0.1.0] 35423744}]}} | |
Feb 28 20:50:02.644: INFO: | |
Logging kubelet events for node spotter-kube-rkt-minion-8b1u | |
Feb 28 20:50:02.732: INFO: | |
Logging pods the kubelet thinks is on node spotter-kube-rkt-minion-8b1u | |
Feb 28 20:50:02.989: INFO: kubernetes-dashboard-v0.1.0-xx4kj started at <nil> (0 container statuses recorded) | |
Feb 28 20:50:03.579: INFO: | |
Latency metrics for node spotter-kube-rkt-minion-8b1u | |
Feb 28 20:50:03.579: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:11.951238s} | |
Feb 28 20:50:03.579: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:10.868581s} | |
Feb 28 20:50:03.579: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.99 Latency:10.086495s} | |
Feb 28 20:50:03.579: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.9 Latency:10.086495s} | |
Feb 28 20:50:03.579: INFO: | |
Logging node info for node spotter-kube-rkt-minion-yii0 | |
Feb 28 20:50:03.661: INFO: Node Info: &{{ } {spotter-kube-rkt-minion-yii0 /api/v1/nodes/spotter-kube-rkt-minion-yii0 28de6c1f-de94-11e5-9ed9-42010af00002 4037 0 2016-02-28 19:26:02 -0800 PST <nil> <nil> map[failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-minion-yii0 beta.kubernetes.io/instance-type:n1-standard-2 failure-domain.beta.kubernetes.io/region:us-east1] map[]} {10.245.3.0/24 9944607117135381947 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-minion-yii0 false} {map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] map[pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI}] [{OutOfDisk False 2016-02-28 20:49:54 -0800 PST 2016-02-28 19:40:42 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 20:49:54 -0800 PST 2016-02-28 19:40:42 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.3} {ExternalIP 104.196.102.116}] {{10250}} {b38c5389751fa1f6a57a3e0bc169499a B38C5389-751F-A1F6-A57A-3E0BC169499A c09d0613-ea30-4bb2-a090-591baae2e0d3 4.4.1-coreos CoreOS 970.1.0 (Coeur Rouge) rkt://1.0.0+git9003f4a-dirty v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[coreos.com/rkt/stage1-coreos:1.0.0+git9003f4a-dirty] 79951360} {[gcr.io/google_containers/pause:2.0] 355840} {[gcr.io/google_containers/netexec:1.4] 7513088} {[registry-1.docker.io/library/nginx:latest] 136025600} {[gcr.io/google_containers/resource_consumer:beta2] 138319872} {[gcr.io/google_samples/gb-frontend:v4] 513876992} {[gcr.io/google_containers/busybox:1.24] 1315840} {[gcr.io/google_containers/serve_hostname:1.1] 4529152} {[gcr.io/google_containers/exechealthz:1.0] 7314944} {[gcr.io/google_containers/skydns:2015-10-13-8c72f8c] 41955328} {[gcr.io/google_containers/kube2sky:1.12] 24685056} {[gcr.io/google_containers/etcd:2.0.9] 12827136}]}} | |
Feb 28 20:50:03.661: INFO: | |
Logging kubelet events for node spotter-kube-rkt-minion-yii0 | |
Feb 28 20:50:03.742: INFO: | |
Logging pods the kubelet thinks is on node spotter-kube-rkt-minion-yii0 | |
Feb 28 20:50:03.994: INFO: kube-dns-v10-ucm9e started at <nil> (0 container statuses recorded) | |
Feb 28 20:50:04.305: INFO: | |
Latency metrics for node spotter-kube-rkt-minion-yii0 | |
Feb 28 20:50:04.305: INFO: | |
Logging node info for node spotter-kube-rkt-minion-yo39 | |
Feb 28 20:50:04.390: INFO: Node Info: &{{ } {spotter-kube-rkt-minion-yo39 /api/v1/nodes/spotter-kube-rkt-minion-yo39 28d9f8e6-de94-11e5-9ed9-42010af00002 4038 0 2016-02-28 19:26:02 -0800 PST <nil> <nil> map[kubernetes.io/hostname:spotter-kube-rkt-minion-yo39 beta.kubernetes.io/instance-type:n1-standard-2 failure-domain.beta.kubernetes.io/region:us-east1 failure-domain.beta.kubernetes.io/zone:us-east1-b] map[]} {10.245.2.0/24 5812756469575456144 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-minion-yo39 false} {map[pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI}] map[memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI}] [{OutOfDisk False 2016-02-28 20:50:00 -0800 PST 2016-02-28 19:39:36 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 20:50:00 -0800 PST 2016-02-28 19:39:36 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.4} {ExternalIP 104.196.107.90}] {{10250}} {d67f51f98b40c228849e17dc0b2cff8f D67F51F9-8B40-C228-849E-17DC0B2CFF8F 37634038-5d12-46d7-8197-fc5e74a7aef8 4.4.1-coreos CoreOS 970.1.0 (Coeur Rouge) rkt://1.0.0+git9003f4a-dirty v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[coreos.com/rkt/stage1-coreos:1.0.0+git9003f4a-dirty] 79951360} {[gcr.io/google_containers/pause:2.0] 355840} {[b.gcr.io/k8s_authenticated_test/serve_hostname:1.1] 4529152} {[gcr.io/google_containers/hostexec:1.2] 14018048} {[registry-1.docker.io/library/nginx:latest] 136025600} {[gcr.io/google_containers/goproxy:0.1] 5495808} {[gcr.io/google_containers/busybox:1.24] 1315840} {[gcr.io/google_containers/resource_consumer:beta2] 138319872} {[gcr.io/google_samples/gb-redisslave:v1] 113454592} {[gcr.io/google_samples/gb-frontend:v4] 513876992} {[gcr.io/google_containers/update-demo:nautilus] 28160} {[gcr.io/google_containers/netexec:1.4] 7513088} {[gcr.io/google_containers/serve_hostname:1.1] 4529152} {[gcr.io/google_containers/glbc:0.5.1] 203040256} {[gcr.io/google_containers/defaultbackend:1.0] 7729152}]}} | |
Feb 28 20:50:04.390: INFO: | |
Logging kubelet events for node spotter-kube-rkt-minion-yo39 | |
Feb 28 20:50:04.475: INFO: | |
Logging pods the kubelet thinks is on node spotter-kube-rkt-minion-yo39 | |
Feb 28 20:50:04.733: INFO: l7-lb-controller-vg83c started at <nil> (0 container statuses recorded) | |
Feb 28 20:50:05.157: INFO: | |
Latency metrics for node spotter-kube-rkt-minion-yo39 | |
Feb 28 20:50:05.157: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-restart-7giwq" for this suite. | |
[AfterEach] Restart [Disruptive] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:67 | |
• Failure [88.351 seconds] | |
Restart [Disruptive] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:117 | |
should restart all nodes and ensure all nodes and pods recover [It] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:116 | |
Expected error: | |
<*errors.errorString | 0xc209342600>: { | |
s: "couldn't start the MIG rolling update: migRollingUpdateStart() failed with last error: rolling-updates call failed with err: error running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s]; got error exit status 1, stdout \"\", stderr \"ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. \\n\"", | |
} | |
couldn't start the MIG rolling update: migRollingUpdateStart() failed with last error: rolling-updates call failed with err: error running gcloud [alpha compute rolling-updates --project=coreos-gce-testing --zone=us-east1-b start --group=spotter-kube-rkt-minion-group --template=spotter-kube-rkt-minion-template --instance-startup-timeout=300s --max-num-concurrent-instances=1 --max-num-failed-instances=0 --min-instance-update-time=0s]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.alpha.compute.rolling-updates.start) ResponseError: code=412, message=There already exists an update in a non-terminal state. \n" | |
not to have occurred | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:90 | |
------------------------------ | |
SSS | |
------------------------------ | |
Kubectl client Kubectl describe | |
should check if kubectl describe prints relevant information for rc and pods [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:652 | |
[BeforeEach] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:83 | |
STEP: Creating a kubernetes client | |
Feb 28 20:50:10.583: INFO: >>> testContext.KubeConfig: /home/spotter/.kube/config | |
STEP: Building a namespace api object | |
Feb 28 20:50:10.675: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-hzpsy | |
Feb 28 20:50:10.756: INFO: Service account default in ns e2e-tests-kubectl-hzpsy with secrets found. (80.538636ms) | |
STEP: Waiting for a default service account to be provisioned in namespace | |
Feb 28 20:50:10.756: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-hzpsy | |
Feb 28 20:50:10.840: INFO: Service account default in ns e2e-tests-kubectl-hzpsy with secrets found. (83.67655ms) | |
[BeforeEach] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:113 | |
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:652 | |
Feb 28 20:50:10.926: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config create -f /home/spotter/gocode/src/k8s.io/y-kubernetes/examples/guestbook-go/redis-master-controller.json --namespace=e2e-tests-kubectl-hzpsy' | |
Feb 28 20:50:11.691: INFO: stdout: "replicationcontroller \"redis-master\" created\n" | |
Feb 28 20:50:11.691: INFO: stderr: "" | |
Feb 28 20:50:11.691: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config create -f /home/spotter/gocode/src/k8s.io/y-kubernetes/examples/guestbook-go/redis-master-service.json --namespace=e2e-tests-kubectl-hzpsy' | |
Feb 28 20:50:12.469: INFO: stdout: "service \"redis-master\" created\n" | |
Feb 28 20:50:12.469: INFO: stderr: "" | |
Feb 28 20:50:12.551: INFO: Waiting up to 5m0s for pod redis-master-tsjjz status to be running | |
Feb 28 20:50:12.635: INFO: Waiting for pod redis-master-tsjjz in namespace 'e2e-tests-kubectl-hzpsy' status to be 'running'(found phase: "Pending", readiness: false) (84.062599ms elapsed) | |
Feb 28 20:50:14.722: INFO: Waiting for pod redis-master-tsjjz in namespace 'e2e-tests-kubectl-hzpsy' status to be 'running'(found phase: "Pending", readiness: false) (2.171389084s elapsed) | |
Feb 28 20:50:16.810: INFO: Found pod 'redis-master-tsjjz' on node 'spotter-kube-rkt-minion-8b1u' | |
Feb 28 20:50:16.810: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config describe pod redis-master-tsjjz --namespace=e2e-tests-kubectl-hzpsy' | |
Feb 28 20:50:17.654: INFO: stdout: "Name:\t\tredis-master-tsjjz\nNamespace:\te2e-tests-kubectl-hzpsy\nNode:\t\tspotter-kube-rkt-minion-8b1u/10.240.0.5\nStart Time:\tSun, 28 Feb 2016 20:50:11 -0800\nLabels:\t\tapp=redis,role=master\nStatus:\t\tRunning\nIP:\t\t10.245.1.4\nControllers:\tReplicationController/redis-master\nContainers:\n redis-master:\n Container ID:\trkt://c63e888d-c4c6-4dd2-a8e2-b4e595817eaa:redis-master\n Image:\t\tredis\n Image ID:\t\trkt://sha512-de9a3ebb0789fb7b12a58962f451c8606e013872080244c725470f15a9142868\n QoS Tier:\n cpu:\t\tBestEffort\n memory:\t\tBestEffort\n State:\t\tRunning\n Started:\t\tSun, 28 Feb 2016 20:50:11 -0800\n Ready:\t\tTrue\n Restart Count:\t0\n Environment Variables:\nConditions:\n Type\t\tStatus\n Ready \tTrue \nVolumes:\n default-token-g7uyg:\n Type:\tSecret (a secret that should populate this volume)\n SecretName:\tdefault-token-g7uyg\nEvents:\n FirstSeen\tLastSeen\tCount\tFrom\t\t\t\t\tSubobjectPath\t\t\tType\t\tReason\t\tMessage\n ---------\t--------\t-----\t----\t\t\t\t\t-------------\t\t\t--------\t------\t\t-------\n 6s\t\t6s\t\t1\t{default-scheduler }\t\t\t\t\t\t\tNormal\t\tScheduled\tSuccessfully assigned redis-master-tsjjz to spotter-kube-rkt-minion-8b1u\n 6s\t\t6s\t\t1\t{kubelet spotter-kube-rkt-minion-8b1u}\tspec.containers{redis-master}\tNormal\t\tPulling\t\tpulling image \"redis\"\n 6s\t\t6s\t\t1\t{kubelet spotter-kube-rkt-minion-8b1u}\tspec.containers{redis-master}\tNormal\t\tPulled\t\tSuccessfully pulled image \"redis\"\n 2s\t\t2s\t\t1\t{kubelet spotter-kube-rkt-minion-8b1u}\tspec.containers{redis-master}\tNormal\t\tCreated\t\tCreated with rkt id c63e888d\n 2s\t\t2s\t\t1\t{kubelet spotter-kube-rkt-minion-8b1u}\tspec.containers{redis-master}\tNormal\t\tStarted\t\tStarted with rkt id c63e888d\n\n\n" | |
Feb 28 20:50:17.654: INFO: stderr: "" | |
Feb 28 20:50:17.654: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-hzpsy' | |
Feb 28 20:50:18.561: INFO: stdout: "Name:\t\tredis-master\nNamespace:\te2e-tests-kubectl-hzpsy\nImage(s):\tredis\nSelector:\tapp=redis,role=master\nLabels:\t\tapp=redis,role=master\nReplicas:\t1 current / 1 desired\nPods Status:\t1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nNo volumes.\nEvents:\n FirstSeen\tLastSeen\tCount\tFrom\t\t\t\tSubobjectPath\tType\t\tReason\t\t\tMessage\n ---------\t--------\t-----\t----\t\t\t\t-------------\t--------\t------\t\t\t-------\n 7s\t\t7s\t\t1\t{replication-controller }\t\t\tNormal\t\tSuccessfulCreate\tCreated pod: redis-master-tsjjz\n\n\n" | |
Feb 28 20:50:18.561: INFO: stderr: "" | |
Feb 28 20:50:18.561: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-hzpsy' | |
Feb 28 20:50:19.457: INFO: stdout: "Name:\t\t\tredis-master\nNamespace:\t\te2e-tests-kubectl-hzpsy\nLabels:\t\t\tapp=redis,role=master\nSelector:\t\tapp=redis,role=master\nType:\t\t\tClusterIP\nIP:\t\t\t10.0.116.205\nPort:\t\t\t<unnamed>\t6379/TCP\nEndpoints:\t\t10.245.1.4:6379\nSession Affinity:\tNone\nNo events.\n\n" | |
Feb 28 20:50:19.457: INFO: stderr: "" | |
Feb 28 20:50:19.545: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config describe node spotter-kube-rkt-master' | |
Feb 28 20:50:20.545: INFO: stdout: "Name:\t\t\tspotter-kube-rkt-master\nLabels:\t\t\tbeta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-east1,failure-domain.beta.kubernetes.io/zone=us-east1-b,kubernetes.io/hostname=spotter-kube-rkt-master\nCreationTimestamp:\tSun, 28 Feb 2016 19:26:02 -0800\nPhase:\t\t\t\nConditions:\n  Type\t\tStatus\tLastHeartbeatTime\t\t\tLastTransitionTime\t\t\tReason\t\t\t\tMessage\n  ----\t\t------\t-----------------\t\t\t------------------\t\t\t------\t\t\t\t-------\n  OutOfDisk \tFalse \tSun, 28 Feb 2016 20:50:11 -0800 \tSun, 28 Feb 2016 19:26:02 -0800 \tKubeletHasSufficientDisk \tkubelet has sufficient disk space available\n  Ready \tTrue \tSun, 28 Feb 2016 20:50:11 -0800 \tSun, 28 Feb 2016 19:26:02 -0800 \tKubeletReady \t\t\tkubelet is posting ready status\nAddresses:\t10.240.0.2,104.196.32.11\nCapacity:\n cpu:\t\t2\n memory:\t7664348Ki\n pods:\t\t40\nSystem Info:\n Machine ID:\t\t\t18f8f8ae3bfd63d33ca6152c840ac772\n System UUID:\t\t\t18F8F8AE-3BFD-63D3-3CA6-152C840AC772\n Boot ID:\t\t\t85ba1a68-a60b-4788-8d0d-31581d8a1dbc\n Kernel Version:\t\t4.4.1-coreos\n OS Image:\t\t\tCoreOS 960.0.0 (Coeur Rouge)\n Container Runtime Version:\tdocker://1.10.0\n Kubelet Version:\t\tv1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a\n Kube-Proxy Version:\t\tv1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a\nPodCIDR:\t\t\t10.245.0.0/24\nExternalID:\t\t\t1057893855773431411\nNon-terminated Pods:\t\t(5 in total)\n  Namespace\t\t\tName\t\t\t\t\t\t\t\t\t\tCPU Requests\tCPU Limits\tMemory Requests\tMemory Limits\n  ---------\t\t\t----\t\t\t\t\t\t\t\t\t\t------------\t----------\t---------------\t-------------\n  kube-system\t\t\tetcd-server-events-kubernetes-master-spotter-kube-rkt-master\t\t\t100m (5%)\t100m (5%)\t0 (0%)\t\t0 (0%)\n  kube-system\t\t\tetcd-server-kubernetes-master-spotter-kube-rkt-master\t\t\t\t200m (10%)\t0 (0%)\t\t0 (0%)\t\t0 (0%)\n  kube-system\t\t\tkube-apiserver-kubernetes-master-spotter-kube-rkt-master\t\t\t250m (12%)\t0 (0%)\t\t0 (0%)\t\t0 (0%)\n  kube-system\t\t\tkube-controller-manager-kubernetes-master-spotter-kube-rkt-master\t\t200m (10%)\t200m (10%)\t0 (0%)\t\t0 (0%)\n  kube-system\t\t\tkube-scheduler-kubernetes-master-spotter-kube-rkt-master\t\t\t100m (5%)\t0 (0%)\t\t0 (0%)\nAllocated resources:\n  (Total limits may be over 100%, i.e., overcommitted. More info: http://releases.k8s.io/HEAD/docs/user-guide/compute-resources.md)\n  CPU Requests\tCPU Limits\tMemory Requests\tMemory Limits\n  ------------\t----------\t---------------\t-------------\n  850m (42%)\t300m (15%)\t0 (0%)\t\t0 (0%)\nNo events.\n\n" | |
Feb 28 20:50:20.545: INFO: stderr: "" | |
Feb 28 20:50:20.545: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config describe namespace e2e-tests-kubectl-hzpsy' | |
Feb 28 20:50:21.454: INFO: stdout: "Name:\te2e-tests-kubectl-hzpsy\nLabels:\te2e-framework=kubectl,e2e-run=611b0c6e-de94-11e5-a1fb-54ee75510eb4\nStatus:\tActive\n\nNo resource quota.\n\nNo resource limits.\n\n\n" | |
Feb 28 20:50:21.454: INFO: stderr: "" | |
[AfterEach] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:84 | |
Feb 28 20:50:21.455: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-kubectl-hzpsy" for this suite. | |
• [SLOW TEST:16.299 seconds] | |
Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1082 | |
Kubectl describe | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:653 | |
should check if kubectl describe prints relevant information for rc and pods [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:652 | |
------------------------------ | |
EmptyDir volumes | |
should support (non-root,0777,default) [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:112 | |
[BeforeEach] EmptyDir volumes | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:83 | |
STEP: Creating a kubernetes client | |
Feb 28 20:50:26.882: INFO: >>> testContext.KubeConfig: /home/spotter/.kube/config | |
STEP: Building a namespace api object | |
Feb 28 20:50:26.973: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-2kpo9 | |
Feb 28 20:50:27.057: INFO: Service account default in ns e2e-tests-emptydir-2kpo9 with secrets found. (83.352023ms) | |
STEP: Waiting for a default service account to be provisioned in namespace | |
Feb 28 20:50:27.057: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-2kpo9 | |
Feb 28 20:50:27.138: INFO: Service account default in ns e2e-tests-emptydir-2kpo9 with secrets found. (81.626136ms) | |
[It] should support (non-root,0777,default) [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:112 | |
STEP: Creating a pod to test emptydir 0777 on node default medium | |
Feb 28 20:50:27.228: INFO: Waiting up to 5m0s for pod pod-f3849ce9-de9f-11e5-a1fb-54ee75510eb4 status to be success or failure | |
Feb 28 20:50:27.310: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-f3849ce9-de9f-11e5-a1fb-54ee75510eb4' in namespace 'e2e-tests-emptydir-2kpo9' so far | |
Feb 28 20:50:27.310: INFO: Waiting for pod pod-f3849ce9-de9f-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-emptydir-2kpo9' status to be 'success or failure'(found phase: "Pending", readiness: false) (82.28281ms elapsed) | |
Feb 28 20:50:29.401: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-f3849ce9-de9f-11e5-a1fb-54ee75510eb4' in namespace 'e2e-tests-emptydir-2kpo9' so far | |
Feb 28 20:50:29.401: INFO: Waiting for pod pod-f3849ce9-de9f-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-emptydir-2kpo9' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.173645108s elapsed) | |
Feb 28 20:50:31.483: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-f3849ce9-de9f-11e5-a1fb-54ee75510eb4' in namespace 'e2e-tests-emptydir-2kpo9' so far | |
Feb 28 20:50:31.483: INFO: Waiting for pod pod-f3849ce9-de9f-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-emptydir-2kpo9' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.254936367s elapsed) | |
Feb 28 20:50:33.577: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-f3849ce9-de9f-11e5-a1fb-54ee75510eb4' in namespace 'e2e-tests-emptydir-2kpo9' so far | |
Feb 28 20:50:33.577: INFO: Waiting for pod pod-f3849ce9-de9f-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-emptydir-2kpo9' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.348846878s elapsed) | |
Feb 28 20:50:35.660: INFO: Unexpected error occurred: pod 'pod-f3849ce9-de9f-11e5-a1fb-54ee75510eb4' terminated with failure: &{ExitCode:52 Signal:0 Reason:Error Message: StartedAt:2016-02-28 20:50:27 -0800 PST FinishedAt:0001-01-01 00:00:00 +0000 UTC ContainerID:rkt://16d47a08-31c1-47e1-96b3-0512f934923d:test-container} | |
[AfterEach] EmptyDir volumes | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:84 | |
STEP: Collecting events from namespace "e2e-tests-emptydir-2kpo9". | |
Feb 28 20:50:35.838: INFO: At 2016-02-28 20:50:27 -0800 PST - event for pod-f3849ce9-de9f-11e5-a1fb-54ee75510eb4: {default-scheduler } Scheduled: Successfully assigned pod-f3849ce9-de9f-11e5-a1fb-54ee75510eb4 to spotter-kube-rkt-minion-8b1u | |
Feb 28 20:50:35.838: INFO: At 2016-02-28 20:50:27 -0800 PST - event for pod-f3849ce9-de9f-11e5-a1fb-54ee75510eb4: {kubelet spotter-kube-rkt-minion-8b1u} Pulling: pulling image "gcr.io/google_containers/mounttest-user:0.3" | |
Feb 28 20:50:35.838: INFO: At 2016-02-28 20:50:29 -0800 PST - event for pod-f3849ce9-de9f-11e5-a1fb-54ee75510eb4: {kubelet spotter-kube-rkt-minion-8b1u} Pulled: Successfully pulled image "gcr.io/google_containers/mounttest-user:0.3" | |
Feb 28 20:50:35.838: INFO: At 2016-02-28 20:50:32 -0800 PST - event for pod-f3849ce9-de9f-11e5-a1fb-54ee75510eb4: {kubelet spotter-kube-rkt-minion-8b1u} Created: Created with rkt id 16d47a08 | |
Feb 28 20:50:35.838: INFO: At 2016-02-28 20:50:32 -0800 PST - event for pod-f3849ce9-de9f-11e5-a1fb-54ee75510eb4: {kubelet spotter-kube-rkt-minion-8b1u} Started: Started with rkt id 16d47a08 | |
Feb 28 20:50:36.011: INFO: POD NODE PHASE GRACE CONDITIONS | |
Feb 28 20:50:36.011: INFO: etcd-server-events-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 20:35:11 -0800 PST }] | |
Feb 28 20:50:36.011: INFO: etcd-server-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 20:35:11 -0800 PST }] | |
Feb 28 20:50:36.011: INFO: kube-apiserver-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:25:59 -0800 PST }] | |
Feb 28 20:50:36.011: INFO: kube-controller-manager-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:26:19 -0800 PST }] | |
Feb 28 20:50:36.011: INFO: kube-dns-v10-ucm9e spotter-kube-rkt-minion-yii0 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:40:53 -0800 PST }] | |
Feb 28 20:50:36.011: INFO: kube-scheduler-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 20:48:05 -0800 PST }] | |
Feb 28 20:50:36.011: INFO: kubernetes-dashboard-v0.1.0-xx4kj spotter-kube-rkt-minion-8b1u Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:26:43 -0800 PST }] | |
Feb 28 20:50:36.011: INFO: l7-lb-controller-vg83c spotter-kube-rkt-minion-yo39 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:42:06 -0800 PST }] | |
Feb 28 20:50:36.011: INFO: | |
Feb 28 20:50:36.095: INFO: | |
Logging node info for node spotter-kube-rkt-master | |
Feb 28 20:50:36.177: INFO: Node Info: &{{ } {spotter-kube-rkt-master /api/v1/nodes/spotter-kube-rkt-master 28a99cb5-de94-11e5-9ed9-42010af00002 4088 0 2016-02-28 19:26:02 -0800 PST <nil> <nil> map[failure-domain.beta.kubernetes.io/region:us-east1 failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-master beta.kubernetes.io/instance-type:n1-standard-2] map[]} {10.245.0.0/24 1057893855773431411 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-master true} {map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] [{OutOfDisk False 2016-02-28 20:50:31 -0800 PST 2016-02-28 19:26:02 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 20:50:31 -0800 PST 2016-02-28 19:26:02 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.2} {ExternalIP 104.196.32.11}] {{10250}} {18f8f8ae3bfd63d33ca6152c840ac772 18F8F8AE-3BFD-63D3-3CA6-152C840AC772 85ba1a68-a60b-4788-8d0d-31581d8a1dbc 4.4.1-coreos CoreOS 960.0.0 (Coeur Rouge) docker://1.10.0 v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[gcr.io/google_containers/kube-apiserver:e75cb1e68585bef333c384799aa55b36] 62275978} {[gcr.io/google_containers/kube-controller-manager:5ceda9f0fcc85cf5112fd4eab97edf76] 55494410} {[gcr.io/google_containers/kube-scheduler:34f7eca4d44271cb192f0c32a79fb31f] 36991674} {[python:2.7-slim-pyyaml] 205046931} {[gcr.io/google_containers/pause:2.0] 350164} {[gcr.io/google_containers/kube-registry-proxy:0.3] 151205002} {[gcr.io/google_containers/etcd:2.0.12] 15265152} {[gcr.io/google_containers/serve_hostname:1.1] 4522409}]}} | |
Feb 28 20:50:36.177: INFO: | |
Logging kubelet events for node spotter-kube-rkt-master | |
Feb 28 20:50:36.263: INFO: | |
Logging pods the kubelet thinks are on node spotter-kube-rkt-master | 
Feb 28 20:50:36.431: INFO: Unable to retrieve kubelet pods for node spotter-kube-rkt-master | |
Feb 28 20:50:36.431: INFO: | |
Logging node info for node spotter-kube-rkt-minion-8b1u | |
Feb 28 20:50:36.513: INFO: Node Info: &{{ } {spotter-kube-rkt-minion-8b1u /api/v1/nodes/spotter-kube-rkt-minion-8b1u 2e5bf2ae-de94-11e5-9ed9-42010af00002 4089 0 2016-02-28 19:26:11 -0800 PST <nil> <nil> map[beta.kubernetes.io/instance-type:n1-standard-2 failure-domain.beta.kubernetes.io/region:us-east1 failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-minion-8b1u] map[]} {10.245.1.0/24 9623675355699996544 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-minion-8b1u false} {map[pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI}] map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] [{OutOfDisk False 2016-02-28 20:50:33 -0800 PST 2016-02-28 19:26:11 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 20:50:33 -0800 PST 2016-02-28 19:26:11 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.5} {ExternalIP 104.196.7.86}] {{10250}} {374318fe80761a3e4e3d72f72bd62802 374318FE-8076-1A3E-4E3D-72F72BD62802 b4070cab-7518-489e-9e3e-c5003667ca85 4.4.1-coreos CoreOS 960.0.0 (Coeur Rouge) rkt://1.0.0+git9003f4a-dirty v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[gcr.io/google_containers/mounttest-user:0.3] 1724928} {[coreos.com/rkt/stage1-coreos:1.0.0+git9003f4a-dirty] 79951360} {[b.gcr.io/k8s_authenticated_test/serve_hostname:1.1] 4529152} {[registry-1.docker.io/library/busybox:latest] 1315840} {[registry-1.docker.io/library/nginx:latest] 136025600} {[gcr.io/google_containers/pause:2.0] 355840} {[gcr.io/google_containers/resource_consumer:beta2] 138319872} {[gcr.io/google_containers/hostexec:1.2] 14018048} {[gcr.io/google_samples/gb-redisslave:v1] 113454592} {[registry-1.docker.io/library/redis:latest] 152067072} {[gcr.io/google_samples/gb-frontend:v4] 513876992} {[gcr.io/google_containers/update-demo:nautilus] 28160} {[gcr.io/google_containers/busybox:1.24] 1315840} {[gcr.io/google_containers/eptest:0.1] 2977792} {[gcr.io/google_containers/serve_hostname:1.1] 4529152} {[gcr.io/google_containers/mounttest:0.6] 2090496} {[gcr.io/google_containers/netexec:1.4] 7513088} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v0.1.0] 35423744}]}} | 
Feb 28 20:50:36.513: INFO: | |
Logging kubelet events for node spotter-kube-rkt-minion-8b1u | |
Feb 28 20:50:36.598: INFO: | |
Logging pods the kubelet thinks are on node spotter-kube-rkt-minion-8b1u | 
Feb 28 20:50:36.865: INFO: kubernetes-dashboard-v0.1.0-xx4kj started at <nil> (0 container statuses recorded) | |
Feb 28 20:50:37.481: INFO: | |
Latency metrics for node spotter-kube-rkt-minion-8b1u | |
Feb 28 20:50:37.481: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:11.951238s} | |
Feb 28 20:50:37.481: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:10.868581s} | |
Feb 28 20:50:37.481: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.99 Latency:10.086495s} | |
Feb 28 20:50:37.481: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.9 Latency:10.086495s} | |
Feb 28 20:50:37.481: INFO: | |
Logging node info for node spotter-kube-rkt-minion-yii0 | |
Feb 28 20:50:37.567: INFO: Node Info: &{{ } {spotter-kube-rkt-minion-yii0 /api/v1/nodes/spotter-kube-rkt-minion-yii0 28de6c1f-de94-11e5-9ed9-42010af00002 4091 0 2016-02-28 19:26:02 -0800 PST <nil> <nil> map[beta.kubernetes.io/instance-type:n1-standard-2 failure-domain.beta.kubernetes.io/region:us-east1 failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-minion-yii0] map[]} {10.245.3.0/24 9944607117135381947 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-minion-yii0 false} {map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] [{OutOfDisk False 2016-02-28 20:50:35 -0800 PST 2016-02-28 19:40:42 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 20:50:35 -0800 PST 2016-02-28 19:40:42 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.3} {ExternalIP 104.196.102.116}] {{10250}} {b38c5389751fa1f6a57a3e0bc169499a B38C5389-751F-A1F6-A57A-3E0BC169499A c09d0613-ea30-4bb2-a090-591baae2e0d3 4.4.1-coreos CoreOS 970.1.0 (Coeur Rouge) rkt://1.0.0+git9003f4a-dirty v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[coreos.com/rkt/stage1-coreos:1.0.0+git9003f4a-dirty] 79951360} {[gcr.io/google_containers/pause:2.0] 355840} {[gcr.io/google_containers/netexec:1.4] 7513088} {[registry-1.docker.io/library/nginx:latest] 136025600} {[gcr.io/google_containers/resource_consumer:beta2] 138319872} {[gcr.io/google_samples/gb-frontend:v4] 513876992} {[gcr.io/google_containers/busybox:1.24] 1315840} {[gcr.io/google_containers/serve_hostname:1.1] 4529152} {[gcr.io/google_containers/exechealthz:1.0] 7314944} {[gcr.io/google_containers/skydns:2015-10-13-8c72f8c] 41955328} {[gcr.io/google_containers/kube2sky:1.12] 24685056} {[gcr.io/google_containers/etcd:2.0.9] 12827136}]}} | |
Feb 28 20:50:37.567: INFO: | |
Logging kubelet events for node spotter-kube-rkt-minion-yii0 | |
Feb 28 20:50:37.652: INFO: | |
Logging pods the kubelet thinks are on node spotter-kube-rkt-minion-yii0 | 
Feb 28 20:50:37.902: INFO: kube-dns-v10-ucm9e started at <nil> (0 container statuses recorded) | |
Feb 28 20:50:38.205: INFO: | |
Latency metrics for node spotter-kube-rkt-minion-yii0 | |
Feb 28 20:50:38.205: INFO: | |
Logging node info for node spotter-kube-rkt-minion-yo39 | |
Feb 28 20:50:38.292: INFO: Node Info: &{{ } {spotter-kube-rkt-minion-yo39 /api/v1/nodes/spotter-kube-rkt-minion-yo39 28d9f8e6-de94-11e5-9ed9-42010af00002 4087 0 2016-02-28 19:26:02 -0800 PST <nil> <nil> map[kubernetes.io/hostname:spotter-kube-rkt-minion-yo39 beta.kubernetes.io/instance-type:n1-standard-2 failure-domain.beta.kubernetes.io/region:us-east1 failure-domain.beta.kubernetes.io/zone:us-east1-b] map[]} {10.245.2.0/24 5812756469575456144 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-minion-yo39 false} {map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] map[memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI}] [{OutOfDisk False 2016-02-28 20:50:31 -0800 PST 2016-02-28 19:39:36 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 20:50:31 -0800 PST 2016-02-28 19:39:36 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.4} {ExternalIP 104.196.107.90}] {{10250}} {d67f51f98b40c228849e17dc0b2cff8f D67F51F9-8B40-C228-849E-17DC0B2CFF8F 37634038-5d12-46d7-8197-fc5e74a7aef8 4.4.1-coreos CoreOS 970.1.0 (Coeur Rouge) rkt://1.0.0+git9003f4a-dirty v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[coreos.com/rkt/stage1-coreos:1.0.0+git9003f4a-dirty] 79951360} {[gcr.io/google_containers/pause:2.0] 355840} {[b.gcr.io/k8s_authenticated_test/serve_hostname:1.1] 4529152} {[gcr.io/google_containers/hostexec:1.2] 14018048} {[registry-1.docker.io/library/nginx:latest] 136025600} {[gcr.io/google_containers/goproxy:0.1] 5495808} {[gcr.io/google_containers/busybox:1.24] 1315840} {[gcr.io/google_containers/resource_consumer:beta2] 138319872} {[gcr.io/google_samples/gb-redisslave:v1] 113454592} {[gcr.io/google_samples/gb-frontend:v4] 513876992} {[gcr.io/google_containers/update-demo:nautilus] 28160} {[gcr.io/google_containers/netexec:1.4] 7513088} {[gcr.io/google_containers/serve_hostname:1.1] 4529152} {[gcr.io/google_containers/glbc:0.5.1] 203040256} {[gcr.io/google_containers/defaultbackend:1.0] 7729152}]}} | 
Feb 28 20:50:38.292: INFO: | |
Logging kubelet events for node spotter-kube-rkt-minion-yo39 | |
Feb 28 20:50:38.376: INFO: | |
Logging pods the kubelet thinks are on node spotter-kube-rkt-minion-yo39 | 
Feb 28 20:50:38.629: INFO: l7-lb-controller-vg83c started at <nil> (0 container statuses recorded) | |
Feb 28 20:50:38.981: INFO: | |
Latency metrics for node spotter-kube-rkt-minion-yo39 | |
Feb 28 20:50:38.981: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-emptydir-2kpo9" for this suite. | |
• Failure [17.519 seconds] | |
EmptyDir volumes | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:113 | |
should support (non-root,0777,default) [Conformance] [It] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:112 | |
Expected error: | |
<*errors.errorString | 0xc209109ff0>: { | |
s: "pod 'pod-f3849ce9-de9f-11e5-a1fb-54ee75510eb4' terminated with failure: &{ExitCode:52 Signal:0 Reason:Error Message: StartedAt:2016-02-28 20:50:27 -0800 PST FinishedAt:0001-01-01 00:00:00 +0000 UTC ContainerID:rkt://16d47a08-31c1-47e1-96b3-0512f934923d:test-container}", | |
} | |
pod 'pod-f3849ce9-de9f-11e5-a1fb-54ee75510eb4' terminated with failure: &{ExitCode:52 Signal:0 Reason:Error Message: StartedAt:2016-02-28 20:50:27 -0800 PST FinishedAt:0001-01-01 00:00:00 +0000 UTC ContainerID:rkt://16d47a08-31c1-47e1-96b3-0512f934923d:test-container} | |
not to have occurred | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/util.go:1455 | |
------------------------------ | |
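The EmptyDir failure above is the framework's "success or failure" wait rejecting a non-zero exit code (52) in the container's terminated state. A minimal Go sketch of that decision; the `terminated` struct and `podTerminatedError` helper here are hypothetical stand-ins, not the actual e2e framework code, which inspects the container status returned by the API server.

```go
package main

import "fmt"

// terminated mirrors the fields printed in the failure message above
// (ExitCode, Reason, ...); it is a simplified, hypothetical stand-in.
type terminated struct {
	ExitCode int
	Reason   string
}

// podTerminatedError treats exit code 0 as success and otherwise wraps
// the terminated state into an error, like the "terminated with failure"
// log line above.
func podTerminatedError(pod string, t terminated) error {
	if t.ExitCode == 0 {
		return nil
	}
	return fmt.Errorf("pod %q terminated with failure: %+v", pod, t)
}

func main() {
	err := podTerminatedError("pod-f3849ce9", terminated{ExitCode: 52, Reason: "Error"})
	fmt.Println(err)
}
```

The real check also distinguishes a still-running container (nil State.Terminated, as in the "Nil State.Terminated ... so far" polling lines above) from one that has exited.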
kubelet Clean up pods on node | |
kubelet should be able to delete 10 pods per node in 1m0s. | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:156 | |
[BeforeEach] kubelet | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:83 | |
STEP: Creating a kubernetes client | |
Feb 28 20:50:44.401: INFO: >>> testContext.KubeConfig: /home/spotter/.kube/config | |
STEP: Building a namespace api object | |
Feb 28 20:50:44.489: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubelet-c6b8v | |
Feb 28 20:50:44.574: INFO: Service account default in ns e2e-tests-kubelet-c6b8v with secrets found. (84.473425ms) | |
STEP: Waiting for a default service account to be provisioned in namespace | |
Feb 28 20:50:44.574: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubelet-c6b8v | |
Feb 28 20:50:44.658: INFO: Service account default in ns e2e-tests-kubelet-c6b8v with secrets found. (84.251727ms) | |
[BeforeEach] kubelet | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:104 | |
[It] kubelet should be able to delete 10 pods per node in 1m0s. | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:156 | |
STEP: Creating a RC of 30 pods and wait until all pods of this RC are running | |
STEP: creating replication controller cleanup30-fe0fe706-de9f-11e5-a1fb-54ee75510eb4 in namespace e2e-tests-kubelet-c6b8v | |
Feb 28 20:50:44.914: INFO: Created replication controller with name: cleanup30-fe0fe706-de9f-11e5-a1fb-54ee75510eb4, namespace: e2e-tests-kubelet-c6b8v, replica count: 30 | |
Feb 28 20:50:45.573: INFO: Missing info/stats for container "/docker-daemon" on node "spotter-kube-rkt-master" | |
Feb 28 20:50:45.805: INFO: Missing info/stats for container "/docker-daemon" on node "spotter-kube-rkt-minion-yii0" | |
Feb 28 20:50:45.895: INFO: Missing info/stats for container "/docker-daemon" on node "spotter-kube-rkt-minion-8b1u" | |
Feb 28 20:50:46.033: INFO: Missing info/stats for container "/docker-daemon" on node "spotter-kube-rkt-minion-yo39" | |
Feb 28 20:50:51.174: INFO: Missing info/stats for container "/docker-daemon" on node "spotter-kube-rkt-master" | |
Feb 28 20:50:51.841: INFO: Missing info/stats for container "/docker-daemon" on node "spotter-kube-rkt-minion-yii0" | |
Feb 28 20:50:52.418: INFO: Missing info/stats for container "/docker-daemon" on node "spotter-kube-rkt-minion-yo39" | |
Feb 28 20:50:52.556: INFO: Missing info/stats for container "/docker-daemon" on node "spotter-kube-rkt-minion-8b1u" | |
Feb 28 20:50:54.914: INFO: cleanup30-fe0fe706-de9f-11e5-a1fb-54ee75510eb4 Pods: 30 out of 30 created, 0 running, 30 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Feb 28 20:50:56.747: INFO: Missing info/stats for container "/docker-daemon" on node "spotter-kube-rkt-master" | |
Feb 28 20:50:57.533: INFO: Missing info/stats for container "/docker-daemon" on node "spotter-kube-rkt-minion-yii0" | |
Feb 28 20:50:58.653: INFO: Missing info/stats for container "/docker-daemon" on node "spotter-kube-rkt-minion-yo39" | |
Feb 28 20:50:58.803: INFO: Missing info/stats for container "/docker-daemon" on node "spotter-kube-rkt-minion-8b1u" | |
Feb 28 20:51:02.315: INFO: Missing info/stats for container "/docker-daemon" on node "spotter-kube-rkt-master" | |
Feb 28 20:51:03.250: INFO: Missing info/stats for container "/docker-daemon" on node "spotter-kube-rkt-minion-yii0" | |
Feb 28 20:51:04.026: INFO: Missing info/stats for container "/docker-daemon" on node "spotter-kube-rkt-minion-yo39" | |
Feb 28 20:51:04.552: INFO: Missing info/stats for container "/docker-daemon" on node "spotter-kube-rkt-minion-8b1u" | |
Feb 28 20:51:04.914: INFO: cleanup30-fe0fe706-de9f-11e5-a1fb-54ee75510eb4 Pods: 30 out of 30 created, 4 running, 26 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Feb 28 20:51:07.881: INFO: Missing info/stats for container "/docker-daemon" on node "spotter-kube-rkt-master" | |
Feb 28 20:51:08.738: INFO: Missing info/stats for container "/docker-daemon" on node "spotter-kube-rkt-minion-yii0" | |
Feb 28 20:51:09.636: INFO: Missing info/stats for container "/docker-daemon" on node "spotter-kube-rkt-minion-yo39" | |
Feb 28 20:51:10.549: INFO: Missing info/stats for container "/docker-daemon" on node "spotter-kube-rkt-minion-8b1u" | |
Feb 28 20:51:13.478: INFO: Missing info/stats for container "/docker-daemon" on node "spotter-kube-rkt-master" | |
Feb 28 20:51:14.243: INFO: Missing info/stats for container "/docker-daemon" on node "spotter-kube-rkt-minion-yii0" | |
Feb 28 20:51:14.915: INFO: cleanup30-fe0fe706-de9f-11e5-a1fb-54ee75510eb4 Pods: 30 out of 30 created, 29 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Feb 28 20:51:15.087: INFO: Missing info/stats for container "/docker-daemon" on node "spotter-kube-rkt-minion-yo39" | |
Feb 28 20:51:16.255: INFO: Missing info/stats for container "/docker-daemon" on node "spotter-kube-rkt-minion-8b1u" | |
Feb 28 20:51:19.038: INFO: Missing info/stats for container "/docker-daemon" on node "spotter-kube-rkt-master" | |
Feb 28 20:51:19.694: INFO: Missing info/stats for container "/docker-daemon" on node "spotter-kube-rkt-minion-yii0" | |
Feb 28 20:51:20.511: INFO: Missing info/stats for container "/docker-daemon" on node "spotter-kube-rkt-minion-yo39" | |
Feb 28 20:51:22.012: INFO: Missing info/stats for container "/docker-daemon" on node "spotter-kube-rkt-minion-8b1u" | |
Feb 28 20:51:24.620: INFO: Missing info/stats for container "/docker-daemon" on node "spotter-kube-rkt-master" | |
Feb 28 20:51:24.915: INFO: cleanup30-fe0fe706-de9f-11e5-a1fb-54ee75510eb4 Pods: 30 out of 30 created, 30 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Feb 28 20:51:25.079: INFO: Missing info/stats for container "/docker-daemon" on node "spotter-kube-rkt-minion-yii0" | |
Feb 28 20:51:25.932: INFO: Missing info/stats for container "/docker-daemon" on node "spotter-kube-rkt-minion-yo39" | |
Feb 28 20:51:25.932: INFO: Checking pods on node spotter-kube-rkt-minion-8b1u via /runningpods endpoint | |
Feb 28 20:51:25.932: INFO: Checking pods on node spotter-kube-rkt-minion-yii0 via /runningpods endpoint | |
Feb 28 20:51:25.932: INFO: Checking pods on node spotter-kube-rkt-minion-yo39 via /runningpods endpoint | |
Feb 28 20:51:26.339: INFO: [Resource usage on node "spotter-kube-rkt-minion-yii0" is not ready yet, Resource usage on node "spotter-kube-rkt-minion-yo39" is not ready yet, Resource usage on node "spotter-kube-rkt-master" is not ready yet, Resource usage on node "spotter-kube-rkt-minion-8b1u" is not ready yet] | |
STEP: Deleting the RC | |
STEP: deleting replication controller cleanup30-fe0fe706-de9f-11e5-a1fb-54ee75510eb4 in namespace e2e-tests-kubelet-c6b8v | |
Feb 28 20:51:27.887: INFO: Missing info/stats for container "/docker-daemon" on node "spotter-kube-rkt-minion-8b1u" | |
Feb 28 20:51:29.054: INFO: Deleting RC cleanup30-fe0fe706-de9f-11e5-a1fb-54ee75510eb4 took: 2.630924862s | |
Feb 28 20:51:30.104: INFO: Missing info/stats for container "/docker-daemon" on node "spotter-kube-rkt-master" | |
Feb 28 20:51:30.735: INFO: Missing info/stats for container "/docker-daemon" on node "spotter-kube-rkt-minion-yii0" | |
Feb 28 20:51:31.357: INFO: Missing info/stats for container "/docker-daemon" on node "spotter-kube-rkt-minion-yo39" | |
Feb 28 20:51:33.680: INFO: Missing info/stats for container "/docker-daemon" on node "spotter-kube-rkt-minion-8b1u" | |
Feb 28 20:51:35.591: INFO: Missing info/stats for container "/docker-daemon" on node "spotter-kube-rkt-master" | |
Feb 28 20:51:36.187: INFO: Missing info/stats for container "/docker-daemon" on node "spotter-kube-rkt-minion-yii0" | |
Feb 28 20:51:36.742: INFO: Missing info/stats for container "/docker-daemon" on node "spotter-kube-rkt-minion-yo39" | |
Feb 28 20:51:37.398: INFO: Terminating RC cleanup30-fe0fe706-de9f-11e5-a1fb-54ee75510eb4 pods took: 8.344233634s | |
Feb 28 20:51:38.399: INFO: Checking pods on node spotter-kube-rkt-minion-8b1u via /runningpods endpoint | |
Feb 28 20:51:38.399: INFO: Checking pods on node spotter-kube-rkt-minion-yii0 via /runningpods endpoint | |
Feb 28 20:51:38.399: INFO: Checking pods on node spotter-kube-rkt-minion-yo39 via /runningpods endpoint | |
Feb 28 20:51:38.799: INFO: Deleting 30 pods on 3 nodes completed in 1.400251337s after the RC was deleted | |
Feb 28 20:51:38.799: INFO: CPU usage of containers on node "spotter-kube-rkt-master" | |
container          5th%    20th%   50th%   70th%   90th%   95th%   99th% | 
"/"                0.000   0.000   0.071   0.083   0.083   0.083   0.083 | 
"/docker-daemon"   0.000   0.000   0.000   0.000   0.000   0.000   0.000 | 
"/kubelet"         0.000   0.000   0.000   0.000   0.000   0.000   0.000 | 
"/system"          0.000   0.000   0.000   0.000   0.000   0.000   0.000 | 
CPU usage of containers on node "spotter-kube-rkt-minion-8b1u" | 
container          5th%    20th%   50th%   70th%   90th%   95th%   99th% | 
"/"                0.000   0.000   0.676   1.418   1.418   1.418   1.418 | 
"/docker-daemon"   0.000   0.000   0.000   0.000   0.000   0.000   0.000 | 
"/kubelet"         0.000   0.000   0.000   0.000   0.000   0.000   0.000 | 
"/system"          0.000   0.000   0.000   0.000   0.000   0.000   0.000 | 
CPU usage of containers on node "spotter-kube-rkt-minion-yii0" | 
container          5th%    20th%   50th%   70th%   90th%   95th%   99th% | 
"/"                0.000   0.000   0.649   0.649   0.949   0.949   0.949 | 
"/docker-daemon"   0.000   0.000   0.000   0.000   0.000   0.000   0.000 | 
"/kubelet"         0.000   0.000   0.000   0.000   0.000   0.000   0.000 | 
"/system"          0.000   0.000   0.000   0.000   0.000   0.000   0.000 | 
CPU usage of containers on node "spotter-kube-rkt-minion-yo39" | 
container          5th%    20th%   50th%   70th%   90th%   95th%   99th% | 
"/"                0.000   0.000   0.178   0.772   0.772   0.772   0.772 | 
"/docker-daemon"   0.000   0.000   0.000   0.000   0.000   0.000   0.000 | 
"/kubelet"         0.000   0.000   0.000   0.000   0.000   0.000   0.000 | 
"/system"          0.000   0.000   0.000   0.000   0.000   0.000   0.000 | 
[AfterEach] kubelet | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:84 | |
Feb 28 20:51:38.799: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-kubelet-c6b8v" for this suite. | |
Feb 28 20:51:39.555: INFO: Missing info/stats for container "/docker-daemon" on node "spotter-kube-rkt-minion-8b1u" | |
Feb 28 20:51:41.152: INFO: Missing info/stats for container "/docker-daemon" on node "spotter-kube-rkt-master" | |
Feb 28 20:51:41.545: INFO: Missing info/stats for container "/docker-daemon" on node "spotter-kube-rkt-minion-yii0" | |
Feb 28 20:51:42.133: INFO: Missing info/stats for container "/docker-daemon" on node "spotter-kube-rkt-minion-yo39" | |
[AfterEach] kubelet | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:108 | |
• [SLOW TEST:59.833 seconds] | |
kubelet | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:159 | 
Clean up pods on node | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:158 | |
kubelet should be able to delete 10 pods per node in 1m0s. | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:156 | |
------------------------------ | |
S | |
------------------------------ | |
Kubectl client Simple pod | |
should support inline execution and attach | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:490 | |
[BeforeEach] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:83 | |
STEP: Creating a kubernetes client | |
Feb 28 20:51:44.234: INFO: >>> testContext.KubeConfig: /home/spotter/.kube/config | |
STEP: Building a namespace api object | |
Feb 28 20:51:44.324: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-xklqm | |
Feb 28 20:51:44.413: INFO: Service account default in ns e2e-tests-kubectl-xklqm had 0 secrets, ignoring for 2s: <nil> | |
Feb 28 20:51:46.500: INFO: Service account default in ns e2e-tests-kubectl-xklqm with secrets found. (2.175472458s) | |
STEP: Waiting for a default service account to be provisioned in namespace | |
Feb 28 20:51:46.500: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-xklqm | |
Feb 28 20:51:46.580: INFO: Service account default in ns e2e-tests-kubectl-xklqm with secrets found. (80.703641ms) | |
[BeforeEach] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:113 | |
[BeforeEach] Simple pod | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:183 | |
STEP: creating the pod from /home/spotter/gocode/src/k8s.io/y-kubernetes/test/e2e/testing-manifests/kubectl/pod-with-readiness-probe.yaml | |
Feb 28 20:51:46.580: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config create -f /home/spotter/gocode/src/k8s.io/y-kubernetes/test/e2e/testing-manifests/kubectl/pod-with-readiness-probe.yaml --namespace=e2e-tests-kubectl-xklqm' | |
Feb 28 20:51:47.376: INFO: stdout: "pod \"nginx\" created\n" | |
Feb 28 20:51:47.376: INFO: stderr: "" | |
Feb 28 20:51:47.376: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [nginx] | |
Feb 28 20:51:47.376: INFO: Waiting up to 5m0s for pod nginx status to be running and ready | |
Feb 28 20:51:47.461: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-xklqm' status to be 'running and ready'(found phase: "Pending", readiness: false) (84.985435ms elapsed) | |
Feb 28 20:51:49.545: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-xklqm' status to be 'running and ready'(found phase: "Pending", readiness: false) (2.168763004s elapsed) | |
Feb 28 20:51:51.630: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-xklqm' status to be 'running and ready'(found phase: "Pending", readiness: false) (4.25384817s elapsed) | |
Feb 28 20:51:53.716: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-xklqm' status to be 'running and ready'(found phase: "Running", readiness: false) (6.339784166s elapsed) | |
Feb 28 20:51:55.802: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-xklqm' status to be 'running and ready'(found phase: "Running", readiness: false) (8.425718085s elapsed) | |
Feb 28 20:51:57.889: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [nginx] | |
[It] should support inline execution and attach | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:490 | |
STEP: executing a command with run and attach with stdin | |
Feb 28 20:51:57.969: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config --namespace=e2e-tests-kubectl-xklqm run run-test --image=busybox --restart=Never --attach=true --stdin -- sh -c cat && echo 'stdin closed'' | |
Feb 28 20:52:05.795: INFO: stdout: "Waiting for pod e2e-tests-kubectl-xklqm/run-test-7p84k to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-xklqm/run-test-7p84k to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-xklqm/run-test-7p84k to be running, status is Pending, pod ready: false\nstdin closed\n" | |
Feb 28 20:52:05.795: INFO: stderr: "" | |
[AfterEach] Simple pod | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:186 | |
STEP: using delete to clean up resources | |
Feb 28 20:52:05.795: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config delete --grace-period=0 -f /home/spotter/gocode/src/k8s.io/y-kubernetes/test/e2e/testing-manifests/kubectl/pod-with-readiness-probe.yaml --namespace=e2e-tests-kubectl-xklqm' | |
Feb 28 20:52:06.554: INFO: stdout: "pod \"nginx\" deleted\n" | |
Feb 28 20:52:06.554: INFO: stderr: "" | |
Feb 28 20:52:06.554: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-xklqm' | |
Feb 28 20:52:07.308: INFO: stdout: "" | |
Feb 28 20:52:07.308: INFO: stderr: "" | |
Feb 28 20:52:07.308: INFO: Running '/home/spotter/gocode/src/k8s.io/y-kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://104.196.32.11 --kubeconfig=/home/spotter/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-xklqm -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' | |
Feb 28 20:52:07.951: INFO: stdout: "" | |
Feb 28 20:52:07.951: INFO: stderr: "" | |
[AfterEach] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:84 | |
STEP: Collecting events from namespace "e2e-tests-kubectl-xklqm". | |
Feb 28 20:52:08.041: INFO: At 2016-02-28 20:51:47 -0800 PST - event for nginx: {default-scheduler } Scheduled: Successfully assigned nginx to spotter-kube-rkt-minion-8b1u | |
Feb 28 20:52:08.041: INFO: At 2016-02-28 20:51:47 -0800 PST - event for nginx: {kubelet spotter-kube-rkt-minion-8b1u} Pulling: pulling image "nginx" | |
Feb 28 20:52:08.041: INFO: At 2016-02-28 20:51:47 -0800 PST - event for nginx: {kubelet spotter-kube-rkt-minion-8b1u} Pulled: Successfully pulled image "nginx" | |
Feb 28 20:52:08.041: INFO: At 2016-02-28 20:51:51 -0800 PST - event for nginx: {kubelet spotter-kube-rkt-minion-8b1u} Created: Created with rkt id 09a631e6 | |
Feb 28 20:52:08.041: INFO: At 2016-02-28 20:51:51 -0800 PST - event for nginx: {kubelet spotter-kube-rkt-minion-8b1u} Started: Started with rkt id 09a631e6 | |
Feb 28 20:52:08.041: INFO: At 2016-02-28 20:51:58 -0800 PST - event for run-test: {job-controller } SuccessfulCreate: Created pod: run-test-7p84k | |
Feb 28 20:52:08.041: INFO: At 2016-02-28 20:51:58 -0800 PST - event for run-test-7p84k: {kubelet spotter-kube-rkt-minion-yo39} Pulling: pulling image "busybox" | |
Feb 28 20:52:08.041: INFO: At 2016-02-28 20:51:58 -0800 PST - event for run-test-7p84k: {default-scheduler } Scheduled: Successfully assigned run-test-7p84k to spotter-kube-rkt-minion-yo39 | |
Feb 28 20:52:08.041: INFO: At 2016-02-28 20:52:01 -0800 PST - event for run-test-7p84k: {kubelet spotter-kube-rkt-minion-yo39} Pulled: Successfully pulled image "busybox" | |
Feb 28 20:52:08.041: INFO: At 2016-02-28 20:52:04 -0800 PST - event for run-test-7p84k: {kubelet spotter-kube-rkt-minion-yo39} Created: Created with rkt id c9d06b14 | |
Feb 28 20:52:08.041: INFO: At 2016-02-28 20:52:04 -0800 PST - event for run-test-7p84k: {kubelet spotter-kube-rkt-minion-yo39} Started: Started with rkt id c9d06b14 | |
Feb 28 20:52:08.041: INFO: At 2016-02-28 20:52:06 -0800 PST - event for nginx: {kubelet spotter-kube-rkt-minion-8b1u} Killing: Killing with rkt id 09a631e6 | |
Feb 28 20:52:08.213: INFO: POD NODE PHASE GRACE CONDITIONS | |
Feb 28 20:52:08.213: INFO: run-test-7p84k spotter-kube-rkt-minion-yo39 Succeeded [{Ready False 0001-01-01 00:00:00 +0000 UTC 2016-02-28 20:51:58 -0800 PST PodCompleted }] | |
Feb 28 20:52:08.213: INFO: etcd-server-events-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 20:35:11 -0800 PST }] | |
Feb 28 20:52:08.213: INFO: etcd-server-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 20:35:11 -0800 PST }] | |
Feb 28 20:52:08.213: INFO: kube-apiserver-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:25:59 -0800 PST }] | |
Feb 28 20:52:08.213: INFO: kube-controller-manager-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:26:19 -0800 PST }] | |
Feb 28 20:52:08.213: INFO: kube-dns-v10-ucm9e spotter-kube-rkt-minion-yii0 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:40:53 -0800 PST }] | |
Feb 28 20:52:08.213: INFO: kube-scheduler-kubernetes-master-spotter-kube-rkt-master spotter-kube-rkt-master Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 20:48:05 -0800 PST }] | |
Feb 28 20:52:08.213: INFO: kubernetes-dashboard-v0.1.0-xx4kj spotter-kube-rkt-minion-8b1u Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:26:43 -0800 PST }] | |
Feb 28 20:52:08.213: INFO: l7-lb-controller-vg83c spotter-kube-rkt-minion-yo39 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:42:06 -0800 PST }] | |
Feb 28 20:52:08.213: INFO: | |
Feb 28 20:52:08.300: INFO: | |
Logging node info for node spotter-kube-rkt-master | |
Feb 28 20:52:08.381: INFO: Node Info: &{{ } {spotter-kube-rkt-master /api/v1/nodes/spotter-kube-rkt-master 28a99cb5-de94-11e5-9ed9-42010af00002 4371 0 2016-02-28 19:26:02 -0800 PST <nil> <nil> map[failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-master beta.kubernetes.io/instance-type:n1-standard-2 failure-domain.beta.kubernetes.io/region:us-east1] map[]} {10.245.0.0/24 1057893855773431411 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-master true} {map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] map[pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI}] [{OutOfDisk False 2016-02-28 20:52:01 -0800 PST 2016-02-28 19:26:02 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 20:52:01 -0800 PST 2016-02-28 19:26:02 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.2} {ExternalIP 104.196.32.11}] {{10250}} {18f8f8ae3bfd63d33ca6152c840ac772 18F8F8AE-3BFD-63D3-3CA6-152C840AC772 85ba1a68-a60b-4788-8d0d-31581d8a1dbc 4.4.1-coreos CoreOS 960.0.0 (Coeur Rouge) docker://1.10.0 v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[gcr.io/google_containers/kube-apiserver:e75cb1e68585bef333c384799aa55b36] 62275978} {[gcr.io/google_containers/kube-controller-manager:5ceda9f0fcc85cf5112fd4eab97edf76] 55494410} {[gcr.io/google_containers/kube-scheduler:34f7eca4d44271cb192f0c32a79fb31f] 36991674} {[python:2.7-slim-pyyaml] 205046931} {[gcr.io/google_containers/pause:2.0] 350164} {[gcr.io/google_containers/kube-registry-proxy:0.3] 151205002} {[gcr.io/google_containers/etcd:2.0.12] 15265152} {[gcr.io/google_containers/serve_hostname:1.1] 4522409}]}} | |
Feb 28 20:52:08.381: INFO: | |
Logging kubelet events for node spotter-kube-rkt-master | |
Feb 28 20:52:08.463: INFO: | |
Logging pods the kubelet thinks is on node spotter-kube-rkt-master | |
Feb 28 20:52:08.627: INFO: Unable to retrieve kubelet pods for node spotter-kube-rkt-master | |
Feb 28 20:52:08.627: INFO: | |
Logging node info for node spotter-kube-rkt-minion-8b1u | |
Feb 28 20:52:08.712: INFO: Node Info: &{{ } {spotter-kube-rkt-minion-8b1u /api/v1/nodes/spotter-kube-rkt-minion-8b1u 2e5bf2ae-de94-11e5-9ed9-42010af00002 4373 0 2016-02-28 19:26:11 -0800 PST <nil> <nil> map[failure-domain.beta.kubernetes.io/region:us-east1 failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-minion-8b1u beta.kubernetes.io/instance-type:n1-standard-2] map[]} {10.245.1.0/24 9623675355699996544 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-minion-8b1u false} {map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] map[pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI}] [{OutOfDisk False 2016-02-28 20:52:04 -0800 PST 2016-02-28 19:26:11 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 20:52:04 -0800 PST 2016-02-28 19:26:11 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.5} {ExternalIP 104.196.7.86}] {{10250}} {374318fe80761a3e4e3d72f72bd62802 374318FE-8076-1A3E-4E3D-72F72BD62802 b4070cab-7518-489e-9e3e-c5003667ca85 4.4.1-coreos CoreOS 960.0.0 (Coeur Rouge) rkt://1.0.0+git9003f4a-dirty v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[coreos.com/rkt/stage1-coreos:1.0.0+git9003f4a-dirty] 79951360} {[gcr.io/google_containers/mounttest-user:0.3] 1724928} {[b.gcr.io/k8s_authenticated_test/serve_hostname:1.1] 4529152} {[registry-1.docker.io/library/busybox:latest] 1315840} {[registry-1.docker.io/library/nginx:latest] 136025600} {[gcr.io/google_containers/pause:2.0] 355840} {[gcr.io/google_containers/resource_consumer:beta2] 138319872} {[gcr.io/google_containers/hostexec:1.2] 14018048} {[gcr.io/google_samples/gb-redisslave:v1] 113454592} {[registry-1.docker.io/library/redis:latest] 152067072} {[gcr.io/google_samples/gb-frontend:v4] 513876992} {[gcr.io/google_containers/update-demo:nautilus] 28160} {[gcr.io/google_containers/busybox:1.24] 1315840} {[gcr.io/google_containers/eptest:0.1] 2977792} {[gcr.io/google_containers/serve_hostname:1.1] 4529152} {[gcr.io/google_containers/mounttest:0.6] 2090496} {[gcr.io/google_containers/netexec:1.4] 7513088} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v0.1.0] 35423744}]}} | |
Feb 28 20:52:08.712: INFO: | |
Logging kubelet events for node spotter-kube-rkt-minion-8b1u | |
Feb 28 20:52:08.793: INFO: | |
Logging pods the kubelet thinks is on node spotter-kube-rkt-minion-8b1u | |
Feb 28 20:52:09.054: INFO: kubernetes-dashboard-v0.1.0-xx4kj started at <nil> (0 container statuses recorded) | |
Feb 28 20:52:09.682: INFO: | |
Latency metrics for node spotter-kube-rkt-minion-8b1u | |
Feb 28 20:52:09.682: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:25.815142s} | |
Feb 28 20:52:09.682: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:22.713821s} | |
Feb 28 20:52:09.682: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.99 Latency:20.982393s} | |
Feb 28 20:52:09.682: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.9 Latency:20.842s} | |
Feb 28 20:52:09.682: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.5 Latency:10.086495s} | |
Feb 28 20:52:09.682: INFO: | |
Logging node info for node spotter-kube-rkt-minion-yii0 | |
Feb 28 20:52:09.764: INFO: Node Info: &{{ } {spotter-kube-rkt-minion-yii0 /api/v1/nodes/spotter-kube-rkt-minion-yii0 28de6c1f-de94-11e5-9ed9-42010af00002 4376 0 2016-02-28 19:26:02 -0800 PST <nil> <nil> map[failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-minion-yii0 beta.kubernetes.io/instance-type:n1-standard-2 failure-domain.beta.kubernetes.io/region:us-east1] map[]} {10.245.3.0/24 9944607117135381947 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-minion-yii0 false} {map[memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI}] map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] [{OutOfDisk False 2016-02-28 20:52:06 -0800 PST 2016-02-28 19:40:42 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 20:52:06 -0800 PST 2016-02-28 19:40:42 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.3} {ExternalIP 104.196.102.116}] {{10250}} {b38c5389751fa1f6a57a3e0bc169499a B38C5389-751F-A1F6-A57A-3E0BC169499A c09d0613-ea30-4bb2-a090-591baae2e0d3 4.4.1-coreos CoreOS 970.1.0 (Coeur Rouge) rkt://1.0.0+git9003f4a-dirty v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[coreos.com/rkt/stage1-coreos:1.0.0+git9003f4a-dirty] 79951360} {[gcr.io/google_containers/pause:2.0] 355840} {[gcr.io/google_containers/netexec:1.4] 7513088} {[registry-1.docker.io/library/nginx:latest] 136025600} {[gcr.io/google_containers/resource_consumer:beta2] 138319872} {[gcr.io/google_samples/gb-frontend:v4] 513876992} {[gcr.io/google_containers/busybox:1.24] 1315840} {[gcr.io/google_containers/serve_hostname:1.1] 4529152} {[gcr.io/google_containers/exechealthz:1.0] 7314944} {[gcr.io/google_containers/skydns:2015-10-13-8c72f8c] 41955328} {[gcr.io/google_containers/kube2sky:1.12] 24685056} {[gcr.io/google_containers/etcd:2.0.9] 12827136}]}} | |
Feb 28 20:52:09.764: INFO: | |
Logging kubelet events for node spotter-kube-rkt-minion-yii0 | |
Feb 28 20:52:09.850: INFO: | |
Logging pods the kubelet thinks is on node spotter-kube-rkt-minion-yii0 | |
Feb 28 20:52:10.100: INFO: kube-dns-v10-ucm9e started at <nil> (0 container statuses recorded) | |
Feb 28 20:52:10.465: INFO: | |
Latency metrics for node spotter-kube-rkt-minion-yii0 | |
Feb 28 20:52:10.465: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:19.968975s} | |
Feb 28 20:52:10.465: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.99 Latency:18.96225s} | |
Feb 28 20:52:10.465: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:18.930723s} | |
Feb 28 20:52:10.465: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.9 Latency:17.13965s} | |
Feb 28 20:52:10.465: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.5 Latency:16.895004s} | |
Feb 28 20:52:10.465: INFO: | |
Logging node info for node spotter-kube-rkt-minion-yo39 | |
Feb 28 20:52:10.551: INFO: Node Info: &{{ } {spotter-kube-rkt-minion-yo39 /api/v1/nodes/spotter-kube-rkt-minion-yo39 28d9f8e6-de94-11e5-9ed9-42010af00002 4372 0 2016-02-28 19:26:02 -0800 PST <nil> <nil> map[failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-minion-yo39 beta.kubernetes.io/instance-type:n1-standard-2 failure-domain.beta.kubernetes.io/region:us-east1] map[]} {10.245.2.0/24 5812756469575456144 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-minion-yo39 false} {map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] [{OutOfDisk False 2016-02-28 20:52:02 -0800 PST 2016-02-28 19:39:36 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 20:52:02 -0800 PST 2016-02-28 19:39:36 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.4} {ExternalIP 104.196.107.90}] {{10250}} {d67f51f98b40c228849e17dc0b2cff8f D67F51F9-8B40-C228-849E-17DC0B2CFF8F 37634038-5d12-46d7-8197-fc5e74a7aef8 4.4.1-coreos CoreOS 970.1.0 (Coeur Rouge) rkt://1.0.0+git9003f4a-dirty v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[registry-1.docker.io/library/busybox:latest] 1315840} {[coreos.com/rkt/stage1-coreos:1.0.0+git9003f4a-dirty] 79951360} {[gcr.io/google_containers/pause:2.0] 355840} {[b.gcr.io/k8s_authenticated_test/serve_hostname:1.1] 4529152} {[gcr.io/google_containers/hostexec:1.2] 14018048} {[registry-1.docker.io/library/nginx:latest] 136025600} {[gcr.io/google_containers/goproxy:0.1] 5495808} {[gcr.io/google_containers/busybox:1.24] 1315840} {[gcr.io/google_containers/resource_consumer:beta2] 138319872} {[gcr.io/google_samples/gb-redisslave:v1] 113454592} {[gcr.io/google_samples/gb-frontend:v4] 513876992} {[gcr.io/google_containers/update-demo:nautilus] 28160} {[gcr.io/google_containers/netexec:1.4] 7513088} {[gcr.io/google_containers/serve_hostname:1.1] 4529152} {[gcr.io/google_containers/glbc:0.5.1] 203040256} {[gcr.io/google_containers/defaultbackend:1.0] 7729152}]}} | |
Feb 28 20:52:10.551: INFO: | |
Logging kubelet events for node spotter-kube-rkt-minion-yo39 | |
Feb 28 20:52:10.631: INFO: | |
Logging pods the kubelet thinks is on node spotter-kube-rkt-minion-yo39 | |
Feb 28 20:52:10.883: INFO: l7-lb-controller-vg83c started at <nil> (0 container statuses recorded) | |
Feb 28 20:52:11.460: INFO: | |
Latency metrics for node spotter-kube-rkt-minion-yo39 | |
Feb 28 20:52:11.460: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:23.890923s} | |
Feb 28 20:52:11.460: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:21.961607s} | |
Feb 28 20:52:11.460: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.99 Latency:20.436927s} | |
Feb 28 20:52:11.460: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.9 Latency:20.364827s} | |
Feb 28 20:52:11.460: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.5 Latency:19.291932s} | |
Feb 28 20:52:11.460: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-kubectl-xklqm" for this suite. | |
• Failure [32.665 seconds] | |
Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1082 | |
Simple pod | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:508 | |
should support inline execution and attach [It] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:490 | |
Expected | |
<string>: Waiting for pod e2e-tests-kubectl-xklqm/run-test-7p84k to be running, status is Pending, pod ready: false | |
Waiting for pod e2e-tests-kubectl-xklqm/run-test-7p84k to be running, status is Pending, pod ready: false | |
Waiting for pod e2e-tests-kubectl-xklqm/run-test-7p84k to be running, status is Pending, pod ready: false | |
stdin closed | |
to contain substring | |
<string>: abcd1234 | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:449 | |
------------------------------ | |
Generated release_1_2 clientset | |
should create pods, delete pods, watch pods | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/generated_clientset.go:158 | |
[BeforeEach] Generated release_1_2 clientset | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:83 | |
STEP: Creating a kubernetes client | |
Feb 28 20:52:16.898: INFO: >>> testContext.KubeConfig: /home/spotter/.kube/config | |
STEP: Building a namespace api object | |
Feb 28 20:52:16.989: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-clientset-tj5dc | |
Feb 28 20:52:17.071: INFO: Service account default in ns e2e-tests-clientset-tj5dc with secrets found. (81.575364ms) | |
STEP: Waiting for a default service account to be provisioned in namespace | |
Feb 28 20:52:17.071: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-clientset-tj5dc | |
Feb 28 20:52:17.151: INFO: Service account default in ns e2e-tests-clientset-tj5dc with secrets found. (79.796474ms) | |
[It] should create pods, delete pods, watch pods | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/generated_clientset.go:158 | |
STEP: creating the pod | |
STEP: setting up watch | |
STEP: submitting the pod to kubernetes | |
STEP: verifying the pod is in kubernetes | |
STEP: verifying pod creation was observed | |
Feb 28 20:52:17.491: INFO: Waiting up to 5m0s for pod pod35172d51-dea0-11e5-a1fb-54ee75510eb4 status to be running | |
Feb 28 20:52:17.572: INFO: Waiting for pod pod35172d51-dea0-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-clientset-tj5dc' status to be 'running'(found phase: "Pending", readiness: false) (81.388021ms elapsed) | |
Feb 28 20:52:19.657: INFO: Waiting for pod pod35172d51-dea0-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-clientset-tj5dc' status to be 'running'(found phase: "Pending", readiness: false) (2.16609406s elapsed) | |
Feb 28 20:52:21.750: INFO: Waiting for pod pod35172d51-dea0-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-clientset-tj5dc' status to be 'running'(found phase: "Pending", readiness: false) (4.2589794s elapsed) | |
Feb 28 20:52:23.836: INFO: Waiting for pod pod35172d51-dea0-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-clientset-tj5dc' status to be 'running'(found phase: "Pending", readiness: false) (6.34526845s elapsed) | |
Feb 28 20:52:25.928: INFO: Waiting for pod pod35172d51-dea0-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-clientset-tj5dc' status to be 'running'(found phase: "Pending", readiness: false) (8.436835028s elapsed) | |
Feb 28 20:52:28.019: INFO: Waiting for pod pod35172d51-dea0-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-clientset-tj5dc' status to be 'running'(found phase: "Pending", readiness: false) (10.528021904s elapsed) | |
Feb 28 20:52:30.102: INFO: Waiting for pod pod35172d51-dea0-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-clientset-tj5dc' status to be 'running'(found phase: "Pending", readiness: false) (12.610501269s elapsed) | |
Feb 28 20:52:32.192: INFO: Waiting for pod pod35172d51-dea0-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-clientset-tj5dc' status to be 'running'(found phase: "Pending", readiness: false) (14.701176853s elapsed) | |
Feb 28 20:52:34.276: INFO: Waiting for pod pod35172d51-dea0-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-clientset-tj5dc' status to be 'running'(found phase: "Pending", readiness: false) (16.785103623s elapsed) | |
Feb 28 20:52:36.369: INFO: Waiting for pod pod35172d51-dea0-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-clientset-tj5dc' status to be 'running'(found phase: "Pending", readiness: false) (18.878173188s elapsed) | |
Feb 28 20:52:38.462: INFO: Waiting for pod pod35172d51-dea0-11e5-a1fb-54ee75510eb4 in namespace 'e2e-tests-clientset-tj5dc' status to be 'running'(found phase: "Pending", readiness: false) (20.971300374s elapsed) | |
Feb 28 20:52:40.559: INFO: Found pod 'pod35172d51-dea0-11e5-a1fb-54ee75510eb4' on node 'spotter-kube-rkt-minion-8b1u' | |
STEP: deleting the pod gracefully | |
STEP: verifying pod deletion was observed | |
[AfterEach] Generated release_1_2 clientset | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:84 | |
Feb 28 20:52:41.490: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-clientset-tj5dc" for this suite. | |
• [SLOW TEST:30.016 seconds] | |
Generated release_1_2 clientset | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/generated_clientset.go:159 | |
should create pods, delete pods, watch pods | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/generated_clientset.go:158 | |
------------------------------ | |
SSH | |
should SSH to all nodes and run commands | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ssh.go:91 | |
[BeforeEach] SSH | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:83 | |
STEP: Creating a kubernetes client | |
Feb 28 20:52:46.914: INFO: >>> testContext.KubeConfig: /home/spotter/.kube/config | |
STEP: Building a namespace api object | |
Feb 28 20:52:47.001: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-ssh-lbtmb | |
Feb 28 20:52:47.085: INFO: Service account default in ns e2e-tests-ssh-lbtmb had 0 secrets, ignoring for 2s: <nil> | |
Feb 28 20:52:49.171: INFO: Service account default in ns e2e-tests-ssh-lbtmb with secrets found. (2.169247396s) | |
STEP: Waiting for a default service account to be provisioned in namespace | |
Feb 28 20:52:49.171: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-ssh-lbtmb | |
Feb 28 20:52:49.251: INFO: Service account default in ns e2e-tests-ssh-lbtmb with secrets found. (79.955374ms) | |
[BeforeEach] SSH | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ssh.go:33 | |
[It] should SSH to all nodes and run commands | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ssh.go:91 | |
STEP: Getting all nodes' SSH-able IP addresses | |
STEP: SSH'ing to all nodes and running echo "Hello" | |
Feb 28 20:52:50.436: INFO: Got stdout from 104.196.32.11:22: Hello | |
Feb 28 20:52:51.530: INFO: Got stdout from 104.196.7.86:22: Hello | |
Feb 28 20:52:52.583: INFO: Got stdout from 104.196.102.116:22: Hello | |
Feb 28 20:52:53.661: INFO: Got stdout from 104.196.107.90:22: Hello | |
STEP: SSH'ing to all nodes and running echo "Hello from $(whoami)@$(hostname)" | |
Feb 28 20:52:54.785: INFO: Got stdout from 104.196.32.11:22: Hello from spotter@spotter-kube-rkt-master | |
Feb 28 20:52:55.915: INFO: Got stdout from 104.196.7.86:22: Hello from spotter@spotter-kube-rkt-minion-8b1u | |
Feb 28 20:52:56.999: INFO: Got stdout from 104.196.102.116:22: Hello from spotter@spotter-kube-rkt-minion-yii0 | |
Feb 28 20:52:58.066: INFO: Got stdout from 104.196.107.90:22: Hello from spotter@spotter-kube-rkt-minion-yo39 | |
STEP: SSH'ing to all nodes and running echo "foo" | grep "bar" | |
STEP: SSH'ing to all nodes and running echo "Out" && echo "Error" >&2 && exit 7 | |
Feb 28 20:53:03.520: INFO: Got stdout from 104.196.32.11:22: Out | |
Feb 28 20:53:03.520: INFO: Got stderr from 104.196.32.11:22: Error | |
Feb 28 20:53:04.640: INFO: Got stdout from 104.196.7.86:22: Out | |
Feb 28 20:53:04.640: INFO: Got stderr from 104.196.7.86:22: Error | |
Feb 28 20:53:05.738: INFO: Got stdout from 104.196.102.116:22: Out | |
Feb 28 20:53:05.738: INFO: Got stderr from 104.196.102.116:22: Error | |
Feb 28 20:53:06.826: INFO: Got stdout from 104.196.107.90:22: Out | |
Feb 28 20:53:06.826: INFO: Got stderr from 104.196.107.90:22: Error | |
STEP: SSH'ing to a nonexistent host | |
error dialing [email protected]: 'dial tcp: missing port in address i.do.not.exist', retrying | |
error dialing [email protected]: 'dial tcp: missing port in address i.do.not.exist', retrying | |
error dialing [email protected]: 'dial tcp: missing port in address i.do.not.exist', retrying | |
error dialing [email protected]: 'dial tcp: missing port in address i.do.not.exist', retrying | |
[AfterEach] SSH | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:84 | |
Feb 28 20:53:26.945: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-ssh-lbtmb" for this suite. | |
• [SLOW TEST:40.375 seconds] | |
SSH | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ssh.go:92 | |
should SSH to all nodes and run commands | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ssh.go:91 | |
------------------------------ | |
S | |
------------------------------ | |
Services | |
should be able to create a functioning NodePort service | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:404 | |
[BeforeEach] Services | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:83 | |
STEP: Creating a kubernetes client | |
Feb 28 20:53:27.289: INFO: >>> testContext.KubeConfig: /home/spotter/.kube/config | |
STEP: Building a namespace api object | |
Feb 28 20:53:27.382: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-hax49 | |
Feb 28 20:53:27.460: INFO: Service account default in ns e2e-tests-services-hax49 with secrets found. (78.055966ms) | |
STEP: Waiting for a default service account to be provisioned in namespace | |
Feb 28 20:53:27.460: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-hax49 | |
Feb 28 20:53:27.545: INFO: Service account default in ns e2e-tests-services-hax49 with secrets found. (85.794809ms) | |
[BeforeEach] Services | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:73 | |
Feb 28 20:53:27.546: INFO: >>> testContext.KubeConfig: /home/spotter/.kube/config | |
[It] should be able to create a functioning NodePort service | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:404 | |
STEP: creating service nodeport-test with type=NodePort in namespace e2e-tests-services-hax49 | |
STEP: creating pod to be part of service nodeport-test | |
Feb 28 20:53:27.819: INFO: Waiting up to 2m0s for 1 pods to be created | |
Feb 28 20:53:27.905: INFO: Found all 1 pods | |
Feb 28 20:53:27.905: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [nodeport-test-2dmzr] | |
Feb 28 20:53:27.905: INFO: Waiting up to 2m0s for pod nodeport-test-2dmzr status to be running and ready | |
Feb 28 20:53:27.991: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Pending", readiness: false) (85.785041ms elapsed)
Feb 28 20:53:30.077: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Pending", readiness: false) (2.171986455s elapsed)
Feb 28 20:53:32.161: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (4.255942572s elapsed)
Feb 28 20:53:34.250: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (6.345403084s elapsed)
Feb 28 20:53:36.331: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (8.425926594s elapsed)
Feb 28 20:53:38.417: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (10.511820501s elapsed)
Feb 28 20:53:40.501: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (12.596597675s elapsed)
Feb 28 20:53:42.587: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (14.681759479s elapsed)
Feb 28 20:53:44.669: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (16.764612074s elapsed)
Feb 28 20:53:46.752: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (18.847106582s elapsed)
Feb 28 20:53:48.832: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (20.9275741s elapsed)
Feb 28 20:53:50.915: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (23.010552979s elapsed)
Feb 28 20:53:53.000: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (25.094787248s elapsed)
Feb 28 20:53:55.090: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (27.184855361s elapsed)
Feb 28 20:53:57.177: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (29.27181601s elapsed)
Feb 28 20:53:59.258: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (31.35344965s elapsed)
Feb 28 20:54:01.337: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (33.432681104s elapsed)
Feb 28 20:54:03.436: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (35.531363878s elapsed)
Feb 28 20:54:05.516: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (37.611653592s elapsed)
Feb 28 20:54:07.601: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (39.696415056s elapsed)
Feb 28 20:54:09.683: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (41.778078033s elapsed)
Feb 28 20:54:11.773: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (43.867988428s elapsed)
Feb 28 20:54:13.852: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (45.947193191s elapsed)
Feb 28 20:54:15.938: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (48.033716821s elapsed)
Feb 28 20:54:18.023: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (50.118231211s elapsed)
Feb 28 20:54:20.110: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (52.204946014s elapsed)
Feb 28 20:54:22.193: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (54.288668454s elapsed)
Feb 28 20:54:24.275: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (56.370295676s elapsed)
Feb 28 20:54:26.362: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (58.456870548s elapsed)
Feb 28 20:54:28.443: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (1m0.538378626s elapsed)
Feb 28 20:54:30.529: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (1m2.624740187s elapsed)
Feb 28 20:54:32.620: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (1m4.71546968s elapsed)
Feb 28 20:54:34.703: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (1m6.798105661s elapsed)
Feb 28 20:54:36.783: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (1m8.878066731s elapsed)
Feb 28 20:54:38.867: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (1m10.962253149s elapsed)
Feb 28 20:54:40.952: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (1m13.046787104s elapsed)
Feb 28 20:54:43.031: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (1m15.125905462s elapsed)
Feb 28 20:54:45.117: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (1m17.212312936s elapsed)
Feb 28 20:54:47.201: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (1m19.29618005s elapsed)
Feb 28 20:54:49.289: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (1m21.384328251s elapsed)
Feb 28 20:54:51.381: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (1m23.476298613s elapsed)
Feb 28 20:54:53.468: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (1m25.563100128s elapsed)
Feb 28 20:54:55.555: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (1m27.650742044s elapsed)
Feb 28 20:54:57.634: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (1m29.729460351s elapsed)
Feb 28 20:54:59.715: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (1m31.809927763s elapsed)
Feb 28 20:55:01.797: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (1m33.892539286s elapsed)
Feb 28 20:55:03.879: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (1m35.973991947s elapsed)
Feb 28 20:55:05.959: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (1m38.054546888s elapsed)
Feb 28 20:55:08.042: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (1m40.137169338s elapsed)
Feb 28 20:55:10.127: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (1m42.222330666s elapsed)
Feb 28 20:55:12.207: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (1m44.301873073s elapsed)
Feb 28 20:55:14.287: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (1m46.382005999s elapsed)
Feb 28 20:55:16.373: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (1m48.468226736s elapsed)
Feb 28 20:55:18.455: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (1m50.550467889s elapsed)
Feb 28 20:55:20.540: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (1m52.635207052s elapsed)
Feb 28 20:55:22.623: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (1m54.717928214s elapsed)
Feb 28 20:55:24.706: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (1m56.80099825s elapsed)
Feb 28 20:55:26.788: INFO: Waiting for pod nodeport-test-2dmzr in namespace 'e2e-tests-services-hax49' status to be 'running and ready'(found phase: "Running", readiness: false) (1m58.883184217s elapsed)
Feb 28 20:55:28.788: INFO: Pod nodeport-test-2dmzr failed to be running and ready.
Feb 28 20:55:28.788: INFO: Wanted all 1 pods to be running and ready. Result: false. Pods: [nodeport-test-2dmzr]
Feb 28 20:55:28.788: FAIL: Failed waiting for pods to be running: Timeout waiting for 1 pods to be ready
[AfterEach] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:84
STEP: Collecting events from namespace "e2e-tests-services-hax49".
Feb 28 20:55:28.901: INFO: At 2016-02-28 20:53:27 -0800 PST - event for nodeport-test: {replication-controller } SuccessfulCreate: Created pod: nodeport-test-2dmzr
Feb 28 20:55:28.901: INFO: At 2016-02-28 20:53:27 -0800 PST - event for nodeport-test-2dmzr: {kubelet spotter-kube-rkt-minion-8b1u} Pulled: Container image "gcr.io/google_containers/netexec:1.4" already present on machine
Feb 28 20:55:28.901: INFO: At 2016-02-28 20:53:27 -0800 PST - event for nodeport-test-2dmzr: {default-scheduler } Scheduled: Successfully assigned nodeport-test-2dmzr to spotter-kube-rkt-minion-8b1u
Feb 28 20:55:28.901: INFO: At 2016-02-28 20:53:30 -0800 PST - event for nodeport-test-2dmzr: {kubelet spotter-kube-rkt-minion-8b1u} Started: Started with rkt id 4b6b96ea
Feb 28 20:55:28.901: INFO: At 2016-02-28 20:53:30 -0800 PST - event for nodeport-test-2dmzr: {kubelet spotter-kube-rkt-minion-8b1u} Created: Created with rkt id 4b6b96ea
Feb 28 20:55:28.901: INFO: At 2016-02-28 20:53:34 -0800 PST - event for nodeport-test-2dmzr: {kubelet spotter-kube-rkt-minion-8b1u} Created: Created with rkt id 3426df0b
Feb 28 20:55:28.901: INFO: At 2016-02-28 20:53:34 -0800 PST - event for nodeport-test-2dmzr: {kubelet spotter-kube-rkt-minion-8b1u} Started: Started with rkt id 3426df0b
Feb 28 20:55:28.901: INFO: At 2016-02-28 20:54:39 -0800 PST - event for nodeport-test-2dmzr: {kubelet spotter-kube-rkt-minion-8b1u} Started: Started with rkt id 9dc22344
Feb 28 20:55:28.901: INFO: At 2016-02-28 20:54:39 -0800 PST - event for nodeport-test-2dmzr: {kubelet spotter-kube-rkt-minion-8b1u} Unhealthy: Readiness probe failed: Get http://10.245.1.3:80/hostName: dial tcp 10.245.1.3:80: connection refused
Feb 28 20:55:28.901: INFO: At 2016-02-28 20:54:39 -0800 PST - event for nodeport-test-2dmzr: {kubelet spotter-kube-rkt-minion-8b1u} Created: Created with rkt id 9dc22344
Feb 28 20:55:28.901: INFO: At 2016-02-28 20:54:43 -0800 PST - event for nodeport-test-2dmzr: {kubelet spotter-kube-rkt-minion-8b1u} Started: Started with rkt id 10a4bfe3
Feb 28 20:55:28.901: INFO: At 2016-02-28 20:54:43 -0800 PST - event for nodeport-test-2dmzr: {kubelet spotter-kube-rkt-minion-8b1u} Created: Created with rkt id 10a4bfe3
Feb 28 20:55:29.077: INFO: POD                                                                NODE                          PHASE    GRACE  CONDITIONS
Feb 28 20:55:29.077: INFO: nodeport-test-2dmzr                                                spotter-kube-rkt-minion-8b1u  Running         [{Ready False 0001-01-01 00:00:00 +0000 UTC 2016-02-28 20:53:27 -0800 PST ContainersNotReady containers with unready status: [netexec]}]
Feb 28 20:55:29.077: INFO: etcd-server-events-kubernetes-master-spotter-kube-rkt-master       spotter-kube-rkt-master       Running         [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 20:35:11 -0800 PST  }]
Feb 28 20:55:29.077: INFO: etcd-server-kubernetes-master-spotter-kube-rkt-master              spotter-kube-rkt-master       Running         [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 20:35:11 -0800 PST  }]
Feb 28 20:55:29.077: INFO: kube-apiserver-kubernetes-master-spotter-kube-rkt-master           spotter-kube-rkt-master       Running         [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:25:59 -0800 PST  }]
Feb 28 20:55:29.077: INFO: kube-controller-manager-kubernetes-master-spotter-kube-rkt-master  spotter-kube-rkt-master       Running         [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:26:19 -0800 PST  }]
Feb 28 20:55:29.077: INFO: kube-dns-v10-ucm9e                                                 spotter-kube-rkt-minion-yii0  Running         [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:40:53 -0800 PST  }]
Feb 28 20:55:29.077: INFO: kube-scheduler-kubernetes-master-spotter-kube-rkt-master           spotter-kube-rkt-master       Running         [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 20:48:05 -0800 PST  }]
Feb 28 20:55:29.077: INFO: kubernetes-dashboard-v0.1.0-xx4kj                                  spotter-kube-rkt-minion-8b1u  Running         [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:26:43 -0800 PST  }]
Feb 28 20:55:29.077: INFO: l7-lb-controller-vg83c                                             spotter-kube-rkt-minion-yo39  Running         [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-02-28 19:42:06 -0800 PST  }]
Feb 28 20:55:29.077: INFO:
Feb 28 20:55:29.162: INFO:
Logging node info for node spotter-kube-rkt-master
Feb 28 20:55:29.244: INFO: Node Info: &{{ } {spotter-kube-rkt-master /api/v1/nodes/spotter-kube-rkt-master 28a99cb5-de94-11e5-9ed9-42010af00002 4505 0 2016-02-28 19:26:02 -0800 PST <nil> <nil> map[beta.kubernetes.io/instance-type:n1-standard-2 failure-domain.beta.kubernetes.io/region:us-east1 failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-master] map[]} {10.245.0.0/24 1057893855773431411 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-master true} {map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] [{OutOfDisk False 2016-02-28 20:55:21 -0800 PST 2016-02-28 19:26:02 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 20:55:21 -0800 PST 2016-02-28 19:26:02 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.2} {ExternalIP 104.196.32.11}] {{10250}} {18f8f8ae3bfd63d33ca6152c840ac772 18F8F8AE-3BFD-63D3-3CA6-152C840AC772 85ba1a68-a60b-4788-8d0d-31581d8a1dbc 4.4.1-coreos CoreOS 960.0.0 (Coeur Rouge) docker://1.10.0 v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[gcr.io/google_containers/kube-apiserver:e75cb1e68585bef333c384799aa55b36] 62275978} {[gcr.io/google_containers/kube-controller-manager:5ceda9f0fcc85cf5112fd4eab97edf76] 55494410} {[gcr.io/google_containers/kube-scheduler:34f7eca4d44271cb192f0c32a79fb31f] 36991674} {[python:2.7-slim-pyyaml] 205046931} {[gcr.io/google_containers/pause:2.0] 350164} {[gcr.io/google_containers/kube-registry-proxy:0.3] 151205002} {[gcr.io/google_containers/etcd:2.0.12] 15265152} {[gcr.io/google_containers/serve_hostname:1.1] 4522409}]}}
Feb 28 20:55:29.244: INFO:
Logging kubelet events for node spotter-kube-rkt-master
Feb 28 20:55:29.329: INFO:
Logging pods the kubelet thinks is on node spotter-kube-rkt-master
Feb 28 20:55:29.495: INFO: Unable to retrieve kubelet pods for node spotter-kube-rkt-master
Feb 28 20:55:29.495: INFO:
Logging node info for node spotter-kube-rkt-minion-8b1u
Feb 28 20:55:29.576: INFO: Node Info: &{{ } {spotter-kube-rkt-minion-8b1u /api/v1/nodes/spotter-kube-rkt-minion-8b1u 2e5bf2ae-de94-11e5-9ed9-42010af00002 4507 0 2016-02-28 19:26:11 -0800 PST <nil> <nil> map[beta.kubernetes.io/instance-type:n1-standard-2 failure-domain.beta.kubernetes.io/region:us-east1 failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-minion-8b1u] map[]} {10.245.1.0/24 9623675355699996544 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-minion-8b1u false} {map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] [{OutOfDisk False 2016-02-28 20:55:25 -0800 PST 2016-02-28 19:26:11 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 20:55:25 -0800 PST 2016-02-28 19:26:11 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.5} {ExternalIP 104.196.7.86}] {{10250}} {374318fe80761a3e4e3d72f72bd62802 374318FE-8076-1A3E-4E3D-72F72BD62802 b4070cab-7518-489e-9e3e-c5003667ca85 4.4.1-coreos CoreOS 960.0.0 (Coeur Rouge) rkt://1.0.0+git9003f4a-dirty v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[coreos.com/rkt/stage1-coreos:1.0.0+git9003f4a-dirty] 79951360} {[gcr.io/google_containers/nginx:1.7.9] 94539264} {[gcr.io/google_containers/mounttest-user:0.3] 1724928} {[b.gcr.io/k8s_authenticated_test/serve_hostname:1.1] 4529152} {[registry-1.docker.io/library/busybox:latest] 1315840} {[registry-1.docker.io/library/nginx:latest] 136025600} {[gcr.io/google_containers/pause:2.0] 355840} {[gcr.io/google_containers/resource_consumer:beta2] 138319872} {[gcr.io/google_containers/hostexec:1.2] 14018048} {[gcr.io/google_samples/gb-redisslave:v1] 113454592} {[registry-1.docker.io/library/redis:latest] 152067072} {[gcr.io/google_samples/gb-frontend:v4] 513876992} {[gcr.io/google_containers/update-demo:nautilus] 28160} {[gcr.io/google_containers/busybox:1.24] 1315840} {[gcr.io/google_containers/eptest:0.1] 2977792} {[gcr.io/google_containers/serve_hostname:1.1] 4529152} {[gcr.io/google_containers/mounttest:0.6] 2090496} {[gcr.io/google_containers/netexec:1.4] 7513088} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v0.1.0] 35423744}]}}
Feb 28 20:55:29.576: INFO:
Logging kubelet events for node spotter-kube-rkt-minion-8b1u
Feb 28 20:55:29.664: INFO:
Logging pods the kubelet thinks is on node spotter-kube-rkt-minion-8b1u
Feb 28 20:55:29.913: INFO: kubernetes-dashboard-v0.1.0-xx4kj started at <nil> (0 container statuses recorded)
Feb 28 20:55:30.448: INFO:
Latency metrics for node spotter-kube-rkt-minion-8b1u
Feb 28 20:55:30.448: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:25.815142s}
Feb 28 20:55:30.448: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:22.636731s}
Feb 28 20:55:30.448: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.99 Latency:21.217154s}
Feb 28 20:55:30.448: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.9 Latency:20.982393s}
Feb 28 20:55:30.448: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.5 Latency:19.618864s}
Feb 28 20:55:30.448: INFO:
Logging node info for node spotter-kube-rkt-minion-yii0
Feb 28 20:55:30.529: INFO: Node Info: &{{ } {spotter-kube-rkt-minion-yii0 /api/v1/nodes/spotter-kube-rkt-minion-yii0 28de6c1f-de94-11e5-9ed9-42010af00002 4508 0 2016-02-28 19:26:02 -0800 PST <nil> <nil> map[failure-domain.beta.kubernetes.io/region:us-east1 failure-domain.beta.kubernetes.io/zone:us-east1-b kubernetes.io/hostname:spotter-kube-rkt-minion-yii0 beta.kubernetes.io/instance-type:n1-standard-2] map[]} {10.245.3.0/24 9944607117135381947 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-minion-yii0 false} {map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] map[pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI}] [{OutOfDisk False 2016-02-28 20:55:27 -0800 PST 2016-02-28 19:40:42 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 20:55:27 -0800 PST 2016-02-28 19:40:42 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.3} {ExternalIP 104.196.102.116}] {{10250}} {b38c5389751fa1f6a57a3e0bc169499a B38C5389-751F-A1F6-A57A-3E0BC169499A c09d0613-ea30-4bb2-a090-591baae2e0d3 4.4.1-coreos CoreOS 970.1.0 (Coeur Rouge) rkt://1.0.0+git9003f4a-dirty v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[coreos.com/rkt/stage1-coreos:1.0.0+git9003f4a-dirty] 79951360} {[gcr.io/google_containers/pause:2.0] 355840} {[gcr.io/google_containers/netexec:1.4] 7513088} {[registry-1.docker.io/library/nginx:latest] 136025600} {[gcr.io/google_containers/resource_consumer:beta2] 138319872} {[gcr.io/google_samples/gb-frontend:v4] 513876992} {[gcr.io/google_containers/busybox:1.24] 1315840} {[gcr.io/google_containers/serve_hostname:1.1] 4529152} {[gcr.io/google_containers/exechealthz:1.0] 7314944} {[gcr.io/google_containers/skydns:2015-10-13-8c72f8c] 41955328} {[gcr.io/google_containers/kube2sky:1.12] 24685056} {[gcr.io/google_containers/etcd:2.0.9] 12827136}]}}
Feb 28 20:55:30.529: INFO:
Logging kubelet events for node spotter-kube-rkt-minion-yii0
Feb 28 20:55:30.611: INFO:
Logging pods the kubelet thinks is on node spotter-kube-rkt-minion-yii0
Feb 28 20:55:30.860: INFO: kube-dns-v10-ucm9e started at <nil> (0 container statuses recorded)
Feb 28 20:55:31.207: INFO:
Latency metrics for node spotter-kube-rkt-minion-yii0
Feb 28 20:55:31.207: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:19.968975s}
Feb 28 20:55:31.207: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:19.721505s}
Feb 28 20:55:31.207: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.99 Latency:18.96225s}
Feb 28 20:55:31.207: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.9 Latency:18.96225s}
Feb 28 20:55:31.207: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.5 Latency:16.900457s}
Feb 28 20:55:31.207: INFO:
Logging node info for node spotter-kube-rkt-minion-yo39
Feb 28 20:55:31.293: INFO: Node Info: &{{ } {spotter-kube-rkt-minion-yo39 /api/v1/nodes/spotter-kube-rkt-minion-yo39 28d9f8e6-de94-11e5-9ed9-42010af00002 4506 0 2016-02-28 19:26:02 -0800 PST <nil> <nil> map[kubernetes.io/hostname:spotter-kube-rkt-minion-yo39 beta.kubernetes.io/instance-type:n1-standard-2 failure-domain.beta.kubernetes.io/region:us-east1 failure-domain.beta.kubernetes.io/zone:us-east1-b] map[]} {10.245.2.0/24 5812756469575456144 gce://coreos-gce-testing/us-east1-b/spotter-kube-rkt-minion-yo39 false} {map[pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI}] map[cpu:{2.000 DecimalSI} memory:{7848292352.000 BinarySI} pods:{40.000 DecimalSI}] [{OutOfDisk False 2016-02-28 20:55:23 -0800 PST 2016-02-28 19:39:36 -0800 PST KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-02-28 20:55:23 -0800 PST 2016-02-28 19:39:36 -0800 PST KubeletReady kubelet is posting ready status}] [{InternalIP 10.240.0.4} {ExternalIP 104.196.107.90}] {{10250}} {d67f51f98b40c228849e17dc0b2cff8f D67F51F9-8B40-C228-849E-17DC0B2CFF8F 37634038-5d12-46d7-8197-fc5e74a7aef8 4.4.1-coreos CoreOS 970.1.0 (Coeur Rouge) rkt://1.0.0+git9003f4a-dirty v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a v1.2.0-master+2e513aa477d14b5cd479c66217d57ed76d09c37a} [{[registry-1.docker.io/library/busybox:latest] 1315840} {[coreos.com/rkt/stage1-coreos:1.0.0+git9003f4a-dirty] 79951360} {[gcr.io/google_containers/pause:2.0] 355840} {[b.gcr.io/k8s_authenticated_test/serve_hostname:1.1] 4529152} {[gcr.io/google_containers/hostexec:1.2] 14018048} {[registry-1.docker.io/library/nginx:latest] 136025600} {[gcr.io/google_containers/goproxy:0.1] 5495808} {[gcr.io/google_containers/busybox:1.24] 1315840} {[gcr.io/google_containers/resource_consumer:beta2] 138319872} {[gcr.io/google_samples/gb-redisslave:v1] 113454592} {[gcr.io/google_samples/gb-frontend:v4] 513876992} {[gcr.io/google_containers/update-demo:nautilus] 28160} {[gcr.io/google_containers/netexec:1.4] 7513088} {[gcr.io/google_containers/serve_hostname:1.1] 4529152} {[gcr.io/google_containers/glbc:0.5.1] 203040256} {[gcr.io/google_containers/defaultbackend:1.0] 7729152}]}}
Feb 28 20:55:31.293: INFO:
Logging kubelet events for node spotter-kube-rkt-minion-yo39
Feb 28 20:55:31.377: INFO:
Logging pods the kubelet thinks is on node spotter-kube-rkt-minion-yo39
Feb 28 20:55:31.641: INFO: l7-lb-controller-vg83c started at <nil> (0 container statuses recorded)
Feb 28 20:55:32.106: INFO:
Latency metrics for node spotter-kube-rkt-minion-yo39
Feb 28 20:55:32.106: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:23.890923s}
Feb 28 20:55:32.106: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:21.976088s}
Feb 28 20:55:32.106: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.99 Latency:20.436927s}
Feb 28 20:55:32.106: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.9 Latency:20.364827s}
Feb 28 20:55:32.106: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.5 Latency:19.291932s}
Feb 28 20:55:32.106: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-services-hax49" for this suite.
• Failure [130.233 seconds]
Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:902
should be able to create a functioning NodePort service [It]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:404
Feb 28 20:55:28.788: Failed waiting for pods to be running: Timeout waiting for 1 pods to be ready
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1742
------------------------------
S
------------------------------
Proxy version v1
should proxy through a service and a pod [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:244
[BeforeEach] version v1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:83
STEP: Creating a kubernetes client
Feb 28 20:55:37.523: INFO: >>> testContext.KubeConfig: /home/spotter/.kube/config
STEP: Building a namespace api object
Feb 28 20:55:37.611: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-f07sq
Feb 28 20:55:37.690: INFO: Service account default in ns e2e-tests-proxy-f07sq had 0 secrets, ignoring for 2s: <nil>
Feb 28 20:55:39.774: INFO: Service account default in ns e2e-tests-proxy-f07sq with secrets found. (2.162594124s)
STEP: Waiting for a default service account to be provisioned in namespace
Feb 28 20:55:39.774: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-f07sq
Feb 28 20:55:39.855: INFO: Service account default in ns e2e-tests-proxy-f07sq with secrets found. (80.729312ms)
[It] should proxy through a service and a pod [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:244
STEP: creating replication controller proxy-service-n9r2a in namespace e2e-tests-proxy-f07sq
Feb 28 20:55:40.036: INFO: Created replication controller with name: proxy-service-n9r2a, namespace: e2e-tests-proxy-f07sq, replica count: 1
Feb 28 20:55:41.036: INFO: proxy-service-n9r2a Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Feb 28 20:55:42.037: INFO: proxy-service-n9r2a Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Feb 28 20:55:43.037: INFO: proxy-service-n9r2a Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Feb 28 20:55:44.037: INFO: proxy-service-n9r2a Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Feb 28 20:55:45.037: INFO: proxy-service-n9r2a Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Feb 28 20:55:46.037: INFO: proxy-service-n9r2a Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Feb 28 20:55:47.037: INFO: proxy-service-n9r2a Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
Feb 28 20:55:48.038: INFO: proxy-service-n9r2a Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
Feb 28 20:55:49.038: INFO: proxy-service-n9r2a Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
Feb 28 20:55:50.038: INFO: proxy-service-n9r2a Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
Feb 28 20:55:51.038: INFO: proxy-service-n9r2a Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Feb 28 20:55:51.488: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:160/proxy/: foo (200; 86.781452ms)
Feb 28 20:55:51.691: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname2/: bar (200; 89.261679ms)
Feb 28 20:55:51.885: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname2/proxy/: bar (200; 82.693802ms)
Feb 28 20:55:52.100: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:444/: tls qux (200; 97.67507ms)
Feb 28 20:55:52.289: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:tlsportname2/proxy/: tls qux (200; 86.66901ms)
Feb 28 20:55:52.489: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea/proxy/rewriteme"... (200; 86.36063ms)
Feb 28 20:55:52.685: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:462/proxy/: tls qux (200; 82.501078ms)
Feb 28 20:55:52.890: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:81/: bar (200; 87.691255ms)
Feb 28 20:55:53.089: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:tlsportname2/: tls qux (200; 86.487509ms)
Feb 28 20:55:53.298: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:tlsportname1/: tls baz (200; 95.474289ms)
Feb 28 20:55:53.485: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:80/re... (200; 81.661344ms)
Feb 28 20:55:53.691: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:460/proxy/: tls baz (200; 88.016187ms)
Feb 28 20:55:53.894: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:tlsportname1/proxy/: tls baz (200; 90.948541ms)
Feb 28 20:55:54.093: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/rewrite... (200; 89.449034ms)
Feb 28 20:55:54.287: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname2/proxy/: bar (200; 83.481287ms)
Feb 28 20:55:54.487: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:160/proxy/: foo (200; 83.218083ms)
Feb 28 20:55:54.693: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:443/proxy/... (200; 89.260205ms)
Feb 28 20:55:54.882: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:80/: foo (200; 78.593258ms)
Feb 28 20:55:55.086: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname1/proxy/: foo (200; 81.547019ms)
Feb 28 20:55:55.289: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/proxy/rewrite... (200; 85.362038ms)
Feb 28 20:55:55.487: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:162/proxy/: bar (200; 82.955387ms)
Feb 28 20:55:55.690: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:443/: tls baz (200; 85.502438ms)
Feb 28 20:55:55.891: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:160/: foo (200; 86.251939ms) | |
Feb 28 20:55:56.092: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname1/proxy/: foo (200; 87.041584ms) | |
Feb 28 20:55:56.295: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:160/: foo (200; 89.902567ms) | |
Feb 28 20:55:56.490: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname1/: foo (200; 84.852761ms) | |
Feb 28 20:55:56.690: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname2/: bar (200; 84.776022ms) | |
Feb 28 20:55:56.891: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:80/: foo (200; 86.303847ms) | |
Feb 28 20:55:57.091: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:162/: bar (200; 85.705277ms) | |
Feb 28 20:55:57.285: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:162/: bar (200; 79.825734ms) | |
Feb 28 20:55:57.488: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:80/proxy/re... (200; 82.772762ms) | |
Feb 28 20:55:57.689: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:162/proxy/: bar (200; 83.406466ms) | |
Feb 28 20:55:57.887: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:81/: bar (200; 80.984609ms) | |
Feb 28 20:55:58.088: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname1/: foo (200; 82.010364ms) | |
Feb 28 20:55:58.296: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname1/: foo (200; 89.915894ms) | |
Feb 28 20:55:58.494: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname2/: bar (200; 88.017061ms) | |
Feb 28 20:55:58.693: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname1/proxy/: foo (200; 87.004885ms) | |
Feb 28 20:55:58.887: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:160/: foo (200; 80.334304ms) | |
Feb 28 20:55:59.090: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:162/proxy/: bar (200; 83.478788ms) | |
Feb 28 20:55:59.289: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:81/: bar (200; 82.023052ms) | |
Feb 28 20:55:59.493: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname1/: foo (200; 86.236936ms) | |
Feb 28 20:55:59.690: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:80/: foo (200; 83.509445ms) | |
Feb 28 20:55:59.892: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:162/: bar (200; 84.551163ms) | |
Feb 28 20:56:00.091: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:162/: bar (200; 84.115839ms) | |
Feb 28 20:56:00.294: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:80/proxy/re... (200; 86.766866ms) | |
Feb 28 20:56:00.488: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname2/: bar (200; 80.547358ms) | |
Feb 28 20:56:00.694: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname2/proxy/: bar (200; 86.864792ms) | |
Feb 28 20:56:00.892: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:160/proxy/: foo (200; 84.513942ms) | |
Feb 28 20:56:01.091: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:81/: bar (200; 83.06149ms) | |
Feb 28 20:56:01.291: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:tlsportname2/: tls qux (200; 83.002486ms) | |
Feb 28 20:56:01.495: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:444/: tls qux (200; 86.446456ms) | |
Feb 28 20:56:01.696: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:tlsportname2/proxy/: tls qux (200; 88.27258ms) | |
Feb 28 20:56:01.894: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea/proxy/rewriteme"... (200; 85.605519ms) | |
Feb 28 20:56:02.090: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:462/proxy/: tls qux (200; 81.567969ms) | |
Feb 28 20:56:02.291: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:tlsportname1/: tls baz (200; 82.481425ms) | |
Feb 28 20:56:02.499: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:80/re... (200; 90.746533ms) | |
Feb 28 20:56:02.696: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:tlsportname1/proxy/: tls baz (200; 87.590973ms) | |
Feb 28 20:56:02.894: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/rewrite... (200; 85.426561ms) | |
Feb 28 20:56:03.092: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:460/proxy/: tls baz (200; 82.492524ms) | |
Feb 28 20:56:03.290: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:80/: foo (200; 80.722364ms) | |
Feb 28 20:56:03.488: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname1/proxy/: foo (200; 79.206398ms) | |
Feb 28 20:56:03.693: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname2/proxy/: bar (200; 83.197838ms) | |
Feb 28 20:56:03.892: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:160/proxy/: foo (200; 82.349223ms) | |
Feb 28 20:56:04.095: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:443/proxy/... (200; 84.93834ms) | |
Feb 28 20:56:04.291: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:443/: tls baz (200; 80.909174ms) | |
Feb 28 20:56:04.498: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:160/: foo (200; 88.372699ms) | |
Feb 28 20:56:04.692: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/proxy/rewrite... (200; 81.83113ms) | |
Feb 28 20:56:04.896: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:162/proxy/: bar (200; 85.871595ms) | |
Feb 28 20:56:05.097: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:tlsportname2/: tls qux (200; 86.767742ms) | |
Feb 28 20:56:05.294: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:444/: tls qux (200; 83.125314ms) | |
Feb 28 20:56:05.498: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:tlsportname2/proxy/: tls qux (200; 87.56268ms) | |
Feb 28 20:56:05.697: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea/proxy/rewriteme"... (200; 85.817681ms) | |
Feb 28 20:56:05.901: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:462/proxy/: tls qux (200; 90.296817ms) | |
Feb 28 20:56:06.096: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:81/: bar (200; 84.603212ms) | |
Feb 28 20:56:06.297: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:80/re... (200; 85.380892ms) | |
Feb 28 20:56:06.498: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:tlsportname1/: tls baz (200; 87.022789ms) | |
Feb 28 20:56:06.700: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/rewrite... (200; 88.874512ms) | |
Feb 28 20:56:06.892: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:460/proxy/: tls baz (200; 80.780423ms) | |
Feb 28 20:56:07.098: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:tlsportname1/proxy/: tls baz (200; 86.101885ms) | |
Feb 28 20:56:07.298: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname1/proxy/: foo (200; 85.923328ms) | |
Feb 28 20:56:07.502: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname2/proxy/: bar (200; 90.436655ms) | |
Feb 28 20:56:07.702: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:160/proxy/: foo (200; 89.773045ms) | |
Feb 28 20:56:07.896: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:443/proxy/... (200; 83.341994ms) | |
Feb 28 20:56:08.096: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:80/: foo (200; 83.683471ms) | |
Feb 28 20:56:08.296: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:160/: foo (200; 83.914728ms) | |
Feb 28 20:56:08.497: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/proxy/rewrite... (200; 84.828068ms) | |
Feb 28 20:56:08.694: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:162/proxy/: bar (200; 81.202256ms) | |
Feb 28 20:56:08.893: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:443/: tls baz (200; 80.587284ms) | |
Feb 28 20:56:09.100: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname2/: bar (200; 86.951809ms) | |
Feb 28 20:56:09.297: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname1/proxy/: foo (200; 83.792724ms) | |
Feb 28 20:56:09.497: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:160/: foo (200; 83.876008ms) | |
Feb 28 20:56:09.693: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname1/: foo (200; 79.797696ms) | |
Feb 28 20:56:09.896: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname1/: foo (200; 83.120821ms) | |
Feb 28 20:56:10.104: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:80/: foo (200; 90.377412ms) | |
Feb 28 20:56:10.298: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:162/: bar (200; 84.445437ms) | |
Feb 28 20:56:10.498: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:162/: bar (200; 84.344197ms) | |
Feb 28 20:56:10.698: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:80/proxy/re... (200; 83.711525ms) | |
Feb 28 20:56:10.895: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:162/proxy/: bar (200; 81.075939ms) | |
Feb 28 20:56:11.097: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:81/: bar (200; 83.218776ms) | |
Feb 28 20:56:11.299: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname2/proxy/: bar (200; 85.037932ms) | |
Feb 28 20:56:11.500: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:160/proxy/: foo (200; 85.537994ms) | |
Feb 28 20:56:11.693: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname2/: bar (200; 78.771078ms) | |
Feb 28 20:56:11.899: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:462/proxy/: tls qux (200; 84.625665ms) | |
Feb 28 20:56:12.102: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:81/: bar (200; 87.36803ms) | |
Feb 28 20:56:12.297: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:tlsportname2/: tls qux (200; 81.807488ms) | |
Feb 28 20:56:12.504: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:444/: tls qux (200; 88.926084ms) | |
Feb 28 20:56:12.695: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:tlsportname2/proxy/: tls qux (200; 79.904058ms) | |
Feb 28 20:56:12.901: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea/proxy/rewriteme"... (200; 85.325383ms) | |
Feb 28 20:56:13.106: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:tlsportname1/: tls baz (200; 90.961015ms) | |
Feb 28 20:56:13.297: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:80/re... (200; 81.326507ms) | |
Feb 28 20:56:13.505: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:tlsportname1/proxy/: tls baz (200; 89.600254ms) | |
Feb 28 20:56:13.706: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/rewrite... (200; 90.386003ms) | |
Feb 28 20:56:13.898: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:460/proxy/: tls baz (200; 82.021279ms) | |
Feb 28 20:56:14.102: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:80/: foo (200; 85.739516ms) | |
Feb 28 20:56:14.301: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname1/proxy/: foo (200; 84.875098ms) | |
Feb 28 20:56:14.502: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname2/proxy/: bar (200; 85.704634ms) | |
Feb 28 20:56:14.703: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:160/proxy/: foo (200; 86.046281ms) | |
Feb 28 20:56:14.897: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:443/proxy/... (200; 80.690935ms) | |
Feb 28 20:56:15.100: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:443/: tls baz (200; 83.412297ms) | |
Feb 28 20:56:15.299: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:160/: foo (200; 81.665892ms) | |
Feb 28 20:56:15.502: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/proxy/rewrite... (200; 84.634315ms) | |
Feb 28 20:56:15.700: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:162/proxy/: bar (200; 83.187444ms) | |
Feb 28 20:56:15.899: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname1/: foo (200; 82.133223ms) | |
Feb 28 20:56:16.098: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname2/: bar (200; 80.626708ms) | |
Feb 28 20:56:16.298: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname1/proxy/: foo (200; 80.845755ms) | |
Feb 28 20:56:16.499: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:160/: foo (200; 81.677495ms) | |
Feb 28 20:56:16.703: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:80/proxy/re... (200; 84.718908ms) | |
Feb 28 20:56:16.900: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:162/proxy/: bar (200; 81.648087ms) | |
Feb 28 20:56:17.103: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:81/: bar (200; 85.065863ms) | |
Feb 28 20:56:17.303: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname1/: foo (200; 84.375759ms) | |
Feb 28 20:56:17.504: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:80/: foo (200; 86.134135ms) | |
Feb 28 20:56:17.700: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:162/: bar (200; 81.160089ms) | |
Feb 28 20:56:17.899: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:162/: bar (200; 80.138304ms) | |
Feb 28 20:56:18.100: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname2/: bar (200; 81.293024ms) | |
Feb 28 20:56:18.301: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname2/proxy/: bar (200; 82.112347ms) | |
Feb 28 20:56:18.499: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:160/proxy/: foo (200; 79.605754ms) | |
Feb 28 20:56:18.704: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:81/: bar (200; 85.412298ms) | |
Feb 28 20:56:18.902: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname1/: foo (200; 83.063258ms) | |
Feb 28 20:56:19.101: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:80/: foo (200; 81.744197ms) | |
Feb 28 20:56:19.303: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:162/: bar (200; 83.84021ms) | |
Feb 28 20:56:19.503: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:162/: bar (200; 83.168231ms) | |
Feb 28 20:56:19.701: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:80/proxy/re... (200; 81.912826ms) | |
Feb 28 20:56:19.907: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:162/proxy/: bar (200; 86.863938ms) | |
Feb 28 20:56:20.103: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname2/: bar (200; 82.754156ms) | |
Feb 28 20:56:20.311: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname2/proxy/: bar (200; 90.760204ms) | |
Feb 28 20:56:20.502: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:160/proxy/: foo (200; 81.937622ms) | |
Feb 28 20:56:20.710: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:81/: bar (200; 89.8983ms) | |
Feb 28 20:56:20.901: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:tlsportname2/: tls qux (200; 80.648847ms) | |
Feb 28 20:56:21.108: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:444/: tls qux (200; 87.06635ms) | |
Feb 28 20:56:21.300: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:tlsportname2/proxy/: tls qux (200; 79.639056ms) | |
Feb 28 20:56:21.504: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea/proxy/rewriteme"... (200; 83.401301ms) | |
Feb 28 20:56:21.700: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:462/proxy/: tls qux (200; 79.545538ms) | |
Feb 28 20:56:21.904: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:tlsportname1/: tls baz (200; 82.703722ms) | |
Feb 28 20:56:22.105: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:80/re... (200; 83.394454ms) | |
Feb 28 20:56:22.308: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:tlsportname1/proxy/: tls baz (200; 86.71398ms) | |
Feb 28 20:56:22.510: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/rewrite... (200; 88.515798ms) | |
Feb 28 20:56:22.707: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:460/proxy/: tls baz (200; 85.11492ms) | |
Feb 28 20:56:22.903: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:80/: foo (200; 81.010429ms) | |
Feb 28 20:56:23.106: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname1/proxy/: foo (200; 83.954112ms) | |
Feb 28 20:56:23.306: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname2/proxy/: bar (200; 84.127281ms) | |
Feb 28 20:56:23.505: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:160/proxy/: foo (200; 83.334862ms) | |
Feb 28 20:56:23.702: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:443/proxy/... (200; 80.254645ms) | |
Feb 28 20:56:23.904: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:443/: tls baz (200; 81.34484ms) | |
Feb 28 20:56:24.110: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:160/: foo (200; 87.247206ms) | |
Feb 28 20:56:24.309: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/proxy/rewrite... (200; 86.438808ms) | |
Feb 28 20:56:24.508: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:162/proxy/: bar (200; 84.951148ms) | |
Feb 28 20:56:24.710: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname1/: foo (200; 87.360636ms) | |
Feb 28 20:56:24.909: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname2/: bar (200; 86.173065ms) | |
Feb 28 20:56:25.106: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname1/proxy/: foo (200; 82.495437ms) | |
Feb 28 20:56:25.310: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:160/: foo (200; 86.770216ms) | |
Feb 28 20:56:25.505: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname2/: bar (200; 81.888122ms) | |
Feb 28 20:56:25.709: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname2/proxy/: bar (200; 85.300851ms) | |
Feb 28 20:56:25.918: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:160/proxy/: foo (200; 94.577355ms) | |
Feb 28 20:56:26.137: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:81/: bar (200; 113.648293ms) | |
Feb 28 20:56:26.303: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:tlsportname2/: tls qux (200; 79.339021ms) | |
Feb 28 20:56:26.511: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:444/: tls qux (200; 86.835264ms) | |
Feb 28 20:56:26.715: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:tlsportname2/proxy/: tls qux (200; 90.596537ms) | |
Feb 28 20:56:26.920: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea/proxy/rewriteme"... (200; 96.113274ms) | |
Feb 28 20:56:27.117: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:462/proxy/: tls qux (200; 92.229635ms) | |
Feb 28 20:56:27.333: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:tlsportname1/: tls baz (200; 108.248168ms) | |
Feb 28 20:56:27.728: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:80/re... (200; 302.877598ms) | |
Feb 28 20:56:27.822: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:tlsportname1/proxy/: tls baz (200; 196.894702ms) | |
Feb 28 20:56:27.912: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/rewrite... (200; 87.353732ms) | |
Feb 28 20:56:28.111: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:460/proxy/: tls baz (200; 85.864974ms) | |
Feb 28 20:56:28.313: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:80/: foo (200; 88.312167ms) | |
Feb 28 20:56:28.513: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname1/proxy/: foo (200; 87.238237ms) | |
Feb 28 20:56:28.717: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname2/proxy/: bar (200; 91.065423ms) | |
Feb 28 20:56:28.917: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:160/proxy/: foo (200; 91.192885ms) | |
Feb 28 20:56:29.126: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:443/proxy/... (200; 100.04831ms) | |
Feb 28 20:56:29.321: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:443/: tls baz (200; 95.622346ms) | |
Feb 28 20:56:29.542: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:160/: foo (200; 116.426186ms) | |
Feb 28 20:56:29.708: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/proxy/rewrite... (200; 81.488581ms) | |
Feb 28 20:56:29.916: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:162/proxy/: bar (200; 89.619716ms) | |
Feb 28 20:56:30.113: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname1/: foo (200; 86.733185ms) | |
Feb 28 20:56:30.310: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname2/: bar (200; 83.572344ms) | |
Feb 28 20:56:30.508: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname1/proxy/: foo (200; 80.934451ms) | |
Feb 28 20:56:30.715: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:160/: foo (200; 88.137974ms) | |
Feb 28 20:56:30.907: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:81/: bar (200; 79.99474ms)
Feb 28 20:56:31.121: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname1/: foo (200; 93.818715ms)
Feb 28 20:56:31.311: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:80/: foo (200; 83.559673ms)
Feb 28 20:56:31.518: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:162/: bar (200; 90.737971ms)
Feb 28 20:56:31.716: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:162/: bar (200; 88.303245ms)
Feb 28 20:56:31.908: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:80/proxy/re... (200; 80.221593ms)
Feb 28 20:56:32.111: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:162/proxy/: bar (200; 83.749955ms)
Feb 28 20:56:32.312: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:443/: tls baz (200; 84.367987ms)
Feb 28 20:56:32.514: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:160/: foo (200; 85.805499ms)
Feb 28 20:56:32.712: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/proxy/rewrite... (200; 84.450919ms)
Feb 28 20:56:32.913: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:162/proxy/: bar (200; 85.393929ms)
Feb 28 20:56:33.108: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname1/: foo (200; 79.52547ms)
Feb 28 20:56:33.312: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname2/: bar (200; 83.466725ms)
Feb 28 20:56:33.512: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname1/proxy/: foo (200; 83.971715ms)
Feb 28 20:56:33.711: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:160/: foo (200; 82.393753ms)
Feb 28 20:56:33.909: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:162/: bar (200; 80.53817ms)
Feb 28 20:56:34.117: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:80/proxy/re... (200; 88.464225ms)
Feb 28 20:56:34.320: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:162/proxy/: bar (200; 90.912403ms)
Feb 28 20:56:34.515: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:81/: bar (200; 85.647523ms)
Feb 28 20:56:34.708: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname1/: foo (200; 79.346234ms)
Feb 28 20:56:34.913: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:80/: foo (200; 83.749143ms)
Feb 28 20:56:35.113: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:162/: bar (200; 83.222453ms)
Feb 28 20:56:35.309: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname2/: bar (200; 79.757088ms)
Feb 28 20:56:35.518: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname2/proxy/: bar (200; 88.612611ms)
Feb 28 20:56:35.717: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:160/proxy/: foo (200; 87.321389ms)
Feb 28 20:56:35.914: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea/proxy/rewriteme"... (200; 83.577066ms)
Feb 28 20:56:36.115: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:462/proxy/: tls qux (200; 85.386752ms)
Feb 28 20:56:36.313: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:81/: bar (200; 82.890253ms)
Feb 28 20:56:36.514: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:tlsportname2/: tls qux (200; 83.677823ms)
Feb 28 20:56:36.716: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:444/: tls qux (200; 85.179464ms)
Feb 28 20:56:36.916: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:tlsportname2/proxy/: tls qux (200; 85.737134ms)
Feb 28 20:56:37.112: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:tlsportname1/: tls baz (200; 81.352856ms)
Feb 28 20:56:37.315: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:80/re... (200; 84.164523ms)
Feb 28 20:56:37.515: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:tlsportname1/proxy/: tls baz (200; 84.259743ms)
Feb 28 20:56:37.714: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/rewrite... (200; 82.792921ms)
Feb 28 20:56:37.919: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:460/proxy/: tls baz (200; 88.170007ms)
Feb 28 20:56:38.121: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:443/proxy/... (200; 89.70814ms)
Feb 28 20:56:38.316: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:80/: foo (200; 84.608144ms)
Feb 28 20:56:38.522: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname1/proxy/: foo (200; 90.9157ms)
Feb 28 20:56:38.713: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname2/proxy/: bar (200; 81.085576ms)
Feb 28 20:56:38.923: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:160/proxy/: foo (200; 91.075305ms)
Feb 28 20:56:39.117: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname2/: bar (200; 84.907781ms)
Feb 28 20:56:39.316: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname2/proxy/: bar (200; 84.213694ms)
Feb 28 20:56:39.513: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:160/proxy/: foo (200; 81.24622ms)
Feb 28 20:56:39.715: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:tlsportname2/proxy/: tls qux (200; 82.282467ms)
Feb 28 20:56:39.914: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea/proxy/rewriteme"... (200; 81.591717ms)
Feb 28 20:56:40.117: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:462/proxy/: tls qux (200; 84.842719ms)
Feb 28 20:56:40.312: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:81/: bar (200; 79.119048ms)
Feb 28 20:56:40.517: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:tlsportname2/: tls qux (200; 83.787115ms)
Feb 28 20:56:40.719: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:444/: tls qux (200; 86.104839ms)
Feb 28 20:56:40.913: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:tlsportname1/: tls baz (200; 80.402403ms)
Feb 28 20:56:41.120: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:80/re... (200; 87.126081ms)
Feb 28 20:56:41.312: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:tlsportname1/proxy/: tls baz (200; 79.207582ms)
Feb 28 20:56:41.518: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/rewrite... (200; 84.865169ms)
Feb 28 20:56:41.715: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:460/proxy/: tls baz (200; 81.809174ms)
Feb 28 20:56:41.920: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:160/proxy/: foo (200; 86.599695ms)
Feb 28 20:56:42.117: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:443/proxy/... (200; 82.981417ms)
Feb 28 20:56:42.315: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:80/: foo (200; 80.800563ms)
Feb 28 20:56:42.519: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname1/proxy/: foo (200; 85.002729ms)
Feb 28 20:56:42.723: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname2/proxy/: bar (200; 88.859132ms)
Feb 28 20:56:42.920: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:162/proxy/: bar (200; 86.063612ms)
Feb 28 20:56:43.119: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:443/: tls baz (200; 84.418975ms)
Feb 28 20:56:43.318: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:160/: foo (200; 83.429124ms)
Feb 28 20:56:43.518: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/proxy/rewrite... (200; 83.348886ms)
Feb 28 20:56:43.723: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:160/: foo (200; 87.741657ms)
Feb 28 20:56:43.923: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname1/: foo (200; 87.688043ms)
Feb 28 20:56:44.120: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname2/: bar (200; 84.696552ms)
Feb 28 20:56:44.319: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname1/proxy/: foo (200; 83.453311ms)
Feb 28 20:56:44.517: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:162/: bar (200; 81.852146ms)
Feb 28 20:56:44.720: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:162/: bar (200; 84.396563ms)
Feb 28 20:56:44.920: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:80/proxy/re... (200; 84.308953ms)
Feb 28 20:56:45.120: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:162/proxy/: bar (200; 84.086171ms)
Feb 28 20:56:45.317: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:81/: bar (200; 80.858273ms)
Feb 28 20:56:45.517: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname1/: foo (200; 81.101971ms)
Feb 28 20:56:45.717: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:80/: foo (200; 81.406272ms)
Feb 28 20:56:45.918: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:81/: bar (200; 81.876504ms)
Feb 28 20:56:46.120: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:tlsportname2/: tls qux (200; 83.261743ms)
Feb 28 20:56:46.319: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:444/: tls qux (200; 82.441807ms)
Feb 28 20:56:46.518: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:tlsportname2/proxy/: tls qux (200; 81.556377ms)
Feb 28 20:56:46.722: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea/proxy/rewriteme"... (200; 85.767558ms)
Feb 28 20:56:46.921: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:462/proxy/: tls qux (200; 84.648915ms)
Feb 28 20:56:47.122: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:tlsportname1/: tls baz (200; 84.775739ms)
Feb 28 20:56:47.323: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:80/re... (200; 86.045654ms)
Feb 28 20:56:47.518: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:tlsportname1/proxy/: tls baz (200; 81.231113ms)
Feb 28 20:56:47.718: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/rewrite... (200; 80.631785ms)
Feb 28 20:56:47.923: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:460/proxy/: tls baz (200; 85.163997ms)
Feb 28 20:56:48.122: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:80/: foo (200; 84.859074ms)
Feb 28 20:56:48.324: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname1/proxy/: foo (200; 85.9088ms)
Feb 28 20:56:48.521: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname2/proxy/: bar (200; 82.931979ms)
Feb 28 20:56:48.720: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:160/proxy/: foo (200; 81.856167ms)
Feb 28 20:56:48.923: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:443/proxy/... (200; 84.858878ms)
Feb 28 20:56:49.119: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:443/: tls baz (200; 81.176624ms)
Feb 28 20:56:49.319: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:160/: foo (200; 80.785392ms)
Feb 28 20:56:49.524: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/proxy/rewrite... (200; 85.266711ms)
Feb 28 20:56:49.727: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:162/proxy/: bar (200; 88.861801ms)
Feb 28 20:56:49.927: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname1/: foo (200; 88.04415ms)
Feb 28 20:56:50.122: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname2/: bar (200; 83.01116ms)
Feb 28 20:56:50.320: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname1/proxy/: foo (200; 81.082004ms)
Feb 28 20:56:50.526: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:160/: foo (200; 86.992365ms)
Feb 28 20:56:50.727: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:81/: bar (200; 87.94302ms)
Feb 28 20:56:50.928: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname1/: foo (200; 88.648247ms)
Feb 28 20:56:51.126: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:80/: foo (200; 86.505006ms)
Feb 28 20:56:51.323: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:162/: bar (200; 83.97756ms)
Feb 28 20:56:51.524: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:162/: bar (200; 84.868067ms)
Feb 28 20:56:51.722: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:80/proxy/re... (200; 82.174285ms)
Feb 28 20:56:51.919: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:162/proxy/: bar (200; 78.992331ms)
Feb 28 20:56:52.123: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname2/: bar (200; 83.067363ms)
Feb 28 20:56:52.322: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname2/proxy/: bar (200; 81.868688ms)
Feb 28 20:56:52.522: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:160/proxy/: foo (200; 81.737889ms)
Feb 28 20:56:52.731: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:443/: tls baz (200; 91.033613ms)
Feb 28 20:56:52.925: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:160/: foo (200; 84.735376ms)
Feb 28 20:56:53.128: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/proxy/rewrite... (200; 87.257759ms)
Feb 28 20:56:53.328: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:162/proxy/: bar (200; 86.991227ms)
Feb 28 20:56:53.520: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname1/: foo (200; 78.891505ms)
Feb 28 20:56:53.723: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname2/: bar (200; 81.765542ms)
Feb 28 20:56:53.936: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname1/proxy/: foo (200; 94.648189ms)
Feb 28 20:56:54.124: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:160/: foo (200; 82.453089ms)
Feb 28 20:56:54.325: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:162/: bar (200; 83.367477ms)
Feb 28 20:56:54.523: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:80/proxy/re... (200; 80.88672ms)
Feb 28 20:56:54.726: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:162/proxy/: bar (200; 83.993404ms)
Feb 28 20:56:54.922: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:81/: bar (200; 79.624619ms)
Feb 28 20:56:55.123: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname1/: foo (200; 81.30952ms)
Feb 28 20:56:55.325: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:80/: foo (200; 82.805981ms)
Feb 28 20:56:55.521: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:162/: bar (200; 79.094112ms)
Feb 28 20:56:55.726: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname2/: bar (200; 83.577427ms)
Feb 28 20:56:55.928: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname2/proxy/: bar (200; 85.786304ms)
Feb 28 20:56:56.126: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:160/proxy/: foo (200; 83.067053ms)
Feb 28 20:56:56.329: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea/proxy/rewriteme"... (200; 85.621155ms)
Feb 28 20:56:56.526: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:462/proxy/: tls qux (200; 83.112061ms)
Feb 28 20:56:56.730: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:81/: bar (200; 86.797867ms)
Feb 28 20:56:56.932: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:tlsportname2/: tls qux (200; 88.704089ms)
Feb 28 20:56:57.126: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:444/: tls qux (200; 81.968531ms)
Feb 28 20:56:57.328: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:tlsportname2/proxy/: tls qux (200; 83.89151ms)
Feb 28 20:56:57.529: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:tlsportname1/: tls baz (200; 85.510118ms)
Feb 28 20:56:57.730: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:80/re... (200; 86.055988ms)
Feb 28 20:56:57.925: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:tlsportname1/proxy/: tls baz (200; 80.745414ms)
Feb 28 20:56:58.130: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/rewrite... (200; 85.946438ms)
Feb 28 20:56:58.325: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:460/proxy/: tls baz (200; 80.253021ms)
Feb 28 20:56:58.533: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:443/proxy/... (200; 89.031771ms)
Feb 28 20:56:58.723: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:80/: foo (200; 78.964895ms)
Feb 28 20:56:58.924: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname1/proxy/: foo (200; 79.288381ms)
Feb 28 20:56:59.133: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname2/proxy/: bar (200; 88.536182ms)
Feb 28 20:56:59.328: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:160/proxy/: foo (200; 83.123431ms)
Feb 28 20:56:59.531: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:162/: bar (200; 85.867734ms)
Feb 28 20:56:59.729: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:80/proxy/re... (200; 83.380779ms)
Feb 28 20:56:59.928: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:162/proxy/: bar (200; 82.992495ms)
Feb 28 20:57:00.128: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:81/: bar (200; 82.141502ms)
Feb 28 20:57:00.328: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname1/: foo (200; 82.251908ms)
Feb 28 20:57:00.530: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:80/: foo (200; 83.936305ms)
Feb 28 20:57:00.730: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:162/: bar (200; 83.951186ms)
Feb 28 20:57:00.929: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname2/: bar (200; 83.010048ms)
Feb 28 20:57:01.127: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname2/proxy/: bar (200; 80.40131ms)
Feb 28 20:57:01.337: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:160/proxy/: foo (200; 90.990011ms)
Feb 28 20:57:01.529: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea/proxy/rewriteme"... (200; 82.810831ms)
Feb 28 20:57:01.736: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:462/proxy/: tls qux (200; 89.954059ms)
Feb 28 20:57:01.930: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:81/: bar (200; 83.138957ms)
Feb 28 20:57:02.128: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:tlsportname2/: tls qux (200; 81.55581ms)
Feb 28 20:57:02.332: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:444/: tls qux (200; 85.447222ms)
Feb 28 20:57:02.528: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:tlsportname2/proxy/: tls qux (200; 80.521429ms)
Feb 28 20:57:02.730: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:tlsportname1/: tls baz (200; 82.551641ms)
Feb 28 20:57:02.938: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:80/re... (200; 90.696042ms)
Feb 28 20:57:03.135: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:tlsportname1/proxy/: tls baz (200; 87.814095ms)
Feb 28 20:57:03.331: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/rewrite... (200; 83.485801ms)
Feb 28 20:57:03.529: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:460/proxy/: tls baz (200; 81.493908ms)
Feb 28 20:57:03.732: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:443/proxy/... (200; 83.896877ms)
Feb 28 20:57:03.931: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:80/: foo (200; 83.083058ms)
Feb 28 20:57:04.131: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname1/proxy/: foo (200; 82.503637ms)
Feb 28 20:57:04.329: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname2/proxy/: bar (200; 80.803334ms)
Feb 28 20:57:04.532: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:160/proxy/: foo (200; 83.826635ms)
Feb 28 20:57:04.726: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:443/: tls baz (200; 77.853474ms)
Feb 28 20:57:04.933: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:160/: foo (200; 84.648745ms)
Feb 28 20:57:05.135: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/proxy/rewrite... (200; 86.127521ms)
Feb 28 20:57:05.338: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:162/proxy/: bar (200; 89.241852ms)
Feb 28 20:57:05.536: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname1/: foo (200; 87.495894ms)
Feb 28 20:57:05.731: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname2/: bar (200; 81.935111ms)
Feb 28 20:57:05.930: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname1/proxy/: foo (200; 80.792743ms)
Feb 28 20:57:06.133: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:160/: foo (200; 83.590311ms)
Feb 28 20:57:06.330: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:462/proxy/: tls qux (200; 80.466115ms)
Feb 28 20:57:06.537: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:81/: bar (200; 87.63868ms)
Feb 28 20:57:06.734: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:tlsportname2/: tls qux (200; 83.948295ms)
Feb 28 20:57:06.937: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:444/: tls qux (200; 87.429178ms)
Feb 28 20:57:07.135: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:tlsportname2/proxy/: tls qux (200; 85.08833ms)
Feb 28 20:57:07.334: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea/proxy/rewriteme"... (200; 83.380531ms)
Feb 28 20:57:07.531: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:tlsportname1/: tls baz (200; 80.772378ms)
Feb 28 20:57:07.731: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:80/re... (200; 80.791228ms)
Feb 28 20:57:07.933: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:tlsportname1/proxy/: tls baz (200; 82.205086ms)
Feb 28 20:57:08.135: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/rewrite... (200; 83.765305ms)
Feb 28 20:57:08.337: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:460/proxy/: tls baz (200; 86.073412ms)
Feb 28 20:57:08.533: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:80/: foo (200; 82.445101ms)
Feb 28 20:57:08.732: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname1/proxy/: foo (200; 80.579957ms)
Feb 28 20:57:08.934: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname2/proxy/: bar (200; 82.343352ms)
Feb 28 20:57:09.132: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:160/proxy/: foo (200; 80.941248ms)
Feb 28 20:57:09.332: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:443/proxy/... (200; 80.516008ms)
Feb 28 20:57:09.535: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:443/: tls baz (200; 83.318678ms)
Feb 28 20:57:09.730: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:160/: foo (200; 78.537824ms)
Feb 28 20:57:09.938: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/proxy/rewrite... (200; 85.807222ms)
Feb 28 20:57:10.132: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:162/proxy/: bar (200; 79.88632ms)
Feb 28 20:57:10.335: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname1/: foo (200; 83.049847ms)
Feb 28 20:57:10.536: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname2/: bar (200; 83.896554ms)
Feb 28 20:57:10.735: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname1/proxy/: foo (200; 82.710863ms)
Feb 28 20:57:10.936: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:160/: foo (200; 83.778461ms)
Feb 28 20:57:11.132: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:80/proxy/re... (200; 79.539431ms)
Feb 28 20:57:11.339: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:162/proxy/: bar (200; 85.974187ms)
Feb 28 20:57:11.539: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:81/: bar (200; 85.847659ms)
Feb 28 20:57:11.737: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname1/: foo (200; 83.991538ms)
Feb 28 20:57:11.935: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:80/: foo (200; 81.504353ms)
Feb 28 20:57:12.138: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:162/: bar (200; 85.057509ms)
Feb 28 20:57:12.333: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:162/: bar (200; 79.999836ms)
Feb 28 20:57:12.541: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname2/: bar (200; 87.92816ms)
Feb 28 20:57:12.733: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname2/proxy/: bar (200; 79.793109ms)
Feb 28 20:57:12.935: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:160/proxy/: foo (200; 80.975007ms)
Feb 28 20:57:13.138: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:tlsportname1/proxy/: tls baz (200; 84.453134ms)
Feb 28 20:57:13.338: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/rewrite... (200; 84.136484ms)
Feb 28 20:57:13.539: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:460/proxy/: tls baz (200; 85.003426ms)
Feb 28 20:57:13.739: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:160/proxy/: foo (200; 84.393138ms)
Feb 28 20:57:13.935: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/https:proxy-service-n9r2a-b24ea:443/proxy/... (200; 80.48945ms)
Feb 28 20:57:14.135: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:80/: foo (200; 80.432222ms)
Feb 28 20:57:14.337: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname1/proxy/: foo (200; 82.472449ms)
Feb 28 20:57:14.540: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname2/proxy/: bar (200; 84.91984ms)
Feb 28 20:57:14.737: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:162/proxy/: bar (200; 82.778758ms)
Feb 28 20:57:14.936: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/https:proxy-service-n9r2a:443/: tls baz (200; 81.577727ms)
Feb 28 20:57:15.140: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:160/: foo (200; 85.398545ms)
Feb 28 20:57:15.341: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-f07sq/pods/proxy-service-n9r2a-b24ea:80/proxy/rewrite... (200; 85.717398ms)
Feb 28 20:57:15.539: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/pods/http:proxy-service-n9r2a-b24ea:160/: foo (200; 83.462465ms)
Feb 28 20:57:15.736: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/proxy-service-n9r2a:portname1/: foo (200; 80.424065ms)
Feb 28 20:57:15.938: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname2/: bar (200; 82.753847ms)
Feb 28 20:57:16.138: INFO: /api/v1/namespaces/e2e-tests-proxy-f07sq/services/http:proxy-service-n9r2a:portname1/proxy/: foo (200; 81.927673ms) | |
Feb 28 20:57:16.337: INFO: /api/v1/proxy/namespa