Created April 22, 2016 06:54
GitHub pull request #24502 of commit e53aa93836a1d0b26babca1baea34e710c4d43fa, no merge conflicts.
Setting status of e53aa93836a1d0b26babca1baea34e710c4d43fa to PENDING with url https://console.cloud.google.com/storage/browser/kubernetes-jenkins/pr-logs/pull/24502/kubernetes-pull-build-test-e2e-gce/36553/ and message: 'Build started sha1 is merged.'
Using context: Jenkins GCE e2e
[EnvInject] - Loading node environment variables.
Building on master in workspace /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace
[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Done
Cloning the remote Git repository
Cloning repository https://github.com/kubernetes/kubernetes
> /usr/bin/git init /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace # timeout=10
Fetching upstream changes from https://github.com/kubernetes/kubernetes
> /usr/bin/git --version # timeout=10
> /usr/bin/git -c core.askpass=true fetch --tags --progress https://github.com/kubernetes/kubernetes +refs/heads/*:refs/remotes/origin/*
> /usr/bin/git config remote.origin.url https://github.com/kubernetes/kubernetes # timeout=10
> /usr/bin/git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
> /usr/bin/git config remote.origin.url https://github.com/kubernetes/kubernetes # timeout=10
Fetching upstream changes from https://github.com/kubernetes/kubernetes
> /usr/bin/git -c core.askpass=true fetch --tags --progress https://github.com/kubernetes/kubernetes +refs/pull/24502/*:refs/remotes/origin/pr/24502/*
> /usr/bin/git rev-parse refs/remotes/origin/pr/24502/merge^{commit} # timeout=10
> /usr/bin/git rev-parse refs/remotes/origin/origin/pr/24502/merge^{commit} # timeout=10
Checking out Revision fe22780e894ff38321e7c9bf33f2742d9c393921 (refs/remotes/origin/pr/24502/merge)
> /usr/bin/git config core.sparsecheckout # timeout=10
> /usr/bin/git checkout -f fe22780e894ff38321e7c9bf33f2742d9c393921
First time build. Skipping changelog.
[EnvInject] - Executing scripts and injecting environment variables after the SCM step.
[EnvInject] - Injecting as environment variables the properties content
KUBE_SKIP_PUSH_GCS=y
KUBE_RUN_FROM_OUTPUT=y
[EnvInject] - Variables injected successfully.
[workspace] $ /bin/bash -xe /tmp/hudson6151002897136271615.sh
+ JENKINS_BUILD_STARTED=true
+ bash /dev/fd/63
++ curl -fsS --retry 3 https://raw.githubusercontent.com/ixdy/kubernetes/upload-to-gcs-script/hack/jenkins/upload-to-gcs.sh
Run starting at Thu Apr 21 22:32:55 PDT 2016
Found Kubernetes version: v1.3.0-alpha.2.487+fe22780e894ff3
Uploading version to: gs://kubernetes-jenkins/pr-logs/pull/24502/kubernetes-pull-build-test-e2e-gce/36553/started.json (attempt 1)
[workspace] $ /bin/bash -xe /tmp/hudson5248402273341746001.sh
+ export GCE_SERVICE_ACCOUNT=211744435552-compute@developer.gserviceaccount.com
+ GCE_SERVICE_ACCOUNT=211744435552-compute@developer.gserviceaccount.com
+ ./hack/jenkins/build.sh
+ export HOME=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace
+ HOME=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace
+ export PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin
+ PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin
+ export KUBE_SKIP_CONFIRMATIONS=y
+ KUBE_SKIP_CONFIRMATIONS=y
+ export CLOUDSDK_COMPONENT_MANAGER_DISABLE_UPDATE_CHECK=true
+ CLOUDSDK_COMPONENT_MANAGER_DISABLE_UPDATE_CHECK=true
+ : n
+ export KUBE_RELEASE_RUN_TESTS
+ rm -rf '/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube*'
+ make clean
build/make-clean.sh
+++ [0421 22:32:57] Verifying Prerequisites....
!!! [0421 22:32:57] Build image not built. Cannot clean via docker build image.
+++ [0421 22:32:57] Removing data container
+++ [0421 22:32:57] Cleaning out local _output directory
+++ [0421 22:32:57] Deleting docker image kube-build:build-2be8cc7bdc
+++ [0421 22:32:57] Cleaning all other untagged docker images
rm -rf _output
rm -rf Godeps/_workspace/pkg
+ git clean -fdx
+ go run ./hack/e2e.go -v --build
2016/04/21 22:32:58 e2e.go:194: Running: build-release
Your active configuration is: [NONE]
Project: kubernetes-jenkins-pull
Zone: us-central1-b
+++ [0421 22:32:59] Verifying Prerequisites....
+++ [0421 22:33:02] Building Docker image kube-build:build-2be8cc7bdc.
+++ [0421 22:33:08] Running build command....
+++ [0421 22:33:08] Creating data container
Go version: go version go1.6 linux/amd64
+++ [0421 22:33:11] Multiple platforms requested and available 115G >= threshold 11G, building platforms in parallel
+++ [0421 22:33:11] Building go targets for linux/amd64
linux/arm
linux/arm64
linux/ppc64le in parallel (output will appear in a burst when complete):
cmd/kube-proxy
cmd/kube-apiserver
cmd/kube-controller-manager
cmd/kubelet
cmd/kubemark
cmd/hyperkube
federation/cmd/federated-apiserver
plugin/cmd/kube-scheduler
+++ [0421 22:33:11] linux/amd64: go build started
+++ [0421 22:36:55] linux/amd64: go build finished
+++ [0421 22:33:11] linux/arm: go build started
+++ [0421 22:37:10] linux/arm: go build finished
+++ [0421 22:33:11] linux/arm64: go build started
+++ [0421 22:37:10] linux/arm64: go build finished
+++ [0421 22:33:11] linux/ppc64le: go build started
+++ [0421 22:37:12] linux/ppc64le: go build finished
Go version: go version go1.6 linux/amd64
+++ [0421 22:37:12] Multiple platforms requested and available 113G >= threshold 11G, building platforms in parallel
+++ [0421 22:37:12] Building go targets for linux/amd64
linux/386
linux/arm
linux/arm64
linux/ppc64le
darwin/amd64
darwin/386
windows/amd64
windows/386 in parallel (output will appear in a burst when complete):
cmd/kubectl
+++ [0421 22:37:12] linux/amd64: go build started
+++ [0421 22:38:11] linux/amd64: go build finished
+++ [0421 22:37:12] linux/386: go build started
+++ [0421 22:39:16] linux/386: go build finished
+++ [0421 22:37:12] linux/arm: go build started
+++ [0421 22:38:11] linux/arm: go build finished
+++ [0421 22:37:12] linux/arm64: go build started
+++ [0421 22:38:11] linux/arm64: go build finished
+++ [0421 22:37:12] linux/ppc64le: go build started
+++ [0421 22:38:10] linux/ppc64le: go build finished
+++ [0421 22:37:12] darwin/amd64: go build started
+++ [0421 22:39:15] darwin/amd64: go build finished
+++ [0421 22:37:12] darwin/386: go build started
+++ [0421 22:39:16] darwin/386: go build finished
+++ [0421 22:37:12] windows/amd64: go build started
+++ [0421 22:39:16] windows/amd64: go build finished
+++ [0421 22:37:12] windows/386: go build started
+++ [0421 22:39:17] windows/386: go build finished
Go version: go version go1.6 linux/amd64
+++ [0421 22:39:18] Multiple platforms requested and available 114G >= threshold 11G, building platforms in parallel
+++ [0421 22:39:18] Building go targets for linux/amd64
darwin/amd64
windows/amd64
linux/arm in parallel (output will appear in a burst when complete):
cmd/integration
cmd/gendocs
cmd/genkubedocs
cmd/genman
cmd/genyaml
cmd/mungedocs
cmd/genbashcomp
cmd/genswaggertypedocs
cmd/linkcheck
examples/k8petstore/web-server/src
github.com/onsi/ginkgo/ginkgo
test/e2e/e2e.test
test/e2e_node/e2e_node.test
+++ [0421 22:39:18] linux/amd64: go build started
+++ [0421 22:41:54] linux/amd64: go build finished
+++ [0421 22:39:18] darwin/amd64: go build started
+++ [0421 22:42:22] darwin/amd64: go build finished
+++ [0421 22:39:18] windows/amd64: go build started
+++ [0421 22:42:22] windows/amd64: go build finished
+++ [0421 22:39:18] linux/arm: go build started
+++ [0421 22:41:51] linux/arm: go build finished
+++ [0421 22:42:22] Placing binaries
+++ [0421 22:42:44] Running build command....
+++ [0421 22:42:58] Output directory is local. No need to copy results out.
+++ [0421 22:42:58] Building tarball: src
+++ [0421 22:42:58] Building tarball: manifests
+++ [0421 22:42:58] Building tarball: salt
+++ [0421 22:42:58] Building tarball: server linux-amd64
+++ [0421 22:42:58] Starting tarball: client darwin-386
+++ [0421 22:42:58] Starting tarball: client darwin-amd64
+++ [0421 22:42:58] Starting tarball: client linux-386
+++ [0421 22:42:58] Starting tarball: client linux-amd64
+++ [0421 22:42:58] Starting tarball: client linux-arm
+++ [0421 22:42:58] Starting tarball: client linux-arm64
+++ [0421 22:42:58] Starting tarball: client linux-ppc64le
+++ [0421 22:42:58] Starting tarball: client windows-386
+++ [0421 22:42:58] Starting tarball: client windows-amd64
+++ [0421 22:42:58] Waiting on tarballs
+++ [0421 22:42:59] Starting Docker build for image: kube-apiserver
+++ [0421 22:42:59] Starting Docker build for image: kube-controller-manager
+++ [0421 22:42:59] Starting Docker build for image: kube-scheduler
+++ [0421 22:42:59] Starting Docker build for image: kube-proxy
+++ [0421 22:43:08] Deleting docker image gcr.io/google_containers/kube-scheduler:14106a3697cd08dd72e28cdca827e18c
Untagged: gcr.io/google_containers/kube-scheduler:14106a3697cd08dd72e28cdca827e18c
Deleted: faa6e7bbeebe698c8dd95b2a6824072197127dcaebb964e9c9c104ebb1de4a98
+++ [0421 22:43:10] Deleting docker image gcr.io/google_containers/kube-controller-manager:9eaed70f1c2b742aa938576197d8cea1
Untagged: gcr.io/google_containers/kube-controller-manager:9eaed70f1c2b742aa938576197d8cea1
Deleted: bed0c6dc667e8977fb83c7e4fefadca62e452ae5445e5dd1089451ff3cfc4d01
+++ [0421 22:43:13] Deleting docker image gcr.io/google_containers/kube-proxy:1661e340da401ea3e34cc37f7eff0615
Untagged: gcr.io/google_containers/kube-proxy:1661e340da401ea3e34cc37f7eff0615
Deleted: 5a58708958647c165ac215048737a6cb4954f6b73c276f9a512d4216e727cd24
+++ [0421 22:43:13] Deleting docker image gcr.io/google_containers/kube-apiserver:63c2d8514f49a2b4564c4c650b146491
Untagged: gcr.io/google_containers/kube-apiserver:63c2d8514f49a2b4564c4c650b146491
Deleted: 707af377a24aaa39a86ab06a6c27276578b903e4f8a3423492cc6423a596646e
+++ [0421 22:43:13] Docker builds done
+++ [0421 22:44:00] Building tarball: server linux-arm
+++ [0421 22:44:00] Starting Docker build for image: kube-apiserver
+++ [0421 22:44:00] Starting Docker build for image: kube-controller-manager
+++ [0421 22:44:00] Starting Docker build for image: kube-scheduler
+++ [0421 22:44:00] Starting Docker build for image: kube-proxy
+++ [0421 22:44:06] Deleting docker image gcr.io/google_containers/kube-scheduler-arm:832eb862aae88a54d75ea79a862eef6c
Untagged: gcr.io/google_containers/kube-scheduler-arm:832eb862aae88a54d75ea79a862eef6c
Deleted: d591f1413edfe65bd59cc7a10b0aeeffe18f29c85547bcd1b11290668b367087
+++ [0421 22:44:07] Deleting docker image gcr.io/google_containers/kube-apiserver-arm:9bd3c3fcb006b03ee43fb081da9f6b44
Untagged: gcr.io/google_containers/kube-apiserver-arm:9bd3c3fcb006b03ee43fb081da9f6b44
Deleted: 17a85cb94001a3aade275375074102d2d620778bb7b4f3554e84176f9265f4ed
+++ [0421 22:44:07] Deleting docker image gcr.io/google_containers/kube-controller-manager-arm:d2dc806e58edc17c42e23be09d9f8513
Untagged: gcr.io/google_containers/kube-controller-manager-arm:d2dc806e58edc17c42e23be09d9f8513
Deleted: d1eb86f2b253655fc3e89155712d7abb41a2f3ab164a2fdbb076407109d18971
+++ [0421 22:44:09] Deleting docker image gcr.io/google_containers/kube-proxy-arm:596a5cb1cc972852d1de5b94875c1a3c
Untagged: gcr.io/google_containers/kube-proxy-arm:596a5cb1cc972852d1de5b94875c1a3c
Deleted: 59675a73b6ca288109d08c61bf6cdbbc96fed4ee71a9168160c6cb3bc331f3f4
+++ [0421 22:44:09] Docker builds done
+++ [0421 22:44:43] Building tarball: server linux-arm64
+++ [0421 22:44:44] Starting Docker build for image: kube-apiserver
+++ [0421 22:44:44] Starting Docker build for image: kube-controller-manager
+++ [0421 22:44:44] Starting Docker build for image: kube-scheduler
+++ [0421 22:44:44] Starting Docker build for image: kube-proxy
+++ [0421 22:44:50] Deleting docker image gcr.io/google_containers/kube-controller-manager-arm64:fc288df14af333a2e53dae7e72f7c9ab
Untagged: gcr.io/google_containers/kube-controller-manager-arm64:fc288df14af333a2e53dae7e72f7c9ab
Deleted: ff4686af099da25edcd0dcd549c0cd930d08446e2e58d6543ebf558c853658d8
+++ [0421 22:44:50] Deleting docker image gcr.io/google_containers/kube-scheduler-arm64:316e576fdac1ffe17e4c031f077363b0
Untagged: gcr.io/google_containers/kube-scheduler-arm64:316e576fdac1ffe17e4c031f077363b0
Deleted: b5491d98d258a87b60f887b64bf328238c409037a9bd74f607cdcaa209a7fe27
+++ [0421 22:44:51] Deleting docker image gcr.io/google_containers/kube-apiserver-arm64:3056dff31c8a1e17ac6b4dc45032d16d
Untagged: gcr.io/google_containers/kube-apiserver-arm64:3056dff31c8a1e17ac6b4dc45032d16d
Deleted: 71f56bfae76a8e1d5ac9d11caf442ff06109cc5097049ad862a32379ce3caa81
+++ [0421 22:44:52] Deleting docker image gcr.io/google_containers/kube-proxy-arm64:7400015129d3e177597a9af5cf4cdb48
Untagged: gcr.io/google_containers/kube-proxy-arm64:7400015129d3e177597a9af5cf4cdb48
Deleted: 9d9f1c9c67994ee0fb94cb502d6ff154ec07d3dde286aeb92749c27840624c13
+++ [0421 22:44:52] Docker builds done
+++ [0421 22:45:31] Building tarball: server linux-ppc64le
+++ [0421 22:45:31] Starting Docker build for image: kube-apiserver
+++ [0421 22:45:31] Starting Docker build for image: kube-controller-manager
+++ [0421 22:45:31] Starting Docker build for image: kube-scheduler
+++ [0421 22:45:31] Starting Docker build for image: kube-proxy
+++ [0421 22:45:49] Deleting docker image gcr.io/google_containers/kube-scheduler-ppc64le:236399cc9f66cf88c93a19b7e6bbab91
Untagged: gcr.io/google_containers/kube-scheduler-ppc64le:236399cc9f66cf88c93a19b7e6bbab91
Deleted: 840ceb99393a4e7c64dff2bb8d1e3a210beb29302fd3ccbd266e9f99698c77e9
+++ [0421 22:45:51] Deleting docker image gcr.io/google_containers/kube-proxy-ppc64le:4cf3b1f430cce249f58eaecc39107cc8
Untagged: gcr.io/google_containers/kube-proxy-ppc64le:4cf3b1f430cce249f58eaecc39107cc8
Deleted: d9b466bd897b555763c3e3faf0a17b35896647db42c5567cd7169b0e4a226b7a
+++ [0421 22:45:52] Deleting docker image gcr.io/google_containers/kube-controller-manager-ppc64le:8cdef0a602ad17e4325d3e777fbded78
Untagged: gcr.io/google_containers/kube-controller-manager-ppc64le:8cdef0a602ad17e4325d3e777fbded78
Deleted: db5a5b0d2a7b02536f69d0cacb50b0c70874cd246bb4f40ad9bf95b0869f682d
+++ [0421 22:45:52] Deleting docker image gcr.io/google_containers/kube-apiserver-ppc64le:af2a70793fd4be3259358b755e723282
Untagged: gcr.io/google_containers/kube-apiserver-ppc64le:af2a70793fd4be3259358b755e723282
Deleted: 5f0ef48f32debd888882fe04457cb16be855bf580d2e760d301b695bff41dca8
+++ [0421 22:45:52] Docker builds done
+++ [0421 22:46:30] Building tarball: test
+++ [0421 22:46:30] Building tarball: full
2016/04/21 22:47:52 e2e.go:196: Step 'build-release' finished in 14m54.30249706s
+ [[ y =~ ^[yY]$ ]]
+ sha256sum _output/release-tars/kubernetes-client-darwin-386.tar.gz _output/release-tars/kubernetes-client-darwin-amd64.tar.gz _output/release-tars/kubernetes-client-linux-386.tar.gz _output/release-tars/kubernetes-client-linux-amd64.tar.gz _output/release-tars/kubernetes-client-linux-arm64.tar.gz _output/release-tars/kubernetes-client-linux-arm.tar.gz _output/release-tars/kubernetes-client-linux-ppc64le.tar.gz _output/release-tars/kubernetes-client-windows-386.tar.gz _output/release-tars/kubernetes-client-windows-amd64.tar.gz _output/release-tars/kubernetes-manifests.tar.gz _output/release-tars/kubernetes-salt.tar.gz _output/release-tars/kubernetes-server-linux-amd64.tar.gz _output/release-tars/kubernetes-server-linux-arm64.tar.gz _output/release-tars/kubernetes-server-linux-arm.tar.gz _output/release-tars/kubernetes-server-linux-ppc64le.tar.gz _output/release-tars/kubernetes-src.tar.gz _output/release-tars/kubernetes.tar.gz _output/release-tars/kubernetes-test.tar.gz
850a4ae5c3ccc34da834044744165c3fba8664a1063213be17a31ad4c0b8282e _output/release-tars/kubernetes-client-darwin-386.tar.gz
4eab7b188fe853513e63c3b277635546e312b708c1e32e5a1418deea2333b056 _output/release-tars/kubernetes-client-darwin-amd64.tar.gz
b3df23aecc90d6bb5da829bcd46ac440578ded276b4314e8463e0ab75e09a14e _output/release-tars/kubernetes-client-linux-386.tar.gz
5efa9a45fed90f441260d3dbbfc6ddca4a85220897f6abd4c67157a50c0f2da2 _output/release-tars/kubernetes-client-linux-amd64.tar.gz
422bafa36da65cb31522c73ad5a1a8b65c8fa02ebbbe69749e63085e6bd94ccf _output/release-tars/kubernetes-client-linux-arm64.tar.gz
b98ad7b1714432e452dc70906dbccff2196055f97fb53055027d4801b05cf418 _output/release-tars/kubernetes-client-linux-arm.tar.gz
19de9dadfca38fdcceedf312a70fb626554df777890c05650886672f60016c90 _output/release-tars/kubernetes-client-linux-ppc64le.tar.gz
8c3a79fafaa30667d9dcacbfcd6c97a45ccd1a8bfb8d7ff94300e7f1c84a8115 _output/release-tars/kubernetes-client-windows-386.tar.gz
f876f982a0d1b99033c901e4c71c11fd73ee62d6593ed53afbc97d32694ff63a _output/release-tars/kubernetes-client-windows-amd64.tar.gz
9be162d156f4beb5690a76c507578549e83bd941c5d8e874d841c66674a42e4a _output/release-tars/kubernetes-manifests.tar.gz
0a5073b3b0ce09844f1a54bdc5a0d670304c36dbb898cb8ed7e989bf966709a3 _output/release-tars/kubernetes-salt.tar.gz
9b737cd58540802c78228e6a658c9cf852225e4ba513e3881268d55ce42f2caa _output/release-tars/kubernetes-server-linux-amd64.tar.gz
39ad4e70400d0a5720de352632cd8cb494d235e5d689ee548397bc79f5ac6e28 _output/release-tars/kubernetes-server-linux-arm64.tar.gz
c9791adafa40b5317a2cc7c75d878e78c4a69e55df8264a84a893c553795ed4c _output/release-tars/kubernetes-server-linux-arm.tar.gz
32c2d671dfcefee910f5fd96a5b910853950a1c2992b15cb8259f8a3ff037ae8 _output/release-tars/kubernetes-server-linux-ppc64le.tar.gz
00f80142cbd66a9e5bc0c99be8daf0fddb7a2e216b7ecdaa291ba8118d75142b _output/release-tars/kubernetes-src.tar.gz
640b60356814b64b73826ca7867fa6b3561f7d40273e82bee2e6fb81a0c10644 _output/release-tars/kubernetes.tar.gz
99bd16fb8329d5a732f08d9f36661b19c262464371bc5cc5ac0129c333d0c442 _output/release-tars/kubernetes-test.tar.gz
+ [[ master == \m\a\s\t\e\r ]]
+ export HOME=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace
+ HOME=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace
+ export KUBERNETES_PROVIDER=gce
+ KUBERNETES_PROVIDER=gce
+ export E2E_MIN_STARTUP_PODS=1
+ E2E_MIN_STARTUP_PODS=1
+ export KUBE_GCE_ZONE=us-central1-f
+ KUBE_GCE_ZONE=us-central1-f
+ export FAIL_ON_GCP_RESOURCE_LEAK=true
+ FAIL_ON_GCP_RESOURCE_LEAK=true
+ export E2E_NAME=e2e-gce-master-1
+ E2E_NAME=e2e-gce-master-1
+ export GINKGO_PARALLEL=y
+ GINKGO_PARALLEL=y
+ export 'GINKGO_TEST_ARGS=--ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]'
+ GINKGO_TEST_ARGS='--ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]'
+ export FAIL_ON_GCP_RESOURCE_LEAK=false
+ FAIL_ON_GCP_RESOURCE_LEAK=false
+ export PROJECT=kubernetes-jenkins-pull
+ PROJECT=kubernetes-jenkins-pull
+ export NUM_NODES=6
+ NUM_NODES=6
+ export E2E_UP=true
+ E2E_UP=true
+ export E2E_TEST=true
+ E2E_TEST=true
+ export E2E_DOWN=true
+ E2E_DOWN=true
+ export CLOUDSDK_COMPONENT_MANAGER_DISABLE_UPDATE_CHECK=true
+ CLOUDSDK_COMPONENT_MANAGER_DISABLE_UPDATE_CHECK=true
+ export KUBE_AWS_INSTANCE_PREFIX=e2e-gce-master-1
+ KUBE_AWS_INSTANCE_PREFIX=e2e-gce-master-1
+ export INSTANCE_PREFIX=e2e-gce-master-1
+ INSTANCE_PREFIX=e2e-gce-master-1
+ export KUBE_GCE_NETWORK=e2e-gce-master-1
+ KUBE_GCE_NETWORK=e2e-gce-master-1
+ export KUBE_GCE_INSTANCE_PREFIX=e2e-gce-master-1
+ KUBE_GCE_INSTANCE_PREFIX=e2e-gce-master-1
+ export CLUSTER_NAME=e2e-gce-master-1
+ CLUSTER_NAME=e2e-gce-master-1
+ export KUBE_GKE_NETWORK=e2e-gce-master-1
+ KUBE_GKE_NETWORK=e2e-gce-master-1
+ export PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin
+ PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin
+ timeout -k 15m 55m ./hack/jenkins/e2e-runner.sh
+ running_in_docker
+ grep -q docker /proc/self/cgroup
+ [[ -n '' ]]
+ [[ '' =~ ^[yY]$ ]]
+ echo --------------------------------------------------------------------------------
--------------------------------------------------------------------------------
+ echo 'Test Environment:'
Test Environment:
+ printenv
+ sort
BUILD_CAUSE=GHPRBCAUSE
BUILD_CAUSE_GHPRBCAUSE=true
BUILD_DISPLAY_NAME=#36553
BUILD_ID=36553
BUILD_NUMBER=36553
BUILD_TAG=jenkins-kubernetes-pull-build-test-e2e-gce-36553
BUILD_URL=http://goto.google.com/prkubekins/job/kubernetes-pull-build-test-e2e-gce/36553/
CLOUDSDK_COMPONENT_MANAGER_DISABLE_UPDATE_CHECK=true
CLUSTER_NAME=e2e-gce-master-1
E2E_DOWN=true
E2E_MIN_STARTUP_PODS=1
E2E_NAME=e2e-gce-master-1
E2E_TEST=true
E2E_UP=true
EXECUTOR_NUMBER=1
FAIL_ON_GCP_RESOURCE_LEAK=false
GCE_SERVICE_ACCOUNT=211744435552-compute@developer.gserviceaccount.com
ghprbActualCommitAuthor=CJ Cullen
[email protected]
ghprbActualCommit=e53aa93836a1d0b26babca1baea34e710c4d43fa
ghprbAuthorRepoGitUrl=https://github.com/cjcullen/kubernetes.git
ghprbCommentBody=@k8s-bot test this\n\nTests are more than 48 hours old. Re-running tests.
ghprbCredentialsId=71f234df-867a-4303-bfaa-fa3b0d153033
ghprbGhRepository=kubernetes/kubernetes
[email protected]
ghprbPullAuthorLogin=cjcullen
ghprbPullAuthorLoginMention=@cjcullen
ghprbPullDescription=GitHub pull request #24502 of commit e53aa93836a1d0b26babca1baea34e710c4d43fa, no merge conflicts.
ghprbPullId=24502
ghprbPullLink=https://github.com/kubernetes/kubernetes/pull/24502
ghprbPullLongDescription=Pass through the Subresource and Name fields from the `authorization.Attributes` to the `SubjectAccessReviewSpec.ResourceAttributes`.
ghprbPullTitle=Add Subresource & Name to webhook authorizer.
ghprbSourceBranch=subresource
ghprbTargetBranch=master
ghprbTriggerAuthorLogin=k8s-merge-robot
ghprbTriggerAuthorLoginMention=@k8s-merge-robot
GINKGO_PARALLEL=y
GINKGO_TEST_ARGS=--ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]
GIT_BRANCH=subresource
GIT_COMMIT=730b9a922fe23138f4c371c1d81bcf388ac654ed
GIT_URL=https://github.com/kubernetes/kubernetes
HOME=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace
HUDSON_COOKIE=fe55d406-7a5c-40f4-81cd-42cd34453a31
HUDSON_HOME=/var/lib/jenkins
HUDSON_SERVER_COOKIE=32617aabcc4952a0
HUDSON_URL=http://goto.google.com/prkubekins/
INSTANCE_PREFIX=e2e-gce-master-1
JENKINS_HOME=/var/lib/jenkins
JENKINS_SERVER_COOKIE=32617aabcc4952a0
JENKINS_URL=http://goto.google.com/prkubekins/
JOB_NAME=kubernetes-pull-build-test-e2e-gce
JOB_URL=http://goto.google.com/prkubekins/job/kubernetes-pull-build-test-e2e-gce/
KUBE_AWS_INSTANCE_PREFIX=e2e-gce-master-1
KUBE_GCE_INSTANCE_PREFIX=e2e-gce-master-1
KUBE_GCE_NETWORK=e2e-gce-master-1
KUBE_GCE_ZONE=us-central1-f
KUBE_GKE_NETWORK=e2e-gce-master-1
KUBERNETES_PROVIDER=gce
KUBE_RUN_FROM_OUTPUT=y
KUBE_SKIP_PUSH_GCS=y
LANG=en_US.UTF-8
LOGNAME=jenkins
MAIL=/var/mail/jenkins
NODE_LABELS=master
NODE_NAME=master
NUM_NODES=6
PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin
PROJECT=kubernetes-jenkins-pull
PWD=/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace
ROOT_BUILD_CAUSE=GHPRBCAUSE
ROOT_BUILD_CAUSE_GHPRBCAUSE=true
sha1=origin/pr/24502/merge
SHELL=/bin/bash
SHLVL=3
TERM=linux
USER=jenkins
_=/usr/bin/printenv
WORKSPACE=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace
XDG_SESSION_COOKIE=d08582512ff865ef414bfd685572154c-1460313315.916937-1945149396
+ echo --------------------------------------------------------------------------------
--------------------------------------------------------------------------------
+ [[ '' =~ ^[yY]$ ]]
+ [[ y =~ ^[yY]$ ]]
+ clean_binaries
+ echo 'Cleaning up binaries.'
Cleaning up binaries.
+ rm -rf 'kubernetes*'
+ fetch_output_tars
+ echo 'Using binaries from _output.'
Using binaries from _output.
+ cp _output/release-tars/kubernetes-client-darwin-386.tar.gz _output/release-tars/kubernetes-client-darwin-amd64.tar.gz _output/release-tars/kubernetes-client-linux-386.tar.gz _output/release-tars/kubernetes-client-linux-amd64.tar.gz _output/release-tars/kubernetes-client-linux-arm64.tar.gz _output/release-tars/kubernetes-client-linux-arm.tar.gz _output/release-tars/kubernetes-client-linux-ppc64le.tar.gz _output/release-tars/kubernetes-client-windows-386.tar.gz _output/release-tars/kubernetes-client-windows-amd64.tar.gz _output/release-tars/kubernetes-manifests.tar.gz _output/release-tars/kubernetes-salt.tar.gz _output/release-tars/kubernetes-server-linux-amd64.tar.gz _output/release-tars/kubernetes-server-linux-arm64.tar.gz _output/release-tars/kubernetes-server-linux-arm.tar.gz _output/release-tars/kubernetes-server-linux-ppc64le.tar.gz _output/release-tars/kubernetes-src.tar.gz _output/release-tars/kubernetes.tar.gz _output/release-tars/kubernetes-test.tar.gz .
+ unpack_binaries
+ md5sum kubernetes-client-darwin-386.tar.gz kubernetes-client-darwin-amd64.tar.gz kubernetes-client-linux-386.tar.gz kubernetes-client-linux-amd64.tar.gz kubernetes-client-linux-arm64.tar.gz kubernetes-client-linux-arm.tar.gz kubernetes-client-linux-ppc64le.tar.gz kubernetes-client-windows-386.tar.gz kubernetes-client-windows-amd64.tar.gz kubernetes-manifests.tar.gz kubernetes-salt.tar.gz kubernetes-server-linux-amd64.tar.gz kubernetes-server-linux-arm64.tar.gz kubernetes-server-linux-arm.tar.gz kubernetes-server-linux-ppc64le.tar.gz kubernetes-src.tar.gz kubernetes.tar.gz kubernetes-test.tar.gz
866fb81c2db73401cfb677910220733c kubernetes-client-darwin-386.tar.gz
1e0fe067c6e788246e83f55ab2ea91b3 kubernetes-client-darwin-amd64.tar.gz
ed51631cbbacbb5e605fdbeb6c31a79b kubernetes-client-linux-386.tar.gz
74d309e093744c56df709c7da49c44c8 kubernetes-client-linux-amd64.tar.gz
6521d19d64c8905b8d928f12e2acb932 kubernetes-client-linux-arm64.tar.gz
22930e1b3caee358447f3df11250e70a kubernetes-client-linux-arm.tar.gz
e35ef30ce1d1ada33e0743d997174a90 kubernetes-client-linux-ppc64le.tar.gz
dc3c6a48d5c4a4242be5dab33f79be90 kubernetes-client-windows-386.tar.gz
3eef4c168ec03898f2bde78a4e786e01 kubernetes-client-windows-amd64.tar.gz
b7e0ca5a40311cea16955c78678362f6 kubernetes-manifests.tar.gz
58550b067a9c243640d7828d2eebc7e2 kubernetes-salt.tar.gz
745c2ff6d1d40864155727d2e5a93fac kubernetes-server-linux-amd64.tar.gz
af57c89506e45044d971e68f36aaa904 kubernetes-server-linux-arm64.tar.gz
18a654a230a6f143e76e50b1f70fdadd kubernetes-server-linux-arm.tar.gz
49fb967a17b5961705dbace8d9afdd35 kubernetes-server-linux-ppc64le.tar.gz
d483c554440d9391f9bd9fce9841da2b kubernetes-src.tar.gz
79f89f0c167e99680688f32a9f650c1e kubernetes.tar.gz
2ca6e741c53e14e71676cb5741612483 kubernetes-test.tar.gz
+ tar -xzf kubernetes.tar.gz
+ tar -xzf kubernetes-test.tar.gz
+ case "${KUBERNETES_PROVIDER}" in
+ running_in_docker
+ grep -q docker /proc/self/cgroup
+ mkdir -p /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.ssh/
+ cp /var/lib/jenkins/gce_keys/google_compute_engine /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.ssh/
+ cp /var/lib/jenkins/gce_keys/google_compute_engine.pub /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.ssh/
+ [[ ! -f /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.ssh/google_compute_engine ]]
+ cd kubernetes
+ [[ ! kubernetes-pull-build-test-e2e-gce =~ -pull- ]]
+ ARTIFACTS=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/_artifacts
+ mkdir -p /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/_artifacts
+ trap 'chmod -R o+r '\''/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/_artifacts'\''' EXIT SIGINT SIGTERM
+ export E2E_REPORT_DIR=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/_artifacts
+ E2E_REPORT_DIR=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/_artifacts
+ declare -r gcp_list_resources_script=./cluster/gce/list-resources.sh
+ declare -r gcp_resources_before=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/_artifacts/gcp-resources-before.txt
+ declare -r gcp_resources_cluster_up=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/_artifacts/gcp-resources-cluster-up.txt
+ declare -r gcp_resources_after=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/_artifacts/gcp-resources-after.txt
+ [[ gce == \g\c\e ]]
+ [[ -x ./cluster/gce/list-resources.sh ]]
+ gcp_list_resources=true
+ curl -fsS --retry 3 https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/gce/list-resources.sh
+ [[ true == \t\r\u\e ]]
+ go run ./hack/e2e.go -v --down | |
2016/04/21 22:48:35 e2e.go:194: Running: teardown | |
Project: kubernetes-jenkins-pull | |
Zone: us-central1-f | |
Shutting down test cluster in background. | |
ERROR: (gcloud.compute.firewall-rules.delete) Some requests did not succeed: | |
- The resource 'projects/kubernetes-jenkins-pull/global/firewalls/e2e-gce-master-1-minion-e2e-gce-master-1-http-alt' was not found | |
ERROR: (gcloud.compute.firewall-rules.delete) Some requests did not succeed: | |
- The resource 'projects/kubernetes-jenkins-pull/global/firewalls/e2e-gce-master-1-minion-e2e-gce-master-1-nodeports' was not found | |
Bringing down cluster using provider: gce | |
All components are up to date. | |
All components are up to date. | |
All components are up to date. | |
Project: kubernetes-jenkins-pull | |
Zone: us-central1-f | |
INSTANCE_GROUPS= | |
NODE_NAMES= | |
Bringing down cluster | |
Listed 0 items. | |
Listed 0 items. | |
Listed 0 items. | |
property "clusters.kubernetes-jenkins-pull_e2e-gce-master-1" unset. | |
property "users.kubernetes-jenkins-pull_e2e-gce-master-1" unset. | |
property "users.kubernetes-jenkins-pull_e2e-gce-master-1-basic-auth" unset. | |
property "contexts.kubernetes-jenkins-pull_e2e-gce-master-1" unset. | |
Cleared config for kubernetes-jenkins-pull_e2e-gce-master-1 from /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
Done | |
2016/04/21 22:48:45 e2e.go:196: Step 'teardown' finished in 10.458124055s | |
+ [[ true == \t\r\u\e ]] | |
+ ./cluster/gce/list-resources.sh | |
Listed 0 items. | |
Listed 0 items. | |
Listed 0 items. | |
Listed 0 items. | |
+ [[ true == \t\r\u\e ]] | |
+ go run ./hack/e2e.go -v --up | |
2016/04/21 22:49:12 e2e.go:194: Running: get status | |
Project: kubernetes-jenkins-pull | |
Zone: us-central1-f | |
Client Version: version.Info{Major:"1", Minor:"3+", GitVersion:"v1.3.0-alpha.2.487+fe22780e894ff3", GitCommit:"fe22780e894ff38321e7c9bf33f2742d9c393921", GitTreeState:"clean", BuildDate:"2016-04-22T05:37:12Z", GoVersion:"go1.6", Compiler:"gc", Platform:"linux/amd64"} | |
error: couldn't read version from server: the server does not allow access to the requested resource
2016/04/21 22:49:12 e2e.go:200: Error running get status: exit status 1
2016/04/21 22:49:12 e2e.go:196: Step 'get status' finished in 67.931359ms
2016/04/21 22:49:12 e2e.go:194: Running: up
Project: kubernetes-jenkins-pull
Zone: us-central1-f
... Starting cluster in us-central1-f using provider gce
... calling verify-prereqs
All components are up to date.
All components are up to date.
All components are up to date.
... calling kube-up
Project: kubernetes-jenkins-pull
Zone: us-central1-f
gs://kubernetes-staging-b6e85ca3f3/devel-0/
gs://kubernetes-staging-b6e85ca3f3/devel-1/
gs://kubernetes-staging-b6e85ca3f3/devel-10/
gs://kubernetes-staging-b6e85ca3f3/devel-11/
gs://kubernetes-staging-b6e85ca3f3/devel-2/
gs://kubernetes-staging-b6e85ca3f3/devel-3/
gs://kubernetes-staging-b6e85ca3f3/devel-4/
gs://kubernetes-staging-b6e85ca3f3/devel-5/
gs://kubernetes-staging-b6e85ca3f3/devel-6/
gs://kubernetes-staging-b6e85ca3f3/devel-7/
gs://kubernetes-staging-b6e85ca3f3/devel-8/
gs://kubernetes-staging-b6e85ca3f3/devel-9/
gs://kubernetes-staging-b6e85ca3f3/devel-builder-1-0/
gs://kubernetes-staging-b6e85ca3f3/devel-builder-1-1/
gs://kubernetes-staging-b6e85ca3f3/devel-builder-1-2/
gs://kubernetes-staging-b6e85ca3f3/devel-builder-1-3/
gs://kubernetes-staging-b6e85ca3f3/devel-builder-1-4/
gs://kubernetes-staging-b6e85ca3f3/devel-builder-1-5/
gs://kubernetes-staging-b6e85ca3f3/devel-builder-1-6/
gs://kubernetes-staging-b6e85ca3f3/devel-builder-1-7/
gs://kubernetes-staging-b6e85ca3f3/devel-builder-2-0/
gs://kubernetes-staging-b6e85ca3f3/devel-builder-2-1/
gs://kubernetes-staging-b6e85ca3f3/devel-builder-2-2/
gs://kubernetes-staging-b6e85ca3f3/devel-builder-2-3/
gs://kubernetes-staging-b6e85ca3f3/devel-builder-2-4/
gs://kubernetes-staging-b6e85ca3f3/devel-builder-2-5/
gs://kubernetes-staging-b6e85ca3f3/devel-builder-2-6/
gs://kubernetes-staging-b6e85ca3f3/devel-builder-2-7/
gs://kubernetes-staging-b6e85ca3f3/devel-builder-3-0/
gs://kubernetes-staging-b6e85ca3f3/devel-builder-3-1/
gs://kubernetes-staging-b6e85ca3f3/devel-builder-3-2/
gs://kubernetes-staging-b6e85ca3f3/devel-builder-3-3/
gs://kubernetes-staging-b6e85ca3f3/devel-builder-3-4/
gs://kubernetes-staging-b6e85ca3f3/devel-builder-3-5/
gs://kubernetes-staging-b6e85ca3f3/devel-builder-4-0/
gs://kubernetes-staging-b6e85ca3f3/devel-builder-4-1/
gs://kubernetes-staging-b6e85ca3f3/devel-builder-4-2/
gs://kubernetes-staging-b6e85ca3f3/devel-builder-4-3/
gs://kubernetes-staging-b6e85ca3f3/devel-builder-4-4/
gs://kubernetes-staging-b6e85ca3f3/devel-master-0/
gs://kubernetes-staging-b6e85ca3f3/devel-master-1/
gs://kubernetes-staging-b6e85ca3f3/devel-master-3/
gs://kubernetes-staging-b6e85ca3f3/devel-pull-jenkins-builder-2-1/
gs://kubernetes-staging-b6e85ca3f3/devel/
gs://kubernetes-staging-b6e85ca3f3/e2e-gce-builder-1-0-devel/
gs://kubernetes-staging-b6e85ca3f3/e2e-gce-builder-1-1-devel/
gs://kubernetes-staging-b6e85ca3f3/e2e-gce-builder-1-2-devel/
gs://kubernetes-staging-b6e85ca3f3/e2e-gce-builder-1-3-devel/
gs://kubernetes-staging-b6e85ca3f3/e2e-gce-builder-1-4-devel/
gs://kubernetes-staging-b6e85ca3f3/e2e-gce-builder-1-5-devel/
gs://kubernetes-staging-b6e85ca3f3/e2e-gce-builder-1-6-devel/
gs://kubernetes-staging-b6e85ca3f3/e2e-gce-builder-1-7-devel/
gs://kubernetes-staging-b6e85ca3f3/e2e-gce-builder-2-0-devel/
gs://kubernetes-staging-b6e85ca3f3/e2e-gce-builder-2-1-devel/
gs://kubernetes-staging-b6e85ca3f3/e2e-gce-builder-2-2-devel/
gs://kubernetes-staging-b6e85ca3f3/e2e-gce-builder-2-3-devel/
gs://kubernetes-staging-b6e85ca3f3/e2e-gce-builder-2-4-devel/
gs://kubernetes-staging-b6e85ca3f3/e2e-gce-builder-2-5-devel/
gs://kubernetes-staging-b6e85ca3f3/e2e-gce-builder-2-6-devel/
gs://kubernetes-staging-b6e85ca3f3/e2e-gce-builder-2-7-devel/
gs://kubernetes-staging-b6e85ca3f3/e2e-gce-builder-3-0-devel/
gs://kubernetes-staging-b6e85ca3f3/e2e-gce-builder-3-1-devel/
gs://kubernetes-staging-b6e85ca3f3/e2e-gce-builder-3-2-devel/
gs://kubernetes-staging-b6e85ca3f3/e2e-gce-builder-3-3-devel/
gs://kubernetes-staging-b6e85ca3f3/e2e-gce-builder-3-4-devel/
gs://kubernetes-staging-b6e85ca3f3/e2e-gce-builder-3-5-devel/
gs://kubernetes-staging-b6e85ca3f3/e2e-gce-builder-4-0-devel/
gs://kubernetes-staging-b6e85ca3f3/e2e-gce-builder-4-1-devel/
gs://kubernetes-staging-b6e85ca3f3/e2e-gce-builder-4-2-devel/
gs://kubernetes-staging-b6e85ca3f3/e2e-gce-builder-4-3-devel/
gs://kubernetes-staging-b6e85ca3f3/e2e-gce-builder-4-4-devel/
gs://kubernetes-staging-b6e85ca3f3/e2e-gce-builder-4-5-devel/
gs://kubernetes-staging-b6e85ca3f3/e2e-gce-master-0-devel/
gs://kubernetes-staging-b6e85ca3f3/e2e-gce-master-1-devel/
gs://kubernetes-staging-b6e85ca3f3/e2e-gce-master-2-devel/
gs://kubernetes-staging-b6e85ca3f3/e2e-gce-master-3-devel/
+++ Staging server tars to Google Storage: gs://kubernetes-staging-b6e85ca3f3/e2e-gce-master-1-devel
+++ kubernetes-server-linux-amd64.tar.gz uploaded (sha1 = 14b1420625a1e75dbfffe7a2f3710031b98c372f)
+++ kubernetes-salt.tar.gz uploaded (sha1 = 2c4433b37735914c6951210d6a02f0496b2a8bd7)
Starting master and configuring firewalls
Created [https://www.googleapis.com/compute/v1/projects/kubernetes-jenkins-pull/zones/us-central1-f/disks/e2e-gce-master-1-master-pd].
NAME ZONE SIZE_GB TYPE STATUS
e2e-gce-master-1-master-pd us-central1-f 20 pd-ssd READY
Created [https://www.googleapis.com/compute/v1/projects/kubernetes-jenkins-pull/global/firewalls/e2e-gce-master-1-master-https].
NAME NETWORK SRC_RANGES RULES SRC_TAGS TARGET_TAGS
e2e-gce-master-1-master-https e2e-gce-master-1 0.0.0.0/0 tcp:443 e2e-gce-master-1-master
Created [https://www.googleapis.com/compute/v1/projects/kubernetes-jenkins-pull/regions/us-central1/addresses/e2e-gce-master-1-master-ip].
Generating certs for alternate-names: IP:146.148.88.146,IP:10.0.0.1,DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster.local,DNS:e2e-gce-master-1-master
+++ Logging using Fluentd to elasticsearch
Created [https://www.googleapis.com/compute/v1/projects/kubernetes-jenkins-pull/global/firewalls/e2e-gce-master-1-minion-all].
NAME NETWORK SRC_RANGES RULES SRC_TAGS TARGET_TAGS
e2e-gce-master-1-minion-all e2e-gce-master-1 10.245.0.0/16 tcp,udp,icmp,esp,ah,sctp e2e-gce-master-1-minion
Created [https://www.googleapis.com/compute/v1/projects/kubernetes-jenkins-pull/zones/us-central1-f/instances/e2e-gce-master-1-master].
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
e2e-gce-master-1-master us-central1-f n1-standard-2 10.240.0.2 146.148.88.146 RUNNING
Creating minions.
Attempt 1 to create e2e-gce-master-1-minion-template
WARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#pdperformance.
Created [https://www.googleapis.com/compute/v1/projects/kubernetes-jenkins-pull/global/instanceTemplates/e2e-gce-master-1-minion-template].
NAME MACHINE_TYPE PREEMPTIBLE CREATION_TIMESTAMP
e2e-gce-master-1-minion-template n1-standard-2 2016-04-21T22:50:08.783-07:00
Created [https://www.googleapis.com/compute/v1/projects/kubernetes-jenkins-pull/zones/us-central1-f/instanceGroupManagers/e2e-gce-master-1-minion-group].
NAME LOCATION SCOPE BASE_INSTANCE_NAME SIZE TARGET_SIZE INSTANCE_TEMPLATE AUTOSCALED
e2e-gce-master-1-minion-group us-central1-f zone e2e-gce-master-1-minion 6 e2e-gce-master-1-minion-template
Waiting for group to become stable, current operations: creating: 6
Waiting for group to become stable, current operations: creating: 6
Waiting for group to become stable, current operations: creating: 6
Waiting for group to become stable, current operations: creating: 1
Group is stable
INSTANCE_GROUPS=e2e-gce-master-1-minion-group
NODE_NAMES=https://www.googleapis.com/compute/v1/projects/kubernetes-jenkins-pull/zones/us-central1-f/instances/e2e-gce-master-1-minion-6ch0 https://www.googleapis.com/compute/v1/projects/kubernetes-jenkins-pull/zones/us-central1-f/instances/e2e-gce-master-1-minion-8eot https://www.googleapis.com/compute/v1/projects/kubernetes-jenkins-pull/zones/us-central1-f/instances/e2e-gce-master-1-minion-asea https://www.googleapis.com/compute/v1/projects/kubernetes-jenkins-pull/zones/us-central1-f/instances/e2e-gce-master-1-minion-fyts https://www.googleapis.com/compute/v1/projects/kubernetes-jenkins-pull/zones/us-central1-f/instances/e2e-gce-master-1-minion-hlmm https://www.googleapis.com/compute/v1/projects/kubernetes-jenkins-pull/zones/us-central1-f/instances/e2e-gce-master-1-minion-x3cg
Using master: e2e-gce-master-1-master (external IP: 146.148.88.146)
Waiting up to 300 seconds for cluster initialization.
This will continually check to see if the API for kubernetes is reachable.
This may time out if there was some uncaught error during start up.
....Kubernetes cluster created.
cluster "kubernetes-jenkins-pull_e2e-gce-master-1" set.
user "kubernetes-jenkins-pull_e2e-gce-master-1" set.
context "kubernetes-jenkins-pull_e2e-gce-master-1" set.
switched to context "kubernetes-jenkins-pull_e2e-gce-master-1".
user "kubernetes-jenkins-pull_e2e-gce-master-1-basic-auth" set.
Wrote config for kubernetes-jenkins-pull_e2e-gce-master-1 to /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
Kubernetes cluster is running. The master is running at:
https://146.148.88.146
The user name and password to use is located in /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config.
... calling validate-cluster
Waiting for 7 ready nodes. 1 ready nodes, 1 registered. Retrying.
Waiting for 7 ready nodes. 1 ready nodes, 7 registered. Retrying.
Waiting for 7 ready nodes. 1 ready nodes, 7 registered. Retrying.
Waiting for 7 ready nodes. 1 ready nodes, 7 registered. Retrying.
Found 7 node(s).
NAME STATUS AGE
e2e-gce-master-1-master Ready,SchedulingDisabled 1m
e2e-gce-master-1-minion-6ch0 Ready 49s
e2e-gce-master-1-minion-8eot Ready 58s
e2e-gce-master-1-minion-asea Ready 57s
e2e-gce-master-1-minion-fyts Ready 51s
e2e-gce-master-1-minion-hlmm Ready 53s
e2e-gce-master-1-minion-x3cg Ready 59s
Validate output:
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
Cluster validation succeeded
Done, listing cluster services:
Kubernetes master is running at https://146.148.88.146
GLBCDefaultBackend is running at https://146.148.88.146/api/v1/proxy/namespaces/kube-system/services/default-http-backend
Elasticsearch is running at https://146.148.88.146/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging
Heapster is running at https://146.148.88.146/api/v1/proxy/namespaces/kube-system/services/heapster
Kibana is running at https://146.148.88.146/api/v1/proxy/namespaces/kube-system/services/kibana-logging
KubeDNS is running at https://146.148.88.146/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at https://146.148.88.146/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
Grafana is running at https://146.148.88.146/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
InfluxDB is running at https://146.148.88.146/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb
NAME NETWORK SRC_RANGES RULES SRC_TAGS TARGET_TAGS
e2e-gce-master-1-minion-e2e-gce-master-1-http-alt e2e-gce-master-1 0.0.0.0/0 tcp:80,tcp:8080 e2e-gce-master-1-minion
allowed:
- IPProtocol: tcp
  ports:
  - '80'
- IPProtocol: tcp
  ports:
  - '8080'
creationTimestamp: '2016-04-21T22:52:15.800-07:00'
description: ''
id: '6329506749673112288'
kind: compute#firewall
name: e2e-gce-master-1-minion-e2e-gce-master-1-http-alt
network: https://www.googleapis.com/compute/v1/projects/kubernetes-jenkins-pull/global/networks/e2e-gce-master-1
selfLink: https://www.googleapis.com/compute/v1/projects/kubernetes-jenkins-pull/global/firewalls/e2e-gce-master-1-minion-e2e-gce-master-1-http-alt
sourceRanges:
- 0.0.0.0/0
targetTags:
- e2e-gce-master-1-minion
NAME NETWORK SRC_RANGES RULES SRC_TAGS TARGET_TAGS
e2e-gce-master-1-minion-e2e-gce-master-1-nodeports e2e-gce-master-1 0.0.0.0/0 tcp:30000-32767,udp:30000-32767 e2e-gce-master-1-minion
allowed:
- IPProtocol: tcp
  ports:
  - 30000-32767
- IPProtocol: udp
  ports:
  - 30000-32767
creationTimestamp: '2016-04-21T22:53:08.227-07:00'
description: ''
id: '434517162132243115'
kind: compute#firewall
name: e2e-gce-master-1-minion-e2e-gce-master-1-nodeports
network: https://www.googleapis.com/compute/v1/projects/kubernetes-jenkins-pull/global/networks/e2e-gce-master-1
selfLink: https://www.googleapis.com/compute/v1/projects/kubernetes-jenkins-pull/global/firewalls/e2e-gce-master-1-minion-e2e-gce-master-1-nodeports
sourceRanges:
- 0.0.0.0/0
targetTags:
- e2e-gce-master-1-minion
2016/04/21 22:53:34 e2e.go:196: Step 'up' finished in 4m22.112549194s
+ up_result=0
+ [[ 0 -ne 0 ]]
+ go run ./hack/e2e.go -v '--ctl=version --match-server-version=false'
2016/04/21 22:53:35 e2e.go:194: Running: 'kubectl version --match-server-version=false'
Client Version: version.Info{Major:"1", Minor:"3+", GitVersion:"v1.3.0-alpha.2.487+fe22780e894ff3", GitCommit:"fe22780e894ff38321e7c9bf33f2742d9c393921", GitTreeState:"clean", BuildDate:"2016-04-22T05:37:12Z", GoVersion:"go1.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"3+", GitVersion:"v1.3.0-alpha.2.487+fe22780e894ff3", GitCommit:"fe22780e894ff38321e7c9bf33f2742d9c393921", GitTreeState:"clean", BuildDate:"2016-04-22T05:33:11Z", GoVersion:"go1.6", Compiler:"gc", Platform:"linux/amd64"}
2016/04/21 22:53:35 e2e.go:196: Step ''kubectl version --match-server-version=false'' finished in 88.360962ms
+ [[ true == \t\r\u\e ]]
+ ./cluster/gce/list-resources.sh
+ [[ -n '' ]]
+ [[ true == \t\r\u\e ]]
+ e2e_test '--ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]'
+ local -r 'ginkgo_test_args=--ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]'
+ go run ./hack/e2e.go -v --isup
2016/04/21 22:53:46 e2e.go:194: Running: get status
Project: kubernetes-jenkins-pull
Zone: us-central1-f
Client Version: version.Info{Major:"1", Minor:"3+", GitVersion:"v1.3.0-alpha.2.487+fe22780e894ff3", GitCommit:"fe22780e894ff38321e7c9bf33f2742d9c393921", GitTreeState:"clean", BuildDate:"2016-04-22T05:37:12Z", GoVersion:"go1.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"3+", GitVersion:"v1.3.0-alpha.2.487+fe22780e894ff3", GitCommit:"fe22780e894ff38321e7c9bf33f2742d9c393921", GitTreeState:"clean", BuildDate:"2016-04-22T05:33:11Z", GoVersion:"go1.6", Compiler:"gc", Platform:"linux/amd64"}
2016/04/21 22:53:46 e2e.go:196: Step 'get status' finished in 111.767294ms
2016/04/21 22:53:46 e2e.go:78: Cluster is UP
+ go run ./hack/e2e.go -v --test '--test_args=--ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]'
2016/04/21 22:53:47 e2e.go:194: Running: get status
Project: kubernetes-jenkins-pull
Zone: us-central1-f
Client Version: version.Info{Major:"1", Minor:"3+", GitVersion:"v1.3.0-alpha.2.487+fe22780e894ff3", GitCommit:"fe22780e894ff38321e7c9bf33f2742d9c393921", GitTreeState:"clean", BuildDate:"2016-04-22T05:37:12Z", GoVersion:"go1.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"3+", GitVersion:"v1.3.0-alpha.2.487+fe22780e894ff3", GitCommit:"fe22780e894ff38321e7c9bf33f2742d9c393921", GitTreeState:"clean", BuildDate:"2016-04-22T05:33:11Z", GoVersion:"go1.6", Compiler:"gc", Platform:"linux/amd64"}
2016/04/21 22:53:47 e2e.go:196: Step 'get status' finished in 112.780965ms
Project: kubernetes-jenkins-pull
Zone: us-central1-f
2016/04/21 22:53:47 e2e.go:194: Running: Ginkgo tests
Setting up for KUBERNETES_PROVIDER="gce".
Project: kubernetes-jenkins-pull
Zone: us-central1-f
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1461304428 - Will randomize all specs
Will run 171 of 278 specs
Running in parallel across 30 nodes
Apr 21 22:53:48.850: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
Apr 21 22:53:48.859: INFO: Waiting up to 10m0s for all pods (need at least 1) in namespace 'kube-system' to be running and ready
Apr 21 22:53:48.902: INFO: 25 / 25 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 21 22:53:48.902: INFO: expected 7 pod replicas in namespace 'kube-system', 7 are Running and Ready.
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:53:48.962: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:142
[It] should support --unix-socket=/path [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1189
STEP: Starting the proxy
Apr 21 22:53:49.171: INFO: Asynchronously running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix006562278/test'
STEP: retrieving proxy /api/ output
[AfterEach] [k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:53:49.240: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-5jnmk" for this suite.
• [SLOW TEST:5.346 seconds]
[k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
  [k8s.io] Proxy server
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
    should support --unix-socket=/path [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1189
------------------------------
[BeforeEach] [k8s.io] MetricsGrabber
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:53:48.931: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] MetricsGrabber
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/metrics_grabber_test.go:91
[It] should grab all metrics from API server.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/metrics_grabber_test.go:101
STEP: Connecting to /metrics endpoint
[AfterEach] [k8s.io] MetricsGrabber
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:53:49.396: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-metrics-grabber-mewbm" for this suite.
• [SLOW TEST:5.523 seconds]
[k8s.io] MetricsGrabber
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
  should grab all metrics from API server.
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/metrics_grabber_test.go:101
------------------------------
S
------------------------------
[BeforeEach] [k8s.io] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:53:48.919: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:54
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [k8s.io] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:53:51.465: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-resourcequota-3ve0k" for this suite.
• [SLOW TEST:7.571 seconds]
[k8s.io] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
  should create a ResourceQuota and ensure its status is promptly calculated.
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:54
------------------------------
S
------------------------------
[BeforeEach] version v1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:53:49.264: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:63
Apr 21 22:53:52.144: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0:10250/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="auth.log">... (200; 26.060709ms)
Apr 21 22:53:52.182: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0:10250/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="auth.log">... (200; 38.179178ms)
Apr 21 22:53:52.190: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0:10250/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="auth.log">... (200; 7.80068ms)
Apr 21 22:53:52.195: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0:10250/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="auth.log">... (200; 4.523467ms)
Apr 21 22:53:52.202: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0:10250/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="auth.log">... (200; 7.450418ms)
Apr 21 22:53:52.207: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0:10250/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="auth.log">... (200; 4.789077ms)
Apr 21 22:53:52.212: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0:10250/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="auth.log">... (200; 4.936713ms)
Apr 21 22:53:52.217: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0:10250/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="auth.log">... (200; 4.897576ms)
Apr 21 22:53:52.224: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0:10250/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="auth.log">... (200; 7.230676ms)
Apr 21 22:53:52.243: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0:10250/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="auth.log">... (200; 18.921098ms)
Apr 21 22:53:52.259: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0:10250/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="auth.log">... (200; 14.041326ms)
Apr 21 22:53:52.317: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0:10250/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="auth.log">... (200; 57.756272ms)
Apr 21 22:53:52.364: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0:10250/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="auth.log">... (200; 47.622408ms)
Apr 21 22:53:52.420: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0:10250/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="auth.log">... (200; 56.268441ms)
Apr 21 22:53:52.472: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0:10250/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="auth.log">... (200; 51.783915ms)
Apr 21 22:53:52.496: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0:10250/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="auth.log">... (200; 23.276854ms)
Apr 21 22:53:52.522: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0:10250/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="auth.log">... (200; 26.422481ms)
Apr 21 22:53:52.532: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0:10250/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="auth.log">... (200; 10.068986ms)
Apr 21 22:53:52.545: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0:10250/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="auth.log">... (200; 12.467573ms)
Apr 21 22:53:52.566: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0:10250/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="auth.log">... (200; 21.384286ms)
[AfterEach] version v1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:53:52.566: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-szs4m" for this suite.
• [SLOW TEST:8.326 seconds]
[k8s.io] Proxy
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
  version v1
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:40
    should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:63
------------------------------
[BeforeEach] [k8s.io] Cadvisor
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:53:48.947: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be healthy on every node.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cadvisor.go:41
STEP: getting list of nodes
STEP: Querying stats from node e2e-gce-master-1-master using url api/v1/proxy/nodes/e2e-gce-master-1-master/stats/
STEP: Querying stats from node e2e-gce-master-1-minion-6ch0 using url api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0/stats/
STEP: Querying stats from node e2e-gce-master-1-minion-8eot using url api/v1/proxy/nodes/e2e-gce-master-1-minion-8eot/stats/
STEP: Querying stats from node e2e-gce-master-1-minion-asea using url api/v1/proxy/nodes/e2e-gce-master-1-minion-asea/stats/
STEP: Querying stats from node e2e-gce-master-1-minion-fyts using url api/v1/proxy/nodes/e2e-gce-master-1-minion-fyts/stats/
STEP: Querying stats from node e2e-gce-master-1-minion-hlmm using url api/v1/proxy/nodes/e2e-gce-master-1-minion-hlmm/stats/
STEP: Querying stats from node e2e-gce-master-1-minion-x3cg using url api/v1/proxy/nodes/e2e-gce-master-1-minion-x3cg/stats/
[AfterEach] [k8s.io] Cadvisor
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:53:52.995: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-cadvisor-xrb8k" for this suite.
• [SLOW TEST:9.077 seconds]
[k8s.io] Cadvisor
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
  should be healthy on every node.
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cadvisor.go:41
------------------------------
S
------------------------------
[BeforeEach] [k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:53:54.309: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:142
[It] should check is all data is printed [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:924
Apr 21 22:53:54.351: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config version'
Apr 21 22:53:54.415: INFO: stderr: ""
Apr 21 22:53:54.415: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"3+\", GitVersion:\"v1.3.0-alpha.2.487+fe22780e894ff3\", GitCommit:\"fe22780e894ff38321e7c9bf33f2742d9c393921\", GitTreeState:\"clean\", BuildDate:\"2016-04-22T05:37:12Z\", GoVersion:\"go1.6\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"3+\", GitVersion:\"v1.3.0-alpha.2.487+fe22780e894ff3\", GitCommit:\"fe22780e894ff38321e7c9bf33f2742d9c393921\", GitTreeState:\"clean\", BuildDate:\"2016-04-22T05:33:11Z\", GoVersion:\"go1.6\", Compiler:\"gc\", Platform:\"linux/amd64\"}"
[AfterEach] [k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:53:54.415: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-lhyky" for this suite.
• [SLOW TEST:5.168 seconds]
[k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
  [k8s.io] Kubectl version
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
    should check is all data is printed [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:924
------------------------------
SS
------------------------------
[BeforeEach] [k8s.io] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:53:48.904: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:307 | |
STEP: Creating a ResourceQuota | |
STEP: Ensuring resource quota status is calculated | |
STEP: Creating a ConfigMap | |
STEP: Ensuring resource quota status captures configMap creation | |
STEP: Deleting a ConfigMap | |
STEP: Ensuring resource quota status released usage | |
[AfterEach] [k8s.io] ResourceQuota | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:53:55.249: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-resourcequota-h64hy" for this suite. | |
• [SLOW TEST:11.411 seconds] | |
[k8s.io] ResourceQuota | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should create a ResourceQuota and capture the life of a configMap. | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:307 | |
------------------------------ | |
S | |
------------------------------ | |
[BeforeEach] version v1 | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:53:48.937: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should proxy to cadvisor [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:61 | |
Apr 21 22:53:49.674: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0:4194/containers/: | |
<html> | |
<head> | |
<title>cAdvisor - /</title> | |
<link rel="stylesheet" href="../static/... (200; 34.102704ms) | |
Apr 21 22:53:49.701: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0:4194/containers/: | |
<html> | |
<head> | |
<title>cAdvisor - /</title> | |
<link rel="stylesheet" href="../static/... (200; 26.718019ms) | |
Apr 21 22:53:49.722: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0:4194/containers/: | |
<html> | |
<head> | |
<title>cAdvisor - /</title> | |
<link rel="stylesheet" href="../static/... (200; 21.758845ms) | |
Apr 21 22:53:49.732: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0:4194/containers/: | |
<html> | |
<head> | |
<title>cAdvisor - /</title> | |
<link rel="stylesheet" href="../static/... (200; 9.890519ms) | |
Apr 21 22:53:49.806: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0:4194/containers/: | |
<html> | |
<head> | |
<title>cAdvisor - /</title> | |
<link rel="stylesheet" href="../static/... (200; 73.871468ms) | |
Apr 21 22:53:50.049: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 242.866908ms)
Apr 21 22:53:50.081: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 31.596319ms)
Apr 21 22:53:50.092: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 10.910698ms)
Apr 21 22:53:50.147: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 55.03862ms)
Apr 21 22:53:50.163: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 16.375703ms)
Apr 21 22:53:50.215: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 52.00894ms)
Apr 21 22:53:55.847: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 5.631786878s)
Apr 21 22:53:55.855: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 7.379919ms)
Apr 21 22:53:55.860: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 5.56821ms)
Apr 21 22:53:55.868: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 8.052663ms)
Apr 21 22:53:55.874: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 5.823202ms)
Apr 21 22:53:55.880: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 5.761096ms)
Apr 21 22:53:55.886: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 6.092179ms)
Apr 21 22:53:55.892: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 6.327801ms)
Apr 21 22:53:55.898: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 5.479045ms)
[AfterEach] version v1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:53:55.898: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-sr0jr" for this suite.
• [SLOW TEST:11.982 seconds]
[k8s.io] Proxy
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
version v1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:40
should proxy to cadvisor [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:61
------------------------------
[BeforeEach] [k8s.io] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:53:48.948: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:133
STEP: Discovering how many secrets are in namespace by default
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [k8s.io] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:53:56.459: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-resourcequota-2srfo" for this suite.
• [SLOW TEST:12.530 seconds]
[k8s.io] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
should create a ResourceQuota and capture the life of a secret.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:133
------------------------------
S
------------------------------
[BeforeEach] [k8s.io] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:53:48.951: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a nodePort service updated to clusterIP.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:221
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a NodePort type Service
STEP: Ensuring resource quota status captures service creation
STEP: Updating the service type to clusterIP
STEP: Checking resource quota status capture service update
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [k8s.io] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:53:58.860: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-resourcequota-ekkzm" for this suite.
• [SLOW TEST:15.072 seconds]
[k8s.io] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
should create a ResourceQuota and capture the life of a nodePort service updated to clusterIP.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:221
------------------------------
[BeforeEach] [k8s.io] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:53:59.480: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:74
Apr 21 22:53:59.528: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
[It] should prevent NodePort collisions
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:771
STEP: creating service nodeport-collision-1 with type NodePort in namespace e2e-tests-services-5fydn
STEP: creating service nodeport-collision-2 with conflicting NodePort
STEP: deleting service nodeport-collision-1 to release NodePort
STEP: creating service nodeport-collision-2 with no-longer-conflicting NodePort
STEP: deleting service nodeport-collision-2 in namespace e2e-tests-services-5fydn
[AfterEach] [k8s.io] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:53:59.626: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-services-5fydn" for this suite.
• [SLOW TEST:5.164 seconds]
[k8s.io] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
should prevent NodePort collisions
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:771
------------------------------
[BeforeEach] [k8s.io] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:53:48.947: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:342
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [k8s.io] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:53:56.553: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-resourcequota-ouxxh" for this suite.
• [SLOW TEST:17.648 seconds]
[k8s.io] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
should create a ResourceQuota and capture the life of a replication controller.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:342
------------------------------
S
------------------------------
[BeforeEach] [k8s.io] Networking
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:53:48.937: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Networking
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:50
STEP: Executing a successful http request from the external internet
[It] should provide Internet connection for containers [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:55
STEP: Running container which tries to wget google.com
Apr 21 22:53:50.180: INFO: Waiting up to 5m0s for pod wget-test status to be success or failure
Apr 21 22:53:50.208: INFO: No Status.Info for container 'wget-test-container' in pod 'wget-test' yet
Apr 21 22:53:50.208: INFO: Waiting for pod wget-test in namespace 'e2e-tests-nettest-fwh42' status to be 'success or failure'(found phase: "Pending", readiness: false) (28.551096ms elapsed)
Apr 21 22:53:52.213: INFO: No Status.Info for container 'wget-test-container' in pod 'wget-test' yet
Apr 21 22:53:52.213: INFO: Waiting for pod wget-test in namespace 'e2e-tests-nettest-fwh42' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.032676021s elapsed)
Apr 21 22:53:54.216: INFO: Nil State.Terminated for container 'wget-test-container' in pod 'wget-test' in namespace 'e2e-tests-nettest-fwh42' so far
Apr 21 22:53:54.216: INFO: Waiting for pod wget-test in namespace 'e2e-tests-nettest-fwh42' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.035916556s elapsed)
Apr 21 22:53:56.220: INFO: Nil State.Terminated for container 'wget-test-container' in pod 'wget-test' in namespace 'e2e-tests-nettest-fwh42' so far
Apr 21 22:53:56.220: INFO: Waiting for pod wget-test in namespace 'e2e-tests-nettest-fwh42' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.039993215s elapsed)
Apr 21 22:53:58.223: INFO: Nil State.Terminated for container 'wget-test-container' in pod 'wget-test' in namespace 'e2e-tests-nettest-fwh42' so far
Apr 21 22:53:58.223: INFO: Waiting for pod wget-test in namespace 'e2e-tests-nettest-fwh42' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.043151302s elapsed)
Apr 21 22:54:00.229: INFO: Nil State.Terminated for container 'wget-test-container' in pod 'wget-test' in namespace 'e2e-tests-nettest-fwh42' so far
Apr 21 22:54:00.229: INFO: Waiting for pod wget-test in namespace 'e2e-tests-nettest-fwh42' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.04939859s elapsed)
STEP: Saw pod success
[AfterEach] [k8s.io] Networking
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:54:02.255: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-nettest-fwh42" for this suite.
• [SLOW TEST:18.343 seconds]
[k8s.io] Networking
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
should provide Internet connection for containers [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:55
------------------------------
[BeforeEach] [k8s.io] ConfigMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:53:48.907: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/configmap.go:37
STEP: Creating configMap with name configmap-test-volume-95a67a15-084e-11e6-aee8-42010af00007
STEP: Creating a pod to test consume configMaps
Apr 21 22:53:49.312: INFO: Waiting up to 5m0s for pod pod-configmaps-95ab3758-084e-11e6-aee8-42010af00007 status to be success or failure
Apr 21 22:53:49.314: INFO: No Status.Info for container 'configmap-volume-test' in pod 'pod-configmaps-95ab3758-084e-11e6-aee8-42010af00007' yet
Apr 21 22:53:49.314: INFO: Waiting for pod pod-configmaps-95ab3758-084e-11e6-aee8-42010af00007 in namespace 'e2e-tests-configmap-38g1v' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.11166ms elapsed)
Apr 21 22:53:51.327: INFO: Nil State.Terminated for container 'configmap-volume-test' in pod 'pod-configmaps-95ab3758-084e-11e6-aee8-42010af00007' in namespace 'e2e-tests-configmap-38g1v' so far
Apr 21 22:53:51.327: INFO: Waiting for pod pod-configmaps-95ab3758-084e-11e6-aee8-42010af00007 in namespace 'e2e-tests-configmap-38g1v' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.01490821s elapsed)
Apr 21 22:53:53.330: INFO: Nil State.Terminated for container 'configmap-volume-test' in pod 'pod-configmaps-95ab3758-084e-11e6-aee8-42010af00007' in namespace 'e2e-tests-configmap-38g1v' so far
Apr 21 22:53:53.330: INFO: Waiting for pod pod-configmaps-95ab3758-084e-11e6-aee8-42010af00007 in namespace 'e2e-tests-configmap-38g1v' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.018435228s elapsed)
Apr 21 22:53:55.334: INFO: Nil State.Terminated for container 'configmap-volume-test' in pod 'pod-configmaps-95ab3758-084e-11e6-aee8-42010af00007' in namespace 'e2e-tests-configmap-38g1v' so far
Apr 21 22:53:55.334: INFO: Waiting for pod pod-configmaps-95ab3758-084e-11e6-aee8-42010af00007 in namespace 'e2e-tests-configmap-38g1v' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.021786346s elapsed)
Apr 21 22:53:57.338: INFO: Nil State.Terminated for container 'configmap-volume-test' in pod 'pod-configmaps-95ab3758-084e-11e6-aee8-42010af00007' in namespace 'e2e-tests-configmap-38g1v' so far
Apr 21 22:53:57.338: INFO: Waiting for pod pod-configmaps-95ab3758-084e-11e6-aee8-42010af00007 in namespace 'e2e-tests-configmap-38g1v' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.0257832s elapsed)
Apr 21 22:53:59.341: INFO: Nil State.Terminated for container 'configmap-volume-test' in pod 'pod-configmaps-95ab3758-084e-11e6-aee8-42010af00007' in namespace 'e2e-tests-configmap-38g1v' so far
Apr 21 22:53:59.341: INFO: Waiting for pod pod-configmaps-95ab3758-084e-11e6-aee8-42010af00007 in namespace 'e2e-tests-configmap-38g1v' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.029078628s elapsed)
Apr 21 22:54:01.345: INFO: Nil State.Terminated for container 'configmap-volume-test' in pod 'pod-configmaps-95ab3758-084e-11e6-aee8-42010af00007' in namespace 'e2e-tests-configmap-38g1v' so far
Apr 21 22:54:01.345: INFO: Waiting for pod pod-configmaps-95ab3758-084e-11e6-aee8-42010af00007 in namespace 'e2e-tests-configmap-38g1v' status to be 'success or failure'(found phase: "Pending", readiness: false) (12.032996244s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node e2e-gce-master-1-minion-x3cg pod pod-configmaps-95ab3758-084e-11e6-aee8-42010af00007 container configmap-volume-test: <nil>
STEP: Successfully fetched pod logs:content of file "/etc/configmap-volume/data-1": value-1
STEP: Cleaning up the configMap
[AfterEach] [k8s.io] ConfigMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:54:03.514: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-38g1v" for this suite.
• [SLOW TEST:19.649 seconds]
[k8s.io] ConfigMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
should be consumable from pods in volume [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/configmap.go:37
------------------------------
S
------------------------------
[BeforeEach] [k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:54:04.646: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:142
[It] should support proxy with --port 0 [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1161
STEP: starting the proxy server
Apr 21 22:54:04.736: INFO: Asynchronously running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config proxy -p 0'
STEP: curling proxy /api/ output
[AfterEach] [k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:54:04.797: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-lytnh" for this suite.
• [SLOW TEST:5.208 seconds]
[k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
[k8s.io] Proxy server
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
should support proxy with --port 0 [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1161
------------------------------
S
------------------------------
[BeforeEach] [k8s.io] Variable Expansion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:54:01.480: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/expansion.go:70
STEP: Creating a pod to test env composition
Apr 21 22:54:01.608: INFO: Waiting up to 5m0s for pod var-expansion-9d003b00-084e-11e6-a9ac-42010af00007 status to be success or failure
Apr 21 22:54:01.614: INFO: No Status.Info for container 'dapi-container' in pod 'var-expansion-9d003b00-084e-11e6-a9ac-42010af00007' yet
Apr 21 22:54:01.614: INFO: Waiting for pod var-expansion-9d003b00-084e-11e6-a9ac-42010af00007 in namespace 'e2e-tests-var-expansion-638eo' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.312864ms elapsed)
Apr 21 22:54:03.624: INFO: No Status.Info for container 'dapi-container' in pod 'var-expansion-9d003b00-084e-11e6-a9ac-42010af00007' yet
Apr 21 22:54:03.624: INFO: Waiting for pod var-expansion-9d003b00-084e-11e6-a9ac-42010af00007 in namespace 'e2e-tests-var-expansion-638eo' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.015901714s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node e2e-gce-master-1-minion-asea pod var-expansion-9d003b00-084e-11e6-a9ac-42010af00007 container dapi-container: <nil>
STEP: Successfully fetched pod logs:KUBERNETES_PORT=tcp://10.0.0.1:443
KUBERNETES_SERVICE_PORT=443
FOOBAR=foo-value;;bar-value
HOSTNAME=var-expansion-9d003b00-084e-11e6-a9ac-42010af00007
SHLVL=1
HOME=/root
KUBERNETES_PORT_443_TCP_ADDR=10.0.0.1
BAR=bar-value
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
FOO=foo-value
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP=tcp://10.0.0.1:443
KUBERNETES_SERVICE_PORT_HTTPS=443
PWD=/
KUBERNETES_SERVICE_HOST=10.0.0.1
[AfterEach] [k8s.io] Variable Expansion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:54:05.698: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-638eo" for this suite.
• [SLOW TEST:9.234 seconds]
[k8s.io] Variable Expansion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
should allow composing env vars into new env vars [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/expansion.go:70
------------------------------
[BeforeEach] [k8s.io] ConfigMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:53:58.026: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/configmap.go:49
STEP: Creating configMap with name configmap-test-volume-map-9aeb7021-084e-11e6-9698-42010af00007
STEP: Creating a pod to test consume configMaps
Apr 21 22:53:58.154: INFO: Waiting up to 5m0s for pod pod-configmaps-9aedf791-084e-11e6-9698-42010af00007 status to be success or failure
Apr 21 22:53:58.159: INFO: No Status.Info for container 'configmap-volume-test' in pod 'pod-configmaps-9aedf791-084e-11e6-9698-42010af00007' yet
Apr 21 22:53:58.159: INFO: Waiting for pod pod-configmaps-9aedf791-084e-11e6-9698-42010af00007 in namespace 'e2e-tests-configmap-z7kiv' status to be 'success or failure'(found phase: "Pending", readiness: false) (5.309159ms elapsed)
Apr 21 22:54:00.169: INFO: Nil State.Terminated for container 'configmap-volume-test' in pod 'pod-configmaps-9aedf791-084e-11e6-9698-42010af00007' in namespace 'e2e-tests-configmap-z7kiv' so far
Apr 21 22:54:00.169: INFO: Waiting for pod pod-configmaps-9aedf791-084e-11e6-9698-42010af00007 in namespace 'e2e-tests-configmap-z7kiv' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.015316126s elapsed)
Apr 21 22:54:02.175: INFO: Nil State.Terminated for container 'configmap-volume-test' in pod 'pod-configmaps-9aedf791-084e-11e6-9698-42010af00007' in namespace 'e2e-tests-configmap-z7kiv' so far
Apr 21 22:54:02.175: INFO: Waiting for pod pod-configmaps-9aedf791-084e-11e6-9698-42010af00007 in namespace 'e2e-tests-configmap-z7kiv' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.020858394s elapsed)
Apr 21 22:54:04.179: INFO: Nil State.Terminated for container 'configmap-volume-test' in pod 'pod-configmaps-9aedf791-084e-11e6-9698-42010af00007' in namespace 'e2e-tests-configmap-z7kiv' so far
Apr 21 22:54:04.179: INFO: Waiting for pod pod-configmaps-9aedf791-084e-11e6-9698-42010af00007 in namespace 'e2e-tests-configmap-z7kiv' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.024685803s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node e2e-gce-master-1-minion-asea pod pod-configmaps-9aedf791-084e-11e6-9698-42010af00007 container configmap-volume-test: <nil>
STEP: Successfully fetched pod logs:content of file "/etc/configmap-volume/path/to/data-2": value-2
STEP: Cleaning up the configMap
[AfterEach] [k8s.io] ConfigMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:54:06.331: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-z7kiv" for this suite.
• [SLOW TEST:13.325 seconds]
[k8s.io] ConfigMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
should be consumable from pods in volume with mappings [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/configmap.go:49
------------------------------
[BeforeEach] version v1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:54:08.559: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:60
Apr 21 22:54:08.690: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="auth.log">... (200; 4.336737ms)
Apr 21 22:54:08.694: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="auth.log">... (200; 3.951845ms)
Apr 21 22:54:08.698: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0/logs/: <pre> | |
<a href="alternatives.log">alternatives.log</a> | |
<a href="apt/">apt/</a> | |
<a href="auth.log">... (200; 4.81174ms) | |
Apr 21 22:54:08.703: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0/logs/: <pre> | |
<a href="alternatives.log">alternatives.log</a> | |
<a href="apt/">apt/</a> | |
<a href="auth.log">... (200; 4.575568ms) | |
Apr 21 22:54:08.707: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0/logs/: <pre> | |
<a href="alternatives.log">alternatives.log</a> | |
<a href="apt/">apt/</a> | |
<a href="auth.log">... (200; 4.058441ms) | |
Apr 21 22:54:08.712: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0/logs/: <pre> | |
<a href="alternatives.log">alternatives.log</a> | |
<a href="apt/">apt/</a> | |
<a href="auth.log">... (200; 4.896094ms) | |
Apr 21 22:54:08.723: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0/logs/: <pre> | |
<a href="alternatives.log">alternatives.log</a> | |
<a href="apt/">apt/</a> | |
<a href="auth.log">... (200; 10.665909ms) | |
Apr 21 22:54:08.728: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0/logs/: <pre> | |
<a href="alternatives.log">alternatives.log</a> | |
<a href="apt/">apt/</a> | |
<a href="auth.log">... (200; 4.942363ms) | |
Apr 21 22:54:08.745: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0/logs/: <pre> | |
<a href="alternatives.log">alternatives.log</a> | |
<a href="apt/">apt/</a> | |
<a href="auth.log">... (200; 17.358337ms) | |
Apr 21 22:54:08.752: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0/logs/: <pre> | |
<a href="alternatives.log">alternatives.log</a> | |
<a href="apt/">apt/</a> | |
<a href="auth.log">... (200; 6.808645ms) | |
Apr 21 22:54:08.759: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0/logs/: <pre> | |
<a href="alternatives.log">alternatives.log</a> | |
<a href="apt/">apt/</a> | |
<a href="auth.log">... (200; 6.917148ms) | |
Apr 21 22:54:08.776: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0/logs/: <pre> | |
<a href="alternatives.log">alternatives.log</a> | |
<a href="apt/">apt/</a> | |
<a href="auth.log">... (200; 17.031354ms) | |
Apr 21 22:54:08.790: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0/logs/: <pre> | |
<a href="alternatives.log">alternatives.log</a> | |
<a href="apt/">apt/</a> | |
<a href="auth.log">... (200; 13.650841ms) | |
Apr 21 22:54:08.800: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0/logs/: <pre> | |
<a href="alternatives.log">alternatives.log</a> | |
<a href="apt/">apt/</a> | |
<a href="auth.log">... (200; 10.066791ms) | |
Apr 21 22:54:08.807: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0/logs/: <pre> | |
<a href="alternatives.log">alternatives.log</a> | |
<a href="apt/">apt/</a> | |
<a href="auth.log">... (200; 7.741424ms) | |
Apr 21 22:54:08.823: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0/logs/: <pre> | |
<a href="alternatives.log">alternatives.log</a> | |
<a href="apt/">apt/</a> | |
<a href="auth.log">... (200; 15.199531ms) | |
Apr 21 22:54:08.828: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0/logs/: <pre> | |
<a href="alternatives.log">alternatives.log</a> | |
<a href="apt/">apt/</a> | |
<a href="auth.log">... (200; 5.526788ms) | |
Apr 21 22:54:08.833: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0/logs/: <pre> | |
<a href="alternatives.log">alternatives.log</a> | |
<a href="apt/">apt/</a> | |
<a href="auth.log">... (200; 4.490356ms) | |
Apr 21 22:54:08.838: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0/logs/: <pre> | |
<a href="alternatives.log">alternatives.log</a> | |
<a href="apt/">apt/</a> | |
<a href="auth.log">... (200; 4.844928ms) | |
Apr 21 22:54:08.842: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0/logs/: <pre> | |
<a href="alternatives.log">alternatives.log</a> | |
<a href="apt/">apt/</a> | |
<a href="auth.log">... (200; 4.665007ms) | |
[AfterEach] version v1 | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:54:08.842: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-proxy-j2rr0" for this suite. | |
• [SLOW TEST:5.303 seconds] | |
[k8s.io] Proxy | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
version v1 | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:40 | |
should proxy logs on node [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:60 | |
------------------------------ | |
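Editor's note: the proxy test above issues twenty GETs against the node-logs proxy subresource, `/api/v1/proxy/nodes/<node>/logs/` (the pre-1.2-style proxy path). As a minimal sketch of how such a path is assembled — the helper name `nodeLogsProxyPath` is illustrative, not from the Kubernetes source; the node name is copied from the log:

```go
package main

import (
	"fmt"
	"net/url"
)

// nodeLogsProxyPath builds the apiserver proxy path for a node's /logs/
// endpoint, the path repeatedly requested by the test above.
// (Illustrative helper; not the actual e2e framework code.)
func nodeLogsProxyPath(node string) string {
	// Node names land in a path segment, so escape them as such.
	return fmt.Sprintf("/api/v1/proxy/nodes/%s/logs/", url.PathEscape(node))
}

func main() {
	fmt.Println(nodeLogsProxyPath("e2e-gce-master-1-minion-6ch0"))
	// → /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0/logs/
}
```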
S | |
------------------------------ | |
[BeforeEach] [k8s.io] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:53:48.959: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [k8s.io] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:142 | |
[BeforeEach] [k8s.io] Kubectl run default | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:946 | |
[It] should create an rc or deployment from an image [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:966 | |
STEP: running the image gcr.io/google_containers/nginx:1.7.9 | |
Apr 21 22:53:51.403: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config run e2e-test-nginx-deployment --image=gcr.io/google_containers/nginx:1.7.9 --namespace=e2e-tests-kubectl-s4vwf' | |
Apr 21 22:53:51.499: INFO: stderr: "" | |
Apr 21 22:53:51.499: INFO: stdout: "deployment \"e2e-test-nginx-deployment\" created" | |
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created | |
[AfterEach] [k8s.io] Kubectl run default | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:950 | |
Apr 21 22:53:53.507: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-s4vwf' | |
Apr 21 22:53:55.648: INFO: stderr: "" | |
Apr 21 22:53:55.648: INFO: stdout: "deployment \"e2e-test-nginx-deployment\" deleted" | |
[AfterEach] [k8s.io] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:53:55.648: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-kubectl-s4vwf" for this suite. | |
• [SLOW TEST:26.715 seconds] | |
[k8s.io] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
[k8s.io] Kubectl run default | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should create an rc or deployment from an image [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:966 | |
------------------------------ | |
[BeforeEach] [k8s.io] Ubernetes Lite | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:54:06.597: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [k8s.io] Ubernetes Lite | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ubernetes_lite.go:49 | |
STEP: Checking for multi-zone cluster. Zone count = 1 | |
Apr 21 22:54:06.707: INFO: Zone count is %!!(MISSING)d(MISSING), only run for multi-zone clusters, skipping test | |
[AfterEach] [k8s.io] Ubernetes Lite | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:54:06.707: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-ubernetes-lite-z678t" for this suite. | |
S [SKIPPING] in Spec Setup (BeforeEach) [10.136 seconds] | |
[k8s.io] Ubernetes Lite | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should spread the pods of a service across zones [BeforeEach] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ubernetes_lite.go:52 | |
Apr 21 22:54:06.707: Zone count is %!d(MISSING), only run for multi-zone clusters, skipping test | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:276 | |
------------------------------ | |
S | |
------------------------------ | |
[BeforeEach] [k8s.io] Mesos | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:54:07.281: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [k8s.io] Mesos | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/mesos.go:42 | |
Apr 21 22:54:07.339: INFO: Only supported for providers [mesos/docker] (not gce) | |
[AfterEach] [k8s.io] Mesos | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:54:07.340: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-pods-w9r2a" for this suite. | |
S [SKIPPING] in Spec Setup (BeforeEach) [10.097 seconds] | |
[k8s.io] Mesos | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
applies slave attributes as labels [BeforeEach] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/mesos.go:62 | |
Apr 21 22:54:07.339: Only supported for providers [mesos/docker] (not gce) | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:276 | |
------------------------------ | |
S | |
------------------------------ | |
[BeforeEach] [k8s.io] Ubernetes Lite | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:54:15.675: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [k8s.io] Ubernetes Lite | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ubernetes_lite.go:49 | |
STEP: Checking for multi-zone cluster. Zone count = 1 | |
Apr 21 22:54:15.745: INFO: Zone count is %!!(MISSING)d(MISSING), only run for multi-zone clusters, skipping test | |
[AfterEach] [k8s.io] Ubernetes Lite | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:54:15.745: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-ubernetes-lite-11zmn" for this suite. | |
S [SKIPPING] in Spec Setup (BeforeEach) [5.088 seconds] | |
[k8s.io] Ubernetes Lite | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should spread the pods of a replication controller across zones [BeforeEach] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ubernetes_lite.go:56 | |
Apr 21 22:54:15.745: Zone count is %!d(MISSING), only run for multi-zone clusters, skipping test | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:276 | |
------------------------------ | |
S | |
------------------------------ | |
[BeforeEach] [k8s.io] Deployment | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:53:48.939: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] deployment should support rollback | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:76 | |
Apr 21 22:53:49.875: INFO: Creating deployment test-rollback-deployment | |
Apr 21 22:53:53.992: INFO: Updating deployment test-rollback-deployment | |
Apr 21 22:54:00.029: INFO: rolling back deployment test-rollback-deployment to revision 1 | |
Apr 21 22:54:06.086: INFO: rolling back deployment test-rollback-deployment to last revision | |
Apr 21 22:54:12.145: INFO: Deleting deployment test-rollback-deployment | |
Apr 21 22:54:16.232: INFO: Ensuring deployment test-rollback-deployment was deleted | |
Apr 21 22:54:16.234: INFO: Ensuring deployment test-rollback-deployment's RSes were deleted | |
Apr 21 22:54:16.236: INFO: Ensuring deployment test-rollback-deployment's Pods were deleted | |
[AfterEach] [k8s.io] Deployment | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:54:16.239: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-deployment-kd8n1" for this suite. | |
• [SLOW TEST:32.318 seconds] | |
[k8s.io] Deployment | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
deployment should support rollback | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:76 | |
------------------------------ | |
[BeforeEach] [k8s.io] V1Job | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:53:48.960: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should run a job to completion when tasks sometimes fail and are not locally restarted | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:100 | |
STEP: Creating a job | |
STEP: Ensuring job reaches completions | |
[AfterEach] [k8s.io] V1Job | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:54:19.575: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-v1job-x2n18" for this suite. | |
• [SLOW TEST:35.639 seconds] | |
[k8s.io] V1Job | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should run a job to completion when tasks sometimes fail and are not locally restarted | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:100 | |
------------------------------ | |
[BeforeEach] [k8s.io] EmptyDir volumes | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:53:48.953: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should support (non-root,0644,default) [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:105 | |
STEP: Creating a pod to test emptydir 0644 on node default medium | |
Apr 21 22:53:51.253: INFO: Waiting up to 5m0s for pod pod-96cdca5e-084e-11e6-94a7-42010af00007 status to be success or failure | |
Apr 21 22:53:51.265: INFO: No Status.Info for container 'test-container' in pod 'pod-96cdca5e-084e-11e6-94a7-42010af00007' yet | |
Apr 21 22:53:51.265: INFO: Waiting for pod pod-96cdca5e-084e-11e6-94a7-42010af00007 in namespace 'e2e-tests-emptydir-8h6zs' status to be 'success or failure'(found phase: "Pending", readiness: false) (12.672806ms elapsed) | |
Apr 21 22:53:53.271: INFO: No Status.Info for container 'test-container' in pod 'pod-96cdca5e-084e-11e6-94a7-42010af00007' yet | |
Apr 21 22:53:53.271: INFO: Waiting for pod pod-96cdca5e-084e-11e6-94a7-42010af00007 in namespace 'e2e-tests-emptydir-8h6zs' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.017999829s elapsed) | |
Apr 21 22:53:55.274: INFO: No Status.Info for container 'test-container' in pod 'pod-96cdca5e-084e-11e6-94a7-42010af00007' yet | |
Apr 21 22:53:55.274: INFO: Waiting for pod pod-96cdca5e-084e-11e6-94a7-42010af00007 in namespace 'e2e-tests-emptydir-8h6zs' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.020903255s elapsed) | |
Apr 21 22:53:57.277: INFO: No Status.Info for container 'test-container' in pod 'pod-96cdca5e-084e-11e6-94a7-42010af00007' yet | |
Apr 21 22:53:57.277: INFO: Waiting for pod pod-96cdca5e-084e-11e6-94a7-42010af00007 in namespace 'e2e-tests-emptydir-8h6zs' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.024400209s elapsed) | |
Apr 21 22:53:59.285: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-96cdca5e-084e-11e6-94a7-42010af00007' in namespace 'e2e-tests-emptydir-8h6zs' so far | |
Apr 21 22:53:59.285: INFO: Waiting for pod pod-96cdca5e-084e-11e6-94a7-42010af00007 in namespace 'e2e-tests-emptydir-8h6zs' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.032371452s elapsed) | |
Apr 21 22:54:01.289: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-96cdca5e-084e-11e6-94a7-42010af00007' in namespace 'e2e-tests-emptydir-8h6zs' so far | |
Apr 21 22:54:01.289: INFO: Waiting for pod pod-96cdca5e-084e-11e6-94a7-42010af00007 in namespace 'e2e-tests-emptydir-8h6zs' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.035956606s elapsed) | |
Apr 21 22:54:03.292: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-96cdca5e-084e-11e6-94a7-42010af00007' in namespace 'e2e-tests-emptydir-8h6zs' so far | |
Apr 21 22:54:03.292: INFO: Waiting for pod pod-96cdca5e-084e-11e6-94a7-42010af00007 in namespace 'e2e-tests-emptydir-8h6zs' status to be 'success or failure'(found phase: "Pending", readiness: false) (12.039441923s elapsed) | |
Apr 21 22:54:05.296: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-96cdca5e-084e-11e6-94a7-42010af00007' in namespace 'e2e-tests-emptydir-8h6zs' so far | |
Apr 21 22:54:05.296: INFO: Waiting for pod pod-96cdca5e-084e-11e6-94a7-42010af00007 in namespace 'e2e-tests-emptydir-8h6zs' status to be 'success or failure'(found phase: "Pending", readiness: false) (14.043252435s elapsed) | |
Apr 21 22:54:07.301: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-96cdca5e-084e-11e6-94a7-42010af00007' in namespace 'e2e-tests-emptydir-8h6zs' so far | |
Apr 21 22:54:07.301: INFO: Waiting for pod pod-96cdca5e-084e-11e6-94a7-42010af00007 in namespace 'e2e-tests-emptydir-8h6zs' status to be 'success or failure'(found phase: "Pending", readiness: false) (16.048434115s elapsed) | |
Apr 21 22:54:09.305: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-96cdca5e-084e-11e6-94a7-42010af00007' in namespace 'e2e-tests-emptydir-8h6zs' so far | |
Apr 21 22:54:09.305: INFO: Waiting for pod pod-96cdca5e-084e-11e6-94a7-42010af00007 in namespace 'e2e-tests-emptydir-8h6zs' status to be 'success or failure'(found phase: "Pending", readiness: false) (18.052447922s elapsed) | |
Apr 21 22:54:11.308: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-96cdca5e-084e-11e6-94a7-42010af00007' in namespace 'e2e-tests-emptydir-8h6zs' so far | |
Apr 21 22:54:11.308: INFO: Waiting for pod pod-96cdca5e-084e-11e6-94a7-42010af00007 in namespace 'e2e-tests-emptydir-8h6zs' status to be 'success or failure'(found phase: "Pending", readiness: false) (20.055738933s elapsed) | |
Apr 21 22:54:13.314: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-96cdca5e-084e-11e6-94a7-42010af00007' in namespace 'e2e-tests-emptydir-8h6zs' so far | |
Apr 21 22:54:13.314: INFO: Waiting for pod pod-96cdca5e-084e-11e6-94a7-42010af00007 in namespace 'e2e-tests-emptydir-8h6zs' status to be 'success or failure'(found phase: "Pending", readiness: false) (22.060876603s elapsed) | |
Apr 21 22:54:15.319: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-96cdca5e-084e-11e6-94a7-42010af00007' in namespace 'e2e-tests-emptydir-8h6zs' so far | |
Apr 21 22:54:15.319: INFO: Waiting for pod pod-96cdca5e-084e-11e6-94a7-42010af00007 in namespace 'e2e-tests-emptydir-8h6zs' status to be 'success or failure'(found phase: "Pending", readiness: false) (24.066129893s elapsed) | |
Apr 21 22:54:17.323: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-96cdca5e-084e-11e6-94a7-42010af00007' in namespace 'e2e-tests-emptydir-8h6zs' so far | |
Apr 21 22:54:17.323: INFO: Waiting for pod pod-96cdca5e-084e-11e6-94a7-42010af00007 in namespace 'e2e-tests-emptydir-8h6zs' status to be 'success or failure'(found phase: "Pending", readiness: false) (26.070526688s elapsed) | |
Apr 21 22:54:19.327: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-96cdca5e-084e-11e6-94a7-42010af00007' in namespace 'e2e-tests-emptydir-8h6zs' so far | |
Apr 21 22:54:19.327: INFO: Waiting for pod pod-96cdca5e-084e-11e6-94a7-42010af00007 in namespace 'e2e-tests-emptydir-8h6zs' status to be 'success or failure'(found phase: "Pending", readiness: false) (28.074432224s elapsed) | |
Apr 21 22:54:21.334: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-96cdca5e-084e-11e6-94a7-42010af00007' in namespace 'e2e-tests-emptydir-8h6zs' so far | |
Apr 21 22:54:21.334: INFO: Waiting for pod pod-96cdca5e-084e-11e6-94a7-42010af00007 in namespace 'e2e-tests-emptydir-8h6zs' status to be 'success or failure'(found phase: "Pending", readiness: false) (30.081314262s elapsed) | |
STEP: Saw pod success | |
STEP: Trying to get logs from node e2e-gce-master-1-minion-6ch0 pod pod-96cdca5e-084e-11e6-94a7-42010af00007 container test-container: <nil> | |
STEP: Successfully fetched pod logs:mount type of "/test-volume": 61267 | |
content of file "/test-volume/test-file": mount-tester new file | |
perms of file "/test-volume/test-file": -rw-r--r-- | |
[AfterEach] [k8s.io] EmptyDir volumes | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:54:23.434: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-emptydir-8h6zs" for this suite. | |
• [SLOW TEST:39.505 seconds] | |
[k8s.io] EmptyDir volumes | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should support (non-root,0644,default) [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:105 | |
------------------------------ | |
[BeforeEach] [k8s.io] Docker Containers | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:53:48.956: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [k8s.io] Docker Containers | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:36 | |
[It] should be able to override the image's default commmand (docker entrypoint) [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:62 | |
STEP: Creating a pod to test override command | |
Apr 21 22:53:51.371: INFO: Waiting up to 5m0s for pod client-containers-96e1f54a-084e-11e6-a789-42010af00007 status to be success or failure | |
Apr 21 22:53:51.378: INFO: No Status.Info for container 'test-container' in pod 'client-containers-96e1f54a-084e-11e6-a789-42010af00007' yet | |
Apr 21 22:53:51.378: INFO: Waiting for pod client-containers-96e1f54a-084e-11e6-a789-42010af00007 in namespace 'e2e-tests-containers-obyfl' status to be 'success or failure'(found phase: "Pending", readiness: false) (7.141132ms elapsed) | |
Apr 21 22:53:53.381: INFO: No Status.Info for container 'test-container' in pod 'client-containers-96e1f54a-084e-11e6-a789-42010af00007' yet | |
Apr 21 22:53:53.381: INFO: Waiting for pod client-containers-96e1f54a-084e-11e6-a789-42010af00007 in namespace 'e2e-tests-containers-obyfl' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.009841732s elapsed) | |
Apr 21 22:53:55.384: INFO: No Status.Info for container 'test-container' in pod 'client-containers-96e1f54a-084e-11e6-a789-42010af00007' yet | |
Apr 21 22:53:55.384: INFO: Waiting for pod client-containers-96e1f54a-084e-11e6-a789-42010af00007 in namespace 'e2e-tests-containers-obyfl' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.012975241s elapsed) | |
Apr 21 22:53:57.387: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-96e1f54a-084e-11e6-a789-42010af00007' in namespace 'e2e-tests-containers-obyfl' so far | |
Apr 21 22:53:57.387: INFO: Waiting for pod client-containers-96e1f54a-084e-11e6-a789-42010af00007 in namespace 'e2e-tests-containers-obyfl' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.016338436s elapsed) | |
Apr 21 22:53:59.391: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-96e1f54a-084e-11e6-a789-42010af00007' in namespace 'e2e-tests-containers-obyfl' so far | |
Apr 21 22:53:59.391: INFO: Waiting for pod client-containers-96e1f54a-084e-11e6-a789-42010af00007 in namespace 'e2e-tests-containers-obyfl' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.01972609s elapsed) | |
Apr 21 22:54:01.394: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-96e1f54a-084e-11e6-a789-42010af00007' in namespace 'e2e-tests-containers-obyfl' so far | |
Apr 21 22:54:01.394: INFO: Waiting for pod client-containers-96e1f54a-084e-11e6-a789-42010af00007 in namespace 'e2e-tests-containers-obyfl' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.023434742s elapsed) | |
Apr 21 22:54:03.400: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-96e1f54a-084e-11e6-a789-42010af00007' in namespace 'e2e-tests-containers-obyfl' so far | |
Apr 21 22:54:03.400: INFO: Waiting for pod client-containers-96e1f54a-084e-11e6-a789-42010af00007 in namespace 'e2e-tests-containers-obyfl' status to be 'success or failure'(found phase: "Pending", readiness: false) (12.028850073s elapsed) | |
Apr 21 22:54:05.404: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-96e1f54a-084e-11e6-a789-42010af00007' in namespace 'e2e-tests-containers-obyfl' so far | |
Apr 21 22:54:05.404: INFO: Waiting for pod client-containers-96e1f54a-084e-11e6-a789-42010af00007 in namespace 'e2e-tests-containers-obyfl' status to be 'success or failure'(found phase: "Pending", readiness: false) (14.033533715s elapsed) | |
Apr 21 22:54:07.408: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-96e1f54a-084e-11e6-a789-42010af00007' in namespace 'e2e-tests-containers-obyfl' so far | |
Apr 21 22:54:07.408: INFO: Waiting for pod client-containers-96e1f54a-084e-11e6-a789-42010af00007 in namespace 'e2e-tests-containers-obyfl' status to be 'success or failure'(found phase: "Pending", readiness: false) (16.03686264s elapsed) | |
Apr 21 22:54:09.413: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-96e1f54a-084e-11e6-a789-42010af00007' in namespace 'e2e-tests-containers-obyfl' so far | |
Apr 21 22:54:09.413: INFO: Waiting for pod client-containers-96e1f54a-084e-11e6-a789-42010af00007 in namespace 'e2e-tests-containers-obyfl' status to be 'success or failure'(found phase: "Pending", readiness: false) (18.042514336s elapsed) | |
Apr 21 22:54:11.435: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-96e1f54a-084e-11e6-a789-42010af00007' in namespace 'e2e-tests-containers-obyfl' so far | |
Apr 21 22:54:11.435: INFO: Waiting for pod client-containers-96e1f54a-084e-11e6-a789-42010af00007 in namespace 'e2e-tests-containers-obyfl' status to be 'success or failure'(found phase: "Pending", readiness: false) (20.064437038s elapsed) | |
Apr 21 22:54:13.439: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-96e1f54a-084e-11e6-a789-42010af00007' in namespace 'e2e-tests-containers-obyfl' so far | |
Apr 21 22:54:13.439: INFO: Waiting for pod client-containers-96e1f54a-084e-11e6-a789-42010af00007 in namespace 'e2e-tests-containers-obyfl' status to be 'success or failure'(found phase: "Pending", readiness: false) (22.06842381s elapsed) | |
Apr 21 22:54:15.443: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-96e1f54a-084e-11e6-a789-42010af00007' in namespace 'e2e-tests-containers-obyfl' so far | |
Apr 21 22:54:15.443: INFO: Waiting for pod client-containers-96e1f54a-084e-11e6-a789-42010af00007 in namespace 'e2e-tests-containers-obyfl' status to be 'success or failure'(found phase: "Pending", readiness: false) (24.072117609s elapsed) | |
Apr 21 22:54:17.448: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-96e1f54a-084e-11e6-a789-42010af00007' in namespace 'e2e-tests-containers-obyfl' so far | |
Apr 21 22:54:17.448: INFO: Waiting for pod client-containers-96e1f54a-084e-11e6-a789-42010af00007 in namespace 'e2e-tests-containers-obyfl' status to be 'success or failure'(found phase: "Pending", readiness: false) (26.076596906s elapsed) | |
Apr 21 22:54:19.451: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-96e1f54a-084e-11e6-a789-42010af00007' in namespace 'e2e-tests-containers-obyfl' so far | |
Apr 21 22:54:19.451: INFO: Waiting for pod client-containers-96e1f54a-084e-11e6-a789-42010af00007 in namespace 'e2e-tests-containers-obyfl' status to be 'success or failure'(found phase: "Pending", readiness: false) (28.080009606s elapsed) | |
Apr 21 22:54:21.455: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-96e1f54a-084e-11e6-a789-42010af00007' in namespace 'e2e-tests-containers-obyfl' so far | |
Apr 21 22:54:21.455: INFO: Waiting for pod client-containers-96e1f54a-084e-11e6-a789-42010af00007 in namespace 'e2e-tests-containers-obyfl' status to be 'success or failure'(found phase: "Pending", readiness: false) (30.083589235s elapsed) | |
STEP: Saw pod success | |
STEP: Trying to get logs from node e2e-gce-master-1-minion-fyts pod client-containers-96e1f54a-084e-11e6-a789-42010af00007 container test-container: <nil> | |
STEP: Successfully fetched pod logs:[/ep-2] | |
[AfterEach] [k8s.io] Docker Containers | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:54:23.507: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-containers-obyfl" for this suite. | |
• [SLOW TEST:39.594 seconds] | |
[k8s.io] Docker Containers | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should be able to override the image's default commmand (docker entrypoint) [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:62 | |
------------------------------ | |
[BeforeEach] [k8s.io] hostPath | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:54:16.735: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [k8s.io] hostPath | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/host_path.go:46 | |
[It] should support r/w | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/host_path.go:90 | |
STEP: Creating a pod to test hostPath r/w | |
Apr 21 22:54:16.811: INFO: Waiting up to 5m0s for pod pod-host-path-test status to be success or failure | |
Apr 21 22:54:16.815: INFO: No Status.Info for container 'test-container-1' in pod 'pod-host-path-test' yet | |
Apr 21 22:54:16.815: INFO: Waiting for pod pod-host-path-test in namespace 'e2e-tests-hostpath-7ryxa' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.021534ms elapsed) | |
Apr 21 22:54:18.819: INFO: Nil State.Terminated for container 'test-container-1' in pod 'pod-host-path-test' in namespace 'e2e-tests-hostpath-7ryxa' so far | |
Apr 21 22:54:18.819: INFO: Waiting for pod pod-host-path-test in namespace 'e2e-tests-hostpath-7ryxa' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.00831233s elapsed) | |
Apr 21 22:54:20.823: INFO: Nil State.Terminated for container 'test-container-1' in pod 'pod-host-path-test' in namespace 'e2e-tests-hostpath-7ryxa' so far | |
Apr 21 22:54:20.823: INFO: Waiting for pod pod-host-path-test in namespace 'e2e-tests-hostpath-7ryxa' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.012224154s elapsed) | |
Apr 21 22:54:22.827: INFO: Nil State.Terminated for container 'test-container-1' in pod 'pod-host-path-test' in namespace 'e2e-tests-hostpath-7ryxa' so far | |
Apr 21 22:54:22.828: INFO: Waiting for pod pod-host-path-test in namespace 'e2e-tests-hostpath-7ryxa' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.016564229s elapsed) | |
STEP: Saw pod success | |
Apr 21 22:54:24.835: INFO: Waiting up to 5m0s for pod pod-host-path-test status to be success or failure | |
STEP: Saw pod success | |
STEP: Trying to get logs from node e2e-gce-master-1-minion-asea pod pod-host-path-test container test-container-2: <nil> | |
STEP: Successfully fetched pod logs:content of file "/test-volume/test-file": mount-tester new file | |
[AfterEach] [k8s.io] hostPath | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:54:24.957: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-hostpath-7ryxa" for this suite. | |
• [SLOW TEST:13.252 seconds] | |
[k8s.io] hostPath | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should support r/w | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/host_path.go:90 | |
------------------------------ | |
S | |
------------------------------ | |
[BeforeEach] [k8s.io] SSH | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:54:04.024: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [k8s.io] SSH | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ssh.go:35 | |
[It] should SSH to all nodes and run commands | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ssh.go:93 | |
STEP: Getting all nodes' SSH-able IP addresses | |
STEP: SSH'ing to all nodes and running echo "Hello" | |
Apr 21 22:54:04.210: INFO: Got stdout from 8.34.213.250:22: Hello | |
Apr 21 22:54:04.258: INFO: Got stdout from 104.154.114.186:22: Hello | |
Apr 21 22:54:04.330: INFO: Got stdout from 146.148.35.35:22: Hello | |
Apr 21 22:54:04.384: INFO: Got stdout from 8.35.199.247:22: Hello | |
Apr 21 22:54:04.432: INFO: Got stdout from 8.34.214.252:22: Hello | |
Apr 21 22:54:04.477: INFO: Got stdout from 104.197.129.178:22: Hello | |
STEP: SSH'ing to all nodes and running echo "Hello from $(whoami)@$(hostname)" | |
Apr 21 22:54:04.516: INFO: Got stdout from 8.34.213.250:22: Hello from jenkins@e2e-gce-master-1-minion-6ch0 | |
Apr 21 22:54:04.556: INFO: Got stdout from 104.154.114.186:22: Hello from jenkins@e2e-gce-master-1-minion-8eot | |
Apr 21 22:54:04.620: INFO: Got stdout from 146.148.35.35:22: Hello from jenkins@e2e-gce-master-1-minion-asea | |
Apr 21 22:54:04.657: INFO: Got stdout from 8.35.199.247:22: Hello from jenkins@e2e-gce-master-1-minion-fyts | |
Apr 21 22:54:04.710: INFO: Got stdout from 8.34.214.252:22: Hello from jenkins@e2e-gce-master-1-minion-hlmm | |
Apr 21 22:54:04.746: INFO: Got stdout from 104.197.129.178:22: Hello from jenkins@e2e-gce-master-1-minion-x3cg | |
STEP: SSH'ing to all nodes and running echo "foo" | grep "bar" | |
STEP: SSH'ing to all nodes and running echo "Out" && echo "Error" >&2 && exit 7 | |
Apr 21 22:54:04.991: INFO: Got stdout from 8.34.213.250:22: Out | |
Apr 21 22:54:04.991: INFO: Got stderr from 8.34.213.250:22: Error | |
Apr 21 22:54:05.024: INFO: Got stdout from 104.154.114.186:22: Out | |
Apr 21 22:54:05.024: INFO: Got stderr from 104.154.114.186:22: Error | |
Apr 21 22:54:05.071: INFO: Got stdout from 146.148.35.35:22: Out | |
Apr 21 22:54:05.071: INFO: Got stderr from 146.148.35.35:22: Error | |
Apr 21 22:54:05.102: INFO: Got stdout from 8.35.199.247:22: Out | |
Apr 21 22:54:05.102: INFO: Got stderr from 8.35.199.247:22: Error | |
Apr 21 22:54:05.135: INFO: Got stdout from 8.34.214.252:22: Out | |
Apr 21 22:54:05.135: INFO: Got stderr from 8.34.214.252:22: Error | |
Apr 21 22:54:05.165: INFO: Got stdout from 104.197.129.178:22: Out | |
Apr 21 22:54:05.165: INFO: Got stderr from 104.197.129.178:22: Error | |
STEP: SSH'ing to a nonexistent host | |
error dialing jenkins@i.do.not.exist: 'dial tcp: missing port in address i.do.not.exist', retrying | |
error dialing jenkins@i.do.not.exist: 'dial tcp: missing port in address i.do.not.exist', retrying | |
error dialing jenkins@i.do.not.exist: 'dial tcp: missing port in address i.do.not.exist', retrying | |
error dialing jenkins@i.do.not.exist: 'dial tcp: missing port in address i.do.not.exist', retrying | |
error dialing jenkins@i.do.not.exist: 'dial tcp: missing port in address i.do.not.exist', retrying | |
[AfterEach] [k8s.io] SSH | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:54:25.166: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-ssh-ii9rg" for this suite. | |
• [SLOW TEST:26.166 seconds] | |
[k8s.io] SSH | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should SSH to all nodes and run commands | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ssh.go:93 | |
------------------------------ | |
SSSS | |
------------------------------ | |
[BeforeEach] [k8s.io] Secrets | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:54:21.258: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should be consumable from pods in volume [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/secrets.go:100 | |
STEP: Creating secret with name secret-test-a8c1ed91-084e-11e6-bcb9-42010af00007 | |
STEP: Creating a pod to test consume secrets | |
Apr 21 22:54:21.339: INFO: Waiting up to 5m0s for pod pod-secrets-a8c543cd-084e-11e6-bcb9-42010af00007 status to be success or failure | |
Apr 21 22:54:21.344: INFO: No Status.Info for container 'secret-volume-test' in pod 'pod-secrets-a8c543cd-084e-11e6-bcb9-42010af00007' yet | |
Apr 21 22:54:21.344: INFO: Waiting for pod pod-secrets-a8c543cd-084e-11e6-bcb9-42010af00007 in namespace 'e2e-tests-secrets-v2m3y' status to be 'success or failure'(found phase: "Pending", readiness: false) (5.211306ms elapsed) | |
Apr 21 22:54:23.348: INFO: Nil State.Terminated for container 'secret-volume-test' in pod 'pod-secrets-a8c543cd-084e-11e6-bcb9-42010af00007' in namespace 'e2e-tests-secrets-v2m3y' so far | |
Apr 21 22:54:23.348: INFO: Waiting for pod pod-secrets-a8c543cd-084e-11e6-bcb9-42010af00007 in namespace 'e2e-tests-secrets-v2m3y' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.008379931s elapsed) | |
STEP: Saw pod success | |
STEP: Trying to get logs from node e2e-gce-master-1-minion-8eot pod pod-secrets-a8c543cd-084e-11e6-bcb9-42010af00007 container secret-volume-test: <nil> | |
STEP: Successfully fetched pod logs:mode of file "/etc/secret-volume/data-1": -r--r--r-- | |
content of file "/etc/secret-volume/data-1": value-1 | |
STEP: Cleaning up the secret | |
[AfterEach] [k8s.io] Secrets | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:54:25.518: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-secrets-v2m3y" for this suite. | |
• [SLOW TEST:14.281 seconds] | |
[k8s.io] Secrets | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should be consumable from pods in volume [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/secrets.go:100 | |
------------------------------ | |
[BeforeEach] [k8s.io] EmptyDir volumes | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:54:09.856: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should support (non-root,0666,default) [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:109 | |
STEP: Creating a pod to test emptydir 0666 on node default medium | |
Apr 21 22:54:09.933: INFO: Waiting up to 5m0s for pod pod-a1f5d083-084e-11e6-bd26-42010af00007 status to be success or failure | |
Apr 21 22:54:09.936: INFO: No Status.Info for container 'test-container' in pod 'pod-a1f5d083-084e-11e6-bd26-42010af00007' yet | |
Apr 21 22:54:09.936: INFO: Waiting for pod pod-a1f5d083-084e-11e6-bd26-42010af00007 in namespace 'e2e-tests-emptydir-7at7e' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.393634ms elapsed) | |
Apr 21 22:54:11.939: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-a1f5d083-084e-11e6-bd26-42010af00007' in namespace 'e2e-tests-emptydir-7at7e' so far | |
Apr 21 22:54:11.939: INFO: Waiting for pod pod-a1f5d083-084e-11e6-bd26-42010af00007 in namespace 'e2e-tests-emptydir-7at7e' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.005970848s elapsed) | |
Apr 21 22:54:13.943: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-a1f5d083-084e-11e6-bd26-42010af00007' in namespace 'e2e-tests-emptydir-7at7e' so far | |
Apr 21 22:54:13.943: INFO: Waiting for pod pod-a1f5d083-084e-11e6-bd26-42010af00007 in namespace 'e2e-tests-emptydir-7at7e' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.009616343s elapsed) | |
Apr 21 22:54:15.946: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-a1f5d083-084e-11e6-bd26-42010af00007' in namespace 'e2e-tests-emptydir-7at7e' so far | |
Apr 21 22:54:15.947: INFO: Waiting for pod pod-a1f5d083-084e-11e6-bd26-42010af00007 in namespace 'e2e-tests-emptydir-7at7e' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.013210988s elapsed) | |
Apr 21 22:54:17.950: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-a1f5d083-084e-11e6-bd26-42010af00007' in namespace 'e2e-tests-emptydir-7at7e' so far | |
Apr 21 22:54:17.950: INFO: Waiting for pod pod-a1f5d083-084e-11e6-bd26-42010af00007 in namespace 'e2e-tests-emptydir-7at7e' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.016780531s elapsed) | |
Apr 21 22:54:19.954: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-a1f5d083-084e-11e6-bd26-42010af00007' in namespace 'e2e-tests-emptydir-7at7e' so far | |
Apr 21 22:54:19.954: INFO: Waiting for pod pod-a1f5d083-084e-11e6-bd26-42010af00007 in namespace 'e2e-tests-emptydir-7at7e' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.021045823s elapsed) | |
Apr 21 22:54:21.958: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-a1f5d083-084e-11e6-bd26-42010af00007' in namespace 'e2e-tests-emptydir-7at7e' so far | |
Apr 21 22:54:21.958: INFO: Waiting for pod pod-a1f5d083-084e-11e6-bd26-42010af00007 in namespace 'e2e-tests-emptydir-7at7e' status to be 'success or failure'(found phase: "Pending", readiness: false) (12.024704625s elapsed) | |
Apr 21 22:54:23.969: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-a1f5d083-084e-11e6-bd26-42010af00007' in namespace 'e2e-tests-emptydir-7at7e' so far | |
Apr 21 22:54:23.969: INFO: Waiting for pod pod-a1f5d083-084e-11e6-bd26-42010af00007 in namespace 'e2e-tests-emptydir-7at7e' status to be 'success or failure'(found phase: "Pending", readiness: false) (14.036018382s elapsed) | |
STEP: Saw pod success | |
STEP: Trying to get logs from node e2e-gce-master-1-minion-6ch0 pod pod-a1f5d083-084e-11e6-bd26-42010af00007 container test-container: <nil> | |
STEP: Successfully fetched pod logs:mount type of "/test-volume": 61267 | |
content of file "/test-volume/test-file": mount-tester new file | |
perms of file "/test-volume/test-file": -rw-rw-rw- | |
[AfterEach] [k8s.io] EmptyDir volumes | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:54:26.000: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-emptydir-7at7e" for this suite. | |
• [SLOW TEST:26.174 seconds] | |
[k8s.io] EmptyDir volumes | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should support (non-root,0666,default) [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:109 | |
------------------------------ | |
[BeforeEach] [k8s.io] EmptyDir volumes | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:54:13.864: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should support (non-root,0644,tmpfs) [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:77 | |
STEP: Creating a pod to test emptydir 0644 on tmpfs | |
Apr 21 22:54:13.957: INFO: Waiting up to 5m0s for pod pod-a45bf0f7-084e-11e6-aee8-42010af00007 status to be success or failure | |
Apr 21 22:54:13.962: INFO: No Status.Info for container 'test-container' in pod 'pod-a45bf0f7-084e-11e6-aee8-42010af00007' yet | |
Apr 21 22:54:13.962: INFO: Waiting for pod pod-a45bf0f7-084e-11e6-aee8-42010af00007 in namespace 'e2e-tests-emptydir-kdsin' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.553649ms elapsed) | |
Apr 21 22:54:15.966: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-a45bf0f7-084e-11e6-aee8-42010af00007' in namespace 'e2e-tests-emptydir-kdsin' so far | |
Apr 21 22:54:15.966: INFO: Waiting for pod pod-a45bf0f7-084e-11e6-aee8-42010af00007 in namespace 'e2e-tests-emptydir-kdsin' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.009204204s elapsed) | |
Apr 21 22:54:17.970: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-a45bf0f7-084e-11e6-aee8-42010af00007' in namespace 'e2e-tests-emptydir-kdsin' so far | |
Apr 21 22:54:17.970: INFO: Waiting for pod pod-a45bf0f7-084e-11e6-aee8-42010af00007 in namespace 'e2e-tests-emptydir-kdsin' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.012373015s elapsed) | |
Apr 21 22:54:19.974: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-a45bf0f7-084e-11e6-aee8-42010af00007' in namespace 'e2e-tests-emptydir-kdsin' so far | |
Apr 21 22:54:19.974: INFO: Waiting for pod pod-a45bf0f7-084e-11e6-aee8-42010af00007 in namespace 'e2e-tests-emptydir-kdsin' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.016689605s elapsed) | |
Apr 21 22:54:21.978: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-a45bf0f7-084e-11e6-aee8-42010af00007' in namespace 'e2e-tests-emptydir-kdsin' so far | |
Apr 21 22:54:21.978: INFO: Waiting for pod pod-a45bf0f7-084e-11e6-aee8-42010af00007 in namespace 'e2e-tests-emptydir-kdsin' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.020351095s elapsed) | |
Apr 21 22:54:23.982: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-a45bf0f7-084e-11e6-aee8-42010af00007' in namespace 'e2e-tests-emptydir-kdsin' so far | |
Apr 21 22:54:23.982: INFO: Waiting for pod pod-a45bf0f7-084e-11e6-aee8-42010af00007 in namespace 'e2e-tests-emptydir-kdsin' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.024498673s elapsed) | |
STEP: Saw pod success | |
STEP: Trying to get logs from node e2e-gce-master-1-minion-asea pod pod-a45bf0f7-084e-11e6-aee8-42010af00007 container test-container: <nil> | |
STEP: Successfully fetched pod logs:mount type of "/test-volume": tmpfs | |
content of file "/test-volume/test-file": mount-tester new file | |
perms of file "/test-volume/test-file": -rw-r--r-- | |
[AfterEach] [k8s.io] EmptyDir volumes | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:54:26.108: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-emptydir-kdsin" for this suite. | |
• [SLOW TEST:22.292 seconds] | |
[k8s.io] EmptyDir volumes | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should support (non-root,0644,tmpfs) [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:77 | |
------------------------------ | |
S | |
------------------------------ | |
[BeforeEach] [k8s.io] Service endpoints latency | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:53:48.932: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should not be very high [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_latency.go:115 | |
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-d06nw | |
Apr 21 22:53:49.434: INFO: Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-d06nw, replica count: 1 | |
Apr 21 22:53:50.434: INFO: svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Apr 21 22:53:51.435: INFO: svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Apr 21 22:53:52.435: INFO: svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Apr 21 22:53:53.435: INFO: svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Apr 21 22:53:54.436: INFO: svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Apr 21 22:53:55.436: INFO: svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Apr 21 22:53:56.437: INFO: svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Apr 21 22:53:57.437: INFO: svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Apr 21 22:53:58.438: INFO: svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Apr 21 22:53:58.563: INFO: Created: latency-svc-ehrvb | |
Apr 21 22:53:58.572: INFO: Got endpoints: latency-svc-ehrvb [33.404093ms] | |
Apr 21 22:53:58.657: INFO: Created: latency-svc-0a1hj | |
Apr 21 22:53:58.671: INFO: Created: latency-svc-vvaoc | |
Apr 21 22:53:58.672: INFO: Got endpoints: latency-svc-0a1hj [83.789937ms] | |
Apr 21 22:53:58.683: INFO: Got endpoints: latency-svc-vvaoc [94.080678ms] | |
Apr 21 22:53:58.684: INFO: Created: latency-svc-0jfjt | |
Apr 21 22:53:58.695: INFO: Created: latency-svc-ywsto | |
Apr 21 22:53:58.702: INFO: Created: latency-svc-vdnda | |
Apr 21 22:53:58.703: INFO: Got endpoints: latency-svc-0jfjt [114.286049ms] | |
Apr 21 22:53:58.722: INFO: Created: latency-svc-3zxby | |
Apr 21 22:53:58.733: INFO: Got endpoints: latency-svc-ywsto [144.385772ms] | |
Apr 21 22:53:58.750: INFO: Got endpoints: latency-svc-vdnda [160.985758ms] | |
Apr 21 22:53:58.750: INFO: Got endpoints: latency-svc-3zxby [161.839794ms] | |
Apr 21 22:53:58.773: INFO: Created: latency-svc-ub3fq | |
Apr 21 22:53:58.792: INFO: Created: latency-svc-h6gu3 | |
Apr 21 22:53:58.810: INFO: Created: latency-svc-mifz4 | |
Apr 21 22:53:58.829: INFO: Got endpoints: latency-svc-ub3fq [240.810504ms] | |
Apr 21 22:53:58.837: INFO: Created: latency-svc-sxp8f | |
Apr 21 22:53:58.845: INFO: Created: latency-svc-fjdpf | |
Apr 21 22:53:58.868: INFO: Created: latency-svc-7qcpl | |
Apr 21 22:53:58.881: INFO: Created: latency-svc-c5hfu | |
Apr 21 22:53:58.886: INFO: Got endpoints: latency-svc-h6gu3 [297.173109ms] | |
Apr 21 22:53:58.887: INFO: Got endpoints: latency-svc-mifz4 [298.657026ms] | |
Apr 21 22:53:58.926: INFO: Created: latency-svc-1vfzp | |
Apr 21 22:53:58.934: INFO: Got endpoints: latency-svc-fjdpf [345.330475ms] | |
Apr 21 22:53:58.943: INFO: Got endpoints: latency-svc-sxp8f [354.297042ms] | |
Apr 21 22:53:58.963: INFO: Got endpoints: latency-svc-7qcpl [373.962372ms] | |
Apr 21 22:53:58.970: INFO: Got endpoints: latency-svc-c5hfu [380.728362ms] | |
Apr 21 22:53:58.982: INFO: Got endpoints: latency-svc-1vfzp [393.204912ms] | |
Apr 21 22:53:59.005: INFO: Created: latency-svc-ceiya | |
Apr 21 22:53:59.143: INFO: Created: latency-svc-c5cz8 | |
Apr 21 22:53:59.149: INFO: Created: latency-svc-sc886 | |
Apr 21 22:53:59.157: INFO: Created: latency-svc-n9pkt | |
Apr 21 22:53:59.173: INFO: Created: latency-svc-nqwnp | |
Apr 21 22:53:59.207: INFO: Created: latency-svc-jqnql | |
Apr 21 22:53:59.224: INFO: Created: latency-svc-j0spr | |
Apr 21 22:53:59.239: INFO: Created: latency-svc-v35ce | |
Apr 21 22:53:59.253: INFO: Created: latency-svc-zx3jb | |
Apr 21 22:53:59.264: INFO: Created: latency-svc-hqwwp | |
Apr 21 22:53:59.271: INFO: Created: latency-svc-km8ci | |
Apr 21 22:53:59.297: INFO: Created: latency-svc-0jaz7 | |
Apr 21 22:53:59.304: INFO: Created: latency-svc-qni8u | |
Apr 21 22:53:59.309: INFO: Created: latency-svc-ih5hz | |
Apr 21 22:53:59.330: INFO: Created: latency-svc-e9a57 | |
Apr 21 22:53:59.404: INFO: Got endpoints: latency-svc-ceiya [815.696171ms] | |
Apr 21 22:53:59.430: INFO: Created: latency-svc-odb60 | |
Apr 21 22:53:59.711: INFO: Got endpoints: latency-svc-c5cz8 [694.650662ms] | |
Apr 21 22:53:59.745: INFO: Created: latency-svc-28flo | |
Apr 21 22:53:59.755: INFO: Got endpoints: latency-svc-sc886 [753.174354ms] | |
Apr 21 22:53:59.778: INFO: Created: latency-svc-8qvqh | |
Apr 21 22:53:59.804: INFO: Got endpoints: latency-svc-n9pkt [762.261926ms] | |
Apr 21 22:53:59.831: INFO: Created: latency-svc-z1ogw | |
Apr 21 22:53:59.854: INFO: Got endpoints: latency-svc-nqwnp [790.46621ms] | |
Apr 21 22:53:59.878: INFO: Created: latency-svc-sguqa | |
Apr 21 22:53:59.904: INFO: Got endpoints: latency-svc-jqnql [810.285512ms] | |
Apr 21 22:53:59.936: INFO: Created: latency-svc-7mpqw | |
Apr 21 22:54:00.205: INFO: Got endpoints: latency-svc-j0spr [1.10224359s] | |
Apr 21 22:54:00.234: INFO: Created: latency-svc-v660x | |
Apr 21 22:54:00.254: INFO: Got endpoints: latency-svc-v35ce [1.142570903s] | |
Apr 21 22:54:00.321: INFO: Got endpoints: latency-svc-zx3jb [1.200134882s] | |
Apr 21 22:54:00.341: INFO: Created: latency-svc-2apma | |
Apr 21 22:54:00.364: INFO: Created: latency-svc-klfsq | |
Apr 21 22:54:00.381: INFO: Got endpoints: latency-svc-hqwwp [1.253876928s] | |
Apr 21 22:54:00.406: INFO: Got endpoints: latency-svc-km8ci [1.274758574s] | |
Apr 21 22:54:00.456: INFO: Created: latency-svc-7d1z8 | |
Apr 21 22:54:00.472: INFO: Created: latency-svc-mr36j | |
Apr 21 22:54:00.705: INFO: Got endpoints: latency-svc-0jaz7 [1.528118383s] | |
Apr 21 22:54:00.743: INFO: Created: latency-svc-4h1md | |
Apr 21 22:54:00.804: INFO: Got endpoints: latency-svc-qni8u [1.621103111s] | |
Apr 21 22:54:00.832: INFO: Created: latency-svc-o91sd | |
Apr 21 22:54:00.856: INFO: Got endpoints: latency-svc-ih5hz [1.666715114s] | |
Apr 21 22:54:00.883: INFO: Created: latency-svc-j2y3j | |
Apr 21 22:54:00.904: INFO: Got endpoints: latency-svc-e9a57 [1.707956384s] | |
Apr 21 22:54:00.934: INFO: Created: latency-svc-kishq | |
Apr 21 22:54:01.204: INFO: Got endpoints: latency-svc-odb60 [1.783847365s] | |
Apr 21 22:54:01.234: INFO: Created: latency-svc-g7nbv | |
Apr 21 22:54:01.539: INFO: Got endpoints: latency-svc-28flo [1.808582457s] | |
Apr 21 22:54:01.582: INFO: Created: latency-svc-j6mjd | |
Apr 21 22:54:01.605: INFO: Got endpoints: latency-svc-8qvqh [1.835695744s] | |
Apr 21 22:54:01.656: INFO: Created: latency-svc-erg6z | |
Apr 21 22:54:01.705: INFO: Got endpoints: latency-svc-z1ogw [1.884396261s] | |
Apr 21 22:54:01.735: INFO: Created: latency-svc-fnyoz | |
Apr 21 22:54:01.905: INFO: Got endpoints: latency-svc-sguqa [2.036261258s] | |
Apr 21 22:54:01.935: INFO: Created: latency-svc-bot11 | |
Apr 21 22:54:02.056: INFO: Got endpoints: latency-svc-7mpqw [2.131612982s] | |
Apr 21 22:54:02.081: INFO: Created: latency-svc-6pq86 | |
Apr 21 22:54:02.208: INFO: Got endpoints: latency-svc-v660x [1.984390188s] | |
Apr 21 22:54:02.242: INFO: Created: latency-svc-sldmo | |
Apr 21 22:54:02.404: INFO: Got endpoints: latency-svc-2apma [2.082065675s] | |
Apr 21 22:54:02.436: INFO: Created: latency-svc-lfrog | |
Apr 21 22:54:02.505: INFO: Got endpoints: latency-svc-klfsq [2.153628251s] | |
Apr 21 22:54:02.534: INFO: Created: latency-svc-q5b7h | |
Apr 21 22:54:02.704: INFO: Got endpoints: latency-svc-7d1z8 [2.277469159s] | |
Apr 21 22:54:02.797: INFO: Created: latency-svc-mkujv | |
Apr 21 22:54:02.804: INFO: Got endpoints: latency-svc-mr36j [2.362301535s] | |
Apr 21 22:54:02.844: INFO: Created: latency-svc-9pezz | |
Apr 21 22:54:03.006: INFO: Got endpoints: latency-svc-4h1md [2.271977606s] | |
Apr 21 22:54:03.044: INFO: Created: latency-svc-yj4fj | |
Apr 21 22:54:03.156: INFO: Got endpoints: latency-svc-o91sd [2.333270964s] | |
Apr 21 22:54:03.189: INFO: Created: latency-svc-sehls | |
Apr 21 22:54:03.306: INFO: Got endpoints: latency-svc-j2y3j [2.431075401s] | |
Apr 21 22:54:03.337: INFO: Created: latency-svc-3r9ni | |
Apr 21 22:54:03.458: INFO: Got endpoints: latency-svc-kishq [2.535379823s] | |
Apr 21 22:54:03.487: INFO: Created: latency-svc-kchvh | |
Apr 21 22:54:03.605: INFO: Got endpoints: latency-svc-g7nbv [2.380301242s] | |
Apr 21 22:54:03.637: INFO: Created: latency-svc-49rfi | |
Apr 21 22:54:03.754: INFO: Got endpoints: latency-svc-j6mjd [2.186139317s] | |
Apr 21 22:54:03.780: INFO: Created: latency-svc-tsa0c | |
Apr 21 22:54:03.904: INFO: Got endpoints: latency-svc-erg6z [2.26095533s] | |
Apr 21 22:54:03.929: INFO: Created: latency-svc-hfyw3 | |
Apr 21 22:54:04.055: INFO: Got endpoints: latency-svc-fnyoz [2.329864558s] | |
Apr 21 22:54:04.140: INFO: Created: latency-svc-vjt4g | |
Apr 21 22:54:04.205: INFO: Got endpoints: latency-svc-bot11 [2.278932787s] | |
Apr 21 22:54:04.231: INFO: Created: latency-svc-46r0k | |
Apr 21 22:54:04.355: INFO: Got endpoints: latency-svc-6pq86 [2.282599696s] | |
Apr 21 22:54:04.380: INFO: Created: latency-svc-ssval | |
Apr 21 22:54:04.505: INFO: Got endpoints: latency-svc-sldmo [2.279782342s] | |
Apr 21 22:54:04.537: INFO: Created: latency-svc-ogxmx | |
Apr 21 22:54:04.655: INFO: Got endpoints: latency-svc-lfrog [2.233563148s] | |
Apr 21 22:54:04.709: INFO: Created: latency-svc-y2gwy | |
Apr 21 22:54:04.808: INFO: Got endpoints: latency-svc-q5b7h [2.286632911s] | |
Apr 21 22:54:04.844: INFO: Created: latency-svc-fvnu6 | |
Apr 21 22:54:04.958: INFO: Got endpoints: latency-svc-mkujv [2.178257712s] | |
Apr 21 22:54:04.983: INFO: Created: latency-svc-tocai | |
Apr 21 22:54:05.105: INFO: Got endpoints: latency-svc-9pezz [2.268729407s] | |
Apr 21 22:54:05.131: INFO: Created: latency-svc-2u7v5 | |
Apr 21 22:54:05.255: INFO: Got endpoints: latency-svc-yj4fj [2.230061292s] | |
Apr 21 22:54:05.283: INFO: Created: latency-svc-42sle | |
Apr 21 22:54:05.406: INFO: Got endpoints: latency-svc-sehls [2.231538634s] | |
Apr 21 22:54:05.442: INFO: Created: latency-svc-ilmo3 | |
Apr 21 22:54:05.605: INFO: Got endpoints: latency-svc-3r9ni [2.279313069s] | |
Apr 21 22:54:05.629: INFO: Created: latency-svc-hla03 | |
Apr 21 22:54:05.755: INFO: Got endpoints: latency-svc-kchvh [2.277972255s] | |
Apr 21 22:54:05.783: INFO: Created: latency-svc-5kczo | |
Apr 21 22:54:05.904: INFO: Got endpoints: latency-svc-49rfi [2.277718023s] | |
Apr 21 22:54:05.933: INFO: Created: latency-svc-9uyls | |
Apr 21 22:54:06.055: INFO: Got endpoints: latency-svc-tsa0c [2.283621341s] | |
Apr 21 22:54:06.080: INFO: Created: latency-svc-98vlj | |
Apr 21 22:54:06.211: INFO: Got endpoints: latency-svc-hfyw3 [2.2908959s] | |
Apr 21 22:54:06.246: INFO: Created: latency-svc-ebbhi | |
Apr 21 22:54:06.354: INFO: Got endpoints: latency-svc-vjt4g [2.22468467s] | |
Apr 21 22:54:06.374: INFO: Created: latency-svc-d1dbz | |
Apr 21 22:54:06.555: INFO: Got endpoints: latency-svc-ogxmx [2.029450664s] | |
Apr 21 22:54:06.583: INFO: Created: latency-svc-tms1a | |
Apr 21 22:54:06.706: INFO: Got endpoints: latency-svc-46r0k [2.483773455s] | |
Apr 21 22:54:06.738: INFO: Created: latency-svc-aluzc | |
Apr 21 22:54:06.755: INFO: Got endpoints: latency-svc-ssval [2.382429764s] | |
Apr 21 22:54:06.784: INFO: Created: latency-svc-4zwdj | |
Apr 21 22:54:07.057: INFO: Got endpoints: latency-svc-tocai [2.081825916s] | |
Apr 21 22:54:07.081: INFO: Created: latency-svc-5c9s1 | |
Apr 21 22:54:07.205: INFO: Got endpoints: latency-svc-y2gwy [2.516552798s] | |
Apr 21 22:54:07.232: INFO: Created: latency-svc-avh8v | |
Apr 21 22:54:07.256: INFO: Got endpoints: latency-svc-fvnu6 [2.428216563s] | |
Apr 21 22:54:07.281: INFO: Created: latency-svc-ppwpu | |
Apr 21 22:54:07.355: INFO: Got endpoints: latency-svc-42sle [2.083666802s] | |
Apr 21 22:54:07.386: INFO: Created: latency-svc-xr481 | |
Apr 21 22:54:07.656: INFO: Got endpoints: latency-svc-2u7v5 [2.535132869s] | |
Apr 21 22:54:07.681: INFO: Created: latency-svc-6aadm | |
Apr 21 22:54:07.805: INFO: Got endpoints: latency-svc-ilmo3 [2.373987773s] | |
Apr 21 22:54:07.829: INFO: Created: latency-svc-tyn0x | |
Apr 21 22:54:08.556: INFO: Got endpoints: latency-svc-hla03 [2.937930674s] | |
Apr 21 22:54:08.636: INFO: Created: latency-svc-5fosx | |
Apr 21 22:54:08.655: INFO: Got endpoints: latency-svc-5kczo [2.880553678s] | |
Apr 21 22:54:08.678: INFO: Created: latency-svc-eom4k | |
Apr 21 22:54:08.755: INFO: Got endpoints: latency-svc-9uyls [2.829984205s] | |
Apr 21 22:54:08.802: INFO: Created: latency-svc-xwhns | |
Apr 21 22:54:08.955: INFO: Got endpoints: latency-svc-98vlj [2.883883656s] | |
Apr 21 22:54:08.980: INFO: Created: latency-svc-932mr | |
Apr 21 22:54:09.105: INFO: Got endpoints: latency-svc-ebbhi [2.867913021s] | |
Apr 21 22:54:09.155: INFO: Created: latency-svc-5tupb | |
Apr 21 22:54:09.255: INFO: Got endpoints: latency-svc-d1dbz [2.888587983s] | |
Apr 21 22:54:09.281: INFO: Created: latency-svc-ung1l | |
Apr 21 22:54:09.405: INFO: Got endpoints: latency-svc-tms1a [2.829794776s] | |
Apr 21 22:54:09.438: INFO: Created: latency-svc-2eazt | |
Apr 21 22:54:09.555: INFO: Got endpoints: latency-svc-aluzc [2.827023322s] | |
Apr 21 22:54:09.578: INFO: Created: latency-svc-46p31 | |
Apr 21 22:54:09.705: INFO: Got endpoints: latency-svc-4zwdj [2.934732938s] | |
Apr 21 22:54:09.728: INFO: Created: latency-svc-lqwdj | |
Apr 21 22:54:09.854: INFO: Got endpoints: latency-svc-5c9s1 [2.781624519s] | |
Apr 21 22:54:09.883: INFO: Created: latency-svc-t488p | |
Apr 21 22:54:10.066: INFO: Got endpoints: latency-svc-avh8v [2.84714527s] | |
Apr 21 22:54:10.094: INFO: Created: latency-svc-o5nef | |
Apr 21 22:54:10.155: INFO: Got endpoints: latency-svc-ppwpu [2.882985268s] | |
Apr 21 22:54:10.182: INFO: Created: latency-svc-f0k62 | |
Apr 21 22:54:10.304: INFO: Got endpoints: latency-svc-xr481 [2.929503359s] | |
Apr 21 22:54:10.328: INFO: Created: latency-svc-j1cjg | |
Apr 21 22:54:10.454: INFO: Got endpoints: latency-svc-6aadm [2.781526334s] | |
Apr 21 22:54:10.479: INFO: Created: latency-svc-4wc64 | |
Apr 21 22:54:10.605: INFO: Got endpoints: latency-svc-tyn0x [2.784283512s] | |
Apr 21 22:54:10.628: INFO: Created: latency-svc-2r07k | |
Apr 21 22:54:10.767: INFO: Got endpoints: latency-svc-5fosx [2.142567481s] | |
Apr 21 22:54:10.850: INFO: Created: latency-svc-4t0l0 | |
Apr 21 22:54:10.904: INFO: Got endpoints: latency-svc-eom4k [2.234536565s] | |
Apr 21 22:54:10.935: INFO: Created: latency-svc-epjvt | |
Apr 21 22:54:11.055: INFO: Got endpoints: latency-svc-xwhns [2.270101765s] | |
Apr 21 22:54:11.086: INFO: Created: latency-svc-cq9jx | |
Apr 21 22:54:11.216: INFO: Got endpoints: latency-svc-932mr [2.245752879s] | |
Apr 21 22:54:11.243: INFO: Created: latency-svc-srqk7 | |
Apr 21 22:54:11.355: INFO: Got endpoints: latency-svc-5tupb [2.230818191s] | |
Apr 21 22:54:11.387: INFO: Created: latency-svc-wwr4f | |
Apr 21 22:54:11.505: INFO: Got endpoints: latency-svc-ung1l [2.234147299s] | |
Apr 21 22:54:11.531: INFO: Created: latency-svc-g4dw2 | |
Apr 21 22:54:11.654: INFO: Got endpoints: latency-svc-2eazt [2.228983628s] | |
Apr 21 22:54:11.681: INFO: Created: latency-svc-78no5 | |
Apr 21 22:54:11.805: INFO: Got endpoints: latency-svc-46p31 [2.23570535s] | |
Apr 21 22:54:11.832: INFO: Created: latency-svc-chwcy | |
Apr 21 22:54:11.954: INFO: Got endpoints: latency-svc-lqwdj [2.23467812s] | |
Apr 21 22:54:11.978: INFO: Created: latency-svc-imppv | |
Apr 21 22:54:12.109: INFO: Got endpoints: latency-svc-t488p [2.236376734s] | |
Apr 21 22:54:12.151: INFO: Created: latency-svc-hu930 | |
Apr 21 22:54:12.254: INFO: Got endpoints: latency-svc-o5nef [2.167208058s] | |
Apr 21 22:54:12.278: INFO: Created: latency-svc-dq8ob | |
Apr 21 22:54:12.405: INFO: Got endpoints: latency-svc-f0k62 [2.22988323s] | |
Apr 21 22:54:12.429: INFO: Created: latency-svc-g2gu4 | |
Apr 21 22:54:12.555: INFO: Got endpoints: latency-svc-j1cjg [2.234989794s] | |
Apr 21 22:54:12.580: INFO: Created: latency-svc-rnmzj | |
Apr 21 22:54:12.711: INFO: Got endpoints: latency-svc-4wc64 [2.241054429s] | |
Apr 21 22:54:12.733: INFO: Created: latency-svc-8cxl0 | |
Apr 21 22:54:12.854: INFO: Got endpoints: latency-svc-2r07k [2.235187954s] | |
Apr 21 22:54:12.879: INFO: Created: latency-svc-3jhpg | |
Apr 21 22:54:13.004: INFO: Got endpoints: latency-svc-4t0l0 [2.176612142s] | |
Apr 21 22:54:13.072: INFO: Created: latency-svc-rg16m | |
Apr 21 22:54:13.156: INFO: Got endpoints: latency-svc-epjvt [2.231220073s] | |
Apr 21 22:54:13.182: INFO: Created: latency-svc-rxsmw | |
Apr 21 22:54:13.304: INFO: Got endpoints: latency-svc-cq9jx [2.231630196s] | |
Apr 21 22:54:13.364: INFO: Created: latency-svc-1van7 | |
Apr 21 22:54:13.455: INFO: Got endpoints: latency-svc-srqk7 [2.219179239s] | |
Apr 21 22:54:13.477: INFO: Created: latency-svc-3d6p8 | |
Apr 21 22:54:13.605: INFO: Got endpoints: latency-svc-wwr4f [2.227759855s] | |
Apr 21 22:54:13.629: INFO: Created: latency-svc-uhlck | |
Apr 21 22:54:13.755: INFO: Got endpoints: latency-svc-g4dw2 [2.232235242s] | |
Apr 21 22:54:13.783: INFO: Created: latency-svc-9x0k2 | |
Apr 21 22:54:13.918: INFO: Got endpoints: latency-svc-78no5 [2.246659971s] | |
Apr 21 22:54:13.949: INFO: Created: latency-svc-vx7t8 | |
Apr 21 22:54:14.105: INFO: Got endpoints: latency-svc-chwcy [2.282863883s] | |
Apr 21 22:54:14.133: INFO: Created: latency-svc-xxmpb | |
Apr 21 22:54:14.305: INFO: Got endpoints: latency-svc-imppv [2.334986701s] | |
Apr 21 22:54:14.329: INFO: Created: latency-svc-c6psq | |
Apr 21 22:54:14.456: INFO: Got endpoints: latency-svc-hu930 [2.313910885s] | |
Apr 21 22:54:14.480: INFO: Created: latency-svc-tgq2d | |
Apr 21 22:54:14.605: INFO: Got endpoints: latency-svc-dq8ob [2.334793818s] | |
Apr 21 22:54:14.630: INFO: Created: latency-svc-vgjvx | |
Apr 21 22:54:14.755: INFO: Got endpoints: latency-svc-g2gu4 [2.33517407s] | |
Apr 21 22:54:14.790: INFO: Created: latency-svc-g3zmc | |
Apr 21 22:54:14.908: INFO: Got endpoints: latency-svc-rnmzj [2.335509924s] | |
Apr 21 22:54:15.004: INFO: Created: latency-svc-q0bks | |
Apr 21 22:54:15.055: INFO: Got endpoints: latency-svc-8cxl0 [2.33027924s] | |
Apr 21 22:54:15.081: INFO: Created: latency-svc-u2wp3 | |
Apr 21 22:54:15.204: INFO: Got endpoints: latency-svc-3jhpg [2.333581255s] | |
Apr 21 22:54:15.232: INFO: Created: latency-svc-n8eya | |
Apr 21 22:54:15.355: INFO: Got endpoints: latency-svc-rg16m [2.297843215s] | |
Apr 21 22:54:15.377: INFO: Created: latency-svc-kfeer | |
Apr 21 22:54:15.506: INFO: Got endpoints: latency-svc-rxsmw [2.332521294s] | |
Apr 21 22:54:15.530: INFO: Created: latency-svc-eolus | |
Apr 21 22:54:15.655: INFO: Got endpoints: latency-svc-1van7 [2.331757957s] | |
Apr 21 22:54:15.680: INFO: Created: latency-svc-7bvxb | |
Apr 21 22:54:15.804: INFO: Got endpoints: latency-svc-3d6p8 [2.335480129s] | |
Apr 21 22:54:15.841: INFO: Created: latency-svc-zrhhy | |
Apr 21 22:54:15.954: INFO: Got endpoints: latency-svc-uhlck [2.333963345s] | |
Apr 21 22:54:15.981: INFO: Created: latency-svc-dlpee | |
Apr 21 22:54:16.104: INFO: Got endpoints: latency-svc-9x0k2 [2.329036344s] | |
Apr 21 22:54:16.132: INFO: Created: latency-svc-l0e0o | |
Apr 21 22:54:16.256: INFO: Got endpoints: latency-svc-vx7t8 [2.314688373s] | |
Apr 21 22:54:16.297: INFO: Created: latency-svc-nofi3 | |
Apr 21 22:54:16.404: INFO: Got endpoints: latency-svc-xxmpb [2.281717109s] | |
Apr 21 22:54:16.429: INFO: Created: latency-svc-4bfmi | |
Apr 21 22:54:16.555: INFO: Got endpoints: latency-svc-c6psq [2.234227857s] | |
Apr 21 22:54:16.626: INFO: Created: latency-svc-o0ev2 | |
Apr 21 22:54:16.705: INFO: Got endpoints: latency-svc-tgq2d [2.234671534s] | |
Apr 21 22:54:16.729: INFO: Created: latency-svc-6q9tb | |
Apr 21 22:54:16.855: INFO: Got endpoints: latency-svc-vgjvx [2.231722853s] | |
Apr 21 22:54:16.879: INFO: Created: latency-svc-xo63j | |
Apr 21 22:54:17.005: INFO: Got endpoints: latency-svc-g3zmc [2.224930634s] | |
Apr 21 22:54:17.032: INFO: Created: latency-svc-73kkn | |
Apr 21 22:54:17.155: INFO: Got endpoints: latency-svc-q0bks [2.218868301s] | |
Apr 21 22:54:17.181: INFO: Created: latency-svc-k7n2q | |
Apr 21 22:54:17.305: INFO: Got endpoints: latency-svc-u2wp3 [2.234938945s] | |
Apr 21 22:54:17.334: INFO: Created: latency-svc-t8hcq | |
Apr 21 22:54:17.461: INFO: Got endpoints: latency-svc-n8eya [2.236993613s] | |
Apr 21 22:54:17.519: INFO: Created: latency-svc-vait9 | |
Apr 21 22:54:17.605: INFO: Got endpoints: latency-svc-kfeer [2.234679445s] | |
Apr 21 22:54:17.630: INFO: Created: latency-svc-r26lf | |
Apr 21 22:54:17.755: INFO: Got endpoints: latency-svc-eolus [2.233786497s] | |
Apr 21 22:54:17.786: INFO: Created: latency-svc-h0qnt | |
Apr 21 22:54:17.906: INFO: Got endpoints: latency-svc-7bvxb [2.233727679s] | |
Apr 21 22:54:17.935: INFO: Created: latency-svc-1dp39 | |
Apr 21 22:54:18.055: INFO: Got endpoints: latency-svc-zrhhy [2.23248219s] | |
Apr 21 22:54:18.080: INFO: Created: latency-svc-38so8 | |
Apr 21 22:54:18.248: INFO: Got endpoints: latency-svc-dlpee [2.276091038s] | |
Apr 21 22:54:18.297: INFO: Created: latency-svc-q7ub0 | |
Apr 21 22:54:18.355: INFO: Got endpoints: latency-svc-l0e0o [2.234992223s] | |
Apr 21 22:54:18.391: INFO: Created: latency-svc-ha35f | |
Apr 21 22:54:18.505: INFO: Got endpoints: latency-svc-nofi3 [2.230214278s] | |
Apr 21 22:54:18.530: INFO: Created: latency-svc-j6074 | |
Apr 21 22:54:18.656: INFO: Got endpoints: latency-svc-4bfmi [2.237500492s] | |
Apr 21 22:54:18.679: INFO: Created: latency-svc-67no8 | |
Apr 21 22:54:18.805: INFO: Got endpoints: latency-svc-o0ev2 [2.233995442s] | |
Apr 21 22:54:18.838: INFO: Created: latency-svc-gcwf4 | |
Apr 21 22:54:18.954: INFO: Got endpoints: latency-svc-6q9tb [2.233733912s] | |
Apr 21 22:54:18.979: INFO: Created: latency-svc-yzmif | |
Apr 21 22:54:19.104: INFO: Got endpoints: latency-svc-xo63j [2.233235423s] | |
Apr 21 22:54:19.129: INFO: Created: latency-svc-7qqgb | |
Apr 21 22:54:19.256: INFO: Got endpoints: latency-svc-73kkn [2.234812621s] | |
Apr 21 22:54:19.293: INFO: Created: latency-svc-vc0wr | |
Apr 21 22:54:19.405: INFO: Got endpoints: latency-svc-k7n2q [2.232763163s] | |
Apr 21 22:54:19.436: INFO: Created: latency-svc-ervsb | |
Apr 21 22:54:19.556: INFO: Got endpoints: latency-svc-t8hcq [2.232324833s] | |
Apr 21 22:54:19.590: INFO: Created: latency-svc-clbdr | |
Apr 21 22:54:19.705: INFO: Got endpoints: latency-svc-vait9 [2.197668464s] | |
Apr 21 22:54:19.740: INFO: Created: latency-svc-p98hb | |
Apr 21 22:54:19.872: INFO: Got endpoints: latency-svc-r26lf [2.25131923s] | |
Apr 21 22:54:19.917: INFO: Created: latency-svc-es1ci | |
Apr 21 22:54:20.006: INFO: Got endpoints: latency-svc-h0qnt [2.231728489s] | |
Apr 21 22:54:20.056: INFO: Created: latency-svc-cvq3x | |
Apr 21 22:54:20.155: INFO: Got endpoints: latency-svc-1dp39 [2.233244543s] | |
Apr 21 22:54:20.178: INFO: Created: latency-svc-rhk9y | |
Apr 21 22:54:20.324: INFO: Got endpoints: latency-svc-38so8 [2.251859924s] | |
Apr 21 22:54:20.353: INFO: Created: latency-svc-8er7i | |
Apr 21 22:54:20.456: INFO: Got endpoints: latency-svc-q7ub0 [2.168175238s] | |
Apr 21 22:54:20.488: INFO: Created: latency-svc-g5ybj | |
Apr 21 22:54:20.609: INFO: Got endpoints: latency-svc-ha35f [2.235987058s] | |
Apr 21 22:54:20.633: INFO: Created: latency-svc-9eugb | |
Apr 21 22:54:20.755: INFO: Got endpoints: latency-svc-j6074 [2.232832432s] | |
Apr 21 22:54:20.793: INFO: Created: latency-svc-09alr | |
Apr 21 22:54:20.906: INFO: Got endpoints: latency-svc-67no8 [2.235463243s] | |
Apr 21 22:54:20.934: INFO: Created: latency-svc-uei3f | |
Apr 21 22:54:21.054: INFO: Got endpoints: latency-svc-gcwf4 [2.232310148s] | |
Apr 21 22:54:21.132: INFO: Created: latency-svc-zdh8a | |
Apr 21 22:54:21.205: INFO: Got endpoints: latency-svc-yzmif [2.234349963s] | |
Apr 21 22:54:21.257: INFO: Created: latency-svc-hji47 | |
Apr 21 22:54:21.355: INFO: Got endpoints: latency-svc-7qqgb [2.233950056s] | |
Apr 21 22:54:21.387: INFO: Created: latency-svc-vgcvk | |
Apr 21 22:54:21.505: INFO: Got endpoints: latency-svc-vc0wr [2.228699176s] | |
Apr 21 22:54:21.529: INFO: Created: latency-svc-uedf4 | |
Apr 21 22:54:21.655: INFO: Got endpoints: latency-svc-ervsb [2.228765714s] | |
Apr 21 22:54:21.685: INFO: Created: latency-svc-bbyuc | |
Apr 21 22:54:21.806: INFO: Got endpoints: latency-svc-clbdr [2.229117001s] | |
Apr 21 22:54:21.833: INFO: Created: latency-svc-ezxc4 | |
Apr 21 22:54:21.954: INFO: Got endpoints: latency-svc-p98hb [2.227224934s] | |
Apr 21 22:54:21.978: INFO: Created: latency-svc-jd6es | |
Apr 21 22:54:22.105: INFO: Got endpoints: latency-svc-es1ci [2.199497303s] | |
Apr 21 22:54:22.136: INFO: Created: latency-svc-fnawe | |
Apr 21 22:54:22.259: INFO: Got endpoints: latency-svc-cvq3x [2.221566892s] | |
Apr 21 22:54:22.283: INFO: Created: latency-svc-mvgr2 | |
Apr 21 22:54:22.405: INFO: Got endpoints: latency-svc-rhk9y [2.235178829s] | |
Apr 21 22:54:22.432: INFO: Created: latency-svc-kutzb | |
Apr 21 22:54:22.555: INFO: Got endpoints: latency-svc-8er7i [2.211544722s] | |
Apr 21 22:54:22.581: INFO: Created: latency-svc-kf673 | |
Apr 21 22:54:22.705: INFO: Got endpoints: latency-svc-g5ybj [2.231833314s] | |
Apr 21 22:54:22.732: INFO: Created: latency-svc-3cz27 | |
Apr 21 22:54:22.854: INFO: Got endpoints: latency-svc-9eugb [2.229908999s] | |
Apr 21 22:54:22.920: INFO: Created: latency-svc-gpfys | |
Apr 21 22:54:23.005: INFO: Got endpoints: latency-svc-09alr [2.233871061s] | |
Apr 21 22:54:23.034: INFO: Created: latency-svc-nfu1m | |
Apr 21 22:54:23.155: INFO: Got endpoints: latency-svc-uei3f [2.230014969s] | |
Apr 21 22:54:23.184: INFO: Created: latency-svc-u0a25 | |
Apr 21 22:54:23.305: INFO: Got endpoints: latency-svc-zdh8a [2.18462523s] | |
Apr 21 22:54:23.329: INFO: Created: latency-svc-oz7l5 | |
Apr 21 22:54:23.456: INFO: Got endpoints: latency-svc-hji47 [2.2204606s] | |
Apr 21 22:54:23.489: INFO: Created: latency-svc-x72b4 | |
Apr 21 22:54:23.609: INFO: Got endpoints: latency-svc-vgcvk [2.230384132s] | |
Apr 21 22:54:23.655: INFO: Created: latency-svc-w3l2i | |
Apr 21 22:54:23.755: INFO: Got endpoints: latency-svc-uedf4 [2.235110788s] | |
Apr 21 22:54:23.782: INFO: Created: latency-svc-p732v | |
Apr 21 22:54:23.905: INFO: Got endpoints: latency-svc-bbyuc [2.232127427s] | |
Apr 21 22:54:23.932: INFO: Created: latency-svc-fur6u | |
Apr 21 22:54:24.055: INFO: Got endpoints: latency-svc-ezxc4 [2.232114658s] | |
Apr 21 22:54:24.082: INFO: Created: latency-svc-ttj2f | |
Apr 21 22:54:24.205: INFO: Got endpoints: latency-svc-jd6es [2.23566957s] | |
Apr 21 22:54:24.237: INFO: Created: latency-svc-qe4hm | |
Apr 21 22:54:24.355: INFO: Got endpoints: latency-svc-fnawe [2.235087455s] | |
Apr 21 22:54:24.431: INFO: Created: latency-svc-mo4rw | |
Apr 21 22:54:24.505: INFO: Got endpoints: latency-svc-mvgr2 [2.230024121s] | |
Apr 21 22:54:24.538: INFO: Created: latency-svc-n9rgr | |
Apr 21 22:54:24.654: INFO: Got endpoints: latency-svc-kutzb [2.230931664s] | |
Apr 21 22:54:24.682: INFO: Created: latency-svc-zcrx2 | |
Apr 21 22:54:24.809: INFO: Got endpoints: latency-svc-kf673 [2.237023529s] | |
Apr 21 22:54:24.860: INFO: Created: latency-svc-288dn | |
Apr 21 22:54:24.961: INFO: Got endpoints: latency-svc-3cz27 [2.239065535s] | |
Apr 21 22:54:24.994: INFO: Created: latency-svc-qj3y1 | |
Apr 21 22:54:25.105: INFO: Got endpoints: latency-svc-gpfys [2.234164888s] | |
Apr 21 22:54:25.133: INFO: Created: latency-svc-46ywx | |
Apr 21 22:54:25.255: INFO: Got endpoints: latency-svc-nfu1m [2.229918942s] | |
Apr 21 22:54:25.281: INFO: Created: latency-svc-7kpod | |
Apr 21 22:54:25.406: INFO: Got endpoints: latency-svc-u0a25 [2.236482477s] | |
Apr 21 22:54:25.444: INFO: Created: latency-svc-tc9mo | |
Apr 21 22:54:25.555: INFO: Got endpoints: latency-svc-oz7l5 [2.233755043s] | |
Apr 21 22:54:25.707: INFO: Got endpoints: latency-svc-x72b4 [2.228738375s] | |
Apr 21 22:54:25.856: INFO: Got endpoints: latency-svc-w3l2i [2.211779187s] | |
Apr 21 22:54:26.007: INFO: Got endpoints: latency-svc-p732v [2.233210529s] | |
Apr 21 22:54:26.204: INFO: Got endpoints: latency-svc-fur6u [2.281390001s] | |
Apr 21 22:54:26.405: INFO: Got endpoints: latency-svc-ttj2f [2.331447015s] | |
Apr 21 22:54:26.560: INFO: Got endpoints: latency-svc-qe4hm [2.336611257s] | |
Apr 21 22:54:26.706: INFO: Got endpoints: latency-svc-mo4rw [2.293766s] | |
Apr 21 22:54:26.857: INFO: Got endpoints: latency-svc-n9rgr [2.329991036s] | |
Apr 21 22:54:27.005: INFO: Got endpoints: latency-svc-zcrx2 [2.334206608s] | |
Apr 21 22:54:27.155: INFO: Got endpoints: latency-svc-288dn [2.306984335s] | |
Apr 21 22:54:27.305: INFO: Got endpoints: latency-svc-qj3y1 [2.323679865s] | |
Apr 21 22:54:27.454: INFO: Got endpoints: latency-svc-46ywx [2.333396428s] | |
Apr 21 22:54:27.605: INFO: Got endpoints: latency-svc-7kpod [2.334441203s] | |
Apr 21 22:54:27.755: INFO: Got endpoints: latency-svc-tc9mo [2.326362325s] | |
STEP: deleting replication controller svc-latency-rc in namespace e2e-tests-svc-latency-d06nw | |
Apr 21 22:54:29.820: INFO: Deleting RC svc-latency-rc took: 2.044969431s | |
Apr 21 22:54:29.820: INFO: Terminating RC svc-latency-rc pods took: 109.98µs | |
Apr 21 22:54:29.820: INFO: Latencies: [83.789937ms 94.080678ms 114.286049ms 144.385772ms 160.985758ms 161.839794ms 240.810504ms 297.173109ms 298.657026ms 345.330475ms 354.297042ms 373.962372ms 380.728362ms 393.204912ms 694.650662ms 753.174354ms 762.261926ms 790.46621ms 810.285512ms 815.696171ms 1.10224359s 1.142570903s 1.200134882s 1.253876928s 1.274758574s 1.528118383s 1.621103111s 1.666715114s 1.707956384s 1.783847365s 1.808582457s 1.835695744s 1.884396261s 1.984390188s 2.029450664s 2.036261258s 2.081825916s 2.082065675s 2.083666802s 2.131612982s 2.142567481s 2.153628251s 2.167208058s 2.168175238s 2.176612142s 2.178257712s 2.18462523s 2.186139317s 2.197668464s 2.199497303s 2.211544722s 2.211779187s 2.218868301s 2.219179239s 2.2204606s 2.221566892s 2.22468467s 2.224930634s 2.227224934s 2.227759855s 2.228699176s 2.228738375s 2.228765714s 2.228983628s 2.229117001s 2.22988323s 2.229908999s 2.229918942s 2.230014969s 2.230024121s 2.230061292s 2.230214278s 2.230384132s 2.230818191s 2.230931664s 2.231220073s 2.231538634s 2.231630196s 2.231722853s 2.231728489s 2.231833314s 2.232114658s 2.232127427s 2.232235242s 2.232310148s 2.232324833s 2.23248219s 2.232763163s 2.232832432s 2.233210529s 2.233235423s 2.233244543s 2.233563148s 2.233727679s 2.233733912s 2.233755043s 2.233786497s 2.233871061s 2.233950056s 2.233995442s 2.234147299s 2.234164888s 2.234227857s 2.234349963s 2.234536565s 2.234671534s 2.23467812s 2.234679445s 2.234812621s 2.234938945s 2.234989794s 2.234992223s 2.235087455s 2.235110788s 2.235178829s 2.235187954s 2.235463243s 2.23566957s 2.23570535s 2.235987058s 2.236376734s 2.236482477s 2.236993613s 2.237023529s 2.237500492s 2.239065535s 2.241054429s 2.245752879s 2.246659971s 2.25131923s 2.251859924s 2.26095533s 2.268729407s 2.270101765s 2.271977606s 2.276091038s 2.277469159s 2.277718023s 2.277972255s 2.278932787s 2.279313069s 2.279782342s 2.281390001s 2.281717109s 2.282599696s 2.282863883s 2.283621341s 2.286632911s 2.2908959s 2.293766s 2.297843215s 2.306984335s 2.313910885s 2.314688373s 2.323679865s 2.326362325s 2.329036344s 2.329864558s 2.329991036s 2.33027924s 2.331447015s 2.331757957s 2.332521294s 2.333270964s 2.333396428s 2.333581255s 2.333963345s 2.334206608s 2.334441203s 2.334793818s 2.334986701s 2.33517407s 2.335480129s 2.335509924s 2.336611257s 2.362301535s 2.373987773s 2.380301242s 2.382429764s 2.428216563s 2.431075401s 2.483773455s 2.516552798s 2.535132869s 2.535379823s 2.781526334s 2.781624519s 2.784283512s 2.827023322s 2.829794776s 2.829984205s 2.84714527s 2.867913021s 2.880553678s 2.882985268s 2.883883656s 2.888587983s 2.929503359s 2.934732938s 2.937930674s] | |
Apr 21 22:54:29.820: INFO: 50 %ile: 2.234147299s | |
Apr 21 22:54:29.820: INFO: 90 %ile: 2.431075401s | |
Apr 21 22:54:29.820: INFO: 99 %ile: 2.934732938s | |
Apr 21 22:54:29.820: INFO: Total sample count: 200 | |
[AfterEach] [k8s.io] Service endpoints latency | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:54:29.820: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-svc-latency-d06nw" for this suite. | |
• [SLOW TEST:50.907 seconds] | |
[k8s.io] Service endpoints latency | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should not be very high [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_latency.go:115 | |
------------------------------ | |
S | |
------------------------------ | |
[BeforeEach] version v1 | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:54:30.195: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should proxy logs on node using proxy subresource [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:64 | |
Apr 21 22:54:30.264: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0/proxy/logs/: <pre> | |
<a href="alternatives.log">alternatives.log</a> | |
<a href="apt/">apt/</a> | |
<a href="auth.log">... (200; 8.304516ms) | |
Apr 21 22:54:30.269: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0/proxy/logs/: <pre> | |
<a href="alternatives.log">alternatives.log</a> | |
<a href="apt/">apt/</a> | |
<a href="auth.log">... (200; 4.128429ms) | |
Apr 21 22:54:30.273: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0/proxy/logs/: <pre> | |
<a href="alternatives.log">alternatives.log</a> | |
<a href="apt/">apt/</a> | |
<a href="auth.log">... (200; 4.184433ms) | |
Apr 21 22:54:30.277: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0/proxy/logs/: <pre> | |
<a href="alternatives.log">alternatives.log</a> | |
<a href="apt/">apt/</a> | |
<a href="auth.log">... (200; 4.015045ms) | |
Apr 21 22:54:30.281: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0/proxy/logs/: <pre> | |
<a href="alternatives.log">alternatives.log</a> | |
<a href="apt/">apt/</a> | |
<a href="auth.log">... (200; 4.057285ms) | |
Apr 21 22:54:30.285: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0/proxy/logs/: <pre> | |
<a href="alternatives.log">alternatives.log</a> | |
<a href="apt/">apt/</a> | |
<a href="auth.log">... (200; 3.886554ms) | |
Apr 21 22:54:30.289: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0/proxy/logs/: <pre> | |
<a href="alternatives.log">alternatives.log</a> | |
<a href="apt/">apt/</a> | |
<a href="auth.log">... (200; 4.156017ms) | |
Apr 21 22:54:30.293: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0/proxy/logs/: <pre> | |
<a href="alternatives.log">alternatives.log</a> | |
<a href="apt/">apt/</a> | |
<a href="auth.log">... (200; 4.111371ms) | |
Apr 21 22:54:30.297: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0/proxy/logs/: <pre> | |
<a href="alternatives.log">alternatives.log</a> | |
<a href="apt/">apt/</a> | |
<a href="auth.log">... (200; 3.966364ms) | |
Apr 21 22:54:30.326: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0/proxy/logs/: <pre> | |
<a href="alternatives.log">alternatives.log</a> | |
<a href="apt/">apt/</a> | |
<a href="auth.log">... (200; 29.088402ms) | |
Apr 21 22:54:30.346: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0/proxy/logs/: <pre> | |
<a href="alternatives.log">alternatives.log</a> | |
<a href="apt/">apt/</a> | |
<a href="auth.log">... (200; 19.236895ms) | |
Apr 21 22:54:30.349: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0/proxy/logs/: <pre> | |
<a href="alternatives.log">alternatives.log</a> | |
<a href="apt/">apt/</a> | |
<a href="auth.log">... (200; 3.834526ms) | |
Apr 21 22:54:30.354: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0/proxy/logs/: <pre> | |
<a href="alternatives.log">alternatives.log</a> | |
<a href="apt/">apt/</a> | |
<a href="auth.log">... (200; 4.201263ms) | |
Apr 21 22:54:30.358: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0/proxy/logs/: <pre> | |
<a href="alternatives.log">alternatives.log</a> | |
<a href="apt/">apt/</a> | |
<a href="auth.log">... (200; 4.117809ms) | |
Apr 21 22:54:30.362: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0/proxy/logs/: <pre> | |
<a href="alternatives.log">alternatives.log</a> | |
<a href="apt/">apt/</a> | |
<a href="auth.log">... (200; 4.160411ms) | |
Apr 21 22:54:30.366: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0/proxy/logs/: <pre> | |
<a href="alternatives.log">alternatives.log</a> | |
<a href="apt/">apt/</a> | |
<a href="auth.log">... (200; 4.04921ms) | |
Apr 21 22:54:30.371: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0/proxy/logs/: <pre> | |
<a href="alternatives.log">alternatives.log</a> | |
<a href="apt/">apt/</a> | |
<a href="auth.log">... (200; 5.038608ms) | |
Apr 21 22:54:30.375: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0/proxy/logs/: <pre> | |
<a href="alternatives.log">alternatives.log</a> | |
<a href="apt/">apt/</a> | |
<a href="auth.log">... (200; 4.146132ms) | |
Apr 21 22:54:30.379: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0/proxy/logs/: <pre> | |
<a href="alternatives.log">alternatives.log</a> | |
<a href="apt/">apt/</a> | |
<a href="auth.log">... (200; 3.948005ms) | |
Apr 21 22:54:30.383: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0/proxy/logs/: <pre> | |
<a href="alternatives.log">alternatives.log</a> | |
<a href="apt/">apt/</a> | |
<a href="auth.log">... (200; 4.241631ms) | |
[AfterEach] version v1 | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:54:30.384: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-proxy-sqjb2" for this suite. | |
• [SLOW TEST:10.210 seconds] | |
[k8s.io] Proxy | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
version v1 | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:40 | |
should proxy logs on node using proxy subresource [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:64 | |
------------------------------ | |
SS | |
------------------------------ | |
[BeforeEach] [k8s.io] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client
Apr 21 22:53:49.264: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:142
[BeforeEach] [k8s.io] Update Demo
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:150
[It] should create and stop a replication controller [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:157
STEP: creating a replication controller
Apr 21 22:53:51.746: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config create -f /jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/docs/user-guide/update-demo/nautilus-rc.yaml --namespace=e2e-tests-kubectl-4kdst'
Apr 21 22:53:51.859: INFO: stderr: ""
Apr 21 22:53:51.860: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" created"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 21 22:53:51.860: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-4kdst'
Apr 21 22:53:51.933: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Apr 21 22:53:51.933: INFO: stdout: ""
STEP: Replicas for name=update-demo: expected=2 actual=0
Apr 21 22:53:56.933: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-4kdst'
Apr 21 22:53:57.011: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Apr 21 22:53:57.011: INFO: stdout: "update-demo-nautilus-k6xrj update-demo-nautilus-miyl5"
Apr 21 22:53:57.011: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods update-demo-nautilus-k6xrj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-4kdst'
Apr 21 22:53:57.086: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Apr 21 22:53:57.086: INFO: stdout: ""
Apr 21 22:53:57.086: INFO: update-demo-nautilus-k6xrj is created but not running
Apr 21 22:54:02.086: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-4kdst'
Apr 21 22:54:02.168: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Apr 21 22:54:02.168: INFO: stdout: "update-demo-nautilus-k6xrj update-demo-nautilus-miyl5"
Apr 21 22:54:02.168: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods update-demo-nautilus-k6xrj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-4kdst'
Apr 21 22:54:02.260: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Apr 21 22:54:02.260: INFO: stdout: ""
Apr 21 22:54:02.260: INFO: update-demo-nautilus-k6xrj is created but not running
Apr 21 22:54:07.260: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-4kdst'
Apr 21 22:54:07.349: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Apr 21 22:54:07.349: INFO: stdout: "update-demo-nautilus-k6xrj update-demo-nautilus-miyl5"
Apr 21 22:54:07.349: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods update-demo-nautilus-k6xrj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-4kdst'
Apr 21 22:54:07.428: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Apr 21 22:54:07.428: INFO: stdout: ""
Apr 21 22:54:07.428: INFO: update-demo-nautilus-k6xrj is created but not running
Apr 21 22:54:12.429: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-4kdst'
Apr 21 22:54:12.505: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Apr 21 22:54:12.505: INFO: stdout: "update-demo-nautilus-k6xrj update-demo-nautilus-miyl5"
Apr 21 22:54:12.505: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods update-demo-nautilus-k6xrj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-4kdst'
Apr 21 22:54:12.583: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Apr 21 22:54:12.583: INFO: stdout: "true"
Apr 21 22:54:12.583: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods update-demo-nautilus-k6xrj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-4kdst'
Apr 21 22:54:12.657: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Apr 21 22:54:12.657: INFO: stdout: "gcr.io/google_containers/update-demo:nautilus"
Apr 21 22:54:12.657: INFO: validating pod update-demo-nautilus-k6xrj
Apr 21 22:54:12.680: INFO: got data: {
  "image": "nautilus.jpg"
}
Apr 21 22:54:12.680: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 21 22:54:12.680: INFO: update-demo-nautilus-k6xrj is verified up and running
Apr 21 22:54:12.680: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods update-demo-nautilus-miyl5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-4kdst'
Apr 21 22:54:12.752: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Apr 21 22:54:12.753: INFO: stdout: ""
Apr 21 22:54:12.753: INFO: update-demo-nautilus-miyl5 is created but not running
Apr 21 22:54:17.753: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-4kdst'
Apr 21 22:54:17.832: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Apr 21 22:54:17.832: INFO: stdout: "update-demo-nautilus-k6xrj update-demo-nautilus-miyl5"
Apr 21 22:54:17.833: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods update-demo-nautilus-k6xrj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-4kdst'
Apr 21 22:54:17.907: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Apr 21 22:54:17.907: INFO: stdout: "true"
Apr 21 22:54:17.907: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods update-demo-nautilus-k6xrj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-4kdst'
Apr 21 22:54:17.981: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Apr 21 22:54:17.981: INFO: stdout: "gcr.io/google_containers/update-demo:nautilus"
Apr 21 22:54:17.981: INFO: validating pod update-demo-nautilus-k6xrj
Apr 21 22:54:17.986: INFO: got data: {
  "image": "nautilus.jpg"
}
Apr 21 22:54:17.986: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 21 22:54:17.986: INFO: update-demo-nautilus-k6xrj is verified up and running
Apr 21 22:54:17.986: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods update-demo-nautilus-miyl5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-4kdst'
Apr 21 22:54:18.061: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Apr 21 22:54:18.061: INFO: stdout: "true"
Apr 21 22:54:18.061: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods update-demo-nautilus-miyl5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-4kdst'
Apr 21 22:54:18.136: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Apr 21 22:54:18.136: INFO: stdout: "gcr.io/google_containers/update-demo:nautilus"
Apr 21 22:54:18.136: INFO: validating pod update-demo-nautilus-miyl5
Apr 21 22:54:18.151: INFO: got data: {
  "image": "nautilus.jpg"
}
Apr 21 22:54:18.151: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 21 22:54:18.151: INFO: update-demo-nautilus-miyl5 is verified up and running
STEP: using delete to clean up resources
Apr 21 22:54:18.151: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config delete --grace-period=0 -f /jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/docs/user-guide/update-demo/nautilus-rc.yaml --namespace=e2e-tests-kubectl-4kdst'
Apr 21 22:54:20.303: INFO: stderr: ""
Apr 21 22:54:20.303: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" deleted"
Apr 21 22:54:20.303: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-4kdst'
Apr 21 22:54:20.383: INFO: stderr: ""
Apr 21 22:54:20.383: INFO: stdout: ""
Apr 21 22:54:20.383: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-4kdst -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 21 22:54:20.459: INFO: stderr: ""
Apr 21 22:54:20.459: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:54:20.459: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-4kdst" for this suite.
• [SLOW TEST:51.230 seconds]
[k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
  [k8s.io] Update Demo
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
    should create and stop a replication controller [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:157
------------------------------
[BeforeEach] [k8s.io] ConfigMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:54:10.716: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/configmap.go:41
STEP: Creating configMap with name configmap-test-volume-a278e72a-084e-11e6-a9ac-42010af00007
STEP: Creating a pod to test consume configMaps
Apr 21 22:54:10.831: INFO: Waiting up to 5m0s for pod pod-configmaps-a27f7ca7-084e-11e6-a9ac-42010af00007 status to be success or failure
Apr 21 22:54:10.838: INFO: No Status.Info for container 'configmap-volume-test' in pod 'pod-configmaps-a27f7ca7-084e-11e6-a9ac-42010af00007' yet
Apr 21 22:54:10.838: INFO: Waiting for pod pod-configmaps-a27f7ca7-084e-11e6-a9ac-42010af00007 in namespace 'e2e-tests-configmap-4goy3' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.782908ms elapsed)
Apr 21 22:54:12.841: INFO: Nil State.Terminated for container 'configmap-volume-test' in pod 'pod-configmaps-a27f7ca7-084e-11e6-a9ac-42010af00007' in namespace 'e2e-tests-configmap-4goy3' so far
Apr 21 22:54:12.842: INFO: Waiting for pod pod-configmaps-a27f7ca7-084e-11e6-a9ac-42010af00007 in namespace 'e2e-tests-configmap-4goy3' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.010538172s elapsed)
Apr 21 22:54:14.846: INFO: Nil State.Terminated for container 'configmap-volume-test' in pod 'pod-configmaps-a27f7ca7-084e-11e6-a9ac-42010af00007' in namespace 'e2e-tests-configmap-4goy3' so far
Apr 21 22:54:14.846: INFO: Waiting for pod pod-configmaps-a27f7ca7-084e-11e6-a9ac-42010af00007 in namespace 'e2e-tests-configmap-4goy3' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.014569717s elapsed)
Apr 21 22:54:16.849: INFO: Nil State.Terminated for container 'configmap-volume-test' in pod 'pod-configmaps-a27f7ca7-084e-11e6-a9ac-42010af00007' in namespace 'e2e-tests-configmap-4goy3' so far
Apr 21 22:54:16.849: INFO: Waiting for pod pod-configmaps-a27f7ca7-084e-11e6-a9ac-42010af00007 in namespace 'e2e-tests-configmap-4goy3' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.018050796s elapsed)
Apr 21 22:54:18.853: INFO: Nil State.Terminated for container 'configmap-volume-test' in pod 'pod-configmaps-a27f7ca7-084e-11e6-a9ac-42010af00007' in namespace 'e2e-tests-configmap-4goy3' so far
Apr 21 22:54:18.853: INFO: Waiting for pod pod-configmaps-a27f7ca7-084e-11e6-a9ac-42010af00007 in namespace 'e2e-tests-configmap-4goy3' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.022431287s elapsed)
Apr 21 22:54:20.857: INFO: Nil State.Terminated for container 'configmap-volume-test' in pod 'pod-configmaps-a27f7ca7-084e-11e6-a9ac-42010af00007' in namespace 'e2e-tests-configmap-4goy3' so far
Apr 21 22:54:20.857: INFO: Waiting for pod pod-configmaps-a27f7ca7-084e-11e6-a9ac-42010af00007 in namespace 'e2e-tests-configmap-4goy3' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.025786912s elapsed)
Apr 21 22:54:22.863: INFO: Nil State.Terminated for container 'configmap-volume-test' in pod 'pod-configmaps-a27f7ca7-084e-11e6-a9ac-42010af00007' in namespace 'e2e-tests-configmap-4goy3' so far
Apr 21 22:54:22.863: INFO: Waiting for pod pod-configmaps-a27f7ca7-084e-11e6-a9ac-42010af00007 in namespace 'e2e-tests-configmap-4goy3' status to be 'success or failure'(found phase: "Pending", readiness: false) (12.032334213s elapsed)
Apr 21 22:54:24.868: INFO: Nil State.Terminated for container 'configmap-volume-test' in pod 'pod-configmaps-a27f7ca7-084e-11e6-a9ac-42010af00007' in namespace 'e2e-tests-configmap-4goy3' so far
Apr 21 22:54:24.868: INFO: Waiting for pod pod-configmaps-a27f7ca7-084e-11e6-a9ac-42010af00007 in namespace 'e2e-tests-configmap-4goy3' status to be 'success or failure'(found phase: "Pending", readiness: false) (14.036946155s elapsed)
Apr 21 22:54:26.872: INFO: Nil State.Terminated for container 'configmap-volume-test' in pod 'pod-configmaps-a27f7ca7-084e-11e6-a9ac-42010af00007' in namespace 'e2e-tests-configmap-4goy3' so far
Apr 21 22:54:26.872: INFO: Waiting for pod pod-configmaps-a27f7ca7-084e-11e6-a9ac-42010af00007 in namespace 'e2e-tests-configmap-4goy3' status to be 'success or failure'(found phase: "Pending", readiness: false) (16.041103558s elapsed)
Apr 21 22:54:28.876: INFO: Nil State.Terminated for container 'configmap-volume-test' in pod 'pod-configmaps-a27f7ca7-084e-11e6-a9ac-42010af00007' in namespace 'e2e-tests-configmap-4goy3' so far
Apr 21 22:54:28.876: INFO: Waiting for pod pod-configmaps-a27f7ca7-084e-11e6-a9ac-42010af00007 in namespace 'e2e-tests-configmap-4goy3' status to be 'success or failure'(found phase: "Pending", readiness: false) (18.044857542s elapsed)
Apr 21 22:54:30.879: INFO: Nil State.Terminated for container 'configmap-volume-test' in pod 'pod-configmaps-a27f7ca7-084e-11e6-a9ac-42010af00007' in namespace 'e2e-tests-configmap-4goy3' so far
Apr 21 22:54:30.879: INFO: Waiting for pod pod-configmaps-a27f7ca7-084e-11e6-a9ac-42010af00007 in namespace 'e2e-tests-configmap-4goy3' status to be 'success or failure'(found phase: "Pending", readiness: false) (20.048341954s elapsed)
Apr 21 22:54:32.883: INFO: Nil State.Terminated for container 'configmap-volume-test' in pod 'pod-configmaps-a27f7ca7-084e-11e6-a9ac-42010af00007' in namespace 'e2e-tests-configmap-4goy3' so far
Apr 21 22:54:32.883: INFO: Waiting for pod pod-configmaps-a27f7ca7-084e-11e6-a9ac-42010af00007 in namespace 'e2e-tests-configmap-4goy3' status to be 'success or failure'(found phase: "Pending", readiness: false) (22.052372357s elapsed)
Apr 21 22:54:34.888: INFO: Nil State.Terminated for container 'configmap-volume-test' in pod 'pod-configmaps-a27f7ca7-084e-11e6-a9ac-42010af00007' in namespace 'e2e-tests-configmap-4goy3' so far
Apr 21 22:54:34.888: INFO: Waiting for pod pod-configmaps-a27f7ca7-084e-11e6-a9ac-42010af00007 in namespace 'e2e-tests-configmap-4goy3' status to be 'success or failure'(found phase: "Pending", readiness: false) (24.056889762s elapsed)
Apr 21 22:54:36.892: INFO: Nil State.Terminated for container 'configmap-volume-test' in pod 'pod-configmaps-a27f7ca7-084e-11e6-a9ac-42010af00007' in namespace 'e2e-tests-configmap-4goy3' so far
Apr 21 22:54:36.892: INFO: Waiting for pod pod-configmaps-a27f7ca7-084e-11e6-a9ac-42010af00007 in namespace 'e2e-tests-configmap-4goy3' status to be 'success or failure'(found phase: "Pending", readiness: false) (26.060830972s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node e2e-gce-master-1-minion-hlmm pod pod-configmaps-a27f7ca7-084e-11e6-a9ac-42010af00007 container configmap-volume-test: <nil>
STEP: Successfully fetched pod logs:content of file "/etc/configmap-volume/data-1": value-1
STEP: Cleaning up the configMap
[AfterEach] [k8s.io] ConfigMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:54:39.021: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-4goy3" for this suite.
• [SLOW TEST:33.331 seconds]
[k8s.io] ConfigMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
  should be consumable from pods in volume as non-root [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/configmap.go:41
------------------------------
[BeforeEach] [k8s.io] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:54:29.990: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should get a host IP [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:227
STEP: creating pod
STEP: ensuring that pod is running and has a hostIP
W0421 22:54:30.059885   17775 request.go:344] Field selector: v1 - pods - metadata.name - pod-hostip-adf46290-084e-11e6-9641-42010af00007: need to check if this is versioned correctly.
Apr 21 22:54:39.281: INFO: Pod pod-hostip-adf46290-084e-11e6-9641-42010af00007 has hostIP: 10.240.0.5
[AfterEach] [k8s.io] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:54:39.291: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-0kawo" for this suite.
• [SLOW TEST:14.331 seconds]
[k8s.io] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
  should get a host IP [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:227
------------------------------
[BeforeEach] [k8s.io] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:54:17.381: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:78
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [k8s.io] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:54:39.450: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-job-429hs" for this suite.
• [SLOW TEST:27.089 seconds]
[k8s.io] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
  should run a job to completion when tasks sometimes fail and are locally restarted
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:78
------------------------------
SSS
------------------------------
[BeforeEach] [k8s.io] kubelet
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:53:48.964: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] kubelet
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:105
[It] kubelet should be able to delete 10 pods per node in 1m0s.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:157
STEP: Creating a RC of 60 pods and wait until all pods of this RC are running
STEP: creating replication controller cleanup60-95ed183c-084e-11e6-8c05-42010af00007 in namespace e2e-tests-kubelet-wok42
Apr 21 22:53:49.806: INFO: Created replication controller with name: cleanup60-95ed183c-084e-11e6-8c05-42010af00007, namespace: e2e-tests-kubelet-wok42, replica count: 60
Apr 21 22:53:59.807: INFO: cleanup60-95ed183c-084e-11e6-8c05-42010af00007 Pods: 60 out of 60 created, 6 running, 54 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Apr 21 22:54:09.807: INFO: cleanup60-95ed183c-084e-11e6-8c05-42010af00007 Pods: 60 out of 60 created, 54 running, 6 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Apr 21 22:54:14.279: INFO: Error getting node stats summary on "e2e-gce-master-1-minion-6ch0", err: an error on the server has prevented the request from succeeding (get nodes e2e-gce-master-1-minion-6ch0:10250)
Apr 21 22:54:18.567: INFO: Error getting node stats summary on "e2e-gce-master-1-minion-fyts", err: an error on the server has prevented the request from succeeding (get nodes e2e-gce-master-1-minion-fyts:10250)
Apr 21 22:54:19.808: INFO: cleanup60-95ed183c-084e-11e6-8c05-42010af00007 Pods: 60 out of 60 created, 59 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Apr 21 22:54:20.265: INFO: Error getting node stats summary on "e2e-gce-master-1-minion-x3cg", err: an error on the server has prevented the request from succeeding (get nodes e2e-gce-master-1-minion-x3cg:10250)
Apr 21 22:54:29.808: INFO: cleanup60-95ed183c-084e-11e6-8c05-42010af00007 Pods: 60 out of 60 created, 60 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Apr 21 22:54:30.809: INFO: Checking pods on node e2e-gce-master-1-minion-x3cg via /runningpods endpoint
Apr 21 22:54:30.809: INFO: Checking pods on node e2e-gce-master-1-minion-fyts via /runningpods endpoint
Apr 21 22:54:30.809: INFO: Checking pods on node e2e-gce-master-1-minion-hlmm via /runningpods endpoint
Apr 21 22:54:30.809: INFO: Checking pods on node e2e-gce-master-1-minion-asea via /runningpods endpoint
Apr 21 22:54:30.809: INFO: Checking pods on node e2e-gce-master-1-minion-8eot via /runningpods endpoint
Apr 21 22:54:30.809: INFO: Checking pods on node e2e-gce-master-1-minion-6ch0 via /runningpods endpoint
Apr 21 22:54:30.882: INFO: Resource usage on node "e2e-gce-master-1-master":
container  cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"/"        0.859       1043.75                 25.86
"runtime"  0.005       37.77                   15.06
"kubelet"  0.019       12.32                   11.78
"misc"     0.011       74.62                   25.81
Resource usage on node "e2e-gce-master-1-minion-6ch0":
container  cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"/"        2.000       1604.89                 25.28
"runtime"  0.971       244.53                  34.09
"kubelet"  0.051       18.36                   16.28
"misc"     0.021       113.02                  42.46
Resource usage on node "e2e-gce-master-1-minion-8eot":
container  cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"runtime"  1.007       193.08                  37.16
"kubelet"  0.048       15.11                   13.42
"misc"     0.021       148.41                  40.81
"/"        1.801       1236.85                 25.15
Resource usage on node "e2e-gce-master-1-minion-asea":
container  cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"/"        1.600       1160.75                 25.37
"runtime"  0.785       100.37                  29.86
"kubelet"  0.083       16.02                   14.01
"misc"     0.058       151.39                  42.73
Resource usage on node "e2e-gce-master-1-minion-fyts":
container  cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"/"        2.040       1641.52                 25.22
"runtime"  0.796       260.01                  29.70
"kubelet"  0.079       18.50                   16.72
"misc"     0.022       113.38                  42.04
Resource usage on node "e2e-gce-master-1-minion-hlmm":
container  cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"/"        1.771       1221.08                 25.18
"runtime"  0.943       175.71                  38.96
"kubelet"  0.065       15.81                   13.91
"misc"     0.016       150.75                  43.13
Resource usage on node "e2e-gce-master-1-minion-x3cg":
container  cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"/"        1.911       1300.01                 25.19
"runtime"  0.942       279.80                  36.40
"kubelet"  0.070       18.82                   16.39
"misc"     0.051       153.34                  44.71
STEP: Deleting the RC
STEP: deleting replication controller cleanup60-95ed183c-084e-11e6-8c05-42010af00007 in namespace e2e-tests-kubelet-wok42
Apr 21 22:54:33.912: INFO: Deleting RC cleanup60-95ed183c-084e-11e6-8c05-42010af00007 took: 3.027795866s
Apr 21 22:54:33.912: INFO: Terminating RC cleanup60-95ed183c-084e-11e6-8c05-42010af00007 pods took: 120.03µs
Apr 21 22:54:34.913: INFO: Checking pods on node e2e-gce-master-1-minion-8eot via /runningpods endpoint
Apr 21 22:54:34.913: INFO: Checking pods on node e2e-gce-master-1-minion-fyts via /runningpods endpoint
Apr 21 22:54:34.913: INFO: Checking pods on node e2e-gce-master-1-minion-6ch0 via /runningpods endpoint
Apr 21 22:54:34.913: INFO: Checking pods on node e2e-gce-master-1-minion-asea via /runningpods endpoint
Apr 21 22:54:34.913: INFO: Checking pods on node e2e-gce-master-1-minion-x3cg via /runningpods endpoint
Apr 21 22:54:34.913: INFO: Checking pods on node e2e-gce-master-1-minion-hlmm via /runningpods endpoint
Apr 21 22:54:34.984: INFO: Waiting for 0 pods to be running on the node; 56 are currently running;
Apr 21 22:54:35.913: INFO: Checking pods on node e2e-gce-master-1-minion-6ch0 via /runningpods endpoint
Apr 21 22:54:35.913: INFO: Checking pods on node e2e-gce-master-1-minion-asea via /runningpods endpoint
Apr 21 22:54:35.913: INFO: Checking pods on node e2e-gce-master-1-minion-x3cg via /runningpods endpoint
Apr 21 22:54:35.913: INFO: Checking pods on node e2e-gce-master-1-minion-hlmm via /runningpods endpoint
Apr 21 22:54:35.913: INFO: Checking pods on node e2e-gce-master-1-minion-fyts via /runningpods endpoint
Apr 21 22:54:35.913: INFO: Checking pods on node e2e-gce-master-1-minion-8eot via /runningpods endpoint
Apr 21 22:54:36.047: INFO: Waiting for 0 pods to be running on the node; 47 are currently running;
Apr 21 22:54:36.913: INFO: Checking pods on node e2e-gce-master-1-minion-6ch0 via /runningpods endpoint
Apr 21 22:54:36.913: INFO: Checking pods on node e2e-gce-master-1-minion-asea via /runningpods endpoint
Apr 21 22:54:36.913: INFO: Checking pods on node e2e-gce-master-1-minion-8eot via /runningpods endpoint
Apr 21 22:54:36.913: INFO: Checking pods on node e2e-gce-master-1-minion-fyts via /runningpods endpoint
Apr 21 22:54:36.913: INFO: Checking pods on node e2e-gce-master-1-minion-hlmm via /runningpods endpoint
Apr 21 22:54:36.913: INFO: Checking pods on node e2e-gce-master-1-minion-x3cg via /runningpods endpoint
Apr 21 22:54:37.062: INFO: Waiting for 0 pods to be running on the node; 39 are currently running;
Apr 21 22:54:37.913: INFO: Checking pods on node e2e-gce-master-1-minion-x3cg via /runningpods endpoint
Apr 21 22:54:37.913: INFO: Checking pods on node e2e-gce-master-1-minion-asea via /runningpods endpoint
Apr 21 22:54:37.913: INFO: Checking pods on node e2e-gce-master-1-minion-fyts via /runningpods endpoint
Apr 21 22:54:37.913: INFO: Checking pods on node e2e-gce-master-1-minion-6ch0 via /runningpods endpoint
Apr 21 22:54:37.913: INFO: Checking pods on node e2e-gce-master-1-minion-hlmm via /runningpods endpoint
Apr 21 22:54:37.913: INFO: Checking pods on node e2e-gce-master-1-minion-8eot via /runningpods endpoint
Apr 21 22:54:37.989: INFO: Waiting for 0 pods to be running on the node; 31 are currently running; | |
Apr 21 22:54:38.913: INFO: Checking pods on node e2e-gce-master-1-minion-6ch0 via /runningpods endpoint | |
Apr 21 22:54:38.912: INFO: Checking pods on node e2e-gce-master-1-minion-x3cg via /runningpods endpoint | |
Apr 21 22:54:38.913: INFO: Checking pods on node e2e-gce-master-1-minion-asea via /runningpods endpoint | |
Apr 21 22:54:38.913: INFO: Checking pods on node e2e-gce-master-1-minion-8eot via /runningpods endpoint | |
Apr 21 22:54:38.913: INFO: Checking pods on node e2e-gce-master-1-minion-hlmm via /runningpods endpoint | |
Apr 21 22:54:38.913: INFO: Checking pods on node e2e-gce-master-1-minion-fyts via /runningpods endpoint | |
Apr 21 22:54:38.997: INFO: Waiting for 0 pods to be running on the node; 21 are currently running; | |
Apr 21 22:54:39.848: INFO: Error getting node stats summary on "e2e-gce-master-1-minion-6ch0", err: an error on the server has prevented the request from succeeding (get nodes e2e-gce-master-1-minion-6ch0:10250) | |
Apr 21 22:54:39.913: INFO: Checking pods on node e2e-gce-master-1-minion-x3cg via /runningpods endpoint | |
Apr 21 22:54:39.913: INFO: Checking pods on node e2e-gce-master-1-minion-8eot via /runningpods endpoint | |
Apr 21 22:54:39.913: INFO: Checking pods on node e2e-gce-master-1-minion-6ch0 via /runningpods endpoint | |
Apr 21 22:54:39.913: INFO: Checking pods on node e2e-gce-master-1-minion-fyts via /runningpods endpoint | |
Apr 21 22:54:39.913: INFO: Checking pods on node e2e-gce-master-1-minion-asea via /runningpods endpoint | |
Apr 21 22:54:39.913: INFO: Checking pods on node e2e-gce-master-1-minion-hlmm via /runningpods endpoint | |
Apr 21 22:54:39.994: INFO: Waiting for 0 pods to be running on the node; 13 are currently running; | |
Apr 21 22:54:40.913: INFO: Checking pods on node e2e-gce-master-1-minion-x3cg via /runningpods endpoint | |
Apr 21 22:54:40.913: INFO: Checking pods on node e2e-gce-master-1-minion-8eot via /runningpods endpoint | |
Apr 21 22:54:40.913: INFO: Checking pods on node e2e-gce-master-1-minion-fyts via /runningpods endpoint | |
Apr 21 22:54:40.913: INFO: Checking pods on node e2e-gce-master-1-minion-hlmm via /runningpods endpoint | |
Apr 21 22:54:40.913: INFO: Checking pods on node e2e-gce-master-1-minion-asea via /runningpods endpoint | |
Apr 21 22:54:40.913: INFO: Checking pods on node e2e-gce-master-1-minion-6ch0 via /runningpods endpoint | |
Apr 21 22:54:40.978: INFO: Waiting for 0 pods to be running on the node; 7 are currently running; | |
Apr 21 22:54:41.913: INFO: Checking pods on node e2e-gce-master-1-minion-fyts via /runningpods endpoint | |
Apr 21 22:54:41.913: INFO: Checking pods on node e2e-gce-master-1-minion-8eot via /runningpods endpoint | |
Apr 21 22:54:41.913: INFO: Checking pods on node e2e-gce-master-1-minion-hlmm via /runningpods endpoint | |
Apr 21 22:54:41.913: INFO: Checking pods on node e2e-gce-master-1-minion-x3cg via /runningpods endpoint | |
Apr 21 22:54:41.913: INFO: Checking pods on node e2e-gce-master-1-minion-6ch0 via /runningpods endpoint | |
Apr 21 22:54:41.913: INFO: Checking pods on node e2e-gce-master-1-minion-asea via /runningpods endpoint | |
Apr 21 22:54:42.027: INFO: Waiting for 0 pods to be running on the node; 4 are currently running; | |
Apr 21 22:54:42.913: INFO: Checking pods on node e2e-gce-master-1-minion-x3cg via /runningpods endpoint | |
Apr 21 22:54:42.913: INFO: Checking pods on node e2e-gce-master-1-minion-6ch0 via /runningpods endpoint | |
Apr 21 22:54:42.913: INFO: Checking pods on node e2e-gce-master-1-minion-8eot via /runningpods endpoint | |
Apr 21 22:54:42.913: INFO: Checking pods on node e2e-gce-master-1-minion-fyts via /runningpods endpoint | |
Apr 21 22:54:42.913: INFO: Checking pods on node e2e-gce-master-1-minion-asea via /runningpods endpoint | |
Apr 21 22:54:42.913: INFO: Checking pods on node e2e-gce-master-1-minion-hlmm via /runningpods endpoint | |
Apr 21 22:54:42.988: INFO: Deleting 60 pods on 6 nodes completed in 9.076068402s after the RC was deleted | |
Apr 21 22:54:42.989: INFO: CPU usage of containers on node "e2e-gce-master-1-minion-x3cg": | |
container 5th% 20th% 50th% 70th% 90th% 95th% 99th% | |
"/" 0.000 0.000 1.258 1.258 1.661 1.661 1.661 | |
"runtime" 0.000 0.000 0.300 0.873 0.873 0.873 0.873 | |
"kubelet" 0.000 0.000 0.071 0.071 0.086 0.086 0.086 | |
"misc" 0.000 0.000 0.038 0.051 0.051 0.051 0.051 | |
CPU usage of containers on node "e2e-gce-master-1-master": | |
container 5th% 20th% 50th% 70th% 90th% 95th% 99th% | |
"/" 0.000 0.000 0.523 0.859 0.859 0.859 0.859 | |
"runtime" 0.000 0.000 0.004 0.005 0.005 0.005 0.005 | |
"kubelet" 0.000 0.000 0.015 0.019 0.019 0.019 0.019 | |
"misc" 0.000 0.000 0.006 0.007 0.007 0.007 0.007 | |
CPU usage of containers on node "e2e-gce-master-1-minion-6ch0": | |
container 5th% 20th% 50th% 70th% 90th% 95th% 99th% | |
"/" 0.000 0.000 1.646 1.646 1.646 1.646 1.646 | |
"runtime" 0.000 0.000 0.762 0.762 0.944 0.944 0.944 | |
"kubelet" 0.000 0.000 0.051 0.069 0.069 0.069 0.069 | |
"misc" 0.000 0.000 0.021 0.031 0.031 0.031 0.031 | |
CPU usage of containers on node "e2e-gce-master-1-minion-8eot": | |
container 5th% 20th% 50th% 70th% 90th% 95th% 99th% | |
"/" 0.000 0.000 1.317 1.317 1.654 1.654 1.654 | |
"runtime" 0.000 0.000 0.200 0.858 0.858 0.858 0.858 | |
"kubelet" 0.000 0.000 0.048 0.078 0.078 0.078 0.078 | |
"misc" 0.000 0.000 0.041 0.041 0.046 0.046 0.046 | |
CPU usage of containers on node "e2e-gce-master-1-minion-asea": | |
container 5th% 20th% 50th% 70th% 90th% 95th% 99th% | |
"/" 0.000 0.000 0.716 1.586 1.586 1.586 1.586 | |
"runtime" 0.000 0.000 0.238 0.765 0.765 0.765 0.765 | |
"kubelet" 0.000 0.000 0.083 0.083 0.089 0.089 0.089 | |
"misc" 0.000 0.000 0.058 0.060 0.060 0.060 0.060 | |
CPU usage of containers on node "e2e-gce-master-1-minion-fyts": | |
container 5th% 20th% 50th% 70th% 90th% 95th% 99th% | |
"/" 0.000 0.000 1.572 1.572 1.848 1.848 1.848 | |
"runtime" 0.000 0.000 0.081 0.762 0.762 0.762 0.762 | |
"kubelet" 0.000 0.000 0.053 0.079 0.079 0.079 0.079 | |
"misc" 0.000 0.000 0.035 0.035 0.051 0.051 0.051 | |
CPU usage of containers on node "e2e-gce-master-1-minion-hlmm": | |
container 5th% 20th% 50th% 70th% 90th% 95th% 99th% | |
"/" 0.000 0.000 0.551 1.771 1.771 1.771 1.771 | |
"runtime" 0.000 0.000 0.898 0.898 0.943 0.943 0.943 | |
"kubelet" 0.000 0.000 0.065 0.074 0.074 0.074 0.074 | |
"misc" 0.000 0.000 0.030 0.030 0.061 0.061 0.061 | |
[AfterEach] [k8s.io] kubelet | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:54:42.989: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-kubelet-wok42" for this suite. | |
[AfterEach] [k8s.io] kubelet | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:109 | |
• [SLOW TEST:59.048 seconds] | |
[k8s.io] kubelet | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
[k8s.io] Clean up pods on node | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
kubelet should be able to delete 10 pods per node in 1m0s. | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:157 | |
------------------------------ | |
S | |
------------------------------ | |
[BeforeEach] [k8s.io] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:54:35.540: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [k8s.io] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:142 | |
[BeforeEach] [k8s.io] Kubectl run rc | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:976 | |
[It] should create an rc from an image [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1020 | |
STEP: running the image gcr.io/google_containers/nginx:1.7.9 | |
Apr 21 22:54:35.594: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config run e2e-test-nginx-rc --image=gcr.io/google_containers/nginx:1.7.9 --generator=run/v1 --namespace=e2e-tests-kubectl-2pzdy' | |
Apr 21 22:54:35.674: INFO: stderr: "" | |
Apr 21 22:54:35.674: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" created" | |
STEP: verifying the rc e2e-test-nginx-rc was created | |
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created | |
STEP: confirm that you can get logs from an rc | |
Apr 21 22:54:37.698: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-vfkii] | |
Apr 21 22:54:37.698: INFO: Waiting up to 5m0s for pod e2e-test-nginx-rc-vfkii status to be running and ready | |
Apr 21 22:54:37.701: INFO: Waiting for pod e2e-test-nginx-rc-vfkii in namespace 'e2e-tests-kubectl-2pzdy' status to be 'running and ready'(found phase: "Pending", readiness: false) (2.805626ms elapsed) | |
Apr 21 22:54:39.704: INFO: Waiting for pod e2e-test-nginx-rc-vfkii in namespace 'e2e-tests-kubectl-2pzdy' status to be 'running and ready'(found phase: "Pending", readiness: false) (2.005945404s elapsed) | |
Apr 21 22:54:41.707: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-vfkii] | |
Apr 21 22:54:41.707: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-2pzdy' | |
Apr 21 22:54:41.821: INFO: stderr: "" | |
[AfterEach] [k8s.io] Kubectl run rc | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:980 | |
Apr 21 22:54:41.821: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-2pzdy' | |
Apr 21 22:54:43.917: INFO: stderr: "" | |
Apr 21 22:54:43.917: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted" | |
[AfterEach] [k8s.io] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:54:43.917: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-kubectl-2pzdy" for this suite. | |
• [SLOW TEST:13.448 seconds] | |
[k8s.io] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
[k8s.io] Kubectl run rc | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should create an rc from an image [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1020 | |
------------------------------ | |
[BeforeEach] [k8s.io] Variable Expansion | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:54:36.032: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should allow substituting values in a container's command [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/expansion.go:100 | |
STEP: Creating a pod to test substitution in container's command | |
Apr 21 22:54:36.115: INFO: Waiting up to 5m0s for pod var-expansion-b18fa441-084e-11e6-bd26-42010af00007 status to be success or failure | |
Apr 21 22:54:36.121: INFO: No Status.Info for container 'dapi-container' in pod 'var-expansion-b18fa441-084e-11e6-bd26-42010af00007' yet | |
Apr 21 22:54:36.121: INFO: Waiting for pod var-expansion-b18fa441-084e-11e6-bd26-42010af00007 in namespace 'e2e-tests-var-expansion-dzbpc' status to be 'success or failure'(found phase: "Pending", readiness: false) (5.159157ms elapsed) | |
Apr 21 22:54:38.124: INFO: No Status.Info for container 'dapi-container' in pod 'var-expansion-b18fa441-084e-11e6-bd26-42010af00007' yet | |
Apr 21 22:54:38.124: INFO: Waiting for pod var-expansion-b18fa441-084e-11e6-bd26-42010af00007 in namespace 'e2e-tests-var-expansion-dzbpc' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.008847495s elapsed) | |
Apr 21 22:54:40.128: INFO: No Status.Info for container 'dapi-container' in pod 'var-expansion-b18fa441-084e-11e6-bd26-42010af00007' yet | |
Apr 21 22:54:40.128: INFO: Waiting for pod var-expansion-b18fa441-084e-11e6-bd26-42010af00007 in namespace 'e2e-tests-var-expansion-dzbpc' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.012683587s elapsed) | |
Apr 21 22:54:42.132: INFO: No Status.Info for container 'dapi-container' in pod 'var-expansion-b18fa441-084e-11e6-bd26-42010af00007' yet | |
Apr 21 22:54:42.132: INFO: Waiting for pod var-expansion-b18fa441-084e-11e6-bd26-42010af00007 in namespace 'e2e-tests-var-expansion-dzbpc' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.016520681s elapsed) | |
STEP: Saw pod success | |
STEP: Trying to get logs from node e2e-gce-master-1-minion-fyts pod var-expansion-b18fa441-084e-11e6-bd26-42010af00007 container dapi-container: <nil> | |
STEP: Successfully fetched pod logs:test-value | |
[AfterEach] [k8s.io] Variable Expansion | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:54:44.163: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-var-expansion-dzbpc" for this suite. | |
• [SLOW TEST:13.157 seconds] | |
[k8s.io] Variable Expansion | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should allow substituting values in a container's command [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/expansion.go:100 | |
------------------------------ | |
[BeforeEach] [k8s.io] Job | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:53:56.493: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should keep restarting failed pods | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:112 | |
STEP: Creating a job | |
STEP: Ensuring job shows many failures | |
[AfterEach] [k8s.io] Job | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:54:24.558: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-job-2su77" for this suite. | |
• [SLOW TEST:53.084 seconds] | |
[k8s.io] Job | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should keep restarting failed pods | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:112 | |
------------------------------ | |
[BeforeEach] [k8s.io] EmptyDir volumes | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:54:36.159: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should support (root,0644,default) [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:93 | |
STEP: Creating a pod to test emptydir 0644 on node default medium | |
Apr 21 22:54:36.229: INFO: Waiting up to 5m0s for pod pod-b1a2898c-084e-11e6-aee8-42010af00007 status to be success or failure | |
Apr 21 22:54:36.237: INFO: No Status.Info for container 'test-container' in pod 'pod-b1a2898c-084e-11e6-aee8-42010af00007' yet | |
Apr 21 22:54:36.237: INFO: Waiting for pod pod-b1a2898c-084e-11e6-aee8-42010af00007 in namespace 'e2e-tests-emptydir-tmify' status to be 'success or failure'(found phase: "Pending", readiness: false) (7.364005ms elapsed) | |
Apr 21 22:54:38.240: INFO: No Status.Info for container 'test-container' in pod 'pod-b1a2898c-084e-11e6-aee8-42010af00007' yet | |
Apr 21 22:54:38.240: INFO: Waiting for pod pod-b1a2898c-084e-11e6-aee8-42010af00007 in namespace 'e2e-tests-emptydir-tmify' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.010783442s elapsed) | |
Apr 21 22:54:40.243: INFO: No Status.Info for container 'test-container' in pod 'pod-b1a2898c-084e-11e6-aee8-42010af00007' yet | |
Apr 21 22:54:40.243: INFO: Waiting for pod pod-b1a2898c-084e-11e6-aee8-42010af00007 in namespace 'e2e-tests-emptydir-tmify' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.013939369s elapsed) | |
Apr 21 22:54:42.248: INFO: No Status.Info for container 'test-container' in pod 'pod-b1a2898c-084e-11e6-aee8-42010af00007' yet | |
Apr 21 22:54:42.248: INFO: Waiting for pod pod-b1a2898c-084e-11e6-aee8-42010af00007 in namespace 'e2e-tests-emptydir-tmify' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.018321348s elapsed) | |
Apr 21 22:54:44.262: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-b1a2898c-084e-11e6-aee8-42010af00007' in namespace 'e2e-tests-emptydir-tmify' so far | |
Apr 21 22:54:44.262: INFO: Waiting for pod pod-b1a2898c-084e-11e6-aee8-42010af00007 in namespace 'e2e-tests-emptydir-tmify' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.032906749s elapsed) | |
STEP: Saw pod success | |
STEP: Trying to get logs from node e2e-gce-master-1-minion-8eot pod pod-b1a2898c-084e-11e6-aee8-42010af00007 container test-container: <nil> | |
STEP: Successfully fetched pod logs:mount type of "/test-volume": 61267 | |
content of file "/test-volume/test-file": mount-tester new file | |
perms of file "/test-volume/test-file": -rw-r--r-- | |
[AfterEach] [k8s.io] EmptyDir volumes | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:54:46.320: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-emptydir-tmify" for this suite. | |
• [SLOW TEST:15.180 seconds] | |
[k8s.io] EmptyDir volumes | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should support (root,0644,default) [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:93 | |
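The mount-tester output above prints the file permissions in `ls -l` form (`-rw-r--r--` for the 0644 the test name promises). As a side sanity check, Go's `fs.FileMode` renders the same bits the same way (this is a demonstration, not the test's own code):

```go
package main

import (
	"fmt"
	"io/fs"
)

// permString renders the 0644 permission bits the way ls -l (and the
// mount-tester log line above) prints them.
func permString() string {
	return fs.FileMode(0o644).String()
}

func main() {
	fmt.Println(permString()) // -rw-r--r--
}
```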
------------------------------ | |
[BeforeEach] [k8s.io] Services | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:54:49.578: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [k8s.io] Services | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:74 | |
Apr 21 22:54:49.697: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
[It] should provide secure master service [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:81 | |
[AfterEach] [k8s.io] Services | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:54:49.716: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-services-u0bzv" for this suite. | |
• [SLOW TEST:5.157 seconds] | |
[k8s.io] Services | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should provide secure master service [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:81 | |
------------------------------ | |
[BeforeEach] [k8s.io] Job | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:54:39.844: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should run a job to completion when tasks succeed | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:59 | |
STEP: Creating a job | |
STEP: Ensuring job reaches completions | |
[AfterEach] [k8s.io] Job | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:54:51.908: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-job-s62wh" for this suite. | |
• [SLOW TEST:17.082 seconds] | |
[k8s.io] Job | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should run a job to completion when tasks succeed | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:59 | |
------------------------------ | |
[BeforeEach] [k8s.io] Generated release_1_3 clientset | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:54:44.322: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should create pods, delete pods, watch pods | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/generated_clientset.go:241 | |
STEP: constructing the pod | |
STEP: setting up watch | |
STEP: creating the pod | |
STEP: verifying the pod is in kubernetes | |
STEP: verifying pod creation was observed | |
W0421 22:54:44.438496 17775 request.go:344] Field selector: v1 - pods - metadata.name - podb6833ae1-084e-11e6-9641-42010af00007: need to check if this is versioned correctly. | |
STEP: deleting the pod gracefully | |
STEP: verifying pod deletion was observed | |
[AfterEach] [k8s.io] Generated release_1_3 clientset | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:54:52.053: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-clientset-97mea" for this suite. | |
• [SLOW TEST:17.754 seconds] | |
[k8s.io] Generated release_1_3 clientset | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should create pods, delete pods, watch pods | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/generated_clientset.go:241 | |
------------------------------ | |
S | |
------------------------------ | |
[BeforeEach] [k8s.io] Secrets | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:54:48.015: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should be consumable from pods in env vars [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/secrets.go:159 | |
STEP: Creating secret with name secret-test-b8b158fd-084e-11e6-8c05-42010af00007 | |
STEP: Creating a pod to test consume secrets | |
Apr 21 22:54:48.072: INFO: Waiting up to 5m0s for pod pod-secrets-b8b46996-084e-11e6-8c05-42010af00007 status to be success or failure | |
Apr 21 22:54:48.077: INFO: No Status.Info for container 'secret-env-test' in pod 'pod-secrets-b8b46996-084e-11e6-8c05-42010af00007' yet | |
Apr 21 22:54:48.077: INFO: Waiting for pod pod-secrets-b8b46996-084e-11e6-8c05-42010af00007 in namespace 'e2e-tests-secrets-kr2oe' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.13059ms elapsed) | |
Apr 21 22:54:50.080: INFO: Nil State.Terminated for container 'secret-env-test' in pod 'pod-secrets-b8b46996-084e-11e6-8c05-42010af00007' in namespace 'e2e-tests-secrets-kr2oe' so far | |
Apr 21 22:54:50.080: INFO: Waiting for pod pod-secrets-b8b46996-084e-11e6-8c05-42010af00007 in namespace 'e2e-tests-secrets-kr2oe' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.007451452s elapsed) | |
STEP: Saw pod success | |
STEP: Trying to get logs from node e2e-gce-master-1-minion-8eot pod pod-secrets-b8b46996-084e-11e6-8c05-42010af00007 container secret-env-test: <nil> | |
STEP: Successfully fetched pod logs:KUBERNETES_PORT=tcp://10.0.0.1:443 | |
KUBERNETES_SERVICE_PORT=443 | |
HOSTNAME=pod-secrets-b8b46996-084e-11e6-8c05-42010af00007 | |
SHLVL=1 | |
HOME=/root | |
SECRET_DATA=value-1 | |
KUBERNETES_PORT_443_TCP_ADDR=10.0.0.1 | |
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin | |
KUBERNETES_PORT_443_TCP_PORT=443 | |
KUBERNETES_PORT_443_TCP_PROTO=tcp | |
KUBERNETES_PORT_443_TCP=tcp://10.0.0.1:443 | |
KUBERNETES_SERVICE_PORT_HTTPS=443 | |
PWD=/ | |
KUBERNETES_SERVICE_HOST=10.0.0.1 | |
STEP: Cleaning up the secret | |
[AfterEach] [k8s.io] Secrets | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:54:52.159: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-secrets-kr2oe" for this suite. | |
• [SLOW TEST:14.171 seconds] | |
[k8s.io] Secrets | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should be consumable from pods in env vars [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/secrets.go:159 | |
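The fetched logs above are a plain KEY=VALUE dump of the container's environment, from which the test confirms the secret surfaced as `SECRET_DATA=value-1`. A small sketch of that check (`parseEnv` is a hypothetical helper, not part of the e2e suite):

```go
package main

import (
	"fmt"
	"strings"
)

// parseEnv turns a KEY=VALUE-per-line dump into a map, splitting each
// line on the first "=" so values like tcp://10.0.0.1:443 survive intact.
func parseEnv(dump string) map[string]string {
	env := map[string]string{}
	for _, line := range strings.Split(dump, "\n") {
		if k, v, ok := strings.Cut(line, "="); ok {
			env[k] = v
		}
	}
	return env
}

func main() {
	logs := "SECRET_DATA=value-1\nKUBERNETES_SERVICE_HOST=10.0.0.1"
	fmt.Println(parseEnv(logs)["SECRET_DATA"]) // value-1
}
```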
------------------------------ | |
[BeforeEach] [k8s.io] Pods | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:54:00.921: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should be restarted with a docker exec "cat /tmp/health" liveness probe [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:687 | |
STEP: Creating pod liveness-exec in namespace e2e-tests-pods-buwhx | |
W0421 22:54:00.993361 17779 request.go:344] Field selector: v1 - pods - metadata.name - liveness-exec: need to check if this is versioned correctly. | |
Apr 21 22:54:06.457: INFO: Started pod liveness-exec in namespace e2e-tests-pods-buwhx | |
STEP: checking the pod's current state and verifying that restartCount is present | |
Apr 21 22:54:06.475: INFO: Initial restart count of pod liveness-exec is 0 | |
Apr 21 22:54:52.569: INFO: Restart count of pod e2e-tests-pods-buwhx/liveness-exec is now 1 (46.093419139s elapsed) | |
STEP: deleting the pod | |
[AfterEach] [k8s.io] Pods | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:54:52.579: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-pods-buwhx" for this suite. | |
• [SLOW TEST:61.677 seconds] | |
[k8s.io] Pods | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should be restarted with a docker exec "cat /tmp/health" liveness probe [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:687 | |
------------------------------ | |
[BeforeEach] [k8s.io] DNS | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:53:48.961: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should provide DNS for services [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:331 | |
STEP: Waiting for DNS Service to be Running | |
W0421 22:53:51.644467 17751 request.go:344] Field selector: v1 - pods - metadata.name - kube-dns-v11-5kbrl: need to check if this is versioned correctly. | |
STEP: Creating a test headless service | |
STEP: Running these commands on wheezy:for i in `seq 1 600`; do test -n "$$(dig +notcp +noall +answer +search dns-test-service A)" && echo OK > /results/wheezy_udp@dns-test-service;test -n "$$(dig +tcp +noall +answer +search dns-test-service A)" && echo OK > /results/wheezy_tcp@dns-test-service;test -n "$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-c9zef A)" && echo OK > /results/[email protected];test -n "$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-c9zef A)" && echo OK > /results/[email protected];test -n "$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-c9zef.svc A)" && echo OK > /results/[email protected];test -n "$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-c9zef.svc A)" && echo OK > /results/[email protected];test -n "$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-c9zef.svc SRV)" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-c9zef.svc;test -n "$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-c9zef.svc SRV)" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-c9zef.svc;test -n "$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-c9zef.svc SRV)" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-c9zef.svc;test -n "$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-c9zef.svc SRV)" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-c9zef.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-c9zef.pod.cluster.local"}');test -n "$$(dig +notcp +noall +answer +search $${podARec} A)" && echo OK > /results/wheezy_udp@PodARecord;test -n "$$(dig +tcp +noall +answer +search $${podARec} A)" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done | |
STEP: Running these commands on jessie:for i in `seq 1 600`; do test -n "$$(dig +notcp +noall +answer +search dns-test-service A)" && echo OK > /results/jessie_udp@dns-test-service;test -n "$$(dig +tcp +noall +answer +search dns-test-service A)" && echo OK > /results/jessie_tcp@dns-test-service;test -n "$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-c9zef A)" && echo OK > /results/[email protected];test -n "$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-c9zef A)" && echo OK > /results/[email protected];test -n "$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-c9zef.svc A)" && echo OK > /results/[email protected];test -n "$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-c9zef.svc A)" && echo OK > /results/[email protected];test -n "$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-c9zef.svc SRV)" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-c9zef.svc;test -n "$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-c9zef.svc SRV)" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-c9zef.svc;test -n "$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-c9zef.svc SRV)" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-c9zef.svc;test -n "$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-c9zef.svc SRV)" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-c9zef.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-c9zef.pod.cluster.local"}');test -n "$$(dig +notcp +noall +answer +search $${podARec} A)" && echo OK > /results/jessie_udp@PodARecord;test -n "$$(dig +tcp +noall +answer +search $${podARec} A)" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done | |
STEP: creating a pod to probe DNS | |
STEP: submitting the pod to kubernetes | |
W0421 22:53:51.709705 17751 request.go:344] Field selector: v1 - pods - metadata.name - dns-test-971a3e67-084e-11e6-b067-42010af00007: need to check if this is versioned correctly. | |
STEP: retrieving the pod | |
STEP: looking for the results for each expected name from probers
Apr 21 22:54:53.782: INFO: DNS probes using dns-test-971a3e67-084e-11e6-b067-42010af00007 succeeded | |
STEP: deleting the pod | |
STEP: deleting the test service | |
STEP: deleting the test headless service | |
[AfterEach] [k8s.io] DNS | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:54:53.835: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-dns-c9zef" for this suite. | |
• [SLOW TEST:74.895 seconds] | |
[k8s.io] DNS | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should provide DNS for services [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:331 | |
------------------------------ | |
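The dig loops above resolve A and SRV records for both a regular and a headless test service. A headless service is simply one with `clusterIP: None`; a sketch, with the selector label assumed:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service
spec:
  clusterIP: None        # headless: DNS answers with the backing pods' A records
  selector:
    dns-test: "true"     # assumed label on the probe pods
  ports:
  - name: http           # yields the _http._tcp SRV records queried above
    protocol: TCP
    port: 80
```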
S | |
------------------------------ | |
[BeforeEach] [k8s.io] Deployment | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:53:48.920: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] RecreateDeployment should delete old pods and create new ones | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:64 | |
Apr 21 22:53:49.733: INFO: Pod name sample-pod-3: Found 0 pods out of 3 | |
Apr 21 22:53:54.739: INFO: Pod name sample-pod-3: Found 3 pods out of 3 | |
STEP: ensuring each pod is running | |
W0421 22:53:54.739610 17649 request.go:344] Field selector: v1 - pods - metadata.name - test-recreate-controller-mx8u9: need to check if this is versioned correctly. | |
W0421 22:54:44.098521 17649 request.go:344] Field selector: v1 - pods - metadata.name - test-recreate-controller-vlh0i: need to check if this is versioned correctly. | |
W0421 22:54:44.116537 17649 request.go:344] Field selector: v1 - pods - metadata.name - test-recreate-controller-xj405: need to check if this is versioned correctly. | |
STEP: trying to dial each unique pod | |
Apr 21 22:54:44.181: INFO: Controller sample-pod-3: Got non-empty result from replica 1 [test-recreate-controller-mx8u9]: "<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\n body {\n width: 35em;\n margin: 0 auto;\n font-family: Tahoma, Verdana, Arial, sans-serif;\n }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href=\"http://nginx.org/\">nginx.org</a>.<br/>\nCommercial support is available at\n<a href=\"http://nginx.com/\">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n", 1 of 3 required successes so far | |
Apr 21 22:54:44.191: INFO: Controller sample-pod-3: Got non-empty result from replica 2 [test-recreate-controller-vlh0i]: "<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\n body {\n width: 35em;\n margin: 0 auto;\n font-family: Tahoma, Verdana, Arial, sans-serif;\n }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href=\"http://nginx.org/\">nginx.org</a>.<br/>\nCommercial support is available at\n<a href=\"http://nginx.com/\">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n", 2 of 3 required successes so far | |
Apr 21 22:54:44.203: INFO: Controller sample-pod-3: Got non-empty result from replica 3 [test-recreate-controller-xj405]: "<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\n body {\n width: 35em;\n margin: 0 auto;\n font-family: Tahoma, Verdana, Arial, sans-serif;\n }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href=\"http://nginx.org/\">nginx.org</a>.<br/>\nCommercial support is available at\n<a href=\"http://nginx.com/\">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n", 3 of 3 required successes so far | |
Apr 21 22:54:44.203: INFO: Creating deployment test-recreate-deployment | |
Apr 21 22:54:50.254: INFO: Deleting deployment test-recreate-deployment | |
Apr 21 22:54:54.357: INFO: Ensuring deployment test-recreate-deployment was deleted | |
Apr 21 22:54:54.360: INFO: Ensuring deployment test-recreate-deployment's RSes were deleted | |
Apr 21 22:54:54.362: INFO: Ensuring deployment test-recreate-deployment's Pods were deleted | |
[AfterEach] [k8s.io] Deployment | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:54:54.365: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-deployment-wa6ce" for this suite. | |
• [SLOW TEST:75.464 seconds] | |
[k8s.io] Deployment | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
RecreateDeployment should delete old pods and create new ones | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:64 | |
------------------------------ | |
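The Recreate strategy under test tears down all old pods before the new ReplicaSet scales up, in contrast to the default RollingUpdate. A sketch of such a deployment; the `extensions/v1beta1` API group is an assumption based on the 2016-era log:

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-recreate-deployment
spec:
  replicas: 3
  strategy:
    type: Recreate       # delete old pods first, then create new ones
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      containers:
      - name: nginx
        image: nginx     # matches the welcome page fetched from each replica above
```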
[BeforeEach] [k8s.io] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:54:44.048: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [k8s.io] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:142 | |
[It] should create a job from an image, then delete the job [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1138 | |
STEP: executing a command with run --rm and attach with stdin | |
Apr 21 22:54:44.118: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config --namespace= run e2e-test-rm-busybox-job --image=gcr.io/google_containers/busybox:1.24 --rm=true --restart=Never --attach=true --stdin -- sh -c cat && echo 'stdin closed'' | |
Apr 21 22:54:54.524: INFO: stderr: "" | |
Apr 21 22:54:54.524: INFO: stdout: "Waiting for pod default/e2e-test-rm-busybox-job-8h205 to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-8h205 to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-8h205 to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-8h205 to be running, status is Pending, pod ready: false\nabcd1234stdin closed\njob \"e2e-test-rm-busybox-job\" deleted" | |
STEP: verifying the job e2e-test-rm-busybox-job was deleted | |
[AfterEach] [k8s.io] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:54:54.527: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-kubectl-xask4" for this suite. | |
• [SLOW TEST:20.500 seconds] | |
[k8s.io] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
[k8s.io] Kubectl run --rm job | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should create a job from an image, then delete the job [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1138 | |
------------------------------ | |
[BeforeEach] [k8s.io] ResourceQuota | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:54:54.736: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should create a ResourceQuota and capture the life of a pod. | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:272 | |
STEP: Creating a ResourceQuota | |
STEP: Ensuring resource quota status is calculated | |
STEP: Creating a Pod that fits quota | |
STEP: Ensuring ResourceQuota status captures the pod usage | |
STEP: Not allowing a pod to be created that exceeds remaining quota | |
STEP: Deleting the pod | |
STEP: Ensuring resource quota status released the pod usage | |
[AfterEach] [k8s.io] ResourceQuota | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:55:00.828: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-resourcequota-tpb3j" for this suite. | |
• [SLOW TEST:11.110 seconds] | |
[k8s.io] ResourceQuota | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should create a ResourceQuota and capture the life of a pod. | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:272 | |
------------------------------ | |
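The quota lifecycle above (create quota, admit a fitting pod, reject one that exceeds the remainder, release usage on delete) can be reproduced with a ResourceQuota like the following; the specific limits are illustrative assumptions:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota
spec:
  hard:
    # Usage is charged against these totals; a pod that would exceed
    # the remaining quota is rejected at admission time.
    pods: "2"
    requests.cpu: "500m"
    requests.memory: 512Mi
```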
[BeforeEach] [k8s.io] V1Job | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:53:57.591: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should fail a job | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202 | |
STEP: Creating a job | |
STEP: Ensuring job was failed | |
[AfterEach] [k8s.io] V1Job | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:54:23.669: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-v1job-7vr9a" for this suite. | |
• [SLOW TEST:71.098 seconds] | |
[k8s.io] V1Job | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should fail a job | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202 | |
------------------------------ | |
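A job "fails" in this suite when its pods keep exiting non-zero and it never reaches its completion count. A minimal sketch; the image, command, and deadline are assumptions, not taken from this log:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: should-fail
spec:
  completions: 1
  activeDeadlineSeconds: 30     # assumption: bound the run so failure is observed quickly
  template:
    metadata:
      name: should-fail
    spec:
      restartPolicy: Never      # each non-zero exit produces a failed pod, not a restart
      containers:
      - name: c
        image: gcr.io/google_containers/busybox
        command: ["/bin/sh", "-c", "exit 1"]
```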
SSS | |
------------------------------ | |
[BeforeEach] [k8s.io] EmptyDir volumes | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:54:20.766: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should support (non-root,0777,tmpfs) [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:85 | |
STEP: Creating a pod to test emptydir 0777 on tmpfs | |
Apr 21 22:54:20.859: INFO: Waiting up to 5m0s for pod pod-a8795fb9-084e-11e6-bb3d-42010af00007 status to be success or failure | |
Apr 21 22:54:20.861: INFO: No Status.Info for container 'test-container' in pod 'pod-a8795fb9-084e-11e6-bb3d-42010af00007' yet | |
Apr 21 22:54:20.861: INFO: Waiting for pod pod-a8795fb9-084e-11e6-bb3d-42010af00007 in namespace 'e2e-tests-emptydir-mok3z' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.778195ms elapsed) | |
Apr 21 22:54:22.865: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-a8795fb9-084e-11e6-bb3d-42010af00007' in namespace 'e2e-tests-emptydir-mok3z' so far | |
Apr 21 22:54:22.865: INFO: Waiting for pod pod-a8795fb9-084e-11e6-bb3d-42010af00007 in namespace 'e2e-tests-emptydir-mok3z' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.006605792s elapsed) | |
Apr 21 22:54:24.875: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-a8795fb9-084e-11e6-bb3d-42010af00007' in namespace 'e2e-tests-emptydir-mok3z' so far | |
Apr 21 22:54:24.875: INFO: Waiting for pod pod-a8795fb9-084e-11e6-bb3d-42010af00007 in namespace 'e2e-tests-emptydir-mok3z' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.01646731s elapsed) | |
Apr 21 22:54:26.882: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-a8795fb9-084e-11e6-bb3d-42010af00007' in namespace 'e2e-tests-emptydir-mok3z' so far | |
Apr 21 22:54:26.882: INFO: Waiting for pod pod-a8795fb9-084e-11e6-bb3d-42010af00007 in namespace 'e2e-tests-emptydir-mok3z' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.023051856s elapsed) | |
Apr 21 22:54:28.885: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-a8795fb9-084e-11e6-bb3d-42010af00007' in namespace 'e2e-tests-emptydir-mok3z' so far | |
Apr 21 22:54:28.885: INFO: Waiting for pod pod-a8795fb9-084e-11e6-bb3d-42010af00007 in namespace 'e2e-tests-emptydir-mok3z' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.026646257s elapsed) | |
Apr 21 22:54:30.888: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-a8795fb9-084e-11e6-bb3d-42010af00007' in namespace 'e2e-tests-emptydir-mok3z' so far | |
Apr 21 22:54:30.888: INFO: Waiting for pod pod-a8795fb9-084e-11e6-bb3d-42010af00007 in namespace 'e2e-tests-emptydir-mok3z' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.02988137s elapsed) | |
Apr 21 22:54:32.894: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-a8795fb9-084e-11e6-bb3d-42010af00007' in namespace 'e2e-tests-emptydir-mok3z' so far | |
Apr 21 22:54:32.894: INFO: Waiting for pod pod-a8795fb9-084e-11e6-bb3d-42010af00007 in namespace 'e2e-tests-emptydir-mok3z' status to be 'success or failure'(found phase: "Pending", readiness: false) (12.035063124s elapsed) | |
Apr 21 22:54:34.898: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-a8795fb9-084e-11e6-bb3d-42010af00007' in namespace 'e2e-tests-emptydir-mok3z' so far | |
Apr 21 22:54:34.898: INFO: Waiting for pod pod-a8795fb9-084e-11e6-bb3d-42010af00007 in namespace 'e2e-tests-emptydir-mok3z' status to be 'success or failure'(found phase: "Pending", readiness: false) (14.039073926s elapsed) | |
Apr 21 22:54:36.901: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-a8795fb9-084e-11e6-bb3d-42010af00007' in namespace 'e2e-tests-emptydir-mok3z' so far | |
Apr 21 22:54:36.901: INFO: Waiting for pod pod-a8795fb9-084e-11e6-bb3d-42010af00007 in namespace 'e2e-tests-emptydir-mok3z' status to be 'success or failure'(found phase: "Pending", readiness: false) (16.042491701s elapsed) | |
Apr 21 22:54:38.918: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-a8795fb9-084e-11e6-bb3d-42010af00007' in namespace 'e2e-tests-emptydir-mok3z' so far | |
Apr 21 22:54:38.918: INFO: Waiting for pod pod-a8795fb9-084e-11e6-bb3d-42010af00007 in namespace 'e2e-tests-emptydir-mok3z' status to be 'success or failure'(found phase: "Pending", readiness: false) (18.059569084s elapsed) | |
Apr 21 22:54:40.940: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-a8795fb9-084e-11e6-bb3d-42010af00007' in namespace 'e2e-tests-emptydir-mok3z' so far | |
Apr 21 22:54:40.940: INFO: Waiting for pod pod-a8795fb9-084e-11e6-bb3d-42010af00007 in namespace 'e2e-tests-emptydir-mok3z' status to be 'success or failure'(found phase: "Pending", readiness: false) (20.080948591s elapsed) | |
Apr 21 22:54:42.959: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-a8795fb9-084e-11e6-bb3d-42010af00007' in namespace 'e2e-tests-emptydir-mok3z' so far | |
Apr 21 22:54:42.959: INFO: Waiting for pod pod-a8795fb9-084e-11e6-bb3d-42010af00007 in namespace 'e2e-tests-emptydir-mok3z' status to be 'success or failure'(found phase: "Pending", readiness: false) (22.100125859s elapsed) | |
Apr 21 22:54:44.963: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-a8795fb9-084e-11e6-bb3d-42010af00007' in namespace 'e2e-tests-emptydir-mok3z' so far | |
Apr 21 22:54:44.963: INFO: Waiting for pod pod-a8795fb9-084e-11e6-bb3d-42010af00007 in namespace 'e2e-tests-emptydir-mok3z' status to be 'success or failure'(found phase: "Pending", readiness: false) (24.104384016s elapsed) | |
Apr 21 22:54:46.967: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-a8795fb9-084e-11e6-bb3d-42010af00007' in namespace 'e2e-tests-emptydir-mok3z' so far | |
Apr 21 22:54:46.967: INFO: Waiting for pod pod-a8795fb9-084e-11e6-bb3d-42010af00007 in namespace 'e2e-tests-emptydir-mok3z' status to be 'success or failure'(found phase: "Pending", readiness: false) (26.108770766s elapsed) | |
Apr 21 22:54:48.972: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-a8795fb9-084e-11e6-bb3d-42010af00007' in namespace 'e2e-tests-emptydir-mok3z' so far | |
Apr 21 22:54:48.972: INFO: Waiting for pod pod-a8795fb9-084e-11e6-bb3d-42010af00007 in namespace 'e2e-tests-emptydir-mok3z' status to be 'success or failure'(found phase: "Pending", readiness: false) (28.113032434s elapsed) | |
Apr 21 22:54:50.976: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-a8795fb9-084e-11e6-bb3d-42010af00007' in namespace 'e2e-tests-emptydir-mok3z' so far | |
Apr 21 22:54:50.976: INFO: Waiting for pod pod-a8795fb9-084e-11e6-bb3d-42010af00007 in namespace 'e2e-tests-emptydir-mok3z' status to be 'success or failure'(found phase: "Pending", readiness: false) (30.117320357s elapsed) | |
Apr 21 22:54:52.980: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-a8795fb9-084e-11e6-bb3d-42010af00007' in namespace 'e2e-tests-emptydir-mok3z' so far | |
Apr 21 22:54:52.980: INFO: Waiting for pod pod-a8795fb9-084e-11e6-bb3d-42010af00007 in namespace 'e2e-tests-emptydir-mok3z' status to be 'success or failure'(found phase: "Pending", readiness: false) (32.121236776s elapsed) | |
STEP: Saw pod success | |
STEP: Trying to get logs from node e2e-gce-master-1-minion-x3cg pod pod-a8795fb9-084e-11e6-bb3d-42010af00007 container test-container: <nil> | |
STEP: Successfully fetched pod logs: mount type of "/test-volume": tmpfs
content of file "/test-volume/test-file": mount-tester new file | |
perms of file "/test-volume/test-file": -rwxrwxrwx | |
[AfterEach] [k8s.io] EmptyDir volumes | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:54:55.035: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-emptydir-mok3z" for this suite. | |
• [SLOW TEST:49.304 seconds] | |
[k8s.io] EmptyDir volumes | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should support (non-root,0777,tmpfs) [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:85 | |
------------------------------ | |
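The (non-root,0777,tmpfs) case boils down to an emptyDir with `medium: Memory` mounted into a non-root container; the fetched logs above confirm the mount type (tmpfs) and the `-rwxrwxrwx` file perms. A sketch, with the mount-tester image, args, and UID as assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-test
spec:
  containers:
  - name: test-container
    image: gcr.io/google_containers/mounttest:0.6   # assumed tag of the suite's mount-tester
    args: ["--fs_type=/test-volume",
           "--new_file_0777=/test-volume/test-file",
           "--file_perm=/test-volume/test-file"]
    securityContext:
      runAsUser: 1001          # the "non-root" part of the test name
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory           # tmpfs backing, as reported in the fetched logs
```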
[BeforeEach] [k8s.io] EmptyDir volumes | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:54:44.474: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should support (root,0666,default) [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:97 | |
STEP: Creating a pod to test emptydir 0666 on node default medium | |
Apr 21 22:54:44.549: INFO: Waiting up to 5m0s for pod pod-b697fc20-084e-11e6-bd92-42010af00007 status to be success or failure | |
Apr 21 22:54:44.553: INFO: No Status.Info for container 'test-container' in pod 'pod-b697fc20-084e-11e6-bd92-42010af00007' yet | |
Apr 21 22:54:44.553: INFO: Waiting for pod pod-b697fc20-084e-11e6-bd92-42010af00007 in namespace 'e2e-tests-emptydir-rx4zf' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.076122ms elapsed) | |
Apr 21 22:54:46.556: INFO: No Status.Info for container 'test-container' in pod 'pod-b697fc20-084e-11e6-bd92-42010af00007' yet | |
Apr 21 22:54:46.556: INFO: Waiting for pod pod-b697fc20-084e-11e6-bd92-42010af00007 in namespace 'e2e-tests-emptydir-rx4zf' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.007736158s elapsed) | |
Apr 21 22:54:48.560: INFO: No Status.Info for container 'test-container' in pod 'pod-b697fc20-084e-11e6-bd92-42010af00007' yet | |
Apr 21 22:54:48.561: INFO: Waiting for pod pod-b697fc20-084e-11e6-bd92-42010af00007 in namespace 'e2e-tests-emptydir-rx4zf' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.011818279s elapsed) | |
Apr 21 22:54:50.565: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-b697fc20-084e-11e6-bd92-42010af00007' in namespace 'e2e-tests-emptydir-rx4zf' so far | |
Apr 21 22:54:50.565: INFO: Waiting for pod pod-b697fc20-084e-11e6-bd92-42010af00007 in namespace 'e2e-tests-emptydir-rx4zf' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.01598268s elapsed) | |
Apr 21 22:54:52.569: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-b697fc20-084e-11e6-bd92-42010af00007' in namespace 'e2e-tests-emptydir-rx4zf' so far | |
Apr 21 22:54:52.569: INFO: Waiting for pod pod-b697fc20-084e-11e6-bd92-42010af00007 in namespace 'e2e-tests-emptydir-rx4zf' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.019970458s elapsed) | |
Apr 21 22:54:54.573: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-b697fc20-084e-11e6-bd92-42010af00007' in namespace 'e2e-tests-emptydir-rx4zf' so far | |
Apr 21 22:54:54.573: INFO: Waiting for pod pod-b697fc20-084e-11e6-bd92-42010af00007 in namespace 'e2e-tests-emptydir-rx4zf' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.023999848s elapsed) | |
Apr 21 22:54:56.577: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-b697fc20-084e-11e6-bd92-42010af00007' in namespace 'e2e-tests-emptydir-rx4zf' so far | |
Apr 21 22:54:56.577: INFO: Waiting for pod pod-b697fc20-084e-11e6-bd92-42010af00007 in namespace 'e2e-tests-emptydir-rx4zf' status to be 'success or failure'(found phase: "Pending", readiness: false) (12.027994311s elapsed) | |
Apr 21 22:54:58.581: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-b697fc20-084e-11e6-bd92-42010af00007' in namespace 'e2e-tests-emptydir-rx4zf' so far | |
Apr 21 22:54:58.581: INFO: Waiting for pod pod-b697fc20-084e-11e6-bd92-42010af00007 in namespace 'e2e-tests-emptydir-rx4zf' status to be 'success or failure'(found phase: "Pending", readiness: false) (14.032030907s elapsed) | |
STEP: Saw pod success | |
STEP: Trying to get logs from node e2e-gce-master-1-minion-6ch0 pod pod-b697fc20-084e-11e6-bd92-42010af00007 container test-container: <nil> | |
STEP: Successfully fetched pod logs: mount type of "/test-volume": 61267
content of file "/test-volume/test-file": mount-tester new file | |
perms of file "/test-volume/test-file": -rw-rw-rw- | |
[AfterEach] [k8s.io] EmptyDir volumes | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:55:00.605: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-emptydir-rx4zf" for this suite. | |
• [SLOW TEST:26.150 seconds] | |
[k8s.io] EmptyDir volumes | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should support (root,0666,default) [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:97 | |
------------------------------ | |
S | |
------------------------------ | |
[BeforeEach] [k8s.io] Downward API | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:55:02.078: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should provide pod name and namespace as env vars [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/downward_api.go:61 | |
STEP: Creating a pod to test downward api env vars | |
Apr 21 22:55:02.157: INFO: Waiting up to 5m0s for pod downward-api-c116b8ca-084e-11e6-9641-42010af00007 status to be success or failure | |
Apr 21 22:55:02.163: INFO: No Status.Info for container 'dapi-container' in pod 'downward-api-c116b8ca-084e-11e6-9641-42010af00007' yet | |
Apr 21 22:55:02.163: INFO: Waiting for pod downward-api-c116b8ca-084e-11e6-9641-42010af00007 in namespace 'e2e-tests-downward-api-qisuc' status to be 'success or failure'(found phase: "Pending", readiness: false) (5.981084ms elapsed) | |
Apr 21 22:55:04.167: INFO: Nil State.Terminated for container 'dapi-container' in pod 'downward-api-c116b8ca-084e-11e6-9641-42010af00007' in namespace 'e2e-tests-downward-api-qisuc' so far | |
Apr 21 22:55:04.167: INFO: Waiting for pod downward-api-c116b8ca-084e-11e6-9641-42010af00007 in namespace 'e2e-tests-downward-api-qisuc' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.010173449s elapsed) | |
STEP: Saw pod success | |
STEP: Trying to get logs from node e2e-gce-master-1-minion-fyts pod downward-api-c116b8ca-084e-11e6-9641-42010af00007 container dapi-container: <nil> | |
STEP: Successfully fetched pod logs: KUBERNETES_PORT=tcp://10.0.0.1:443
KUBERNETES_SERVICE_PORT=443 | |
HOSTNAME=downward-api-c116b8ca-084e-11e6-9641-42010af00007 | |
SHLVL=1 | |
HOME=/root | |
KUBERNETES_PORT_443_TCP_ADDR=10.0.0.1 | |
POD_NAME=downward-api-c116b8ca-084e-11e6-9641-42010af00007 | |
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin | |
KUBERNETES_PORT_443_TCP_PORT=443 | |
KUBERNETES_PORT_443_TCP_PROTO=tcp | |
KUBERNETES_PORT_443_TCP=tcp://10.0.0.1:443 | |
KUBERNETES_SERVICE_PORT_HTTPS=443 | |
POD_NAMESPACE=e2e-tests-downward-api-qisuc | |
PWD=/ | |
KUBERNETES_SERVICE_HOST=10.0.0.1 | |
[AfterEach] [k8s.io] Downward API | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:55:06.246: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-downward-api-qisuc" for this suite. | |
• [SLOW TEST:9.200 seconds] | |
[k8s.io] Downward API | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should provide pod name and namespace as env vars [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/downward_api.go:61 | |
------------------------------ | |
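The Downward API test above checks that POD_NAME and POD_NAMESPACE appear in the container's environment, injected via `fieldRef` from pod metadata. A minimal sketch of the kind of manifest such a test exercises (pod name, image, and command here are illustrative, not the actual fixture from test/e2e/downward_api.go):

```yaml
# Illustrative sketch only -- not the actual e2e test fixture.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: gcr.io/google_containers/busybox
    command: ["sh", "-c", "env"]   # dump the environment, as seen in the fetched logs above
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name       # resolves to the pod's own name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace  # resolves to the pod's namespace
```

The remaining KUBERNETES_* variables in the log are standard service-environment variables the kubelet injects for the cluster's `kubernetes` service, not part of the downward API itself.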
[BeforeEach] [k8s.io] Mesos
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:55:02.599: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Mesos
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/mesos.go:42
Apr 21 22:55:02.640: INFO: Only supported for providers [mesos/docker] (not gce)
[AfterEach] [k8s.io] Mesos
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:55:02.641: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-sosh6" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [10.071 seconds]
[k8s.io] Mesos
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
schedules pods annotated with roles on correct slaves [BeforeEach]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/mesos.go:119
Apr 21 22:55:02.640: Only supported for providers [mesos/docker] (not gce)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:276
------------------------------
S
------------------------------
[BeforeEach] [k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:53:48.953: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:142
[BeforeEach] [k8s.io] Kubectl label
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:798
STEP: creating the pod
Apr 21 22:53:50.947: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config create -f /jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/docs/user-guide/pod.yaml --namespace=e2e-tests-kubectl-4z3tr'
Apr 21 22:53:51.313: INFO: stderr: ""
Apr 21 22:53:51.313: INFO: stdout: "pod \"nginx\" created"
Apr 21 22:53:51.313: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [nginx]
Apr 21 22:53:51.314: INFO: Waiting up to 5m0s for pod nginx status to be running and ready
Apr 21 22:53:51.346: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-4z3tr' status to be 'running and ready'(found phase: "Pending", readiness: false) (32.537653ms elapsed)
Apr 21 22:53:53.350: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-4z3tr' status to be 'running and ready'(found phase: "Pending", readiness: false) (2.036050766s elapsed)
Apr 21 22:53:55.353: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-4z3tr' status to be 'running and ready'(found phase: "Pending", readiness: false) (4.039272083s elapsed)
Apr 21 22:53:57.356: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-4z3tr' status to be 'running and ready'(found phase: "Pending", readiness: false) (6.042202548s elapsed)
Apr 21 22:53:59.359: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-4z3tr' status to be 'running and ready'(found phase: "Pending", readiness: false) (8.045732975s elapsed)
Apr 21 22:54:01.363: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-4z3tr' status to be 'running and ready'(found phase: "Pending", readiness: false) (10.049583573s elapsed)
Apr 21 22:54:03.367: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-4z3tr' status to be 'running and ready'(found phase: "Pending", readiness: false) (12.053501575s elapsed)
Apr 21 22:54:05.371: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-4z3tr' status to be 'running and ready'(found phase: "Pending", readiness: false) (14.056900924s elapsed)
Apr 21 22:54:07.374: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-4z3tr' status to be 'running and ready'(found phase: "Pending", readiness: false) (16.060468108s elapsed)
Apr 21 22:54:09.378: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-4z3tr' status to be 'running and ready'(found phase: "Pending", readiness: false) (18.064621299s elapsed)
Apr 21 22:54:11.383: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-4z3tr' status to be 'running and ready'(found phase: "Pending", readiness: false) (20.069649466s elapsed)
Apr 21 22:54:13.386: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-4z3tr' status to be 'running and ready'(found phase: "Pending", readiness: false) (22.072656458s elapsed)
Apr 21 22:54:15.390: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-4z3tr' status to be 'running and ready'(found phase: "Pending", readiness: false) (24.076102325s elapsed)
Apr 21 22:54:17.393: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-4z3tr' status to be 'running and ready'(found phase: "Pending", readiness: false) (26.079736009s elapsed)
Apr 21 22:54:19.397: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-4z3tr' status to be 'running and ready'(found phase: "Pending", readiness: false) (28.083448998s elapsed)
Apr 21 22:54:21.401: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-4z3tr' status to be 'running and ready'(found phase: "Pending", readiness: false) (30.087255468s elapsed)
Apr 21 22:54:23.404: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-4z3tr' status to be 'running and ready'(found phase: "Pending", readiness: false) (32.090375611s elapsed)
Apr 21 22:54:25.413: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-4z3tr' status to be 'running and ready'(found phase: "Pending", readiness: false) (34.099379605s elapsed)
Apr 21 22:54:27.418: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-4z3tr' status to be 'running and ready'(found phase: "Pending", readiness: false) (36.104022748s elapsed)
Apr 21 22:54:29.421: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-4z3tr' status to be 'running and ready'(found phase: "Pending", readiness: false) (38.107700963s elapsed)
Apr 21 22:54:31.425: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-4z3tr' status to be 'running and ready'(found phase: "Pending", readiness: false) (40.111382443s elapsed)
Apr 21 22:54:33.428: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-4z3tr' status to be 'running and ready'(found phase: "Pending", readiness: false) (42.114692454s elapsed)
Apr 21 22:54:35.432: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-4z3tr' status to be 'running and ready'(found phase: "Pending", readiness: false) (44.11839117s elapsed)
Apr 21 22:54:37.441: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-4z3tr' status to be 'running and ready'(found phase: "Pending", readiness: false) (46.127615671s elapsed)
Apr 21 22:54:39.444: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-4z3tr' status to be 'running and ready'(found phase: "Pending", readiness: false) (48.130623594s elapsed)
Apr 21 22:54:41.448: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-4z3tr' status to be 'running and ready'(found phase: "Pending", readiness: false) (50.133983931s elapsed)
Apr 21 22:54:43.451: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-4z3tr' status to be 'running and ready'(found phase: "Pending", readiness: false) (52.137258753s elapsed)
Apr 21 22:54:45.523: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-4z3tr' status to be 'running and ready'(found phase: "Pending", readiness: false) (54.209458486s elapsed)
Apr 21 22:54:47.529: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-4z3tr' status to be 'running and ready'(found phase: "Pending", readiness: false) (56.215256544s elapsed)
Apr 21 22:54:49.534: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-4z3tr' status to be 'running and ready'(found phase: "Pending", readiness: false) (58.219924733s elapsed)
Apr 21 22:54:51.543: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-4z3tr' status to be 'running and ready'(found phase: "Pending", readiness: false) (1m0.229721643s elapsed)
Apr 21 22:54:53.546: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-4z3tr' status to be 'running and ready'(found phase: "Pending", readiness: false) (1m2.232844874s elapsed)
Apr 21 22:54:55.550: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-4z3tr' status to be 'running and ready'(found phase: "Pending", readiness: false) (1m4.236705982s elapsed)
Apr 21 22:54:57.554: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-4z3tr' status to be 'running and ready'(found phase: "Pending", readiness: false) (1m6.240694233s elapsed)
Apr 21 22:54:59.559: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-4z3tr' status to be 'running and ready'(found phase: "Pending", readiness: false) (1m8.245506796s elapsed)
Apr 21 22:55:01.565: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-4z3tr' status to be 'running and ready'(found phase: "Pending", readiness: false) (1m10.2510396s elapsed)
Apr 21 22:55:03.569: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [nginx]
[It] should update the label on a resource [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:822
STEP: adding the label testing-label with value testing-label-value to a pod
Apr 21 22:55:03.569: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config label pods nginx testing-label=testing-label-value --namespace=e2e-tests-kubectl-4z3tr'
Apr 21 22:55:03.650: INFO: stderr: ""
Apr 21 22:55:03.650: INFO: stdout: "pod \"nginx\" labeled"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Apr 21 22:55:03.650: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pod nginx -L testing-label --namespace=e2e-tests-kubectl-4z3tr'
Apr 21 22:55:03.718: INFO: stderr: ""
Apr 21 22:55:03.718: INFO: stdout: "NAME      READY     STATUS    RESTARTS   AGE       TESTING-LABEL\nnginx     1/1       Running   0          1m        testing-label-value"
STEP: removing the label testing-label of a pod
Apr 21 22:55:03.718: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config label pods nginx testing-label- --namespace=e2e-tests-kubectl-4z3tr'
Apr 21 22:55:03.803: INFO: stderr: ""
Apr 21 22:55:03.803: INFO: stdout: "pod \"nginx\" labeled"
STEP: verifying the pod doesn't have the label testing-label
Apr 21 22:55:03.803: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pod nginx -L testing-label --namespace=e2e-tests-kubectl-4z3tr'
Apr 21 22:55:03.872: INFO: stderr: ""
Apr 21 22:55:03.872: INFO: stdout: "NAME      READY     STATUS    RESTARTS   AGE       TESTING-LABEL\nnginx     1/1       Running   0          1m        <none>"
[AfterEach] [k8s.io] Kubectl label
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:801
STEP: using delete to clean up resources
Apr 21 22:55:03.872: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config delete --grace-period=0 -f /jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/docs/user-guide/pod.yaml --namespace=e2e-tests-kubectl-4z3tr'
Apr 21 22:55:03.972: INFO: stderr: ""
Apr 21 22:55:03.973: INFO: stdout: "pod \"nginx\" deleted"
Apr 21 22:55:03.973: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-4z3tr'
Apr 21 22:55:04.048: INFO: stderr: ""
Apr 21 22:55:04.048: INFO: stdout: ""
Apr 21 22:55:04.048: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-4z3tr -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 21 22:55:04.122: INFO: stderr: ""
Apr 21 22:55:04.123: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:55:04.123: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-4z3tr" for this suite.
• [SLOW TEST:85.190 seconds]
[k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
[k8s.io] Kubectl label
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
should update the label on a resource [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:822
------------------------------
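The Kubectl label test above drives kubectl's label syntax: `key=value` adds or updates a label, while a trailing `-` (as in `testing-label-`) removes it. A rough Python sketch of those semantics applied to a pod's metadata.labels map (function and variable names are illustrative, not from the kubectl source):

```python
def apply_label_arg(labels, arg):
    """Apply one kubectl-style label argument to a labels dict:
    'key=value' sets the label, a bare 'key-' removes it."""
    if "=" not in arg and arg.endswith("-"):
        labels.pop(arg[:-1], None)  # removal form: strip trailing '-', drop key if present
    else:
        key, _, value = arg.partition("=")
        labels[key] = value
    return labels

labels = {}
apply_label_arg(labels, "testing-label=testing-label-value")
print(labels)  # {'testing-label': 'testing-label-value'}
apply_label_arg(labels, "testing-label-")
print(labels)  # {}
```

The `-L testing-label` verification step in the log then surfaces the label as an extra column, showing the value after the add and `<none>` after the removal.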
[BeforeEach] [k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:54:00.318: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:142
[BeforeEach] [k8s.io] Update Demo
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:150
[It] should scale a replication controller [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:171
STEP: creating a replication controller
Apr 21 22:54:00.451: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config create -f /jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/docs/user-guide/update-demo/nautilus-rc.yaml --namespace=e2e-tests-kubectl-dy1so'
Apr 21 22:54:00.565: INFO: stderr: ""
Apr 21 22:54:00.565: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" created"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 21 22:54:00.565: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-dy1so'
Apr 21 22:54:00.649: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Apr 21 22:54:00.650: INFO: stdout: "update-demo-nautilus-oam2h update-demo-nautilus-vo7p2"
Apr 21 22:54:00.650: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods update-demo-nautilus-oam2h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-dy1so'
Apr 21 22:54:00.735: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Apr 21 22:54:00.735: INFO: stdout: ""
Apr 21 22:54:00.735: INFO: update-demo-nautilus-oam2h is created but not running
Apr 21 22:54:05.735: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-dy1so'
Apr 21 22:54:05.810: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Apr 21 22:54:05.810: INFO: stdout: "update-demo-nautilus-oam2h update-demo-nautilus-vo7p2"
Apr 21 22:54:05.810: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods update-demo-nautilus-oam2h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-dy1so'
Apr 21 22:54:05.884: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Apr 21 22:54:05.884: INFO: stdout: ""
Apr 21 22:54:05.884: INFO: update-demo-nautilus-oam2h is created but not running
Apr 21 22:54:10.884: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-dy1so'
Apr 21 22:54:10.962: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Apr 21 22:54:10.962: INFO: stdout: "update-demo-nautilus-oam2h update-demo-nautilus-vo7p2"
Apr 21 22:54:10.962: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods update-demo-nautilus-oam2h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-dy1so'
Apr 21 22:54:11.041: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Apr 21 22:54:11.042: INFO: stdout: ""
Apr 21 22:54:11.042: INFO: update-demo-nautilus-oam2h is created but not running
Apr 21 22:54:16.042: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-dy1so'
Apr 21 22:54:16.124: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Apr 21 22:54:16.124: INFO: stdout: "update-demo-nautilus-oam2h update-demo-nautilus-vo7p2"
Apr 21 22:54:16.124: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods update-demo-nautilus-oam2h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-dy1so'
Apr 21 22:54:16.200: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Apr 21 22:54:16.200: INFO: stdout: ""
Apr 21 22:54:16.200: INFO: update-demo-nautilus-oam2h is created but not running
Apr 21 22:54:21.200: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-dy1so'
Apr 21 22:54:21.302: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Apr 21 22:54:21.302: INFO: stdout: "update-demo-nautilus-oam2h update-demo-nautilus-vo7p2"
Apr 21 22:54:21.302: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods update-demo-nautilus-oam2h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-dy1so'
Apr 21 22:54:21.384: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Apr 21 22:54:21.384: INFO: stdout: ""
Apr 21 22:54:21.384: INFO: update-demo-nautilus-oam2h is created but not running
Apr 21 22:54:26.385: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-dy1so'
Apr 21 22:54:26.464: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Apr 21 22:54:26.464: INFO: stdout: "update-demo-nautilus-oam2h update-demo-nautilus-vo7p2"
Apr 21 22:54:26.464: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods update-demo-nautilus-oam2h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-dy1so'
Apr 21 22:54:26.536: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Apr 21 22:54:26.536: INFO: stdout: ""
Apr 21 22:54:26.536: INFO: update-demo-nautilus-oam2h is created but not running
Apr 21 22:54:31.536: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-dy1so'
Apr 21 22:54:31.627: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Apr 21 22:54:31.627: INFO: stdout: "update-demo-nautilus-oam2h update-demo-nautilus-vo7p2"
Apr 21 22:54:31.627: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods update-demo-nautilus-oam2h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-dy1so'
Apr 21 22:54:31.700: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Apr 21 22:54:31.701: INFO: stdout: ""
Apr 21 22:54:31.701: INFO: update-demo-nautilus-oam2h is created but not running
Apr 21 22:54:36.701: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-dy1so'
Apr 21 22:54:36.777: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Apr 21 22:54:36.777: INFO: stdout: "update-demo-nautilus-oam2h update-demo-nautilus-vo7p2"
Apr 21 22:54:36.777: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods update-demo-nautilus-oam2h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-dy1so'
Apr 21 22:54:36.862: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Apr 21 22:54:36.862: INFO: stdout: ""
Apr 21 22:54:36.862: INFO: update-demo-nautilus-oam2h is created but not running
Apr 21 22:54:41.863: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-dy1so'
Apr 21 22:54:42.026: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Apr 21 22:54:42.026: INFO: stdout: "update-demo-nautilus-oam2h update-demo-nautilus-vo7p2"
Apr 21 22:54:42.026: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods update-demo-nautilus-oam2h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-dy1so'
Apr 21 22:54:42.099: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Apr 21 22:54:42.099: INFO: stdout: ""
Apr 21 22:54:42.099: INFO: update-demo-nautilus-oam2h is created but not running
Apr 21 22:54:47.099: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-dy1so'
Apr 21 22:54:47.178: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Apr 21 22:54:47.178: INFO: stdout: "update-demo-nautilus-oam2h update-demo-nautilus-vo7p2"
Apr 21 22:54:47.178: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods update-demo-nautilus-oam2h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-dy1so'
Apr 21 22:54:47.255: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Apr 21 22:54:47.255: INFO: stdout: "true"
Apr 21 22:54:47.255: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods update-demo-nautilus-oam2h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-dy1so'
Apr 21 22:54:47.330: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Apr 21 22:54:47.330: INFO: stdout: "gcr.io/google_containers/update-demo:nautilus"
Apr 21 22:54:47.330: INFO: validating pod update-demo-nautilus-oam2h
Apr 21 22:54:47.391: INFO: got data: {
  "image": "nautilus.jpg"
}
Apr 21 22:54:47.391: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 21 22:54:47.391: INFO: update-demo-nautilus-oam2h is verified up and running
Apr 21 22:54:47.391: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods update-demo-nautilus-vo7p2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-dy1so'
Apr 21 22:54:47.465: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Apr 21 22:54:47.465: INFO: stdout: "true"
Apr 21 22:54:47.465: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods update-demo-nautilus-vo7p2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-dy1so'
Apr 21 22:54:47.551: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n" | |
Apr 21 22:54:47.551: INFO: stdout: "gcr.io/google_containers/update-demo:nautilus" | |
Apr 21 22:54:47.551: INFO: validating pod update-demo-nautilus-vo7p2 | |
Apr 21 22:54:47.563: INFO: got data: { | |
"image": "nautilus.jpg" | |
} | |
Apr 21 22:54:47.563: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . | |
Apr 21 22:54:47.563: INFO: update-demo-nautilus-vo7p2 is verified up and running | |
STEP: scaling down the replication controller
Apr 21 22:54:47.563: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-dy1so'
Apr 21 22:54:49.652: INFO: stderr: ""
Apr 21 22:54:49.652: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" scaled"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 21 22:54:49.652: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-dy1so'
Apr 21 22:54:49.733: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Apr 21 22:54:49.733: INFO: stdout: "update-demo-nautilus-oam2h update-demo-nautilus-vo7p2"
STEP: Replicas for name=update-demo: expected=1 actual=2
Apr 21 22:54:54.733: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-dy1so'
Apr 21 22:54:54.813: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Apr 21 22:54:54.813: INFO: stdout: "update-demo-nautilus-vo7p2"
Apr 21 22:54:54.813: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods update-demo-nautilus-vo7p2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-dy1so'
Apr 21 22:54:54.888: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Apr 21 22:54:54.888: INFO: stdout: "true"
Apr 21 22:54:54.888: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods update-demo-nautilus-vo7p2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-dy1so'
Apr 21 22:54:54.963: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Apr 21 22:54:54.963: INFO: stdout: "gcr.io/google_containers/update-demo:nautilus"
Apr 21 22:54:54.963: INFO: validating pod update-demo-nautilus-vo7p2
Apr 21 22:54:54.968: INFO: got data: {
  "image": "nautilus.jpg"
}
Apr 21 22:54:54.968: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 21 22:54:54.968: INFO: update-demo-nautilus-vo7p2 is verified up and running
STEP: scaling up the replication controller
Apr 21 22:54:54.969: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-dy1so'
Apr 21 22:54:57.072: INFO: stderr: ""
Apr 21 22:54:57.072: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" scaled"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 21 22:54:57.072: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-dy1so'
Apr 21 22:54:57.146: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Apr 21 22:54:57.146: INFO: stdout: "update-demo-nautilus-snwn9 update-demo-nautilus-vo7p2"
Apr 21 22:54:57.147: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods update-demo-nautilus-snwn9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-dy1so'
Apr 21 22:54:57.254: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Apr 21 22:54:57.254: INFO: stdout: ""
Apr 21 22:54:57.254: INFO: update-demo-nautilus-snwn9 is created but not running
Apr 21 22:55:02.254: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-dy1so'
Apr 21 22:55:02.373: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Apr 21 22:55:02.373: INFO: stdout: "update-demo-nautilus-snwn9 update-demo-nautilus-vo7p2"
Apr 21 22:55:02.373: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods update-demo-nautilus-snwn9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-dy1so'
Apr 21 22:55:02.445: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Apr 21 22:55:02.445: INFO: stdout: "true"
Apr 21 22:55:02.445: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods update-demo-nautilus-snwn9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-dy1so'
Apr 21 22:55:02.518: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Apr 21 22:55:02.518: INFO: stdout: "gcr.io/google_containers/update-demo:nautilus"
Apr 21 22:55:02.518: INFO: validating pod update-demo-nautilus-snwn9
Apr 21 22:55:02.526: INFO: got data: {
  "image": "nautilus.jpg"
}
Apr 21 22:55:02.526: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 21 22:55:02.526: INFO: update-demo-nautilus-snwn9 is verified up and running
Apr 21 22:55:02.526: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods update-demo-nautilus-vo7p2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-dy1so'
Apr 21 22:55:02.600: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Apr 21 22:55:02.600: INFO: stdout: "true"
Apr 21 22:55:02.600: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods update-demo-nautilus-vo7p2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-dy1so'
Apr 21 22:55:02.676: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Apr 21 22:55:02.676: INFO: stdout: "gcr.io/google_containers/update-demo:nautilus"
Apr 21 22:55:02.676: INFO: validating pod update-demo-nautilus-vo7p2
Apr 21 22:55:02.682: INFO: got data: {
  "image": "nautilus.jpg"
}
Apr 21 22:55:02.682: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 21 22:55:02.682: INFO: update-demo-nautilus-vo7p2 is verified up and running
STEP: using delete to clean up resources
Apr 21 22:55:02.682: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config delete --grace-period=0 -f /jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/docs/user-guide/update-demo/nautilus-rc.yaml --namespace=e2e-tests-kubectl-dy1so'
Apr 21 22:55:04.814: INFO: stderr: ""
Apr 21 22:55:04.814: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" deleted"
Apr 21 22:55:04.814: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-dy1so'
Apr 21 22:55:04.887: INFO: stderr: ""
Apr 21 22:55:04.887: INFO: stdout: ""
Apr 21 22:55:04.888: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-dy1so -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 21 22:55:04.960: INFO: stderr: ""
Apr 21 22:55:04.960: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:55:04.960: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-dy1so" for this suite.
• [SLOW TEST:74.675 seconds]
[k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
  [k8s.io] Update Demo
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
    should scale a replication controller [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:171
------------------------------
SS
------------------------------
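Editor's note: the scale test above issues the same `kubectl get pods` query every five seconds until the pod list matches the requested replica count. A minimal, cluster-free sketch of that poll-until-timeout pattern (the names `wait_for`, `check`, etc. are illustrative, not the e2e framework's actual API):

```python
import time

def wait_for(check, timeout=300.0, interval=5.0,
             clock=time.monotonic, sleep=time.sleep):
    """Poll check() until it returns True or `timeout` seconds elapse.

    Mirrors the retry cadence visible in the log above: one query,
    then a fixed sleep, repeated until success or deadline.
    Returns True on success, False if the deadline passes first.
    """
    deadline = clock() + timeout
    while True:
        if check():
            return True
        if clock() >= deadline:
            return False
        sleep(interval)
```

In the real test, `check` would run the kubectl template query and compare the space-separated pod names against the expected replica count; `clock` and `sleep` are injectable only to make the sketch testable.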
[BeforeEach] [k8s.io] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:53:48.908: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be submitted and removed [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:407
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
W0421 22:53:49.218763   17620 request.go:344] Field selector: v1 - pods - metadata.name - pod-update-9591284e-084e-11e6-9214-42010af00007: need to check if this is versioned correctly.
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Apr 21 22:55:06.715: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:55:06.725: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-hh2e1" for this suite.
• [SLOW TEST:87.914 seconds]
[k8s.io] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
  should be submitted and removed [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:407
------------------------------
[BeforeEach] [k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:55:12.675: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:142
[It] should check if v1 is in available api versions [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:557
STEP: validating api verions
Apr 21 22:55:12.718: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config api-versions'
Apr 21 22:55:12.785: INFO: stderr: ""
Apr 21 22:55:12.785: INFO: stdout: "apps/v1alpha1\nautoscaling/v1\nbatch/v1\nextensions/v1beta1\nv1"
[AfterEach] [k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:55:12.785: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-4b26g" for this suite.
• [SLOW TEST:10.142 seconds]
[k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
  [k8s.io] Kubectl api-versions
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
    should check if v1 is in available api versions [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:557
------------------------------
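Editor's note: the api-versions test above simply asserts that `v1` appears in the newline-separated stdout of `kubectl api-versions`. A sketch of that check, using the exact stdout captured in this log (`available_versions` is a hypothetical helper, not a real client call):

```python
# stdout exactly as captured by the test above
stdout = "apps/v1alpha1\nautoscaling/v1\nbatch/v1\nextensions/v1beta1\nv1"

def available_versions(raw):
    """Split `kubectl api-versions` output into a set of group/version strings."""
    return {line.strip() for line in raw.splitlines() if line.strip()}

# The conformance assertion: core v1 must be served.
assert "v1" in available_versions(stdout)
```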
[BeforeEach] [k8s.io] Generated release_1_2 clientset
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:55:03.859: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create pods, delete pods, watch pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/generated_clientset.go:168
STEP: constructing the pod
STEP: setting up watch
STEP: creating the pod
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
W0421 22:55:03.950260   17751 request.go:344] Field selector: v1 - pods - metadata.name - podc2246d34-084e-11e6-b067-42010af00007: need to check if this is versioned correctly.
STEP: deleting the pod gracefully
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Generated release_1_2 clientset
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:55:12.964: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-clientset-bm5ek" for this suite.
• [SLOW TEST:19.126 seconds]
[k8s.io] Generated release_1_2 clientset
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
  should create pods, delete pods, watch pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/generated_clientset.go:168
------------------------------
[BeforeEach] [k8s.io] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:55:05.848: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should contain environment variables for services [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:660
W0421 22:55:05.918078   17597 request.go:344] Field selector: v1 - pods - metadata.name - server-envvars-c3551937-084e-11e6-82d3-42010af00007: need to check if this is versioned correctly.
STEP: Creating a pod to test service env
Apr 21 22:55:08.495: INFO: Waiting up to 5m0s for pod client-envvars-c4e081a1-084e-11e6-82d3-42010af00007 status to be success or failure
Apr 21 22:55:08.499: INFO: No Status.Info for container 'env3cont' in pod 'client-envvars-c4e081a1-084e-11e6-82d3-42010af00007' yet
Apr 21 22:55:08.499: INFO: Waiting for pod client-envvars-c4e081a1-084e-11e6-82d3-42010af00007 in namespace 'e2e-tests-pods-qepy2' status to be 'success or failure'(found phase: "Pending", readiness: false) (3.080799ms elapsed)
Apr 21 22:55:10.502: INFO: Nil State.Terminated for container 'env3cont' in pod 'client-envvars-c4e081a1-084e-11e6-82d3-42010af00007' in namespace 'e2e-tests-pods-qepy2' so far
Apr 21 22:55:10.502: INFO: Waiting for pod client-envvars-c4e081a1-084e-11e6-82d3-42010af00007 in namespace 'e2e-tests-pods-qepy2' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.00673373s elapsed)
Apr 21 22:55:12.506: INFO: Nil State.Terminated for container 'env3cont' in pod 'client-envvars-c4e081a1-084e-11e6-82d3-42010af00007' in namespace 'e2e-tests-pods-qepy2' so far
Apr 21 22:55:12.508: INFO: Waiting for pod client-envvars-c4e081a1-084e-11e6-82d3-42010af00007 in namespace 'e2e-tests-pods-qepy2' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.012616693s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node e2e-gce-master-1-minion-x3cg pod client-envvars-c4e081a1-084e-11e6-82d3-42010af00007 container env3cont: <nil>
STEP: Successfully fetched pod logs:KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.0.0.1:443
FOOSERVICE_PORT_8765_TCP_PORT=8765
FOOSERVICE_PORT_8765_TCP_PROTO=tcp
HOSTNAME=client-envvars-c4e081a1-084e-11e6-82d3-42010af00007
SHLVL=1
HOME=/root
FOOSERVICE_PORT_8765_TCP=tcp://10.0.253.24:8765
KUBERNETES_PORT_443_TCP_ADDR=10.0.0.1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
FOOSERVICE_SERVICE_HOST=10.0.253.24
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.0.0.1:443
PWD=/
KUBERNETES_SERVICE_HOST=10.0.0.1
FOOSERVICE_SERVICE_PORT=8765
FOOSERVICE_PORT=tcp://10.0.253.24:8765
FOOSERVICE_PORT_8765_TCP_ADDR=10.0.253.24
[AfterEach] [k8s.io] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:55:14.607: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-qepy2" for this suite.
• [SLOW TEST:18.783 seconds]
[k8s.io] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
  should contain environment variables for services [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:660
------------------------------
SS
------------------------------
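Editor's note: the pod logs fetched above show the Docker-links-style environment variables Kubernetes injects for each service (here a service named `fooservice` on port 8765). A sketch deriving the expected variable names from a service name and port; the helper is illustrative, reconstructed from the names visible in the log, and is not part of any Kubernetes client library:

```python
def service_env_names(service_name, port):
    """Env var names injected for a service, as seen in the pod logs above.

    Upper-cases the service name (dashes become underscores) and builds
    the {NAME}_SERVICE_* and {NAME}_PORT_{port}_TCP_* families.
    """
    base = service_name.upper().replace("-", "_")
    return [
        f"{base}_SERVICE_HOST",
        f"{base}_SERVICE_PORT",
        f"{base}_PORT",
        f"{base}_PORT_{port}_TCP",
        f"{base}_PORT_{port}_TCP_PROTO",
        f"{base}_PORT_{port}_TCP_PORT",
        f"{base}_PORT_{port}_TCP_ADDR",
    ]
```

Every name this helper produces for `("fooservice", 8765)` appears in the fetched pod logs, which is essentially what the conformance test asserts.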
[BeforeEach] [k8s.io] Port forwarding
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:54:56.927: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support a client that connects, sends no data, and disconnects [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:196
STEP: creating the target pod
W0421 22:54:56.980905   17726 request.go:344] Field selector: v1 - pods - metadata.name - pfpod: need to check if this is versioned correctly.
STEP: Running 'kubectl port-forward'
Apr 21 22:55:12.893: INFO: starting port-forward command and streaming output
Apr 21 22:55:12.893: INFO: Asynchronously running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config port-forward --namespace=e2e-tests-port-forwarding-cu4si pfpod :80'
Apr 21 22:55:12.895: INFO: reading from `kubectl port-forward` command's stderr
STEP: Dialing the local port
STEP: Closing the connection to the local port
STEP: Waiting for the target pod to stop running
W0421 22:55:13.023541   17726 request.go:344] Field selector: v1 - pods - metadata.name - pfpod: need to check if this is versioned correctly.
STEP: Retrieving logs from the target pod
STEP: Verifying logs
[AfterEach] [k8s.io] Port forwarding
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:55:14.734: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-port-forwarding-cu4si" for this suite.
• [SLOW TEST:32.827 seconds]
[k8s.io] Port forwarding
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
  [k8s.io] With a server that expects a client request
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
    should support a client that connects, sends no data, and disconnects [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:196
------------------------------
S
------------------------------
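Editor's note: the client half of the port-forward test above is just "dial the local port, write nothing, close". A self-contained sketch of that step using a local listener as a stand-in for the `kubectl port-forward` end (no cluster required; all names here are illustrative):

```python
import socket

def connect_send_nothing_and_close(port):
    """Open a TCP connection to 127.0.0.1:port, send no bytes, close it.

    Mirrors the 'connects, sends no data, and disconnects' client; in the
    real test, `port` is the local end chosen by `kubectl port-forward`.
    """
    with socket.create_connection(("127.0.0.1", port), timeout=5):
        pass  # no data written; closing the socket is the whole interaction

# Local stand-in for the forwarded port:
listener = socket.socket()
listener.bind(("127.0.0.1", 0))  # ephemeral port, like `pfpod :80`
listener.listen(1)
connect_send_nothing_and_close(listener.getsockname()[1])

# The server side sees the connection and then EOF:
conn, _ = listener.accept()
received = conn.recv(1024)  # b"" once the peer has closed
conn.close()
listener.close()
```

The real test then waits for the target pod to exit and verifies its logs recorded exactly this connect-then-EOF sequence.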
[BeforeEach] [k8s.io] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:53:48.919: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] RollingUpdateDeployment should delete old pods and create new ones
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:58
Apr 21 22:53:49.519: INFO: Pod name sample-pod: Found 0 pods out of 3
Apr 21 22:53:54.534: INFO: Pod name sample-pod: Found 3 pods out of 3
STEP: ensuring each pod is running
W0421 22:53:54.534947   17697 request.go:344] Field selector: v1 - pods - metadata.name - test-rolling-update-controller-0ibye: need to check if this is versioned correctly.
W0421 22:54:22.067596   17697 request.go:344] Field selector: v1 - pods - metadata.name - test-rolling-update-controller-a0j91: need to check if this is versioned correctly.
W0421 22:54:24.786501   17697 request.go:344] Field selector: v1 - pods - metadata.name - test-rolling-update-controller-pmlxk: need to check if this is versioned correctly.
STEP: trying to dial each unique pod
Apr 21 22:55:01.866: INFO: Controller sample-pod: Got non-empty result from replica 1 [test-rolling-update-controller-0ibye]: "<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\n    body {\n        width: 35em;\n        margin: 0 auto;\n        font-family: Tahoma, Verdana, Arial, sans-serif;\n    }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href=\"http://nginx.org/\">nginx.org</a>.<br/>\nCommercial support is available at\n<a href=\"http://nginx.com/\">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n", 1 of 3 required successes so far
Apr 21 22:55:01.877: INFO: Controller sample-pod: Got non-empty result from replica 2 [test-rolling-update-controller-a0j91]: "<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\n    body {\n        width: 35em;\n        margin: 0 auto;\n        font-family: Tahoma, Verdana, Arial, sans-serif;\n    }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href=\"http://nginx.org/\">nginx.org</a>.<br/>\nCommercial support is available at\n<a href=\"http://nginx.com/\">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n", 2 of 3 required successes so far
Apr 21 22:55:01.913: INFO: Controller sample-pod: Got non-empty result from replica 3 [test-rolling-update-controller-pmlxk]: "<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\n    body {\n        width: 35em;\n        margin: 0 auto;\n        font-family: Tahoma, Verdana, Arial, sans-serif;\n    }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href=\"http://nginx.org/\">nginx.org</a>.<br/>\nCommercial support is available at\n<a href=\"http://nginx.com/\">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n", 3 of 3 required successes so far
Apr 21 22:55:01.913: INFO: Creating deployment test-rolling-update-deployment
Apr 21 22:55:15.965: INFO: Deleting deployment test-rolling-update-deployment
Apr 21 22:55:20.046: INFO: Ensuring deployment test-rolling-update-deployment was deleted
Apr 21 22:55:20.048: INFO: Ensuring deployment test-rolling-update-deployment's RSes were deleted
Apr 21 22:55:20.051: INFO: Ensuring deployment test-rolling-update-deployment's Pods were deleted
[AfterEach] [k8s.io] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:55:20.053: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-6458h" for this suite.
• [SLOW TEST:101.153 seconds]
[k8s.io] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
  RollingUpdateDeployment should delete old pods and create new ones
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:58
------------------------------
SSS
------------------------------
[BeforeEach] [k8s.io] V1Job | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:55:08.693: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should keep restarting failed pods | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:116 | |
STEP: Creating a job | |
STEP: Ensuring job shows many failures | |
[AfterEach] [k8s.io] V1Job | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:55:20.767: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-v1job-zin6p" for this suite. | |
• [SLOW TEST:27.095 seconds] | |
[k8s.io] V1Job | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should keep restarting failed pods | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:116 | |
------------------------------ | |
S | |
------------------------------ | |
[BeforeEach] [k8s.io] Pods | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:55:22.986: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should allow activeDeadlineSeconds to be updated [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:573 | |
STEP: creating the pod | |
STEP: submitting the pod to kubernetes | |
W0421 22:55:23.048903 17751 request.go:344] Field selector: v1 - pods - metadata.name - pod-update-activedeadlineseconds-cd8afad9-084e-11e6-b067-42010af00007: need to check if this is versioned correctly. | |
STEP: verifying the pod is in kubernetes | |
STEP: updating the pod | |
Apr 21 22:55:27.913: INFO: Conflicting update to pod, re-get and re-update: Operation cannot be fulfilled on pods "pod-update-activedeadlineseconds-cd8afad9-084e-11e6-b067-42010af00007": the object has been modified; please apply your changes to the latest version and try again | |
STEP: updating the pod | |
Apr 21 22:55:28.419: INFO: Successfully updated pod | |
Apr 21 22:55:28.419: INFO: Waiting up to 5m0s for pod pod-update-activedeadlineseconds-cd8afad9-084e-11e6-b067-42010af00007 status to be terminated due to deadline exceeded | |
Apr 21 22:55:28.424: INFO: Waiting for pod pod-update-activedeadlineseconds-cd8afad9-084e-11e6-b067-42010af00007 in namespace 'e2e-tests-pods-mofvb' status to be 'terminated due to deadline exceeded'(found phase: "Running", readiness: true) (5.543311ms elapsed) | |
Apr 21 22:55:30.428: INFO: Waiting for pod pod-update-activedeadlineseconds-cd8afad9-084e-11e6-b067-42010af00007 in namespace 'e2e-tests-pods-mofvb' status to be 'terminated due to deadline exceeded'(found phase: "Running", readiness: true) (2.009626844s elapsed) | |
STEP: deleting the pod | |
[AfterEach] [k8s.io] Pods | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:55:32.493: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-pods-mofvb" for this suite. | |
• [SLOW TEST:14.558 seconds] | |
[k8s.io] Pods | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should allow activeDeadlineSeconds to be updated [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:573 | |
------------------------------ | |
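Editor's note: the "Conflicting update to pod, re-get and re-update" line above is the API server's optimistic-concurrency check in action — a write carrying a stale resourceVersion is rejected, and the client re-gets the object and re-applies its change. A minimal in-memory sketch of that retry pattern (the `Store`/`update_with_retry` names are illustrative, not client-go APIs):

```python
class Conflict(Exception):
    """Raised when a write carries a stale resourceVersion."""


class Store:
    """Tiny stand-in for an API server object store with versioned writes."""

    def __init__(self, obj):
        self.obj = dict(obj)

    def get(self):
        return dict(self.obj)

    def update(self, obj):
        # Reject writes based on a stale resourceVersion, as the API server does.
        if obj["resourceVersion"] != self.obj["resourceVersion"]:
            raise Conflict("the object has been modified")
        obj["resourceVersion"] += 1
        self.obj = dict(obj)


def update_with_retry(store, mutate, attempts=3):
    """Re-get the object and re-apply `mutate` until the write lands."""
    for _ in range(attempts):
        obj = store.get()
        mutate(obj)
        try:
            store.update(obj)
            return obj
        except Conflict:
            continue  # someone else won the race; fetch the new version
    raise RuntimeError("gave up after repeated conflicts")
```

This mirrors what the test does by hand: on conflict it fetches the latest version and re-sets `activeDeadlineSeconds` before writing again.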
[BeforeEach] [k8s.io] MetricsGrabber
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:55:22.818: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] MetricsGrabber
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/metrics_grabber_test.go:91
[It] should grab all metrics from a ControllerManager.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/metrics_grabber_test.go:158
STEP: Proxying to Pod through the API server
[AfterEach] [k8s.io] MetricsGrabber
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:55:22.930: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-metrics-grabber-v1dpz" for this suite.
• [SLOW TEST:15.130 seconds]
[k8s.io] MetricsGrabber
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
  should grab all metrics from a ControllerManager.
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/metrics_grabber_test.go:158
------------------------------
[BeforeEach] [k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:54:28.460: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:142
[It] should apply a new configuration to an existing RC
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:576
STEP: creating Redis RC
Apr 21 22:54:28.501: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config create -f /jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/examples/guestbook-go/redis-master-controller.json --namespace=e2e-tests-kubectl-0f32r'
Apr 21 22:54:28.612: INFO: stderr: ""
Apr 21 22:54:28.612: INFO: stdout: "replicationcontroller \"redis-master\" created"
STEP: applying a modified configuration
Apr 21 22:54:28.614: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config apply -f - --namespace=e2e-tests-kubectl-0f32r'
Apr 21 22:54:28.769: INFO: stderr: ""
Apr 21 22:54:28.769: INFO: stdout: "replicationcontroller \"redis-master\" configured"
STEP: checking the result
[AfterEach] [k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:54:28.797: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-0f32r" for this suite.
• [SLOW TEST:70.418 seconds]
[k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
  [k8s.io] Kubectl apply
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
    should apply a new configuration to an existing RC
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:576
------------------------------
SSS
------------------------------
[BeforeEach] [k8s.io] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:55:30.076: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:185
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: Ensuring job was deleted
[AfterEach] [k8s.io] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:55:36.200: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-job-e5fyt" for this suite.
• [SLOW TEST:11.150 seconds]
[k8s.io] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
  should delete a job
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:185
------------------------------
SS
------------------------------
[BeforeEach] [k8s.io] Kubernetes Dashboard
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:55:29.757: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should check that the kubernetes-dashboard instance is alive
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dashboard.go:87
STEP: Checking whether the kubernetes-dashboard service exists.
Apr 21 22:55:29.817: INFO: Service kubernetes-dashboard in namespace kube-system found.
STEP: Checking to make sure the kubernetes-dashboard pods are running
STEP: Checking to make sure we get a response from the kubernetes-dashboard.
STEP: Checking that the ApiServer /ui endpoint redirects to a valid server.
[AfterEach] [k8s.io] Kubernetes Dashboard
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:55:36.846: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubernetes-dashboard-vdrz8" for this suite.
• [SLOW TEST:12.161 seconds]
[k8s.io] Kubernetes Dashboard
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
  should check that the kubernetes-dashboard instance is alive
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dashboard.go:87
------------------------------
S
------------------------------
[BeforeEach] [k8s.io] V1Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:53:54.456: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should scale a job down
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:166
STEP: Creating a job
STEP: Ensuring active pods == startParallelism
STEP: scale job down
STEP: Ensuring active pods == endParallelism
[AfterEach] [k8s.io] V1Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:54:56.595: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-v1job-i65pc" for this suite.
• [SLOW TEST:112.182 seconds]
[k8s.io] V1Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
  should scale a job down
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:166
------------------------------
[BeforeEach] [k8s.io] ReplicaSet
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:55:14.997: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:40
STEP: Creating ReplicaSet my-hostname-basic-c8c8cf62-084e-11e6-84cd-42010af00007
Apr 21 22:55:15.066: INFO: Pod name my-hostname-basic-c8c8cf62-084e-11e6-84cd-42010af00007: Found 0 pods out of 2
Apr 21 22:55:20.070: INFO: Pod name my-hostname-basic-c8c8cf62-084e-11e6-84cd-42010af00007: Found 2 pods out of 2
STEP: Ensuring each pod is running
W0421 22:55:20.070826   17553 request.go:344] Field selector: v1 - pods - metadata.name - my-hostname-basic-c8c8cf62-084e-11e6-84cd-42010af00007-7r6mx: need to check if this is versioned correctly.
W0421 22:55:20.072862   17553 request.go:344] Field selector: v1 - pods - metadata.name - my-hostname-basic-c8c8cf62-084e-11e6-84cd-42010af00007-8kig6: need to check if this is versioned correctly.
STEP: Trying to dial each unique pod
Apr 21 22:55:33.257: INFO: Controller my-hostname-basic-c8c8cf62-084e-11e6-84cd-42010af00007: Got expected result from replica 1 [my-hostname-basic-c8c8cf62-084e-11e6-84cd-42010af00007-7r6mx]: "my-hostname-basic-c8c8cf62-084e-11e6-84cd-42010af00007-7r6mx", 1 of 2 required successes so far
Apr 21 22:55:33.269: INFO: Controller my-hostname-basic-c8c8cf62-084e-11e6-84cd-42010af00007: Got expected result from replica 2 [my-hostname-basic-c8c8cf62-084e-11e6-84cd-42010af00007-8kig6]: "my-hostname-basic-c8c8cf62-084e-11e6-84cd-42010af00007-8kig6", 2 of 2 required successes so far
STEP: deleting ReplicaSet my-hostname-basic-c8c8cf62-084e-11e6-84cd-42010af00007 in namespace e2e-tests-replicaset-ct4v8
Apr 21 22:55:35.296: INFO: Deleting RS my-hostname-basic-c8c8cf62-084e-11e6-84cd-42010af00007 took: 2.022779537s
Apr 21 22:55:37.302: INFO: Terminating ReplicaSet my-hostname-basic-c8c8cf62-084e-11e6-84cd-42010af00007 pods took: 2.006177852s
[AfterEach] [k8s.io] ReplicaSet
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:55:37.302: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-ct4v8" for this suite.
• [SLOW TEST:32.350 seconds]
[k8s.io] ReplicaSet
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
  should serve a basic image on each replica with a public image [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:40
------------------------------
[BeforeEach] [k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:54:11.353: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:142
[It] should create services for rc [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:786
STEP: creating Redis RC
Apr 21 22:54:11.458: INFO: namespace e2e-tests-kubectl-usglf
Apr 21 22:54:11.458: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config create -f /jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/examples/guestbook-go/redis-master-controller.json --namespace=e2e-tests-kubectl-usglf'
Apr 21 22:54:11.579: INFO: stderr: ""
Apr 21 22:54:11.579: INFO: stdout: "replicationcontroller \"redis-master\" created"
Apr 21 22:54:11.607: INFO: Selector matched 1 pods for map[app:redis]
Apr 21 22:54:11.607: INFO: ForEach: Found 0 pods from the filter.  Now looping through them.
STEP: exposing RC
Apr 21 22:54:11.607: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-usglf'
Apr 21 22:54:11.689: INFO: stderr: ""
Apr 21 22:54:11.689: INFO: stdout: "service \"rm2\" exposed"
Apr 21 22:54:11.692: INFO: Service rm2 in namespace e2e-tests-kubectl-usglf found.
Apr 21 22:54:13.695: INFO: Get endpoints failed (interval 2s): <nil>
Apr 21 22:54:15.695: INFO: No endpoint found, retrying
Apr 21 22:54:17.695: INFO: No endpoint found, retrying
Apr 21 22:54:19.695: INFO: No endpoint found, retrying
Apr 21 22:54:21.696: INFO: No endpoint found, retrying
Apr 21 22:54:23.695: INFO: No endpoint found, retrying
Apr 21 22:54:25.696: INFO: No endpoint found, retrying
Apr 21 22:54:27.695: INFO: No endpoint found, retrying
Apr 21 22:54:29.695: INFO: No endpoint found, retrying
Apr 21 22:54:31.695: INFO: No endpoint found, retrying
Apr 21 22:54:33.694: INFO: No endpoint found, retrying
Apr 21 22:54:35.695: INFO: No endpoint found, retrying
Apr 21 22:54:37.695: INFO: No endpoint found, retrying
Apr 21 22:54:39.699: INFO: No endpoint found, retrying
Apr 21 22:54:41.695: INFO: No endpoint found, retrying
Apr 21 22:54:43.696: INFO: No endpoint found, retrying
Apr 21 22:54:45.695: INFO: No endpoint found, retrying
Apr 21 22:54:47.706: INFO: No endpoint found, retrying
Apr 21 22:54:49.696: INFO: No endpoint found, retrying
Apr 21 22:54:51.695: INFO: No endpoint found, retrying
Apr 21 22:54:53.695: INFO: No endpoint found, retrying
Apr 21 22:54:55.695: INFO: No endpoint found, retrying
Apr 21 22:54:57.695: INFO: No endpoint found, retrying
Apr 21 22:54:59.695: INFO: No endpoint found, retrying
Apr 21 22:55:01.695: INFO: No endpoint found, retrying
Apr 21 22:55:03.695: INFO: No endpoint found, retrying
STEP: exposing service
Apr 21 22:55:05.698: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-usglf'
Apr 21 22:55:05.777: INFO: stderr: ""
Apr 21 22:55:05.777: INFO: stdout: "service \"rm3\" exposed"
Apr 21 22:55:05.782: INFO: Service rm3 in namespace e2e-tests-kubectl-usglf found.
[AfterEach] [k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:55:07.788: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-usglf" for this suite.
• [SLOW TEST:96.509 seconds]
[k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
  [k8s.io] Kubectl expose
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
    should create services for rc [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:786
------------------------------
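Editor's note: the run of "No endpoint found, retrying" lines above, spaced roughly 2s apart, is a plain poll-until-ready loop waiting for service endpoints. A minimal generic sketch of that pattern (the `poll_until` name and injectable `clock`/`sleep` parameters are illustrative, not the e2e framework's API):

```python
import time


def poll_until(check, interval=2.0, timeout=60.0,
               clock=time.monotonic, sleep=time.sleep):
    """Call `check` every `interval` seconds until it returns a truthy
    value or `timeout` elapses. The clock and sleep functions are
    injectable so the loop can be exercised without real waiting."""
    deadline = clock() + timeout
    while True:
        result = check()
        if result:
            return result
        if clock() >= deadline:
            raise TimeoutError("condition not met before timeout")
        sleep(interval)
```

In the log, the condition eventually succeeds after about 50 seconds of retries, at which point the test moves on to `STEP: exposing service`.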
[BeforeEach] [k8s.io] Downward API
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:55:37.546: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod IP as an env var
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/downward_api.go:82
STEP: Creating a pod to test downward api env vars
Apr 21 22:55:37.611: INFO: Waiting up to 5m0s for pod downward-api-d6392bd4-084e-11e6-b067-42010af00007 status to be success or failure
Apr 21 22:55:37.615: INFO: No Status.Info for container 'dapi-container' in pod 'downward-api-d6392bd4-084e-11e6-b067-42010af00007' yet
Apr 21 22:55:37.615: INFO: Waiting for pod downward-api-d6392bd4-084e-11e6-b067-42010af00007 in namespace 'e2e-tests-downward-api-r58xs' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.103496ms elapsed)
Apr 21 22:55:39.757: INFO: Nil State.Terminated for container 'dapi-container' in pod 'downward-api-d6392bd4-084e-11e6-b067-42010af00007' in namespace 'e2e-tests-downward-api-r58xs' so far
Apr 21 22:55:39.757: INFO: Waiting for pod downward-api-d6392bd4-084e-11e6-b067-42010af00007 in namespace 'e2e-tests-downward-api-r58xs' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.146411134s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node e2e-gce-master-1-minion-x3cg pod downward-api-d6392bd4-084e-11e6-b067-42010af00007 container dapi-container: <nil>
STEP: Successfully fetched pod logs:POD_IP=10.245.1.4
KUBERNETES_PORT=tcp://10.0.0.1:443
KUBERNETES_SERVICE_PORT=443
HOSTNAME=downward-api-d6392bd4-084e-11e6-b067-42010af00007
SHLVL=1
HOME=/root
KUBERNETES_PORT_443_TCP_ADDR=10.0.0.1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP=tcp://10.0.0.1:443
KUBERNETES_SERVICE_PORT_HTTPS=443
PWD=/
KUBERNETES_SERVICE_HOST=10.0.0.1
[AfterEach] [k8s.io] Downward API
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:55:42.078: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-r58xs" for this suite.
• [SLOW TEST:14.660 seconds]
[k8s.io] Downward API
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
  should provide pod IP as an env var
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/downward_api.go:82
------------------------------
[BeforeEach] [k8s.io] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:55:10.071: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] deployment should support rollback when there's replica set with no revision
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:79
Apr 21 22:55:10.139: INFO: Creating deployment test-rollback-no-revision-deployment
Apr 21 22:55:14.176: INFO: rolling back deployment test-rollback-no-revision-deployment to last revision
Apr 21 22:55:16.209: INFO: Updating deployment test-rollback-no-revision-deployment
Apr 21 22:55:22.236: INFO: rolling back deployment test-rollback-no-revision-deployment to revision 1
Apr 21 22:55:28.269: INFO: rolling back deployment test-rollback-no-revision-deployment to revision 10
Apr 21 22:55:30.283: INFO: rolling back deployment test-rollback-no-revision-deployment to revision 3
Apr 21 22:55:32.299: INFO: Deleting deployment test-rollback-no-revision-deployment
Apr 21 22:55:38.460: INFO: Ensuring deployment test-rollback-no-revision-deployment was deleted
Apr 21 22:55:38.462: INFO: Ensuring deployment test-rollback-no-revision-deployment's RSes were deleted
Apr 21 22:55:38.464: INFO: Ensuring deployment test-rollback-no-revision-deployment's Pods were deleted
[AfterEach] [k8s.io] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:55:38.466: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-a1fc5" for this suite.
• [SLOW TEST:43.417 seconds]
[k8s.io] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
  deployment should support rollback when there's replica set with no revision
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:79
------------------------------
[BeforeEach] [k8s.io] V1Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:54:48.990: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should scale a job up
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:141
STEP: Creating a job
STEP: Ensuring active pods == startParallelism
STEP: scale job up
STEP: Ensuring active pods == endParallelism
[AfterEach] [k8s.io] V1Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:55:05.070: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-v1job-ulp69" for this suite.
• [SLOW TEST:66.102 seconds]
[k8s.io] V1Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
  should scale a job up
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:141
------------------------------
[BeforeEach] [k8s.io] LimitRange
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:55:16.823: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a LimitRange with defaults and ensure pod has those defaults applied.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/limit_range.go:102
STEP: Creating a LimitRange
STEP: Fetching the LimitRange to ensure it has proper values
Apr 21 22:55:16.888: INFO: Verifying requests: expected map[cpu:{0.100000000 DecimalSI} memory:{209715200.000000000 BinarySI}] with actual map[cpu:{0.100 DecimalSI} memory:{209715200.000 BinarySI}]
Apr 21 22:55:16.888: INFO: Verifying limits: expected map[cpu:{0.500000000 DecimalSI} memory:{524288000.000000000 BinarySI}] with actual map[cpu:{0.500 DecimalSI} memory:{524288000.000 BinarySI}]
STEP: Creating a Pod with no resource requirements
STEP: Ensuring Pod has resource requirements applied from LimitRange
Apr 21 22:55:16.904: INFO: Verifying requests: expected map[cpu:{0.100000000 DecimalSI} memory:{209715200.000000000 BinarySI}] with actual map[cpu:{0.100 DecimalSI} memory:{209715200.000 BinarySI}]
Apr 21 22:55:16.904: INFO: Verifying limits: expected map[cpu:{0.500000000 DecimalSI} memory:{524288000.000000000 BinarySI}] with actual map[memory:{524288000.000 BinarySI} cpu:{0.500 DecimalSI}]
STEP: Creating a Pod with partial resource requirements
STEP: Ensuring Pod has merged resource requirements applied from LimitRange
Apr 21 22:55:16.921: INFO: Verifying requests: expected map[cpu:{0.300000000 DecimalSI} memory:{157286400.000000000 BinarySI}] with actual map[cpu:{0.300 DecimalSI} memory:{157286400.000 BinarySI}]
Apr 21 22:55:16.921: INFO: Verifying limits: expected map[cpu:{0.300000000 DecimalSI} memory:{524288000.000000000 BinarySI}] with actual map[cpu:{0.300 DecimalSI} memory:{524288000.000 BinarySI}]
STEP: Failing to create a Pod with less than min resources
STEP: Failing to create a Pod with more than max resources
[AfterEach] [k8s.io] LimitRange
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:55:16.931: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-limitrange-bsqnr" for this suite.
• [SLOW TEST:40.153 seconds]
[k8s.io] LimitRange
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
  should create a LimitRange with defaults and ensure pod has those defaults applied.
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/limit_range.go:102
------------------------------
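Editor's note: the LimitRange verifications above show the defaulting/merging behavior under test — a container that omits a resource picks up the namespace default (0.1 CPU / 200Mi requests, 0.5 CPU / 500Mi limits in this run), while anything it sets explicitly wins. A simplified sketch of that merge (the function name is illustrative, and real LimitRange admission has more rules than a dict merge):

```python
# Defaults taken from the LimitRange values logged in this run.
DEFAULT_REQUESTS = {"cpu": 0.1, "memory": 209715200}   # 100m / 200Mi
DEFAULT_LIMITS = {"cpu": 0.5, "memory": 524288000}     # 500m / 500Mi


def apply_limit_range(requests, limits):
    """Merge container-specified resources over the LimitRange defaults:
    explicit values win, missing ones fall back to the defaults."""
    merged_requests = {**DEFAULT_REQUESTS, **requests}
    merged_limits = {**DEFAULT_LIMITS, **limits}
    return merged_requests, merged_limits
```

With no resources set, the pod ends up with exactly the defaults; with the partial spec from the log (0.3 CPU request and limit, 150Mi memory request), the memory limit still falls back to the 500Mi default, matching the merged values the test verifies.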
S
------------------------------
[BeforeEach] [k8s.io] Downward API volume
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:55:46.639: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should update annotations on modification [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:133
STEP: Creating the pod
W0421 22:55:46.924314   17745 request.go:344] Field selector: v1 - pods - metadata.name - annotationupdatedbb92f93-084e-11e6-8b58-42010af00007: need to check if this is versioned correctly.
STEP: Deleting the pod
[AfterEach] [k8s.io] Downward API volume
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:55:54.097: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-ucxdd" for this suite.
• [SLOW TEST:12.493 seconds]
[k8s.io] Downward API volume
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
  should update annotations on modification [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:133
------------------------------
S
------------------------------
[BeforeEach] [k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:53:49.266: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:142
[BeforeEach] [k8s.io] Guestbook application
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:189
[It] should create and stop a working application [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:201
STEP: creating all guestbook components | |
Apr 21 22:53:51.955: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config create -f /jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/examples/guestbook --namespace=e2e-tests-kubectl-ucahf' | |
Apr 21 22:53:52.412: INFO: stderr: "" | |
Apr 21 22:53:52.412: INFO: stdout: "deployment \"frontend\" created\nservice \"frontend\" created\ndeployment \"redis-master\" created\nservice \"redis-master\" created\ndeployment \"redis-slave\" created\nservice \"redis-slave\" created" | |
STEP: validating guestbook app | |
Apr 21 22:53:52.412: INFO: Waiting for frontend to serve content. | |
Apr 21 22:53:52.458: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response: | |
Apr 21 22:53:57.462: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response: | |
Apr 21 22:54:02.469: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response: | |
Apr 21 22:54:07.473: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response: | |
Apr 21 22:54:12.478: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response: | |
Apr 21 22:54:17.497: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response: | |
Apr 21 22:54:22.502: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response: | |
Apr 21 22:54:27.506: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response: | |
Apr 21 22:54:32.513: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response: | |
Apr 21 22:54:37.517: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response: | |
Apr 21 22:54:42.522: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response: | |
Apr 21 22:54:47.551: INFO: Trying to add a new entry to the guestbook. | |
Apr 21 22:54:47.570: INFO: Verifying that added entry can be retrieved. | |
Apr 21 22:54:47.583: INFO: Failed to get response from guestbook. err: <nil>, response: {"data": ""} | |
Apr 21 22:54:52.790: INFO: Failed to get response from guestbook. err: an error on the server has prevented the request from succeeding (get services frontend), response: | |
Apr 21 22:54:57.810: INFO: Failed to get response from guestbook. err: <nil>, response: {"data": ""} | |
Apr 21 22:55:02.827: INFO: Failed to get response from guestbook. err: <nil>, response: {"data": ""} | |
Apr 21 22:55:08.035: INFO: Failed to get response from guestbook. err: an error on the server has prevented the request from succeeding (get services frontend), response: | |
STEP: using delete to clean up resources | |
Apr 21 22:55:13.056: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config delete --grace-period=0 -f /jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/examples/guestbook --namespace=e2e-tests-kubectl-ucahf' | |
Apr 21 22:55:19.512: INFO: stderr: "" | |
Apr 21 22:55:19.512: INFO: stdout: "deployment \"frontend\" deleted\nservice \"frontend\" deleted\ndeployment \"redis-master\" deleted\nservice \"redis-master\" deleted\ndeployment \"redis-slave\" deleted\nservice \"redis-slave\" deleted" | |
Apr 21 22:55:19.512: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get rc,svc -l app=guestbook,tier=frontend --no-headers --namespace=e2e-tests-kubectl-ucahf' | |
Apr 21 22:55:19.584: INFO: stderr: "" | |
Apr 21 22:55:19.584: INFO: stdout: "" | |
Apr 21 22:55:19.584: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods -l app=guestbook,tier=frontend --namespace=e2e-tests-kubectl-ucahf -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' | |
Apr 21 22:55:19.661: INFO: stderr: "" | |
Apr 21 22:55:19.661: INFO: stdout: "" | |
Apr 21 22:55:19.661: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get rc,svc -l app=redis,role=master --no-headers --namespace=e2e-tests-kubectl-ucahf' | |
Apr 21 22:55:19.732: INFO: stderr: "" | |
Apr 21 22:55:19.732: INFO: stdout: "" | |
Apr 21 22:55:19.733: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods -l app=redis,role=master --namespace=e2e-tests-kubectl-ucahf -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' | |
Apr 21 22:55:19.808: INFO: stderr: "" | |
Apr 21 22:55:19.808: INFO: stdout: "" | |
Apr 21 22:55:19.808: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get rc,svc -l app=redis,role=slave --no-headers --namespace=e2e-tests-kubectl-ucahf' | |
Apr 21 22:55:19.880: INFO: stderr: "" | |
Apr 21 22:55:19.880: INFO: stdout: "" | |
Apr 21 22:55:19.880: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods -l app=redis,role=slave --namespace=e2e-tests-kubectl-ucahf -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' | |
Apr 21 22:55:19.961: INFO: stderr: "" | |
Apr 21 22:55:19.961: INFO: stdout: "" | |
[AfterEach] [k8s.io] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:55:19.961: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-kubectl-ucahf" for this suite. | |
• [SLOW TEST:130.730 seconds] | |
[k8s.io] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
[k8s.io] Guestbook application | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should create and stop a working application [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:201 | |
------------------------------ | |
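The guestbook validation above polls the frontend every ~5 seconds until it serves content or a deadline expires. A minimal sketch of that poll-until-ready pattern, with a stub `probe` function standing in for the HTTP check against the frontend service (the stub and its success-on-third-try behavior are assumptions for illustration):

```shell
# Poll-until-ready sketch: retry a probe until it succeeds or a
# deadline passes, as the guestbook e2e validation does.
attempts=0
probe() {
  attempts=$((attempts + 1))
  # Stub standing in for "GET the frontend and check the response":
  # succeed on the third try.
  [ "$attempts" -ge 3 ]
}
deadline=10
elapsed=0
until probe; do
  elapsed=$((elapsed + 1))
  if [ "$elapsed" -ge "$deadline" ]; then
    break
  fi
done
echo "ready after $attempts attempts"
```

The real test additionally distinguishes "no endpoints available" (pods not ready yet) from transient apiserver errors, retrying through both, as the interleaved error lines above show.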
[BeforeEach] [k8s.io] Pods | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:55:47.348: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should be updated [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:493 | |
STEP: creating the pod | |
STEP: submitting the pod to kubernetes | |
W0421 22:55:47.714492 17553 request.go:344] Field selector: v1 - pods - metadata.name - pod-update-dc271160-084e-11e6-84cd-42010af00007: need to check if this is versioned correctly. | |
STEP: verifying the pod is in kubernetes | |
STEP: updating the pod | |
Apr 21 22:55:50.126: INFO: Conflicting update to pod, re-get and re-update: Operation cannot be fulfilled on pods "pod-update-dc271160-084e-11e6-84cd-42010af00007": the object has been modified; please apply your changes to the latest version and try again | |
STEP: updating the pod | |
Apr 21 22:55:50.725: INFO: Successfully updated pod | |
W0421 22:55:50.726048 17553 request.go:344] Field selector: v1 - pods - metadata.name - pod-update-dc271160-084e-11e6-84cd-42010af00007: need to check if this is versioned correctly. | |
STEP: verifying the updated pod is in kubernetes | |
Apr 21 22:55:50.792: INFO: Pod update OK | |
STEP: deleting the pod | |
[AfterEach] [k8s.io] Pods | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:55:50.864: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-pods-06dob" for this suite. | |
• [SLOW TEST:13.677 seconds] | |
[k8s.io] Pods | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should be updated [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:493 | |
------------------------------ | |
[BeforeEach] [k8s.io] Port forwarding | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:55:47.864: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should support a client that connects, sends no data, and disconnects [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:308 | |
STEP: creating the target pod | |
W0421 22:55:49.274347 17701 request.go:344] Field selector: v1 - pods - metadata.name - pfpod: need to check if this is versioned correctly. | |
STEP: Running 'kubectl port-forward' | |
Apr 21 22:55:50.998: INFO: starting port-forward command and streaming output | |
Apr 21 22:55:50.998: INFO: Asynchronously running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config port-forward --namespace=e2e-tests-port-forwarding-dxcxb pfpod :80' | |
Apr 21 22:55:51.000: INFO: reading from `kubectl port-forward` command's stderr | |
STEP: Dialing the local port | |
STEP: Reading data from the local port | |
STEP: Waiting for the target pod to stop running | |
W0421 22:55:52.654473 17701 request.go:344] Field selector: v1 - pods - metadata.name - pfpod: need to check if this is versioned correctly. | |
STEP: Retrieving logs from the target pod | |
STEP: Verifying logs | |
STEP: Closing the connection to the local port | |
[AfterEach] [k8s.io] Port forwarding | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:55:52.999: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-port-forwarding-dxcxb" for this suite. | |
• [SLOW TEST:15.207 seconds] | |
[k8s.io] Port forwarding | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
[k8s.io] With a server that expects no client request | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should support a client that connects, sends no data, and disconnects [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:308 | |
------------------------------ | |
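The port-forward test runs `kubectl port-forward ... pfpod :80`, letting kubectl pick an ephemeral local port, then reads that port back from the command's stderr before dialing it. A sketch of that parsing step — the sample stderr line below is an assumption about kubectl's message format, not taken from this log:

```shell
# Sketch: recover the ephemeral local port from a
# `kubectl port-forward pod :80` stderr line (assumed format).
sample_line='Forwarding from 127.0.0.1:43211 -> 80'
local_port="${sample_line##*:}"   # strip everything through the last ':'
local_port="${local_port%% *}"    # drop the ' -> 80' remainder
echo "$local_port"
```

Passing `:80` rather than a fixed `8080:80` avoids local port collisions when many tests run in parallel on one Jenkins node.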
[BeforeEach] [k8s.io] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:55:53.490: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [k8s.io] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:142 | |
[BeforeEach] [k8s.io] Kubectl run job | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1072 | |
[It] should create a job from an image when restart is Never [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1114 | |
STEP: running the image gcr.io/google_containers/nginx:1.7.9 | |
Apr 21 22:55:53.573: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config run e2e-test-nginx-job --restart=Never --image=gcr.io/google_containers/nginx:1.7.9 --namespace=e2e-tests-kubectl-x7h5f' | |
Apr 21 22:55:53.683: INFO: stderr: "" | |
Apr 21 22:55:53.683: INFO: stdout: "job \"e2e-test-nginx-job\" created" | |
STEP: verifying the job e2e-test-nginx-job was created | |
[AfterEach] [k8s.io] Kubectl run job | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1076 | |
Apr 21 22:55:53.695: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-x7h5f' | |
Apr 21 22:55:55.860: INFO: stderr: "" | |
Apr 21 22:55:55.860: INFO: stdout: "job \"e2e-test-nginx-job\" deleted" | |
[AfterEach] [k8s.io] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:55:55.860: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-kubectl-x7h5f" for this suite. | |
• [SLOW TEST:12.422 seconds] | |
[k8s.io] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
[k8s.io] Kubectl run job | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should create a job from an image when restart is Never [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1114 | |
------------------------------ | |
[BeforeEach] [k8s.io] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:55:38.884: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [k8s.io] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:142 | |
[BeforeEach] [k8s.io] Simple pod | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:212 | |
STEP: creating the pod from /jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/test/e2e/testing-manifests/kubectl/pod-with-readiness-probe.yaml | |
Apr 21 22:55:39.090: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config create -f /jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/test/e2e/testing-manifests/kubectl/pod-with-readiness-probe.yaml --namespace=e2e-tests-kubectl-hqeix' | |
Apr 21 22:55:39.270: INFO: stderr: "" | |
Apr 21 22:55:39.270: INFO: stdout: "pod \"nginx\" created" | |
Apr 21 22:55:39.270: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [nginx] | |
Apr 21 22:55:39.270: INFO: Waiting up to 5m0s for pod nginx status to be running and ready | |
Apr 21 22:55:39.730: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-hqeix' status to be 'running and ready'(found phase: "Pending", readiness: false) (459.349195ms elapsed) | |
Apr 21 22:55:41.762: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-hqeix' status to be 'running and ready'(found phase: "Pending", readiness: false) (2.49117481s elapsed) | |
Apr 21 22:55:43.770: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-hqeix' status to be 'running and ready'(found phase: "Running", readiness: false) (4.49966664s elapsed) | |
Apr 21 22:55:45.806: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-hqeix' status to be 'running and ready'(found phase: "Running", readiness: false) (6.536022549s elapsed) | |
Apr 21 22:55:47.846: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-hqeix' status to be 'running and ready'(found phase: "Running", readiness: false) (8.575632487s elapsed) | |
Apr 21 22:55:49.868: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [nginx] | |
[It] should support port-forward | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:547 | |
STEP: forwarding the container port to a local port | |
Apr 21 22:55:49.868: INFO: starting port-forward command and streaming output | |
Apr 21 22:55:49.868: INFO: Asynchronously running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config port-forward --namespace=e2e-tests-kubectl-hqeix nginx :80' | |
Apr 21 22:55:49.870: INFO: reading from `kubectl port-forward` command's stderr | |
STEP: curling local port output | |
Apr 21 22:55:50.289: INFO: got: <!DOCTYPE html> | |
<html> | |
<head> | |
<title>Welcome to nginx!</title> | |
<style> | |
body { | |
width: 35em; | |
margin: 0 auto; | |
font-family: Tahoma, Verdana, Arial, sans-serif; | |
} | |
</style> | |
</head> | |
<body> | |
<h1>Welcome to nginx!</h1> | |
<p>If you see this page, the nginx web server is successfully installed and | |
working. Further configuration is required.</p> | |
<p>For online documentation and support please refer to | |
<a href="http://nginx.org/">nginx.org</a>.<br/> | |
Commercial support is available at | |
<a href="http://nginx.com/">nginx.com</a>.</p> | |
<p><em>Thank you for using nginx.</em></p> | |
</body> | |
</html> | |
[AfterEach] [k8s.io] Simple pod | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:215 | |
STEP: using delete to clean up resources | |
Apr 21 22:55:50.292: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config delete --grace-period=0 -f /jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/test/e2e/testing-manifests/kubectl/pod-with-readiness-probe.yaml --namespace=e2e-tests-kubectl-hqeix' | |
Apr 21 22:55:50.464: INFO: stderr: "" | |
Apr 21 22:55:50.464: INFO: stdout: "pod \"nginx\" deleted" | |
Apr 21 22:55:50.464: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-hqeix' | |
Apr 21 22:55:50.607: INFO: stderr: "" | |
Apr 21 22:55:50.607: INFO: stdout: "" | |
Apr 21 22:55:50.607: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-hqeix -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' | |
Apr 21 22:55:50.776: INFO: stderr: "" | |
Apr 21 22:55:50.776: INFO: stdout: "" | |
[AfterEach] [k8s.io] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:55:50.776: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-kubectl-hqeix" for this suite. | |
• [SLOW TEST:27.103 seconds] | |
[k8s.io] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
[k8s.io] Simple pod | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should support port-forward | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:547 | |
------------------------------ | |
[BeforeEach] [k8s.io] EmptyDir volumes | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:55:52.208: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should support (root,0777,tmpfs) [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:73 | |
STEP: Creating a pod to test emptydir 0777 on tmpfs | |
Apr 21 22:55:52.589: INFO: Waiting up to 5m0s for pod pod-df149cd0-084e-11e6-b067-42010af00007 status to be success or failure | |
Apr 21 22:55:52.600: INFO: No Status.Info for container 'test-container' in pod 'pod-df149cd0-084e-11e6-b067-42010af00007' yet | |
Apr 21 22:55:52.600: INFO: Waiting for pod pod-df149cd0-084e-11e6-b067-42010af00007 in namespace 'e2e-tests-emptydir-r3akq' status to be 'success or failure'(found phase: "Pending", readiness: false) (11.708927ms elapsed) | |
Apr 21 22:55:54.605: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-df149cd0-084e-11e6-b067-42010af00007' in namespace 'e2e-tests-emptydir-r3akq' so far | |
Apr 21 22:55:54.605: INFO: Waiting for pod pod-df149cd0-084e-11e6-b067-42010af00007 in namespace 'e2e-tests-emptydir-r3akq' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.01605174s elapsed) | |
STEP: Saw pod success | |
STEP: Trying to get logs from node e2e-gce-master-1-minion-x3cg pod pod-df149cd0-084e-11e6-b067-42010af00007 container test-container: <nil> | |
STEP: Successfully fetched pod logs:mount type of "/test-volume": tmpfs | |
content of file "/test-volume/test-file": mount-tester new file | |
perms of file "/test-volume/test-file": -rwxrwxrwx | |
[AfterEach] [k8s.io] EmptyDir volumes | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:55:56.679: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-emptydir-r3akq" for this suite. | |
• [SLOW TEST:14.576 seconds] | |
[k8s.io] EmptyDir volumes | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should support (root,0777,tmpfs) [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:73 | |
------------------------------ | |
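The `(root,0777,tmpfs)` case above checks that the volume's mount type is tmpfs, which is what an `emptyDir` with `medium: Memory` provides. An illustrative pod spec for that case — the image tag and paths are assumptions, not copied from the test source:

```yaml
# Illustrative pod for the tmpfs emptyDir case: medium: Memory backs the
# volume with tmpfs, matching the 'mount type of "/test-volume": tmpfs'
# line logged above. Image tag is an assumption.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs
spec:
  containers:
  - name: test-container
    image: gcr.io/google_containers/mounttest:0.6
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory
```

The `0777` in the test name refers to the volume's file mode, verified by the `-rwxrwxrwx` perms line in the fetched pod logs.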
[BeforeEach] [k8s.io] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:53:49.264: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [k8s.io] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:142 | |
[BeforeEach] [k8s.io] Simple pod | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:212 | |
STEP: creating the pod from /jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/test/e2e/testing-manifests/kubectl/pod-with-readiness-probe.yaml | |
Apr 21 22:53:52.373: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config create -f /jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/test/e2e/testing-manifests/kubectl/pod-with-readiness-probe.yaml --namespace=e2e-tests-kubectl-0ky4z' | |
Apr 21 22:53:52.544: INFO: stderr: "" | |
Apr 21 22:53:52.544: INFO: stdout: "pod \"nginx\" created" | |
Apr 21 22:53:52.544: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [nginx] | |
Apr 21 22:53:52.544: INFO: Waiting up to 5m0s for pod nginx status to be running and ready | |
Apr 21 22:53:52.571: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-0ky4z' status to be 'running and ready'(found phase: "Pending", readiness: false) (27.348032ms elapsed) | |
Apr 21 22:53:54.575: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-0ky4z' status to be 'running and ready'(found phase: "Pending", readiness: false) (2.030742028s elapsed) | |
Apr 21 22:53:56.589: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-0ky4z' status to be 'running and ready'(found phase: "Pending", readiness: false) (4.045272389s elapsed) | |
Apr 21 22:53:58.647: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-0ky4z' status to be 'running and ready'(found phase: "Pending", readiness: false) (6.103192107s elapsed) | |
Apr 21 22:54:00.650: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-0ky4z' status to be 'running and ready'(found phase: "Pending", readiness: false) (8.10609923s elapsed) | |
Apr 21 22:54:02.655: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-0ky4z' status to be 'running and ready'(found phase: "Pending", readiness: false) (10.110509018s elapsed) | |
Apr 21 22:54:04.658: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-0ky4z' status to be 'running and ready'(found phase: "Pending", readiness: false) (12.113961079s elapsed) | |
Apr 21 22:54:06.697: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-0ky4z' status to be 'running and ready'(found phase: "Pending", readiness: false) (14.15292113s elapsed) | |
Apr 21 22:54:08.700: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-0ky4z' status to be 'running and ready'(found phase: "Pending", readiness: false) (16.155782556s elapsed) | |
Apr 21 22:54:10.705: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-0ky4z' status to be 'running and ready'(found phase: "Pending", readiness: false) (18.160504359s elapsed) | |
Apr 21 22:54:12.715: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-0ky4z' status to be 'running and ready'(found phase: "Pending", readiness: false) (20.170635611s elapsed) | |
Apr 21 22:54:14.719: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-0ky4z' status to be 'running and ready'(found phase: "Pending", readiness: false) (22.174760255s elapsed) | |
Apr 21 22:54:16.723: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-0ky4z' status to be 'running and ready'(found phase: "Pending", readiness: false) (24.178627436s elapsed) | |
Apr 21 22:54:18.727: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-0ky4z' status to be 'running and ready'(found phase: "Pending", readiness: false) (26.182533884s elapsed) | |
Apr 21 22:54:20.731: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-0ky4z' status to be 'running and ready'(found phase: "Pending", readiness: false) (28.187272796s elapsed) | |
Apr 21 22:54:22.738: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-0ky4z' status to be 'running and ready'(found phase: "Pending", readiness: false) (30.193810107s elapsed) | |
Apr 21 22:54:24.743: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-0ky4z' status to be 'running and ready'(found phase: "Pending", readiness: false) (32.199253933s elapsed) | |
Apr 21 22:54:26.747: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-0ky4z' status to be 'running and ready'(found phase: "Pending", readiness: false) (34.202928682s elapsed) | |
Apr 21 22:54:28.755: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-0ky4z' status to be 'running and ready'(found phase: "Pending", readiness: false) (36.210407653s elapsed) | |
Apr 21 22:54:30.759: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-0ky4z' status to be 'running and ready'(found phase: "Pending", readiness: false) (38.214829075s elapsed) | |
Apr 21 22:54:32.762: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-0ky4z' status to be 'running and ready'(found phase: "Pending", readiness: false) (40.218192187s elapsed) | |
Apr 21 22:54:34.766: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-0ky4z' status to be 'running and ready'(found phase: "Pending", readiness: false) (42.221956355s elapsed) | |
Apr 21 22:54:36.770: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-0ky4z' status to be 'running and ready'(found phase: "Pending", readiness: false) (44.225847492s elapsed) | |
Apr 21 22:54:38.774: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-0ky4z' status to be 'running and ready'(found phase: "Pending", readiness: false) (46.229819248s elapsed) | |
Apr 21 22:54:40.780: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-0ky4z' status to be 'running and ready'(found phase: "Pending", readiness: false) (48.235872607s elapsed) | |
Apr 21 22:54:42.784: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-0ky4z' status to be 'running and ready'(found phase: "Pending", readiness: false) (50.239420686s elapsed) | |
Apr 21 22:54:44.788: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-0ky4z' status to be 'running and ready'(found phase: "Pending", readiness: false) (52.2435907s elapsed) | |
Apr 21 22:54:46.791: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-0ky4z' status to be 'running and ready'(found phase: "Pending", readiness: false) (54.247281083s elapsed) | |
Apr 21 22:54:48.795: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-0ky4z' status to be 'running and ready'(found phase: "Pending", readiness: false) (56.251064409s elapsed) | |
Apr 21 22:54:50.799: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-0ky4z' status to be 'running and ready'(found phase: "Pending", readiness: false) (58.254590866s elapsed) | |
Apr 21 22:54:52.803: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-0ky4z' status to be 'running and ready'(found phase: "Pending", readiness: false) (1m0.258417983s elapsed) | |
Apr 21 22:54:54.807: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-0ky4z' status to be 'running and ready'(found phase: "Pending", readiness: false) (1m2.262510132s elapsed) | |
Apr 21 22:54:56.812: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-0ky4z' status to be 'running and ready'(found phase: "Pending", readiness: false) (1m4.267838256s elapsed) | |
Apr 21 22:54:58.815: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-0ky4z' status to be 'running and ready'(found phase: "Pending", readiness: false) (1m6.271128224s elapsed) | |
Apr 21 22:55:00.819: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-0ky4z' status to be 'running and ready'(found phase: "Pending", readiness: false) (1m8.275200432s elapsed) | |
Apr 21 22:55:02.823: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-0ky4z' status to be 'running and ready'(found phase: "Running", readiness: false) (1m10.278942393s elapsed) | |
Apr 21 22:55:04.829: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-0ky4z' status to be 'running and ready'(found phase: "Running", readiness: false) (1m12.284841455s elapsed) | |
Apr 21 22:55:06.833: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-0ky4z' status to be 'running and ready'(found phase: "Running", readiness: false) (1m14.28865211s elapsed) | |
Apr 21 22:55:08.836: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-0ky4z' status to be 'running and ready'(found phase: "Running", readiness: false) (1m16.291995743s elapsed) | |
Apr 21 22:55:10.840: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-0ky4z' status to be 'running and ready'(found phase: "Running", readiness: false) (1m18.295513813s elapsed) | |
Apr 21 22:55:12.843: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-0ky4z' status to be 'running and ready'(found phase: "Running", readiness: false) (1m20.298765678s elapsed) | |
Apr 21 22:55:14.847: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [nginx] | |
[It] should support inline execution and attach | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:530 | |
STEP: executing a command with run and attach with stdin | |
Apr 21 22:55:14.848: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config --namespace=e2e-tests-kubectl-0ky4z run run-test --image=gcr.io/google_containers/busybox:1.24 --restart=Never --attach=true --stdin -- sh -c cat && echo 'stdin closed'' | |
Apr 21 22:55:19.356: INFO: stderr: "" | |
Apr 21 22:55:19.356: INFO: stdout: "Waiting for pod e2e-tests-kubectl-0ky4z/run-test-9zp7f to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-0ky4z/run-test-9zp7f to be running, status is Pending, pod ready: false\nabcd1234stdin closed" | |
STEP: executing a command with run and attach without stdin | |
Apr 21 22:55:19.365: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config --namespace=e2e-tests-kubectl-0ky4z run run-test-2 --image=gcr.io/google_containers/busybox:1.24 --restart=Never --attach=true --leave-stdin-open=true -- sh -c cat && echo 'stdin closed'' | |
Apr 21 22:55:23.538: INFO: stderr: "" | |
Apr 21 22:55:23.539: INFO: stdout: "Waiting for pod e2e-tests-kubectl-0ky4z/run-test-2-b32gy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-0ky4z/run-test-2-b32gy to be running, status is Pending, pod ready: false\nstdin closed" | |
STEP: executing a command with run and attach with stdin with open stdin should remain running | |
Apr 21 22:55:23.543: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config --namespace=e2e-tests-kubectl-0ky4z run run-test-3 --image=gcr.io/google_containers/busybox:1.24 --restart=Never --attach=true --leave-stdin-open=true --stdin -- sh -c cat && echo 'stdin closed'' | |
Apr 21 22:55:27.791: INFO: stderr: "" | |
Apr 21 22:55:27.791: INFO: stdout: "Waiting for pod e2e-tests-kubectl-0ky4z/run-test-3-5bzdb to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-0ky4z/run-test-3-5bzdb to be running, status is Pending, pod ready: false" | |
Apr 21 22:55:27.796: INFO: Waiting up to 1m0s for 1 pods to be running and ready: [run-test-3-5bzdb] | |
Apr 21 22:55:27.796: INFO: Waiting up to 1m0s for pod run-test-3-5bzdb status to be running and ready | |
Apr 21 22:55:27.799: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [run-test-3-5bzdb] | |
Apr 21 22:55:27.799: INFO: Waiting up to 1s for 1 pods to be running and ready: [run-test-3-5bzdb] | |
Apr 21 22:55:27.799: INFO: Waiting up to 1s for pod run-test-3-5bzdb status to be running and ready | |
Apr 21 22:55:27.801: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [run-test-3-5bzdb] | |
Apr 21 22:55:27.802: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config --namespace=e2e-tests-kubectl-0ky4z logs run-test-3-5bzdb' | |
Apr 21 22:55:27.887: INFO: stderr: "" | |
Apr 21 22:55:27.887: INFO: stdout: "abcd1234" | |
[AfterEach] [k8s.io] Simple pod | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:215 | |
STEP: using delete to clean up resources | |
Apr 21 22:55:27.891: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config delete --grace-period=0 -f /jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/test/e2e/testing-manifests/kubectl/pod-with-readiness-probe.yaml --namespace=e2e-tests-kubectl-0ky4z' | |
Apr 21 22:55:27.970: INFO: stderr: "" | |
Apr 21 22:55:27.970: INFO: stdout: "pod \"nginx\" deleted" | |
Apr 21 22:55:27.970: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-0ky4z' | |
Apr 21 22:55:28.039: INFO: stderr: "" | |
Apr 21 22:55:28.039: INFO: stdout: "" | |
Apr 21 22:55:28.039: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-0ky4z -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' | |
Apr 21 22:55:28.111: INFO: stderr: "" | |
Apr 21 22:55:28.111: INFO: stdout: "" | |
[AfterEach] [k8s.io] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:55:28.111: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-kubectl-0ky4z" for this suite. | |
• [SLOW TEST:138.875 seconds] | |
[k8s.io] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
[k8s.io] Simple pod | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should support inline execution and attach | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:530 | |
------------------------------ | |
S | |
------------------------------ | |
[BeforeEach] [k8s.io] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:55:04.386: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [k8s.io] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:142 | |
[BeforeEach] [k8s.io] Simple pod | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:212 | |
STEP: creating the pod from /jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/test/e2e/testing-manifests/kubectl/pod-with-readiness-probe.yaml | |
Apr 21 22:55:04.500: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config create -f /jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/test/e2e/testing-manifests/kubectl/pod-with-readiness-probe.yaml --namespace=e2e-tests-kubectl-8c8dt' | |
Apr 21 22:55:04.625: INFO: stderr: "" | |
Apr 21 22:55:04.625: INFO: stdout: "pod \"nginx\" created" | |
Apr 21 22:55:04.625: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [nginx] | |
Apr 21 22:55:04.625: INFO: Waiting up to 5m0s for pod nginx status to be running and ready | |
Apr 21 22:55:04.660: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-8c8dt' status to be 'running and ready'(found phase: "Pending", readiness: false) (34.423508ms elapsed) | |
Apr 21 22:55:06.667: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-8c8dt' status to be 'running and ready'(found phase: "Running", readiness: false) (2.041569023s elapsed) | |
Apr 21 22:55:08.671: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-8c8dt' status to be 'running and ready'(found phase: "Running", readiness: false) (4.046036398s elapsed) | |
Apr 21 22:55:10.675: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-8c8dt' status to be 'running and ready'(found phase: "Running", readiness: false) (6.050019071s elapsed) | |
Apr 21 22:55:12.679: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-8c8dt' status to be 'running and ready'(found phase: "Running", readiness: false) (8.053506232s elapsed) | |
Apr 21 22:55:14.682: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [nginx] | |
[It] should support exec through an HTTP proxy | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:478 | |
STEP: Finding a static kubectl for upload | |
STEP: Using the kubectl in /jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/386/kubectl | |
Apr 21 22:55:14.684: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config create -f /jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/test/images/netexec/pod.yaml --namespace=e2e-tests-kubectl-8c8dt --validate=true' | |
Apr 21 22:55:14.800: INFO: stderr: "" | |
Apr 21 22:55:14.800: INFO: stdout: "pod \"netexec\" created" | |
Apr 21 22:55:14.801: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [netexec] | |
Apr 21 22:55:14.801: INFO: Waiting up to 5m0s for pod netexec status to be running and ready | |
Apr 21 22:55:14.805: INFO: Waiting for pod netexec in namespace 'e2e-tests-kubectl-8c8dt' status to be 'running and ready'(found phase: "Pending", readiness: false) (3.972274ms elapsed) | |
Apr 21 22:55:16.809: INFO: Waiting for pod netexec in namespace 'e2e-tests-kubectl-8c8dt' status to be 'running and ready'(found phase: "Pending", readiness: false) (2.007846145s elapsed) | |
Apr 21 22:55:18.813: INFO: Waiting for pod netexec in namespace 'e2e-tests-kubectl-8c8dt' status to be 'running and ready'(found phase: "Running", readiness: false) (4.011890474s elapsed) | |
Apr 21 22:55:20.817: INFO: Waiting for pod netexec in namespace 'e2e-tests-kubectl-8c8dt' status to be 'running and ready'(found phase: "Running", readiness: false) (6.016158098s elapsed) | |
Apr 21 22:55:22.820: INFO: Waiting for pod netexec in namespace 'e2e-tests-kubectl-8c8dt' status to be 'running and ready'(found phase: "Running", readiness: false) (8.019608896s elapsed) | |
Apr 21 22:55:24.824: INFO: Waiting for pod netexec in namespace 'e2e-tests-kubectl-8c8dt' status to be 'running and ready'(found phase: "Running", readiness: false) (10.023775309s elapsed) | |
Apr 21 22:55:26.829: INFO: Waiting for pod netexec in namespace 'e2e-tests-kubectl-8c8dt' status to be 'running and ready'(found phase: "Running", readiness: false) (12.028051298s elapsed) | |
Apr 21 22:55:28.833: INFO: Waiting for pod netexec in namespace 'e2e-tests-kubectl-8c8dt' status to be 'running and ready'(found phase: "Running", readiness: false) (14.032310451s elapsed) | |
Apr 21 22:55:30.837: INFO: Waiting for pod netexec in namespace 'e2e-tests-kubectl-8c8dt' status to be 'running and ready'(found phase: "Running", readiness: false) (16.036665393s elapsed) | |
Apr 21 22:55:32.841: INFO: Waiting for pod netexec in namespace 'e2e-tests-kubectl-8c8dt' status to be 'running and ready'(found phase: "Running", readiness: false) (18.040631504s elapsed) | |
Apr 21 22:55:34.846: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [netexec] | |
STEP: uploading kubeconfig to netexec | |
STEP: uploading kubectl to netexec | |
STEP: Running kubectl in netexec via an HTTP proxy using https_proxy | |
Apr 21 22:55:35.427: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config create -f /jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/test/images/goproxy/pod.yaml --namespace=e2e-tests-kubectl-8c8dt' | |
Apr 21 22:55:35.547: INFO: stderr: "" | |
Apr 21 22:55:35.547: INFO: stdout: "pod \"goproxy\" created" | |
Apr 21 22:55:35.547: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [goproxy] | |
Apr 21 22:55:35.547: INFO: Waiting up to 5m0s for pod goproxy status to be running and ready | |
Apr 21 22:55:35.551: INFO: Waiting for pod goproxy in namespace 'e2e-tests-kubectl-8c8dt' status to be 'running and ready'(found phase: "Pending", readiness: false) (3.800285ms elapsed) | |
Apr 21 22:55:37.555: INFO: Waiting for pod goproxy in namespace 'e2e-tests-kubectl-8c8dt' status to be 'running and ready'(found phase: "Pending", readiness: false) (2.008374388s elapsed) | |
Apr 21 22:55:39.757: INFO: Waiting for pod goproxy in namespace 'e2e-tests-kubectl-8c8dt' status to be 'running and ready'(found phase: "Pending", readiness: false) (4.210089238s elapsed) | |
Apr 21 22:55:41.800: INFO: Waiting for pod goproxy in namespace 'e2e-tests-kubectl-8c8dt' status to be 'running and ready'(found phase: "Pending", readiness: false) (6.252521712s elapsed) | |
Apr 21 22:55:43.807: INFO: Waiting for pod goproxy in namespace 'e2e-tests-kubectl-8c8dt' status to be 'running and ready'(found phase: "Pending", readiness: false) (8.259966859s elapsed) | |
Apr 21 22:55:45.840: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [goproxy] | |
Apr 21 22:55:45.865: INFO: About to remote exec: https_proxy=http://10.245.6.8:8080 ./uploads/upload364200617 --kubeconfig=/uploads/upload006810911 --server=https://146.148.88.146:443 --namespace=e2e-tests-kubectl-8c8dt exec nginx echo running in container | |
Apr 21 22:55:46.581: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config log goproxy --namespace=e2e-tests-kubectl-8c8dt' | |
Apr 21 22:55:46.738: INFO: stderr: "" | |
Apr 21 22:55:46.738: INFO: stdout: "2016/04/22 05:55:45 [001] INFO: Running 0 CONNECT handlers\n2016/04/22 05:55:45 [001] INFO: Accepting CONNECT to 146.148.88.146:443\n2016/04/22 05:55:46 [002] INFO: Running 0 CONNECT handlers\n2016/04/22 05:55:46 [002] INFO: Accepting CONNECT to 146.148.88.146:443" | |
STEP: using delete to clean up resources | |
Apr 21 22:55:46.738: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config delete --grace-period=0 -f /jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/test/images/goproxy/pod.yaml --namespace=e2e-tests-kubectl-8c8dt' | |
Apr 21 22:55:47.004: INFO: stderr: "" | |
Apr 21 22:55:47.005: INFO: stdout: "pod \"goproxy\" deleted" | |
Apr 21 22:55:47.005: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get rc,svc -l name=goproxy --no-headers --namespace=e2e-tests-kubectl-8c8dt' | |
Apr 21 22:55:47.157: INFO: stderr: "" | |
Apr 21 22:55:47.157: INFO: stdout: "" | |
Apr 21 22:55:47.157: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods -l name=goproxy --namespace=e2e-tests-kubectl-8c8dt -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' | |
Apr 21 22:55:47.308: INFO: stderr: "" | |
Apr 21 22:55:47.308: INFO: stdout: "" | |
STEP: Running kubectl in netexec via an HTTP proxy using HTTPS_PROXY | |
Apr 21 22:55:47.308: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config create -f /jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/test/images/goproxy/pod.yaml --namespace=e2e-tests-kubectl-8c8dt' | |
Apr 21 22:55:47.554: INFO: stderr: "" | |
Apr 21 22:55:47.554: INFO: stdout: "pod \"goproxy\" created" | |
Apr 21 22:55:47.554: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [goproxy] | |
Apr 21 22:55:47.555: INFO: Waiting up to 5m0s for pod goproxy status to be running and ready | |
Apr 21 22:55:47.603: INFO: Waiting for pod goproxy in namespace 'e2e-tests-kubectl-8c8dt' status to be 'running and ready'(found phase: "Pending", readiness: false) (48.481117ms elapsed) | |
Apr 21 22:55:49.621: INFO: Waiting for pod goproxy in namespace 'e2e-tests-kubectl-8c8dt' status to be 'running and ready'(found phase: "Pending", readiness: false) (2.066591647s elapsed) | |
Apr 21 22:55:51.634: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [goproxy] | |
Apr 21 22:55:51.639: INFO: About to remote exec: HTTPS_PROXY=http://10.245.6.6:8080 ./uploads/upload364200617 --kubeconfig=/uploads/upload006810911 --server=https://146.148.88.146:443 --namespace=e2e-tests-kubectl-8c8dt exec nginx echo running in container | |
Apr 21 22:55:52.329: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config log goproxy --namespace=e2e-tests-kubectl-8c8dt' | |
Apr 21 22:55:52.510: INFO: stderr: "" | |
Apr 21 22:55:52.510: INFO: stdout: "2016/04/22 05:55:51 [001] INFO: Running 0 CONNECT handlers\n2016/04/22 05:55:51 [001] INFO: Accepting CONNECT to 146.148.88.146:443\n2016/04/22 05:55:51 [002] INFO: Running 0 CONNECT handlers\n2016/04/22 05:55:51 [002] INFO: Accepting CONNECT to 146.148.88.146:443" | |
STEP: using delete to clean up resources | |
Apr 21 22:55:52.510: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config delete --grace-period=0 -f /jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/test/images/goproxy/pod.yaml --namespace=e2e-tests-kubectl-8c8dt' | |
Apr 21 22:55:52.767: INFO: stderr: "" | |
Apr 21 22:55:52.767: INFO: stdout: "pod \"goproxy\" deleted" | |
Apr 21 22:55:52.767: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get rc,svc -l name=goproxy --no-headers --namespace=e2e-tests-kubectl-8c8dt' | |
Apr 21 22:55:52.872: INFO: stderr: "" | |
Apr 21 22:55:52.872: INFO: stdout: "" | |
Apr 21 22:55:52.872: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods -l name=goproxy --namespace=e2e-tests-kubectl-8c8dt -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' | |
Apr 21 22:55:53.000: INFO: stderr: "" | |
Apr 21 22:55:53.001: INFO: stdout: "" | |
STEP: using delete to clean up resources | |
Apr 21 22:55:53.001: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config delete --grace-period=0 -f /jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/test/images/netexec/pod.yaml --namespace=e2e-tests-kubectl-8c8dt' | |
Apr 21 22:55:53.122: INFO: stderr: "" | |
Apr 21 22:55:53.122: INFO: stdout: "pod \"netexec\" deleted" | |
Apr 21 22:55:53.122: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get rc,svc -l name=netexec --no-headers --namespace=e2e-tests-kubectl-8c8dt' | |
Apr 21 22:55:53.218: INFO: stderr: "" | |
Apr 21 22:55:53.218: INFO: stdout: "" | |
Apr 21 22:55:53.219: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods -l name=netexec --namespace=e2e-tests-kubectl-8c8dt -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' | |
Apr 21 22:55:53.351: INFO: stderr: "" | |
Apr 21 22:55:53.351: INFO: stdout: "" | |
[AfterEach] [k8s.io] Simple pod | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:215 | |
STEP: using delete to clean up resources | |
Apr 21 22:55:53.351: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config delete --grace-period=0 -f /jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/test/e2e/testing-manifests/kubectl/pod-with-readiness-probe.yaml --namespace=e2e-tests-kubectl-8c8dt' | |
Apr 21 22:55:53.453: INFO: stderr: "" | |
Apr 21 22:55:53.453: INFO: stdout: "pod \"nginx\" deleted" | |
Apr 21 22:55:53.453: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-8c8dt' | |
Apr 21 22:55:53.535: INFO: stderr: "" | |
Apr 21 22:55:53.535: INFO: stdout: "" | |
Apr 21 22:55:53.535: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-8c8dt -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' | |
Apr 21 22:55:53.623: INFO: stderr: "" | |
Apr 21 22:55:53.623: INFO: stdout: "" | |
[AfterEach] [k8s.io] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:55:53.623: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-kubectl-8c8dt" for this suite. | |
• [SLOW TEST:64.288 seconds] | |
[k8s.io] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
[k8s.io] Simple pod | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should support exec through an HTTP proxy | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:478 | |
------------------------------ | |
[BeforeEach] [k8s.io] ReplicationController | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:55:41.920: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should serve a basic image on each replica with a public image [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:38 | |
STEP: Creating replication controller my-hostname-basic-d9075b1b-084e-11e6-bed5-42010af00007 | |
Apr 21 22:55:42.368: INFO: Pod name my-hostname-basic-d9075b1b-084e-11e6-bed5-42010af00007: Found 0 pods out of 2 | |
Apr 21 22:55:47.385: INFO: Pod name my-hostname-basic-d9075b1b-084e-11e6-bed5-42010af00007: Found 2 pods out of 2 | |
STEP: Ensuring each pod is running | |
W0421 22:55:47.385292 17726 request.go:344] Field selector: v1 - pods - metadata.name - my-hostname-basic-d9075b1b-084e-11e6-bed5-42010af00007-xjwyg: need to check if this is versioned correctly. | |
W0421 22:55:47.391420 17726 request.go:344] Field selector: v1 - pods - metadata.name - my-hostname-basic-d9075b1b-084e-11e6-bed5-42010af00007-xlws8: need to check if this is versioned correctly. | |
STEP: Trying to dial each unique pod | |
Apr 21 22:55:52.596: INFO: Controller my-hostname-basic-d9075b1b-084e-11e6-bed5-42010af00007: Got expected result from replica 1 [my-hostname-basic-d9075b1b-084e-11e6-bed5-42010af00007-xjwyg]: "my-hostname-basic-d9075b1b-084e-11e6-bed5-42010af00007-xjwyg", 1 of 2 required successes so far | |
Apr 21 22:55:52.717: INFO: Controller my-hostname-basic-d9075b1b-084e-11e6-bed5-42010af00007: Got expected result from replica 2 [my-hostname-basic-d9075b1b-084e-11e6-bed5-42010af00007-xlws8]: "my-hostname-basic-d9075b1b-084e-11e6-bed5-42010af00007-xlws8", 2 of 2 required successes so far | |
STEP: deleting replication controller my-hostname-basic-d9075b1b-084e-11e6-bed5-42010af00007 in namespace e2e-tests-replication-controller-vyotm | |
Apr 21 22:55:54.826: INFO: Deleting RC my-hostname-basic-d9075b1b-084e-11e6-bed5-42010af00007 took: 2.062121895s | |
Apr 21 22:55:54.826: INFO: Terminating RC my-hostname-basic-d9075b1b-084e-11e6-bed5-42010af00007 pods took: 107.531µs | |
[AfterEach] [k8s.io] ReplicationController | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:55:54.826: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-replication-controller-vyotm" for this suite. | |
• [SLOW TEST:27.931 seconds] | |
[k8s.io] ReplicationController | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should serve a basic image on each replica with a public image [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:38 | |
------------------------------ | |
[BeforeEach] [k8s.io] EmptyDir wrapper volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:55:55.093: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should becomes running
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:179
W0421 22:55:55.182575 17762 request.go:344] Field selector: v1 - pods - metadata.name - pod-secrets-e0b41d9b-084e-11e6-bcb9-42010af00007: need to check if this is versioned correctly.
STEP: Cleaning up the secret
STEP: Cleaning up the git server pod
STEP: Cleaning up the git server svc
STEP: Cleaning up the git vol pod
[AfterEach] [k8s.io] EmptyDir wrapper volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:55:59.944: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-krwsi" for this suite.
• [SLOW TEST:14.914 seconds]
[k8s.io] EmptyDir wrapper volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
should becomes running
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:179
------------------------------
[BeforeEach] [k8s.io] Monitoring
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:55:56.979: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Monitoring
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:39
[It] should verify monitoring pods and all cluster nodes are available on influxdb using heapster.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:43
[AfterEach] [k8s.io] Monitoring
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:55:57.281: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-monitoring-1x9ya" for this suite.
• [SLOW TEST:15.800 seconds]
[k8s.io] Monitoring
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
should verify monitoring pods and all cluster nodes are available on influxdb using heapster.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:43
------------------------------
[BeforeEach] [k8s.io] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:55:59.134: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:69
STEP: Creating a pod to test emptydir 0666 on tmpfs
Apr 21 22:55:59.854: INFO: Waiting up to 5m0s for pod pod-e36a26ce-084e-11e6-8b58-42010af00007 status to be success or failure
Apr 21 22:55:59.884: INFO: No Status.Info for container 'test-container' in pod 'pod-e36a26ce-084e-11e6-8b58-42010af00007' yet
Apr 21 22:55:59.884: INFO: Waiting for pod pod-e36a26ce-084e-11e6-8b58-42010af00007 in namespace 'e2e-tests-emptydir-gebvv' status to be 'success or failure'(found phase: "Pending", readiness: false) (30.156737ms elapsed)
Apr 21 22:56:01.889: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-e36a26ce-084e-11e6-8b58-42010af00007' in namespace 'e2e-tests-emptydir-gebvv' so far
Apr 21 22:56:01.889: INFO: Waiting for pod pod-e36a26ce-084e-11e6-8b58-42010af00007 in namespace 'e2e-tests-emptydir-gebvv' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.034507507s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node e2e-gce-master-1-minion-x3cg pod pod-e36a26ce-084e-11e6-8b58-42010af00007 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": tmpfs
content of file "/test-volume/test-file": mount-tester new file
perms of file "/test-volume/test-file": -rw-rw-rw-
[AfterEach] [k8s.io] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:56:04.151: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-gebvv" for this suite.
• [SLOW TEST:15.152 seconds]
[k8s.io] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
should support (root,0666,tmpfs) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:69
------------------------------
[BeforeEach] [k8s.io] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:54:40.496: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should scale a job down
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:162
STEP: Creating a job
STEP: Ensuring active pods == startParallelism
STEP: scale job down
STEP: Ensuring active pods == endParallelism
[AfterEach] [k8s.io] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:55:20.662: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-job-4burc" for this suite.
• [SLOW TEST:95.188 seconds]
[k8s.io] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
should scale a job down
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:162
------------------------------
SS
------------------------------
[BeforeEach] [k8s.io] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:55:59.998: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:74
Apr 21 22:56:00.286: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
[It] should serve multiport endpoints from pods [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:225
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-24ysp
STEP: waiting up to 1m0s for service multi-endpoint-test in namespace e2e-tests-services-24ysp to expose endpoints map[]
Apr 21 22:56:00.360: INFO: Get endpoints failed (7.535062ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Apr 21 22:56:01.374: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-24ysp exposes endpoints map[] (1.021996482s elapsed)
STEP: creating pod pod1 in namespace e2e-tests-services-24ysp
STEP: waiting up to 1m0s for service multi-endpoint-test in namespace e2e-tests-services-24ysp to expose endpoints map[pod1:[100]]
Apr 21 22:56:04.509: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-24ysp exposes endpoints map[pod1:[100]] (3.105924297s elapsed)
STEP: creating pod pod2 in namespace e2e-tests-services-24ysp
STEP: waiting up to 1m0s for service multi-endpoint-test in namespace e2e-tests-services-24ysp to expose endpoints map[pod1:[100] pod2:[101]]
Apr 21 22:56:06.634: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-24ysp exposes endpoints map[pod1:[100] pod2:[101]] (2.107436541s elapsed)
STEP: deleting pod pod1 in namespace e2e-tests-services-24ysp
STEP: waiting up to 1m0s for service multi-endpoint-test in namespace e2e-tests-services-24ysp to expose endpoints map[pod2:[101]]
Apr 21 22:56:07.714: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-24ysp exposes endpoints map[pod2:[101]] (1.051989047s elapsed)
STEP: deleting pod pod2 in namespace e2e-tests-services-24ysp
STEP: waiting up to 1m0s for service multi-endpoint-test in namespace e2e-tests-services-24ysp to expose endpoints map[]
Apr 21 22:56:08.774: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-24ysp exposes endpoints map[] (1.031158094s elapsed)
[AfterEach] [k8s.io] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:56:09.282: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-services-24ysp" for this suite.
• [SLOW TEST:19.515 seconds]
[k8s.io] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
should serve multiport endpoints from pods [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:225
------------------------------
[BeforeEach] version v1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:56:01.026: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:59
Apr 21 22:56:01.204: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0:10250/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="auth.log">... (200; 7.309156ms)
Apr 21 22:56:01.213: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0:10250/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="auth.log">... (200; 9.451468ms)
Apr 21 22:56:01.233: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0:10250/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="auth.log">... (200; 19.685781ms)
Apr 21 22:56:01.249: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0:10250/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="auth.log">... (200; 15.889494ms)
Apr 21 22:56:01.262: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0:10250/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="auth.log">... (200; 13.115087ms)
Apr 21 22:56:01.286: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0:10250/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="auth.log">... (200; 24.088537ms)
Apr 21 22:56:01.300: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0:10250/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="auth.log">... (200; 13.436283ms)
Apr 21 22:56:01.314: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0:10250/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="auth.log">... (200; 14.076294ms)
Apr 21 22:56:01.324: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0:10250/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="auth.log">... (200; 9.925521ms)
Apr 21 22:56:01.334: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0:10250/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="auth.log">... (200; 9.834046ms)
Apr 21 22:56:01.348: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0:10250/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="auth.log">... (200; 14.249999ms)
Apr 21 22:56:01.357: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0:10250/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="auth.log">... (200; 8.520818ms)
Apr 21 22:56:01.388: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0:10250/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="auth.log">... (200; 31.177409ms)
Apr 21 22:56:01.423: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0:10250/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="auth.log">... (200; 35.465933ms)
Apr 21 22:56:01.459: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0:10250/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="auth.log">... (200; 35.639658ms)
Apr 21 22:56:01.470: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0:10250/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="auth.log">... (200; 11.046432ms)
Apr 21 22:56:01.513: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0:10250/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="auth.log">... (200; 43.223762ms)
Apr 21 22:56:01.532: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0:10250/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="auth.log">... (200; 18.655773ms)
Apr 21 22:56:01.569: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0:10250/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="auth.log">... (200; 37.399388ms)
Apr 21 22:56:01.664: INFO: /api/v1/proxy/nodes/e2e-gce-master-1-minion-6ch0:10250/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="auth.log">... (200; 94.927317ms)
[AfterEach] version v1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:56:01.664: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-uluct" for this suite.
• [SLOW TEST:20.726 seconds]
[k8s.io] Proxy
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
version v1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:40
should proxy logs on node with explicit kubelet port [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:59
------------------------------
[BeforeEach] [k8s.io] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:55:14.145: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:74
Apr 21 22:55:14.190: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
[It] should be able to create a functioning NodePort service
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:405
STEP: creating service nodeport-test with type=NodePort in namespace e2e-tests-services-gjmvo
STEP: creating pod to be part of service nodeport-test
Apr 21 22:55:14.370: INFO: Waiting up to 2m0s for 1 pods to be created
Apr 21 22:55:14.379: INFO: Found 0/1 pods - will retry
Apr 21 22:55:16.394: INFO: Found all 1 pods
Apr 21 22:55:16.394: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [nodeport-test-gn8q6]
Apr 21 22:55:16.394: INFO: Waiting up to 2m0s for pod nodeport-test-gn8q6 status to be running and ready
Apr 21 22:55:16.404: INFO: Waiting for pod nodeport-test-gn8q6 in namespace 'e2e-tests-services-gjmvo' status to be 'running and ready'(found phase: "Pending", readiness: false) (10.231708ms elapsed)
Apr 21 22:55:18.408: INFO: Waiting for pod nodeport-test-gn8q6 in namespace 'e2e-tests-services-gjmvo' status to be 'running and ready'(found phase: "Pending", readiness: false) (2.014352718s elapsed)
Apr 21 22:55:20.412: INFO: Waiting for pod nodeport-test-gn8q6 in namespace 'e2e-tests-services-gjmvo' status to be 'running and ready'(found phase: "Pending", readiness: false) (4.018192463s elapsed)
Apr 21 22:55:22.417: INFO: Waiting for pod nodeport-test-gn8q6 in namespace 'e2e-tests-services-gjmvo' status to be 'running and ready'(found phase: "Pending", readiness: false) (6.023029379s elapsed)
Apr 21 22:55:24.422: INFO: Waiting for pod nodeport-test-gn8q6 in namespace 'e2e-tests-services-gjmvo' status to be 'running and ready'(found phase: "Pending", readiness: false) (8.027559302s elapsed)
Apr 21 22:55:26.426: INFO: Waiting for pod nodeport-test-gn8q6 in namespace 'e2e-tests-services-gjmvo' status to be 'running and ready'(found phase: "Pending", readiness: false) (10.031834766s elapsed)
Apr 21 22:55:28.430: INFO: Waiting for pod nodeport-test-gn8q6 in namespace 'e2e-tests-services-gjmvo' status to be 'running and ready'(found phase: "Pending", readiness: false) (12.035392759s elapsed)
Apr 21 22:55:30.433: INFO: Waiting for pod nodeport-test-gn8q6 in namespace 'e2e-tests-services-gjmvo' status to be 'running and ready'(found phase: "Pending", readiness: false) (14.038803608s elapsed)
Apr 21 22:55:32.444: INFO: Waiting for pod nodeport-test-gn8q6 in namespace 'e2e-tests-services-gjmvo' status to be 'running and ready'(found phase: "Running", readiness: false) (16.049638707s elapsed)
Apr 21 22:55:34.448: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [nodeport-test-gn8q6]
STEP: hitting the pod through the service's NodePort
Apr 21 22:55:34.448: INFO: Testing HTTP reachability of http://8.34.213.250:30345/echo?msg=hello
Apr 21 22:55:34.477: INFO: Successfully reached http://8.34.213.250:30345/echo?msg=hello
STEP: verifying the node port is locked
W0421 22:55:34.481564 17665 request.go:344] Field selector: v1 - pods - metadata.name - hostexec: need to check if this is versioned correctly.
Apr 21 22:55:36.715: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config exec --namespace=e2e-tests-services-gjmvo hostexec -- /bin/sh -c for i in $(seq 1 300); do if ss -ant46 'sport = :30345' | grep ^LISTEN; then exit 0; fi; sleep 1; done; exit 1'
Apr 21 22:55:36.976: INFO: stderr: ""
[AfterEach] [k8s.io] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:55:36.977: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-services-gjmvo" for this suite.
• [SLOW TEST:67.893 seconds]
[k8s.io] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
should be able to create a functioning NodePort service
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:405
------------------------------
[BeforeEach] [k8s.io] ReplicaSet
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:56:06.785: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a private image
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:47
STEP: Creating ReplicaSet my-hostname-private-e7bc0fb4-084e-11e6-b067-42010af00007
Apr 21 22:56:07.047: INFO: Pod name my-hostname-private-e7bc0fb4-084e-11e6-b067-42010af00007: Found 0 pods out of 2
Apr 21 22:56:12.052: INFO: Pod name my-hostname-private-e7bc0fb4-084e-11e6-b067-42010af00007: Found 2 pods out of 2
STEP: Ensuring each pod is running
W0421 22:56:12.052460 17751 request.go:344] Field selector: v1 - pods - metadata.name - my-hostname-private-e7bc0fb4-084e-11e6-b067-42010af00007-ja6y3: need to check if this is versioned correctly.
W0421 22:56:12.054628 17751 request.go:344] Field selector: v1 - pods - metadata.name - my-hostname-private-e7bc0fb4-084e-11e6-b067-42010af00007-l49h2: need to check if this is versioned correctly.
STEP: Trying to dial each unique pod
Apr 21 22:56:17.137: INFO: Controller my-hostname-private-e7bc0fb4-084e-11e6-b067-42010af00007: Got expected result from replica 1 [my-hostname-private-e7bc0fb4-084e-11e6-b067-42010af00007-ja6y3]: "my-hostname-private-e7bc0fb4-084e-11e6-b067-42010af00007-ja6y3", 1 of 2 required successes so far
Apr 21 22:56:17.158: INFO: Controller my-hostname-private-e7bc0fb4-084e-11e6-b067-42010af00007: Got expected result from replica 2 [my-hostname-private-e7bc0fb4-084e-11e6-b067-42010af00007-l49h2]: "my-hostname-private-e7bc0fb4-084e-11e6-b067-42010af00007-l49h2", 2 of 2 required successes so far
STEP: deleting ReplicaSet my-hostname-private-e7bc0fb4-084e-11e6-b067-42010af00007 in namespace e2e-tests-replicaset-hy7yn
Apr 21 22:56:19.229: INFO: Deleting RS my-hostname-private-e7bc0fb4-084e-11e6-b067-42010af00007 took: 2.06267589s
Apr 21 22:56:19.245: INFO: Terminating ReplicaSet my-hostname-private-e7bc0fb4-084e-11e6-b067-42010af00007 pods took: 16.357289ms
[AfterEach] [k8s.io] ReplicaSet
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:56:19.245: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-hy7yn" for this suite.
• [SLOW TEST:17.588 seconds]
[k8s.io] ReplicaSet
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
should serve a basic image on each replica with a private image
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:47
------------------------------
[BeforeEach] [k8s.io] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:56:03.074: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:74
Apr 21 22:56:03.186: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
[It] should serve a basic endpoint from pods [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:141
STEP: creating service endpoint-test2 in namespace e2e-tests-services-b35u5
STEP: waiting up to 1m0s for service endpoint-test2 in namespace e2e-tests-services-b35u5 to expose endpoints map[]
Apr 21 22:56:03.255: INFO: Get endpoints failed (17.963653ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Apr 21 22:56:04.265: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-b35u5 exposes endpoints map[] (1.028548924s elapsed)
STEP: creating pod pod1 in namespace e2e-tests-services-b35u5
STEP: waiting up to 1m0s for service endpoint-test2 in namespace e2e-tests-services-b35u5 to expose endpoints map[pod1:[80]]
Apr 21 22:56:06.411: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-b35u5 exposes endpoints map[pod1:[80]] (2.099232048s elapsed)
STEP: creating pod pod2 in namespace e2e-tests-services-b35u5
STEP: waiting up to 1m0s for service endpoint-test2 in namespace e2e-tests-services-b35u5 to expose endpoints map[pod1:[80] pod2:[80]]
Apr 21 22:56:07.506: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-b35u5 exposes endpoints map[pod2:[80] pod1:[80]] (1.075507638s elapsed)
STEP: deleting pod pod1 in namespace e2e-tests-services-b35u5
STEP: waiting up to 1m0s for service endpoint-test2 in namespace e2e-tests-services-b35u5 to expose endpoints map[pod2:[80]]
Apr 21 22:56:08.574: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-b35u5 exposes endpoints map[pod2:[80]] (1.046811653s elapsed)
STEP: deleting pod pod2 in namespace e2e-tests-services-b35u5
STEP: waiting up to 1m0s for service endpoint-test2 in namespace e2e-tests-services-b35u5 to expose endpoints map[]
Apr 21 22:56:09.654: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-b35u5 exposes endpoints map[] (1.047867512s elapsed)
[AfterEach] [k8s.io] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:56:09.770: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-services-b35u5" for this suite.
• [SLOW TEST:22.275 seconds]
[k8s.io] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
should serve a basic endpoint from pods [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:141
------------------------------
[BeforeEach] [k8s.io] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:53:48.950: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should *not* be restarted with a /healthz http liveness probe [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:806
STEP: Creating pod liveness-http in namespace e2e-tests-pods-5lkm4
W0421 22:53:50.908095 17657 request.go:344] Field selector: v1 - pods - metadata.name - liveness-http: need to check if this is versioned correctly.
Apr 21 22:54:07.861: INFO: Started pod liveness-http in namespace e2e-tests-pods-5lkm4
STEP: checking the pod's current state and verifying that restartCount is present
Apr 21 22:54:07.880: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:56:08.422: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-5lkm4" for this suite.
• [SLOW TEST:159.555 seconds]
[k8s.io] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
should *not* be restarted with a /healthz http liveness probe [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:806
------------------------------
SSS
------------------------------
[BeforeEach] [k8s.io] MetricsGrabber
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:56:08.142: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] MetricsGrabber
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/metrics_grabber_test.go:91
[It] should grab all metrics from a Kubelet.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/metrics_grabber_test.go:110
STEP: Proxying to Node through the API server
[AfterEach] [k8s.io] MetricsGrabber
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:56:08.591: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-metrics-grabber-xcj51" for this suite.
• [SLOW TEST:20.555 seconds]
[k8s.io] MetricsGrabber
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
should grab all metrics from a Kubelet.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/metrics_grabber_test.go:110
------------------------------
[BeforeEach] [k8s.io] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:56:08.675: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:89
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [k8s.io] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:56:15.244: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-resourcequota-xkmc0" for this suite.
• [SLOW TEST:21.930 seconds]
[k8s.io] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
should create a ResourceQuota and capture the life of a service.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:89
------------------------------
S
------------------------------
[BeforeEach] [k8s.io] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:55:41.229: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [k8s.io] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:142 | |
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:713 | |
Apr 21 22:55:41.501: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config create -f /jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/examples/guestbook-go/redis-master-controller.json --namespace=e2e-tests-kubectl-6ms0m' | |
Apr 21 22:55:41.700: INFO: stderr: "" | |
Apr 21 22:55:41.700: INFO: stdout: "replicationcontroller \"redis-master\" created" | |
Apr 21 22:55:41.700: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config create -f /jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/examples/guestbook-go/redis-master-service.json --namespace=e2e-tests-kubectl-6ms0m' | |
Apr 21 22:55:42.170: INFO: stderr: "" | |
Apr 21 22:55:42.170: INFO: stdout: "service \"redis-master\" created" | |
Apr 21 22:55:43.175: INFO: Selector matched 1 pods for map[app:redis] | |
Apr 21 22:55:43.175: INFO: Found 0 / 1 | |
Apr 21 22:55:44.174: INFO: Selector matched 1 pods for map[app:redis] | |
Apr 21 22:55:44.174: INFO: Found 0 / 1 | |
Apr 21 22:55:45.175: INFO: Selector matched 1 pods for map[app:redis] | |
Apr 21 22:55:45.175: INFO: Found 1 / 1 | |
Apr 21 22:55:45.175: INFO: WaitFor completed with timeout 1m30s. Pods found = 1 out of 1 | |
Apr 21 22:55:45.209: INFO: Selector matched 1 pods for map[app:redis] | |
Apr 21 22:55:45.209: INFO: ForEach: Found 1 pods from the filter. Now looping through them. | |
Apr 21 22:55:45.209: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config describe pod redis-master-ndtji --namespace=e2e-tests-kubectl-6ms0m' | |
Apr 21 22:55:45.580: INFO: stderr: "" | |
Apr 21 22:55:45.580: INFO: stdout: "Name:\t\tredis-master-ndtji\nNamespace:\te2e-tests-kubectl-6ms0m\nNode:\t\te2e-gce-master-1-minion-x3cg/10.240.0.3\nStart Time:\tThu, 21 Apr 2016 22:55:41 -0700\nLabels:\t\tapp=redis,role=master\nStatus:\t\tRunning\nIP:\t\t10.245.1.5\nControllers:\tReplicationController/redis-master\nContainers:\n redis-master:\n Container ID:\tdocker://5c15d508e0ae106d338ef6b39ec4f726ce2848b69c25fa02254de445ce597632\n Image:\t\tredis\n Image ID:\t\tdocker://1f4ff6e27d642d55bf77f9fffb02a662e7705e966aa314f6745a67e8c6b30718\n Port:\t\t6379/TCP\n QoS Tier:\n cpu:\t\t\tBestEffort\n memory:\t\t\tBestEffort\n State:\t\t\tRunning\n Started:\t\t\tThu, 21 Apr 2016 22:55:44 -0700\n Ready:\t\t\tTrue\n Restart Count:\t\t0\n Environment Variables:\t<none>\nConditions:\n Type\t\tStatus\n Ready \tTrue \nVolumes:\n default-token-8lr0n:\n Type:\tSecret (a volume populated by a Secret)\n SecretName:\tdefault-token-8lr0n\nEvents:\n FirstSeen\tLastSeen\tCount\tFrom\t\t\t\t\tSubobjectPath\t\t\tType\t\tReason\t\tMessage\n ---------\t--------\t-----\t----\t\t\t\t\t-------------\t\t\t--------\t------\t\t-------\n 4s\t\t4s\t\t1\t{default-scheduler }\t\t\t\t\t\t\tNormal\t\tScheduled\tSuccessfully assigned redis-master-ndtji to e2e-gce-master-1-minion-x3cg\n 2s\t\t2s\t\t1\t{kubelet e2e-gce-master-1-minion-x3cg}\tspec.containers{redis-master}\tNormal\t\tPulling\t\tpulling image \"redis\"\n 1s\t\t1s\t\t1\t{kubelet e2e-gce-master-1-minion-x3cg}\tspec.containers{redis-master}\tNormal\t\tPulled\t\tSuccessfully pulled image \"redis\"\n 1s\t\t1s\t\t1\t{kubelet e2e-gce-master-1-minion-x3cg}\tspec.containers{redis-master}\tNormal\t\tCreated\t\tCreated container with docker id 5c15d508e0ae\n 1s\t\t1s\t\t1\t{kubelet e2e-gce-master-1-minion-x3cg}\tspec.containers{redis-master}\tNormal\t\tStarted\t\tStarted container with docker id 5c15d508e0ae" | |
Apr 21 22:55:45.580: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-6ms0m' | |
Apr 21 22:55:45.780: INFO: stderr: "" | |
Apr 21 22:55:45.780: INFO: stdout: "Name:\t\tredis-master\nNamespace:\te2e-tests-kubectl-6ms0m\nImage(s):\tredis\nSelector:\tapp=redis,role=master\nLabels:\t\tapp=redis,role=master\nReplicas:\t1 current / 1 desired\nPods Status:\t1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nNo volumes.\nEvents:\n FirstSeen\tLastSeen\tCount\tFrom\t\t\t\tSubobjectPath\tType\t\tReason\t\t\tMessage\n ---------\t--------\t-----\t----\t\t\t\t-------------\t--------\t------\t\t\t-------\n 4s\t\t4s\t\t1\t{replication-controller }\t\t\tNormal\t\tSuccessfulCreate\tCreated pod: redis-master-ndtji" | |
Apr 21 22:55:45.780: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-6ms0m' | |
Apr 21 22:55:45.967: INFO: stderr: "" | |
Apr 21 22:55:45.967: INFO: stdout: "Name:\t\t\tredis-master\nNamespace:\t\te2e-tests-kubectl-6ms0m\nLabels:\t\t\tapp=redis,role=master\nSelector:\t\tapp=redis,role=master\nType:\t\t\tClusterIP\nIP:\t\t\t10.0.119.96\nPort:\t\t\t<unset>\t6379/TCP\nEndpoints:\t\t10.245.1.5:6379\nSession Affinity:\tNone\nNo events." | |
Apr 21 22:55:45.977: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config describe node e2e-gce-master-1-master' | |
Apr 21 22:55:46.354: INFO: stderr: "" | |
Apr 21 22:55:46.354: INFO: stdout: "Name:\t\t\te2e-gce-master-1-master\nLabels:\t\t\tbeta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-f,kubernetes.io/hostname=e2e-gce-master-1-master,master=\nCreationTimestamp:\tThu, 21 Apr 2016 22:51:10 -0700\nPhase:\t\t\t\nConditions:\n  Type\t\tStatus\tLastHeartbeatTime\t\t\tLastTransitionTime\t\t\tReason\t\t\t\tMessage\n  ----\t\t------\t-----------------\t\t\t------------------\t\t\t------\t\t\t\t-------\n  OutOfDisk \tFalse \tThu, 21 Apr 2016 22:55:42 -0700 \tThu, 21 Apr 2016 22:51:10 -0700 \tKubeletHasSufficientDisk \tkubelet has sufficient disk space available\n  Ready \tTrue \tThu, 21 Apr 2016 22:55:42 -0700 \tThu, 21 Apr 2016 22:51:10 -0700 \tKubeletReady \t\t\tkubelet is posting ready status. WARNING: CPU hardcapping unsupported\nAddresses:\t10.240.0.2,146.148.88.146\nCapacity:\n cpu:\t\t2\n memory:\t7679820Ki\n pods:\t\t110\nSystem Info:\n Machine ID:\t\t\t\n System UUID:\t\t\t2C164C14-5F22-23CD-0A00-1C580AB99D23\n Boot ID:\t\t\te5b88aa1-0744-4279-bcea-63398dc012f3\n Kernel Version:\t\t3.16.0-4-amd64\n OS Image:\t\t\tDebian GNU/Linux 7 (wheezy)\n Container Runtime Version:\tdocker://1.9.1\n Kubelet Version:\t\tv1.3.0-alpha.2.487+fe22780e894ff3\n Kube-Proxy Version:\t\tv1.3.0-alpha.2.487+fe22780e894ff3\nPodCIDR:\t\t\t10.245.0.0/24\nExternalID:\t\t\t7202783440443540890\nNon-terminated Pods:\t\t(5 in total)\n  Namespace\t\t\tName\t\t\t\t\t\t\tCPU Requests\tCPU Limits\tMemory Requests\tMemory Limits\n  ---------\t\t\t----\t\t\t\t\t\t\t------------\t----------\t---------------\t-------------\n  kube-system\t\t\tetcd-server-e2e-gce-master-1-master\t\t\t200m (10%)\t0 (0%)\t\t0 (0%)\t\t0 (0%)\n  kube-system\t\t\tetcd-server-events-e2e-gce-master-1-master\t\t100m (5%)\t0 (0%)\t\t0 (0%)\t\t0 (0%)\n  kube-system\t\t\tkube-apiserver-e2e-gce-master-1-master\t\t\t250m (12%)\t0 (0%)\t\t0 (0%)\t\t0 (0%)\n  kube-system\t\t\tkube-controller-manager-e2e-gce-master-1-master\t\t200m (10%)\t0 (0%)\t\t0 (0%)\t\t0 (0%)\n  kube-system\t\t\tkube-scheduler-e2e-gce-master-1-master\t\t\t100m (5%)\t0 (0%)\t\t0 (0%)\t\t0 (0%)\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted. More info: http://releases.k8s.io/HEAD/docs/user-guide/compute-resources.md)\n  CPU Requests\tCPU Limits\tMemory Requests\tMemory Limits\n  ------------\t----------\t---------------\t-------------\n  850m (42%)\t0 (0%)\t\t0 (0%)\t\t0 (0%)\nEvents:\n  FirstSeen\tLastSeen\tCount\tFrom\t\t\t\t\tSubobjectPath\tType\t\tReason\t\t\tMessage\n  ---------\t--------\t-----\t----\t\t\t\t\t-------------\t--------\t------\t\t\t-------\n  4m\t\t4m\t\t1\t{kubelet e2e-gce-master-1-master}\t\t\tNormal\t\tStarting\t\tStarting kubelet.\n  4m\t\t4m\t\t1\t{kubelet e2e-gce-master-1-master}\t\t\tNormal\t\tNodeNotSchedulable\tNode e2e-gce-master-1-master status is now: NodeNotSchedulable\n  4m\t\t4m\t\t13\t{kubelet e2e-gce-master-1-master}\t\t\tNormal\t\tNodeHasSufficientDisk\tNode e2e-gce-master-1-master status is now: NodeHasSufficientDisk\n  4m\t\t4m\t\t1\t{controllermanager }\t\t\t\t\tNormal\t\tRegisteredNode\t\tNode e2e-gce-master-1-master event: Registered Node e2e-gce-master-1-master in NodeController" | |
Apr 21 22:55:46.355: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config describe namespace e2e-tests-kubectl-6ms0m' | |
Apr 21 22:55:46.455: INFO: stderr: "" | |
Apr 21 22:55:46.455: INFO: stdout: "Name:\te2e-tests-kubectl-6ms0m\nLabels:\te2e-framework=kubectl,e2e-run=9528e6b3-084e-11e6-b9e9-42010af00007\nStatus:\tActive\n\nNo resource quota.\n\nNo resource limits." | |
[AfterEach] [k8s.io] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:55:46.456: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-kubectl-6ms0m" for this suite. | |
• [SLOW TEST:50.284 seconds] | |
[k8s.io] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
[k8s.io] Kubectl describe | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should check if kubectl describe prints relevant information for rc and pods [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:713 | |
------------------------------ | |
S | |
------------------------------ | |
[BeforeEach] [k8s.io] Pods | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:56:10.008: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should support retrieving logs from the container over websockets | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:954 | |
Apr 21 22:56:10.083: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: creating the pod | |
STEP: submitting the pod to kubernetes | |
W0421 22:56:10.114277 17762 request.go:344] Field selector: v1 - pods - metadata.name - pod-logs-websocket-e9973bc5-084e-11e6-bcb9-42010af00007: need to check if this is versioned correctly. | |
STEP: deleting the pod | |
[AfterEach] [k8s.io] Pods | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:56:11.744: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-pods-f3e7p" for this suite. | |
• [SLOW TEST:21.762 seconds] | |
[k8s.io] Pods | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should support retrieving logs from the container over websockets | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:954 | |
------------------------------ | |
[BeforeEach] [k8s.io] Downward API volume | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:56:12.779: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should update labels on modification [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:96 | |
STEP: Creating the pod | |
W0421 22:56:13.044790 17620 request.go:344] Field selector: v1 - pods - metadata.name - labelsupdateeb4cbab7-084e-11e6-9214-42010af00007: need to check if this is versioned correctly. | |
STEP: Deleting the pod | |
[AfterEach] [k8s.io] Downward API volume | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:56:19.315: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-downward-api-7ut21" for this suite. | |
• [SLOW TEST:21.623 seconds] | |
[k8s.io] Downward API volume | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should update labels on modification [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:96 | |
------------------------------ | |
[BeforeEach] [k8s.io] Addon update | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:55:37.951: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [k8s.io] Addon update | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/addon_update.go:225 | |
Apr 21 22:55:38.029: INFO: Executing 'sudo TEST_ADDON_CHECK_INTERVAL_SEC=1 /etc/init.d/kube-addons restart' on 146.148.88.146:22 | |
[It] should propagate add-on file changes | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/addon_update.go:343 | |
Apr 21 22:55:38.055: INFO: Executing 'mkdir -p addon-test-dir/e2e-tests-addon-update-test-wy09b' on 146.148.88.146:22 | |
Apr 21 22:55:38.059: INFO: Writing remote file 'addon-test-dir/e2e-tests-addon-update-test-wy09b/addon-controller-v1.yaml' on 146.148.88.146:22 | |
Apr 21 22:55:38.066: INFO: Writing remote file 'addon-test-dir/e2e-tests-addon-update-test-wy09b/addon-controller-v2.yaml' on 146.148.88.146:22 | |
Apr 21 22:55:38.069: INFO: Writing remote file 'addon-test-dir/e2e-tests-addon-update-test-wy09b/addon-service-v1.yaml' on 146.148.88.146:22 | |
Apr 21 22:55:38.073: INFO: Writing remote file 'addon-test-dir/e2e-tests-addon-update-test-wy09b/addon-service-v2.yaml' on 146.148.88.146:22 | |
Apr 21 22:55:38.077: INFO: Writing remote file 'addon-test-dir/e2e-tests-addon-update-test-wy09b/invalid-addon-controller-v1.yaml' on 146.148.88.146:22 | |
Apr 21 22:55:38.081: INFO: Writing remote file 'addon-test-dir/e2e-tests-addon-update-test-wy09b/invalid-addon-service-v1.yaml' on 146.148.88.146:22 | |
Apr 21 22:55:38.085: INFO: Executing 'sudo rm -rf /etc/kubernetes/addons/addon-test-dir' on 146.148.88.146:22 | |
Apr 21 22:55:38.092: INFO: Executing 'sudo mkdir -p /etc/kubernetes/addons/addon-test-dir/e2e-tests-addon-update-test-wy09b' on 146.148.88.146:22 | |
STEP: copy invalid manifests to the destination dir (without kubernetes.io/cluster-service label) | |
Apr 21 22:55:38.099: INFO: Executing 'sudo cp addon-test-dir/e2e-tests-addon-update-test-wy09b/invalid-addon-controller-v1.yaml /etc/kubernetes/addons/addon-test-dir/e2e-tests-addon-update-test-wy09b/invalid-addon-controller-v1.yaml' on 146.148.88.146:22 | |
Apr 21 22:55:38.106: INFO: Executing 'sudo cp addon-test-dir/e2e-tests-addon-update-test-wy09b/invalid-addon-service-v1.yaml /etc/kubernetes/addons/addon-test-dir/e2e-tests-addon-update-test-wy09b/invalid-addon-service-v1.yaml' on 146.148.88.146:22 | |
STEP: copy new manifests | |
Apr 21 22:55:38.116: INFO: Executing 'sudo cp addon-test-dir/e2e-tests-addon-update-test-wy09b/addon-controller-v1.yaml /etc/kubernetes/addons/addon-test-dir/e2e-tests-addon-update-test-wy09b/addon-controller-v1.yaml' on 146.148.88.146:22 | |
Apr 21 22:55:38.123: INFO: Executing 'sudo cp addon-test-dir/e2e-tests-addon-update-test-wy09b/addon-service-v1.yaml /etc/kubernetes/addons/addon-test-dir/e2e-tests-addon-update-test-wy09b/addon-service-v1.yaml' on 146.148.88.146:22 | |
Apr 21 22:55:47.155: INFO: Service addon-test in namespace e2e-tests-addon-update-test-wy09b found. | |
Apr 21 22:55:47.158: INFO: ReplicationController addon-test-v1 in namespace default found. | |
STEP: update manifests | |
Apr 21 22:55:47.158: INFO: Executing 'sudo cp addon-test-dir/e2e-tests-addon-update-test-wy09b/addon-controller-v2.yaml /etc/kubernetes/addons/addon-test-dir/e2e-tests-addon-update-test-wy09b/addon-controller-v2.yaml' on 146.148.88.146:22 | |
Apr 21 22:55:47.173: INFO: Executing 'sudo cp addon-test-dir/e2e-tests-addon-update-test-wy09b/addon-service-v2.yaml /etc/kubernetes/addons/addon-test-dir/e2e-tests-addon-update-test-wy09b/addon-service-v2.yaml' on 146.148.88.146:22 | |
Apr 21 22:55:47.210: INFO: Executing 'sudo rm /etc/kubernetes/addons/addon-test-dir/e2e-tests-addon-update-test-wy09b/addon-controller-v1.yaml' on 146.148.88.146:22 | |
Apr 21 22:55:47.251: INFO: Executing 'sudo rm /etc/kubernetes/addons/addon-test-dir/e2e-tests-addon-update-test-wy09b/addon-service-v1.yaml' on 146.148.88.146:22 | |
Apr 21 22:56:05.311: INFO: Service addon-test-updated in namespace e2e-tests-addon-update-test-wy09b found. | |
Apr 21 22:56:05.314: INFO: ReplicationController addon-test-v2 in namespace e2e-tests-addon-update-test-wy09b found. | |
Apr 21 22:56:05.318: INFO: Service addon-test in namespace e2e-tests-addon-update-test-wy09b disappeared. | |
Apr 21 22:56:05.320: INFO: Get ReplicationController addon-test-v1 in namespace default failed (replicationcontrollers "addon-test-v1" not found). | |
STEP: remove manifests | |
Apr 21 22:56:05.320: INFO: Executing 'sudo rm /etc/kubernetes/addons/addon-test-dir/e2e-tests-addon-update-test-wy09b/addon-controller-v2.yaml' on 146.148.88.146:22 | |
Apr 21 22:56:05.342: INFO: Executing 'sudo rm /etc/kubernetes/addons/addon-test-dir/e2e-tests-addon-update-test-wy09b/addon-service-v2.yaml' on 146.148.88.146:22 | |
Apr 21 22:56:11.391: INFO: Service addon-test-updated in namespace e2e-tests-addon-update-test-wy09b disappeared. | |
Apr 21 22:56:11.393: INFO: ReplicationController addon-test-v2 in namespace e2e-tests-addon-update-test-wy09b found. | |
Apr 21 22:56:14.403: INFO: Get ReplicationController addon-test-v2 in namespace e2e-tests-addon-update-test-wy09b failed (replicationcontrollers "addon-test-v2" not found). | |
STEP: verify invalid API addons weren't created | |
Apr 21 22:56:14.499: INFO: Executing 'sudo rm -rf /etc/kubernetes/addons/addon-test-dir' on 146.148.88.146:22 | |
Apr 21 22:56:14.543: INFO: Executing 'rm -rf addon-test-dir' on 146.148.88.146:22 | |
[AfterEach] [k8s.io] Addon update | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:56:14.562: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-addon-update-test-wy09b" for this suite. | |
[AfterEach] [k8s.io] Addon update | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/addon_update.go:243 | |
Apr 21 22:56:34.705: INFO: Executing 'sudo /etc/init.d/kube-addons restart' on 146.148.88.146:22 | |
• [SLOW TEST:56.819 seconds] | |
[k8s.io] Addon update | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should propagate add-on file changes | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/addon_update.go:343 | |
------------------------------ | |
[BeforeEach] [k8s.io] MetricsGrabber | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:56:14.288: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [k8s.io] MetricsGrabber | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/metrics_grabber_test.go:91 | |
[It] should grab all metrics from a Scheduler. | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/metrics_grabber_test.go:134 | |
STEP: Proxying to Pod through the API server | |
[AfterEach] [k8s.io] MetricsGrabber | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:56:14.811: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-metrics-grabber-a20hc" for this suite. | |
• [SLOW TEST:20.612 seconds] | |
[k8s.io] MetricsGrabber | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should grab all metrics from a Scheduler. | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/metrics_grabber_test.go:134 | |
------------------------------ | |
[BeforeEach] [k8s.io] Deployment | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:53:48.946: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] deployment should support rollover | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:70 | |
Apr 21 22:53:50.260: INFO: Pod name rollover-pod: Found 0 pods out of 4 | |
Apr 21 22:53:55.264: INFO: Pod name rollover-pod: Found 4 pods out of 4 | |
STEP: ensuring each pod is running | |
W0421 22:53:55.264780 17586 request.go:344] Field selector: v1 - pods - metadata.name - test-rollover-controller-31up0: need to check if this is versioned correctly. | |
W0421 22:54:18.848819 17586 request.go:344] Field selector: v1 - pods - metadata.name - test-rollover-controller-9ak2s: need to check if this is versioned correctly. | |
W0421 22:55:03.044751 17586 request.go:344] Field selector: v1 - pods - metadata.name - test-rollover-controller-vrnew: need to check if this is versioned correctly. | |
W0421 22:55:03.058422 17586 request.go:344] Field selector: v1 - pods - metadata.name - test-rollover-controller-z04pg: need to check if this is versioned correctly. | |
STEP: trying to dial each unique pod | |
Apr 21 22:55:03.097: INFO: Controller rollover-pod: Got non-empty result from replica 1 [test-rollover-controller-31up0]: "<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\n body {\n width: 35em;\n margin: 0 auto;\n font-family: Tahoma, Verdana, Arial, sans-serif;\n }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href=\"http://nginx.org/\">nginx.org</a>.<br/>\nCommercial support is available at\n<a href=\"http://nginx.com/\">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n", 1 of 4 required successes so far | |
Apr 21 22:55:03.110: INFO: Controller rollover-pod: Got non-empty result from replica 2 [test-rollover-controller-9ak2s]: "<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\n body {\n width: 35em;\n margin: 0 auto;\n font-family: Tahoma, Verdana, Arial, sans-serif;\n }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href=\"http://nginx.org/\">nginx.org</a>.<br/>\nCommercial support is available at\n<a href=\"http://nginx.com/\">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n", 2 of 4 required successes so far | |
Apr 21 22:55:03.120: INFO: Controller rollover-pod: Got non-empty result from replica 3 [test-rollover-controller-vrnew]: "<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\n body {\n width: 35em;\n margin: 0 auto;\n font-family: Tahoma, Verdana, Arial, sans-serif;\n }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href=\"http://nginx.org/\">nginx.org</a>.<br/>\nCommercial support is available at\n<a href=\"http://nginx.com/\">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n", 3 of 4 required successes so far | |
Apr 21 22:55:03.128: INFO: Controller rollover-pod: Got non-empty result from replica 4 [test-rollover-controller-z04pg]: "<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\n body {\n width: 35em;\n margin: 0 auto;\n font-family: Tahoma, Verdana, Arial, sans-serif;\n }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href=\"http://nginx.org/\">nginx.org</a>.<br/>\nCommercial support is available at\n<a href=\"http://nginx.com/\">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n", 4 of 4 required successes so far | |
Apr 21 22:55:09.134: INFO: Creating deployment test-rollover-deployment | |
Apr 21 22:55:11.237: INFO: Updating deployment test-rollover-deployment | |
Apr 21 22:56:03.376: INFO: Deleting deployment test-rollover-deployment | |
Apr 21 22:56:10.104: INFO: Ensuring deployment test-rollover-deployment was deleted | |
Apr 21 22:56:10.107: INFO: Ensuring deployment test-rollover-deployment's RSes were deleted | |
Apr 21 22:56:10.114: INFO: Ensuring deployment test-rollover-deployment's Pods were deleted | |
[AfterEach] [k8s.io] Deployment | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:56:10.119: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-deployment-elw1m" for this suite. | |
• [SLOW TEST:166.229 seconds] | |
[k8s.io] Deployment | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
deployment should support rollover | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:70 | |
------------------------------ | |
SS | |
------------------------------ | |
[BeforeEach] [k8s.io] EmptyDir volumes | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:56:21.754: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should support (non-root,0777,default) [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:113 | |
STEP: Creating a pod to test emptydir 0777 on node default medium | |
Apr 21 22:56:21.900: INFO: Waiting up to 5m0s for pod pod-f09af8cc-084e-11e6-84cd-42010af00007 status to be success or failure | |
Apr 21 22:56:21.911: INFO: No Status.Info for container 'test-container' in pod 'pod-f09af8cc-084e-11e6-84cd-42010af00007' yet | |
Apr 21 22:56:21.911: INFO: Waiting for pod pod-f09af8cc-084e-11e6-84cd-42010af00007 in namespace 'e2e-tests-emptydir-xa0ry' status to be 'success or failure'(found phase: "Pending", readiness: false) (11.664625ms elapsed) | |
STEP: Saw pod success | |
STEP: Trying to get logs from node e2e-gce-master-1-minion-x3cg pod pod-f09af8cc-084e-11e6-84cd-42010af00007 container test-container: <nil> | |
STEP: Successfully fetched pod logs:mount type of "/test-volume": 61267 | |
content of file "/test-volume/test-file": mount-tester new file | |
perms of file "/test-volume/test-file": -rwxrwxrwx | |
[AfterEach] [k8s.io] EmptyDir volumes | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:56:23.942: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-emptydir-xa0ry" for this suite. | |
• [SLOW TEST:17.220 seconds] | |
[k8s.io] EmptyDir volumes | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should support (non-root,0777,default) [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:113 | |
------------------------------ | |
S | |
------------------------------ | |
[BeforeEach] [k8s.io] Docker Containers | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:56:15.687: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [k8s.io] Docker Containers | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:36 | |
[It] should be able to override the image's default command and arguments [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:72 | |
STEP: Creating a pod to test override all | |
Apr 21 22:56:15.890: INFO: Waiting up to 5m0s for pod client-containers-ed03da35-084e-11e6-bd1e-42010af00007 status to be success or failure | |
Apr 21 22:56:15.910: INFO: No Status.Info for container 'test-container' in pod 'client-containers-ed03da35-084e-11e6-bd1e-42010af00007' yet | |
Apr 21 22:56:15.910: INFO: Waiting for pod client-containers-ed03da35-084e-11e6-bd1e-42010af00007 in namespace 'e2e-tests-containers-hkaiy' status to be 'success or failure'(found phase: "Pending", readiness: false) (20.218915ms elapsed) | |
Apr 21 22:56:17.934: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-ed03da35-084e-11e6-bd1e-42010af00007' in namespace 'e2e-tests-containers-hkaiy' so far | |
Apr 21 22:56:17.934: INFO: Waiting for pod client-containers-ed03da35-084e-11e6-bd1e-42010af00007 in namespace 'e2e-tests-containers-hkaiy' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.043827922s elapsed) | |
STEP: Saw pod success | |
STEP: Trying to get logs from node e2e-gce-master-1-minion-x3cg pod client-containers-ed03da35-084e-11e6-bd1e-42010af00007 container test-container: <nil> | |
STEP: Successfully fetched pod logs: [/ep-2 override arguments]
[AfterEach] [k8s.io] Docker Containers | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:56:20.077: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-containers-hkaiy" for this suite. | |
• [SLOW TEST:24.548 seconds] | |
[k8s.io] Docker Containers | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should be able to override the image's default command and arguments [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:72 | |
------------------------------ | |
S | |
------------------------------ | |
[BeforeEach] [k8s.io] ResourceQuota | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:56:19.515: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should create a ResourceQuota and capture the life of a nodePort service. | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:170 | |
STEP: Creating a ResourceQuota | |
STEP: Ensuring resource quota status is calculated | |
STEP: Creating a NodePort type Service | |
STEP: Ensuring resource quota status captures service creation | |
STEP: Deleting a Service | |
STEP: Ensuring resource quota status released usage | |
[AfterEach] [k8s.io] ResourceQuota | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:56:26.100: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-resourcequota-2hikw" for this suite. | |
• [SLOW TEST:21.619 seconds] | |
[k8s.io] ResourceQuota | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should create a ResourceQuota and capture the life of a nodePort service. | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:170 | |
------------------------------ | |
[BeforeEach] [k8s.io] Variable Expansion | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:56:25.350: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should allow substituting values in a container's args [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/expansion.go:131 | |
STEP: Creating a pod to test substitution in container's args | |
Apr 21 22:56:25.784: INFO: Waiting up to 5m0s for pod var-expansion-f2ec2d28-084e-11e6-9698-42010af00007 status to be success or failure | |
Apr 21 22:56:25.795: INFO: No Status.Info for container 'dapi-container' in pod 'var-expansion-f2ec2d28-084e-11e6-9698-42010af00007' yet | |
Apr 21 22:56:25.795: INFO: Waiting for pod var-expansion-f2ec2d28-084e-11e6-9698-42010af00007 in namespace 'e2e-tests-var-expansion-jdhzh' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.484396ms elapsed) | |
STEP: Saw pod success | |
STEP: Trying to get logs from node e2e-gce-master-1-minion-x3cg pod var-expansion-f2ec2d28-084e-11e6-9698-42010af00007 container dapi-container: <nil> | |
STEP: Successfully fetched pod logs: test-value
[AfterEach] [k8s.io] Variable Expansion | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:56:27.895: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-var-expansion-jdhzh" for this suite. | |
• [SLOW TEST:17.598 seconds] | |
[k8s.io] Variable Expansion | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should allow substituting values in a container's args [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/expansion.go:131 | |
------------------------------ | |
SS | |
------------------------------ | |
[BeforeEach] [k8s.io] Deployment | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:56:22.039: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] deployment should label adopted RSs and pods | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:82 | |
Apr 21 22:56:22.224: INFO: Pod name nginx: Found 0 pods out of 3 | |
Apr 21 22:56:27.235: INFO: Pod name nginx: Found 3 pods out of 3 | |
STEP: ensuring each pod is running | |
W0421 22:56:27.235680 17665 request.go:344] Field selector: v1 - pods - metadata.name - test-adopted-controller-4qe89: need to check if this is versioned correctly. | |
W0421 22:56:27.239447 17665 request.go:344] Field selector: v1 - pods - metadata.name - test-adopted-controller-6lk7d: need to check if this is versioned correctly. | |
W0421 22:56:27.274479 17665 request.go:344] Field selector: v1 - pods - metadata.name - test-adopted-controller-hkyo6: need to check if this is versioned correctly. | |
STEP: trying to dial each unique pod | |
Apr 21 22:56:27.369: INFO: Controller nginx: Got non-empty result from replica 1 [test-adopted-controller-4qe89]: "<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\n body {\n width: 35em;\n margin: 0 auto;\n font-family: Tahoma, Verdana, Arial, sans-serif;\n }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href=\"http://nginx.org/\">nginx.org</a>.<br/>\nCommercial support is available at\n<a href=\"http://nginx.com/\">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n", 1 of 3 required successes so far | |
Apr 21 22:56:27.390: INFO: Controller nginx: Got non-empty result from replica 2 [test-adopted-controller-6lk7d]: "<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\n body {\n width: 35em;\n margin: 0 auto;\n font-family: Tahoma, Verdana, Arial, sans-serif;\n }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href=\"http://nginx.org/\">nginx.org</a>.<br/>\nCommercial support is available at\n<a href=\"http://nginx.com/\">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n", 2 of 3 required successes so far | |
Apr 21 22:56:27.406: INFO: Controller nginx: Got non-empty result from replica 3 [test-adopted-controller-hkyo6]: "<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\n body {\n width: 35em;\n margin: 0 auto;\n font-family: Tahoma, Verdana, Arial, sans-serif;\n }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href=\"http://nginx.org/\">nginx.org</a>.<br/>\nCommercial support is available at\n<a href=\"http://nginx.com/\">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n", 3 of 3 required successes so far | |
Apr 21 22:56:27.406: INFO: Creating deployment test-adopted-deployment | |
Apr 21 22:56:31.558: INFO: Deleting deployment test-adopted-deployment | |
Apr 21 22:56:33.746: INFO: Ensuring deployment test-adopted-deployment was deleted | |
Apr 21 22:56:33.753: INFO: Ensuring deployment test-adopted-deployment's RSes were deleted | |
Apr 21 22:56:33.761: INFO: Ensuring deployment test-adopted-deployment's Pods were deleted | |
[AfterEach] [k8s.io] Deployment | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:56:33.764: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-deployment-o2qr8" for this suite. | |
• [SLOW TEST:21.783 seconds] | |
[k8s.io] Deployment | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
deployment should label adopted RSs and pods | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:82 | |
------------------------------ | |
[BeforeEach] [k8s.io] ResourceQuota | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:56:28.510: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should create a ResourceQuota and capture the life of a persistent volume claim. | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:377 | |
STEP: Creating a ResourceQuota | |
STEP: Ensuring resource quota status is calculated | |
STEP: Creating a PersistentVolumeClaim | |
STEP: Ensuring resource quota status captures persistent volume claim creation
STEP: Deleting a PersistentVolumeClaim | |
STEP: Ensuring resource quota status released usage | |
[AfterEach] [k8s.io] ResourceQuota | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:56:34.899: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-resourcequota-1ixwh" for this suite. | |
• [SLOW TEST:16.430 seconds] | |
[k8s.io] ResourceQuota | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should create a ResourceQuota and capture the life of a persistent volume claim. | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:377 | |
------------------------------ | |
[BeforeEach] [k8s.io] Services | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:54:28.551: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [k8s.io] Services | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:74 | |
Apr 21 22:54:28.596: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
[It] should be able to up and down services | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:277 | |
STEP: creating service1 in namespace e2e-tests-services-k8kls | |
STEP: creating service service1 in namespace e2e-tests-services-k8kls | |
STEP: creating replication controller service1 in namespace e2e-tests-services-k8kls | |
Apr 21 22:54:28.649: INFO: Created replication controller with name: service1, namespace: e2e-tests-services-k8kls, replica count: 3 | |
Apr 21 22:54:31.649: INFO: service1 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Apr 21 22:54:34.650: INFO: service1 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Apr 21 22:54:37.650: INFO: service1 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Apr 21 22:54:40.651: INFO: service1 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Apr 21 22:54:43.651: INFO: service1 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Apr 21 22:54:46.652: INFO: service1 Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Apr 21 22:54:49.652: INFO: service1 Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Apr 21 22:54:52.652: INFO: service1 Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Apr 21 22:54:55.653: INFO: service1 Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Apr 21 22:54:58.654: INFO: service1 Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Apr 21 22:55:01.655: INFO: service1 Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Apr 21 22:55:04.655: INFO: service1 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
STEP: creating service2 in namespace e2e-tests-services-k8kls | |
STEP: creating service service2 in namespace e2e-tests-services-k8kls | |
STEP: creating replication controller service2 in namespace e2e-tests-services-k8kls | |
Apr 21 22:55:04.720: INFO: Created replication controller with name: service2, namespace: e2e-tests-services-k8kls, replica count: 3 | |
Apr 21 22:55:07.720: INFO: service2 Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Apr 21 22:55:10.721: INFO: service2 Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Apr 21 22:55:13.721: INFO: service2 Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Apr 21 22:55:16.721: INFO: service2 Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Apr 21 22:55:19.722: INFO: service2 Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Apr 21 22:55:22.722: INFO: service2 Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Apr 21 22:55:25.723: INFO: service2 Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Apr 21 22:55:28.723: INFO: service2 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
STEP: verifying service1 is up | |
Apr 21 22:55:28.746: INFO: Creating new exec pod | |
STEP: verifying service has 3 reachable backends | |
Apr 21 22:55:32.759: INFO: Executing cmd "set -e; for i in $(seq 1 150); do wget -q --timeout=0.2 --tries=1 -O - http://10.0.143.249:80 2>&1 || true; echo; done" on host 8.34.213.250:22 | |
Apr 21 22:55:33.416: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.0.143.249:80 2>&1 || true; echo; done" in pod e2e-tests-services-k8kls/execpod | |
Apr 21 22:55:33.416: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config exec --namespace=e2e-tests-services-k8kls execpod -- /bin/sh -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.0.143.249:80 2>&1 || true; echo; done' | |
Apr 21 22:55:34.090: INFO: stderr: "" | |
STEP: deleting pod execpod in namespace e2e-tests-services-k8kls | |
STEP: verifying service2 is up | |
Apr 21 22:55:34.102: INFO: Creating new exec pod | |
STEP: verifying service has 3 reachable backends | |
Apr 21 22:55:38.133: INFO: Executing cmd "set -e; for i in $(seq 1 150); do wget -q --timeout=0.2 --tries=1 -O - http://10.0.138.76:80 2>&1 || true; echo; done" on host 8.34.213.250:22 | |
Apr 21 22:55:38.757: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.0.138.76:80 2>&1 || true; echo; done" in pod e2e-tests-services-k8kls/execpod | |
Apr 21 22:55:38.757: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config exec --namespace=e2e-tests-services-k8kls execpod -- /bin/sh -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.0.138.76:80 2>&1 || true; echo; done' | |
Apr 21 22:55:39.753: INFO: stderr: "" | |
STEP: deleting pod execpod in namespace e2e-tests-services-k8kls | |
STEP: stopping service1 | |
STEP: deleting replication controller service1 in namespace e2e-tests-services-k8kls | |
Apr 21 22:55:42.133: INFO: Deleting RC service1 took: 2.242351805s | |
Apr 21 22:55:42.133: INFO: Terminating RC service1 pods took: 108.971µs | |
STEP: verifying service1 is not up | |
STEP: verifying service2 is still up | |
Apr 21 22:55:44.328: INFO: Creating new exec pod | |
STEP: verifying service has 3 reachable backends | |
Apr 21 22:55:46.373: INFO: Executing cmd "set -e; for i in $(seq 1 150); do wget -q --timeout=0.2 --tries=1 -O - http://10.0.138.76:80 2>&1 || true; echo; done" on host 8.34.213.250:22 | |
Apr 21 22:55:46.815: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.0.138.76:80 2>&1 || true; echo; done" in pod e2e-tests-services-k8kls/execpod | |
Apr 21 22:55:46.816: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config exec --namespace=e2e-tests-services-k8kls execpod -- /bin/sh -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.0.138.76:80 2>&1 || true; echo; done' | |
Apr 21 22:55:47.711: INFO: stderr: "" | |
STEP: deleting pod execpod in namespace e2e-tests-services-k8kls | |
STEP: creating service3 in namespace e2e-tests-services-k8kls | |
STEP: creating service service3 in namespace e2e-tests-services-k8kls | |
STEP: creating replication controller service3 in namespace e2e-tests-services-k8kls | |
Apr 21 22:55:47.971: INFO: Created replication controller with name: service3, namespace: e2e-tests-services-k8kls, replica count: 3 | |
Apr 21 22:55:50.971: INFO: service3 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
STEP: verifying service2 is still up | |
Apr 21 22:55:51.045: INFO: Creating new exec pod | |
STEP: verifying service has 3 reachable backends | |
Apr 21 22:55:53.084: INFO: Executing cmd "set -e; for i in $(seq 1 150); do wget -q --timeout=0.2 --tries=1 -O - http://10.0.138.76:80 2>&1 || true; echo; done" on host 8.34.213.250:22 | |
Apr 21 22:55:53.614: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.0.138.76:80 2>&1 || true; echo; done" in pod e2e-tests-services-k8kls/execpod | |
Apr 21 22:55:53.614: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config exec --namespace=e2e-tests-services-k8kls execpod -- /bin/sh -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.0.138.76:80 2>&1 || true; echo; done' | |
Apr 21 22:55:54.450: INFO: stderr: "" | |
STEP: deleting pod execpod in namespace e2e-tests-services-k8kls | |
STEP: verifying service3 is up | |
Apr 21 22:55:54.466: INFO: Creating new exec pod | |
STEP: verifying service has 3 reachable backends | |
Apr 21 22:55:56.484: INFO: Executing cmd "set -e; for i in $(seq 1 150); do wget -q --timeout=0.2 --tries=1 -O - http://10.0.12.60:80 2>&1 || true; echo; done" on host 8.34.213.250:22 | |
Apr 21 22:55:56.925: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.0.12.60:80 2>&1 || true; echo; done" in pod e2e-tests-services-k8kls/execpod | |
Apr 21 22:55:56.925: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config exec --namespace=e2e-tests-services-k8kls execpod -- /bin/sh -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.0.12.60:80 2>&1 || true; echo; done' | |
Apr 21 22:55:57.810: INFO: stderr: "" | |
STEP: deleting pod execpod in namespace e2e-tests-services-k8kls | |
[AfterEach] [k8s.io] Services | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:55:57.867: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-services-k8kls" for this suite. | |
• [SLOW TEST:139.373 seconds] | |
[k8s.io] Services | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should be able to up and down services | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:277 | |
------------------------------ | |
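The "verifying service has 3 reachable backends" steps above run a wget loop against the service IP and expect to see each replica answer. The counting idea behind that loop can be sketched in Python with the fetch function injected, so it runs without a cluster; `count_backends` and the stub fetcher are hypothetical names for illustration, not part of the e2e framework:

```python
import itertools
from typing import Callable, Set

def count_backends(fetch: Callable[[], str], attempts: int = 150) -> Set[str]:
    """Hit the service `attempts` times and collect the distinct
    non-empty responses, mirroring the wget loop in the log."""
    seen: Set[str] = set()
    for _ in range(attempts):
        try:
            body = fetch()
        except OSError:
            continue  # swallow failures, like `|| true` in the shell loop
        if body:
            seen.add(body)
    return seen

# Stub fetcher cycling through three fake backend identities.
replies = itertools.cycle(["pod-a", "pod-b", "pod-c"])
backends = count_backends(lambda: next(replies), attempts=150)
print(len(backends))  # 3
```

With a real cluster the injected `fetch` would be the HTTP GET against the service's cluster IP, as in the kubectl exec commands shown above.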
S | |
------------------------------ | |
[BeforeEach] [k8s.io] ServiceAccounts | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:56:28.699: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should mount an API token into pods [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_accounts.go:240 | |
STEP: getting the auto-created API token | |
STEP: Creating a pod to test consume service account token | |
Apr 21 22:56:29.502: INFO: Waiting up to 5m0s for pod pod-service-account-f526884c-084e-11e6-9c8c-42010af00007 status to be success or failure | |
Apr 21 22:56:29.533: INFO: No Status.Info for container 'token-test' in pod 'pod-service-account-f526884c-084e-11e6-9c8c-42010af00007' yet | |
Apr 21 22:56:29.533: INFO: Waiting for pod pod-service-account-f526884c-084e-11e6-9c8c-42010af00007 in namespace 'e2e-tests-svcaccounts-4ohrr' status to be 'success or failure'(found phase: "Pending", readiness: false) (30.714158ms elapsed) | |
Apr 21 22:56:31.542: INFO: Nil State.Terminated for container 'token-test' in pod 'pod-service-account-f526884c-084e-11e6-9c8c-42010af00007' in namespace 'e2e-tests-svcaccounts-4ohrr' so far | |
Apr 21 22:56:31.542: INFO: Waiting for pod pod-service-account-f526884c-084e-11e6-9c8c-42010af00007 in namespace 'e2e-tests-svcaccounts-4ohrr' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.040314273s elapsed) | |
STEP: Saw pod success | |
Apr 21 22:56:33.555: INFO: Waiting up to 5m0s for pod pod-service-account-f526884c-084e-11e6-9c8c-42010af00007 status to be success or failure | |
STEP: Saw pod success | |
Apr 21 22:56:33.567: INFO: Waiting up to 5m0s for pod pod-service-account-f526884c-084e-11e6-9c8c-42010af00007 status to be success or failure | |
STEP: Saw pod success | |
STEP: Trying to get logs from node e2e-gce-master-1-minion-x3cg pod pod-service-account-f526884c-084e-11e6-9c8c-42010af00007 container token-test: <nil> | |
STEP: Successfully fetched pod logs: content of file "/var/run/secrets/kubernetes.io/serviceaccount/token": eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJlMmUtdGVzdHMtc3ZjYWNjb3VudHMtNG9ocnIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoiZGVmYXVsdC10b2tlbi1hM2RjOSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZjRiMmEzOWEtMDg0ZS0xMWU2LTk0ZmQtNDIwMTBhZjAwMDAyIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmUyZS10ZXN0cy1zdmNhY2NvdW50cy00b2hycjpkZWZhdWx0In0.WimjPP7htG4lbeeE27mqDTDGuvp3FlwPL-eQjIqsxoFGmJwNIjvuD23vcqDjVkEvhmeqXr8ePcRm4c6Ruoyq00_1OMYdBmQkYJQwsHUIRtVfCI2fK42wJetaV3-LU4HffjuJwL7We1N2LBlpvXjoC47mfFUJ32g_8rjdfjONEN3OSlnbUBgcSfYhNkbC3RUqrcKpPGt79ZymzcVN4VXaXqoGHZPTCc7PcP_km_eZg0SntK9PQAszLl_MZ6aSMOv3QeQZ1vaDlZ85NSPMxYSNs_8qsfbk81ykeRIDhPkl05kjt80we-u-9NOvC4rjx0t3FrCxuqFQF2vXJKxyGG-VtA
STEP: Creating a pod to test consume service account root CA | |
Apr 21 22:56:33.658: INFO: Waiting up to 5m0s for pod pod-service-account-f526884c-084e-11e6-9c8c-42010af00007 status to be success or failure | |
Apr 21 22:56:33.676: INFO: No Status.Info for container 'token-test' in pod 'pod-service-account-f526884c-084e-11e6-9c8c-42010af00007' yet | |
Apr 21 22:56:33.676: INFO: Waiting for pod pod-service-account-f526884c-084e-11e6-9c8c-42010af00007 in namespace 'e2e-tests-svcaccounts-4ohrr' status to be 'success or failure'(found phase: "Pending", readiness: false) (18.670947ms elapsed) | |
STEP: Saw pod success | |
Apr 21 22:56:35.688: INFO: Waiting up to 5m0s for pod pod-service-account-f526884c-084e-11e6-9c8c-42010af00007 status to be success or failure | |
STEP: Saw pod success | |
Apr 21 22:56:35.699: INFO: Waiting up to 5m0s for pod pod-service-account-f526884c-084e-11e6-9c8c-42010af00007 status to be success or failure | |
STEP: Saw pod success | |
STEP: Trying to get logs from node e2e-gce-master-1-minion-x3cg pod pod-service-account-f526884c-084e-11e6-9c8c-42010af00007 container root-ca-test: <nil> | |
STEP: Successfully fetched pod logs: content of file "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt": -----BEGIN CERTIFICATE-----
MIIDXzCCAkegAwIBAgIJAOCZBvgtkK1PMA0GCSqGSIb3DQEBCwUAMCQxIjAgBgNV | |
BAMUGTE0Ni4xNDguODguMTQ2QDE0NjEzMDQxNzkwHhcNMTYwNDIyMDU0OTM5WhcN | |
MjYwNDIwMDU0OTM5WjAkMSIwIAYDVQQDFBkxNDYuMTQ4Ljg4LjE0NkAxNDYxMzA0 | |
MTc5MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA7/niG7rzToGK6uNs | |
koJbqG+0tB02wQ6AUfBadtbCrMLoCbUOAxt9VL4klSclb+ButzfjovYBHbxdO30m | |
eR0PZO3ln/XpRrYsVOo06MWxCfCN0Wmgg7Xk3+r1ZuqraXKMKciLvKRgdYxifDLi | |
zjZ63vOFAXWYY0AY9/0ovYA3JQd45Tze5+NQAP6QhKPgFLlwfNhkf3z9qyYnWhCW | |
tHvnTMV/VulrY4V3QOMLiG4h3B9O9XPxA6IMGkygAbnlyhYQGGOxY6X23KBkaStL | |
vsuYFgK1JKG1+wPFRxM4O0cXhr1WURviZyEJQOTNVvkvtroxdv7iYN13I/J35qfH | |
QFIybwIDAQABo4GTMIGQMB0GA1UdDgQWBBRZ8ryDDVADo0ne14GARGE05LEW/jBU | |
BgNVHSMETTBLgBRZ8ryDDVADo0ne14GARGE05LEW/qEopCYwJDEiMCAGA1UEAxQZ | |
MTQ2LjE0OC44OC4xNDZAMTQ2MTMwNDE3OYIJAOCZBvgtkK1PMAwGA1UdEwQFMAMB | |
Af8wCwYDVR0PBAQDAgEGMA0GCSqGSIb3DQEBCwUAA4IBAQCNGUu8f5z2vf1TKHcc | |
dRd/+KUS3FWEWCDGEC5J5kr20Ublc0UovPp8ZNGVPsqNB2litlixb5Wpi2lSQQsi | |
UfNrRhrTQEw6ZfQ1+Yq2P2SqmEvKd0Qivue4HrJ6VdxmGv2t9TDAwfKlCzfFoRkB | |
cs2QNvEI06COFP4/5GzeA4GTN0mib9imtNeinsPbAJxjAq4V1R4aSfazw2OwuVhk | |
Kp1Leq8NFMlOptaHP4ByfO7VLo2e27IptDR+RJKzxfZNQj32LBjwXVCTdEM772Z5 | |
5QNp0odPNwesy+wgNKf+EfzIOalxmp9y426DcqXutgLJhhKFGyaYgBp1kjK/VuRA | |
w2lP | |
-----END CERTIFICATE----- | |
STEP: Creating a pod to test consume service account namespace | |
Apr 21 22:56:35.804: INFO: Waiting up to 5m0s for pod pod-service-account-f526884c-084e-11e6-9c8c-42010af00007 status to be success or failure | |
Apr 21 22:56:35.825: INFO: No Status.Info for container 'token-test' in pod 'pod-service-account-f526884c-084e-11e6-9c8c-42010af00007' yet | |
Apr 21 22:56:35.825: INFO: Waiting for pod pod-service-account-f526884c-084e-11e6-9c8c-42010af00007 in namespace 'e2e-tests-svcaccounts-4ohrr' status to be 'success or failure'(found phase: "Pending", readiness: false) (21.068213ms elapsed) | |
STEP: Saw pod success | |
Apr 21 22:56:37.834: INFO: Waiting up to 5m0s for pod pod-service-account-f526884c-084e-11e6-9c8c-42010af00007 status to be success or failure | |
STEP: Saw pod success | |
Apr 21 22:56:37.842: INFO: Waiting up to 5m0s for pod pod-service-account-f526884c-084e-11e6-9c8c-42010af00007 status to be success or failure | |
STEP: Saw pod success | |
STEP: Trying to get logs from node e2e-gce-master-1-minion-x3cg pod pod-service-account-f526884c-084e-11e6-9c8c-42010af00007 container namespace-test: <nil> | |
STEP: Successfully fetched pod logs: content of file "/var/run/secrets/kubernetes.io/serviceaccount/namespace": e2e-tests-svcaccounts-4ohrr
[AfterEach] [k8s.io] ServiceAccounts | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:56:37.952: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-svcaccounts-4ohrr" for this suite. | |
• [SLOW TEST:19.324 seconds] | |
[k8s.io] ServiceAccounts | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should mount an API token into pods [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_accounts.go:240 | |
------------------------------ | |
S | |
------------------------------ | |
[BeforeEach] [k8s.io] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:56:30.608: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [k8s.io] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:142 | |
[BeforeEach] [k8s.io] Kubectl run job | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1072 | |
[It] should create a job from an image when restart is OnFailure [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1095 | |
STEP: running the image gcr.io/google_containers/nginx:1.7.9 | |
Apr 21 22:56:31.134: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config run e2e-test-nginx-job --restart=OnFailure --image=gcr.io/google_containers/nginx:1.7.9 --namespace=e2e-tests-kubectl-iu3f5' | |
Apr 21 22:56:31.296: INFO: stderr: "" | |
Apr 21 22:56:31.296: INFO: stdout: "job \"e2e-test-nginx-job\" created" | |
STEP: verifying the job e2e-test-nginx-job was created | |
[AfterEach] [k8s.io] Kubectl run job | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1076 | |
Apr 21 22:56:31.314: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-iu3f5' | |
Apr 21 22:56:33.541: INFO: stderr: "" | |
Apr 21 22:56:33.541: INFO: stdout: "job \"e2e-test-nginx-job\" deleted" | |
[AfterEach] [k8s.io] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:56:33.541: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-kubectl-iu3f5" for this suite. | |
• [SLOW TEST:17.986 seconds] | |
[k8s.io] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
[k8s.io] Kubectl run job | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should create a job from an image when restart is OnFailure [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1095 | |
------------------------------ | |
S | |
------------------------------ | |
[BeforeEach] [k8s.io] hostPath | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:56:31.772: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [k8s.io] hostPath | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/host_path.go:46 | |
[It] should give a volume the correct mode [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/host_path.go:63 | |
STEP: Creating a pod to test hostPath mode | |
Apr 21 22:56:31.953: INFO: Waiting up to 5m0s for pod pod-host-path-test status to be success or failure | |
Apr 21 22:56:31.960: INFO: No Status.Info for container 'test-container-1' in pod 'pod-host-path-test' yet | |
Apr 21 22:56:31.960: INFO: Waiting for pod pod-host-path-test in namespace 'e2e-tests-hostpath-crcii' status to be 'success or failure'(found phase: "Pending", readiness: false) (7.62199ms elapsed) | |
STEP: Saw pod success | |
Apr 21 22:56:33.971: INFO: Waiting up to 5m0s for pod pod-host-path-test status to be success or failure | |
STEP: Saw pod success | |
STEP: Trying to get logs from node e2e-gce-master-1-minion-6ch0 pod pod-host-path-test container test-container-1: <nil> | |
STEP: Successfully fetched pod logs:mount type of "/test-volume": 61267 | |
mode of file "/test-volume": dtrwxrwxrwx | |
[AfterEach] [k8s.io] hostPath | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:56:34.042: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-hostpath-crcii" for this suite. | |
• [SLOW TEST:17.311 seconds] | |
[k8s.io] hostPath | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should give a volume the correct mode [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/host_path.go:63 | |
------------------------------ | |
[BeforeEach] [k8s.io] EmptyDir volumes | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:56:34.404: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should support (root,0777,default) [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:101 | |
STEP: Creating a pod to test emptydir 0777 on node default medium | |
Apr 21 22:56:34.582: INFO: Waiting up to 5m0s for pod pod-f82879ae-084e-11e6-9214-42010af00007 status to be success or failure | |
Apr 21 22:56:34.601: INFO: No Status.Info for container 'test-container' in pod 'pod-f82879ae-084e-11e6-9214-42010af00007' yet | |
Apr 21 22:56:34.601: INFO: Waiting for pod pod-f82879ae-084e-11e6-9214-42010af00007 in namespace 'e2e-tests-emptydir-9l3p9' status to be 'success or failure'(found phase: "Pending", readiness: false) (19.09496ms elapsed) | |
Apr 21 22:56:36.605: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-f82879ae-084e-11e6-9214-42010af00007' in namespace 'e2e-tests-emptydir-9l3p9' so far | |
Apr 21 22:56:36.605: INFO: Waiting for pod pod-f82879ae-084e-11e6-9214-42010af00007 in namespace 'e2e-tests-emptydir-9l3p9' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.023317481s elapsed) | |
STEP: Saw pod success | |
STEP: Trying to get logs from node e2e-gce-master-1-minion-fyts pod pod-f82879ae-084e-11e6-9214-42010af00007 container test-container: <nil> | |
STEP: Successfully fetched pod logs:mount type of "/test-volume": 61267 | |
content of file "/test-volume/test-file": mount-tester new file | |
perms of file "/test-volume/test-file": -rwxrwxrwx | |
[AfterEach] [k8s.io] EmptyDir volumes | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:56:38.712: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-emptydir-9l3p9" for this suite. | |
• [SLOW TEST:19.370 seconds] | |
[k8s.io] EmptyDir volumes | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should support (root,0777,default) [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:101 | |
------------------------------ | |
S | |
------------------------------ | |
[BeforeEach] [k8s.io] V1Job | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:56:35.179: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should delete a job | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:189 | |
STEP: Creating a job | |
STEP: Ensuring active pods == parallelism | |
STEP: delete a job | |
STEP: Ensuring job was deleted | |
[AfterEach] [k8s.io] V1Job | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:56:39.442: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-v1job-f87aa" for this suite. | |
• [SLOW TEST:19.298 seconds] | |
[k8s.io] V1Job | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should delete a job | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:189 | |
------------------------------ | |
SSS | |
------------------------------ | |
[BeforeEach] [k8s.io] PreStop | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:56:05.989: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should call prestop when killing a pod [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:167 | |
STEP: Creating server pod server in namespace e2e-tests-prestop-gbzd4 | |
STEP: Waiting for pods to come up. | |
W0421 22:56:06.442009 17734 request.go:344] Field selector: v1 - pods - metadata.name - server: need to check if this is versioned correctly. | |
STEP: Creating tester pod tester in namespace e2e-tests-prestop-gbzd4 | |
W0421 22:56:09.665966 17734 request.go:344] Field selector: v1 - pods - metadata.name - tester: need to check if this is versioned correctly. | |
STEP: Deleting pre-stop pod | |
Apr 21 22:56:16.279: INFO: Saw: { | |
"Hostname": "server", | |
"Sent": null, | |
"Received": { | |
"prestop": 1 | |
}, | |
"Errors": null, | |
"Log": [ | |
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.", | |
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again." | |
], | |
"StillContactingPeers": true | |
} | |
STEP: Deleting the server pod | |
[AfterEach] [k8s.io] PreStop | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:56:16.289: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-prestop-gbzd4" for this suite. | |
• [SLOW TEST:50.359 seconds] | |
[k8s.io] PreStop | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should call prestop when killing a pod [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:167 | |
------------------------------ | |
[BeforeEach] [k8s.io] EmptyDir volumes | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:56:41.137: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should support (non-root,0666,tmpfs) [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:81 | |
STEP: Creating a pod to test emptydir 0666 on tmpfs | |
Apr 21 22:56:41.231: INFO: Waiting up to 5m0s for pod pod-fc239677-084e-11e6-87d2-42010af00007 status to be success or failure | |
Apr 21 22:56:41.234: INFO: No Status.Info for container 'test-container' in pod 'pod-fc239677-084e-11e6-87d2-42010af00007' yet | |
Apr 21 22:56:41.234: INFO: Waiting for pod pod-fc239677-084e-11e6-87d2-42010af00007 in namespace 'e2e-tests-emptydir-i9zd3' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.701061ms elapsed) | |
STEP: Saw pod success | |
STEP: Trying to get logs from node e2e-gce-master-1-minion-8eot pod pod-fc239677-084e-11e6-87d2-42010af00007 container test-container: <nil> | |
STEP: Successfully fetched pod logs:mount type of "/test-volume": tmpfs | |
content of file "/test-volume/test-file": mount-tester new file | |
perms of file "/test-volume/test-file": -rw-rw-rw- | |
[AfterEach] [k8s.io] EmptyDir volumes | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:56:43.258: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-emptydir-i9zd3" for this suite. | |
• [SLOW TEST:17.145 seconds] | |
[k8s.io] EmptyDir volumes | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should support (non-root,0666,tmpfs) [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:81 | |
------------------------------ | |
[BeforeEach] version v1 | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:56:43.824: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should proxy to cadvisor using proxy subresource [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:65 | |
Apr 21 22:56:43.896: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0:4194/proxy/containers/: | |
<html> | |
<head> | |
<title>cAdvisor - /</title> | |
<link rel="stylesheet" href="../static/... (200; 7.112937ms) | |
Apr 21 22:56:43.902: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0:4194/proxy/containers/: | |
<html> | |
<head> | |
<title>cAdvisor - /</title> | |
<link rel="stylesheet" href="../static/... (200; 5.048753ms) | |
Apr 21 22:56:43.906: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0:4194/proxy/containers/: | |
<html> | |
<head> | |
<title>cAdvisor - /</title> | |
<link rel="stylesheet" href="../static/... (200; 4.853048ms) | |
Apr 21 22:56:43.911: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0:4194/proxy/containers/: | |
<html> | |
<head> | |
<title>cAdvisor - /</title> | |
<link rel="stylesheet" href="../static/... (200; 4.437785ms) | |
Apr 21 22:56:43.916: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0:4194/proxy/containers/: | |
<html> | |
<head> | |
<title>cAdvisor - /</title> | |
<link rel="stylesheet" href="../static/... (200; 4.741211ms) | |
Apr 21 22:56:43.920: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0:4194/proxy/containers/: | |
<html> | |
<head> | |
<title>cAdvisor - /</title> | |
<link rel="stylesheet" href="../static/... (200; 4.308084ms) | |
Apr 21 22:56:43.925: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0:4194/proxy/containers/: | |
<html> | |
<head> | |
<title>cAdvisor - /</title> | |
<link rel="stylesheet" href="../static/... (200; 4.738142ms) | |
Apr 21 22:56:43.929: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0:4194/proxy/containers/: | |
<html> | |
<head> | |
<title>cAdvisor - /</title> | |
<link rel="stylesheet" href="../static/... (200; 4.422169ms) | |
Apr 21 22:56:43.934: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0:4194/proxy/containers/: | |
<html> | |
<head> | |
<title>cAdvisor - /</title> | |
<link rel="stylesheet" href="../static/... (200; 4.685657ms) | |
Apr 21 22:56:43.939: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0:4194/proxy/containers/: | |
<html> | |
<head> | |
<title>cAdvisor - /</title> | |
<link rel="stylesheet" href="../static/... (200; 5.533349ms) | |
Apr 21 22:56:43.944: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0:4194/proxy/containers/: | |
<html> | |
<head> | |
<title>cAdvisor - /</title> | |
<link rel="stylesheet" href="../static/... (200; 4.753ms) | |
Apr 21 22:56:43.949: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0:4194/proxy/containers/: | |
<html> | |
<head> | |
<title>cAdvisor - /</title> | |
<link rel="stylesheet" href="../static/... (200; 4.585164ms) | |
Apr 21 22:56:43.953: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0:4194/proxy/containers/: | |
<html> | |
<head> | |
<title>cAdvisor - /</title> | |
<link rel="stylesheet" href="../static/... (200; 4.574412ms) | |
Apr 21 22:56:43.958: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0:4194/proxy/containers/: | |
<html> | |
<head> | |
<title>cAdvisor - /</title> | |
<link rel="stylesheet" href="../static/... (200; 4.919812ms) | |
Apr 21 22:56:43.963: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0:4194/proxy/containers/: | |
<html> | |
<head> | |
<title>cAdvisor - /</title> | |
<link rel="stylesheet" href="../static/... (200; 4.462475ms) | |
Apr 21 22:56:43.968: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0:4194/proxy/containers/: | |
<html> | |
<head> | |
<title>cAdvisor - /</title> | |
<link rel="stylesheet" href="../static/... (200; 4.80274ms) | |
Apr 21 22:56:43.972: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0:4194/proxy/containers/: | |
<html> | |
<head> | |
<title>cAdvisor - /</title> | |
<link rel="stylesheet" href="../static/... (200; 4.606104ms) | |
Apr 21 22:56:43.977: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0:4194/proxy/containers/: | |
<html> | |
<head> | |
<title>cAdvisor - /</title> | |
<link rel="stylesheet" href="../static/... (200; 4.632906ms) | |
Apr 21 22:56:43.982: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0:4194/proxy/containers/: | |
<html> | |
<head> | |
<title>cAdvisor - /</title> | |
<link rel="stylesheet" href="../static/... (200; 4.732617ms) | |
Apr 21 22:56:43.986: INFO: /api/v1/nodes/e2e-gce-master-1-minion-6ch0:4194/proxy/containers/: | |
<html> | |
<head> | |
<title>cAdvisor - /</title> | |
<link rel="stylesheet" href="../static/... (200; 4.714168ms) | |
[AfterEach] version v1 | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:56:43.987: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-proxy-56b4t" for this suite. | |
• [SLOW TEST:15.183 seconds] | |
[k8s.io] Proxy | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
version v1 | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:40 | |
should proxy to cadvisor using proxy subresource [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:65 | |
------------------------------ | |
[BeforeEach] [k8s.io] Networking | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:56:44.941: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [k8s.io] Networking | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:50 | |
STEP: Executing a successful http request from the external internet | |
[It] should function for intra-pod communication [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:214 | |
STEP: Creating a service named "nettest" in namespace "e2e-tests-nettest-ajyvs" | |
STEP: Creating a webserver (pending) pod on each node | |
Apr 21 22:56:45.066: INFO: Created pod nettest-rj1w9 on node e2e-gce-master-1-minion-6ch0 | |
Apr 21 22:56:45.074: INFO: Created pod nettest-pupk3 on node e2e-gce-master-1-minion-8eot | |
Apr 21 22:56:45.078: INFO: Created pod nettest-mxs0e on node e2e-gce-master-1-minion-asea | |
Apr 21 22:56:45.086: INFO: Created pod nettest-newo2 on node e2e-gce-master-1-minion-fyts | |
Apr 21 22:56:45.104: INFO: Created pod nettest-zbk9u on node e2e-gce-master-1-minion-hlmm | |
Apr 21 22:56:45.114: INFO: Created pod nettest-dbuzt on node e2e-gce-master-1-minion-x3cg | |
STEP: Waiting for the webserver pods to transition to Running state | |
W0421 22:56:45.115024 17657 request.go:344] Field selector: v1 - pods - metadata.name - nettest-rj1w9: need to check if this is versioned correctly. | |
W0421 22:56:47.172723 17657 request.go:344] Field selector: v1 - pods - metadata.name - nettest-pupk3: need to check if this is versioned correctly. | |
W0421 22:56:47.627673 17657 request.go:344] Field selector: v1 - pods - metadata.name - nettest-mxs0e: need to check if this is versioned correctly. | |
W0421 22:56:47.645090 17657 request.go:344] Field selector: v1 - pods - metadata.name - nettest-newo2: need to check if this is versioned correctly. | |
W0421 22:56:47.962847 17657 request.go:344] Field selector: v1 - pods - metadata.name - nettest-zbk9u: need to check if this is versioned correctly. | |
W0421 22:56:47.981770 17657 request.go:344] Field selector: v1 - pods - metadata.name - nettest-dbuzt: need to check if this is versioned correctly. | |
STEP: Waiting for connectivity to be verified | |
Apr 21 22:56:49.998: INFO: About to make a proxy status call | |
Apr 21 22:56:50.020: INFO: Proxy status call returned in 22.204378ms | |
Apr 21 22:56:50.020: INFO: Attempt 0: test still running | |
Apr 21 22:56:52.020: INFO: About to make a proxy status call | |
Apr 21 22:56:52.025: INFO: Proxy status call returned in 4.98622ms | |
Apr 21 22:56:52.025: INFO: Attempt 1: test still running | |
Apr 21 22:56:54.026: INFO: About to make a proxy status call | |
Apr 21 22:56:54.030: INFO: Proxy status call returned in 4.764682ms | |
Apr 21 22:56:54.031: INFO: Passed on attempt 2. Cleaning up. | |
STEP: Cleaning up the webserver pods | |
STEP: Cleaning up the service | |
[AfterEach] [k8s.io] Networking | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:56:54.098: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-nettest-ajyvs" for this suite. | |
• [SLOW TEST:14.178 seconds] | |
[k8s.io] Networking | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should function for intra-pod communication [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:214 | |
------------------------------ | |
[BeforeEach] [k8s.io] Downward API volume | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:56:40.237: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should provide podname only [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:43 | |
STEP: Creating a pod to test downward API volume plugin | |
Apr 21 22:56:40.529: INFO: Waiting up to 5m0s for pod downwardapi-volume-fbb938a7-084e-11e6-bd1e-42010af00007 status to be success or failure | |
Apr 21 22:56:40.536: INFO: No Status.Info for container 'client-container' in pod 'downwardapi-volume-fbb938a7-084e-11e6-bd1e-42010af00007' yet | |
Apr 21 22:56:40.536: INFO: Waiting for pod downwardapi-volume-fbb938a7-084e-11e6-bd1e-42010af00007 in namespace 'e2e-tests-downward-api-sx584' status to be 'success or failure'(found phase: "Pending", readiness: false) (7.030788ms elapsed) | |
Apr 21 22:56:42.540: INFO: Nil State.Terminated for container 'client-container' in pod 'downwardapi-volume-fbb938a7-084e-11e6-bd1e-42010af00007' in namespace 'e2e-tests-downward-api-sx584' so far | |
Apr 21 22:56:42.540: INFO: Waiting for pod downwardapi-volume-fbb938a7-084e-11e6-bd1e-42010af00007 in namespace 'e2e-tests-downward-api-sx584' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.010510552s elapsed) | |
STEP: Saw pod success | |
STEP: Trying to get logs from node e2e-gce-master-1-minion-fyts pod downwardapi-volume-fbb938a7-084e-11e6-bd1e-42010af00007 container client-container: <nil> | |
STEP: Successfully fetched pod logs:content of file "/etc/podname": downwardapi-volume-fbb938a7-084e-11e6-bd1e-42010af00007 | |
[AfterEach] [k8s.io] Downward API volume | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:56:44.565: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-downward-api-sx584" for this suite. | |
• [SLOW TEST:19.349 seconds] | |
[k8s.io] Downward API volume | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should provide podname only [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:43 | |
------------------------------ | |
[BeforeEach] [k8s.io] Port forwarding | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:56:49.084: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should support a client that connects, sends data, and disconnects [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:257 | |
STEP: creating the target pod | |
W0421 22:56:49.163447 17762 request.go:344] Field selector: v1 - pods - metadata.name - pfpod: need to check if this is versioned correctly. | |
STEP: Running 'kubectl port-forward' | |
Apr 21 22:56:51.010: INFO: starting port-forward command and streaming output | |
Apr 21 22:56:51.010: INFO: Asynchronously running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config port-forward --namespace=e2e-tests-port-forwarding-i7tsn pfpod :80' | |
Apr 21 22:56:51.012: INFO: reading from `kubectl port-forward` command's stderr | |
STEP: Dialing the local port | |
STEP: Sending the expected data to the local port | |
STEP: Closing the write half of the client's connection | |
STEP: Reading data from the local port | |
STEP: Waiting for the target pod to stop running | |
W0421 22:56:52.045070 17762 request.go:344] Field selector: v1 - pods - metadata.name - pfpod: need to check if this is versioned correctly. | |
STEP: Retrieving logs from the target pod | |
STEP: Verifying logs | |
STEP: Closing the connection to the local port | |
[AfterEach] [k8s.io] Port forwarding | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:56:53.075: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-port-forwarding-i7tsn" for this suite. | |
• [SLOW TEST:14.011 seconds] | |
[k8s.io] Port forwarding | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
[k8s.io] With a server that expects a client request | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should support a client that connects, sends data, and disconnects [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:257 | |
------------------------------ | |
[BeforeEach] [k8s.io] Events | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:56:48.596: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/events.go:128 | |
STEP: creating the pod | |
STEP: submitting the pod to kubernetes | |
W0421 22:56:48.658870 17649 request.go:344] Field selector: v1 - pods - metadata.name - send-events-0091d254-084f-11e6-825b-42010af00007: need to check if this is versioned correctly. | |
STEP: verifying the pod is in kubernetes | |
STEP: retrieving the pod | |
kind:"" apiVersion:"" | |
STEP: checking for scheduler event about the pod | |
Saw scheduler event for our pod. | |
STEP: checking for kubelet event about the pod
Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] Events
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:56:53.677: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-events-by666" for this suite.
• [SLOW TEST:15.104 seconds]
[k8s.io] Events
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/events.go:128
------------------------------
S
------------------------------
[BeforeEach] [k8s.io] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:56:47.926: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:61
STEP: Creating a pod to test emptydir volume type on tmpfs
Apr 21 22:56:48.002: INFO: Waiting up to 5m0s for pod pod-002da23c-084f-11e6-a789-42010af00007 status to be success or failure
Apr 21 22:56:48.006: INFO: No Status.Info for container 'test-container' in pod 'pod-002da23c-084f-11e6-a789-42010af00007' yet
Apr 21 22:56:48.006: INFO: Waiting for pod pod-002da23c-084f-11e6-a789-42010af00007 in namespace 'e2e-tests-emptydir-qt95d' status to be 'success or failure'(found phase: "Pending", readiness: false) (3.314557ms elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node e2e-gce-master-1-minion-x3cg pod pod-002da23c-084f-11e6-a789-42010af00007 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": tmpfs
perms of file "/test-volume": -rwxrwxrwx
[AfterEach] [k8s.io] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:56:50.028: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-qt95d" for this suite.
• [SLOW TEST:17.122 seconds]
[k8s.io] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
volume on tmpfs should have the correct mode [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:61
------------------------------
[BeforeEach] [k8s.io] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:56:34.901: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:481
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [k8s.io] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:56:51.201: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-resourcequota-9knzd" for this suite.
• [SLOW TEST:31.318 seconds]
[k8s.io] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
should verify ResourceQuota with terminating scopes.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:481
------------------------------
S
------------------------------
[BeforeEach] [k8s.io] ConfigMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:56:54.480: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/configmap.go:202
STEP: Creating configMap e2e-tests-configmap-4ohbb/configmap-test-041348d9-084f-11e6-b06b-42010af00007
STEP: Creating a pod to test consume configMaps
Apr 21 22:56:54.538: INFO: Waiting up to 5m0s for pod pod-configmaps-0415950f-084f-11e6-b06b-42010af00007 status to be success or failure
Apr 21 22:56:54.539: INFO: No Status.Info for container 'env-test' in pod 'pod-configmaps-0415950f-084f-11e6-b06b-42010af00007' yet
Apr 21 22:56:54.539: INFO: Waiting for pod pod-configmaps-0415950f-084f-11e6-b06b-42010af00007 in namespace 'e2e-tests-configmap-4ohbb' status to be 'success or failure'(found phase: "Pending", readiness: false) (1.941179ms elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node e2e-gce-master-1-minion-fyts pod pod-configmaps-0415950f-084f-11e6-b06b-42010af00007 container env-test: <nil>
STEP: Successfully fetched pod logs:KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.0.0.1:443
CONFIG_DATA_1=value-1
HOSTNAME=pod-configmaps-0415950f-084f-11e6-b06b-42010af00007
SHLVL=1
HOME=/root
KUBERNETES_PORT_443_TCP_ADDR=10.0.0.1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP=tcp://10.0.0.1:443
KUBERNETES_SERVICE_PORT_HTTPS=443
PWD=/
KUBERNETES_SERVICE_HOST=10.0.0.1
STEP: Cleaning up the configMap
[AfterEach] [k8s.io] ConfigMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:56:56.622: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-4ohbb" for this suite.
• [SLOW TEST:12.162 seconds]
[k8s.io] ConfigMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
should be consumable via environment variable [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/configmap.go:202
------------------------------
S
------------------------------
[BeforeEach] [k8s.io] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:56:59.008: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:74
Apr 21 22:56:59.045: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
[It] should check NodePort out-of-range
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:839
STEP: creating service nodeport-range-test with type NodePort in namespace e2e-tests-services-02dy0
STEP: changing service nodeport-range-test to out-of-range NodePort 4606
STEP: deleting original service nodeport-range-test
STEP: creating service nodeport-range-test with out-of-range NodePort 4606
[AfterEach] [k8s.io] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:56:59.103: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-services-02dy0" for this suite.
• [SLOW TEST:10.115 seconds]
[k8s.io] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
should check NodePort out-of-range
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:839
------------------------------
[BeforeEach] [k8s.io] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:56:58.283: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] paused deployment should be ignored by the controller
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:73
Apr 21 22:56:58.318: INFO: Creating paused deployment test-paused-deployment
Apr 21 22:56:58.355: INFO: Updating deployment test-paused-deployment
Apr 21 22:57:00.390: INFO: Updating deployment test-paused-deployment
STEP: deleting ReplicaSet test-paused-deployment-3184093366 in namespace e2e-tests-deployment-223w0
Apr 21 22:57:04.420: INFO: Deleting RS test-paused-deployment-3184093366 took: 2.020909879s
Apr 21 22:57:04.423: INFO: Terminating ReplicaSet test-paused-deployment-3184093366 pods took: 3.14537ms
Apr 21 22:57:04.431: INFO: Deleting deployment test-paused-deployment
Apr 21 22:57:04.465: INFO: Ensuring deployment test-paused-deployment was deleted
Apr 21 22:57:04.467: INFO: Ensuring deployment test-paused-deployment's RSes were deleted
Apr 21 22:57:04.469: INFO: Ensuring deployment test-paused-deployment's Pods were deleted
[AfterEach] [k8s.io] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:57:04.471: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-223w0" for this suite.
• [SLOW TEST:11.205 seconds]
[k8s.io] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
paused deployment should be ignored by the controller
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:73
------------------------------
[BeforeEach] [k8s.io] V1Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:56:59.119: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks succeed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:63
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [k8s.io] V1Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:57:03.191: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-v1job-33h7p" for this suite.
• [SLOW TEST:14.107 seconds]
[k8s.io] V1Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
should run a job to completion when tasks succeed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:63
------------------------------
[BeforeEach] [k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:56:59.588: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:142
[BeforeEach] [k8s.io] Kubectl run deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1030
[It] should create a deployment from an image [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1062
STEP: running the image gcr.io/google_containers/nginx:1.7.9
Apr 21 22:56:59.676: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config run e2e-test-nginx-deployment --image=gcr.io/google_containers/nginx:1.7.9 --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-vi3ip'
Apr 21 22:56:59.754: INFO: stderr: ""
Apr 21 22:56:59.754: INFO: stdout: "deployment \"e2e-test-nginx-deployment\" created"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1034
Apr 21 22:57:01.769: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-vi3ip'
Apr 21 22:57:03.909: INFO: stderr: ""
Apr 21 22:57:03.909: INFO: stdout: "deployment \"e2e-test-nginx-deployment\" deleted"
[AfterEach] [k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:57:03.909: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-vi3ip" for this suite.
• [SLOW TEST:14.339 seconds]
[k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
[k8s.io] Kubectl run deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
should create a deployment from an image [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1062
------------------------------
S
------------------------------
[BeforeEach] [k8s.io] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:56:05.913: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail a job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
STEP: Creating a job
STEP: Ensuring job was failed
[AfterEach] [k8s.io] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:56:32.151: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-job-1qpo0" for this suite.
• [SLOW TEST:71.270 seconds]
[k8s.io] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
should fail a job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
------------------------------
[BeforeEach] [k8s.io] Kibana Logging Instances Is Alive
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:57:09.125: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kibana Logging Instances Is Alive
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kibana_logging.go:38
[It] should check that the Kibana logging instance is alive
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kibana_logging.go:42
STEP: Checking the Kibana service exists.
STEP: Checking to make sure the Kibana pods are running
W0421 22:57:09.249408   17665 request.go:344] Field selector: v1 - pods - metadata.name - kibana-logging-v1-qva15: need to check if this is versioned correctly.
STEP: Checking to make sure we get a response from the Kibana UI.
[AfterEach] [k8s.io] Kibana Logging Instances Is Alive
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:57:09.289: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kibana-logging-m8a3k" for this suite.
• [SLOW TEST:10.182 seconds]
[k8s.io] Kibana Logging Instances Is Alive
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
should check that the Kibana logging instance is alive
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kibana_logging.go:42
------------------------------
[BeforeEach] [k8s.io] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:57:09.490: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] deployment should create new pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:55
Apr 21 22:57:09.531: INFO: Creating simple deployment test-new-deployment
Apr 21 22:57:13.575: INFO: Deleting deployment test-new-deployment
Apr 21 22:57:15.629: INFO: Ensuring deployment test-new-deployment was deleted
Apr 21 22:57:15.631: INFO: Ensuring deployment test-new-deployment's RSes were deleted
Apr 21 22:57:15.632: INFO: Ensuring deployment test-new-deployment's Pods were deleted
[AfterEach] [k8s.io] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:57:15.634: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-e05ug" for this suite.
• [SLOW TEST:11.161 seconds]
[k8s.io] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
deployment should create new pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:55
------------------------------
[BeforeEach] [k8s.io] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:56:53.776: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:281
STEP: Waiting for DNS Service to be Running
W0421 22:56:53.825515   17620 request.go:344] Field selector: v1 - pods - metadata.name - kube-dns-v11-5kbrl: need to check if this is versioned correctly.
STEP: Running these commands on wheezy:for i in `seq 1 600`; do test -n "$$(dig +notcp +noall +answer +search kubernetes.default A)" && echo OK > /results/[email protected];test -n "$$(dig +tcp +noall +answer +search kubernetes.default A)" && echo OK > /results/[email protected];test -n "$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && echo OK > /results/[email protected];test -n "$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && echo OK > /results/[email protected];test -n "$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && echo OK > /results/[email protected];test -n "$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && echo OK > /results/[email protected];test -n "$$(dig +notcp +noall +answer +search google.com A)" && echo OK > /results/[email protected];test -n "$$(dig +tcp +noall +answer +search google.com A)" && echo OK > /results/[email protected];test -n "$$(dig +notcp +noall +answer +search metadata A)" && echo OK > /results/wheezy_udp@metadata;test -n "$$(dig +tcp +noall +answer +search metadata A)" && echo OK > /results/wheezy_tcp@metadata;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-fcyfp.pod.cluster.local"}');test -n "$$(dig +notcp +noall +answer +search $${podARec} A)" && echo OK > /results/wheezy_udp@PodARecord;test -n "$$(dig +tcp +noall +answer +search $${podARec} A)" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie:for i in `seq 1 600`; do test -n "$$(dig +notcp +noall +answer +search kubernetes.default A)" && echo OK > /results/[email protected];test -n "$$(dig +tcp +noall +answer +search kubernetes.default A)" && echo OK > /results/[email protected];test -n "$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && echo OK > /results/[email protected];test -n "$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && echo OK > /results/[email protected];test -n "$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && echo OK > /results/[email protected];test -n "$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && echo OK > /results/[email protected];test -n "$$(dig +notcp +noall +answer +search google.com A)" && echo OK > /results/[email protected];test -n "$$(dig +tcp +noall +answer +search google.com A)" && echo OK > /results/[email protected];test -n "$$(dig +notcp +noall +answer +search metadata A)" && echo OK > /results/jessie_udp@metadata;test -n "$$(dig +tcp +noall +answer +search metadata A)" && echo OK > /results/jessie_tcp@metadata;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-fcyfp.pod.cluster.local"}');test -n "$$(dig +notcp +noall +answer +search $${podARec} A)" && echo OK > /results/jessie_udp@PodARecord;test -n "$$(dig +tcp +noall +answer +search $${podARec} A)" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
W0421 22:56:53.849152   17620 request.go:344] Field selector: v1 - pods - metadata.name - dns-test-03a9d54e-084f-11e6-9214-42010af00007: need to check if this is versioned correctly.
STEP: retrieving the pod
STEP: looking for the results for each expected name from probiers
Apr 21 22:57:14.082: INFO: DNS probes using dns-test-03a9d54e-084f-11e6-9214-42010af00007 succeeded
STEP: deleting the pod
[AfterEach] [k8s.io] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:57:14.096: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-fcyfp" for this suite.
• [SLOW TEST:30.340 seconds]
[k8s.io] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
should provide DNS for the cluster [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:281
------------------------------
[BeforeEach] [k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:57:17.185: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:142
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:612
STEP: validating cluster-info
Apr 21 22:57:17.224: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config cluster-info'
Apr 21 22:57:17.294: INFO: stderr: ""
Apr 21 22:57:17.294: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://146.148.88.146\x1b[0m\n\x1b[0;32mGLBCDefaultBackend\x1b[0m is running at \x1b[0;33mhttps://146.148.88.146/api/v1/proxy/namespaces/kube-system/services/default-http-backend\x1b[0m\n\x1b[0;32mElasticsearch\x1b[0m is running at \x1b[0;33mhttps://146.148.88.146/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging\x1b[0m\n\x1b[0;32mHeapster\x1b[0m is running at \x1b[0;33mhttps://146.148.88.146/api/v1/proxy/namespaces/kube-system/services/heapster\x1b[0m\n\x1b[0;32mKibana\x1b[0m is running at \x1b[0;33mhttps://146.148.88.146/api/v1/proxy/namespaces/kube-system/services/kibana-logging\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://146.148.88.146/api/v1/proxy/namespaces/kube-system/services/kube-dns\x1b[0m\n\x1b[0;32mkubernetes-dashboard\x1b[0m is running at \x1b[0;33mhttps://146.148.88.146/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard\x1b[0m\n\x1b[0;32mGrafana\x1b[0m is running at \x1b[0;33mhttps://146.148.88.146/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana\x1b[0m\n\x1b[0;32mInfluxDB\x1b[0m is running at \x1b[0;33mhttps://146.148.88.146/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb\x1b[0m"
[AfterEach] [k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:57:17.294: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-bds3n" for this suite.
• [SLOW TEST:10.138 seconds]
[k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
[k8s.io] Kubectl cluster-info
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
should check if Kubernetes master services is included in cluster-info [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:612
------------------------------
[BeforeEach] [k8s.io] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:56:56.349: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be restarted with a /healthz http liveness probe [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:742
STEP: Creating pod liveness-http in namespace e2e-tests-pods-p8we0
W0421 22:56:56.402343   17734 request.go:344] Field selector: v1 - pods - metadata.name - liveness-http: need to check if this is versioned correctly.
Apr 21 22:56:58.136: INFO: Started pod liveness-http in namespace e2e-tests-pods-p8we0
STEP: checking the pod's current state and verifying that restartCount is present
Apr 21 22:56:58.151: INFO: Initial restart count of pod liveness-http is 0
Apr 21 22:57:18.201: INFO: Restart count of pod e2e-tests-pods-p8we0/liveness-http is now 1 (20.050338708s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:57:18.213: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-p8we0" for this suite.
• [SLOW TEST:31.883 seconds]
[k8s.io] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
should be restarted with a /healthz http liveness probe [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:742
------------------------------
S
------------------------------
[BeforeEach] [k8s.io] KubeletManagedEtcHosts | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:56:34.773: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should test kubelet managed /etc/hosts file | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_etc_hosts.go:56 | |
STEP: Setting up the test | |
STEP: Creating hostNetwork=false pod | |
W0421 22:56:34.859926 17779 request.go:344] Field selector: v1 - pods - metadata.name - test-pod: need to check if this is versioned correctly. | |
STEP: Creating hostNetwork=true pod | |
W0421 22:56:37.621924 17779 request.go:344] Field selector: v1 - pods - metadata.name - test-host-network-pod: need to check if this is versioned correctly. | |
STEP: Running the test | |
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false | |
Apr 21 22:56:38.510: INFO: Asynchronously running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config exec --namespace=e2e-tests-e2e-kubelet-etc-hosts-lheg3 test-pod -c busybox-1 cat /etc/hosts' | |
Apr 21 22:56:38.512: INFO: reading from `kubectl exec` command's stdout | |
Apr 21 22:56:38.886: INFO: Asynchronously running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config exec --namespace=e2e-tests-e2e-kubelet-etc-hosts-lheg3 test-pod -c busybox-2 cat /etc/hosts' | |
Apr 21 22:56:38.888: INFO: reading from `kubectl exec` command's stdout | |
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount | |
Apr 21 22:56:39.221: INFO: Asynchronously running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config exec --namespace=e2e-tests-e2e-kubelet-etc-hosts-lheg3 test-pod -c busybox-3 cat /etc/hosts' | |
Apr 21 22:56:39.222: INFO: reading from `kubectl exec` command's stdout | |
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true | |
Apr 21 22:56:39.573: INFO: Asynchronously running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config exec --namespace=e2e-tests-e2e-kubelet-etc-hosts-lheg3 test-host-network-pod -c busybox-1 cat /etc/hosts' | |
Apr 21 22:56:39.574: INFO: reading from `kubectl exec` command's stdout | |
Apr 21 22:56:40.069: INFO: Asynchronously running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config exec --namespace=e2e-tests-e2e-kubelet-etc-hosts-lheg3 test-host-network-pod -c busybox-2 cat /etc/hosts' | |
Apr 21 22:56:40.072: INFO: reading from `kubectl exec` command's stdout | |
[AfterEach] [k8s.io] KubeletManagedEtcHosts | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:56:40.630: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-lheg3" for this suite. | |
• [SLOW TEST:55.959 seconds] | |
[k8s.io] KubeletManagedEtcHosts | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should test kubelet managed /etc/hosts file | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_etc_hosts.go:56 | |
------------------------------ | |
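The KubeletManagedEtcHosts steps above `cat /etc/hosts` in each container and decide whether the file is kubelet-managed. A minimal sketch of that classification, assuming the kubelet of this era marks files it writes with a "# Kubernetes-managed hosts file." header (the helper name is ours, not the test's):

```python
# Sketch: classify /etc/hosts content the way the e2e test distinguishes
# kubelet-managed files from container-supplied ones.
# Assumption: the kubelet prepends the header below to files it manages.
KUBELET_HEADER = "# Kubernetes-managed hosts file."

def is_kubelet_managed(etc_hosts_content: str) -> bool:
    """Return True if the file looks like it was written by the kubelet."""
    first_line = etc_hosts_content.splitlines()[0] if etc_hosts_content else ""
    return first_line.strip() == KUBELET_HEADER

managed = "# Kubernetes-managed hosts file.\n127.0.0.1 localhost\n"
custom = "127.0.0.1 localhost\n"
print(is_kubelet_managed(managed))  # → True
print(is_kubelet_managed(custom))   # → False
```

A container that mounts its own /etc/hosts (busybox-3 above) or a pod with hostNetwork=true should fail this check, which is exactly what the "not kubelet-managed" verification steps assert.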
SS | |
------------------------------ | |
[BeforeEach] [k8s.io] Job | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:57:13.228: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should run a job to completion when tasks sometimes fail and are not locally restarted | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:96 | |
STEP: Creating a job | |
STEP: Ensuring job reaches completions | |
[AfterEach] [k8s.io] Job | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:57:21.286: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-job-45eu4" for this suite. | |
• [SLOW TEST:18.075 seconds] | |
[k8s.io] Job | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should run a job to completion when tasks sometimes fail and are not locally restarted | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:96 | |
------------------------------ | |
[BeforeEach] [k8s.io] V1Job | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:57:31.304: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should run a job to completion when tasks sometimes fail and are locally restarted | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:82 | |
STEP: Creating a job | |
STEP: Ensuring job reaches completions | |
[AfterEach] [k8s.io] V1Job | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:57:39.360: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-v1job-0o48x" for this suite. | |
• [SLOW TEST:13.075 seconds] | |
[k8s.io] V1Job | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should run a job to completion when tasks sometimes fail and are locally restarted | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:82 | |
------------------------------ | |
[BeforeEach] [k8s.io] ReplicationController | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:57:27.327: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should serve a basic image on each replica with a private image | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:45 | |
STEP: Creating replication controller my-hostname-private-17a77dc4-084f-11e6-bb3d-42010af00007 | |
Apr 21 22:57:27.389: INFO: Pod name my-hostname-private-17a77dc4-084f-11e6-bb3d-42010af00007: Found 0 pods out of 2 | |
Apr 21 22:57:32.393: INFO: Pod name my-hostname-private-17a77dc4-084f-11e6-bb3d-42010af00007: Found 2 pods out of 2 | |
STEP: Ensuring each pod is running | |
W0421 22:57:32.393318 17728 request.go:344] Field selector: v1 - pods - metadata.name - my-hostname-private-17a77dc4-084f-11e6-bb3d-42010af00007-eqsb0: need to check if this is versioned correctly. | |
W0421 22:57:32.398254 17728 request.go:344] Field selector: v1 - pods - metadata.name - my-hostname-private-17a77dc4-084f-11e6-bb3d-42010af00007-v8j7h: need to check if this is versioned correctly. | |
STEP: Trying to dial each unique pod | |
Apr 21 22:57:37.447: INFO: Controller my-hostname-private-17a77dc4-084f-11e6-bb3d-42010af00007: Got expected result from replica 1 [my-hostname-private-17a77dc4-084f-11e6-bb3d-42010af00007-eqsb0]: "my-hostname-private-17a77dc4-084f-11e6-bb3d-42010af00007-eqsb0", 1 of 2 required successes so far | |
Apr 21 22:57:37.454: INFO: Controller my-hostname-private-17a77dc4-084f-11e6-bb3d-42010af00007: Got expected result from replica 2 [my-hostname-private-17a77dc4-084f-11e6-bb3d-42010af00007-v8j7h]: "my-hostname-private-17a77dc4-084f-11e6-bb3d-42010af00007-v8j7h", 2 of 2 required successes so far | |
STEP: deleting replication controller my-hostname-private-17a77dc4-084f-11e6-bb3d-42010af00007 in namespace e2e-tests-replication-controller-fosr3 | |
Apr 21 22:57:39.494: INFO: Deleting RC my-hostname-private-17a77dc4-084f-11e6-bb3d-42010af00007 took: 2.037129753s | |
Apr 21 22:57:39.494: INFO: Terminating RC my-hostname-private-17a77dc4-084f-11e6-bb3d-42010af00007 pods took: 116.425µs | |
[AfterEach] [k8s.io] ReplicationController | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:57:39.494: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-replication-controller-fosr3" for this suite. | |
• [SLOW TEST:17.187 seconds] | |
[k8s.io] ReplicationController | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should serve a basic image on each replica with a private image | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:45 | |
------------------------------ | |
[BeforeEach] [k8s.io] DNS | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:57:19.309: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should provide DNS for pods for Hostname and Subdomain Annotation | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:368 | |
STEP: Waiting for DNS Service to be Running | |
W0421 22:57:19.383247 17665 request.go:344] Field selector: v1 - pods - metadata.name - kube-dns-v11-5kbrl: need to check if this is versioned correctly. | |
STEP: Creating a test headless service | |
STEP: Running these commands on wheezy:for i in `seq 1 600`; do test -n "$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.e2e-tests-dns-p0gew.svc.cluster.local A)" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.e2e-tests-dns-p0gew.svc.cluster.local;test -n "$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.e2e-tests-dns-p0gew.svc.cluster.local A)" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.e2e-tests-dns-p0gew.svc.cluster.local;test -n "$$(getent hosts dns-querier-2.dns-test-service-2.e2e-tests-dns-p0gew.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.e2e-tests-dns-p0gew.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-p0gew.pod.cluster.local"}');test -n "$$(dig +notcp +noall +answer +search $${podARec} A)" && echo OK > /results/wheezy_udp@PodARecord;test -n "$$(dig +tcp +noall +answer +search $${podARec} A)" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done | |
STEP: Running these commands on jessie:for i in `seq 1 600`; do test -n "$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.e2e-tests-dns-p0gew.svc.cluster.local A)" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.e2e-tests-dns-p0gew.svc.cluster.local;test -n "$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.e2e-tests-dns-p0gew.svc.cluster.local A)" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.e2e-tests-dns-p0gew.svc.cluster.local;test -n "$$(getent hosts dns-querier-2.dns-test-service-2.e2e-tests-dns-p0gew.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.e2e-tests-dns-p0gew.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-p0gew.pod.cluster.local"}');test -n "$$(dig +notcp +noall +answer +search $${podARec} A)" && echo OK > /results/jessie_udp@PodARecord;test -n "$$(dig +tcp +noall +answer +search $${podARec} A)" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done | |
STEP: creating a pod to probe DNS | |
STEP: submitting the pod to kubernetes | |
W0421 22:57:19.409326 17665 request.go:344] Field selector: v1 - pods - metadata.name - dns-test-12e7efa3-084f-11e6-a3f7-42010af00007: need to check if this is versioned correctly. | |
STEP: retrieving the pod | |
STEP: looking for the results for each expected name from probers | |
Apr 21 22:57:42.000: INFO: DNS probes using dns-test-12e7efa3-084f-11e6-a3f7-42010af00007 succeeded | |
STEP: deleting the pod | |
STEP: deleting the test headless service | |
[AfterEach] [k8s.io] DNS | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:57:42.086: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-dns-p0gew" for this suite. | |
• [SLOW TEST:27.797 seconds] | |
[k8s.io] DNS | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should provide DNS for pods for Hostname and Subdomain Annotation | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:368 | |
------------------------------ | |
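The wheezy/jessie probe scripts above derive the pod A-record with `hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".<namespace>.pod.cluster.local"}'`: dots in the pod IP become dashes, suffixed with the namespace and `.pod.cluster.local`. The same transformation as a sketch (function name is ours):

```python
# Sketch: build the pod A-record name the probe script derives with awk.
# Dots in the pod IP become dashes; the namespace and the
# "pod.cluster.local" zone are appended.
def pod_a_record(pod_ip: str, namespace: str) -> str:
    return pod_ip.replace(".", "-") + f".{namespace}.pod.cluster.local"

print(pod_a_record("10.245.4.6", "e2e-tests-dns-p0gew"))
# → 10-245-4-6.e2e-tests-dns-p0gew.pod.cluster.local
```

The probe loop then checks that record over both UDP (`dig +notcp`) and TCP (`dig +tcp`) and writes an `OK` marker file per successful lookup, which the test collects from `/results`.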
SS | |
------------------------------ | |
[BeforeEach] [k8s.io] Job | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:56:48.025: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should scale a job up | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:137 | |
STEP: Creating a job | |
STEP: Ensuring active pods == startParallelism | |
STEP: scale job up | |
STEP: Ensuring active pods == endParallelism | |
[AfterEach] [k8s.io] Job | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:57:02.114: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-job-ylmuo" for this suite. | |
• [SLOW TEST:64.106 seconds] | |
[k8s.io] Job | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should scale a job up | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:137 | |
------------------------------ | |
[BeforeEach] [k8s.io] PrivilegedPod | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:57:05.050: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should test privileged pod | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/privileged.go:67 | |
W0421 22:57:05.101453 17602 request.go:344] Field selector: v1 - pods - metadata.name - hostexec: need to check if this is versioned correctly. | |
STEP: Creating a privileged pod | |
W0421 22:57:06.382499 17602 request.go:344] Field selector: v1 - pods - metadata.name - privileged-pod: need to check if this is versioned correctly. | |
STEP: Executing privileged command on privileged container | |
STEP: Exec-ing into container over http. Running command: curl -q 'http://10.245.4.6:8080/shell?shellCommand=ip+link+add+dummy1+type+dummy' | |
Apr 21 22:57:09.649: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config exec --namespace=e2e-tests-e2e-privilegedpod-qj4ib hostexec -- /bin/sh -c curl -q 'http://10.245.4.6:8080/shell?shellCommand=ip+link+add+dummy1+type+dummy'' | |
Apr 21 22:57:09.898: INFO: stderr: " % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n\r 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\r100 2 100 2 0 0 91 0 --:--:-- --:--:-- --:--:-- 95\n" | |
Apr 21 22:57:09.899: INFO: stdout: {} | |
Apr 21 22:57:09.899: INFO: Deserialized output is {} | |
STEP: Executing privileged command on non-privileged container | |
STEP: Exec-ing into container over http. Running command: curl -q 'http://10.245.4.6:9090/shell?shellCommand=ip+link+add+dummy1+type+dummy' | |
Apr 21 22:57:09.899: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config exec --namespace=e2e-tests-e2e-privilegedpod-qj4ib hostexec -- /bin/sh -c curl -q 'http://10.245.4.6:9090/shell?shellCommand=ip+link+add+dummy1+type+dummy'' | |
Apr 21 22:57:10.166: INFO: stderr: " % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n\r 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\r100 81 100 81 0 0 17300 0 --:--:-- --:--:-- --:--:-- 20250\n" | |
Apr 21 22:57:10.166: INFO: stdout: {"error":"exit status 2","output":"RTNETLINK answers: Operation not permitted\n"} | |
Apr 21 22:57:10.166: INFO: Deserialized output is {"error":"exit status 2","output":"RTNETLINK answers: Operation not permitted\n"} | |
[AfterEach] [k8s.io] PrivilegedPod | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:57:10.167: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-e2e-privilegedpod-qj4ib" for this suite. | |
• [SLOW TEST:50.135 seconds] | |
[k8s.io] PrivilegedPod | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should test privileged pod | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/privileged.go:67 | |
------------------------------ | |
S | |
------------------------------ | |
[BeforeEach] [k8s.io] EmptyDir volumes | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:57:55.187: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] volume on default medium should have the correct mode [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:89 | |
STEP: Creating a pod to test emptydir volume type on node default medium | |
Apr 21 22:57:55.239: INFO: Waiting up to 5m0s for pod pod-2841e56b-084f-11e6-a789-42010af00007 status to be success or failure | |
Apr 21 22:57:55.242: INFO: No Status.Info for container 'test-container' in pod 'pod-2841e56b-084f-11e6-a789-42010af00007' yet | |
Apr 21 22:57:55.242: INFO: Waiting for pod pod-2841e56b-084f-11e6-a789-42010af00007 in namespace 'e2e-tests-emptydir-kxhxv' status to be 'success or failure'(found phase: "Pending", readiness: false) (3.362414ms elapsed) | |
STEP: Saw pod success | |
STEP: Trying to get logs from node e2e-gce-master-1-minion-fyts pod pod-2841e56b-084f-11e6-a789-42010af00007 container test-container: <nil> | |
STEP: Successfully fetched pod logs:mount type of "/test-volume": 61267 | |
perms of file "/test-volume": -rwxrwxrwx | |
[AfterEach] [k8s.io] EmptyDir volumes | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:57:57.265: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-emptydir-kxhxv" for this suite. | |
• [SLOW TEST:7.099 seconds] | |
[k8s.io] EmptyDir volumes | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
volume on default medium should have the correct mode [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:89 | |
------------------------------ | |
[BeforeEach] [k8s.io] Horizontal pod autoscaling (scale resource: CPU) | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:55:04.550: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] Should scale from 1 pod to 2 pods | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:89 | |
STEP: Running consuming RC rc-light via replicationController with 1 replicas | |
STEP: creating replication controller rc-light in namespace e2e-tests-horizontal-pod-autoscaling-xu4ly | |
Apr 21 22:55:04.638: INFO: Created replication controller with name: rc-light, namespace: e2e-tests-horizontal-pod-autoscaling-xu4ly, replica count: 1 | |
Apr 21 22:55:14.638: INFO: rc-light Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady | |
Apr 21 22:55:24.639: INFO: RC rc-light: consume 150 millicores in total | |
Apr 21 22:55:24.639: INFO: RC rc-light: consume 150 millicores in total | |
Apr 21 22:55:24.639: INFO: RC rc-light: consume 0 MB in total | |
Apr 21 22:55:24.639: INFO: RC rc-light: consume 0 MB in total | |
Apr 21 22:55:24.639: INFO: RC rc-light: consume custom metric 0 in total | |
Apr 21 22:55:24.639: INFO: RC rc-light: sending 7 requests to consume 20 millicores each and 1 request to consume 10 millicores | |
Apr 21 22:55:24.639: INFO: RC rc-light: sending 0 requests to consume 100 MB each and 1 request to consume 0 MB | |
Apr 21 22:55:24.639: INFO: RC rc-light: consume custom metric 0 in total | |
Apr 21 22:55:24.640: INFO: RC rc-light: sending 0 requests to consume 10 custom metric each and 1 request to consume 0 | |
Apr 21 22:55:24.725: INFO: replicationController: current replicas number 1 waiting to be 2 | |
Apr 21 22:55:44.751: INFO: replicationController: current replicas number 1 waiting to be 2 | |
Apr 21 22:55:54.639: INFO: RC rc-light: sending 0 requests to consume 100 MB each and 1 request to consume 0 MB | |
Apr 21 22:55:54.639: INFO: RC rc-light: sending 7 requests to consume 20 millicores each and 1 request to consume 10 millicores | |
Apr 21 22:55:54.641: INFO: RC rc-light: sending 0 requests to consume 10 custom metric each and 1 request to consume 0 | |
Apr 21 22:56:04.765: INFO: replicationController: current replicas number 1 waiting to be 2 | |
Apr 21 22:56:24.640: INFO: RC rc-light: sending 0 requests to consume 100 MB each and 1 request to consume 0 MB | |
Apr 21 22:56:24.640: INFO: RC rc-light: sending 7 requests to consume 20 millicores each and 1 request to consume 10 millicores | |
Apr 21 22:56:24.641: INFO: RC rc-light: sending 0 requests to consume 10 custom metric each and 1 request to consume 0 | |
Apr 21 22:56:25.416: INFO: replicationController: current replicas number 1 waiting to be 2 | |
Apr 21 22:56:45.422: INFO: replicationController: current replicas number 1 waiting to be 2 | |
Apr 21 22:56:54.640: INFO: RC rc-light: sending 0 requests to consume 100 MB each and 1 request to consume 0 MB | |
Apr 21 22:56:54.640: INFO: RC rc-light: sending 7 requests to consume 20 millicores each and 1 request to consume 10 millicores | |
Apr 21 22:56:54.641: INFO: RC rc-light: sending 0 requests to consume 10 custom metric each and 1 request to consume 0 | |
Apr 21 22:57:05.429: INFO: replicationController: current replicas number 1 waiting to be 2 | |
Apr 21 22:57:24.640: INFO: RC rc-light: sending 0 requests to consume 100 MB each and 1 request to consume 0 MB | |
Apr 21 22:57:24.640: INFO: RC rc-light: sending 7 requests to consume 20 millicores each and 1 request to consume 10 millicores | |
Apr 21 22:57:24.642: INFO: RC rc-light: sending 0 requests to consume 10 custom metric each and 1 request to consume 0 | |
Apr 21 22:57:25.440: INFO: replicationController: current replicas number 1 waiting to be 2 | |
Apr 21 22:57:45.443: INFO: replicationController: current replicas number is equal to desired replicas number: 2 | |
STEP: Removing consuming RC rc-light | |
STEP: deleting replication controller rc-light in namespace e2e-tests-horizontal-pod-autoscaling-xu4ly | |
Apr 21 22:57:57.470: INFO: Deleting RC rc-light took: 2.022419266s | |
Apr 21 22:57:57.470: INFO: Terminating RC rc-light pods took: 96.468µs | |
[AfterEach] [k8s.io] Horizontal pod autoscaling (scale resource: CPU) | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:57:57.481: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-horizontal-pod-autoscaling-xu4ly" for this suite. | |
• [SLOW TEST:177.967 seconds] | |
[k8s.io] Horizontal pod autoscaling (scale resource: CPU) | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
[k8s.io] ReplicationController light | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
Should scale from 1 pod to 2 pods | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:89 | |
------------------------------ | |
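In the HPA run above, rc-light starts at 1 replica while the consumer drives ~150 millicores of CPU, and the controller eventually settles on 2 replicas. The control loop of this era scaled roughly proportionally to observed vs. target utilization, `ceil(currentReplicas * currentUtilization / targetUtilization)`. A sketch under that assumption (the 100m target is illustrative, not taken from the log):

```python
import math

# Sketch of the proportional HPA scaling rule: desired replica count grows
# with the ratio of observed CPU utilization to the configured target.
def desired_replicas(current_replicas: int, current_utilization: float,
                     target_utilization: float) -> int:
    return max(1, math.ceil(current_replicas * current_utilization / target_utilization))

# rc-light: 1 replica consuming ~150 millicores against an assumed 100m target
print(desired_replicas(1, 150, 100))  # → 2
```

With 150m observed against a 100m target, one replica is 1.5x over target, so the controller asks for 2 — matching the "current replicas number is equal to desired replicas number: 2" line above.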
[BeforeEach] [k8s.io] Docker Containers | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:58:02.287: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [k8s.io] Docker Containers | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:36 | |
[It] should use the image defaults if command and args are blank [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:42 | |
STEP: Creating a pod to test use defaults | |
Apr 21 22:58:02.343: INFO: Waiting up to 5m0s for pod client-containers-2c7dd993-084f-11e6-a789-42010af00007 status to be success or failure | |
Apr 21 22:58:02.347: INFO: No Status.Info for container 'test-container' in pod 'client-containers-2c7dd993-084f-11e6-a789-42010af00007' yet | |
Apr 21 22:58:02.347: INFO: Waiting for pod client-containers-2c7dd993-084f-11e6-a789-42010af00007 in namespace 'e2e-tests-containers-0ef7b' status to be 'success or failure'(found phase: "Pending", readiness: false) (3.997993ms elapsed) | |
STEP: Saw pod success | |
STEP: Trying to get logs from node e2e-gce-master-1-minion-x3cg pod client-containers-2c7dd993-084f-11e6-a789-42010af00007 container test-container: <nil> | |
STEP: Successfully fetched pod logs:[/ep default arguments] | |
[AfterEach] [k8s.io] Docker Containers | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:58:04.368: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-containers-0ef7b" for this suite. | |
• [SLOW TEST:7.101 seconds] | |
[k8s.io] Docker Containers | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should use the image defaults if command and args are blank [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:42 | |
------------------------------ | |
[BeforeEach] [k8s.io] EmptyDir volumes | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:58:02.518: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should support (root,0644,tmpfs) [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:65 | |
STEP: Creating a pod to test emptydir 0644 on tmpfs | |
Apr 21 22:58:02.561: INFO: Waiting up to 5m0s for pod pod-2c9edccb-084f-11e6-a9ac-42010af00007 status to be success or failure | |
Apr 21 22:58:02.563: INFO: No Status.Info for container 'test-container' in pod 'pod-2c9edccb-084f-11e6-a9ac-42010af00007' yet | |
Apr 21 22:58:02.563: INFO: Waiting for pod pod-2c9edccb-084f-11e6-a9ac-42010af00007 in namespace 'e2e-tests-emptydir-vmvug' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.802898ms elapsed) | |
STEP: Saw pod success | |
STEP: Trying to get logs from node e2e-gce-master-1-minion-x3cg pod pod-2c9edccb-084f-11e6-a9ac-42010af00007 container test-container: <nil> | |
STEP: Successfully fetched pod logs:mount type of "/test-volume": tmpfs | |
content of file "/test-volume/test-file": mount-tester new file | |
perms of file "/test-volume/test-file": -rw-r--r-- | |
[AfterEach] [k8s.io] EmptyDir volumes | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:58:04.586: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-emptydir-vmvug" for this suite. | |
• [SLOW TEST:7.093 seconds] | |
[k8s.io] EmptyDir volumes | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should support (root,0644,tmpfs) [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:65 | |
------------------------------ | |
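The EmptyDir tests above assert on permission strings such as `-rwxrwxrwx` (default medium) and `-rw-r--r--` (the 0644 tmpfs case) as printed by the mount-tester image. The mapping from an octal mode to that ls-style string can be sketched with the standard library (`perm_string` is our name):

```python
import stat

# Sketch: render a regular file's mode the way the mount-tester logs it,
# e.g. 0o644 → "-rw-r--r--", 0o777 → "-rwxrwxrwx".
def perm_string(mode: int) -> str:
    return stat.filemode(stat.S_IFREG | mode)

print(perm_string(0o644))  # → -rw-r--r--
print(perm_string(0o777))  # → -rwxrwxrwx
```

This is why the `(root,0644,tmpfs)` case expects exactly `-rw-r--r--`: owner read/write, group and world read-only.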
[BeforeEach] [k8s.io] Docker Containers | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:58:09.612: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [k8s.io] Docker Containers | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:36 | |
[It] should be able to override the image's default arguments (docker cmd) [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:51 | |
STEP: Creating a pod to test override arguments | |
Apr 21 22:58:09.665: INFO: Waiting up to 5m0s for pod client-containers-30db06e2-084f-11e6-a9ac-42010af00007 status to be success or failure | |
Apr 21 22:58:09.667: INFO: No Status.Info for container 'test-container' in pod 'client-containers-30db06e2-084f-11e6-a9ac-42010af00007' yet | |
Apr 21 22:58:09.667: INFO: Waiting for pod client-containers-30db06e2-084f-11e6-a9ac-42010af00007 in namespace 'e2e-tests-containers-2fqqh' status to be 'success or failure'(found phase: "Pending", readiness: false) (1.964299ms elapsed) | |
STEP: Saw pod success | |
STEP: Trying to get logs from node e2e-gce-master-1-minion-x3cg pod client-containers-30db06e2-084f-11e6-a9ac-42010af00007 container test-container: <nil> | |
STEP: Successfully fetched pod logs:[/ep override arguments] | |
[AfterEach] [k8s.io] Docker Containers | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:58:11.688: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-containers-2fqqh" for this suite. | |
• [SLOW TEST:7.092 seconds] | |
[k8s.io] Docker Containers | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should be able to override the image's default arguments (docker cmd) [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:51 | |
------------------------------ | |
[BeforeEach] [k8s.io] Probing container | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:55:02.187: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [k8s.io] Probing container | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:46 | |
[It] with readiness probe that fails should never be ready and never restart [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:111 | |
[AfterEach] [k8s.io] Probing container | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:58:02.262: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-container-probe-8zgt6" for this suite. | |
• [SLOW TEST:205.093 seconds] | |
[k8s.io] Probing container | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
with readiness probe that fails should never be ready and never restart [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:111 | |
------------------------------ | |
S | |
------------------------------ | |
[BeforeEach] [k8s.io] Probing container | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:57:20.652: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [k8s.io] Probing container | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:46 | |
[It] with readiness probe should not be ready before initial delay and never restart [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:86 | |
Apr 21 22:57:22.717: INFO: pod is not yet ready; pod has phase "Running". | |
Apr 21 22:57:24.717: INFO: pod is not yet ready; pod has phase "Running". | |
Apr 21 22:57:26.717: INFO: pod is not yet ready; pod has phase "Running". | |
Apr 21 22:57:28.717: INFO: pod is not yet ready; pod has phase "Running". | |
Apr 21 22:57:30.717: INFO: pod is not yet ready; pod has phase "Running". | |
Apr 21 22:57:32.718: INFO: pod is not yet ready; pod has phase "Running". | |
Apr 21 22:57:34.717: INFO: pod is not yet ready; pod has phase "Running". | |
Apr 21 22:57:36.718: INFO: pod is not yet ready; pod has phase "Running". | |
Apr 21 22:57:38.717: INFO: pod is not yet ready; pod has phase "Running". | |
Apr 21 22:57:40.717: INFO: pod is not yet ready; pod has phase "Running". | |
Apr 21 22:57:42.718: INFO: pod is not yet ready; pod has phase "Running". | |
Apr 21 22:57:44.717: INFO: pod is not yet ready; pod has phase "Running". | |
Apr 21 22:57:46.717: INFO: pod is not yet ready; pod has phase "Running". | |
Apr 21 22:57:48.718: INFO: pod is not yet ready; pod has phase "Running". | |
Apr 21 22:57:50.718: INFO: pod is not yet ready; pod has phase "Running". | |
Apr 21 22:57:52.718: INFO: pod is not yet ready; pod has phase "Running". | |
Apr 21 22:57:54.717: INFO: pod is not yet ready; pod has phase "Running". | |
Apr 21 22:57:56.717: INFO: pod is not yet ready; pod has phase "Running". | |
Apr 21 22:57:58.718: INFO: pod is not yet ready; pod has phase "Running". | |
Apr 21 22:58:00.718: INFO: pod is not yet ready; pod has phase "Running". | |
Apr 21 22:58:02.721: INFO: Container started at 2016-04-21 22:57:21 -0700 PDT, pod became ready at 2016-04-21 22:58:00 -0700 PDT | |
[AfterEach] [k8s.io] Probing container | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:58:02.721: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-container-probe-43abb" for this suite. | |
• [SLOW TEST:67.086 seconds] | |
[k8s.io] Probing container | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
with readiness probe should not be ready before initial delay and never restart [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:86 | |
------------------------------ | |
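Editor's note: the run of "pod is not yet ready" INFO lines above comes from the e2e framework polling pod readiness on a fixed ~2-second interval until the container reports ready or a timeout expires. A minimal sketch of that polling pattern follows; `is_ready` and `wait_for_ready` are hypothetical names for illustration, not the framework's actual API.

```python
import time


def wait_for_ready(is_ready, timeout=60.0, interval=2.0):
    """Poll is_ready() every `interval` seconds until it returns True.

    Returns elapsed seconds on success; raises TimeoutError otherwise.
    Mirrors the fixed-interval polling visible in the log above.
    """
    start = time.monotonic()
    while True:
        elapsed = time.monotonic() - start
        if is_ready():
            return elapsed
        if elapsed >= timeout:
            raise TimeoutError("not ready after %.1fs" % elapsed)
        time.sleep(interval)


# Example: a fake probe that becomes ready on the third poll.
calls = {"n": 0}


def fake_probe():
    calls["n"] += 1
    return calls["n"] >= 3


elapsed = wait_for_ready(fake_probe, timeout=30.0, interval=0.01)
print(calls["n"])  # 3
```

In the real test the probe's initialDelaySeconds keeps the pod unready for the first stretch of polls, which is exactly the behavior the timestamps above record.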
[BeforeEach] [k8s.io] ConfigMap | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:58:27.740: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] should be consumable from pods in volume with mappings as non-root [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/configmap.go:53 | |
STEP: Creating configMap with name configmap-test-volume-map-3ba9c211-084f-11e6-87d2-42010af00007 | |
STEP: Creating a pod to test consume configMaps | |
Apr 21 22:58:27.798: INFO: Waiting up to 5m0s for pod pod-configmaps-3babfcc3-084f-11e6-87d2-42010af00007 status to be success or failure | |
Apr 21 22:58:27.801: INFO: No Status.Info for container 'configmap-volume-test' in pod 'pod-configmaps-3babfcc3-084f-11e6-87d2-42010af00007' yet | |
Apr 21 22:58:27.801: INFO: Waiting for pod pod-configmaps-3babfcc3-084f-11e6-87d2-42010af00007 in namespace 'e2e-tests-configmap-ahk4l' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.816651ms elapsed) | |
STEP: Saw pod success | |
STEP: Trying to get logs from node e2e-gce-master-1-minion-x3cg pod pod-configmaps-3babfcc3-084f-11e6-87d2-42010af00007 container configmap-volume-test: <nil> | |
STEP: Successfully fetched pod logs:content of file "/etc/configmap-volume/path/to/data-2": value-2 | |
STEP: Cleaning up the configMap | |
[AfterEach] [k8s.io] ConfigMap | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:58:29.828: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-configmap-ahk4l" for this suite. | |
• [SLOW TEST:7.105 seconds] | |
[k8s.io] ConfigMap | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should be consumable from pods in volume with mappings as non-root [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/configmap.go:53 | |
------------------------------ | |
S | |
------------------------------ | |
[BeforeEach] [k8s.io] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:58:09.389: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [k8s.io] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:142 | |
[BeforeEach] [k8s.io] Update Demo | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:150 | |
[It] should do a rolling update of a replication controller [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:181 | |
STEP: creating the initial replication controller | |
Apr 21 22:58:09.452: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config create -f /jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/docs/user-guide/update-demo/nautilus-rc.yaml --namespace=e2e-tests-kubectl-ntwmo' | |
Apr 21 22:58:09.567: INFO: stderr: "" | |
Apr 21 22:58:09.567: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" created" | |
STEP: waiting for all containers in name=update-demo pods to come up. | |
Apr 21 22:58:09.567: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-ntwmo' | |
Apr 21 22:58:09.647: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n" | |
Apr 21 22:58:09.647: INFO: stdout: "update-demo-nautilus-5wy7z update-demo-nautilus-xfjwc" | |
Apr 21 22:58:09.647: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods update-demo-nautilus-5wy7z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-ntwmo' | |
Apr 21 22:58:09.721: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n" | |
Apr 21 22:58:09.721: INFO: stdout: "" | |
Apr 21 22:58:09.721: INFO: update-demo-nautilus-5wy7z is created but not running | |
Apr 21 22:58:14.721: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-ntwmo' | |
Apr 21 22:58:14.793: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n" | |
Apr 21 22:58:14.793: INFO: stdout: "update-demo-nautilus-5wy7z update-demo-nautilus-xfjwc" | |
Apr 21 22:58:14.793: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods update-demo-nautilus-5wy7z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-ntwmo' | |
Apr 21 22:58:14.864: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n" | |
Apr 21 22:58:14.864: INFO: stdout: "true" | |
Apr 21 22:58:14.864: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods update-demo-nautilus-5wy7z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-ntwmo' | |
Apr 21 22:58:14.937: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n" | |
Apr 21 22:58:14.937: INFO: stdout: "gcr.io/google_containers/update-demo:nautilus" | |
Apr 21 22:58:14.937: INFO: validating pod update-demo-nautilus-5wy7z | |
Apr 21 22:58:14.956: INFO: got data: { | |
"image": "nautilus.jpg" | |
} | |
Apr 21 22:58:14.956: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . | |
Apr 21 22:58:14.956: INFO: update-demo-nautilus-5wy7z is verified up and running | |
Apr 21 22:58:14.956: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods update-demo-nautilus-xfjwc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-ntwmo' | |
Apr 21 22:58:15.029: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n" | |
Apr 21 22:58:15.030: INFO: stdout: "true" | |
Apr 21 22:58:15.030: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods update-demo-nautilus-xfjwc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-ntwmo' | |
Apr 21 22:58:15.102: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n" | |
Apr 21 22:58:15.102: INFO: stdout: "gcr.io/google_containers/update-demo:nautilus" | |
Apr 21 22:58:15.102: INFO: validating pod update-demo-nautilus-xfjwc | |
Apr 21 22:58:15.107: INFO: got data: { | |
"image": "nautilus.jpg" | |
} | |
Apr 21 22:58:15.107: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . | |
Apr 21 22:58:15.107: INFO: update-demo-nautilus-xfjwc is verified up and running | |
STEP: rolling-update to new replication controller | |
Apr 21 22:58:15.107: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config rolling-update update-demo-nautilus --update-period=1s -f /jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/docs/user-guide/update-demo/kitten-rc.yaml --namespace=e2e-tests-kubectl-ntwmo' | |
Apr 21 22:58:44.342: INFO: stderr: "" | |
Apr 21 22:58:44.342: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting update-demo-nautilus\nreplicationcontroller \"update-demo-nautilus\" rolling updated to \"update-demo-kitten\"" | |
STEP: waiting for all containers in name=update-demo pods to come up. | |
Apr 21 22:58:44.342: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-ntwmo' | |
Apr 21 22:58:44.420: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n" | |
Apr 21 22:58:44.420: INFO: stdout: "update-demo-kitten-g2quh update-demo-kitten-np8az" | |
Apr 21 22:58:44.420: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods update-demo-kitten-g2quh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-ntwmo' | |
Apr 21 22:58:44.493: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n" | |
Apr 21 22:58:44.493: INFO: stdout: "true" | |
Apr 21 22:58:44.493: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods update-demo-kitten-g2quh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-ntwmo' | |
Apr 21 22:58:44.567: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n" | |
Apr 21 22:58:44.567: INFO: stdout: "gcr.io/google_containers/update-demo:kitten" | |
Apr 21 22:58:44.567: INFO: validating pod update-demo-kitten-g2quh | |
Apr 21 22:58:44.574: INFO: got data: { | |
"image": "kitten.jpg" | |
} | |
Apr 21 22:58:44.574: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . | |
Apr 21 22:58:44.574: INFO: update-demo-kitten-g2quh is verified up and running | |
Apr 21 22:58:44.574: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods update-demo-kitten-np8az -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-ntwmo' | |
Apr 21 22:58:44.644: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n" | |
Apr 21 22:58:44.644: INFO: stdout: "true" | |
Apr 21 22:58:44.644: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods update-demo-kitten-np8az -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-ntwmo' | |
Apr 21 22:58:44.714: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n" | |
Apr 21 22:58:44.714: INFO: stdout: "gcr.io/google_containers/update-demo:kitten" | |
Apr 21 22:58:44.714: INFO: validating pod update-demo-kitten-np8az | |
Apr 21 22:58:44.720: INFO: got data: { | |
"image": "kitten.jpg" | |
} | |
Apr 21 22:58:44.720: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . | |
Apr 21 22:58:44.720: INFO: update-demo-kitten-np8az is verified up and running | |
[AfterEach] [k8s.io] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:58:44.721: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-kubectl-ntwmo" for this suite. | |
• [SLOW TEST:60.348 seconds] | |
[k8s.io] Kubectl client | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
[k8s.io] Update Demo | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
should do a rolling update of a replication controller [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:181 | |
------------------------------ | |
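Editor's note: the "got data" / "Unmarshalled json" lines in the rolling-update test above show each pod's served JSON payload being decoded and its "image" field compared against the expected value. A minimal sketch of that validation step, using the payload shape shown in the log (the function name is illustrative, not the test's actual helper):

```python
import json


def validate_pod_data(raw, expected_image):
    """Unmarshal a pod's JSON payload and compare its 'image' field,
    as the update-demo validation step in the log does."""
    data = json.loads(raw)
    return data.get("image") == expected_image


# Payload shape taken from the log above.
raw = '{"image": "kitten.jpg"}'
print(validate_pod_data(raw, "kitten.jpg"))    # True
print(validate_pod_data(raw, "nautilus.jpg"))  # False
```

This is why the rolling update is only declared complete after both kitten pods pass the check: running containers alone are not enough, the served content must match the new image.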
[BeforeEach] [k8s.io] Dynamic provisioning | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:53:48.916: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [k8s.io] Dynamic provisioning | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volume_provisioning.go:50 | |
[It] should create and delete persistent volumes | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volume_provisioning.go:123 | |
STEP: creating a claim with a dynamic provisioning annotation | |
Apr 21 22:53:49.227: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-fd6jg to have phase Bound | |
Apr 21 22:53:49.232: INFO: PersistentVolumeClaim pvc-fd6jg found but phase is Pending instead of Bound. | |
Apr 21 22:53:51.258: INFO: PersistentVolumeClaim pvc-fd6jg found but phase is Pending instead of Bound. | |
Apr 21 22:53:53.262: INFO: PersistentVolumeClaim pvc-fd6jg found and phase=Bound (4.034292534s) | |
STEP: checking the claim | |
STEP: checking the created volume is writable | |
Apr 21 22:53:53.277: INFO: Waiting up to 15m0s for pod pvc-volume-tester-qlunb status to be success or failure | |
Apr 21 22:53:53.284: INFO: No Status.Info for container 'volume-tester' in pod 'pvc-volume-tester-qlunb' yet | |
Apr 21 22:53:53.284: INFO: Waiting for pod pvc-volume-tester-qlunb in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (7.178421ms elapsed) | |
Apr 21 22:53:55.287: INFO: No Status.Info for container 'volume-tester' in pod 'pvc-volume-tester-qlunb' yet | |
Apr 21 22:53:55.287: INFO: Waiting for pod pvc-volume-tester-qlunb in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.010175129s elapsed) | |
Apr 21 22:53:57.290: INFO: No Status.Info for container 'volume-tester' in pod 'pvc-volume-tester-qlunb' yet | |
Apr 21 22:53:57.290: INFO: Waiting for pod pvc-volume-tester-qlunb in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.013066916s elapsed) | |
Apr 21 22:53:59.301: INFO: No Status.Info for container 'volume-tester' in pod 'pvc-volume-tester-qlunb' yet | |
Apr 21 22:53:59.301: INFO: Waiting for pod pvc-volume-tester-qlunb in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.023649192s elapsed) | |
Apr 21 22:54:01.305: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-qlunb' in namespace 'e2e-tests-volume-provisioning-crsde' so far | |
Apr 21 22:54:01.305: INFO: Waiting for pod pvc-volume-tester-qlunb in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.027269917s elapsed) | |
Apr 21 22:54:03.309: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-qlunb' in namespace 'e2e-tests-volume-provisioning-crsde' so far | |
Apr 21 22:54:03.309: INFO: Waiting for pod pvc-volume-tester-qlunb in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.032110486s elapsed) | |
Apr 21 22:54:05.314: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-qlunb' in namespace 'e2e-tests-volume-provisioning-crsde' so far | |
Apr 21 22:54:05.314: INFO: Waiting for pod pvc-volume-tester-qlunb in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (12.036761123s elapsed) | |
Apr 21 22:54:07.324: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-qlunb' in namespace 'e2e-tests-volume-provisioning-crsde' so far | |
Apr 21 22:54:07.324: INFO: Waiting for pod pvc-volume-tester-qlunb in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (14.047076433s elapsed) | |
Apr 21 22:54:09.328: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-qlunb' in namespace 'e2e-tests-volume-provisioning-crsde' so far | |
Apr 21 22:54:09.328: INFO: Waiting for pod pvc-volume-tester-qlunb in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (16.05087852s elapsed) | |
Apr 21 22:54:11.332: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-qlunb' in namespace 'e2e-tests-volume-provisioning-crsde' so far | |
Apr 21 22:54:11.332: INFO: Waiting for pod pvc-volume-tester-qlunb in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (18.054546662s elapsed) | |
Apr 21 22:54:13.336: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-qlunb' in namespace 'e2e-tests-volume-provisioning-crsde' so far | |
Apr 21 22:54:13.336: INFO: Waiting for pod pvc-volume-tester-qlunb in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (20.059014636s elapsed) | |
Apr 21 22:54:15.340: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-qlunb' in namespace 'e2e-tests-volume-provisioning-crsde' so far | |
Apr 21 22:54:15.340: INFO: Waiting for pod pvc-volume-tester-qlunb in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (22.062508426s elapsed) | |
Apr 21 22:54:17.343: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-qlunb' in namespace 'e2e-tests-volume-provisioning-crsde' so far | |
Apr 21 22:54:17.343: INFO: Waiting for pod pvc-volume-tester-qlunb in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (24.065887632s elapsed) | |
Apr 21 22:54:19.347: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-qlunb' in namespace 'e2e-tests-volume-provisioning-crsde' so far | |
Apr 21 22:54:19.347: INFO: Waiting for pod pvc-volume-tester-qlunb in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (26.069826529s elapsed) | |
Apr 21 22:54:21.350: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-qlunb' in namespace 'e2e-tests-volume-provisioning-crsde' so far | |
Apr 21 22:54:21.350: INFO: Waiting for pod pvc-volume-tester-qlunb in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (28.073030189s elapsed) | |
STEP: Saw pod success | |
STEP: checking the created volume is readable and retains data | |
Apr 21 22:54:23.378: INFO: Waiting up to 15m0s for pod pvc-volume-tester-ttf50 status to be success or failure | |
Apr 21 22:54:23.383: INFO: No Status.Info for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' yet | |
Apr 21 22:54:23.383: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.807019ms elapsed) | |
Apr 21 22:54:25.387: INFO: No Status.Info for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' yet | |
Apr 21 22:54:25.387: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.009214013s elapsed) | |
Apr 21 22:54:27.406: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far | |
Apr 21 22:54:27.406: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.028649539s elapsed) | |
Apr 21 22:54:29.410: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far | |
Apr 21 22:54:29.410: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.032549785s elapsed) | |
Apr 21 22:54:31.414: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far | |
Apr 21 22:54:31.414: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.036666956s elapsed) | |
Apr 21 22:54:33.418: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far | |
Apr 21 22:54:33.418: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.040455392s elapsed) | |
Apr 21 22:54:35.423: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far | |
Apr 21 22:54:35.423: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (12.045132598s elapsed) | |
Apr 21 22:54:37.427: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far
Apr 21 22:54:37.427: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (14.049033542s elapsed)
Apr 21 22:54:39.431: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far
Apr 21 22:54:39.431: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (16.053286643s elapsed)
Apr 21 22:54:41.438: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far
Apr 21 22:54:41.438: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (18.060458522s elapsed)
Apr 21 22:54:43.442: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far
Apr 21 22:54:43.442: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (20.064300962s elapsed)
Apr 21 22:54:45.523: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far
Apr 21 22:54:45.523: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (22.145430707s elapsed)
Apr 21 22:54:47.530: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far
Apr 21 22:54:47.530: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (24.151991514s elapsed)
Apr 21 22:54:49.534: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far
Apr 21 22:54:49.534: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (26.156443003s elapsed)
Apr 21 22:54:51.541: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far
Apr 21 22:54:51.541: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (28.163447705s elapsed)
Apr 21 22:54:53.546: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far
Apr 21 22:54:53.546: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (30.167864784s elapsed)
Apr 21 22:54:55.550: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far
Apr 21 22:54:55.550: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (32.172362376s elapsed)
Apr 21 22:54:57.554: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far
Apr 21 22:54:57.554: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (34.176656203s elapsed)
Apr 21 22:54:59.558: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far
Apr 21 22:54:59.558: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (36.180255205s elapsed)
Apr 21 22:55:01.564: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far
Apr 21 22:55:01.564: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (38.186338382s elapsed)
Apr 21 22:55:03.568: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far
Apr 21 22:55:03.568: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (40.190712016s elapsed)
Apr 21 22:55:05.573: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far
Apr 21 22:55:05.573: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (42.195305874s elapsed)
Apr 21 22:55:07.577: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far
Apr 21 22:55:07.577: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (44.199687885s elapsed)
Apr 21 22:55:09.582: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far
Apr 21 22:55:09.582: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (46.204418136s elapsed)
Apr 21 22:55:11.586: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far
Apr 21 22:55:11.586: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (48.208308346s elapsed)
Apr 21 22:55:13.605: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far
Apr 21 22:55:13.605: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (50.227365012s elapsed)
Apr 21 22:55:15.609: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far
Apr 21 22:55:15.609: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (52.231567209s elapsed)
Apr 21 22:55:17.614: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far
Apr 21 22:55:17.614: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (54.235903784s elapsed)
Apr 21 22:55:19.617: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far
Apr 21 22:55:19.617: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (56.239664441s elapsed)
Apr 21 22:55:21.621: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far
Apr 21 22:55:21.621: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (58.243363299s elapsed)
Apr 21 22:55:23.625: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far
Apr 21 22:55:23.625: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (1m0.247086495s elapsed)
Apr 21 22:55:25.629: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far
Apr 21 22:55:25.629: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (1m2.250939782s elapsed)
Apr 21 22:55:27.641: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far
Apr 21 22:55:27.641: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (1m4.263267024s elapsed)
Apr 21 22:55:29.645: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far
Apr 21 22:55:29.645: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (1m6.267112154s elapsed)
Apr 21 22:55:31.649: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far
Apr 21 22:55:31.649: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (1m8.271025798s elapsed)
Apr 21 22:55:33.653: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far
Apr 21 22:55:33.653: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (1m10.275173019s elapsed)
Apr 21 22:55:35.656: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far
Apr 21 22:55:35.656: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (1m12.278599641s elapsed)
Apr 21 22:55:37.660: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far
Apr 21 22:55:37.660: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (1m14.282588794s elapsed)
Apr 21 22:55:39.757: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far
Apr 21 22:55:39.757: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (1m16.379478751s elapsed)
Apr 21 22:55:41.833: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far
Apr 21 22:55:41.833: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (1m18.455405739s elapsed)
Apr 21 22:55:43.841: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far
Apr 21 22:55:43.841: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (1m20.462880677s elapsed)
Apr 21 22:55:45.865: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far
Apr 21 22:55:45.865: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (1m22.487512581s elapsed)
Apr 21 22:55:47.881: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far
Apr 21 22:55:47.881: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (1m24.503006126s elapsed)
Apr 21 22:55:49.911: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far
Apr 21 22:55:49.911: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (1m26.533301991s elapsed)
Apr 21 22:55:51.923: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far
Apr 21 22:55:51.923: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (1m28.544923518s elapsed)
Apr 21 22:55:53.927: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far
Apr 21 22:55:53.927: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (1m30.549353328s elapsed)
Apr 21 22:55:55.939: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far
Apr 21 22:55:55.939: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (1m32.561057536s elapsed)
Apr 21 22:55:57.943: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far
Apr 21 22:55:57.943: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (1m34.565001854s elapsed)
Apr 21 22:55:59.964: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far
Apr 21 22:55:59.964: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (1m36.58645626s elapsed)
Apr 21 22:56:01.969: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far
Apr 21 22:56:01.969: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (1m38.5908942s elapsed)
Apr 21 22:56:03.994: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far
Apr 21 22:56:03.994: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (1m40.616204592s elapsed)
Apr 21 22:56:06.018: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far
Apr 21 22:56:06.018: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (1m42.640333292s elapsed)
Apr 21 22:56:08.024: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far
Apr 21 22:56:08.024: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (1m44.646196095s elapsed)
Apr 21 22:56:10.030: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far
Apr 21 22:56:10.030: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (1m46.65229906s elapsed)
Apr 21 22:56:12.034: INFO: Nil State.Terminated for container 'volume-tester' in pod 'pvc-volume-tester-ttf50' in namespace 'e2e-tests-volume-provisioning-crsde' so far
Apr 21 22:56:12.034: INFO: Waiting for pod pvc-volume-tester-ttf50 in namespace 'e2e-tests-volume-provisioning-crsde' status to be 'success or failure'(found phase: "Pending", readiness: false) (1m48.6561739s elapsed)
STEP: Saw pod success
STEP: Sleeping to let kubelet destroy all pods
STEP: deleting the claim
Apr 21 22:59:14.291: INFO: Waiting up to 20m0s for PersistentVolume pv-gce-1j9az to get deleted
Apr 21 22:59:14.293: INFO: PersistentVolume pv-gce-1j9az found and phase=Bound (2.224186ms)
Apr 21 22:59:19.297: INFO: PersistentVolume pv-gce-1j9az was removed
[AfterEach] [k8s.io] Dynamic provisioning
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:59:19.298: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-volume-provisioning-crsde" for this suite.
• [SLOW TEST:335.399 seconds]
[k8s.io] Dynamic provisioning
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
[k8s.io] DynamicProvisioner
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
should create and delete persistent volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volume_provisioning.go:123
------------------------------
[BeforeEach] [k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:54:24.601: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:142
[BeforeEach] [k8s.io] Kubectl logs
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:837
STEP: creating an rc
Apr 21 22:54:24.645: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config create -f /jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/examples/guestbook-go/redis-master-controller.json --namespace=e2e-tests-kubectl-7cj4h'
Apr 21 22:54:24.760: INFO: stderr: ""
Apr 21 22:54:24.760: INFO: stdout: "replicationcontroller \"redis-master\" created"
[It] should be able to retrieve and filter logs [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:883
Apr 21 22:54:24.798: INFO: Selector matched 1 pods for map[app:redis]
Apr 21 22:54:24.798: INFO: ForEach: Found 0 pods from the filter. Now looping through them.
[AfterEach] [k8s.io] Kubectl logs
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:840
STEP: using delete to clean up resources
Apr 21 22:54:24.798: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config delete --grace-period=0 -f /jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/examples/guestbook-go/redis-master-controller.json --namespace=e2e-tests-kubectl-7cj4h'
Apr 21 22:54:26.905: INFO: stderr: ""
Apr 21 22:54:26.906: INFO: stdout: "replicationcontroller \"redis-master\" deleted"
Apr 21 22:54:26.906: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-7cj4h'
Apr 21 22:54:26.979: INFO: stderr: ""
Apr 21 22:54:26.979: INFO: stdout: ""
Apr 21 22:54:26.979: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-7cj4h -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 21 22:54:27.058: INFO: stderr: ""
Apr 21 22:54:27.058: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:54:27.058: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-7cj4h" for this suite.
Apr 21 22:59:27.079: INFO: Pod e2e-tests-kubectl-7cj4h redis-master-tx7iw on node e2e-gce-master-1-minion-8eot remains, has deletion timestamp 2016-04-21T22:54:55-07:00
Apr 21 22:59:27.080: INFO: Couldn't delete ns "e2e-tests-kubectl-7cj4h": namespace e2e-tests-kubectl-7cj4h was not deleted within limit: timed out waiting for the condition, pods remaining: [redis-master-tx7iw]
• Failure in Spec Teardown (AfterEach) [302.479 seconds]
[k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
[k8s.io] Kubectl logs [AfterEach]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
should be able to retrieve and filter logs [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:883
Apr 21 22:59:27.080: Couldn't delete ns "e2e-tests-kubectl-7cj4h": namespace e2e-tests-kubectl-7cj4h was not deleted within limit: timed out waiting for the condition, pods remaining: [redis-master-tx7iw]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:184
------------------------------
[BeforeEach] [k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:59:27.082: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:142
[BeforeEach] [k8s.io] Simple pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:212
STEP: creating the pod from /jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/test/e2e/testing-manifests/kubectl/pod-with-readiness-probe.yaml
Apr 21 22:59:27.116: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config create -f /jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/test/e2e/testing-manifests/kubectl/pod-with-readiness-probe.yaml --namespace=e2e-tests-kubectl-s5q6y'
Apr 21 22:59:27.225: INFO: stderr: ""
Apr 21 22:59:27.225: INFO: stdout: "pod \"nginx\" created"
Apr 21 22:59:27.226: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [nginx]
Apr 21 22:59:27.226: INFO: Waiting up to 5m0s for pod nginx status to be running and ready
Apr 21 22:59:27.244: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-s5q6y' status to be 'running and ready'(found phase: "Pending", readiness: false) (18.367143ms elapsed)
Apr 21 22:59:29.249: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-s5q6y' status to be 'running and ready'(found phase: "Running", readiness: false) (2.022964942s elapsed)
Apr 21 22:59:31.253: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-s5q6y' status to be 'running and ready'(found phase: "Running", readiness: false) (4.027032695s elapsed)
Apr 21 22:59:33.257: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [nginx]
[It] should support exec
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:247
STEP: executing a command in the container
Apr 21 22:59:33.257: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config exec --namespace=e2e-tests-kubectl-s5q6y nginx echo running in container'
Apr 21 22:59:33.512: INFO: stderr: ""
Apr 21 22:59:33.512: INFO: stdout: "running in container"
STEP: executing a command in the container with noninteractive stdin
Apr 21 22:59:33.512: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config exec --namespace=e2e-tests-kubectl-s5q6y -i nginx cat'
Apr 21 22:59:33.764: INFO: stderr: ""
Apr 21 22:59:33.764: INFO: stdout: "abcd1234"
STEP: executing a command in the container with pseudo-interactive stdin
Apr 21 22:59:33.765: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config exec --namespace=e2e-tests-kubectl-s5q6y -i nginx bash'
Apr 21 22:59:34.008: INFO: stderr: ""
Apr 21 22:59:34.008: INFO: stdout: "hi"
[AfterEach] [k8s.io] Simple pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:215
STEP: using delete to clean up resources
Apr 21 22:59:34.008: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config delete --grace-period=0 -f /jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/test/e2e/testing-manifests/kubectl/pod-with-readiness-probe.yaml --namespace=e2e-tests-kubectl-s5q6y'
Apr 21 22:59:34.089: INFO: stderr: ""
Apr 21 22:59:34.089: INFO: stdout: "pod \"nginx\" deleted"
Apr 21 22:59:34.089: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-s5q6y'
Apr 21 22:59:34.158: INFO: stderr: ""
Apr 21 22:59:34.158: INFO: stdout: ""
Apr 21 22:59:34.158: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-s5q6y -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 21 22:59:34.230: INFO: stderr: ""
Apr 21 22:59:34.230: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:59:34.230: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-7cj4h" for this suite.
Apr 21 22:59:34.244: INFO: Couldn't delete ns "e2e-tests-kubectl-7cj4h": Operation cannot be fulfilled on namespaces "e2e-tests-kubectl-7cj4h": The system is ensuring all content is removed from this namespace. Upon completion, this namespace will automatically be purged by the system.
• Failure in Spec Teardown (AfterEach) [7.163 seconds]
[k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
[k8s.io] Simple pod [AfterEach]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
should support exec
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:247
Apr 21 22:59:34.244: Couldn't delete ns "e2e-tests-kubectl-7cj4h": Operation cannot be fulfilled on namespaces "e2e-tests-kubectl-7cj4h": The system is ensuring all content is removed from this namespace. Upon completion, this namespace will automatically be purged by the system.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:184
------------------------------
[BeforeEach] [k8s.io] ConfigMap | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106 | |
STEP: Creating a kubernetes client | |
Apr 21 22:59:34.246: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config | |
STEP: Building a namespace api object | |
STEP: Waiting for a default service account to be provisioned in namespace | |
[It] updates should be reflected in volume [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/configmap.go:153 | |
STEP: Creating configMap with name configmap-test-upd-634deb69-084f-11e6-9598-42010af00007 | |
STEP: Creating the pod | |
W0421 22:59:34.307634 17573 request.go:344] Field selector: v1 - pods - metadata.name - pod-configmaps-6350684c-084f-11e6-9598-42010af00007: need to check if this is versioned correctly. | |
STEP: Updating configmap configmap-test-upd-634deb69-084f-11e6-9598-42010af00007 | |
STEP: waiting to observe update in volume | |
STEP: Deleting the pod | |
STEP: Cleaning up the configMap | |
[AfterEach] [k8s.io] ConfigMap | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107 | |
Apr 21 22:59:37.238: INFO: Waiting up to 1m0s for all nodes to be ready | |
STEP: Destroying namespace "e2e-tests-configmap-hdceb" for this suite. | |
• [SLOW TEST:8.013 seconds] | |
[k8s.io] ConfigMap | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426 | |
updates should be reflected in volume [Conformance] | |
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/configmap.go:153 | |
------------------------------ | |
SS
------------------------------
[BeforeEach] [k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:54:51.341: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:142
[It] should add annotations for pods in rc [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:912
STEP: creating Redis RC
Apr 21 22:54:51.411: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config create -f /jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/examples/guestbook-go/redis-master-controller.json --namespace=e2e-tests-kubectl-ti6gx'
Apr 21 22:54:51.529: INFO: stderr: ""
Apr 21 22:54:51.529: INFO: stdout: "replicationcontroller \"redis-master\" created"
STEP: patching all pods
Apr 21 22:54:51.562: INFO: Selector matched 1 pods for map[app:redis]
Apr 21 22:54:51.562: INFO: ForEach: Found 0 pods from the filter. Now looping through them.
STEP: checking annotations
Apr 21 22:54:51.565: INFO: Selector matched 1 pods for map[app:redis]
Apr 21 22:54:51.565: INFO: ForEach: Found 0 pods from the filter. Now looping through them.
[AfterEach] [k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:54:51.565: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ti6gx" for this suite.
Apr 21 22:59:51.590: INFO: Pod e2e-tests-kubectl-ti6gx redis-master-i66u8 on node e2e-gce-master-1-minion-8eot remains, has deletion timestamp 2016-04-21T22:55:23-07:00
Apr 21 22:59:51.590: INFO: Couldn't delete ns "e2e-tests-kubectl-ti6gx": namespace e2e-tests-kubectl-ti6gx was not deleted within limit: timed out waiting for the condition, pods remaining: [redis-master-i66u8]
• Failure in Spec Teardown (AfterEach) [300.249 seconds]
[k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
  [k8s.io] Kubectl patch [AfterEach]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
    should add annotations for pods in rc [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:912
Apr 21 22:59:51.590: Couldn't delete ns "e2e-tests-kubectl-ti6gx": namespace e2e-tests-kubectl-ti6gx was not deleted within limit: timed out waiting for the condition, pods remaining: [redis-master-i66u8]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:184
------------------------------
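The teardown failure above is a bounded wait: the framework polls for up to ~5 minutes for the namespace to empty, then fails and reports which pods remain (`pods remaining: [redis-master-i66u8]`). A Python sketch of that shape, assuming a hypothetical `list_pods` callable standing in for an API query (not the framework's real Go API):

```python
import time

def wait_for_namespace_empty(list_pods, timeout=300.0, interval=5.0):
    """Poll until `list_pods()` returns no pods, or time out.

    Returns (True, []) on success, or (False, remaining_pods) on
    timeout -- the same information the failure message above carries,
    so the report can name the stuck pods instead of just "timed out".
    """
    deadline = time.monotonic() + timeout
    remaining = list_pods()
    while remaining and time.monotonic() < deadline:
        time.sleep(interval)
        remaining = list_pods()
    return (not remaining, remaining)
```

Reporting the leftover pods alongside the timeout is what makes this failure diagnosable from the log alone.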
[BeforeEach] [k8s.io] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:59:42.263: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:555
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [k8s.io] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:59:58.384: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-resourcequota-limkq" for this suite.
• [SLOW TEST:21.140 seconds]
[k8s.io] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
  should verify ResourceQuota with best effort scope.
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:555
------------------------------
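The scope logic being exercised above hinges on pod QoS classification: a BestEffort-scoped quota counts only pods with no CPU/memory requests or limits, while a NotBestEffort scope counts everything else. A simplified Python sketch of the classification rule (the real logic lives in Go in the kubelet/quota code; container specs here are plain dicts for illustration):

```python
def qos_class(containers):
    """Classify a pod's QoS from its containers' resource specs.

    Simplified rule: no requests/limits anywhere -> BestEffort;
    limits equal to requests for cpu and memory in every
    container -> Guaranteed; anything in between -> Burstable.
    The BestEffort quota scope above keys on the first case.
    """
    if not any(c.get("requests") or c.get("limits") for c in containers):
        return "BestEffort"
    guaranteed = all(
        c.get("limits", {}).get(r) is not None
        and c.get("limits", {}).get(r) == c.get("requests", {}).get(r)
        for c in containers
        for r in ("cpu", "memory")
    )
    return "Guaranteed" if guaranteed else "Burstable"
```

This is why the test creates one pod of each kind and checks that each quota captures only the pod matching its scope.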
S
------------------------------
[BeforeEach] [k8s.io] ServiceAccounts
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 23:00:03.405: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure a single API token exists
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_accounts.go:153
STEP: waiting for a single token reference
Apr 21 23:00:03.964: INFO: default service account has a single secret reference
STEP: ensuring the single token reference persists
STEP: deleting the service account token
STEP: waiting for a new token reference
Apr 21 23:00:06.475: INFO: default service account has a new single secret reference
STEP: ensuring the single token reference persists
STEP: deleting the reference to the service account token
STEP: waiting for a new token to be created and added
Apr 21 23:00:08.988: INFO: default service account has a new single secret reference
STEP: ensuring the single token reference persists
[AfterEach] [k8s.io] ServiceAccounts
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 23:00:10.991: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-tb056" for this suite.
• [SLOW TEST:12.610 seconds]
[k8s.io] ServiceAccounts
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
  should ensure a single API token exists
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_accounts.go:153
------------------------------
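The behavior verified above is reconciliation: whether the token secret is deleted or the reference to it is removed, a controller converges the service account back to exactly one live token reference. A deliberately simplified single-pass sketch in Python (the real token controller is Go; `reconcile`, `create_token`, and the data shapes here are all hypothetical):

```python
def reconcile(refs, live_tokens, create_token):
    """One reconcile pass: ensure exactly one valid token reference.

    - drop references to tokens that no longer exist (covers
      "deleting the service account token" above),
    - re-attach an existing live token if the reference was removed
      (covers "deleting the reference"),
    - mint a new token only when none exists.
    """
    valid = [r for r in refs if r in live_tokens]
    if valid:
        return valid[:1]          # keep a single reference
    if live_tokens:
        return [sorted(live_tokens)[0]]  # re-add a dangling live token
    return [create_token()]       # nothing usable: create one
```

Running this to a fixed point after each mutation is what makes the "single secret reference" assertions in the test converge.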
[BeforeEach] [k8s.io] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:59:51.592: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:74
Apr 21 22:59:51.680: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
[It] should release NodePorts on delete
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:902
STEP: creating service nodeport-reuse with type NodePort in namespace e2e-tests-services-x9di7
STEP: deleting original service nodeport-reuse
W0421 22:59:51.740292   17560 request.go:344] Field selector: v1 - pods - metadata.name - hostexec: need to check if this is versioned correctly.
Apr 21 22:59:53.427: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config exec --namespace=e2e-tests-services-x9di7 hostexec -- /bin/sh -c ! ss -ant46 'sport = :30526' | tail -n +2 | grep LISTEN'
Apr 21 22:59:53.632: INFO: stderr: ""
STEP: creating service nodeport-reuse with same NodePort 30526
STEP: deleting service nodeport-reuse in namespace e2e-tests-services-x9di7
[AfterEach] [k8s.io] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 22:59:53.668: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-services-x9di7" for this suite.
• [SLOW TEST:42.095 seconds]
[k8s.io] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
  should release NodePorts on delete
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:902
------------------------------
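The shell pipeline in the log (`ss -ant46 'sport = :30526' | tail -n +2 | grep LISTEN`, negated with `!`) asserts that nothing is still listening on the released NodePort. The same check, done by parsing `ss -ant`-style output directly, can be sketched in Python (a simplified parser, assuming the usual `State Recv-Q Send-Q Local:Port Peer:Port` column layout):

```python
def port_is_listening(ss_output, port):
    """Return True if an `ss -ant`-style dump shows a LISTEN socket
    bound to `port`.

    Skips the header line (the `tail -n +2` in the log's pipeline)
    and matches only LISTEN state (the `grep LISTEN`), comparing the
    port suffix of the local-address column.
    """
    for line in ss_output.splitlines()[1:]:
        fields = line.split()
        if len(fields) >= 4 and fields[0] == "LISTEN" \
                and fields[3].endswith(f":{port}"):
            return True
    return False
```

The e2e test expects this to be False after the first service is deleted, which is what makes re-creating a service on the same NodePort (30526) succeed.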
[BeforeEach] [k8s.io] Horizontal pod autoscaling (scale resource: CPU)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:57:03.096: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] Should scale from 2 pods to 1 pod using HPA version v1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:102
STEP: Running consuming RC rc-light via replicationController with 2 replicas
STEP: creating replication controller rc-light in namespace e2e-tests-horizontal-pod-autoscaling-9wnbo
Apr 21 22:57:03.168: INFO: Created replication controller with name: rc-light, namespace: e2e-tests-horizontal-pod-autoscaling-9wnbo, replica count: 2
Apr 21 22:57:13.169: INFO: rc-light Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Apr 21 22:57:23.169: INFO: RC rc-light: consume 50 millicores in total
Apr 21 22:57:23.169: INFO: RC rc-light: consume 0 MB in total
Apr 21 22:57:23.169: INFO: RC rc-light: consume 0 MB in total
Apr 21 22:57:23.169: INFO: RC rc-light: consume custom metric 0 in total
Apr 21 22:57:23.169: INFO: RC rc-light: consume custom metric 0 in total
Apr 21 22:57:23.169: INFO: RC rc-light: consume 50 millicores in total
Apr 21 22:57:23.170: INFO: RC rc-light: sending 0 requests to consume 100 MB each and 1 request to consume 0 MB
Apr 21 22:57:23.170: INFO: RC rc-light: sending 2 requests to consume 20 millicores each and 1 request to consume 10 millicores
Apr 21 22:57:23.170: INFO: RC rc-light: sending 0 requests to consume 10 custom metric each and 1 request to consume 0
Apr 21 22:57:23.262: INFO: replicationController: current replicas number 2 waiting to be 1
Apr 21 22:57:43.269: INFO: replicationController: current replicas number 2 waiting to be 1
Apr 21 22:57:53.170: INFO: RC rc-light: sending 0 requests to consume 100 MB each and 1 request to consume 0 MB
Apr 21 22:57:53.170: INFO: RC rc-light: sending 2 requests to consume 20 millicores each and 1 request to consume 10 millicores
Apr 21 22:57:53.171: INFO: RC rc-light: sending 0 requests to consume 10 custom metric each and 1 request to consume 0
Apr 21 22:58:03.276: INFO: replicationController: current replicas number 2 waiting to be 1
Apr 21 22:58:23.170: INFO: RC rc-light: sending 0 requests to consume 100 MB each and 1 request to consume 0 MB
Apr 21 22:58:23.173: INFO: RC rc-light: sending 2 requests to consume 20 millicores each and 1 request to consume 10 millicores
Apr 21 22:58:23.173: INFO: RC rc-light: sending 0 requests to consume 10 custom metric each and 1 request to consume 0
Apr 21 22:58:23.282: INFO: replicationController: current replicas number 2 waiting to be 1
Apr 21 22:58:43.289: INFO: replicationController: current replicas number 2 waiting to be 1
Apr 21 22:58:53.170: INFO: RC rc-light: sending 0 requests to consume 100 MB each and 1 request to consume 0 MB
Apr 21 22:58:53.173: INFO: RC rc-light: sending 2 requests to consume 20 millicores each and 1 request to consume 10 millicores
Apr 21 22:58:53.173: INFO: RC rc-light: sending 0 requests to consume 10 custom metric each and 1 request to consume 0
Apr 21 22:59:03.296: INFO: replicationController: current replicas number 2 waiting to be 1
Apr 21 22:59:23.171: INFO: RC rc-light: sending 0 requests to consume 100 MB each and 1 request to consume 0 MB
Apr 21 22:59:23.173: INFO: RC rc-light: sending 0 requests to consume 10 custom metric each and 1 request to consume 0
Apr 21 22:59:23.173: INFO: RC rc-light: sending 2 requests to consume 20 millicores each and 1 request to consume 10 millicores
Apr 21 22:59:23.302: INFO: replicationController: current replicas number 2 waiting to be 1
Apr 21 22:59:43.308: INFO: replicationController: current replicas number 2 waiting to be 1
Apr 21 22:59:53.171: INFO: RC rc-light: sending 0 requests to consume 100 MB each and 1 request to consume 0 MB
Apr 21 22:59:53.173: INFO: RC rc-light: sending 2 requests to consume 20 millicores each and 1 request to consume 10 millicores
Apr 21 22:59:53.173: INFO: RC rc-light: sending 0 requests to consume 10 custom metric each and 1 request to consume 0
Apr 21 23:00:03.321: INFO: replicationController: current replicas number 2 waiting to be 1
Apr 21 23:00:23.171: INFO: RC rc-light: sending 0 requests to consume 100 MB each and 1 request to consume 0 MB
Apr 21 23:00:23.174: INFO: RC rc-light: sending 0 requests to consume 10 custom metric each and 1 request to consume 0
Apr 21 23:00:23.174: INFO: RC rc-light: sending 2 requests to consume 20 millicores each and 1 request to consume 10 millicores
Apr 21 23:00:23.324: INFO: replicationController: current replicas number is equal to desired replicas number: 1
STEP: Removing consuming RC rc-light
STEP: deleting replication controller rc-light in namespace e2e-tests-horizontal-pod-autoscaling-9wnbo
Apr 21 23:00:35.353: INFO: Deleting RC rc-light took: 2.023790922s
Apr 21 23:00:35.353: INFO: Terminating RC rc-light pods took: 115.166µs
[AfterEach] [k8s.io] Horizontal pod autoscaling (scale resource: CPU)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 23:00:35.371: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-horizontal-pod-autoscaling-9wnbo" for this suite.
• [SLOW TEST:217.295 seconds]
[k8s.io] Horizontal pod autoscaling (scale resource: CPU)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
  [k8s.io] ReplicationController light
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
    Should scale from 2 pods to 1 pod using HPA version v1
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:102
------------------------------
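The scale-down above (rc-light held at ~50 millicores total until the HPA settles from 2 replicas to 1) follows the autoscaler's core rule: desired replicas = ceil(currentReplicas × currentMetricValue / targetMetricValue), with a tolerance band that suppresses small corrections. A minimal Python sketch of that calculation (the real controller is Go; the 10% tolerance default and the sample utilization numbers are illustrative, the test's actual target comes from the HPA spec):

```python
import math

def desired_replicas(current_replicas, current_value, target_value,
                     tolerance=0.1):
    """HPA core formula: ceil(current * currentValue / targetValue).

    If the observed/target ratio is within `tolerance` of 1.0, the
    replica count is left unchanged -- this is why the log shows many
    "waiting to be 1" iterations before the scale-down lands.
    """
    ratio = current_value / target_value
    if abs(ratio - 1.0) <= tolerance:
        return current_replicas
    return math.ceil(current_replicas * ratio)
```

With 2 replicas each using half the target (e.g. 25% observed vs a 50% target), the ratio is 0.5 and the desired count becomes ceil(2 × 0.5) = 1, matching the final "current replicas number is equal to desired replicas number: 1" line.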
S
------------------------------
[BeforeEach] version v1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:58:27.283: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:248
STEP: creating replication controller proxy-service-yxge0 in namespace e2e-tests-proxy-g4coq
Apr 21 22:58:27.389: INFO: Created replication controller with name: proxy-service-yxge0, namespace: e2e-tests-proxy-g4coq, replica count: 1
Apr 21 22:58:28.390: INFO: proxy-service-yxge0 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Apr 21 22:58:29.390: INFO: proxy-service-yxge0 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Apr 21 22:58:30.390: INFO: proxy-service-yxge0 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
Apr 21 22:58:31.391: INFO: proxy-service-yxge0 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
Apr 21 22:58:32.391: INFO: proxy-service-yxge0 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
Apr 21 22:58:33.392: INFO: proxy-service-yxge0 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
Apr 21 22:58:34.392: INFO: proxy-service-yxge0 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
Apr 21 22:58:35.392: INFO: proxy-service-yxge0 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
Apr 21 22:58:36.393: INFO: proxy-service-yxge0 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
Apr 21 22:58:37.393: INFO: proxy-service-yxge0 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
Apr 21 22:58:38.393: INFO: proxy-service-yxge0 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Apr 21 22:58:38.414: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname1/: foo (200; 3.617217ms)
Apr 21 22:58:38.616: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:81/: bar (200; 5.219789ms)
Apr 21 22:58:38.815: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname2/proxy/: bar (200; 3.71654ms)
Apr 21 22:58:39.021: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname2/proxy/: tls qux (200; 9.079281ms)
Apr 21 22:58:39.217: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/proxy/re... (200; 5.125256ms)
Apr 21 22:58:39.416: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname1/proxy/: foo (200; 3.83467ms)
Apr 21 22:58:39.616: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname2/proxy/: bar (200; 3.72599ms)
Apr 21 22:58:39.821: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname1/proxy/: tls baz (200; 8.748329ms)
Apr 21 22:58:40.017: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/rewrite... (200; 4.262356ms)
Apr 21 22:58:40.218: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/re... (200; 5.012507ms)
Apr 21 22:58:40.418: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:162/: bar (200; 3.892212ms)
Apr 21 22:58:40.619: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5/proxy/rewriteme"... (200; 4.468214ms)
Apr 21 22:58:40.819: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:160/proxy/: foo (200; 4.388674ms)
Apr 21 22:58:41.018: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname2/: tls qux (200; 3.379937ms)
Apr 21 22:58:41.220: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:160/: foo (200; 4.436988ms)
Apr 21 22:58:41.420: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:162/: bar (200; 3.945561ms)
Apr 21 22:58:41.620: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/proxy/rewrite... (200; 4.523161ms)
Apr 21 22:58:41.821: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:460/proxy/: tls baz (200; 4.141289ms)
Apr 21 22:58:42.020: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:162/proxy/: bar (200; 3.840642ms)
Apr 21 22:58:42.221: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:162/proxy/: bar (200; 4.064769ms)
Apr 21 22:58:42.421: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:462/proxy/: tls qux (200; 3.948761ms)
Apr 21 22:58:42.624: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:80/: foo (200; 6.360199ms)
Apr 21 22:58:42.821: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname1/: foo (200; 3.747171ms)
Apr 21 22:58:43.022: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:443/: tls baz (200; 4.642349ms)
Apr 21 22:58:43.222: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname1/proxy/: foo (200; 3.744474ms)
Apr 21 22:58:43.422: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:160/proxy/: foo (200; 4.165853ms)
Apr 21 22:58:43.623: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname2/: bar (200; 4.223717ms)
Apr 21 22:58:43.823: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:81/: bar (200; 4.039204ms)
Apr 21 22:58:44.023: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:80/: foo (200; 4.314071ms)
Apr 21 22:58:44.222: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname2/: bar (200; 3.202761ms)
Apr 21 22:58:44.423: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname1/: tls baz (200; 3.578808ms)
Apr 21 22:58:44.623: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:160/: foo (200; 3.331988ms)
Apr 21 22:58:44.823: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:444/: tls qux (200; 3.320643ms)
Apr 21 22:58:45.028: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:443/proxy/... (200; 7.805192ms)
Apr 21 22:58:45.223: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:443/: tls baz (200; 3.073929ms)
Apr 21 22:58:45.424: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname1/proxy/: foo (200; 3.058049ms)
Apr 21 22:58:45.624: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:160/proxy/: foo (200; 3.347691ms)
Apr 21 22:58:45.824: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:80/: foo (200; 3.305284ms)
Apr 21 22:58:46.024: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname1/: foo (200; 2.821996ms)
Apr 21 22:58:46.225: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:80/: foo (200; 3.548919ms)
Apr 21 22:58:46.425: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname2/: bar (200; 2.872581ms)
Apr 21 22:58:46.626: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname1/: tls baz (200; 3.852099ms)
Apr 21 22:58:46.825: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:160/: foo (200; 3.129373ms)
Apr 21 22:58:47.025: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname2/: bar (200; 2.880329ms)
Apr 21 22:58:47.226: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:81/: bar (200; 3.274948ms)
Apr 21 22:58:47.426: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:444/: tls qux (200; 2.993435ms)
Apr 21 22:58:47.627: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:443/proxy/... (200; 3.347063ms)
Apr 21 22:58:47.827: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname2/proxy/: bar (200; 3.071616ms)
Apr 21 22:58:48.027: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname2/proxy/: tls qux (200; 2.756169ms)
Apr 21 22:58:48.227: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/proxy/re... (200; 3.392328ms)
Apr 21 22:58:48.428: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname1/: foo (200; 3.300089ms)
Apr 21 22:58:48.628: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:81/: bar (200; 3.234681ms)
Apr 21 22:58:48.828: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname1/proxy/: tls baz (200; 3.011598ms)
Apr 21 22:58:49.028: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/rewrite... (200; 3.413909ms)
Apr 21 22:58:49.228: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/re... (200; 3.132439ms)
Apr 21 22:58:49.429: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:162/: bar (200; 3.347984ms)
Apr 21 22:58:49.629: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5/proxy/rewriteme"... (200; 3.536189ms)
Apr 21 22:58:49.829: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:160/proxy/: foo (200; 3.324371ms)
Apr 21 22:58:50.029: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname1/proxy/: foo (200; 2.908884ms)
Apr 21 22:58:50.229: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname2/proxy/: bar (200; 2.768844ms)
Apr 21 22:58:50.430: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:162/: bar (200; 3.366279ms)
Apr 21 22:58:50.630: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/proxy/rewrite... (200; 3.333646ms)
Apr 21 22:58:50.830: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:460/proxy/: tls baz (200; 2.89155ms)
Apr 21 22:58:51.031: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname2/: tls qux (200; 3.195647ms)
Apr 21 22:58:51.231: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:160/: foo (200; 3.233935ms)
Apr 21 22:58:51.431: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:462/proxy/: tls qux (200; 3.147005ms)
Apr 21 22:58:51.631: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:162/proxy/: bar (200; 3.195573ms)
Apr 21 22:58:51.832: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:162/proxy/: bar (200; 3.904623ms)
Apr 21 22:58:52.032: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:162/proxy/: bar (200; 3.573036ms)
Apr 21 22:58:52.232: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:162/proxy/: bar (200; 3.159557ms)
Apr 21 22:58:52.433: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:462/proxy/: tls qux (200; 3.935875ms)
Apr 21 22:58:52.633: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:160/proxy/: foo (200; 3.759832ms)
Apr 21 22:58:52.833: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:80/: foo (200; 3.527837ms)
Apr 21 22:58:53.033: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname1/: foo (200; 3.449428ms)
Apr 21 22:58:53.234: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:443/: tls baz (200; 3.780263ms)
Apr 21 22:58:53.434: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname1/proxy/: foo (200; 3.251449ms)
Apr 21 22:58:53.634: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname1/: tls baz (200; 3.182193ms)
Apr 21 22:58:53.835: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:160/: foo (200; 3.551213ms)
Apr 21 22:58:54.035: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname2/: bar (200; 3.241816ms)
Apr 21 22:58:54.235: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:81/: bar (200; 3.328336ms)
Apr 21 22:58:54.436: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:80/: foo (200; 3.861407ms)
Apr 21 22:58:54.635: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname2/: bar (200; 2.911501ms)
Apr 21 22:58:54.836: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:444/: tls qux (200; 3.614958ms)
Apr 21 22:58:55.036: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:443/proxy/... (200; 3.302575ms)
Apr 21 22:58:55.236: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/proxy/re... (200; 3.401941ms)
Apr 21 22:58:55.436: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname1/: foo (200; 3.165943ms)
Apr 21 22:58:55.637: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:81/: bar (200; 3.691818ms)
Apr 21 22:58:55.837: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname2/proxy/: bar (200; 3.292289ms)
Apr 21 22:58:56.037: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname2/proxy/: tls qux (200; 2.963676ms)
Apr 21 22:58:56.238: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/re... (200; 3.675741ms)
Apr 21 22:58:56.437: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:162/: bar (200; 2.800038ms)
Apr 21 22:58:56.638: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5/proxy/rewriteme"... (200; 3.142565ms)
Apr 21 22:58:56.838: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:160/proxy/: foo (200; 3.317117ms)
Apr 21 22:58:57.038: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname1/proxy/: foo (200; 3.135095ms)
Apr 21 22:58:57.239: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname2/proxy/: bar (200; 3.60609ms)
Apr 21 22:58:57.439: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname1/proxy/: tls baz (200; 3.307637ms)
Apr 21 22:58:57.639: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/rewrite... (200; 3.74293ms) | |
Apr 21 22:58:57.839: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:460/proxy/: tls baz (200; 2.883685ms) | |
Apr 21 22:58:58.040: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname2/: tls qux (200; 3.493146ms) | |
Apr 21 22:58:58.240: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:160/: foo (200; 3.762309ms) | |
Apr 21 22:58:58.441: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:162/: bar (200; 3.874758ms) | |
Apr 21 22:58:58.641: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/proxy/rewrite... (200; 3.694375ms) | |
Apr 21 22:58:58.841: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:462/proxy/: tls qux (200; 3.603669ms) | |
Apr 21 22:58:59.041: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:162/proxy/: bar (200; 3.687923ms) | |
Apr 21 22:58:59.242: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:162/proxy/: bar (200; 3.826046ms) | |
Apr 21 22:58:59.442: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:443/: tls baz (200; 4.305179ms) | |
Apr 21 22:58:59.642: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname1/proxy/: foo (200; 3.795325ms) | |
Apr 21 22:58:59.843: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:160/proxy/: foo (200; 4.386828ms) | |
Apr 21 22:59:00.043: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:80/: foo (200; 3.896918ms) | |
Apr 21 22:59:00.243: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname1/: foo (200; 3.809847ms) | |
Apr 21 22:59:00.443: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:80/: foo (200; 4.284491ms) | |
Apr 21 22:59:00.642: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname2/: bar (200; 3.153166ms) | |
Apr 21 22:59:00.843: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname1/: tls baz (200; 3.531074ms) | |
Apr 21 22:59:01.044: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:160/: foo (200; 3.725677ms) | |
Apr 21 22:59:01.244: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname2/: bar (200; 4.451952ms) | |
Apr 21 22:59:01.445: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:81/: bar (200; 4.166417ms) | |
Apr 21 22:59:01.645: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:444/: tls qux (200; 4.522314ms) | |
Apr 21 22:59:01.845: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:443/proxy/... (200; 4.590519ms) | |
Apr 21 22:59:02.045: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname2/proxy/: bar (200; 3.483489ms) | |
Apr 21 22:59:02.246: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname2/proxy/: tls qux (200; 4.252272ms) | |
Apr 21 22:59:02.446: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/proxy/re... (200; 3.995006ms) | |
Apr 21 22:59:02.646: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname1/: foo (200; 4.179429ms) | |
Apr 21 22:59:02.847: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:81/: bar (200; 4.340523ms) | |
Apr 21 22:59:03.046: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname1/proxy/: tls baz (200; 4.042046ms) | |
Apr 21 22:59:03.247: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/rewrite... (200; 4.138592ms) | |
Apr 21 22:59:03.447: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/re... (200; 4.271046ms) | |
Apr 21 22:59:03.647: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:162/: bar (200; 4.317111ms) | |
Apr 21 22:59:03.848: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5/proxy/rewriteme"... (200; 4.544271ms) | |
Apr 21 22:59:04.048: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:160/proxy/: foo (200; 4.167729ms) | |
Apr 21 22:59:04.248: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname1/proxy/: foo (200; 3.907517ms) | |
Apr 21 22:59:04.448: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname2/proxy/: bar (200; 4.123468ms) | |
Apr 21 22:59:04.649: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:162/: bar (200; 4.088252ms) | |
Apr 21 22:59:04.849: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/proxy/rewrite... (200; 3.81722ms) | |
Apr 21 22:59:05.049: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:460/proxy/: tls baz (200; 4.279002ms) | |
Apr 21 22:59:05.249: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname2/: tls qux (200; 3.616857ms) | |
Apr 21 22:59:05.449: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:160/: foo (200; 3.655115ms) | |
Apr 21 22:59:05.650: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:444/: tls qux (200; 4.047276ms) | |
Apr 21 22:59:05.850: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:443/proxy/... (200; 3.63962ms) | |
Apr 21 22:59:06.049: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname1/: foo (200; 3.247082ms) | |
Apr 21 22:59:06.251: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:81/: bar (200; 3.987042ms) | |
Apr 21 22:59:06.450: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname2/proxy/: bar (200; 3.344594ms) | |
Apr 21 22:59:06.651: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname2/proxy/: tls qux (200; 4.134113ms) | |
Apr 21 22:59:06.851: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/proxy/re... (200; 4.08163ms) | |
Apr 21 22:59:07.051: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:162/: bar (200; 3.60569ms) | |
Apr 21 22:59:07.251: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5/proxy/rewriteme"... (200; 3.715068ms) | |
Apr 21 22:59:07.452: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:160/proxy/: foo (200; 3.58392ms) | |
Apr 21 22:59:07.651: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname1/proxy/: foo (200; 3.195891ms) | |
Apr 21 22:59:07.852: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname2/proxy/: bar (200; 3.459347ms) | |
Apr 21 22:59:08.051: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname1/proxy/: tls baz (200; 2.799359ms) | |
Apr 21 22:59:08.253: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/rewrite... (200; 4.257849ms) | |
Apr 21 22:59:08.454: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/re... (200; 4.464585ms) | |
Apr 21 22:59:08.653: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname2/: tls qux (200; 3.658001ms) | |
Apr 21 22:59:08.855: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:160/: foo (200; 5.088315ms) | |
Apr 21 22:59:09.054: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:162/: bar (200; 3.867553ms) | |
Apr 21 22:59:09.254: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/proxy/rewrite... (200; 3.914317ms) | |
Apr 21 22:59:09.454: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:460/proxy/: tls baz (200; 3.641542ms) | |
Apr 21 22:59:09.654: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:162/proxy/: bar (200; 3.466057ms) | |
Apr 21 22:59:09.854: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:162/proxy/: bar (200; 3.564918ms) | |
Apr 21 22:59:10.055: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:462/proxy/: tls qux (200; 3.607433ms) | |
Apr 21 22:59:10.255: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:80/: foo (200; 3.476982ms) | |
Apr 21 22:59:10.455: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname1/: foo (200; 3.293202ms) | |
Apr 21 22:59:10.655: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:443/: tls baz (200; 3.718943ms) | |
Apr 21 22:59:10.855: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname1/proxy/: foo (200; 3.292595ms) | |
Apr 21 22:59:11.056: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:160/proxy/: foo (200; 3.482187ms) | |
Apr 21 22:59:11.256: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:160/: foo (200; 3.53459ms) | |
Apr 21 22:59:11.456: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname2/: bar (200; 3.46853ms) | |
Apr 21 22:59:11.657: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:81/: bar (200; 3.961233ms) | |
Apr 21 22:59:11.857: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:80/: foo (200; 3.694552ms) | |
Apr 21 22:59:12.057: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname2/: bar (200; 3.350318ms) | |
Apr 21 22:59:12.257: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname1/: tls baz (200; 3.664271ms) | |
Apr 21 22:59:12.457: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname2/: bar (200; 3.389796ms) | |
Apr 21 22:59:12.658: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:81/: bar (200; 3.640813ms) | |
Apr 21 22:59:12.858: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:80/: foo (200; 3.604674ms) | |
Apr 21 22:59:13.058: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname2/: bar (200; 3.116603ms) | |
Apr 21 22:59:13.258: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname1/: tls baz (200; 2.8537ms) | |
Apr 21 22:59:13.461: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:160/: foo (200; 5.615161ms) | |
Apr 21 22:59:13.659: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:444/: tls qux (200; 3.480926ms) | |
Apr 21 22:59:13.859: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:443/proxy/... (200; 3.751076ms) | |
Apr 21 22:59:14.059: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname1/: foo (200; 3.295519ms) | |
Apr 21 22:59:14.260: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:81/: bar (200; 3.714253ms) | |
Apr 21 22:59:14.460: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname2/proxy/: bar (200; 3.431523ms) | |
Apr 21 22:59:14.660: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname2/proxy/: tls qux (200; 3.033389ms) | |
Apr 21 22:59:14.861: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/proxy/re... (200; 3.741166ms) | |
Apr 21 22:59:15.061: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname1/proxy/: foo (200; 3.263924ms) | |
Apr 21 22:59:15.261: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname2/proxy/: bar (200; 3.02896ms) | |
Apr 21 22:59:15.461: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname1/proxy/: tls baz (200; 3.326496ms) | |
Apr 21 22:59:15.663: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/rewrite... (200; 4.059978ms) | |
Apr 21 22:59:15.863: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/re... (200; 3.916008ms) | |
Apr 21 22:59:16.063: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:162/: bar (200; 3.675696ms) | |
Apr 21 22:59:16.263: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5/proxy/rewriteme"... (200; 3.850377ms) | |
Apr 21 22:59:16.463: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:160/proxy/: foo (200; 3.947859ms) | |
Apr 21 22:59:16.663: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname2/: tls qux (200; 3.502826ms) | |
Apr 21 22:59:16.864: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:160/: foo (200; 3.807681ms) | |
Apr 21 22:59:17.064: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:162/: bar (200; 3.812674ms) | |
Apr 21 22:59:17.265: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/proxy/rewrite... (200; 4.593178ms) | |
Apr 21 22:59:17.465: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:460/proxy/: tls baz (200; 3.809484ms) | |
Apr 21 22:59:17.665: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:162/proxy/: bar (200; 3.766526ms) | |
Apr 21 22:59:17.865: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:162/proxy/: bar (200; 3.732442ms) | |
Apr 21 22:59:18.067: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:462/proxy/: tls qux (200; 5.027099ms) | |
Apr 21 22:59:18.266: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:80/: foo (200; 4.338129ms) | |
Apr 21 22:59:18.466: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname1/: foo (200; 3.89321ms) | |
Apr 21 22:59:18.667: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:443/: tls baz (200; 4.183221ms) | |
Apr 21 22:59:18.867: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname1/proxy/: foo (200; 4.210901ms) | |
Apr 21 22:59:19.067: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:160/proxy/: foo (200; 3.996831ms) | |
Apr 21 22:59:19.267: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:444/: tls qux (200; 4.037669ms) | |
Apr 21 22:59:19.467: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:443/proxy/... (200; 3.790372ms) | |
Apr 21 22:59:19.667: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname2/proxy/: bar (200; 3.314898ms) | |
Apr 21 22:59:19.868: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname2/proxy/: tls qux (200; 3.46645ms) | |
Apr 21 22:59:20.069: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/proxy/re... (200; 4.033598ms) | |
Apr 21 22:59:20.268: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname1/: foo (200; 3.265255ms) | |
Apr 21 22:59:20.469: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:81/: bar (200; 3.719108ms) | |
Apr 21 22:59:20.669: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname1/proxy/: tls baz (200; 3.277619ms) | |
Apr 21 22:59:20.869: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/rewrite... (200; 3.416078ms) | |
Apr 21 22:59:21.069: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/re... (200; 3.542453ms) | |
Apr 21 22:59:21.270: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:162/: bar (200; 3.997582ms) | |
Apr 21 22:59:21.470: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5/proxy/rewriteme"... (200; 3.953099ms) | |
Apr 21 22:59:21.671: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:160/proxy/: foo (200; 4.717481ms) | |
Apr 21 22:59:21.870: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname1/proxy/: foo (200; 3.468411ms) | |
Apr 21 22:59:22.070: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname2/proxy/: bar (200; 3.379399ms) | |
Apr 21 22:59:22.271: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:162/: bar (200; 3.798237ms) | |
Apr 21 22:59:22.471: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/proxy/rewrite... (200; 3.643208ms) | |
Apr 21 22:59:22.672: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:460/proxy/: tls baz (200; 3.908869ms) | |
Apr 21 22:59:22.873: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname2/: tls qux (200; 4.29872ms) | |
Apr 21 22:59:23.072: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:160/: foo (200; 3.334343ms) | |
Apr 21 22:59:23.272: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:462/proxy/: tls qux (200; 3.624943ms) | |
Apr 21 22:59:23.472: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:162/proxy/: bar (200; 3.330216ms) | |
Apr 21 22:59:23.673: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:162/proxy/: bar (200; 3.483538ms) | |
Apr 21 22:59:23.873: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:443/: tls baz (200; 3.529123ms) | |
Apr 21 22:59:24.073: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname1/proxy/: foo (200; 2.684559ms) | |
Apr 21 22:59:24.274: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:160/proxy/: foo (200; 3.223509ms) | |
Apr 21 22:59:24.474: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:80/: foo (200; 3.499346ms) | |
Apr 21 22:59:24.674: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname1/: foo (200; 3.618806ms) | |
Apr 21 22:59:24.874: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:80/: foo (200; 3.186244ms) | |
Apr 21 22:59:25.074: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname2/: bar (200; 2.870724ms) | |
Apr 21 22:59:25.275: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname1/: tls baz (200; 2.971908ms) | |
Apr 21 22:59:25.475: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:160/: foo (200; 3.148123ms) | |
Apr 21 22:59:25.675: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname2/: bar (200; 2.950114ms) | |
Apr 21 22:59:25.876: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:81/: bar (200; 3.204336ms) | |
Apr 21 22:59:26.076: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:162/: bar (200; 3.273763ms) | |
Apr 21 22:59:26.276: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5/proxy/rewriteme"... (200; 3.325443ms) | |
Apr 21 22:59:26.477: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:160/proxy/: foo (200; 3.751884ms) | |
Apr 21 22:59:26.677: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname1/proxy/: foo (200; 3.154955ms) | |
Apr 21 22:59:26.877: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname2/proxy/: bar (200; 2.827597ms) | |
Apr 21 22:59:27.077: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname1/proxy/: tls baz (200; 2.652164ms) | |
Apr 21 22:59:27.277: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/rewrite... (200; 3.118531ms) | |
Apr 21 22:59:27.478: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/re... (200; 3.473463ms) | |
Apr 21 22:59:27.678: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname2/: tls qux (200; 3.009457ms) | |
Apr 21 22:59:27.878: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:160/: foo (200; 3.115736ms) | |
Apr 21 22:59:28.079: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:162/: bar (200; 3.159532ms) | |
Apr 21 22:59:28.279: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/proxy/rewrite... (200; 3.583686ms) | |
Apr 21 22:59:28.480: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:460/proxy/: tls baz (200; 3.65313ms) | |
Apr 21 22:59:28.680: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:162/proxy/: bar (200; 3.244848ms) | |
Apr 21 22:59:28.880: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:162/proxy/: bar (200; 3.60295ms) | |
Apr 21 22:59:29.081: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:462/proxy/: tls qux (200; 3.930859ms) | |
Apr 21 22:59:29.281: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:80/: foo (200; 4.108949ms) | |
Apr 21 22:59:29.480: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname1/: foo (200; 3.054997ms) | |
Apr 21 22:59:29.681: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:443/: tls baz (200; 3.744079ms) | |
Apr 21 22:59:29.881: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname1/proxy/: foo (200; 3.537655ms) | |
Apr 21 22:59:30.081: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:160/proxy/: foo (200; 3.210183ms) | |
Apr 21 22:59:30.282: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:160/: foo (200; 3.506541ms) | |
Apr 21 22:59:30.482: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname2/: bar (200; 3.274888ms) | |
Apr 21 22:59:30.682: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:81/: bar (200; 3.554235ms) | |
Apr 21 22:59:30.883: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:80/: foo (200; 3.509149ms) | |
Apr 21 22:59:31.083: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname2/: bar (200; 3.340442ms) | |
Apr 21 22:59:31.283: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname1/: tls baz (200; 3.702967ms) | |
Apr 21 22:59:31.483: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:444/: tls qux (200; 3.445035ms) | |
Apr 21 22:59:31.684: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:443/proxy/... (200; 4.076033ms) | |
Apr 21 22:59:31.884: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname1/: foo (200; 3.923714ms) | |
Apr 21 22:59:32.085: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:81/: bar (200; 3.959729ms) | |
Apr 21 22:59:32.285: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname2/proxy/: bar (200; 3.81729ms) | |
Apr 21 22:59:32.485: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname2/proxy/: tls qux (200; 3.534022ms) | |
Apr 21 22:59:32.686: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/proxy/re... (200; 4.428774ms) | |
Apr 21 22:59:32.885: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:162/proxy/: bar (200; 3.007162ms) | |
Apr 21 22:59:33.086: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:162/proxy/: bar (200; 3.987602ms) | |
Apr 21 22:59:33.289: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:462/proxy/: tls qux (200; 6.650349ms) | |
Apr 21 22:59:33.486: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:160/proxy/: foo (200; 3.93998ms) | |
Apr 21 22:59:33.687: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:80/: foo (200; 4.136478ms) | |
Apr 21 22:59:33.887: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname1/: foo (200; 3.599634ms) | |
Apr 21 22:59:34.091: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:443/: tls baz (200; 7.469017ms) | |
Apr 21 22:59:34.288: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname1/proxy/: foo (200; 4.020106ms) | |
Apr 21 22:59:34.488: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname1/: tls baz (200; 3.589444ms) | |
Apr 21 22:59:34.688: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:160/: foo (200; 3.52414ms) | |
Apr 21 22:59:34.889: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname2/: bar (200; 4.054203ms) | |
Apr 21 22:59:35.089: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:81/: bar (200; 3.703271ms) | |
Apr 21 22:59:35.290: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:80/: foo (200; 4.16287ms) | |
Apr 21 22:59:35.490: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname2/: bar (200; 3.906828ms) | |
Apr 21 22:59:35.690: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:444/: tls qux (200; 3.588426ms) | |
Apr 21 22:59:35.891: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:443/proxy/... (200; 3.920443ms) | |
Apr 21 22:59:36.091: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/proxy/re... (200; 4.083477ms) | |
Apr 21 22:59:36.291: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname1/: foo (200; 4.009166ms) | |
Apr 21 22:59:36.492: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:81/: bar (200; 4.399155ms) | |
Apr 21 22:59:36.692: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname2/proxy/: bar (200; 3.742029ms) | |
Apr 21 22:59:36.893: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname2/proxy/: tls qux (200; 4.263557ms) | |
Apr 21 22:59:37.092: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/re... (200; 3.416283ms) | |
Apr 21 22:59:37.293: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:162/: bar (200; 4.402693ms) | |
Apr 21 22:59:37.494: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5/proxy/rewriteme"... (200; 4.390683ms) | |
Apr 21 22:59:37.694: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:160/proxy/: foo (200; 4.093588ms) | |
Apr 21 22:59:37.894: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname1/proxy/: foo (200; 3.800782ms) | |
Apr 21 22:59:38.093: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname2/proxy/: bar (200; 2.795715ms) | |
Apr 21 22:59:38.295: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname1/proxy/: tls baz (200; 3.990266ms) | |
Apr 21 22:59:38.495: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/rewrite... (200; 3.465303ms) | |
Apr 21 22:59:38.696: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:460/proxy/: tls baz (200; 4.190316ms) | |
Apr 21 22:59:38.896: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname2/: tls qux (200; 3.983789ms) | |
Apr 21 22:59:39.096: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:160/: foo (200; 4.132369ms) | |
Apr 21 22:59:39.297: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:162/: bar (200; 4.290813ms) | |
Apr 21 22:59:39.497: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/proxy/rewrite... (200; 4.015911ms) | |
Apr 21 22:59:39.697: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:162/proxy/: bar (200; 4.269743ms) | |
Apr 21 22:59:39.898: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:162/proxy/: bar (200; 4.246497ms) | |
Apr 21 22:59:40.099: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:462/proxy/: tls qux (200; 4.854647ms) | |
Apr 21 22:59:40.299: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:80/: foo (200; 4.575699ms) | |
Apr 21 22:59:40.498: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname1/: foo (200; 3.741716ms) | |
Apr 21 22:59:40.699: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:443/: tls baz (200; 4.351756ms) | |
Apr 21 22:59:40.899: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname1/proxy/: foo (200; 3.622856ms) | |
Apr 21 22:59:41.100: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:160/proxy/: foo (200; 4.418075ms) | |
Apr 21 22:59:41.300: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:160/: foo (200; 4.206406ms) | |
Apr 21 22:59:41.500: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname2/: bar (200; 3.598457ms) | |
Apr 21 22:59:41.701: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:81/: bar (200; 4.015112ms) | |
Apr 21 22:59:41.901: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:80/: foo (200; 4.193213ms) | |
Apr 21 22:59:42.101: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname2/: bar (200; 3.39483ms) | |
Apr 21 22:59:42.301: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname1/: tls baz (200; 3.604241ms) | |
Apr 21 22:59:42.502: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:444/: tls qux (200; 4.384092ms) | |
Apr 21 22:59:42.703: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:443/proxy/... (200; 4.506387ms) | |
Apr 21 22:59:42.903: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname1/: foo (200; 4.326114ms) | |
Apr 21 22:59:43.103: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:81/: bar (200; 4.104274ms) | |
Apr 21 22:59:43.303: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname2/proxy/: bar (200; 3.390433ms) | |
Apr 21 22:59:43.504: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname2/proxy/: tls qux (200; 3.943767ms) | |
Apr 21 22:59:43.704: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/proxy/re... (200; 4.479504ms) | |
Apr 21 22:59:43.905: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:162/: bar (200; 4.348952ms) | |
Apr 21 22:59:44.105: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5/proxy/rewriteme"... (200; 4.233637ms) | |
Apr 21 22:59:44.305: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:160/proxy/: foo (200; 4.044593ms) | |
Apr 21 22:59:44.505: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname1/proxy/: foo (200; 3.277448ms) | |
Apr 21 22:59:44.705: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname2/proxy/: bar (200; 3.809437ms) | |
Apr 21 22:59:44.906: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname1/proxy/: tls baz (200; 4.138542ms) | |
Apr 21 22:59:45.106: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/rewrite... (200; 4.064118ms) | |
Apr 21 22:59:45.307: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/re... (200; 4.296177ms) | |
Apr 21 22:59:45.507: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname2/: tls qux (200; 3.870286ms) | |
Apr 21 22:59:45.707: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:160/: foo (200; 3.681644ms) | |
Apr 21 22:59:45.908: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:162/: bar (200; 4.388999ms) | |
Apr 21 22:59:46.108: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/proxy/rewrite... (200; 4.086812ms) | |
Apr 21 22:59:46.309: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:460/proxy/: tls baz (200; 4.194973ms) | |
Apr 21 22:59:46.509: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:462/proxy/: tls qux (200; 3.975653ms) | |
Apr 21 22:59:46.709: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:162/proxy/: bar (200; 3.827086ms) | |
Apr 21 22:59:46.908: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:162/proxy/: bar (200; 3.170789ms) | |
Apr 21 22:59:47.110: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:443/: tls baz (200; 4.060668ms) | |
Apr 21 22:59:47.309: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname1/proxy/: foo (200; 3.360129ms) | |
Apr 21 22:59:47.510: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:160/proxy/: foo (200; 3.624235ms) | |
Apr 21 22:59:47.711: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:80/: foo (200; 3.886182ms) | |
Apr 21 22:59:47.911: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname1/: foo (200; 3.439824ms) | |
Apr 21 22:59:48.111: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:80/: foo (200; 3.537609ms) | |
Apr 21 22:59:48.311: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname2/: bar (200; 3.462479ms) | |
Apr 21 22:59:48.512: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname1/: tls baz (200; 3.551865ms) | |
Apr 21 22:59:48.712: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:160/: foo (200; 3.52015ms) | |
Apr 21 22:59:48.912: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname2/: bar (200; 3.321657ms) | |
Apr 21 22:59:49.113: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:81/: bar (200; 4.032016ms) | |
Apr 21 22:59:49.313: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:444/: tls qux (200; 3.924922ms) | |
Apr 21 22:59:49.514: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:443/proxy/... (200; 4.67913ms) | |
Apr 21 22:59:49.713: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname2/proxy/: bar (200; 3.367706ms) | |
Apr 21 22:59:49.914: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname2/proxy/: tls qux (200; 3.635652ms) | |
Apr 21 22:59:50.114: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/proxy/re... (200; 3.812609ms) | |
Apr 21 22:59:50.314: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname1/: foo (200; 3.416934ms) | |
Apr 21 22:59:50.515: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:81/: bar (200; 3.737405ms) | |
Apr 21 22:59:50.715: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname1/proxy/: tls baz (200; 3.465013ms) | |
Apr 21 22:59:50.916: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/rewrite... (200; 3.950626ms) | |
Apr 21 22:59:51.116: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/re... (200; 3.565444ms) | |
Apr 21 22:59:51.316: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:162/: bar (200; 3.411763ms) | |
Apr 21 22:59:51.517: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5/proxy/rewriteme"... (200; 4.537687ms) | |
Apr 21 22:59:51.720: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:160/proxy/: foo (200; 6.509047ms) | |
Apr 21 22:59:51.917: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname1/proxy/: foo (200; 3.961012ms) | |
Apr 21 22:59:52.117: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname2/proxy/: bar (200; 3.681645ms) | |
Apr 21 22:59:52.319: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:162/: bar (200; 4.410531ms) | |
Apr 21 22:59:52.519: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/proxy/rewrite... (200; 4.235049ms) | |
Apr 21 22:59:52.719: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:460/proxy/: tls baz (200; 4.323249ms) | |
Apr 21 22:59:52.919: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname2/: tls qux (200; 3.869557ms) | |
Apr 21 22:59:53.119: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:160/: foo (200; 3.981844ms) | |
Apr 21 22:59:53.319: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname1/proxy/: foo (200; 3.442738ms) | |
Apr 21 22:59:53.519: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname2/proxy/: bar (200; 3.240411ms) | |
Apr 21 22:59:53.720: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname1/proxy/: tls baz (200; 3.56666ms) | |
Apr 21 22:59:53.920: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/rewrite... (200; 3.738095ms) | |
Apr 21 22:59:54.122: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/re... (200; 4.630881ms) | |
Apr 21 22:59:54.321: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:162/: bar (200; 3.800146ms) | |
Apr 21 22:59:54.522: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5/proxy/rewriteme"... (200; 4.413645ms) | |
Apr 21 22:59:54.722: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:160/proxy/: foo (200; 4.038677ms) | |
Apr 21 22:59:54.922: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname2/: tls qux (200; 3.728653ms) | |
Apr 21 22:59:55.122: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:160/: foo (200; 3.342035ms) | |
Apr 21 22:59:55.323: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:162/: bar (200; 3.823124ms) | |
Apr 21 22:59:55.523: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/proxy/rewrite... (200; 3.556067ms) | |
Apr 21 22:59:55.723: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:460/proxy/: tls baz (200; 3.245393ms) | |
Apr 21 22:59:55.923: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:162/proxy/: bar (200; 3.106019ms) | |
Apr 21 22:59:56.124: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:162/proxy/: bar (200; 3.684114ms) | |
Apr 21 22:59:56.324: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:462/proxy/: tls qux (200; 3.652966ms) | |
Apr 21 22:59:56.525: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:80/: foo (200; 3.876878ms) | |
Apr 21 22:59:56.724: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname1/: foo (200; 2.983211ms) | |
Apr 21 22:59:56.927: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:443/: tls baz (200; 5.434089ms) | |
Apr 21 22:59:57.125: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname1/proxy/: foo (200; 3.018172ms) | |
Apr 21 22:59:57.326: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:160/proxy/: foo (200; 3.314995ms) | |
Apr 21 22:59:57.526: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname2/: bar (200; 3.241877ms) | |
Apr 21 22:59:57.729: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:81/: bar (200; 5.665729ms) | |
Apr 21 22:59:57.927: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:80/: foo (200; 4.159152ms) | |
Apr 21 22:59:58.126: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname2/: bar (200; 2.833056ms) | |
Apr 21 22:59:58.327: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname1/: tls baz (200; 2.987666ms) | |
Apr 21 22:59:58.528: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:160/: foo (200; 3.839195ms) | |
Apr 21 22:59:58.728: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:444/: tls qux (200; 3.706448ms) | |
Apr 21 22:59:58.929: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:443/proxy/... (200; 3.859381ms) | |
Apr 21 22:59:59.129: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname1/: foo (200; 3.561705ms) | |
Apr 21 22:59:59.329: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:81/: bar (200; 3.713298ms) | |
Apr 21 22:59:59.529: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname2/proxy/: bar (200; 3.639975ms) | |
Apr 21 22:59:59.729: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname2/proxy/: tls qux (200; 3.467236ms) | |
Apr 21 22:59:59.931: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/proxy/re... (200; 4.315229ms) | |
Apr 21 23:00:00.130: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname2/: bar (200; 3.671933ms) | |
Apr 21 23:00:00.331: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname1/: tls baz (200; 3.774507ms) | |
Apr 21 23:00:00.532: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:160/: foo (200; 4.271668ms) | |
Apr 21 23:00:00.731: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname2/: bar (200; 3.609282ms) | |
Apr 21 23:00:00.933: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:81/: bar (200; 5.132872ms) | |
Apr 21 23:00:01.133: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:80/: foo (200; 4.483041ms) | |
Apr 21 23:00:01.333: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:444/: tls qux (200; 4.716897ms) | |
Apr 21 23:00:01.535: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:443/proxy/... (200; 5.88182ms) | |
Apr 21 23:00:01.734: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname2/proxy/: tls qux (200; 4.467046ms) | |
Apr 21 23:00:01.934: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/proxy/re... (200; 4.176148ms) | |
Apr 21 23:00:02.134: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname1/: foo (200; 4.073613ms) | |
Apr 21 23:00:02.334: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:81/: bar (200; 4.182368ms) | |
Apr 21 23:00:02.535: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname2/proxy/: bar (200; 4.784373ms) | |
Apr 21 23:00:02.735: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/rewrite... (200; 4.273601ms) | |
Apr 21 23:00:02.935: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/re... (200; 4.024837ms) | |
Apr 21 23:00:03.135: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:162/: bar (200; 3.747779ms) | |
Apr 21 23:00:03.336: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5/proxy/rewriteme"... (200; 3.800469ms) | |
Apr 21 23:00:03.536: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:160/proxy/: foo (200; 4.181844ms) | |
Apr 21 23:00:03.759: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname1/proxy/: foo (200; 26.514599ms) | |
Apr 21 23:00:03.936: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname2/proxy/: bar (200; 3.747561ms) | |
Apr 21 23:00:04.137: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname1/proxy/: tls baz (200; 3.559644ms) | |
Apr 21 23:00:04.338: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/proxy/rewrite... (200; 4.128952ms) | |
Apr 21 23:00:04.537: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:460/proxy/: tls baz (200; 3.66051ms) | |
Apr 21 23:00:04.738: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname2/: tls qux (200; 3.474382ms) | |
Apr 21 23:00:04.938: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:160/: foo (200; 3.812249ms) | |
Apr 21 23:00:05.139: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:162/: bar (200; 3.917569ms) | |
Apr 21 23:00:05.339: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:162/proxy/: bar (200; 3.917386ms) | |
Apr 21 23:00:05.539: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:162/proxy/: bar (200; 3.958107ms) | |
Apr 21 23:00:05.740: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:462/proxy/: tls qux (200; 4.018498ms) | |
Apr 21 23:00:05.939: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname1/proxy/: foo (200; 3.329136ms) | |
Apr 21 23:00:06.140: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:160/proxy/: foo (200; 3.753634ms) | |
Apr 21 23:00:06.341: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:80/: foo (200; 4.347427ms) | |
Apr 21 23:00:06.541: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname1/: foo (200; 3.689758ms) | |
Apr 21 23:00:06.741: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:443/: tls baz (200; 4.147523ms) | |
Apr 21 23:00:06.941: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:80/: foo (200; 4.13685ms) | |
Apr 21 23:00:07.141: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname1/: foo (200; 3.66632ms) | |
Apr 21 23:00:07.342: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:443/: tls baz (200; 4.260363ms) | |
Apr 21 23:00:07.542: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname1/proxy/: foo (200; 3.585111ms) | |
Apr 21 23:00:07.742: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:160/proxy/: foo (200; 4.04745ms) | |
Apr 21 23:00:07.943: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname2/: bar (200; 4.497323ms) | |
Apr 21 23:00:08.143: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:81/: bar (200; 4.102999ms) | |
Apr 21 23:00:08.343: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:80/: foo (200; 4.458134ms) | |
Apr 21 23:00:08.543: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname2/: bar (200; 3.719758ms) | |
Apr 21 23:00:08.743: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname1/: tls baz (200; 3.745313ms) | |
Apr 21 23:00:08.945: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:160/: foo (200; 5.427472ms) | |
Apr 21 23:00:09.144: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:444/: tls qux (200; 4.17221ms) | |
Apr 21 23:00:09.344: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:443/proxy/... (200; 4.007664ms) | |
Apr 21 23:00:09.544: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname1/: foo (200; 3.77684ms) | |
Apr 21 23:00:09.745: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:81/: bar (200; 4.00204ms) | |
Apr 21 23:00:09.945: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname2/proxy/: bar (200; 3.737759ms) | |
Apr 21 23:00:10.145: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname2/proxy/: tls qux (200; 3.663694ms) | |
Apr 21 23:00:10.345: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/proxy/re... (200; 4.020937ms) | |
Apr 21 23:00:10.546: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5/proxy/rewriteme"... (200; 4.112601ms) | |
Apr 21 23:00:10.746: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:160/proxy/: foo (200; 3.78562ms) | |
Apr 21 23:00:10.946: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname1/proxy/: foo (200; 3.955574ms) | |
Apr 21 23:00:11.147: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname2/proxy/: bar (200; 4.224819ms) | |
Apr 21 23:00:11.346: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname1/proxy/: tls baz (200; 3.808779ms) | |
Apr 21 23:00:11.547: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/rewrite... (200; 3.830269ms) | |
Apr 21 23:00:11.747: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/re... (200; 4.02705ms) | |
Apr 21 23:00:11.947: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:162/: bar (200; 3.652992ms) | |
Apr 21 23:00:12.147: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname2/: tls qux (200; 3.82414ms) | |
Apr 21 23:00:12.347: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:160/: foo (200; 3.616431ms) | |
Apr 21 23:00:12.548: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:162/: bar (200; 4.100581ms) | |
Apr 21 23:00:12.748: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/proxy/rewrite... (200; 3.8814ms) | |
Apr 21 23:00:12.949: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:460/proxy/: tls baz (200; 4.088934ms) | |
Apr 21 23:00:13.149: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:162/proxy/: bar (200; 3.769317ms) | |
Apr 21 23:00:13.349: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:162/proxy/: bar (200; 3.80808ms) | |
Apr 21 23:00:13.549: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:462/proxy/: tls qux (200; 3.746029ms) | |
Apr 21 23:00:13.750: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:80/: foo (200; 4.028664ms) | |
Apr 21 23:00:13.949: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname1/: foo (200; 3.52993ms) | |
Apr 21 23:00:14.150: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:443/: tls baz (200; 4.170722ms) | |
Apr 21 23:00:14.350: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname1/proxy/: foo (200; 3.62834ms) | |
Apr 21 23:00:14.551: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:160/proxy/: foo (200; 4.128362ms) | |
Apr 21 23:00:14.751: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname2/: bar (200; 3.682605ms) | |
Apr 21 23:00:14.952: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:81/: bar (200; 4.21465ms) | |
Apr 21 23:00:15.153: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:80/: foo (200; 4.564498ms) | |
Apr 21 23:00:15.352: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname2/: bar (200; 3.541969ms) | |
Apr 21 23:00:15.553: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname1/: tls baz (200; 3.930068ms) | |
Apr 21 23:00:15.753: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:160/: foo (200; 3.900397ms) | |
Apr 21 23:00:15.953: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:444/: tls qux (200; 3.857746ms) | |
Apr 21 23:00:16.153: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:443/proxy/... (200; 4.053616ms) | |
Apr 21 23:00:16.353: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname1/: foo (200; 3.712799ms) | |
Apr 21 23:00:16.554: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:81/: bar (200; 4.119864ms) | |
Apr 21 23:00:16.754: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname2/proxy/: bar (200; 3.655442ms) | |
Apr 21 23:00:16.954: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname2/proxy/: tls qux (200; 3.652891ms) | |
Apr 21 23:00:17.155: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/proxy/re... (200; 4.582795ms) | |
Apr 21 23:00:17.355: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5/proxy/rewriteme"... (200; 4.20179ms) | |
Apr 21 23:00:17.555: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:160/proxy/: foo (200; 4.243326ms) | |
Apr 21 23:00:17.755: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname1/proxy/: foo (200; 3.90547ms) | |
Apr 21 23:00:17.955: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname2/proxy/: bar (200; 3.571756ms) | |
Apr 21 23:00:18.156: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname1/proxy/: tls baz (200; 3.755682ms) | |
Apr 21 23:00:18.356: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/rewrite... (200; 3.839713ms) | |
Apr 21 23:00:18.556: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/re... (200; 3.777638ms) | |
Apr 21 23:00:18.756: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:162/: bar (200; 3.410509ms) | |
Apr 21 23:00:18.956: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname2/: tls qux (200; 3.272155ms) | |
Apr 21 23:00:19.157: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:160/: foo (200; 3.176508ms) | |
Apr 21 23:00:19.357: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:162/: bar (200; 3.661746ms) | |
Apr 21 23:00:19.558: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/proxy/rewrite... (200; 4.274965ms) | |
Apr 21 23:00:19.758: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:460/proxy/: tls baz (200; 3.998015ms) | |
Apr 21 23:00:19.958: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:162/proxy/: bar (200; 3.64354ms) | |
Apr 21 23:00:20.159: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:162/proxy/: bar (200; 3.840306ms) | |
Apr 21 23:00:20.359: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:462/proxy/: tls qux (200; 3.606891ms) | |
Apr 21 23:00:20.559: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:160/: foo (200; 4.151794ms) | |
Apr 21 23:00:20.759: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:162/: bar (200; 3.705772ms) | |
Apr 21 23:00:20.960: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/proxy/rewrite... (200; 3.812675ms) | |
Apr 21 23:00:21.160: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:460/proxy/: tls baz (200; 3.849644ms) | |
Apr 21 23:00:21.360: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname2/: tls qux (200; 3.561957ms) | |
Apr 21 23:00:21.560: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:162/proxy/: bar (200; 3.600891ms) | |
Apr 21 23:00:21.760: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:462/proxy/: tls qux (200; 3.672017ms) | |
Apr 21 23:00:21.960: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:162/proxy/: bar (200; 3.341986ms) | |
Apr 21 23:00:22.160: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname1/: foo (200; 3.265332ms) | |
Apr 21 23:00:22.361: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:443/: tls baz (200; 3.803576ms) | |
Apr 21 23:00:22.561: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname1/proxy/: foo (200; 3.351842ms) | |
Apr 21 23:00:22.761: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:160/proxy/: foo (200; 3.159541ms) | |
Apr 21 23:00:22.962: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:80/: foo (200; 3.811198ms) | |
Apr 21 23:00:23.162: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:81/: bar (200; 4.164515ms) | |
Apr 21 23:00:23.362: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:80/: foo (200; 3.785367ms) | |
Apr 21 23:00:23.562: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname2/: bar (200; 3.433461ms) | |
Apr 21 23:00:23.762: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname1/: tls baz (200; 3.206413ms) | |
Apr 21 23:00:23.963: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:160/: foo (200; 4.080608ms) | |
Apr 21 23:00:24.163: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname2/: bar (200; 3.533158ms) | |
Apr 21 23:00:24.364: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:443/proxy/... (200; 3.965318ms) | |
Apr 21 23:00:24.564: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:444/: tls qux (200; 4.119404ms) | |
Apr 21 23:00:24.765: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:81/: bar (200; 3.94627ms) | |
Apr 21 23:00:24.964: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname2/proxy/: bar (200; 3.421275ms) | |
Apr 21 23:00:25.165: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname2/proxy/: tls qux (200; 3.374861ms) | |
Apr 21 23:00:25.365: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/proxy/re... (200; 3.734077ms) | |
Apr 21 23:00:25.566: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname1/: foo (200; 3.690302ms) | |
Apr 21 23:00:25.766: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname2/proxy/: bar (200; 3.29673ms) | |
Apr 21 23:00:25.966: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname1/proxy/: tls baz (200; 3.689567ms) | |
Apr 21 23:00:26.167: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/rewrite... (200; 3.810241ms) | |
Apr 21 23:00:26.367: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/re... (200; 3.512562ms) | |
Apr 21 23:00:26.568: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:162/: bar (200; 3.966942ms) | |
Apr 21 23:00:26.768: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5/proxy/rewriteme"... (200; 3.810136ms) | |
Apr 21 23:00:26.968: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:160/proxy/: foo (200; 3.861867ms) | |
Apr 21 23:00:27.168: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname1/proxy/: foo (200; 3.34579ms) | |
Apr 21 23:00:27.368: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname1/proxy/: foo (200; 3.380543ms) | |
Apr 21 23:00:27.569: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:160/proxy/: foo (200; 3.733815ms) | |
Apr 21 23:00:27.769: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:80/: foo (200; 3.785414ms) | |
Apr 21 23:00:27.969: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname1/: foo (200; 3.446255ms) | |
Apr 21 23:00:28.170: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:443/: tls baz (200; 3.824861ms) | |
Apr 21 23:00:28.370: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname2/: bar (200; 3.380553ms) | |
Apr 21 23:00:28.570: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname1/: tls baz (200; 3.734537ms) | |
Apr 21 23:00:28.770: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:160/: foo (200; 3.55237ms) | |
Apr 21 23:00:28.970: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname2/: bar (200; 3.206643ms) | |
Apr 21 23:00:29.171: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:81/: bar (200; 3.838889ms) | |
Apr 21 23:00:29.372: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:80/: foo (200; 4.136805ms) | |
Apr 21 23:00:29.572: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:444/: tls qux (200; 4.120909ms) | |
Apr 21 23:00:29.773: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:443/proxy/... (200; 4.117962ms) | |
Apr 21 23:00:29.972: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname2/proxy/: tls qux (200; 3.339536ms) | |
Apr 21 23:00:30.173: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/proxy/re... (200; 3.797588ms) | |
Apr 21 23:00:30.373: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname1/: foo (200; 3.266052ms) | |
Apr 21 23:00:30.574: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:81/: bar (200; 3.929674ms) | |
Apr 21 23:00:30.774: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname2/proxy/: bar (200; 3.401887ms) | |
Apr 21 23:00:30.974: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/rewrite... (200; 3.644854ms) | |
Apr 21 23:00:31.175: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/re... (200; 3.765829ms) | |
Apr 21 23:00:31.375: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:162/: bar (200; 3.982728ms) | |
Apr 21 23:00:31.576: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5/proxy/rewriteme"... (200; 4.053871ms) | |
Apr 21 23:00:31.776: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:160/proxy/: foo (200; 3.645138ms) | |
Apr 21 23:00:31.976: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname1/proxy/: foo (200; 3.709986ms) | |
Apr 21 23:00:32.177: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname2/proxy/: bar (200; 3.659244ms) | |
Apr 21 23:00:32.377: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname1/proxy/: tls baz (200; 3.944003ms) | |
Apr 21 23:00:32.579: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/proxy/rewrite... (200; 5.686716ms) | |
Apr 21 23:00:32.778: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:460/proxy/: tls baz (200; 3.926572ms) | |
Apr 21 23:00:32.978: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname2/: tls qux (200; 4.197289ms) | |
Apr 21 23:00:33.178: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:160/: foo (200; 3.874955ms) | |
Apr 21 23:00:33.378: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:162/: bar (200; 3.561315ms) | |
Apr 21 23:00:33.579: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:162/proxy/: bar (200; 3.816968ms) | |
Apr 21 23:00:33.780: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:162/proxy/: bar (200; 4.435224ms) | |
Apr 21 23:00:33.980: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:462/proxy/: tls qux (200; 4.240295ms) | |
Apr 21 23:00:34.180: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:444/: tls qux (200; 4.088487ms) | |
Apr 21 23:00:34.380: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:443/proxy/... (200; 3.384259ms) | |
Apr 21 23:00:34.580: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname2/proxy/: bar (200; 3.889275ms) | |
Apr 21 23:00:34.781: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname2/proxy/: tls qux (200; 4.072121ms) | |
Apr 21 23:00:34.981: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/proxy/re... (200; 3.861359ms) | |
Apr 21 23:00:35.180: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname1/: foo (200; 3.048659ms) | |
Apr 21 23:00:35.383: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:81/: bar (200; 5.377368ms) | |
Apr 21 23:00:35.582: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname1/proxy/: tls baz (200; 4.656118ms) | |
Apr 21 23:00:35.782: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/rewrite... (200; 4.099088ms) | |
Apr 21 23:00:35.982: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/re... (200; 4.15479ms) | |
Apr 21 23:00:36.182: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:162/: bar (200; 4.029579ms) | |
Apr 21 23:00:36.383: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5/proxy/rewriteme"... (200; 4.171713ms) | |
Apr 21 23:00:36.583: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:160/proxy/: foo (200; 4.36404ms) | |
Apr 21 23:00:36.789: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname1/proxy/: foo (200; 9.400102ms) | |
Apr 21 23:00:36.984: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname2/proxy/: bar (200; 4.326144ms) | |
Apr 21 23:00:37.184: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:162/: bar (200; 3.955322ms) | |
Apr 21 23:00:37.384: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/proxy/rewrite... (200; 4.397598ms) | |
Apr 21 23:00:37.584: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:460/proxy/: tls baz (200; 4.197007ms) | |
Apr 21 23:00:37.784: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname2/: tls qux (200; 3.970734ms) | |
Apr 21 23:00:37.985: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:160/: foo (200; 4.283005ms) | |
Apr 21 23:00:38.185: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:462/proxy/: tls qux (200; 4.64046ms) | |
Apr 21 23:00:38.385: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:162/proxy/: bar (200; 4.155294ms) | |
Apr 21 23:00:38.586: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:162/proxy/: bar (200; 4.189665ms) | |
Apr 21 23:00:38.786: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:443/: tls baz (200; 4.432261ms) | |
Apr 21 23:00:38.986: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname1/proxy/: foo (200; 3.918989ms) | |
Apr 21 23:00:39.187: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:160/proxy/: foo (200; 4.347194ms) | |
Apr 21 23:00:39.387: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:80/: foo (200; 4.175308ms) | |
Apr 21 23:00:39.586: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname1/: foo (200; 3.785468ms) | |
Apr 21 23:00:39.787: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:80/: foo (200; 4.462858ms) | |
Apr 21 23:00:39.987: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname2/: bar (200; 4.101299ms) | |
Apr 21 23:00:40.187: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname1/: tls baz (200; 3.771559ms) | |
Apr 21 23:00:40.388: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:160/: foo (200; 4.084113ms) | |
Apr 21 23:00:40.588: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname2/: bar (200; 3.781657ms) | |
Apr 21 23:00:40.788: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:81/: bar (200; 4.075124ms) | |
Apr 21 23:00:40.988: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname1/: foo (200; 3.889138ms) | |
Apr 21 23:00:41.189: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:81/: bar (200; 4.392709ms) | |
Apr 21 23:00:41.389: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname2/proxy/: bar (200; 4.008248ms) | |
Apr 21 23:00:41.589: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname2/proxy/: tls qux (200; 3.910489ms) | |
Apr 21 23:00:41.789: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/proxy/re... (200; 4.037851ms) | |
Apr 21 23:00:41.990: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname1/proxy/: foo (200; 4.119509ms) | |
Apr 21 23:00:42.190: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname2/proxy/: bar (200; 4.117742ms) | |
Apr 21 23:00:42.389: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname1/proxy/: tls baz (200; 3.538277ms) | |
Apr 21 23:00:42.591: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/rewrite... (200; 4.394863ms) | |
Apr 21 23:00:42.791: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/re... (200; 4.2788ms) | |
Apr 21 23:00:43.031: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:162/: bar (200; 44.084889ms) | |
Apr 21 23:00:43.191: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5/proxy/rewriteme"... (200; 4.06305ms) | |
Apr 21 23:00:43.392: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:160/proxy/: foo (200; 4.261794ms) | |
Apr 21 23:00:43.591: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname2/: tls qux (200; 3.912633ms) | |
Apr 21 23:00:43.792: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:160/: foo (200; 4.087613ms) | |
Apr 21 23:00:43.992: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:162/: bar (200; 4.021763ms) | |
Apr 21 23:00:44.192: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/proxy/rewrite... (200; 4.125886ms) | |
Apr 21 23:00:44.392: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:460/proxy/: tls baz (200; 3.808743ms) | |
Apr 21 23:00:44.593: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:162/proxy/: bar (200; 4.066287ms) | |
Apr 21 23:00:44.793: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:162/proxy/: bar (200; 4.030753ms) | |
Apr 21 23:00:44.994: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:462/proxy/: tls qux (200; 4.278662ms) | |
Apr 21 23:00:45.194: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:80/: foo (200; 4.358107ms) | |
Apr 21 23:00:45.394: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname1/: foo (200; 3.529138ms) | |
Apr 21 23:00:45.595: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:443/: tls baz (200; 4.528547ms) | |
Apr 21 23:00:45.794: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname1/proxy/: foo (200; 3.76788ms) | |
Apr 21 23:00:45.995: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:160/proxy/: foo (200; 3.998953ms) | |
Apr 21 23:00:46.195: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname2/: bar (200; 4.155025ms) | |
Apr 21 23:00:46.396: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:81/: bar (200; 4.434538ms) | |
Apr 21 23:00:46.596: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:80/: foo (200; 4.30177ms) | |
Apr 21 23:00:46.795: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname2/: bar (200; 3.814471ms) | |
Apr 21 23:00:46.996: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname1/: tls baz (200; 3.691287ms) | |
Apr 21 23:00:47.196: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:160/: foo (200; 3.868878ms) | |
Apr 21 23:00:47.397: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:444/: tls qux (200; 4.358813ms) | |
Apr 21 23:00:47.597: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:443/proxy/... (200; 4.348305ms) | |
Apr 21 23:00:47.798: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/proxy/re... (200; 4.707889ms) | |
Apr 21 23:00:47.997: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname1/: foo (200; 3.803812ms) | |
Apr 21 23:00:48.197: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:81/: bar (200; 3.973741ms) | |
Apr 21 23:00:48.397: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname2/proxy/: bar (200; 3.756596ms) | |
Apr 21 23:00:48.598: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname2/proxy/: tls qux (200; 3.66207ms) | |
Apr 21 23:00:48.798: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:80/re... (200; 3.738981ms) | |
Apr 21 23:00:48.999: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:162/: bar (200; 4.082223ms) | |
Apr 21 23:00:49.198: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5/proxy/rewriteme"... (200; 3.726358ms) | |
Apr 21 23:00:49.399: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:160/proxy/: foo (200; 4.057754ms) | |
Apr 21 23:00:49.599: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname1/proxy/: foo (200; 3.623605ms) | |
Apr 21 23:00:49.799: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname2/proxy/: bar (200; 3.525765ms) | |
Apr 21 23:00:49.999: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname1/proxy/: tls baz (200; 3.430845ms) | |
Apr 21 23:00:50.200: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/rewrite... (200; 3.721326ms) | |
Apr 21 23:00:50.400: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:460/proxy/: tls baz (200; 3.793009ms) | |
Apr 21 23:00:50.600: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname2/: tls qux (200; 3.4546ms) | |
Apr 21 23:00:50.800: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:160/: foo (200; 3.690363ms) | |
Apr 21 23:00:51.000: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:162/: bar (200; 3.814754ms) | |
Apr 21 23:00:51.201: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:80/proxy/rewrite... (200; 4.219153ms) | |
Apr 21 23:00:51.401: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/proxy-service-yxge0-9m0v5:162/proxy/: bar (200; 4.167634ms) | |
Apr 21 23:00:51.601: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:162/proxy/: bar (200; 3.612238ms) | |
Apr 21 23:00:51.804: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:462/proxy/: tls qux (200; 5.982236ms) | |
Apr 21 23:00:52.002: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:160/proxy/: foo (200; 4.379612ms) | |
Apr 21 23:00:52.203: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:80/: foo (200; 4.533064ms) | |
Apr 21 23:00:52.402: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname1/: foo (200; 3.601039ms) | |
Apr 21 23:00:52.603: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:443/: tls baz (200; 3.971341ms) | |
Apr 21 23:00:52.803: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname1/proxy/: foo (200; 3.773484ms) | |
Apr 21 23:00:53.003: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:tlsportname1/: tls baz (200; 3.710964ms) | |
Apr 21 23:00:53.203: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/pods/http:proxy-service-yxge0-9m0v5:160/: foo (200; 3.463975ms) | |
Apr 21 23:00:53.403: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:portname2/: bar (200; 3.535024ms)
Apr 21 23:00:53.604: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/proxy-service-yxge0:81/: bar (200; 4.468901ms)
Apr 21 23:00:53.804: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:80/: foo (200; 4.237444ms)
Apr 21 23:00:54.004: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/http:proxy-service-yxge0:portname2/: bar (200; 3.705374ms)
Apr 21 23:00:54.205: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-g4coq/services/https:proxy-service-yxge0:444/: tls qux (200; 4.277573ms)
Apr 21 23:00:54.406: INFO: /api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-g4coq/pods/https:proxy-service-yxge0-9m0v5:443/proxy/... (200; 4.688785ms)
STEP: deleting replication controller proxy-service-yxge0 in namespace e2e-tests-proxy-g4coq
Apr 21 23:00:56.632: INFO: Deleting RC proxy-service-yxge0 took: 2.026888147s
Apr 21 23:00:56.632: INFO: Terminating RC proxy-service-yxge0 pods took: 120.858µs
[AfterEach] version v1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 23:00:56.649: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-g4coq" for this suite.
• [SLOW TEST:164.386 seconds]
[k8s.io] Proxy
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
version v1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:40
should proxy through a service and a pod [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:248
------------------------------
[BeforeEach] [k8s.io] Mesos
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 23:01:11.675: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Mesos
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/mesos.go:42
Apr 21 23:01:11.710: INFO: Only supported for providers [mesos/docker] (not gce)
[AfterEach] [k8s.io] Mesos
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 23:01:11.710: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-nz7bf" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [5.065 seconds]
[k8s.io] Mesos
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
starts static pods on every node in the mesos cluster [BeforeEach]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/mesos.go:74
Apr 21 23:01:11.710: Only supported for providers [mesos/docker] (not gce)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:276
------------------------------
[BeforeEach] [k8s.io] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 22:59:24.318: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should *not* be restarted with a docker exec "cat /tmp/health" liveness probe [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:714
STEP: Creating pod liveness-exec in namespace e2e-tests-pods-wern2
W0421 22:59:24.368783   17566 request.go:344] Field selector: v1 - pods - metadata.name - liveness-exec: need to check if this is versioned correctly.
Apr 21 22:59:25.927: INFO: Started pod liveness-exec in namespace e2e-tests-pods-wern2
STEP: checking the pod's current state and verifying that restartCount is present
Apr 21 22:59:25.946: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 23:01:26.202: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-wern2" for this suite.
• [SLOW TEST:126.905 seconds]
[k8s.io] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
should *not* be restarted with a docker exec "cat /tmp/health" liveness probe [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:714
------------------------------
[BeforeEach] [k8s.io] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 23:01:16.741: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] RollingUpdateDeployment should scale up and down in the right order
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:61
Apr 21 23:01:16.800: INFO: Pod name sample-pod-2: Found 0 pods out of 1
Apr 21 23:01:21.805: INFO: Pod name sample-pod-2: Found 1 pods out of 1
STEP: ensuring each pod is running
W0421 23:01:21.805632   17555 request.go:344] Field selector: v1 - pods - metadata.name - test-rolling-scale-controller-h0q5q: need to check if this is versioned correctly.
STEP: trying to dial each unique pod
Apr 21 23:01:21.832: INFO: Controller sample-pod-2: Got non-empty result from replica 1 [test-rolling-scale-controller-h0q5q]: "<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\n    body {\n        width: 35em;\n        margin: 0 auto;\n        font-family: Tahoma, Verdana, Arial, sans-serif;\n    }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href=\"http://nginx.org/\">nginx.org</a>.<br/>\nCommercial support is available at\n<a href=\"http://nginx.com/\">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n", 1 of 1 required successes so far
Apr 21 23:01:21.832: INFO: Creating deployment test-rolling-scale-deployment
Apr 21 23:01:27.872: INFO: Deleting deployment test-rolling-scale-deployment
Apr 21 23:01:31.949: INFO: Ensuring deployment test-rolling-scale-deployment was deleted
Apr 21 23:01:31.952: INFO: Ensuring deployment test-rolling-scale-deployment's RSes were deleted
Apr 21 23:01:31.954: INFO: Ensuring deployment test-rolling-scale-deployment's Pods were deleted
[AfterEach] [k8s.io] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 23:01:31.956: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-bre4l" for this suite.
• [SLOW TEST:20.232 seconds]
[k8s.io] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
RollingUpdateDeployment should scale up and down in the right order
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:61
------------------------------
S
------------------------------
[BeforeEach] [k8s.io] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 23:01:36.975: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support remote command execution over websockets
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:885
Apr 21 23:01:37.008: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
W0421 23:01:37.035368   17555 request.go:344] Field selector: v1 - pods - metadata.name - pod-exec-websocket-ac73f2d2-084f-11e6-8c05-42010af00007: need to check if this is versioned correctly.
STEP: deleting the pod
[AfterEach] [k8s.io] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 23:01:38.165: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-3eko8" for this suite.
• [SLOW TEST:6.207 seconds]
[k8s.io] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
should support remote command execution over websockets
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:885
------------------------------
[BeforeEach] [k8s.io] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 23:01:31.225: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be schedule with cpu and memory limits [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:264
STEP: creating the pod
W0421 23:01:31.279381   17566 request.go:344] Field selector: v1 - pods - metadata.name - pod-update-a906ca12-084f-11e6-8fe8-42010af00007: need to check if this is versioned correctly.
[AfterEach] [k8s.io] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 23:01:32.965: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-0rqy1" for this suite.
• [SLOW TEST:21.761 seconds]
[k8s.io] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
should be schedule with cpu and memory limits [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:264
------------------------------
[BeforeEach] [k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 23:01:52.987: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:142
[It] should reuse nodePort when apply to an existing SVC
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:595
STEP: creating Redis SVC
Apr 21 23:01:53.042: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config create -f /jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/examples/guestbook-go/redis-master-service.json --namespace=e2e-tests-kubectl-b8gcn'
Apr 21 23:01:53.157: INFO: stderr: ""
Apr 21 23:01:53.157: INFO: stdout: "service \"redis-master\" created"
STEP: getting the original nodePort
Apr 21 23:01:53.157: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-b8gcn -o jsonpath={.spec.ports[0].nodePort}'
Apr 21 23:01:53.226: INFO: stderr: ""
Apr 21 23:01:53.226: INFO: stdout: "0"
STEP: applying the same configuration
Apr 21 23:01:53.226: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config apply -f /jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/examples/guestbook-go/redis-master-service.json --namespace=e2e-tests-kubectl-b8gcn'
Apr 21 23:01:53.340: INFO: stderr: ""
Apr 21 23:01:53.341: INFO: stdout: "service \"redis-master\" configured"
STEP: getting the nodePort after applying configuration
Apr 21 23:01:53.341: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://146.148.88.146 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-b8gcn -o jsonpath={.spec.ports[0].nodePort}'
Apr 21 23:01:53.415: INFO: stderr: ""
Apr 21 23:01:53.415: INFO: stdout: "0"
STEP: checking the result
[AfterEach] [k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 23:01:53.415: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-b8gcn" for this suite.
• [SLOW TEST:5.462 seconds]
[k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
[k8s.io] Kubectl apply
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
should reuse nodePort when apply to an existing SVC
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:595
------------------------------
[BeforeEach] [k8s.io] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 23:01:58.452: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] deployment should delete old replica sets
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:67
Apr 21 23:01:58.513: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Apr 21 23:02:03.517: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
W0421 23:02:03.517652   17566 request.go:344] Field selector: v1 - pods - metadata.name - test-cleanup-controller-fwi10: need to check if this is versioned correctly.
STEP: trying to dial each unique pod
Apr 21 23:02:03.546: INFO: Controller cleanup-pod: Got non-empty result from replica 1 [test-cleanup-controller-fwi10]: "<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\n    body {\n        width: 35em;\n        margin: 0 auto;\n        font-family: Tahoma, Verdana, Arial, sans-serif;\n    }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href=\"http://nginx.org/\">nginx.org</a>.<br/>\nCommercial support is available at\n<a href=\"http://nginx.com/\">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n", 1 of 1 required successes so far
Apr 21 23:02:03.546: INFO: Creating deployment test-cleanup-deployment
Apr 21 23:02:05.580: INFO: Deleting deployment test-cleanup-deployment
Apr 21 23:02:07.640: INFO: Ensuring deployment test-cleanup-deployment was deleted
Apr 21 23:02:07.642: INFO: Ensuring deployment test-cleanup-deployment's RSes were deleted
Apr 21 23:02:07.645: INFO: Ensuring deployment test-cleanup-deployment's Pods were deleted
[AfterEach] [k8s.io] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 23:02:07.646: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-8idkd" for this suite.
• [SLOW TEST:14.211 seconds]
[k8s.io] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
deployment should delete old replica sets
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:67
------------------------------
SS
------------------------------
[BeforeEach] [k8s.io] Networking
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:106
STEP: Creating a kubernetes client
Apr 21 23:02:12.666: INFO: >>> TestContext.KubeConfig: /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Networking
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:50
STEP: Executing a successful http request from the external internet
[It] should provide unchanging, static URL paths for kubernetes api services [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:76
STEP: testing: /validate
STEP: testing: /healthz
[AfterEach] [k8s.io] Networking
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:107
Apr 21 23:02:12.764: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-nettest-crv9k" for this suite.
• [SLOW TEST:5.117 seconds]
[k8s.io] Networking
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:426
should provide unchanging, static URL paths for kubernetes api services [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:76
------------------------------
Dumping master and node logs to /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/_artifacts
Warning: Permanently added '146.148.88.146' (ECDSA) to the list of known hosts.
ERROR: (gcloud.compute.copy-files) Could not fetch instance:
 - The resource 'projects/kubernetes-jenkins-pull/zones/us-central1-f/instances/https' was not found
ERROR: (gcloud.compute.copy-files) Could not fetch instance:
 - The resource 'projects/kubernetes-jenkins-pull/zones/us-central1-f/instances/https' was not found
ERROR: (gcloud.compute.copy-files) Could not fetch instance:
 - The resource 'projects/kubernetes-jenkins-pull/zones/us-central1-f/instances/https' was not found
ERROR: (gcloud.compute.copy-files) Could not fetch instance:
 - The resource 'projects/kubernetes-jenkins-pull/zones/us-central1-f/instances/https' was not found
ERROR: (gcloud.compute.copy-files) Could not fetch instance:
 - The resource 'projects/kubernetes-jenkins-pull/zones/us-central1-f/instances/https' was not found
ERROR: (gcloud.compute.copy-files) Could not fetch instance:
 - The resource 'projects/kubernetes-jenkins-pull/zones/us-central1-f/instances/https' was not found
Summarizing 3 Failures:
[Fail] [k8s.io] Kubectl client [AfterEach] [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:184
[Fail] [k8s.io] Kubectl client [AfterEach] [k8s.io] Simple pod should support exec
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:184
[Fail] [k8s.io] Kubectl client [AfterEach] [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:184
Ran 166 of 278 Specs in 517.745 seconds
FAIL! -- 163 Passed | 3 Failed | 0 Pending | 112 Skipped
Ginkgo ran 1 suite in 8m38.2023064s
Test Suite Failed
!!! Error in /jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/hack/ginkgo-e2e.sh:92
'"${ginkgo}" "${ginkgo_args[@]:+${ginkgo_args[@]}}" "${e2e_test}" -- "${auth_config[@]:+${auth_config[@]}}" --host="${KUBE_MASTER_URL}" --provider="${KUBERNETES_PROVIDER}" --gce-project="${PROJECT:-}" --gce-zone="${ZONE:-}" --gce-service-account="${GCE_SERVICE_ACCOUNT:-}" --gke-cluster="${CLUSTER_NAME:-}" --kube-master="${KUBE_MASTER:-}" --cluster-tag="${CLUSTER_ID:-}" --repo-root="${KUBE_ROOT}" --node-instance-group="${NODE_INSTANCE_GROUP:-}" --prefix="${KUBE_GCE_INSTANCE_PREFIX:-e2e}" ${KUBE_OS_DISTRIBUTION:+"--os-distro=${KUBE_OS_DISTRIBUTION}"} ${NUM_NODES:+"--num-nodes=${NUM_NODES}"} ${E2E_CLEAN_START:+"--clean-start=true"} ${E2E_MIN_STARTUP_PODS:+"--minStartupPods=${E2E_MIN_STARTUP_PODS}"} ${E2E_REPORT_DIR:+"--report-dir=${E2E_REPORT_DIR}"} ${E2E_REPORT_PREFIX:+"--report-prefix=${E2E_REPORT_PREFIX}"} "${@:-}"' exited with status 1
Call stack:
 1: /jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/hack/ginkgo-e2e.sh:92 main(...)
Exiting with status 1
2016/04/21 23:02:26 e2e.go:200: Error running Ginkgo tests: exit status 1
2016/04/21 23:02:26 e2e.go:196: Step 'Ginkgo tests' finished in 8m39.263199393s
exit status 1
+ exitcode=1
+ [[ '' == \t\r\u\e ]]
+ [[ '' == \t\r\u\e ]]
+ [[ true == \t\r\u\e ]]
+ sleep 30
+ go run ./hack/e2e.go -v --down
2016/04/21 23:02:57 e2e.go:194: Running: teardown
Project: kubernetes-jenkins-pull
Zone: us-central1-f
Shutting down test cluster in background.
Deleted [https://www.googleapis.com/compute/v1/projects/kubernetes-jenkins-pull/global/firewalls/e2e-gce-master-1-minion-e2e-gce-master-1-http-alt].
Deleted [https://www.googleapis.com/compute/v1/projects/kubernetes-jenkins-pull/global/firewalls/e2e-gce-master-1-minion-e2e-gce-master-1-nodeports].
Bringing down cluster using provider: gce
All components are up to date.
All components are up to date.
All components are up to date.
Project: kubernetes-jenkins-pull
Zone: us-central1-f
INSTANCE_GROUPS=e2e-gce-master-1-minion-group
NODE_NAMES=https://www.googleapis.com/compute/v1/projects/kubernetes-jenkins-pull/zones/us-central1-f/instances/e2e-gce-master-1-minion-6ch0 https://www.googleapis.com/compute/v1/projects/kubernetes-jenkins-pull/zones/us-central1-f/instances/e2e-gce-master-1-minion-8eot https://www.googleapis.com/compute/v1/projects/kubernetes-jenkins-pull/zones/us-central1-f/instances/e2e-gce-master-1-minion-asea https://www.googleapis.com/compute/v1/projects/kubernetes-jenkins-pull/zones/us-central1-f/instances/e2e-gce-master-1-minion-fyts https://www.googleapis.com/compute/v1/projects/kubernetes-jenkins-pull/zones/us-central1-f/instances/e2e-gce-master-1-minion-hlmm https://www.googleapis.com/compute/v1/projects/kubernetes-jenkins-pull/zones/us-central1-f/instances/e2e-gce-master-1-minion-x3cg
Bringing down cluster
Deleted [https://www.googleapis.com/compute/v1/projects/kubernetes-jenkins-pull/zones/us-central1-f/instanceGroupManagers/e2e-gce-master-1-minion-group].
Deleted [https://www.googleapis.com/compute/v1/projects/kubernetes-jenkins-pull/global/instanceTemplates/e2e-gce-master-1-minion-template].
Updated [https://www.googleapis.com/compute/v1/projects/kubernetes-jenkins-pull/zones/us-central1-f/instances/e2e-gce-master-1-master].
Deleted [https://www.googleapis.com/compute/v1/projects/kubernetes-jenkins-pull/zones/us-central1-f/instances/e2e-gce-master-1-master].
Listed 0 items.
Deleted [https://www.googleapis.com/compute/v1/projects/kubernetes-jenkins-pull/global/firewalls/e2e-gce-master-1-master-https].
Deleted [https://www.googleapis.com/compute/v1/projects/kubernetes-jenkins-pull/global/firewalls/e2e-gce-master-1-minion-all].
Deleting routes e2e-gce-master-1-3740ef8d-084e-11e6-94fd-42010af00002
Deleted [https://www.googleapis.com/compute/v1/projects/kubernetes-jenkins-pull/global/routes/e2e-gce-master-1-3740ef8d-084e-11e6-94fd-42010af00002].
Deleted [https://www.googleapis.com/compute/v1/projects/kubernetes-jenkins-pull/regions/us-central1/addresses/e2e-gce-master-1-master-ip].
property "clusters.kubernetes-jenkins-pull_e2e-gce-master-1" unset.
property "users.kubernetes-jenkins-pull_e2e-gce-master-1" unset.
property "users.kubernetes-jenkins-pull_e2e-gce-master-1-basic-auth" unset.
property "contexts.kubernetes-jenkins-pull_e2e-gce-master-1" unset.
property "current-context" unset.
Cleared config for kubernetes-jenkins-pull_e2e-gce-master-1 from /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config
Done
2016/04/21 23:07:23 e2e.go:196: Step 'teardown' finished in 4m26.846259363s
+ [[ true == \t\r\u\e ]]
+ ./cluster/gce/list-resources.sh
Listed 0 items.
Listed 0 items.
Listed 0 items.
Listed 0 items.
+ [[ true == \t\r\u\e ]]
+ [[ -f /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/_artifacts/gcp-resources-before.txt ]]
+ [[ -f /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/_artifacts/gcp-resources-after.txt ]]
++ diff -sw -U0 '-F^\[.*\]$' /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/_artifacts/gcp-resources-before.txt /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/_artifacts/gcp-resources-after.txt
+ difference='Files /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/_artifacts/gcp-resources-before.txt and /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/_artifacts/gcp-resources-after.txt are identical'
++ echo 'Files /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/_artifacts/gcp-resources-before.txt and /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/_artifacts/gcp-resources-after.txt are identical'
++ tail -n +3
++ grep -E '^\+'
+ [[ -n '' ]]
+ chmod -R o+r /var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/_artifacts
+ rc=0
+ [[ 0 -ne 0 ]]
+ [[ 0 -eq 124 ]]
+ [[ 0 -eq 137 ]]
+ [[ 0 -ne 0 ]]
+ echo 'Exiting with code: 0'
Exiting with code: 0
+ exit 0
[workspace] $ /bin/bash -xe /tmp/hudson3417302778431982268.sh
+ make clean
build/make-clean.sh
+++ [0421 23:07:55] Verifying Prerequisites....
+++ [0421 23:07:55] Cleaning out _output/dockerized/bin/ via docker build image
+++ [0421 23:07:55] Running build command....
+++ [0421 23:07:58] Removing data container
+++ [0421 23:08:01] Cleaning out local _output directory
+++ [0421 23:08:02] Deleting docker image kube-build:build-2be8cc7bdc
Untagged: kube-build:build-2be8cc7bdc
Deleted: ee5268dd468dd152fcbf283b74175e57b76fcb7eabaf5b701a2def2084948c87
Deleted: fded3655dfa8259a1a01d541012ab5cbc864a8b06e4f11764d83aeb403194d0c
Deleted: 8915406db84bc8afd18a5c7a7c53e9590fbfad6b2860a9ee967259cd8c9b1d09
Deleted: 1f8f70e4ff1cf4ce31a81ad1f664e0d082043926742782a0f5b795a03465bfc0
+++ [0421 23:08:03] Cleaning all other untagged docker images
rm -rf _output
rm -rf Godeps/_workspace/pkg
Recording test results
Build step 'Publish JUnit test result report' changed build result to UNSTABLE
[PostBuildScript] - Execution post build scripts.
[workspace] $ /bin/bash -xe /tmp/hudson2506177417607298768.sh
+ [[ -x ./hack/jenkins/upload-to-gcs.sh ]]
+ ./hack/jenkins/upload-to-gcs.sh
Called without JENKINS_BUILD_STARTED or JENKINS_BUILD_FINISHED set.
Assuming a legacy invocation.
Run finished at Thu Apr 21 23:08:03 PDT 2016
Uploading to gs://kubernetes-jenkins/pr-logs/pull/24502/kubernetes-pull-build-test-e2e-gce/36553 (attempt 1)
Uploading build result: [UNSET]
Uploading artifacts
Marking build 36553 as the latest completed build
*** View logs and artifacts at https://console.cloud.google.com/storage/browser/kubernetes-jenkins/pr-logs/pull/24502/kubernetes-pull-build-test-e2e-gce/36553 ***