
@yifan-gu
Created October 24, 2015 00:21
coreos-jenkins
Started by user [email protected]
Building remotely on kubernetes-e2e-coreos-gce (gce kubernetes-e2e) in workspace /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker
> git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
> git config remote.origin.url https://github.com/kubernetes/kubernetes/ # timeout=10
Fetching upstream changes from https://github.com/kubernetes/kubernetes/
> git --version # timeout=10
> git -c core.askpass=true fetch --tags --progress https://github.com/kubernetes/kubernetes/ +refs/heads/*:refs/remotes/origin/*
> git rev-parse refs/remotes/origin/master^{commit} # timeout=10
> git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
Checking out Revision f93f77766dd24bdffc4d046df0bc0360978e2f2a (refs/remotes/origin/master)
> git config core.sparsecheckout # timeout=10
> git checkout -f f93f77766dd24bdffc4d046df0bc0360978e2f2a
> git rev-list 5c903dbcacb423158e3f363bcbb27eef58f95218 # timeout=10
[kubernetes-e2e-gce-coreos-docker] $ /bin/sh -xe /tmp/hudson6347487644246758023.sh
+ /bin/bash -c /home/jenkins/rune2e.sh
build/make-clean.sh
+++ [1023 22:33:25] Verifying Prerequisites....
+++ [1023 22:33:25] Cleaning out _output/dockerized/bin/ via docker build image
+++ [1023 22:33:25] Running build command....
+++ [1023 22:33:27] Removing data container
+++ [1023 22:33:28] Cleaning out local _output directory
+++ [1023 22:33:28] Deleting docker image kube-build:build-f913e38b85
Untagged: kube-build:build-f913e38b85
Deleted: a1241b066e5dbd3d58cfad1b5f6c05c457768b847698114216cbc20f5fb6db16
Deleted: 6856be70268d7d49451d5bdbdd63246c9fdde1697ab51bdc956cc3627871b351
Deleted: 33f247fae067d99b018da6ebccaa96f24a8fc5023b0046abe5746531fc3b90fc
Deleted: c5ffb1fdcb89fa1e72a5e79fab726a2992fd1fd84ed3f779ac247568b71ad3e1
Deleted: 25e8f6c074c76a971604e5688b60687d4c55b84d24e594d411fbe1071b32bd86
Deleted: db3f9a565e286cc315b47cb2a6743911800de45c9e6c1403e02ecd74cf1fae83
Deleted: da5975c9103e7f0908b256381fb8bac1d25db368b8448fd4b5ce279faee2dde4
Deleted: 5984c648cdb9decd2e322dca78405a42f4de462c03b50fdb2500a521cc0cdfa2
Deleted: 3dbfa74a1998cc54127bb48523b339b34be09eed8a0df85d02755493f128ccaf
Deleted: c7e05286dd50ffd3f3d7cff5d4d51bc9b84d20a8d6eb82a700474c227cfeca3d
Deleted: 57a8a3934b3da0ba2b599e6f14b3387a807e339588cc85e66b1cb76c2256680a
Deleted: cbb6ae4c53e61321cfec31a80de67fd705200aa30f46c3675f1db139f88f026b
Deleted: 2fa64f39fef79c404bf0ed883c7b3130a6b396557c3b55418fad3e863551a800
Deleted: d0c9ce67658e65db33413fc97cff9cb672a392ba0329d6260a74ae2445a83566
Deleted: 103daa135ef41a43f575b91dd5ba3c18cbaeff74d11e38231aa46fe182ab7e0c
Deleted: c0ac0976f1932fa985cddff423a00f610a65e2cc37644720f3da68851b56998c
+++ [1023 22:33:36] Cleaning all other untagged docker images
rm -rf _output
rm -rf Godeps/_workspace/pkg
KUBE_RELEASE_RUN_TESTS=n build/release.sh
+++ [1023 22:33:37] Verifying Prerequisites....
+++ [1023 22:33:37] Building Docker image kube-build:cross.
+++ [1023 22:33:39] Building Docker image kube-build:build-f913e38b85.
+++ [1023 22:34:46] Running build command....
+++ [1023 22:34:46] Creating data container
+++ [1023 22:34:47] Building go targets for linux/amd64:
cmd/kube-proxy
cmd/kube-apiserver
cmd/kube-controller-manager
cmd/kubelet
cmd/kubemark
cmd/hyperkube
cmd/linkcheck
plugin/cmd/kube-scheduler
+++ [1023 22:36:40] Multiple platforms requested, but available 7G < threshold 11G, building platforms in serial
+++ [1023 22:36:40] Building go targets for linux/amd64:
cmd/kubectl
cmd/integration
cmd/gendocs
cmd/genman
cmd/mungedocs
cmd/genbashcomp
cmd/genconversion
cmd/gendeepcopy
cmd/genswaggertypedocs
examples/k8petstore/web-server/src
github.com/onsi/ginkgo/ginkgo
test/e2e/e2e.test
+++ [1023 22:37:13] Building go targets for linux/386:
cmd/kubectl
cmd/integration
cmd/gendocs
cmd/genman
cmd/mungedocs
cmd/genbashcomp
cmd/genconversion
cmd/gendeepcopy
cmd/genswaggertypedocs
examples/k8petstore/web-server/src
github.com/onsi/ginkgo/ginkgo
test/e2e/e2e.test
+++ [1023 22:38:21] Building go targets for linux/arm:
cmd/kubectl
cmd/integration
cmd/gendocs
cmd/genman
cmd/mungedocs
cmd/genbashcomp
cmd/genconversion
cmd/gendeepcopy
cmd/genswaggertypedocs
examples/k8petstore/web-server/src
github.com/onsi/ginkgo/ginkgo
test/e2e/e2e.test
+++ [1023 22:39:31] Building go targets for darwin/amd64:
cmd/kubectl
cmd/integration
cmd/gendocs
cmd/genman
cmd/mungedocs
cmd/genbashcomp
cmd/genconversion
cmd/gendeepcopy
cmd/genswaggertypedocs
examples/k8petstore/web-server/src
github.com/onsi/ginkgo/ginkgo
test/e2e/e2e.test
+++ [1023 22:40:40] Building go targets for darwin/386:
cmd/kubectl
cmd/integration
cmd/gendocs
cmd/genman
cmd/mungedocs
cmd/genbashcomp
cmd/genconversion
cmd/gendeepcopy
cmd/genswaggertypedocs
examples/k8petstore/web-server/src
github.com/onsi/ginkgo/ginkgo
test/e2e/e2e.test
+++ [1023 22:41:49] Building go targets for windows/amd64:
cmd/kubectl
cmd/integration
cmd/gendocs
cmd/genman
cmd/mungedocs
cmd/genbashcomp
cmd/genconversion
cmd/gendeepcopy
cmd/genswaggertypedocs
examples/k8petstore/web-server/src
github.com/onsi/ginkgo/ginkgo
test/e2e/e2e.test
+++ [1023 22:42:58] Placing binaries
+++ [1023 22:43:34] Running build command....
+++ [1023 22:43:35] Output directory is local. No need to copy results out.
+++ [1023 22:43:35] Building tarball: salt
+++ [1023 22:43:35] Building tarball: server linux-amd64
+++ [1023 22:43:35] Starting tarball: client darwin-386
+++ [1023 22:43:35] Starting tarball: client darwin-amd64
+++ [1023 22:43:35] Starting tarball: client linux-386
+++ [1023 22:43:35] Starting tarball: client linux-amd64
+++ [1023 22:43:35] Starting tarball: client linux-arm
+++ [1023 22:43:35] Starting tarball: client windows-amd64
+++ [1023 22:43:35] Waiting on tarballs
+++ [1023 22:43:37] Starting Docker build for image: kube-apiserver
+++ [1023 22:43:37] Starting Docker build for image: kube-controller-manager
+++ [1023 22:43:37] Starting Docker build for image: kube-scheduler
+++ [1023 22:43:52] Deleting docker image gcr.io/google_containers/kube-scheduler:d16378077f1dc5f940c4714a2baefe3c
Untagged: gcr.io/google_containers/kube-scheduler:d16378077f1dc5f940c4714a2baefe3c
Deleted: 6104e0f8ca082620d2f09497e8c7e79adfbd31075cb3c2b1125a300e720e0b94
+++ [1023 22:43:53] Deleting docker image gcr.io/google_containers/kube-apiserver:edaef83d47dbff8d6bea025fbbd5f031
+++ [1023 22:43:53] Deleting docker image gcr.io/google_containers/kube-controller-manager:362e1b1ce965527f4877cc42e4992142
Untagged: gcr.io/google_containers/kube-apiserver:edaef83d47dbff8d6bea025fbbd5f031
Deleted: dad8dfc506b95258e9bd38f1cbfcbddf4c37337818c6f5e6b4542d9ee76242bf
Untagged: gcr.io/google_containers/kube-controller-manager:362e1b1ce965527f4877cc42e4992142
Deleted: 60cd9347e41289ffb2f1e62b7cd94a2d30a890ada8d5addbe64595df180fe64c
+++ [1023 22:43:55] Docker builds done
+++ [1023 22:43:55] Pulling and writing Docker image for addon: beta.gcr.io/google_containers/pause:2.0
+++ [1023 22:43:55] Pulling and writing Docker image for addon: gcr.io/google_containers/kube-registry-proxy:0.3
Pulling repository gcr.io/google_containers/kube-registry-proxy
Pulling repository beta.gcr.io/google_containers/pause
9b9342c134bb: Pulling image (0.3) from gcr.io/google_containers/kube-registry-proxy
9b9342c134bb: Pulling image (0.3) from gcr.io/google_containers/kube-registry-proxy, endpoint: https://gcr.io/v1/
9981ca1bbdb5: Pulling image (2.0) from beta.gcr.io/google_containers/pause
9981ca1bbdb5: Pulling image (2.0) from beta.gcr.io/google_containers/pause, endpoint: https://beta.gcr.io/v1/
9b9342c134bb: Pulling dependent layers
4c8cbfd2973e: Download complete
60c52dbe9d91: Download complete
597ba085d527: Download complete
6123f5d4d4ca: Download complete
43d38e8f6e1d: Download complete
291fa36bf11b: Download complete
234cd2e70045: Download complete
fc640eeacf3c: Download complete
7d416b297daa: Download complete
09ef194b1da2: Download complete
0d1347411f62: Download complete
93e4b6c31f6f: Download complete
aa4a274da4bc: Download complete
4db1d8acab63: Download complete
9b9342c134bb: Download complete
9b9342c134bb: Download complete
Status: Image is up to date for gcr.io/google_containers/kube-registry-proxy:0.3
9981ca1bbdb5: Pulling dependent layers
6995a49b90f2: Download complete
9981ca1bbdb5: Download complete
9981ca1bbdb5: Download complete
Status: Image is up to date for beta.gcr.io/google_containers/pause:2.0
+++ [1023 22:44:07] Addon images done
+++ [1023 22:44:33] Building tarball: full
+++ [1023 22:44:33] Building tarball: test
+ echo --------------------------------------------------------------------------------
--------------------------------------------------------------------------------
+ echo 'Initial Environment:'
Initial Environment:
+ printenv
+ sort
BUILD_DISPLAY_NAME=#56
BUILD_ID=56
BUILD_NUMBER=56
BUILD_TAG=jenkins-kubernetes-e2e-gce-coreos-docker-56
BUILD_URL=https://jenkins.coreos.systems/job/kubernetes-e2e-gce-coreos-docker/56/
E2E_DOWN=true
E2E_NETWORK=e2e
E2E_TEST=true
E2E_UP=true
E2E_ZONE=us-east1-b
EXECUTOR_NUMBER=0
GIT_BRANCH=origin/master
GIT_COMMIT=f93f77766dd24bdffc4d046df0bc0360978e2f2a
GIT_PREVIOUS_COMMIT=5c903dbcacb423158e3f363bcbb27eef58f95218
GIT_PREVIOUS_SUCCESSFUL_COMMIT=5c903dbcacb423158e3f363bcbb27eef58f95218
GIT_URL=https://github.com/kubernetes/kubernetes/
GOROOT=/usr/local/go
HOME=/home/jenkins
HUDSON_COOKIE=7254ea09-b400-436f-bf62-6c88fe9fa59d
HUDSON_HOME=/var/jenkins_home
HUDSON_SERVER_COOKIE=984c99441cf217d3
HUDSON_URL=https://jenkins.coreos.systems/
JENKINS_HOME=/var/jenkins_home
JENKINS_SERVER_COOKIE=984c99441cf217d3
JENKINS_URL=https://jenkins.coreos.systems/
JOB_NAME=kubernetes-pull-build-test-e2e-gce
JOB_URL=https://jenkins.coreos.systems/job/kubernetes-e2e-gce-coreos-docker/
KUBE_GCE_MINION_IMAGE=coreos-stable-766-4-0-v20150929
KUBE_GCE_MINION_PROJECT=coreos-cloud
KUBE_OS_DISTRIBUTION=coreos
KUBE_RUN_FROM_OUTPUT=y
LANG=en_US.UTF-8
LOGNAME=jenkins
MAIL=/var/mail/jenkins
NODE_LABELS=gce kubernetes-e2e kubernetes-e2e-coreos-gce
NODE_NAME=kubernetes-e2e-coreos-gce
PATH=/home/jenkins/google-cloud-sdk/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/usr/local/go/bin
PROJECT=coreos-gce-testing
PWD=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker
SHELL=/bin/bash
SHLVL=4
SSH_CLIENT=130.211.186.16 42462 22
SSH_CONNECTION=130.211.186.16 42462 10.240.0.2 22
USER=jenkins
_=/usr/bin/printenv
WORKSPACE=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker
XDG_RUNTIME_DIR=/run/user/1012
XDG_SESSION_ID=25
+ echo --------------------------------------------------------------------------------
--------------------------------------------------------------------------------
+ [[ '' == \t\r\u\e ]]
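(Aside on reading the trace: the `\t\r\u\e` above is just the word `true`. Under `set -x`, bash backslash-escapes each character of the unquoted pattern word in a `[[ … == pattern ]]` test. A minimal sketch, with `VAR` as an illustrative stand-in for the empty variable being tested:

```shell
#!/usr/bin/env bash
# Sketch: under `set -x`, the pattern word of [[ ... == true ]] is traced
# character-by-character with backslashes, appearing as `\t\r\u\e`.
# VAR is illustrative; in the log the tested value expanded to ''.
set -x
VAR=""
if [[ "$VAR" == true ]]; then
  echo "matched"
else
  echo "not matched"
fi
# prints: not matched
```

The comparison fails here for the same reason as in the log: the variable is empty, so the `true` branch is skipped.)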
+ export HOME=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker
+ HOME=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker
+ E2E_OPT=
+ [[ kubernetes-pull-build-test-e2e-gce =~ ^kubernetes-.*-gce ]]
+ KUBERNETES_PROVIDER=gce
+ : 1
+ : us-east1-b
+ : 6
+ [[ gce == \a\w\s ]]
+ REBOOT_SKIP_TESTS=("Autoscaling\sSuite" "Skipped" "Restart\sshould\srestart\sall\snodes" "Example")
+ GCE_DEFAULT_SKIP_TESTS=("${REBOOT_SKIP_TESTS[@]}" "Reboot" "ServiceLoadBalancer")
+ GKE_REQUIRED_SKIP_TESTS=("Nodes" "Etcd\sFailure" "MasterCerts" "Daemon\sset\sshould\srun\sand\sstop\scomplex\sdaemon" "Deployment" "experimental\sresource\susage\stracking" "Shell")
+ AWS_REQUIRED_SKIP_TESTS=("experimental\sresource\susage\stracking")
+ DISRUPTIVE_TESTS=("DaemonRestart" "Etcd\sfailure" "Nodes\sResize" "Reboot" "Services.*restarting")
+ GCE_FLAKY_TESTS=("DaemonRestart\sController\sManager" "Daemon\sset\sshould" "Jobs\sare\slocally\srestarted" "Resource\susage\sof\ssystem\scontainers" "should\sbe\sable\sto\schange\sthe\stype\sand\snodeport\ssettings\sof\sa\sservice" "allows\sscheduling\sof\spods\son\sa\sminion\safter\sit\srejoins\sthe\scluster" "should\srelease\sthe\sload\sbalancer\swhen\sType\sgoes\sfrom\sLoadBalancer" "should\scorrectly\sserve\sidentically\snamed\sservices\sin\sdifferent\snamespaces\son\sdifferent\sexternal\sIP\saddresses" "should\sbe\sable\sto\screate\sa\sfunctioning\sexternal\sload\sbalancer" "pod\sw/two\sRW\sPDs\sboth\smounted\sto\sone\scontainer,\swrite\sto\sPD" "pod\sw/\sa\sreadonly\sPD\son\stwo\shosts,\sthen\sremove\sboth" "deployment.*\sin\sthe\sright\sorder")
+ GCE_SLOW_TESTS=("SchedulerPredicates\svalidates\sMaxPods\slimit " "Nodes\sResize" "resource\susage\stracking" "monotonically\sincreasing\srestart\scount" "Garbage\scollector\sshould" "KubeProxy\sshould\stest\skube-proxy" "cap\sback-off\sat\sMaxContainerBackOff")
+ GCE_PARALLEL_SKIP_TESTS=("Nodes\sNetwork" "MaxPods" "Resource\susage\sof\ssystem\scontainers" "SchedulerPredicates" "resource\susage\stracking" "${DISRUPTIVE_TESTS[@]}")
+ GCE_PARALLEL_FLAKY_TESTS=("DaemonRestart" "Elasticsearch" "Namespaces.*should\sdelete\sfast" "PD" "ServiceAccounts" "Services.*change\sthe\stype" "Services.*functioning\sexternal\sload\sbalancer" "Services.*identically\snamed" "Services.*release.*load\sbalancer" "Services.*endpoint" "Services.*up\sand\sdown" "Networking\sshould\sfunction\sfor\sintra-pod\scommunication")
+ GCE_SOAK_CONTINUOUS_SKIP_TESTS=("Density.*30\spods" "Elasticsearch" "external\sload\sbalancer" "identically\snamed\sservices" "network\spartition" "Services.*Type\sgoes\sfrom" "${DISRUPTIVE_TESTS[@]}")
+ GCE_RELEASE_SKIP_TESTS=()
+ case ${JOB_NAME} in
+ : jenkins-pull-gce-e2e-0
+ : e2e
+ : y
++ join_regex_allow_empty 'Autoscaling\sSuite' Skipped 'Restart\sshould\srestart\sall\snodes' Example Reboot ServiceLoadBalancer 'Nodes\sNetwork' MaxPods 'Resource\susage\sof\ssystem\scontainers' SchedulerPredicates 'resource\susage\stracking' DaemonRestart 'Etcd\sfailure' 'Nodes\sResize' Reboot 'Services.*restarting' 'DaemonRestart\sController\sManager' 'Daemon\sset\sshould' 'Jobs\sare\slocally\srestarted' 'Resource\susage\sof\ssystem\scontainers' 'should\sbe\sable\sto\schange\sthe\stype\sand\snodeport\ssettings\sof\sa\sservice' 'allows\sscheduling\sof\spods\son\sa\sminion\safter\sit\srejoins\sthe\scluster' 'should\srelease\sthe\sload\sbalancer\swhen\sType\sgoes\sfrom\sLoadBalancer' 'should\scorrectly\sserve\sidentically\snamed\sservices\sin\sdifferent\snamespaces\son\sdifferent\sexternal\sIP\saddresses' 'should\sbe\sable\sto\screate\sa\sfunctioning\sexternal\sload\sbalancer' 'pod\sw/two\sRW\sPDs\sboth\smounted\sto\sone\scontainer,\swrite\sto\sPD' 'pod\sw/\sa\sreadonly\sPD\son\stwo\shosts,\sthen\sremove\sboth' 'deployment.*\sin\sthe\sright\sorder' DaemonRestart Elasticsearch 'Namespaces.*should\sdelete\sfast' PD ServiceAccounts 'Services.*change\sthe\stype' 'Services.*functioning\sexternal\sload\sbalancer' 'Services.*identically\snamed' 'Services.*release.*load\sbalancer' 'Services.*endpoint' 'Services.*up\sand\sdown' 'Networking\sshould\sfunction\sfor\sintra-pod\scommunication' 'SchedulerPredicates\svalidates\sMaxPods\slimit' 'Nodes\sResize' 'resource\susage\stracking' 'monotonically\sincreasing\srestart\scount' 'Garbage\scollector\sshould' 'KubeProxy\sshould\stest\skube-proxy' 'cap\sback-off\sat\sMaxContainerBackOff'
++ local 'IFS=|'
++ echo 'Autoscaling\sSuite|Skipped|Restart\sshould\srestart\sall\snodes|Example|Reboot|ServiceLoadBalancer|Nodes\sNetwork|MaxPods|Resource\susage\sof\ssystem\scontainers|SchedulerPredicates|resource\susage\stracking|DaemonRestart|Etcd\sfailure|Nodes\sResize|Reboot|Services.*restarting|DaemonRestart\sController\sManager|Daemon\sset\sshould|Jobs\sare\slocally\srestarted|Resource\susage\sof\ssystem\scontainers|should\sbe\sable\sto\schange\sthe\stype\sand\snodeport\ssettings\sof\sa\sservice|allows\sscheduling\sof\spods\son\sa\sminion\safter\sit\srejoins\sthe\scluster|should\srelease\sthe\sload\sbalancer\swhen\sType\sgoes\sfrom\sLoadBalancer|should\scorrectly\sserve\sidentically\snamed\sservices\sin\sdifferent\snamespaces\son\sdifferent\sexternal\sIP\saddresses|should\sbe\sable\sto\screate\sa\sfunctioning\sexternal\sload\sbalancer|pod\sw/two\sRW\sPDs\sboth\smounted\sto\sone\scontainer,\swrite\sto\sPD|pod\sw/\sa\sreadonly\sPD\son\stwo\shosts,\sthen\sremove\sboth|deployment.*\sin\sthe\sright\sorder|DaemonRestart|Elasticsearch|Namespaces.*should\sdelete\sfast|PD|ServiceAccounts|Services.*change\sthe\stype|Services.*functioning\sexternal\sload\sbalancer|Services.*identically\snamed|Services.*release.*load\sbalancer|Services.*endpoint|Services.*up\sand\sdown|Networking\sshould\sfunction\sfor\sintra-pod\scommunication|SchedulerPredicates\svalidates\sMaxPods\slimit|Nodes\sResize|resource\susage\stracking|monotonically\sincreasing\srestart\scount|Garbage\scollector\sshould|KubeProxy\sshould\stest\skube-proxy|cap\sback-off\sat\sMaxContainerBackOff'
+ : '--ginkgo.skip=Autoscaling\sSuite|Skipped|Restart\sshould\srestart\sall\snodes|Example|Reboot|ServiceLoadBalancer|Nodes\sNetwork|MaxPods|Resource\susage\sof\ssystem\scontainers|SchedulerPredicates|resource\susage\stracking|DaemonRestart|Etcd\sfailure|Nodes\sResize|Reboot|Services.*restarting|DaemonRestart\sController\sManager|Daemon\sset\sshould|Jobs\sare\slocally\srestarted|Resource\susage\sof\ssystem\scontainers|should\sbe\sable\sto\schange\sthe\stype\sand\snodeport\ssettings\sof\sa\sservice|allows\sscheduling\sof\spods\son\sa\sminion\safter\sit\srejoins\sthe\scluster|should\srelease\sthe\sload\sbalancer\swhen\sType\sgoes\sfrom\sLoadBalancer|should\scorrectly\sserve\sidentically\snamed\sservices\sin\sdifferent\snamespaces\son\sdifferent\sexternal\sIP\saddresses|should\sbe\sable\sto\screate\sa\sfunctioning\sexternal\sload\sbalancer|pod\sw/two\sRW\sPDs\sboth\smounted\sto\sone\scontainer,\swrite\sto\sPD|pod\sw/\sa\sreadonly\sPD\son\stwo\shosts,\sthen\sremove\sboth|deployment.*\sin\sthe\sright\sorder|DaemonRestart|Elasticsearch|Namespaces.*should\sdelete\sfast|PD|ServiceAccounts|Services.*change\sthe\stype|Services.*functioning\sexternal\sload\sbalancer|Services.*identically\snamed|Services.*release.*load\sbalancer|Services.*endpoint|Services.*up\sand\sdown|Networking\sshould\sfunction\sfor\sintra-pod\scommunication|SchedulerPredicates\svalidates\sMaxPods\slimit|Nodes\sResize|resource\susage\stracking|monotonically\sincreasing\srestart\scount|Garbage\scollector\sshould|KubeProxy\sshould\stest\skube-proxy|cap\sback-off\sat\sMaxContainerBackOff'
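(Aside: from the `IFS='|'` and `echo` lines in the trace, `join_regex_allow_empty` appears to simply join its arguments with `|` to build the single `--ginkgo.skip=` alternation regex. A minimal sketch of that behavior, assuming this is all the helper does:

```shell
#!/usr/bin/env bash
# Sketch of the join_regex_allow_empty behavior seen in the trace:
# with IFS set to '|', "$*" expands to the arguments joined by '|',
# yielding one alternation regex suitable for --ginkgo.skip=.
join_regex_allow_empty() {
  local IFS="|"
  echo "$*"
}

join_regex_allow_empty 'Reboot' 'Example' 'Nodes\sResize'
# prints: Reboot|Example|Nodes\sResize
```

This is why skip patterns such as `Reboot` that appear in several of the arrays above show up more than once in the final regex: the join is a plain concatenation with no de-duplication.)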
+ : pull-e2e-0
+ : -0
+ : coreos-gce-testing
+ : true
+ NUM_MINIONS=6
+ export KUBE_AWS_INSTANCE_PREFIX=jenkins-pull-gce-e2e-0
+ KUBE_AWS_INSTANCE_PREFIX=jenkins-pull-gce-e2e-0
+ export KUBE_AWS_ZONE=us-east1-b
+ KUBE_AWS_ZONE=us-east1-b
+ export INSTANCE_PREFIX=jenkins-pull-gce-e2e-0
+ INSTANCE_PREFIX=jenkins-pull-gce-e2e-0
+ export KUBE_GCE_ZONE=us-east1-b
+ KUBE_GCE_ZONE=us-east1-b
+ export KUBE_GCE_NETWORK=e2e
+ KUBE_GCE_NETWORK=e2e
+ export KUBE_GCE_INSTANCE_PREFIX=pull-e2e-0
+ KUBE_GCE_INSTANCE_PREFIX=pull-e2e-0
+ export KUBE_GCS_STAGING_PATH_SUFFIX=-0
+ KUBE_GCS_STAGING_PATH_SUFFIX=-0
+ export CLUSTER_NAME=jenkins-pull-gce-e2e-0
+ CLUSTER_NAME=jenkins-pull-gce-e2e-0
+ export ZONE=us-east1-b
+ ZONE=us-east1-b
+ export KUBE_GKE_NETWORK=e2e
+ KUBE_GKE_NETWORK=e2e
+ export E2E_SET_CLUSTER_API_VERSION=
+ E2E_SET_CLUSTER_API_VERSION=
+ export DOGFOOD_GCLOUD=
+ DOGFOOD_GCLOUD=
+ export CMD_GROUP=
+ CMD_GROUP=
+ [[ ! -z '' ]]
+ export E2E_MIN_STARTUP_PODS=1
+ E2E_MIN_STARTUP_PODS=1
+ export KUBE_ENABLE_CLUSTER_MONITORING=
+ KUBE_ENABLE_CLUSTER_MONITORING=
+ export KUBE_ENABLE_DEPLOYMENTS=true
+ KUBE_ENABLE_DEPLOYMENTS=true
+ export KUBE_ENABLE_EXPERIMENTAL_API=
+ KUBE_ENABLE_EXPERIMENTAL_API=
+ export MASTER_SIZE=
+ MASTER_SIZE=
+ export MINION_SIZE=
+ MINION_SIZE=
+ export MINION_DISK_SIZE=
+ MINION_DISK_SIZE=
+ export NUM_MINIONS=6
+ NUM_MINIONS=6
+ export TEST_CLUSTER_LOG_LEVEL=
+ TEST_CLUSTER_LOG_LEVEL=
+ export TEST_CLUSTER_RESYNC_PERIOD=
+ TEST_CLUSTER_RESYNC_PERIOD=
+ export PROJECT=coreos-gce-testing
+ PROJECT=coreos-gce-testing
+ export JENKINS_PUBLISHED_VERSION=ci/latest
+ JENKINS_PUBLISHED_VERSION=ci/latest
+ export KUBE_ADMISSION_CONTROL=
+ KUBE_ADMISSION_CONTROL=
+ export KUBERNETES_PROVIDER=gce
+ KUBERNETES_PROVIDER=gce
+ export PATH=/home/jenkins/google-cloud-sdk/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/usr/local/go/bin:/usr/local/go/bin
+ PATH=/home/jenkins/google-cloud-sdk/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/usr/local/go/bin:/usr/local/go/bin
+ export KUBE_SKIP_CONFIRMATIONS=y
+ KUBE_SKIP_CONFIRMATIONS=y
+ export E2E_UP=true
+ E2E_UP=true
+ export E2E_TEST=true
+ E2E_TEST=true
+ export E2E_DOWN=true
+ E2E_DOWN=true
+ export GINKGO_PARALLEL=y
+ GINKGO_PARALLEL=y
+ echo --------------------------------------------------------------------------------
--------------------------------------------------------------------------------
+ echo 'Test Environment:'
Test Environment:
+ sort
+ printenv
BUILD_DISPLAY_NAME=#56
BUILD_ID=56
BUILD_NUMBER=56
BUILD_TAG=jenkins-kubernetes-e2e-gce-coreos-docker-56
BUILD_URL=https://jenkins.coreos.systems/job/kubernetes-e2e-gce-coreos-docker/56/
CLUSTER_NAME=jenkins-pull-gce-e2e-0
CMD_GROUP=
DOGFOOD_GCLOUD=
E2E_DOWN=true
E2E_MIN_STARTUP_PODS=1
E2E_NETWORK=e2e
E2E_SET_CLUSTER_API_VERSION=
E2E_TEST=true
E2E_UP=true
E2E_ZONE=us-east1-b
EXECUTOR_NUMBER=0
GINKGO_PARALLEL=y
GIT_BRANCH=origin/master
GIT_COMMIT=f93f77766dd24bdffc4d046df0bc0360978e2f2a
GIT_PREVIOUS_COMMIT=5c903dbcacb423158e3f363bcbb27eef58f95218
GIT_PREVIOUS_SUCCESSFUL_COMMIT=5c903dbcacb423158e3f363bcbb27eef58f95218
GIT_URL=https://github.com/kubernetes/kubernetes/
GOROOT=/usr/local/go
HOME=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker
HUDSON_COOKIE=7254ea09-b400-436f-bf62-6c88fe9fa59d
HUDSON_HOME=/var/jenkins_home
HUDSON_SERVER_COOKIE=984c99441cf217d3
HUDSON_URL=https://jenkins.coreos.systems/
INSTANCE_PREFIX=jenkins-pull-gce-e2e-0
JENKINS_HOME=/var/jenkins_home
JENKINS_PUBLISHED_VERSION=ci/latest
JENKINS_SERVER_COOKIE=984c99441cf217d3
JENKINS_URL=https://jenkins.coreos.systems/
JOB_NAME=kubernetes-pull-build-test-e2e-gce
JOB_URL=https://jenkins.coreos.systems/job/kubernetes-e2e-gce-coreos-docker/
KUBE_ADMISSION_CONTROL=
KUBE_AWS_INSTANCE_PREFIX=jenkins-pull-gce-e2e-0
KUBE_AWS_ZONE=us-east1-b
KUBE_ENABLE_CLUSTER_MONITORING=
KUBE_ENABLE_DEPLOYMENTS=true
KUBE_ENABLE_EXPERIMENTAL_API=
KUBE_GCE_INSTANCE_PREFIX=pull-e2e-0
KUBE_GCE_MINION_IMAGE=coreos-stable-766-4-0-v20150929
KUBE_GCE_MINION_PROJECT=coreos-cloud
KUBE_GCE_NETWORK=e2e
KUBE_GCE_ZONE=us-east1-b
KUBE_GCS_STAGING_PATH_SUFFIX=-0
KUBE_GKE_NETWORK=e2e
KUBE_OS_DISTRIBUTION=coreos
KUBERNETES_PROVIDER=gce
KUBE_RUN_FROM_OUTPUT=y
KUBE_SKIP_CONFIRMATIONS=y
LANG=en_US.UTF-8
LOGNAME=jenkins
MAIL=/var/mail/jenkins
MASTER_SIZE=
MINION_DISK_SIZE=
MINION_SIZE=
NODE_LABELS=gce kubernetes-e2e kubernetes-e2e-coreos-gce
NODE_NAME=kubernetes-e2e-coreos-gce
NUM_MINIONS=6
PATH=/home/jenkins/google-cloud-sdk/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/usr/local/go/bin:/usr/local/go/bin
PROJECT=coreos-gce-testing
PWD=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker
SHELL=/bin/bash
SHLVL=4
SSH_CLIENT=130.211.186.16 42462 22
SSH_CONNECTION=130.211.186.16 42462 10.240.0.2 22
TEST_CLUSTER_LOG_LEVEL=
TEST_CLUSTER_RESYNC_PERIOD=
USER=jenkins
_=/usr/bin/printenv
WORKSPACE=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker
XDG_RUNTIME_DIR=/run/user/1012
XDG_SESSION_ID=25
ZONE=us-east1-b
+ echo --------------------------------------------------------------------------------
--------------------------------------------------------------------------------
+ [[ true == \t\r\u\e ]]
+ [[ y =~ ^[yY]$ ]]
+ echo 'Found KUBE_RUN_FROM_OUTPUT=y; will use binaries from _output'
Found KUBE_RUN_FROM_OUTPUT=y; will use binaries from _output
+ cp _output/release-tars/kubernetes-client-darwin-386.tar.gz _output/release-tars/kubernetes-client-darwin-amd64.tar.gz _output/release-tars/kubernetes-client-linux-386.tar.gz _output/release-tars/kubernetes-client-linux-amd64.tar.gz _output/release-tars/kubernetes-client-linux-arm.tar.gz _output/release-tars/kubernetes-client-windows-amd64.tar.gz _output/release-tars/kubernetes-salt.tar.gz _output/release-tars/kubernetes-server-linux-amd64.tar.gz _output/release-tars/kubernetes.tar.gz _output/release-tars/kubernetes-test.tar.gz .
+ [[ ! '' == \t\r\u\e ]]
+ [[ gce == \a\w\s ]]
+ mkdir -p /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.ssh/
+ cp /var/lib/jenkins/gce_keys/google_compute_engine /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.ssh/
+ cp /var/lib/jenkins/gce_keys/google_compute_engine.pub /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.ssh/
+ md5sum kubernetes-client-darwin-386.tar.gz kubernetes-client-darwin-amd64.tar.gz kubernetes-client-linux-386.tar.gz kubernetes-client-linux-amd64.tar.gz kubernetes-client-linux-arm.tar.gz kubernetes-client-windows-amd64.tar.gz kubernetes-salt.tar.gz kubernetes-server-linux-amd64.tar.gz kubernetes.tar.gz kubernetes-test.tar.gz
119ab3b14aed9ae968fe2a88206fdee6 kubernetes-client-darwin-386.tar.gz
b98e94fda6b4b39caed0930a63fdb4f0 kubernetes-client-darwin-amd64.tar.gz
4d388f59612936007eb8e695418105a3 kubernetes-client-linux-386.tar.gz
3181d4c4b2321e06fe42b1852a8a2759 kubernetes-client-linux-amd64.tar.gz
3cfa285b6a06a5ff992f93b551fe3b69 kubernetes-client-linux-arm.tar.gz
e43cbca4382f7e0757bf0b99aadf72fe kubernetes-client-windows-amd64.tar.gz
b3f6268f75980127b8a75090e3722dc3 kubernetes-salt.tar.gz
5e917bf5959a6dcfd0f47a8ff1afe604 kubernetes-server-linux-amd64.tar.gz
34a3d1089c720fbc873de4b9ce257d36 kubernetes.tar.gz
bb8e0f19f4731b3da0e57737a54a7055 kubernetes-test.tar.gz
+ tar -xzf kubernetes.tar.gz
+ tar -xzf kubernetes-test.tar.gz
+ cd kubernetes
+ ARTIFACTS=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/_artifacts
+ mkdir -p /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/_artifacts
+ export E2E_REPORT_DIR=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/_artifacts
+ E2E_REPORT_DIR=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/_artifacts
+ declare -r gcp_list_resources_script=./cluster/gce/list-resources.sh
+ declare -r gcp_resources_before=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/_artifacts/gcp-resources-before.txt
+ declare -r gcp_resources_cluster_up=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/_artifacts/gcp-resources-cluster-up.txt
+ declare -r gcp_resources_after=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/_artifacts/gcp-resources-after.txt
+ [[ gce == \g\c\e ]]
+ [[ -x ./cluster/gce/list-resources.sh ]]
+ gcp_list_resources=true
+ [[ ! -z '' ]]
+ [[ true == \t\r\u\e ]]
+ go run ./hack/e2e.go -v --down
2015/10/23 22:46:44 e2e.go:303: Running: teardown
Project: coreos-gce-testing
Zone: us-east1-b
Shutting down test cluster in background.
ERROR: (gcloud.compute.firewall-rules.delete) Some requests did not succeed:
- The resource 'projects/coreos-gce-testing/global/firewalls/pull-e2e-0-minion-pull-e2e-0-http-alt' was not found
ERROR: (gcloud.compute.firewall-rules.delete) Some requests did not succeed:
- The resource 'projects/coreos-gce-testing/global/firewalls/pull-e2e-0-minion-pull-e2e-0-nodeports' was not found
Bringing down cluster using provider: gce
WARNING: Component [preview] no longer exists.
All components are up to date.
All components are up to date.
All components are up to date.
Project: coreos-gce-testing
Zone: us-east1-b
Bringing down cluster
ERROR: (gcloud.compute.instance-groups.managed.describe) Could not fetch resource:
- The resource 'projects/coreos-gce-testing/zones/us-east1-b/instanceGroupManagers/pull-e2e-0-minion-group' was not found
property "clusters.coreos-gce-testing_pull-e2e-0" unset.
property "users.coreos-gce-testing_pull-e2e-0" unset.
property "users.coreos-gce-testing_pull-e2e-0-basic-auth" unset.
property "contexts.coreos-gce-testing_pull-e2e-0" unset.
Cleared config for coreos-gce-testing_pull-e2e-0 from /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
Done
2015/10/23 22:46:53 e2e.go:305: Step 'teardown' finished in 9.04165899s
+ [[ true == \t\r\u\e ]]
+ ./cluster/gce/list-resources.sh
+ go run ./hack/e2e.go -v --up
2015/10/23 22:46:59 e2e.go:303: Running: get status
Project: coreos-gce-testing
Zone: us-east1-b
Client Version: version.Info{Major:"1", Minor:"2+", GitVersion:"v1.2.0-alpha.2.246+f93f77766dd24b-dirty", GitCommit:"f93f77766dd24bdffc4d046df0bc0360978e2f2a", GitTreeState:"dirty"}
error: couldn't read version from server: Get http://localhost:8080/version: dial tcp 127.0.0.1:8080: connection refused
2015/10/23 22:46:59 e2e.go:309: Error running get status: exit status 1
2015/10/23 22:46:59 e2e.go:305: Step 'get status' finished in 50.566611ms
2015/10/23 22:46:59 e2e.go:303: Running: up
Project: coreos-gce-testing
Zone: us-east1-b
... Starting cluster using provider: gce
... calling verify-prereqs
WARNING: Component [preview] no longer exists.
All components are up to date.
All components are up to date.
All components are up to date.
... calling kube-up
Project: coreos-gce-testing
Zone: us-east1-b
+++ Staging server tars to Google Storage: gs://kubernetes-staging-9c9cb47be7/devel-0
+++ kubernetes-server-linux-amd64.tar.gz uploaded (sha1 = ee1a0614128eaee0693df705bc29e6763e4c8ad6)
+++ kubernetes-salt.tar.gz uploaded (sha1 = 9eb08acde028228b54dd80ad269afee48acf261a)
Starting master and configuring firewalls
Created [https://www.googleapis.com/compute/v1/projects/coreos-gce-testing/zones/us-east1-b/disks/pull-e2e-0-master-pd].
NAME ZONE SIZE_GB TYPE STATUS
pull-e2e-0-master-pd us-east1-b 20 pd-ssd READY
Created [https://www.googleapis.com/compute/v1/projects/coreos-gce-testing/global/firewalls/pull-e2e-0-master-https].
NAME NETWORK SRC_RANGES RULES SRC_TAGS TARGET_TAGS
pull-e2e-0-master-https e2e 0.0.0.0/0 tcp:443 pull-e2e-0-master
Created [https://www.googleapis.com/compute/v1/projects/coreos-gce-testing/regions/us-east1/addresses/pull-e2e-0-master-ip].
+++ Logging using Fluentd to elasticsearch
Created [https://www.googleapis.com/compute/v1/projects/coreos-gce-testing/global/firewalls/pull-e2e-0-minion-all].
NAME NETWORK SRC_RANGES RULES SRC_TAGS TARGET_TAGS
pull-e2e-0-minion-all e2e 10.245.0.0/16 tcp,udp,icmp,esp,ah,sctp pull-e2e-0-minion
Created [https://www.googleapis.com/compute/v1/projects/coreos-gce-testing/zones/us-east1-b/instances/pull-e2e-0-master].
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
pull-e2e-0-master us-east1-b n1-standard-2 10.240.0.2 104.196.0.155 RUNNING
Creating minions.
Attempt 1 to create pull-e2e-0-minion-template
WARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks/persistent-disks#pdperformance.
Created [https://www.googleapis.com/compute/v1/projects/coreos-gce-testing/global/instanceTemplates/pull-e2e-0-minion-template].
NAME MACHINE_TYPE PREEMPTIBLE CREATION_TIMESTAMP
pull-e2e-0-minion-template n1-standard-2 2015-10-23T15:47:56.171-07:00
Created [https://www.googleapis.com/compute/v1/projects/coreos-gce-testing/zones/us-east1-b/instanceGroupManagers/pull-e2e-0-minion-group].
NAME ZONE BASE_INSTANCE_NAME SIZE TARGET_SIZE GROUP INSTANCE_TEMPLATE AUTOSCALED
pull-e2e-0-minion-group us-east1-b pull-e2e-0-minion 6 pull-e2e-0-minion-group pull-e2e-0-minion-template
Waiting for group to become stable, current operations: creating: 6
Waiting for group to become stable, current operations: creating: 6
Waiting for group to become stable, current operations: creating: 6
Waiting for group to become stable, current operations: creating: 6
Group is stable
MINION_NAMES=pull-e2e-0-minion-1dli pull-e2e-0-minion-djcb pull-e2e-0-minion-dp0i pull-e2e-0-minion-l2bc pull-e2e-0-minion-n5ko pull-e2e-0-minion-zr43
Using master: pull-e2e-0-master (external IP: 104.196.0.155)
Waiting up to 300 seconds for cluster initialization.
This will continually check to see if the API for kubernetes is reachable.
This may time out if there was some uncaught error during start up.
..........Kubernetes cluster created.
cluster "coreos-gce-testing_pull-e2e-0" set.
user "coreos-gce-testing_pull-e2e-0" set.
context "coreos-gce-testing_pull-e2e-0" set.
switched to context "coreos-gce-testing_pull-e2e-0".
user "coreos-gce-testing_pull-e2e-0-basic-auth" set.
Wrote config for coreos-gce-testing_pull-e2e-0 to /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
Kubernetes cluster is running. The master is running at:
 https://104.196.0.155
The user name and password to use is located in /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config.
... calling validate-cluster
Waiting for 6 ready nodes. 0 ready nodes, 1 registered. Retrying.
Waiting for 6 ready nodes. 0 ready nodes, 6 registered. Retrying.
Waiting for 6 ready nodes. 1 ready nodes, 6 registered. Retrying.
Waiting for 6 ready nodes. 5 ready nodes, 6 registered. Retrying.
Found 6 node(s).
NAME LABELS STATUS AGE
pull-e2e-0-minion-1dli kubernetes.io/hostname=pull-e2e-0-minion-1dli Ready 1m
pull-e2e-0-minion-djcb kubernetes.io/hostname=pull-e2e-0-minion-djcb Ready 58s
pull-e2e-0-minion-dp0i kubernetes.io/hostname=pull-e2e-0-minion-dp0i Ready 56s
pull-e2e-0-minion-l2bc kubernetes.io/hostname=pull-e2e-0-minion-l2bc Ready 58s
pull-e2e-0-minion-n5ko kubernetes.io/hostname=pull-e2e-0-minion-n5ko Ready 1m
pull-e2e-0-minion-zr43 kubernetes.io/hostname=pull-e2e-0-minion-zr43 Ready 1m
Validate output:
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok nil
scheduler Healthy ok nil
etcd-0 Healthy {"health": "true"} nil
etcd-1 Healthy {"health": "true"} nil
Cluster validation succeeded
Done, listing cluster services:
Kubernetes master is running at https://104.196.0.155
Elasticsearch is running at https://104.196.0.155/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging
Heapster is running at https://104.196.0.155/api/v1/proxy/namespaces/kube-system/services/heapster
Kibana is running at https://104.196.0.155/api/v1/proxy/namespaces/kube-system/services/kibana-logging
KubeDNS is running at https://104.196.0.155/api/v1/proxy/namespaces/kube-system/services/kube-dns
KubeUI is running at https://104.196.0.155/api/v1/proxy/namespaces/kube-system/services/kube-ui
Grafana is running at https://104.196.0.155/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
InfluxDB is running at https://104.196.0.155/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb
NAME NETWORK SRC_RANGES RULES SRC_TAGS TARGET_TAGS
pull-e2e-0-minion-pull-e2e-0-http-alt e2e 0.0.0.0/0 tcp:80,tcp:8080 pull-e2e-0-minion
allowed:
- IPProtocol: tcp
ports:
- '80'
- IPProtocol: tcp
ports:
- '8080'
creationTimestamp: '2015-10-23T15:50:15.482-07:00'
description: ''
id: '8728155936185059144'
kind: compute#firewall
name: pull-e2e-0-minion-pull-e2e-0-http-alt
network: https://www.googleapis.com/compute/v1/projects/coreos-gce-testing/global/networks/e2e
selfLink: https://www.googleapis.com/compute/v1/projects/coreos-gce-testing/global/firewalls/pull-e2e-0-minion-pull-e2e-0-http-alt
sourceRanges:
- 0.0.0.0/0
targetTags:
- pull-e2e-0-minion
NAME NETWORK SRC_RANGES RULES SRC_TAGS TARGET_TAGS
pull-e2e-0-minion-pull-e2e-0-nodeports e2e 0.0.0.0/0 tcp:30000-32767,udp:30000-32767 pull-e2e-0-minion
allowed:
- IPProtocol: tcp
ports:
- 30000-32767
- IPProtocol: udp
ports:
- 30000-32767
creationTimestamp: '2015-10-23T15:50:48.211-07:00'
description: ''
id: '2163732529478567719'
kind: compute#firewall
name: pull-e2e-0-minion-pull-e2e-0-nodeports
network: https://www.googleapis.com/compute/v1/projects/coreos-gce-testing/global/networks/e2e
selfLink: https://www.googleapis.com/compute/v1/projects/coreos-gce-testing/global/firewalls/pull-e2e-0-minion-pull-e2e-0-nodeports
sourceRanges:
- 0.0.0.0/0
targetTags:
- pull-e2e-0-minion
2015/10/23 22:51:22 e2e.go:305: Step 'up' finished in 4m22.90601152s
+ go run ./hack/e2e.go -v '--ctl=version --match-server-version=false'
2015/10/23 22:51:22 e2e.go:303: Running: 'kubectl version --match-server-version=false'
Client Version: version.Info{Major:"1", Minor:"2+", GitVersion:"v1.2.0-alpha.2.246+f93f77766dd24b-dirty", GitCommit:"f93f77766dd24bdffc4d046df0bc0360978e2f2a", GitTreeState:"dirty"}
Server Version: version.Info{Major:"1", Minor:"2+", GitVersion:"v1.2.0-alpha.2.246+f93f77766dd24b-dirty", GitCommit:"f93f77766dd24bdffc4d046df0bc0360978e2f2a", GitTreeState:"dirty"}
2015/10/23 22:51:22 e2e.go:305: Step ''kubectl version --match-server-version=false'' finished in 198.16147ms
+ [[ true == \t\r\u\e ]]
+ ./cluster/gce/list-resources.sh
+ [[ true == \t\r\u\e ]]
+ go run ./hack/e2e.go -v --test '--test_args=--ginkgo.skip=Autoscaling\sSuite|Skipped|Restart\sshould\srestart\sall\snodes|Example|Reboot|ServiceLoadBalancer|Nodes\sNetwork|MaxPods|Resource\susage\sof\ssystem\scontainers|SchedulerPredicates|resource\susage\stracking|DaemonRestart|Etcd\sfailure|Nodes\sResize|Reboot|Services.*restarting|DaemonRestart\sController\sManager|Daemon\sset\sshould|Jobs\sare\slocally\srestarted|Resource\susage\sof\ssystem\scontainers|should\sbe\sable\sto\schange\sthe\stype\sand\snodeport\ssettings\sof\sa\sservice|allows\sscheduling\sof\spods\son\sa\sminion\safter\sit\srejoins\sthe\scluster|should\srelease\sthe\sload\sbalancer\swhen\sType\sgoes\sfrom\sLoadBalancer|should\scorrectly\sserve\sidentically\snamed\sservices\sin\sdifferent\snamespaces\son\sdifferent\sexternal\sIP\saddresses|should\sbe\sable\sto\screate\sa\sfunctioning\sexternal\sload\sbalancer|pod\sw/two\sRW\sPDs\sboth\smounted\sto\sone\scontainer,\swrite\sto\sPD|pod\sw/\sa\sreadonly\sPD\son\stwo\shosts,\sthen\sremove\sboth|deployment.*\sin\sthe\sright\sorder|DaemonRestart|Elasticsearch|Namespaces.*should\sdelete\sfast|PD|ServiceAccounts|Services.*change\sthe\stype|Services.*functioning\sexternal\sload\sbalancer|Services.*identically\snamed|Services.*release.*load\sbalancer|Services.*endpoint|Services.*up\sand\sdown|Networking\sshould\sfunction\sfor\sintra-pod\scommunication|SchedulerPredicates\svalidates\sMaxPods\slimit|Nodes\sResize|resource\susage\stracking|monotonically\sincreasing\srestart\scount|Garbage\scollector\sshould|KubeProxy\sshould\stest\skube-proxy|cap\sback-off\sat\sMaxContainerBackOff'
2015/10/23 22:51:33 e2e.go:303: Running: get status
Project: coreos-gce-testing
Zone: us-east1-b
Client Version: version.Info{Major:"1", Minor:"2+", GitVersion:"v1.2.0-alpha.2.246+f93f77766dd24b-dirty", GitCommit:"f93f77766dd24bdffc4d046df0bc0360978e2f2a", GitTreeState:"dirty"}
Server Version: version.Info{Major:"1", Minor:"2+", GitVersion:"v1.2.0-alpha.2.246+f93f77766dd24b-dirty", GitCommit:"f93f77766dd24bdffc4d046df0bc0360978e2f2a", GitTreeState:"dirty"}
2015/10/23 22:51:33 e2e.go:305: Step 'get status' finished in 198.814231ms
Project: coreos-gce-testing
Zone: us-east1-b
2015/10/23 22:51:34 e2e.go:303: Running: Ginkgo tests
Setting up for KUBERNETES_PROVIDER="gce".
Project: coreos-gce-testing
Zone: us-east1-b
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1445640695 - Will randomize all specs
Will run 104 of 190 specs
Running in parallel across 2 nodes
S
------------------------------
[BeforeEach] Secrets
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 22:51:35.690: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-secrets-0oxb1
Oct 23 22:51:35.727: INFO: Service account default in ns e2e-tests-secrets-0oxb1 had 0 secrets, ignoring for 2s: <nil>
Oct 23 22:51:37.730: INFO: Service account default in ns e2e-tests-secrets-0oxb1 with secrets found. (2.039845439s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 22:51:37.730: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-secrets-0oxb1
Oct 23 22:51:37.732: INFO: Service account default in ns e2e-tests-secrets-0oxb1 with secrets found. (1.754997ms)
[It] should be consumable from pods [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/secrets.go:99
STEP: Creating secret with name secret-test-9e1dd5dc-79d8-11e5-9772-42010af00002
STEP: Creating a pod to test consume secrets
Oct 23 22:51:37.786: INFO: Waiting up to 5m0s for pod pod-secrets-9e1e89e6-79d8-11e5-9772-42010af00002 status to be success or failure
Oct 23 22:51:37.813: INFO: No Status.Info for container 'secret-test' in pod 'pod-secrets-9e1e89e6-79d8-11e5-9772-42010af00002' yet
Oct 23 22:51:37.813: INFO: Waiting for pod pod-secrets-9e1e89e6-79d8-11e5-9772-42010af00002 in namespace 'e2e-tests-secrets-0oxb1' status to be 'success or failure'(found phase: "Pending", readiness: false) (26.23153ms elapsed)
Oct 23 22:51:39.816: INFO: Nil State.Terminated for container 'secret-test' in pod 'pod-secrets-9e1e89e6-79d8-11e5-9772-42010af00002' in namespace 'e2e-tests-secrets-0oxb1' so far
Oct 23 22:51:39.816: INFO: Waiting for pod pod-secrets-9e1e89e6-79d8-11e5-9772-42010af00002 in namespace 'e2e-tests-secrets-0oxb1' status to be 'success or failure'(found phase: "Running", readiness: true) (2.029752523s elapsed)
Oct 23 22:51:41.820: INFO: Nil State.Terminated for container 'secret-test' in pod 'pod-secrets-9e1e89e6-79d8-11e5-9772-42010af00002' in namespace 'e2e-tests-secrets-0oxb1' so far
Oct 23 22:51:41.820: INFO: Waiting for pod pod-secrets-9e1e89e6-79d8-11e5-9772-42010af00002 in namespace 'e2e-tests-secrets-0oxb1' status to be 'success or failure'(found phase: "Running", readiness: true) (4.033462066s elapsed)
Oct 23 22:51:43.823: INFO: Nil State.Terminated for container 'secret-test' in pod 'pod-secrets-9e1e89e6-79d8-11e5-9772-42010af00002' in namespace 'e2e-tests-secrets-0oxb1' so far
Oct 23 22:51:43.823: INFO: Waiting for pod pod-secrets-9e1e89e6-79d8-11e5-9772-42010af00002 in namespace 'e2e-tests-secrets-0oxb1' status to be 'success or failure'(found phase: "Running", readiness: true) (6.036827248s elapsed)
Oct 23 22:51:45.826: INFO: Nil State.Terminated for container 'secret-test' in pod 'pod-secrets-9e1e89e6-79d8-11e5-9772-42010af00002' in namespace 'e2e-tests-secrets-0oxb1' so far
Oct 23 22:51:45.826: INFO: Waiting for pod pod-secrets-9e1e89e6-79d8-11e5-9772-42010af00002 in namespace 'e2e-tests-secrets-0oxb1' status to be 'success or failure'(found phase: "Running", readiness: true) (8.039979935s elapsed)
Oct 23 22:51:47.830: INFO: Nil State.Terminated for container 'secret-test' in pod 'pod-secrets-9e1e89e6-79d8-11e5-9772-42010af00002' in namespace 'e2e-tests-secrets-0oxb1' so far
Oct 23 22:51:47.830: INFO: Waiting for pod pod-secrets-9e1e89e6-79d8-11e5-9772-42010af00002 in namespace 'e2e-tests-secrets-0oxb1' status to be 'success or failure'(found phase: "Running", readiness: true) (10.043252591s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node pull-e2e-0-minion-n5ko pod pod-secrets-9e1e89e6-79d8-11e5-9772-42010af00002 container secret-test: <nil>
STEP: Successfully fetched pod logs:mode of file "/etc/secret-volume/data-1": -r--r--r--
content of file "/etc/secret-volume/data-1": value-1
STEP: Cleaning up the secret
[AfterEach] Secrets
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 22:51:50.115: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 22:51:50.121: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 22:51:50.121: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 22:51:50.121: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 22:51:50.121: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 22:51:50.121: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 22:51:50.121: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 22:51:50.121: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 22:51:50.121: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 22:51:50.121: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 22:51:50.121: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 22:51:50.121: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 22:51:50.121: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-secrets-0oxb1" for this suite.
• [SLOW TEST:19.449 seconds]
Secrets
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/secrets.go:100
should be consumable from pods [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/secrets.go:99
------------------------------
SS
------------------------------
[BeforeEach] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 22:51:55.142: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-deployment-b7c6i
Oct 23 22:51:55.171: INFO: Service account default in ns e2e-tests-deployment-b7c6i with secrets found. (29.36818ms)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 22:51:55.171: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-deployment-b7c6i
Oct 23 22:51:55.173: INFO: Service account default in ns e2e-tests-deployment-b7c6i with secrets found. (1.624553ms)
[It] deployment should create new pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:34
Oct 23 22:51:55.173: INFO: Creating simple deployment nginx-deployment
Oct 23 22:51:55.207: INFO: Pod name nginx: Found 0 pods out of 1
Oct 23 22:52:00.210: INFO: Pod name nginx: Found 0 pods out of 1
Oct 23 22:52:05.213: INFO: Pod name nginx: Found 0 pods out of 1
Oct 23 22:52:10.216: INFO: Pod name nginx: Found 0 pods out of 1
Oct 23 22:52:15.219: INFO: Pod name nginx: Found 0 pods out of 1
Oct 23 22:52:20.222: INFO: Pod name nginx: Found 0 pods out of 1
Oct 23 22:52:25.224: INFO: Pod name nginx: Found 0 pods out of 1
Oct 23 22:52:30.228: INFO: Pod name nginx: Found 1 pods out of 1
STEP: ensuring each pod is running
Oct 23 22:52:30.228: INFO: Waiting up to 5m0s for pod deploymentrc-3394721597-y0en4 status to be running
Oct 23 22:52:30.230: INFO: Waiting for pod deploymentrc-3394721597-y0en4 in namespace 'e2e-tests-deployment-b7c6i' status to be 'running'(found phase: "Pending", readiness: false) (2.689451ms elapsed)
Oct 23 22:52:32.233: INFO: Waiting for pod deploymentrc-3394721597-y0en4 in namespace 'e2e-tests-deployment-b7c6i' status to be 'running'(found phase: "Pending", readiness: false) (2.005540178s elapsed)
Oct 23 22:52:34.237: INFO: Waiting for pod deploymentrc-3394721597-y0en4 in namespace 'e2e-tests-deployment-b7c6i' status to be 'running'(found phase: "Pending", readiness: false) (4.008947071s elapsed)
Oct 23 22:52:36.240: INFO: Waiting for pod deploymentrc-3394721597-y0en4 in namespace 'e2e-tests-deployment-b7c6i' status to be 'running'(found phase: "Pending", readiness: false) (6.012193153s elapsed)
Oct 23 22:52:38.243: INFO: Waiting for pod deploymentrc-3394721597-y0en4 in namespace 'e2e-tests-deployment-b7c6i' status to be 'running'(found phase: "Pending", readiness: false) (8.015888089s elapsed)
Oct 23 22:52:40.247: INFO: Found pod 'deploymentrc-3394721597-y0en4' on node 'pull-e2e-0-minion-dp0i'
STEP: trying to dial each unique pod
Oct 23 22:52:40.255: INFO: Controller nginx: Got non-empty result from replica 1 [deploymentrc-3394721597-y0en4]: "<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\n body {\n width: 35em;\n margin: 0 auto;\n font-family: Tahoma, Verdana, Arial, sans-serif;\n }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href=\"http://nginx.org/\">nginx.org</a>.<br/>\nCommercial support is available at\n<a href=\"http://nginx.com/\">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n", 1 of 1 required successes so far
Oct 23 22:52:40.257: INFO: deleting deployment nginx-deployment
[AfterEach] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 22:52:40.262: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 22:52:40.265: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 22:52:40.265: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 22:52:40.265: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 22:52:40.265: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 22:52:40.266: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 22:52:40.266: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 22:52:40.266: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 22:52:40.266: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 22:52:40.266: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 22:52:40.266: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 22:52:40.266: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 22:52:40.266: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-deployment-b7c6i" for this suite.
• [SLOW TEST:50.141 seconds]
Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:41
deployment should create new pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:34
------------------------------
S
------------------------------
[BeforeEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 22:52:45.285: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-2zrn3
Oct 23 22:52:45.315: INFO: Service account default in ns e2e-tests-emptydir-2zrn3 with secrets found. (29.223314ms)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 22:52:45.315: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-2zrn3
Oct 23 22:52:45.316: INFO: Service account default in ns e2e-tests-emptydir-2zrn3 with secrets found. (1.73435ms)
[It] should support (non-root,0644,tmpfs) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:58
STEP: Creating a pod to test emptydir 0644 on tmpfs
Oct 23 22:52:45.321: INFO: Waiting up to 5m0s for pod pod-c666734a-79d8-11e5-9772-42010af00002 status to be success or failure
Oct 23 22:52:45.352: INFO: No Status.Info for container 'test-container' in pod 'pod-c666734a-79d8-11e5-9772-42010af00002' yet
Oct 23 22:52:45.352: INFO: Waiting for pod pod-c666734a-79d8-11e5-9772-42010af00002 in namespace 'e2e-tests-emptydir-2zrn3' status to be 'success or failure'(found phase: "Pending", readiness: false) (30.467136ms elapsed)
Oct 23 22:52:47.355: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-c666734a-79d8-11e5-9772-42010af00002' in namespace 'e2e-tests-emptydir-2zrn3' so far
Oct 23 22:52:47.355: INFO: Waiting for pod pod-c666734a-79d8-11e5-9772-42010af00002 in namespace 'e2e-tests-emptydir-2zrn3' status to be 'success or failure'(found phase: "Running", readiness: true) (2.033544917s elapsed)
Oct 23 22:52:49.361: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-c666734a-79d8-11e5-9772-42010af00002' in namespace 'e2e-tests-emptydir-2zrn3' so far
Oct 23 22:52:49.361: INFO: Waiting for pod pod-c666734a-79d8-11e5-9772-42010af00002 in namespace 'e2e-tests-emptydir-2zrn3' status to be 'success or failure'(found phase: "Running", readiness: true) (4.039788592s elapsed)
Oct 23 22:52:51.365: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-c666734a-79d8-11e5-9772-42010af00002' in namespace 'e2e-tests-emptydir-2zrn3' so far
Oct 23 22:52:51.365: INFO: Waiting for pod pod-c666734a-79d8-11e5-9772-42010af00002 in namespace 'e2e-tests-emptydir-2zrn3' status to be 'success or failure'(found phase: "Running", readiness: true) (6.043765224s elapsed)
Oct 23 22:52:53.372: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-c666734a-79d8-11e5-9772-42010af00002' in namespace 'e2e-tests-emptydir-2zrn3' so far
Oct 23 22:52:53.372: INFO: Waiting for pod pod-c666734a-79d8-11e5-9772-42010af00002 in namespace 'e2e-tests-emptydir-2zrn3' status to be 'success or failure'(found phase: "Running", readiness: true) (8.050965571s elapsed)
Oct 23 22:52:55.376: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-c666734a-79d8-11e5-9772-42010af00002' in namespace 'e2e-tests-emptydir-2zrn3' so far
Oct 23 22:52:55.376: INFO: Waiting for pod pod-c666734a-79d8-11e5-9772-42010af00002 in namespace 'e2e-tests-emptydir-2zrn3' status to be 'success or failure'(found phase: "Running", readiness: true) (10.054882193s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node pull-e2e-0-minion-n5ko pod pod-c666734a-79d8-11e5-9772-42010af00002 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": tmpfs
content of file "/test-volume/test-file": mount-tester new file
perms of file "/test-volume/test-file": -rw-r--r--
[AfterEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 22:52:57.399: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 22:52:57.429: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 22:52:57.429: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 22:52:57.429: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 22:52:57.429: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 22:52:57.429: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 22:52:57.429: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 22:52:57.429: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 22:52:57.429: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 22:52:57.429: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 22:52:57.429: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 22:52:57.429: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 22:52:57.429: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-emptydir-2zrn3" for this suite.
• [SLOW TEST:17.161 seconds]
EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:95
should support (non-root,0644,tmpfs) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:58
------------------------------
[BeforeEach] Variable Expansion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 22:53:02.450: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-var-expansion-hfjei
Oct 23 22:53:02.477: INFO: Service account default in ns e2e-tests-var-expansion-hfjei had 0 secrets, ignoring for 2s: <nil>
Oct 23 22:53:04.488: INFO: Service account default in ns e2e-tests-var-expansion-hfjei with secrets found. (2.038741694s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 22:53:04.488: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-var-expansion-hfjei
Oct 23 22:53:04.531: INFO: Service account default in ns e2e-tests-var-expansion-hfjei with secrets found. (42.276297ms)
[It] should allow substituting values in a container's args [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/expansion.go:128
STEP: Creating a pod to test substitution in container's args
Oct 23 22:53:04.536: INFO: Waiting up to 5m0s for pod var-expansion-d1da538b-79d8-11e5-9772-42010af00002 status to be success or failure
Oct 23 22:53:04.563: INFO: No Status.Info for container 'dapi-container' in pod 'var-expansion-d1da538b-79d8-11e5-9772-42010af00002' yet
Oct 23 22:53:04.563: INFO: Waiting for pod var-expansion-d1da538b-79d8-11e5-9772-42010af00002 in namespace 'e2e-tests-var-expansion-hfjei' status to be 'success or failure'(found phase: "Pending", readiness: false) (27.570104ms elapsed)
Oct 23 22:53:06.566: INFO: No Status.Info for container 'dapi-container' in pod 'var-expansion-d1da538b-79d8-11e5-9772-42010af00002' yet
Oct 23 22:53:06.566: INFO: Waiting for pod var-expansion-d1da538b-79d8-11e5-9772-42010af00002 in namespace 'e2e-tests-var-expansion-hfjei' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.030630532s elapsed)
Oct 23 22:53:08.570: INFO: No Status.Info for container 'dapi-container' in pod 'var-expansion-d1da538b-79d8-11e5-9772-42010af00002' yet
Oct 23 22:53:08.570: INFO: Waiting for pod var-expansion-d1da538b-79d8-11e5-9772-42010af00002 in namespace 'e2e-tests-var-expansion-hfjei' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.034166162s elapsed)
Oct 23 22:53:10.586: INFO: No Status.Info for container 'dapi-container' in pod 'var-expansion-d1da538b-79d8-11e5-9772-42010af00002' yet
Oct 23 22:53:10.586: INFO: Waiting for pod var-expansion-d1da538b-79d8-11e5-9772-42010af00002 in namespace 'e2e-tests-var-expansion-hfjei' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.05036494s elapsed)
Oct 23 22:53:12.628: INFO: No Status.Info for container 'dapi-container' in pod 'var-expansion-d1da538b-79d8-11e5-9772-42010af00002' yet
Oct 23 22:53:12.628: INFO: Waiting for pod var-expansion-d1da538b-79d8-11e5-9772-42010af00002 in namespace 'e2e-tests-var-expansion-hfjei' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.092648665s elapsed)
Oct 23 22:53:14.632: INFO: No Status.Info for container 'dapi-container' in pod 'var-expansion-d1da538b-79d8-11e5-9772-42010af00002' yet
Oct 23 22:53:14.632: INFO: Waiting for pod var-expansion-d1da538b-79d8-11e5-9772-42010af00002 in namespace 'e2e-tests-var-expansion-hfjei' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.096118004s elapsed)
Oct 23 22:53:16.635: INFO: Nil State.Terminated for container 'dapi-container' in pod 'var-expansion-d1da538b-79d8-11e5-9772-42010af00002' in namespace 'e2e-tests-var-expansion-hfjei' so far
Oct 23 22:53:16.635: INFO: Waiting for pod var-expansion-d1da538b-79d8-11e5-9772-42010af00002 in namespace 'e2e-tests-var-expansion-hfjei' status to be 'success or failure'(found phase: "Pending", readiness: false) (12.099411237s elapsed)
Oct 23 22:53:18.638: INFO: Nil State.Terminated for container 'dapi-container' in pod 'var-expansion-d1da538b-79d8-11e5-9772-42010af00002' in namespace 'e2e-tests-var-expansion-hfjei' so far
Oct 23 22:53:18.638: INFO: Waiting for pod var-expansion-d1da538b-79d8-11e5-9772-42010af00002 in namespace 'e2e-tests-var-expansion-hfjei' status to be 'success or failure'(found phase: "Pending", readiness: false) (14.102320307s elapsed)
Oct 23 22:53:20.641: INFO: Nil State.Terminated for container 'dapi-container' in pod 'var-expansion-d1da538b-79d8-11e5-9772-42010af00002' in namespace 'e2e-tests-var-expansion-hfjei' so far
Oct 23 22:53:20.641: INFO: Waiting for pod var-expansion-d1da538b-79d8-11e5-9772-42010af00002 in namespace 'e2e-tests-var-expansion-hfjei' status to be 'success or failure'(found phase: "Pending", readiness: false) (16.105493301s elapsed)
Oct 23 22:53:22.645: INFO: Nil State.Terminated for container 'dapi-container' in pod 'var-expansion-d1da538b-79d8-11e5-9772-42010af00002' in namespace 'e2e-tests-var-expansion-hfjei' so far
Oct 23 22:53:22.645: INFO: Waiting for pod var-expansion-d1da538b-79d8-11e5-9772-42010af00002 in namespace 'e2e-tests-var-expansion-hfjei' status to be 'success or failure'(found phase: "Pending", readiness: false) (18.108913134s elapsed)
Oct 23 22:53:24.648: INFO: Nil State.Terminated for container 'dapi-container' in pod 'var-expansion-d1da538b-79d8-11e5-9772-42010af00002' in namespace 'e2e-tests-var-expansion-hfjei' so far
Oct 23 22:53:24.648: INFO: Waiting for pod var-expansion-d1da538b-79d8-11e5-9772-42010af00002 in namespace 'e2e-tests-var-expansion-hfjei' status to be 'success or failure'(found phase: "Pending", readiness: false) (20.111940059s elapsed)
Oct 23 22:53:26.651: INFO: Nil State.Terminated for container 'dapi-container' in pod 'var-expansion-d1da538b-79d8-11e5-9772-42010af00002' in namespace 'e2e-tests-var-expansion-hfjei' so far
Oct 23 22:53:26.651: INFO: Waiting for pod var-expansion-d1da538b-79d8-11e5-9772-42010af00002 in namespace 'e2e-tests-var-expansion-hfjei' status to be 'success or failure'(found phase: "Pending", readiness: false) (22.115453944s elapsed)
Oct 23 22:53:28.655: INFO: Nil State.Terminated for container 'dapi-container' in pod 'var-expansion-d1da538b-79d8-11e5-9772-42010af00002' in namespace 'e2e-tests-var-expansion-hfjei' so far
Oct 23 22:53:28.655: INFO: Waiting for pod var-expansion-d1da538b-79d8-11e5-9772-42010af00002 in namespace 'e2e-tests-var-expansion-hfjei' status to be 'success or failure'(found phase: "Pending", readiness: false) (24.11883666s elapsed)
Oct 23 22:53:30.658: INFO: Nil State.Terminated for container 'dapi-container' in pod 'var-expansion-d1da538b-79d8-11e5-9772-42010af00002' in namespace 'e2e-tests-var-expansion-hfjei' so far
Oct 23 22:53:30.658: INFO: Waiting for pod var-expansion-d1da538b-79d8-11e5-9772-42010af00002 in namespace 'e2e-tests-var-expansion-hfjei' status to be 'success or failure'(found phase: "Pending", readiness: false) (26.122218122s elapsed)
Oct 23 22:53:32.662: INFO: Nil State.Terminated for container 'dapi-container' in pod 'var-expansion-d1da538b-79d8-11e5-9772-42010af00002' in namespace 'e2e-tests-var-expansion-hfjei' so far
Oct 23 22:53:32.662: INFO: Waiting for pod var-expansion-d1da538b-79d8-11e5-9772-42010af00002 in namespace 'e2e-tests-var-expansion-hfjei' status to be 'success or failure'(found phase: "Pending", readiness: false) (28.126010086s elapsed)
Oct 23 22:53:34.665: INFO: Nil State.Terminated for container 'dapi-container' in pod 'var-expansion-d1da538b-79d8-11e5-9772-42010af00002' in namespace 'e2e-tests-var-expansion-hfjei' so far
Oct 23 22:53:34.665: INFO: Waiting for pod var-expansion-d1da538b-79d8-11e5-9772-42010af00002 in namespace 'e2e-tests-var-expansion-hfjei' status to be 'success or failure'(found phase: "Pending", readiness: false) (30.12887672s elapsed)
Oct 23 22:53:36.668: INFO: Nil State.Terminated for container 'dapi-container' in pod 'var-expansion-d1da538b-79d8-11e5-9772-42010af00002' in namespace 'e2e-tests-var-expansion-hfjei' so far
Oct 23 22:53:36.668: INFO: Waiting for pod var-expansion-d1da538b-79d8-11e5-9772-42010af00002 in namespace 'e2e-tests-var-expansion-hfjei' status to be 'success or failure'(found phase: "Running", readiness: true) (32.132326s elapsed)
Oct 23 22:53:38.672: INFO: Nil State.Terminated for container 'dapi-container' in pod 'var-expansion-d1da538b-79d8-11e5-9772-42010af00002' in namespace 'e2e-tests-var-expansion-hfjei' so far
Oct 23 22:53:38.672: INFO: Waiting for pod var-expansion-d1da538b-79d8-11e5-9772-42010af00002 in namespace 'e2e-tests-var-expansion-hfjei' status to be 'success or failure'(found phase: "Running", readiness: true) (34.13586973s elapsed)
Oct 23 22:53:40.675: INFO: Nil State.Terminated for container 'dapi-container' in pod 'var-expansion-d1da538b-79d8-11e5-9772-42010af00002' in namespace 'e2e-tests-var-expansion-hfjei' so far
Oct 23 22:53:40.675: INFO: Waiting for pod var-expansion-d1da538b-79d8-11e5-9772-42010af00002 in namespace 'e2e-tests-var-expansion-hfjei' status to be 'success or failure'(found phase: "Running", readiness: true) (36.13903807s elapsed)
Oct 23 22:53:42.678: INFO: Nil State.Terminated for container 'dapi-container' in pod 'var-expansion-d1da538b-79d8-11e5-9772-42010af00002' in namespace 'e2e-tests-var-expansion-hfjei' so far
Oct 23 22:53:42.678: INFO: Waiting for pod var-expansion-d1da538b-79d8-11e5-9772-42010af00002 in namespace 'e2e-tests-var-expansion-hfjei' status to be 'success or failure'(found phase: "Running", readiness: true) (38.142555741s elapsed)
Oct 23 22:53:44.682: INFO: Nil State.Terminated for container 'dapi-container' in pod 'var-expansion-d1da538b-79d8-11e5-9772-42010af00002' in namespace 'e2e-tests-var-expansion-hfjei' so far
Oct 23 22:53:44.682: INFO: Waiting for pod var-expansion-d1da538b-79d8-11e5-9772-42010af00002 in namespace 'e2e-tests-var-expansion-hfjei' status to be 'success or failure'(found phase: "Running", readiness: true) (40.145958958s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node pull-e2e-0-minion-l2bc pod var-expansion-d1da538b-79d8-11e5-9772-42010af00002 container dapi-container: <nil>
STEP: Successfully fetched pod logs:test-value
[AfterEach] Variable Expansion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 22:53:46.968: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 22:53:47.000: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 22:53:47.000: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 22:53:47.000: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 22:53:47.000: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 22:53:47.000: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 22:53:47.000: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 22:53:47.000: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 22:53:47.000: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 22:53:47.000: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 22:53:47.000: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 22:53:47.000: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 22:53:47.000: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-var-expansion-hfjei" for this suite.
• [SLOW TEST:49.569 seconds]
Variable Expansion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/expansion.go:129
should allow substituting values in a container's args [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/expansion.go:128
------------------------------
[BeforeEach] Kibana Logging Instances Is Alive
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 22:53:52.019: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kibana-logging-69c3c
Oct 23 22:53:52.048: INFO: Service account default in ns e2e-tests-kibana-logging-69c3c had 0 secrets, ignoring for 2s: <nil>
Oct 23 22:53:54.051: INFO: Service account default in ns e2e-tests-kibana-logging-69c3c with secrets found. (2.031883493s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 22:53:54.051: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kibana-logging-69c3c
Oct 23 22:53:54.054: INFO: Service account default in ns e2e-tests-kibana-logging-69c3c with secrets found. (3.215775ms)
[BeforeEach] Kibana Logging Instances Is Alive
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kibana_logging.go:38
[It] should check that the Kibana logging instance is alive
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kibana_logging.go:42
STEP: Checking the Kibana service exists.
STEP: Checking to make sure the Kibana pods are running
Oct 23 22:53:54.060: INFO: Waiting up to 5m0s for pod kibana-logging-v1-skk6z status to be running
Oct 23 22:53:54.062: INFO: Found pod 'kibana-logging-v1-skk6z' on node 'pull-e2e-0-minion-djcb'
STEP: Checking to make sure we get a response from the Kibana UI.
[AfterEach] Kibana Logging Instances Is Alive
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 22:53:54.079: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 22:53:54.083: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 22:53:54.083: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 22:53:54.083: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 22:53:54.083: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 22:53:54.083: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 22:53:54.083: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 22:53:54.083: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 22:53:54.083: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 22:53:54.083: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 22:53:54.083: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 22:53:54.083: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 22:53:54.083: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-kibana-logging-69c3c" for this suite.
• [SLOW TEST:7.083 seconds]
Kibana Logging Instances Is Alive
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kibana_logging.go:43
should check that the Kibana logging instance is alive
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kibana_logging.go:42
------------------------------
[BeforeEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:89
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
Oct 23 22:53:59.104: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-q5xqq
Oct 23 22:53:59.137: INFO: Service account default in ns e2e-tests-kubectl-q5xqq with secrets found. (33.070949ms)
[BeforeEach] Simple pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:164
STEP: creating the pod
Oct 23 22:53:59.137: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config create -f /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/docs/user-guide/pod.yaml --namespace=e2e-tests-kubectl-q5xqq'
Oct 23 22:53:59.388: INFO: pod "nginx" created
Oct 23 22:53:59.388: INFO: Waiting up to 5m0s for the following 1 pods to be running and ready: [nginx]
Oct 23 22:53:59.388: INFO: Waiting up to 5m0s for pod nginx status to be running and ready
Oct 23 22:53:59.391: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-q5xqq' status to be 'running and ready'(found phase: "Pending", readiness: false) (2.789151ms elapsed)
Oct 23 22:54:01.394: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-q5xqq' status to be 'running and ready'(found phase: "Pending", readiness: false) (2.0061442s elapsed)
Oct 23 22:54:03.397: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-q5xqq' status to be 'running and ready'(found phase: "Pending", readiness: false) (4.009463356s elapsed)
Oct 23 22:54:05.401: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-q5xqq' status to be 'running and ready'(found phase: "Pending", readiness: false) (6.013058607s elapsed)
Oct 23 22:54:07.404: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-q5xqq' status to be 'running and ready'(found phase: "Pending", readiness: false) (8.016452858s elapsed)
Oct 23 22:54:09.408: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [nginx]
[It] should support inline execution and attach
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:417
STEP: executing a command with run and attach with stdin
Oct 23 22:54:09.410: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config --namespace=e2e-tests-kubectl-q5xqq run run-test --image=busybox --restart=Never --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Oct 23 22:54:11.845: INFO: Waiting for pod e2e-tests-kubectl-q5xqq/run-test to be running, status is Pending, pod ready: false
abcd1234stdin closed
STEP: executing a command with run and attach without stdin
Oct 23 22:54:11.856: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config --namespace=e2e-tests-kubectl-q5xqq run run-test-2 --image=busybox --restart=Never --attach=true --leave-stdin-open=true -- sh -c cat && echo 'stdin closed''
Oct 23 22:54:14.487: INFO: Waiting for pod e2e-tests-kubectl-q5xqq/run-test-2 to be running, status is Pending, pod ready: false
Error attaching, falling back to logs: error executing remote command: Error executing command in container: container not found ("run-test-2")
stdin closed
STEP: executing a command with run and attach with stdin with open stdin should remain running
Oct 23 22:54:14.529: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config --namespace=e2e-tests-kubectl-q5xqq run run-test-3 --image=busybox --restart=Never --attach=true --leave-stdin-open=true --stdin -- sh -c cat && echo 'stdin closed''
Oct 23 22:54:16.928: INFO: Waiting for pod e2e-tests-kubectl-q5xqq/run-test-3 to be running, status is Pending, pod ready: false
abcd1234
Oct 23 22:54:16.928: INFO: Waiting up to 1m0s for the following 1 pods to be running and ready: [run-test-3]
Oct 23 22:54:16.928: INFO: Waiting up to 1m0s for pod run-test-3 status to be running and ready
Oct 23 22:54:16.955: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [run-test-3]
Oct 23 22:54:16.955: INFO: Waiting up to 1s for the following 1 pods to be running and ready: [run-test-3]
Oct 23 22:54:16.955: INFO: Waiting up to 1s for pod run-test-3 status to be running and ready
Oct 23 22:54:16.957: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [run-test-3]
Oct 23 22:54:16.957: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config --namespace=e2e-tests-kubectl-q5xqq logs run-test-3'
Oct 23 22:54:17.134: INFO: abcd1234
[AfterEach] Simple pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:167
STEP: using delete to clean up resources
Oct 23 22:54:17.146: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config stop --grace-period=0 -f /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/docs/user-guide/pod.yaml --namespace=e2e-tests-kubectl-q5xqq'
Oct 23 22:54:17.343: INFO: pod "nginx" deleted
Oct 23 22:54:17.343: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-q5xqq'
Oct 23 22:54:17.522: INFO:
Oct 23 22:54:17.522: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-q5xqq -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Oct 23 22:54:17.700: INFO:
[AfterEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:96
STEP: Destroying namespace for this suite e2e-tests-kubectl-q5xqq
• [SLOW TEST:23.646 seconds]
Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:925
Simple pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:435
should support inline execution and attach
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:417
------------------------------
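Each "Running .../kubectl ..." line above is the framework shelling out to the kubectl binary with the global connection flags (--server, --kubeconfig, --namespace) prepended to the subcommand. A minimal sketch of assembling such an invocation — the paths, server address, and namespace below are illustrative, and this builds the argument list only rather than executing it:

```go
package main

import (
	"fmt"
	"strings"
)

// kubectlArgs builds the argument list the e2e framework logs before
// invoking kubectl: connection flags first, then the subcommand and its args.
func kubectlArgs(server, kubeconfig, namespace string, cmd ...string) []string {
	args := []string{
		"--server=" + server,
		"--kubeconfig=" + kubeconfig,
		"--namespace=" + namespace,
	}
	return append(args, cmd...)
}

func main() {
	args := kubectlArgs(
		"https://104.196.0.155",
		"/home/jenkins/.kube/config", // illustrative path
		"e2e-tests-kubectl-q5xqq",
		"run", "run-test", "--image=busybox", "--restart=Never",
		"--attach=true", "--stdin",
		"--", "sh", "-c", "cat && echo 'stdin closed'",
	)
	fmt.Println("kubectl " + strings.Join(args, " "))
}
```

The "--" separator is what lets the trailing `sh -c ...` be passed through to the container as its command rather than parsed as kubectl flags.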
[BeforeEach] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 22:54:22.754: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-zcbgp
Oct 23 22:54:22.781: INFO: Service account default in ns e2e-tests-pods-zcbgp had 0 secrets, ignoring for 2s: <nil>
Oct 23 22:54:24.783: INFO: Service account default in ns e2e-tests-pods-zcbgp with secrets found. (2.029158902s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 22:54:24.783: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-zcbgp
Oct 23 22:54:24.785: INFO: Service account default in ns e2e-tests-pods-zcbgp with secrets found. (1.726808ms)
[It] should be submitted and removed [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:372
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying pod deletion was observed
[AfterEach] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 22:54:34.403: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 22:54:34.437: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 22:54:34.437: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 22:54:34.437: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 22:54:34.437: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 22:54:34.437: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 22:54:34.437: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 22:54:34.437: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 22:54:34.437: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 22:54:34.437: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 22:54:34.437: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 22:54:34.437: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 22:54:34.437: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-pods-zcbgp" for this suite.
• [SLOW TEST:16.703 seconds]
Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1187
should be submitted and removed [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:372
------------------------------
S
------------------------------
[BeforeEach] Service endpoints latency
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 22:54:39.457: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-svc-latency-2g2kl
Oct 23 22:54:39.489: INFO: Service account default in ns e2e-tests-svc-latency-2g2kl had 0 secrets, ignoring for 2s: <nil>
Oct 23 22:54:41.492: INFO: Service account default in ns e2e-tests-svc-latency-2g2kl with secrets found. (2.034974882s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 22:54:41.492: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-svc-latency-2g2kl
Oct 23 22:54:41.494: INFO: Service account default in ns e2e-tests-svc-latency-2g2kl with secrets found. (2.051247ms)
[It] should not be very high [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_latency.go:116
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-2g2kl
Oct 23 22:54:41.499: INFO: Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-2g2kl, replica count: 1
Oct 23 22:54:42.499: INFO: svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 23 22:54:42.633: INFO: Created: latency-svc-4i3q7
Oct 23 22:54:42.640: INFO: Got endpoints: latency-svc-4i3q7 [41.246854ms]
Oct 23 22:54:42.761: INFO: Created: latency-svc-66eho
Oct 23 22:54:42.852: INFO: Created: latency-svc-y8a0s
Oct 23 22:54:42.852: INFO: Got endpoints: latency-svc-66eho [170.114752ms]
Oct 23 22:54:42.853: INFO: Got endpoints: latency-svc-y8a0s [169.681484ms]
Oct 23 22:54:42.892: INFO: Created: latency-svc-cwcji
Oct 23 22:54:42.892: INFO: Created: latency-svc-xdhpy
Oct 23 22:54:42.894: INFO: Got endpoints: latency-svc-cwcji [211.528137ms]
Oct 23 22:54:42.894: INFO: Got endpoints: latency-svc-xdhpy [210.71226ms]
Oct 23 22:54:42.896: INFO: Created: latency-svc-nuq05
Oct 23 22:54:42.925: INFO: Created: latency-svc-i4qcg
Oct 23 22:54:42.943: INFO: Got endpoints: latency-svc-nuq05 [259.188129ms]
Oct 23 22:54:42.966: INFO: Got endpoints: latency-svc-i4qcg [282.709433ms]
Oct 23 22:54:42.973: INFO: Created: latency-svc-svjgk
Oct 23 22:54:43.000: INFO: Created: latency-svc-aeyvd
Oct 23 22:54:43.020: INFO: Created: latency-svc-khamt
Oct 23 22:54:43.021: INFO: Got endpoints: latency-svc-svjgk [338.061959ms]
Oct 23 22:54:43.031: INFO: Got endpoints: latency-svc-aeyvd [347.927223ms]
Oct 23 22:54:43.051: INFO: Created: latency-svc-0o1wc
Oct 23 22:54:43.060: INFO: Created: latency-svc-wwhg0
Oct 23 22:54:43.078: INFO: Got endpoints: latency-svc-khamt [395.314858ms]
Oct 23 22:54:43.106: INFO: Created: latency-svc-3p0fa
Oct 23 22:54:43.116: INFO: Created: latency-svc-fxk6x
Oct 23 22:54:43.149: INFO: Created: latency-svc-ocpsk
Oct 23 22:54:43.195: INFO: Created: latency-svc-ehghz
Oct 23 22:54:43.204: INFO: Got endpoints: latency-svc-0o1wc [521.266164ms]
Oct 23 22:54:43.221: INFO: Created: latency-svc-9fh54
Oct 23 22:54:43.233: INFO: Created: latency-svc-mfm3s
Oct 23 22:54:43.235: INFO: Got endpoints: latency-svc-fxk6x [553.007897ms]
Oct 23 22:54:43.268: INFO: Created: latency-svc-hv6ij
Oct 23 22:54:43.270: INFO: Created: latency-svc-q7ybz
Oct 23 22:54:43.281: INFO: Created: latency-svc-vuv3x
Oct 23 22:54:43.296: INFO: Created: latency-svc-onmpg
Oct 23 22:54:43.303: INFO: Created: latency-svc-lrqy0
Oct 23 22:54:43.319: INFO: Created: latency-svc-8otp3
Oct 23 22:54:43.331: INFO: Created: latency-svc-gh8jn
Oct 23 22:54:43.354: INFO: Created: latency-svc-rru33
Oct 23 22:54:43.370: INFO: Got endpoints: latency-svc-wwhg0 [687.587137ms]
Oct 23 22:54:43.382: INFO: Created: latency-svc-sdav9
Oct 23 22:54:43.386: INFO: Got endpoints: latency-svc-3p0fa [702.979852ms]
Oct 23 22:54:43.445: INFO: Created: latency-svc-oy7z6
Oct 23 22:54:43.445: INFO: Created: latency-svc-rwxby
Oct 23 22:54:43.832: INFO: Got endpoints: latency-svc-ocpsk [1.148875633s]
Oct 23 22:54:43.860: INFO: Created: latency-svc-4ki6w
Oct 23 22:54:43.881: INFO: Got endpoints: latency-svc-ehghz [1.198242321s]
Oct 23 22:54:43.920: INFO: Created: latency-svc-epb3l
Oct 23 22:54:43.982: INFO: Got endpoints: latency-svc-9fh54 [903.84066ms]
Oct 23 22:54:44.054: INFO: Created: latency-svc-n829m
Oct 23 22:54:44.082: INFO: Got endpoints: latency-svc-mfm3s [927.293485ms]
Oct 23 22:54:44.435: INFO: Got endpoints: latency-svc-hv6ij [1.273670266s]
Oct 23 22:54:44.494: INFO: Got endpoints: latency-svc-q7ybz [1.327191039s]
Oct 23 22:54:44.506: INFO: Created: latency-svc-z9qaa
Oct 23 22:54:44.534: INFO: Created: latency-svc-0slod
Oct 23 22:54:44.551: INFO: Got endpoints: latency-svc-vuv3x [1.379850937s]
Oct 23 22:54:44.569: INFO: Created: latency-svc-nsux0
Oct 23 22:54:44.595: INFO: Got endpoints: latency-svc-onmpg [1.390455614s]
Oct 23 22:54:44.652: INFO: Created: latency-svc-71z2y
Oct 23 22:54:44.660: INFO: Created: latency-svc-onatj
Oct 23 22:54:44.682: INFO: Got endpoints: latency-svc-lrqy0 [1.469710786s]
Oct 23 22:54:44.746: INFO: Created: latency-svc-613ax
Oct 23 22:54:45.033: INFO: Got endpoints: latency-svc-8otp3 [1.765222824s]
Oct 23 22:54:45.091: INFO: Got endpoints: latency-svc-gh8jn [1.854792985s]
Oct 23 22:54:45.143: INFO: Got endpoints: latency-svc-rru33 [1.836750586s]
Oct 23 22:54:45.151: INFO: Created: latency-svc-p5nx0
Oct 23 22:54:45.176: INFO: Created: latency-svc-qv2bc
Oct 23 22:54:45.209: INFO: Created: latency-svc-17whw
Oct 23 22:54:45.282: INFO: Got endpoints: latency-svc-sdav9 [1.969624051s]
Oct 23 22:54:45.353: INFO: Created: latency-svc-v9zpy
Oct 23 22:54:45.632: INFO: Got endpoints: latency-svc-rwxby [2.237873326s]
Oct 23 22:54:45.687: INFO: Got endpoints: latency-svc-oy7z6 [2.267010874s]
Oct 23 22:54:45.728: INFO: Created: latency-svc-b2j77
Oct 23 22:54:45.786: INFO: Created: latency-svc-6i8qh
Oct 23 22:54:45.833: INFO: Got endpoints: latency-svc-4ki6w [1.983548775s]
Oct 23 22:54:45.900: INFO: Created: latency-svc-v4oau
Oct 23 22:54:46.083: INFO: Got endpoints: latency-svc-epb3l [2.176514882s]
Oct 23 22:54:46.154: INFO: Created: latency-svc-lm6mc
Oct 23 22:54:46.282: INFO: Got endpoints: latency-svc-n829m [2.25635822s]
Oct 23 22:54:46.351: INFO: Created: latency-svc-6es71
Oct 23 22:54:46.633: INFO: Got endpoints: latency-svc-z9qaa [2.187015416s]
Oct 23 22:54:46.685: INFO: Got endpoints: latency-svc-0slod [2.200463429s]
Oct 23 22:54:46.756: INFO: Got endpoints: latency-svc-nsux0 [2.210588328s]
Oct 23 22:54:46.762: INFO: Created: latency-svc-ebnbe
Oct 23 22:54:46.798: INFO: Created: latency-svc-z4okc
Oct 23 22:54:46.817: INFO: Created: latency-svc-tf9zr
Oct 23 22:54:47.133: INFO: Got endpoints: latency-svc-71z2y [2.536586301s]
Oct 23 22:54:47.183: INFO: Got endpoints: latency-svc-onatj [2.544107002s]
Oct 23 22:54:47.242: INFO: Created: latency-svc-uk7v5
Oct 23 22:54:47.253: INFO: Created: latency-svc-9adcl
Oct 23 22:54:47.282: INFO: Got endpoints: latency-svc-613ax [2.549443787s]
Oct 23 22:54:47.318: INFO: Created: latency-svc-7zets
Oct 23 22:54:47.634: INFO: Got endpoints: latency-svc-p5nx0 [2.541483416s]
Oct 23 22:54:47.716: INFO: Created: latency-svc-q4e1s
Oct 23 22:54:47.783: INFO: Got endpoints: latency-svc-qv2bc [2.632764237s]
Oct 23 22:54:47.840: INFO: Got endpoints: latency-svc-17whw [2.649333615s]
Oct 23 22:54:47.858: INFO: Created: latency-svc-ywuxt
Oct 23 22:54:47.917: INFO: Created: latency-svc-zvfyq
Oct 23 22:54:48.033: INFO: Got endpoints: latency-svc-v9zpy [2.693631473s]
Oct 23 22:54:48.103: INFO: Created: latency-svc-5sl03
Oct 23 22:54:48.232: INFO: Got endpoints: latency-svc-b2j77 [2.547671682s]
Oct 23 22:54:48.300: INFO: Created: latency-svc-ngutu
Oct 23 22:54:48.432: INFO: Got endpoints: latency-svc-6i8qh [2.660769306s]
Oct 23 22:54:48.513: INFO: Created: latency-svc-l0gvb
Oct 23 22:54:48.632: INFO: Got endpoints: latency-svc-v4oau [2.746898804s]
Oct 23 22:54:48.702: INFO: Created: latency-svc-j5rnt
Oct 23 22:54:48.782: INFO: Got endpoints: latency-svc-lm6mc [2.640021878s]
Oct 23 22:54:48.848: INFO: Created: latency-svc-8mbsd
Oct 23 22:54:48.983: INFO: Got endpoints: latency-svc-6es71 [2.644175272s]
Oct 23 22:54:49.055: INFO: Created: latency-svc-12o8l
Oct 23 22:54:49.282: INFO: Got endpoints: latency-svc-ebnbe [2.593657038s]
Oct 23 22:54:49.358: INFO: Created: latency-svc-prstu
Oct 23 22:54:49.601: INFO: Got endpoints: latency-svc-z4okc [2.827641023s]
Oct 23 22:54:49.637: INFO: Got endpoints: latency-svc-tf9zr [2.832691586s]
Oct 23 22:54:49.719: INFO: Created: latency-svc-piegb
Oct 23 22:54:49.751: INFO: Created: latency-svc-ezn8z
Oct 23 22:54:50.183: INFO: Got endpoints: latency-svc-uk7v5 [2.997188358s]
Oct 23 22:54:50.225: INFO: Created: latency-svc-amy7t
Oct 23 22:54:50.237: INFO: Got endpoints: latency-svc-9adcl [3.010691001s]
Oct 23 22:54:50.309: INFO: Created: latency-svc-p0ksw
Oct 23 22:54:50.383: INFO: Got endpoints: latency-svc-7zets [3.077653737s]
Oct 23 22:54:50.447: INFO: Created: latency-svc-xxude
Oct 23 22:54:50.632: INFO: Got endpoints: latency-svc-q4e1s [2.93054407s]
Oct 23 22:54:50.701: INFO: Created: latency-svc-2nxux
Oct 23 22:54:50.832: INFO: Got endpoints: latency-svc-ywuxt [2.996180414s]
Oct 23 22:54:50.921: INFO: Created: latency-svc-68hd8
Oct 23 22:54:50.982: INFO: Got endpoints: latency-svc-zvfyq [3.079197422s]
Oct 23 22:54:51.051: INFO: Created: latency-svc-2bb40
Oct 23 22:54:51.232: INFO: Got endpoints: latency-svc-5sl03 [3.141172076s]
Oct 23 22:54:51.303: INFO: Created: latency-svc-ded0g
Oct 23 22:54:51.386: INFO: Got endpoints: latency-svc-ngutu [3.098072486s]
Oct 23 22:54:51.454: INFO: Created: latency-svc-rbp9d
Oct 23 22:54:51.598: INFO: Got endpoints: latency-svc-l0gvb [3.110642466s]
Oct 23 22:54:51.672: INFO: Created: latency-svc-j114w
Oct 23 22:54:51.782: INFO: Got endpoints: latency-svc-j5rnt [3.094805775s]
Oct 23 22:54:51.848: INFO: Created: latency-svc-ilsh2
Oct 23 22:54:51.933: INFO: Got endpoints: latency-svc-8mbsd [3.099637656s]
Oct 23 22:54:52.009: INFO: Created: latency-svc-fnesk
Oct 23 22:54:52.133: INFO: Got endpoints: latency-svc-12o8l [3.096867685s]
Oct 23 22:54:52.202: INFO: Created: latency-svc-ech8k
Oct 23 22:54:52.282: INFO: Got endpoints: latency-svc-prstu [2.944109229s]
Oct 23 22:54:52.358: INFO: Created: latency-svc-vi3lu
Oct 23 22:54:52.532: INFO: Got endpoints: latency-svc-piegb [2.856129266s]
Oct 23 22:54:52.596: INFO: Created: latency-svc-ag1bg
Oct 23 22:54:52.682: INFO: Got endpoints: latency-svc-ezn8z [2.943669132s]
Oct 23 22:54:52.753: INFO: Created: latency-svc-ljit0
Oct 23 22:54:52.781: INFO: Got endpoints: latency-svc-amy7t [2.57156642s]
Oct 23 22:54:52.864: INFO: Created: latency-svc-cwma4
Oct 23 22:54:53.033: INFO: Got endpoints: latency-svc-p0ksw [2.737425548s]
Oct 23 22:54:53.122: INFO: Created: latency-svc-elu15
Oct 23 22:54:53.183: INFO: Got endpoints: latency-svc-xxude [2.748583123s]
Oct 23 22:54:53.262: INFO: Created: latency-svc-a5ihk
Oct 23 22:54:53.385: INFO: Got endpoints: latency-svc-2nxux [2.698850626s]
Oct 23 22:54:53.499: INFO: Created: latency-svc-1y1kl
Oct 23 22:54:53.600: INFO: Got endpoints: latency-svc-68hd8 [2.691206685s]
Oct 23 22:54:53.703: INFO: Created: latency-svc-xofcz
Oct 23 22:54:53.732: INFO: Got endpoints: latency-svc-2bb40 [2.695484834s]
Oct 23 22:54:53.813: INFO: Created: latency-svc-6v43t
Oct 23 22:54:53.932: INFO: Got endpoints: latency-svc-ded0g [2.647689665s]
Oct 23 22:54:54.012: INFO: Created: latency-svc-wf9d0
Oct 23 22:54:54.083: INFO: Got endpoints: latency-svc-rbp9d [2.643683289s]
Oct 23 22:54:54.153: INFO: Created: latency-svc-84ei6
Oct 23 22:54:54.282: INFO: Got endpoints: latency-svc-j114w [2.625032046s]
Oct 23 22:54:54.350: INFO: Created: latency-svc-xef2g
Oct 23 22:54:54.482: INFO: Got endpoints: latency-svc-ilsh2 [2.647924692s]
Oct 23 22:54:54.551: INFO: Created: latency-svc-oyx7x
Oct 23 22:54:54.632: INFO: Got endpoints: latency-svc-fnesk [2.638100818s]
Oct 23 22:54:54.722: INFO: Created: latency-svc-f41tw
Oct 23 22:54:54.833: INFO: Got endpoints: latency-svc-ech8k [2.642838411s]
Oct 23 22:54:54.963: INFO: Created: latency-svc-bwi6x
Oct 23 22:54:54.982: INFO: Got endpoints: latency-svc-vi3lu [2.643420896s]
Oct 23 22:54:55.055: INFO: Created: latency-svc-dhn8u
Oct 23 22:54:55.184: INFO: Got endpoints: latency-svc-ag1bg [2.600326316s]
Oct 23 22:54:55.248: INFO: Created: latency-svc-awthi
Oct 23 22:54:55.391: INFO: Got endpoints: latency-svc-ljit0 [2.653036807s]
Oct 23 22:54:55.462: INFO: Created: latency-svc-jat6r
Oct 23 22:54:55.533: INFO: Got endpoints: latency-svc-cwma4 [2.684977173s]
Oct 23 22:54:55.604: INFO: Created: latency-svc-vhen8
Oct 23 22:54:55.733: INFO: Got endpoints: latency-svc-elu15 [2.633809472s]
Oct 23 22:54:55.832: INFO: Created: latency-svc-mlzkq
Oct 23 22:54:55.882: INFO: Got endpoints: latency-svc-a5ihk [2.641413126s]
Oct 23 22:54:55.958: INFO: Created: latency-svc-b85qh
Oct 23 22:54:56.083: INFO: Got endpoints: latency-svc-1y1kl [2.640496453s]
Oct 23 22:54:56.153: INFO: Created: latency-svc-jk74e
Oct 23 22:54:56.282: INFO: Got endpoints: latency-svc-xofcz [2.593702671s]
Oct 23 22:54:56.412: INFO: Created: latency-svc-7l6l1
Oct 23 22:54:56.482: INFO: Got endpoints: latency-svc-6v43t [2.683794913s]
Oct 23 22:54:56.547: INFO: Created: latency-svc-cfoq1
Oct 23 22:54:56.681: INFO: Got endpoints: latency-svc-wf9d0 [2.6804341s]
Oct 23 22:54:56.766: INFO: Created: latency-svc-xjz56
Oct 23 22:54:56.833: INFO: Got endpoints: latency-svc-84ei6 [2.691378779s]
Oct 23 22:54:56.898: INFO: Created: latency-svc-gs766
Oct 23 22:54:57.032: INFO: Got endpoints: latency-svc-xef2g [2.695072325s]
Oct 23 22:54:57.100: INFO: Created: latency-svc-o3wof
Oct 23 22:54:57.232: INFO: Got endpoints: latency-svc-oyx7x [2.69485965s]
Oct 23 22:54:57.298: INFO: Created: latency-svc-ob94u
Oct 23 22:54:57.396: INFO: Got endpoints: latency-svc-f41tw [2.710513284s]
Oct 23 22:54:57.464: INFO: Created: latency-svc-3lhq1
Oct 23 22:54:57.581: INFO: Got endpoints: latency-svc-bwi6x [2.634614555s]
Oct 23 22:54:57.655: INFO: Created: latency-svc-mj61b
Oct 23 22:54:57.734: INFO: Got endpoints: latency-svc-dhn8u [2.699009065s]
Oct 23 22:54:57.849: INFO: Created: latency-svc-ldamm
Oct 23 22:54:57.932: INFO: Got endpoints: latency-svc-awthi [2.694378137s]
Oct 23 22:54:57.996: INFO: Created: latency-svc-dgpj2
Oct 23 22:54:58.132: INFO: Got endpoints: latency-svc-jat6r [2.684140623s]
Oct 23 22:54:58.207: INFO: Created: latency-svc-wljei
Oct 23 22:54:58.282: INFO: Got endpoints: latency-svc-vhen8 [2.692909717s]
Oct 23 22:54:58.350: INFO: Created: latency-svc-xq93h
Oct 23 22:54:58.482: INFO: Got endpoints: latency-svc-mlzkq [2.671941548s]
Oct 23 22:54:58.555: INFO: Created: latency-svc-t2mm7
Oct 23 22:54:58.632: INFO: Got endpoints: latency-svc-b85qh [2.68747017s]
Oct 23 22:54:58.722: INFO: Created: latency-svc-vebii
Oct 23 22:54:58.833: INFO: Got endpoints: latency-svc-jk74e [2.691776643s]
Oct 23 22:54:58.898: INFO: Created: latency-svc-skgaq
Oct 23 22:54:59.032: INFO: Got endpoints: latency-svc-7l6l1 [2.631855495s]
Oct 23 22:54:59.105: INFO: Created: latency-svc-5y2dj
Oct 23 22:54:59.183: INFO: Got endpoints: latency-svc-cfoq1 [2.652129469s]
Oct 23 22:54:59.255: INFO: Created: latency-svc-41kmb
Oct 23 22:54:59.395: INFO: Got endpoints: latency-svc-xjz56 [2.641929107s]
Oct 23 22:54:59.469: INFO: Created: latency-svc-twi5t
Oct 23 22:54:59.532: INFO: Got endpoints: latency-svc-gs766 [2.645035956s]
Oct 23 22:54:59.610: INFO: Created: latency-svc-2bbkm
Oct 23 22:54:59.782: INFO: Got endpoints: latency-svc-o3wof [2.693427243s]
Oct 23 22:54:59.860: INFO: Created: latency-svc-bc1g1
Oct 23 22:54:59.982: INFO: Got endpoints: latency-svc-ob94u [2.695509392s]
Oct 23 22:55:00.056: INFO: Created: latency-svc-u47xu
Oct 23 22:55:00.132: INFO: Got endpoints: latency-svc-3lhq1 [2.68268963s]
Oct 23 22:55:00.205: INFO: Created: latency-svc-pyfe1
Oct 23 22:55:00.482: INFO: Got endpoints: latency-svc-mj61b [2.839586776s]
Oct 23 22:55:00.553: INFO: Created: latency-svc-qj66q
Oct 23 22:55:00.632: INFO: Got endpoints: latency-svc-ldamm [2.794584856s]
Oct 23 22:55:00.696: INFO: Created: latency-svc-aarys
Oct 23 22:55:01.033: INFO: Got endpoints: latency-svc-dgpj2 [3.050909083s]
Oct 23 22:55:01.117: INFO: Created: latency-svc-kol4o
Oct 23 22:55:01.232: INFO: Got endpoints: latency-svc-wljei [3.038175019s]
Oct 23 22:55:01.302: INFO: Created: latency-svc-8brpt
Oct 23 22:55:01.394: INFO: Got endpoints: latency-svc-xq93h [3.061093348s]
Oct 23 22:55:01.459: INFO: Created: latency-svc-ls4x2
Oct 23 22:55:01.582: INFO: Got endpoints: latency-svc-t2mm7 [3.043896611s]
Oct 23 22:55:01.650: INFO: Created: latency-svc-8oacs
Oct 23 22:55:01.782: INFO: Got endpoints: latency-svc-vebii [3.075192874s]
Oct 23 22:55:01.857: INFO: Created: latency-svc-wllty
Oct 23 22:55:01.987: INFO: Got endpoints: latency-svc-skgaq [3.102020397s]
Oct 23 22:55:02.097: INFO: Created: latency-svc-kqe4b
Oct 23 22:55:02.188: INFO: Got endpoints: latency-svc-5y2dj [3.09726706s]
Oct 23 22:55:02.272: INFO: Created: latency-svc-zsijr
Oct 23 22:55:02.331: INFO: Got endpoints: latency-svc-41kmb [3.093798972s]
Oct 23 22:55:02.435: INFO: Created: latency-svc-fbb1s
Oct 23 22:55:02.537: INFO: Got endpoints: latency-svc-twi5t [3.08652896s]
Oct 23 22:55:02.622: INFO: Created: latency-svc-a7bqp
Oct 23 22:55:02.682: INFO: Got endpoints: latency-svc-2bbkm [3.087264571s]
Oct 23 22:55:02.756: INFO: Created: latency-svc-qnhw3
Oct 23 22:55:02.882: INFO: Got endpoints: latency-svc-bc1g1 [3.036333134s]
Oct 23 22:55:02.964: INFO: Created: latency-svc-nikt5
Oct 23 22:55:03.083: INFO: Got endpoints: latency-svc-u47xu [3.049113034s]
Oct 23 22:55:03.195: INFO: Created: latency-svc-pi5g7
Oct 23 22:55:03.233: INFO: Got endpoints: latency-svc-pyfe1 [3.040132732s]
Oct 23 22:55:03.305: INFO: Created: latency-svc-jey9b
Oct 23 22:55:03.432: INFO: Got endpoints: latency-svc-qj66q [2.895642614s]
Oct 23 22:55:03.499: INFO: Created: latency-svc-02ksk
Oct 23 22:55:03.582: INFO: Got endpoints: latency-svc-aarys [2.897710929s]
Oct 23 22:55:03.657: INFO: Created: latency-svc-08znm
Oct 23 22:55:03.782: INFO: Got endpoints: latency-svc-kol4o [2.681991706s]
Oct 23 22:55:03.846: INFO: Created: latency-svc-dvd21
Oct 23 22:55:03.982: INFO: Got endpoints: latency-svc-8brpt [2.691722267s]
Oct 23 22:55:04.074: INFO: Created: latency-svc-pa67r
Oct 23 22:55:04.132: INFO: Got endpoints: latency-svc-ls4x2 [2.684668888s]
Oct 23 22:55:04.213: INFO: Created: latency-svc-csq94
Oct 23 22:55:04.332: INFO: Got endpoints: latency-svc-8oacs [2.69779114s]
Oct 23 22:55:04.431: INFO: Created: latency-svc-qjmtp
Oct 23 22:55:04.482: INFO: Got endpoints: latency-svc-wllty [2.639744418s]
Oct 23 22:55:04.563: INFO: Created: latency-svc-76kb6
Oct 23 22:55:04.683: INFO: Got endpoints: latency-svc-kqe4b [2.60573588s]
Oct 23 22:55:04.757: INFO: Created: latency-svc-bo8lu
Oct 23 22:55:04.883: INFO: Got endpoints: latency-svc-zsijr [2.624853857s]
Oct 23 22:55:04.952: INFO: Created: latency-svc-ldwun
Oct 23 22:55:05.032: INFO: Got endpoints: latency-svc-fbb1s [2.610190637s]
Oct 23 22:55:05.122: INFO: Created: latency-svc-h32yu
Oct 23 22:55:05.232: INFO: Got endpoints: latency-svc-a7bqp [2.631072086s]
Oct 23 22:55:05.299: INFO: Created: latency-svc-7jevh
Oct 23 22:55:05.384: INFO: Got endpoints: latency-svc-qnhw3 [2.643716346s]
Oct 23 22:55:05.452: INFO: Created: latency-svc-hituc
Oct 23 22:55:05.582: INFO: Got endpoints: latency-svc-nikt5 [2.632851857s]
Oct 23 22:55:05.655: INFO: Created: latency-svc-ewzmd
Oct 23 22:55:05.783: INFO: Got endpoints: latency-svc-pi5g7 [2.616310798s]
Oct 23 22:55:05.861: INFO: Created: latency-svc-avlze
Oct 23 22:55:05.932: INFO: Got endpoints: latency-svc-jey9b [2.644034074s]
Oct 23 22:55:06.016: INFO: Created: latency-svc-j0jxa
Oct 23 22:55:06.132: INFO: Got endpoints: latency-svc-02ksk [2.645977701s]
Oct 23 22:55:06.205: INFO: Created: latency-svc-wmv6a
Oct 23 22:55:06.282: INFO: Got endpoints: latency-svc-08znm [2.640975404s]
Oct 23 22:55:06.352: INFO: Created: latency-svc-ehahc
Oct 23 22:55:06.482: INFO: Got endpoints: latency-svc-dvd21 [2.647764971s]
Oct 23 22:55:06.552: INFO: Created: latency-svc-7wk62
Oct 23 22:55:06.682: INFO: Got endpoints: latency-svc-pa67r [2.624421458s]
Oct 23 22:55:06.758: INFO: Created: latency-svc-7l8v6
Oct 23 22:55:06.832: INFO: Got endpoints: latency-svc-csq94 [2.63228683s]
Oct 23 22:55:06.900: INFO: Created: latency-svc-juvvj
Oct 23 22:55:07.082: INFO: Got endpoints: latency-svc-qjmtp [2.663468325s]
Oct 23 22:55:07.156: INFO: Created: latency-svc-mw38i
Oct 23 22:55:07.232: INFO: Got endpoints: latency-svc-76kb6 [2.689722395s]
Oct 23 22:55:07.300: INFO: Created: latency-svc-fbkik
Oct 23 22:55:07.433: INFO: Got endpoints: latency-svc-bo8lu [2.688908999s]
Oct 23 22:55:07.505: INFO: Created: latency-svc-vdrw4
Oct 23 22:55:07.638: INFO: Got endpoints: latency-svc-ldwun [2.698951976s]
Oct 23 22:55:07.713: INFO: Created: latency-svc-qrqef
Oct 23 22:55:07.782: INFO: Got endpoints: latency-svc-h32yu [2.684248476s]
Oct 23 22:55:07.855: INFO: Created: latency-svc-kwhos
Oct 23 22:55:07.982: INFO: Got endpoints: latency-svc-7jevh [2.697596784s]
Oct 23 22:55:08.079: INFO: Created: latency-svc-oj3ig
Oct 23 22:55:08.133: INFO: Got endpoints: latency-svc-hituc [2.696033002s]
Oct 23 22:55:08.207: INFO: Created: latency-svc-i87c5
Oct 23 22:55:08.333: INFO: Got endpoints: latency-svc-ewzmd [2.689111292s]
Oct 23 22:55:08.451: INFO: Created: latency-svc-kn4pa
Oct 23 22:55:08.532: INFO: Got endpoints: latency-svc-avlze [2.689653633s]
Oct 23 22:55:08.601: INFO: Created: latency-svc-xsvl4
Oct 23 22:55:08.682: INFO: Got endpoints: latency-svc-j0jxa [2.680965697s]
Oct 23 22:55:08.761: INFO: Created: latency-svc-5qi3g
Oct 23 22:55:08.882: INFO: Got endpoints: latency-svc-wmv6a [2.690653622s]
Oct 23 22:55:08.955: INFO: Created: latency-svc-m6z7b
Oct 23 22:55:09.033: INFO: Got endpoints: latency-svc-ehahc [2.699751632s]
Oct 23 22:55:09.103: INFO: Created: latency-svc-41ew3
Oct 23 22:55:09.232: INFO: Got endpoints: latency-svc-7wk62 [2.691881866s]
Oct 23 22:55:09.305: INFO: Created: latency-svc-dhbmb
Oct 23 22:55:09.432: INFO: Got endpoints: latency-svc-7l8v6 [2.688351918s]
Oct 23 22:55:09.509: INFO: Created: latency-svc-3h9ev
Oct 23 22:55:09.584: INFO: Got endpoints: latency-svc-juvvj [2.699581805s]
Oct 23 22:55:09.662: INFO: Created: latency-svc-v5uev
Oct 23 22:55:09.782: INFO: Got endpoints: latency-svc-mw38i [2.639796938s]
Oct 23 22:55:09.849: INFO: Created: latency-svc-nmdt9
Oct 23 22:55:09.932: INFO: Got endpoints: latency-svc-fbkik [2.645640538s]
Oct 23 22:55:10.035: INFO: Created: latency-svc-dfmbf
Oct 23 22:55:10.185: INFO: Got endpoints: latency-svc-vdrw4 [2.692374383s]
Oct 23 22:55:10.251: INFO: Created: latency-svc-ksf08
Oct 23 22:55:10.390: INFO: Got endpoints: latency-svc-qrqef [2.690360637s]
Oct 23 22:55:10.454: INFO: Created: latency-svc-rwbz3
Oct 23 22:55:10.532: INFO: Got endpoints: latency-svc-kwhos [2.694705777s]
Oct 23 22:55:10.597: INFO: Created: latency-svc-7w6dk
Oct 23 22:55:10.741: INFO: Got endpoints: latency-svc-oj3ig [2.698787012s]
Oct 23 22:55:10.820: INFO: Created: latency-svc-t5vvw
Oct 23 22:55:10.883: INFO: Got endpoints: latency-svc-i87c5 [2.688459924s]
Oct 23 22:55:10.956: INFO: Created: latency-svc-vv3ig
Oct 23 22:55:11.134: INFO: Got endpoints: latency-svc-kn4pa [2.700410519s]
Oct 23 22:55:11.208: INFO: Created: latency-svc-ceiev
Oct 23 22:55:11.432: INFO: Got endpoints: latency-svc-xsvl4 [2.85142185s]
Oct 23 22:55:11.503: INFO: Created: latency-svc-ap4bf
Oct 23 22:55:11.735: INFO: Got endpoints: latency-svc-5qi3g [2.992515318s]
Oct 23 22:55:11.808: INFO: Created: latency-svc-6eck6
Oct 23 22:55:12.186: INFO: Got endpoints: latency-svc-m6z7b [3.247220947s]
Oct 23 22:55:12.250: INFO: Created: latency-svc-4b682
Oct 23 22:55:12.332: INFO: Got endpoints: latency-svc-41ew3 [3.240442037s]
Oct 23 22:55:12.464: INFO: Created: latency-svc-jxpc5
Oct 23 22:55:12.582: INFO: Got endpoints: latency-svc-dhbmb [3.294777171s]
Oct 23 22:55:12.652: INFO: Created: latency-svc-tgrwr
Oct 23 22:55:12.782: INFO: Got endpoints: latency-svc-3h9ev [3.291293889s]
Oct 23 22:55:12.857: INFO: Created: latency-svc-i47og
Oct 23 22:55:12.931: INFO: Got endpoints: latency-svc-v5uev [3.290153404s]
Oct 23 22:55:13.022: INFO: Created: latency-svc-x3wz1
Oct 23 22:55:13.132: INFO: Got endpoints: latency-svc-nmdt9 [3.299105869s]
Oct 23 22:55:13.207: INFO: Created: latency-svc-qv52s
Oct 23 22:55:13.282: INFO: Got endpoints: latency-svc-7w6dk [2.698567187s]
Oct 23 22:55:13.364: INFO: Created: latency-svc-uwie8
Oct 23 22:55:13.398: INFO: Got endpoints: latency-svc-ceiev [2.203378042s]
Oct 23 22:55:13.489: INFO: Got endpoints: latency-svc-dfmbf [3.502199351s]
Oct 23 22:55:13.495: INFO: Created: latency-svc-jvtc2
Oct 23 22:55:13.538: INFO: Got endpoints: latency-svc-ksf08 [3.300444075s]
Oct 23 22:55:13.608: INFO: Created: latency-svc-p0fwh
Oct 23 22:55:13.627: INFO: Created: latency-svc-zxoe9
Oct 23 22:55:13.734: INFO: Got endpoints: latency-svc-rwbz3 [3.291661472s]
Oct 23 22:55:13.773: INFO: Created: latency-svc-3qc19
Oct 23 22:55:13.882: INFO: Got endpoints: latency-svc-t5vvw [3.077011569s]
Oct 23 22:55:13.951: INFO: Created: latency-svc-9tqhy
Oct 23 22:55:13.982: INFO: Got endpoints: latency-svc-vv3ig [3.040549818s]
Oct 23 22:55:14.060: INFO: Created: latency-svc-3pj7q
Oct 23 22:55:15.186: INFO: Got endpoints: latency-svc-ap4bf [3.694117039s]
Oct 23 22:55:15.234: INFO: Got endpoints: latency-svc-6eck6 [3.444223434s]
Oct 23 22:55:15.296: INFO: Created: latency-svc-mahkj
Oct 23 22:55:15.307: INFO: Created: latency-svc-kvci1
Oct 23 22:55:15.331: INFO: Got endpoints: latency-svc-4b682 [3.093205325s]
Oct 23 22:55:15.384: INFO: Created: latency-svc-u2v1r
Oct 23 22:55:15.582: INFO: Got endpoints: latency-svc-jxpc5 [3.13409199s]
Oct 23 22:55:15.654: INFO: Created: latency-svc-22aec
Oct 23 22:55:15.782: INFO: Got endpoints: latency-svc-tgrwr [3.143554804s]
Oct 23 22:55:15.849: INFO: Created: latency-svc-mwg97
Oct 23 22:55:15.931: INFO: Got endpoints: latency-svc-i47og [3.091435132s]
Oct 23 22:55:16.016: INFO: Created: latency-svc-8v43a
Oct 23 22:55:16.131: INFO: Got endpoints: latency-svc-x3wz1 [3.124820873s]
Oct 23 22:55:16.201: INFO: Created: latency-svc-456gs
Oct 23 22:55:16.282: INFO: Got endpoints: latency-svc-qv52s [3.0867081s]
Oct 23 22:55:16.481: INFO: Got endpoints: latency-svc-uwie8 [3.139938786s]
Oct 23 22:55:16.682: INFO: Got endpoints: latency-svc-jvtc2 [3.198754969s]
Oct 23 22:55:16.882: INFO: Got endpoints: latency-svc-p0fwh [3.292650021s]
Oct 23 22:55:17.032: INFO: Got endpoints: latency-svc-zxoe9 [3.482229763s]
Oct 23 22:55:17.133: INFO: Got endpoints: latency-svc-3qc19 [3.372064612s]
Oct 23 22:55:17.386: INFO: Got endpoints: latency-svc-9tqhy [3.448340001s]
Oct 23 22:55:17.582: INFO: Got endpoints: latency-svc-3pj7q [3.547666616s]
Oct 23 22:55:17.782: INFO: Got endpoints: latency-svc-mahkj [2.544580175s]
Oct 23 22:55:17.982: INFO: Got endpoints: latency-svc-kvci1 [2.693996762s]
Oct 23 22:55:18.081: INFO: Got endpoints: latency-svc-u2v1r [2.711187843s]
Oct 23 22:55:18.332: INFO: Got endpoints: latency-svc-22aec [2.692094824s]
Oct 23 22:55:18.532: INFO: Got endpoints: latency-svc-mwg97 [2.697735671s]
Oct 23 22:55:18.682: INFO: Got endpoints: latency-svc-8v43a [2.681688125s]
Oct 23 22:55:18.882: INFO: Got endpoints: latency-svc-456gs [2.693280799s]
STEP: deleting replication controller svc-latency-rc in namespace e2e-tests-svc-latency-2g2kl
Oct 23 22:55:20.969: INFO: Deleting RC svc-latency-rc took: 2.058756021s
Oct 23 22:55:30.975: INFO: Terminating RC svc-latency-rc pods took: 10.00641052s
Oct 23 22:55:30.975: INFO: Latencies: [169.681484ms 170.114752ms 210.71226ms 211.528137ms 259.188129ms 282.709433ms 338.061959ms 347.927223ms 395.314858ms 521.266164ms 553.007897ms 687.587137ms 702.979852ms 903.84066ms 927.293485ms 1.148875633s 1.198242321s 1.273670266s 1.327191039s 1.379850937s 1.390455614s 1.469710786s 1.765222824s 1.836750586s 1.854792985s 1.969624051s 1.983548775s 2.176514882s 2.187015416s 2.200463429s 2.203378042s 2.210588328s 2.237873326s 2.25635822s 2.267010874s 2.536586301s 2.541483416s 2.544107002s 2.544580175s 2.547671682s 2.549443787s 2.57156642s 2.593657038s 2.593702671s 2.600326316s 2.60573588s 2.610190637s 2.616310798s 2.624421458s 2.624853857s 2.625032046s 2.631072086s 2.631855495s 2.63228683s 2.632764237s 2.632851857s 2.633809472s 2.634614555s 2.638100818s 2.639744418s 2.639796938s 2.640021878s 2.640496453s 2.640975404s 2.641413126s 2.641929107s 2.642838411s 2.643420896s 2.643683289s 2.643716346s 2.644034074s 2.644175272s 2.645035956s 2.645640538s 2.645977701s 2.647689665s 2.647764971s 2.647924692s 2.649333615s 2.652129469s 2.653036807s 2.660769306s 2.663468325s 2.671941548s 2.6804341s 2.680965697s 2.681688125s 2.681991706s 2.68268963s 2.683794913s 2.684140623s 2.684248476s 2.684668888s 2.684977173s 2.68747017s 2.688351918s 2.688459924s 2.688908999s 2.689111292s 2.689653633s 2.689722395s 2.690360637s 2.690653622s 2.691206685s 2.691378779s 2.691722267s 2.691776643s 2.691881866s 2.692094824s 2.692374383s 2.692909717s 2.693280799s 2.693427243s 2.693631473s 2.693996762s 2.694378137s 2.694705777s 2.69485965s 2.695072325s 2.695484834s 2.695509392s 2.696033002s 2.697596784s 2.697735671s 2.69779114s 2.698567187s 2.698787012s 2.698850626s 2.698951976s 2.699009065s 2.699581805s 2.699751632s 2.700410519s 2.710513284s 2.711187843s 2.737425548s 2.746898804s 2.748583123s 2.794584856s 2.827641023s 2.832691586s 2.839586776s 2.85142185s 2.856129266s 2.895642614s 2.897710929s 2.93054407s 2.943669132s 2.944109229s 2.992515318s 2.996180414s 
2.997188358s 3.010691001s 3.036333134s 3.038175019s 3.040132732s 3.040549818s 3.043896611s 3.049113034s 3.050909083s 3.061093348s 3.075192874s 3.077011569s 3.077653737s 3.079197422s 3.08652896s 3.0867081s 3.087264571s 3.091435132s 3.093205325s 3.093798972s 3.094805775s 3.096867685s 3.09726706s 3.098072486s 3.099637656s 3.102020397s 3.110642466s 3.124820873s 3.13409199s 3.139938786s 3.141172076s 3.143554804s 3.198754969s 3.240442037s 3.247220947s 3.290153404s 3.291293889s 3.291661472s 3.292650021s 3.294777171s 3.299105869s 3.300444075s 3.372064612s 3.444223434s 3.448340001s 3.482229763s 3.502199351s 3.547666616s 3.694117039s]
Oct 23 22:55:30.977: INFO: 50 %ile: 2.689722395s
Oct 23 22:55:30.977: INFO: 90 %ile: 3.139938786s
Oct 23 22:55:30.977: INFO: 99 %ile: 3.547666616s
Oct 23 22:55:30.977: INFO: Total sample count: 200
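The 50/90/99 %ile figures above come from the sorted latency list printed just before them. As a minimal sketch (a nearest-rank convention is assumed here; this is not the actual e2e framework code), the computation looks like:

```python
def percentile(sorted_latencies, p):
    """Nearest-rank percentile over an already-sorted list of samples (seconds)."""
    if not sorted_latencies:
        raise ValueError("empty sample")
    # Clamp so p=100 (or rounding) never indexes past the end.
    idx = min(len(sorted_latencies) - 1, int(p / 100.0 * len(sorted_latencies)))
    return sorted_latencies[idx]

# A few latencies taken from the log above, sorted as the framework prints them.
samples = sorted([0.169681484, 0.210712260, 1.148875633, 2.200463429,
                  2.689722395, 3.139938786, 3.290153404, 3.547666616, 3.694117039])
print(percentile(samples, 50), percentile(samples, 90), percentile(samples, 99))
```

With the full 200-sample list from the log, this convention reproduces summary values of the same shape as the INFO lines above; the exact index rule used by the test suite may differ slightly.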
[AfterEach] Service endpoints latency
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 22:55:30.977: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 22:55:30.981: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 22:55:30.981: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 22:55:30.981: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 22:55:30.981: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 22:55:30.981: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 22:55:30.981: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 22:55:30.981: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 22:55:30.981: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 22:55:30.981: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 22:55:30.981: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 22:55:30.981: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 22:55:30.981: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-svc-latency-2g2kl" for this suite.
• [SLOW TEST:76.541 seconds]
Service endpoints latency
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_latency.go:117
should not be very high [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_latency.go:116
------------------------------
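For post-hoc analysis of a run like the one above, the per-service latencies can be recovered directly from the `Got endpoints: ... [duration]` lines. A small sketch (the regex is an assumption based on the line format visible in this log, not a tool shipped with the test suite):

```python
import re

# Matches lines like:
#   Oct 23 22:54:46.685: INFO: Got endpoints: latency-svc-0slod [2.200463429s]
LINE_RE = re.compile(r"Got endpoints: (latency-svc-\S+) \[([\d.]+)s\]")

def extract_latencies(log_text):
    """Return {service_name: seconds} for every 'Got endpoints' line in the log."""
    return {m.group(1): float(m.group(2)) for m in LINE_RE.finditer(log_text)}

sample = "Oct 23 22:54:46.685: INFO: Got endpoints: latency-svc-0slod [2.200463429s]"
print(extract_latencies(sample))
```

Feeding the whole console log through `extract_latencies` yields the same 200 samples the framework summarizes in its `Latencies:` line.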
[BeforeEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:89
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
Oct 23 22:55:56.001: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-dlj4u
Oct 23 22:55:56.027: INFO: Service account default in ns e2e-tests-kubectl-dlj4u had 0 secrets, ignoring for 2s: <nil>
Oct 23 22:55:58.030: INFO: Service account default in ns e2e-tests-kubectl-dlj4u with secrets found. (2.029268322s)
[BeforeEach] Update Demo
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:104
[It] should scale a replication controller [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:125
STEP: creating a replication controller
Oct 23 22:55:58.030: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config create -f /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/docs/user-guide/update-demo/nautilus-rc.yaml --namespace=e2e-tests-kubectl-dlj4u'
Oct 23 22:55:58.241: INFO: replicationcontroller "update-demo-nautilus" created
STEP: waiting for all containers in name=update-demo pods to come up.
Oct 23 22:55:58.241: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-dlj4u'
Oct 23 22:55:58.423: INFO: update-demo-nautilus-7yd6j update-demo-nautilus-tpg6h
Oct 23 22:55:58.423: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods update-demo-nautilus-7yd6j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-dlj4u'
Oct 23 22:55:58.608: INFO:
Oct 23 22:55:58.608: INFO: update-demo-nautilus-7yd6j is created but not running
Oct 23 22:56:03.609: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-dlj4u'
Oct 23 22:56:03.794: INFO: update-demo-nautilus-7yd6j update-demo-nautilus-tpg6h
Oct 23 22:56:03.794: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods update-demo-nautilus-7yd6j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-dlj4u'
Oct 23 22:56:03.975: INFO: true
Oct 23 22:56:03.975: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods update-demo-nautilus-7yd6j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-dlj4u'
Oct 23 22:56:04.157: INFO: gcr.io/google_containers/update-demo:nautilus
Oct 23 22:56:04.157: INFO: validating pod update-demo-nautilus-7yd6j
Oct 23 22:56:04.161: INFO: got data: {
"image": "nautilus.jpg"
}
Oct 23 22:56:04.161: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Oct 23 22:56:04.161: INFO: update-demo-nautilus-7yd6j is verified up and running
Oct 23 22:56:04.161: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods update-demo-nautilus-tpg6h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-dlj4u'
Oct 23 22:56:04.342: INFO:
Oct 23 22:56:04.342: INFO: update-demo-nautilus-tpg6h is created but not running
Oct 23 22:56:09.343: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-dlj4u'
Oct 23 22:56:09.526: INFO: update-demo-nautilus-7yd6j update-demo-nautilus-tpg6h
Oct 23 22:56:09.526: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods update-demo-nautilus-7yd6j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-dlj4u'
Oct 23 22:56:09.707: INFO: true
Oct 23 22:56:09.707: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods update-demo-nautilus-7yd6j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-dlj4u'
Oct 23 22:56:09.888: INFO: gcr.io/google_containers/update-demo:nautilus
Oct 23 22:56:09.888: INFO: validating pod update-demo-nautilus-7yd6j
Oct 23 22:56:09.892: INFO: got data: {
"image": "nautilus.jpg"
}
Oct 23 22:56:09.892: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Oct 23 22:56:09.892: INFO: update-demo-nautilus-7yd6j is verified up and running
Oct 23 22:56:09.892: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods update-demo-nautilus-tpg6h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-dlj4u'
Oct 23 22:56:10.085: INFO:
Oct 23 22:56:10.085: INFO: update-demo-nautilus-tpg6h is created but not running
Oct 23 22:56:15.085: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-dlj4u'
Oct 23 22:56:15.267: INFO: update-demo-nautilus-7yd6j update-demo-nautilus-tpg6h
Oct 23 22:56:15.267: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods update-demo-nautilus-7yd6j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-dlj4u'
Oct 23 22:56:15.446: INFO: true
Oct 23 22:56:15.446: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods update-demo-nautilus-7yd6j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-dlj4u'
Oct 23 22:56:15.626: INFO: gcr.io/google_containers/update-demo:nautilus
Oct 23 22:56:15.626: INFO: validating pod update-demo-nautilus-7yd6j
Oct 23 22:56:15.630: INFO: got data: {
"image": "nautilus.jpg"
}
Oct 23 22:56:15.630: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Oct 23 22:56:15.630: INFO: update-demo-nautilus-7yd6j is verified up and running
Oct 23 22:56:15.630: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods update-demo-nautilus-tpg6h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-dlj4u'
Oct 23 22:56:15.810: INFO: true
Oct 23 22:56:15.810: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods update-demo-nautilus-tpg6h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-dlj4u'
Oct 23 22:56:15.992: INFO: gcr.io/google_containers/update-demo:nautilus
Oct 23 22:56:15.992: INFO: validating pod update-demo-nautilus-tpg6h
Oct 23 22:56:15.996: INFO: got data: {
"image": "nautilus.jpg"
}
Oct 23 22:56:15.996: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Oct 23 22:56:15.996: INFO: update-demo-nautilus-tpg6h is verified up and running
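The validation above boils down to two `kubectl get pods -o template` queries per pod: one that prints `true` only when the `update-demo` container is in state `running`, and one that prints the container's image. A minimal sketch, with hypothetical helper names of ours (the `$KUBECTL` variable stands in for the full `--server`/`--kubeconfig`/`--namespace` invocation from the log):

```shell
# Helpers mirroring the e2e test's per-pod template queries.
# $KUBECTL stands in for the full kubectl invocation from the log.
KUBECTL="${KUBECTL:-kubectl --namespace=e2e-tests-kubectl-dlj4u}"

# Prints "true" iff the update-demo container in pod $1 is running.
pod_running() {
  $KUBECTL get pods "$1" -o template --api-version=v1 \
    --template='{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
}

# Prints the image the update-demo container in pod $1 is running.
pod_image() {
  $KUBECTL get pods "$1" -o template --api-version=v1 \
    --template='{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
}

# Usage (against a live cluster):
#   pod_running update-demo-nautilus-7yd6j   # prints "true" when running
#   pod_image update-demo-nautilus-7yd6j     # prints the image reference
```

An empty result from `pod_running` (as at 22:56:10.085 above) is how the test distinguishes "created but not running" from "running".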
STEP: scaling down the replication controller
Oct 23 22:56:15.996: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-dlj4u'
Oct 23 22:56:18.213: INFO: replicationcontroller "update-demo-nautilus" scaled
STEP: waiting for all containers in name=update-demo pods to come up.
Oct 23 22:56:18.213: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-dlj4u'
Oct 23 22:56:18.409: INFO: update-demo-nautilus-7yd6j update-demo-nautilus-tpg6h
STEP: Replicas for name=update-demo: expected=1 actual=2
Oct 23 22:56:23.409: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-dlj4u'
Oct 23 22:56:23.591: INFO: update-demo-nautilus-7yd6j update-demo-nautilus-tpg6h
STEP: Replicas for name=update-demo: expected=1 actual=2
Oct 23 22:56:28.592: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-dlj4u'
Oct 23 22:56:28.782: INFO: update-demo-nautilus-tpg6h
Oct 23 22:56:28.782: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods update-demo-nautilus-tpg6h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-dlj4u'
Oct 23 22:56:28.976: INFO: true
Oct 23 22:56:28.976: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods update-demo-nautilus-tpg6h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-dlj4u'
Oct 23 22:56:29.179: INFO: gcr.io/google_containers/update-demo:nautilus
Oct 23 22:56:29.179: INFO: validating pod update-demo-nautilus-tpg6h
Oct 23 22:56:29.183: INFO: got data: {
"image": "nautilus.jpg"
}
Oct 23 22:56:29.183: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Oct 23 22:56:29.183: INFO: update-demo-nautilus-tpg6h is verified up and running
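The scale-down wait above ("expected=1 actual=2", re-polled every 5s until only one pod name comes back) is a count-matching loop. A generic sketch, assuming a producer command that prints pod names the way the test's `{{range .items}}{{.metadata.name}} {{end}}` template does:

```shell
# Poll until the producer command prints the expected number of pod
# names, retrying up to MAX_TRIES times with POLL_INTERVAL seconds
# between attempts. $1 = expected count, remaining args = producer.
wait_for_replicas() {
  expected="$1"; shift
  tries=0
  while [ "$tries" -lt "${MAX_TRIES:-60}" ]; do
    actual=$("$@" | wc -w)
    [ "$actual" -eq "$expected" ] && return 0
    echo "Replicas: expected=$expected actual=$actual"
    tries=$((tries + 1))
    sleep "${POLL_INTERVAL:-5}"
  done
  return 1
}

# Usage (against a live cluster):
#   wait_for_replicas 1 kubectl get pods -l name=update-demo \
#     -o template --template='{{range .items}}{{.metadata.name}} {{end}}'
```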
STEP: scaling up the replication controller
Oct 23 22:56:29.183: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-dlj4u'
Oct 23 22:56:31.442: INFO: replicationcontroller "update-demo-nautilus" scaled
STEP: waiting for all containers in name=update-demo pods to come up.
Oct 23 22:56:31.444: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-dlj4u'
Oct 23 22:56:31.634: INFO: update-demo-nautilus-s9mo7 update-demo-nautilus-tpg6h
Oct 23 22:56:31.634: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods update-demo-nautilus-s9mo7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-dlj4u'
Oct 23 22:56:31.814: INFO:
Oct 23 22:56:31.814: INFO: update-demo-nautilus-s9mo7 is created but not running
Oct 23 22:56:36.815: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-dlj4u'
Oct 23 22:56:36.997: INFO: update-demo-nautilus-s9mo7 update-demo-nautilus-tpg6h
Oct 23 22:56:36.998: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods update-demo-nautilus-s9mo7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-dlj4u'
Oct 23 22:56:37.178: INFO: true
Oct 23 22:56:37.178: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods update-demo-nautilus-s9mo7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-dlj4u'
Oct 23 22:56:37.357: INFO: gcr.io/google_containers/update-demo:nautilus
Oct 23 22:56:37.357: INFO: validating pod update-demo-nautilus-s9mo7
Oct 23 22:56:37.369: INFO: got data: {
"image": "nautilus.jpg"
}
Oct 23 22:56:37.370: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Oct 23 22:56:37.370: INFO: update-demo-nautilus-s9mo7 is verified up and running
Oct 23 22:56:37.370: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods update-demo-nautilus-tpg6h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-dlj4u'
Oct 23 22:56:37.550: INFO: true
Oct 23 22:56:37.551: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods update-demo-nautilus-tpg6h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-dlj4u'
Oct 23 22:56:37.729: INFO: gcr.io/google_containers/update-demo:nautilus
Oct 23 22:56:37.729: INFO: validating pod update-demo-nautilus-tpg6h
Oct 23 22:56:37.733: INFO: got data: {
"image": "nautilus.jpg"
}
Oct 23 22:56:37.733: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Oct 23 22:56:37.733: INFO: update-demo-nautilus-tpg6h is verified up and running
STEP: using delete to clean up resources
Oct 23 22:56:37.733: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config stop --grace-period=0 -f /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/docs/user-guide/update-demo/nautilus-rc.yaml --namespace=e2e-tests-kubectl-dlj4u'
Oct 23 22:56:39.963: INFO: replicationcontroller "update-demo-nautilus" deleted
Oct 23 22:56:39.963: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-dlj4u'
Oct 23 22:56:40.143: INFO:
Oct 23 22:56:40.143: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-dlj4u -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Oct 23 22:56:40.325: INFO:
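The cleanup step above deletes via the RC manifest (`kubectl stop` was the delete-with-grace command in this era of kubectl) and then confirms nothing labeled `name=update-demo` survives: both `get` commands must print nothing. A sketch with a hypothetical helper of ours; `$KUBECTL` again stands in for the full invocation from the log:

```shell
# Delete the resources created from a manifest, then verify that no
# rc/svc with the given label remain; returns 1 if leftovers exist.
# $KUBECTL stands in for the full kubectl invocation from the log.
KUBECTL="${KUBECTL:-kubectl --namespace=e2e-tests-kubectl-dlj4u}"

cleanup_and_verify() {
  manifest="$1"; label="$2"
  $KUBECTL stop --grace-period=0 -f "$manifest"
  leftovers=$($KUBECTL get rc,svc -l "$label" --no-headers)
  [ -z "$leftovers" ]
}

# Usage (against a live cluster):
#   cleanup_and_verify nautilus-rc.yaml name=update-demo
```

The test additionally filters pods on a missing `.metadata.deletionTimestamp`, so pods that are already terminating (as at 22:56:40.325 above) don't count as leftovers.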
[AfterEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:96
STEP: Destroying namespace for this suite e2e-tests-kubectl-dlj4u
• [SLOW TEST:49.344 seconds]
Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:925
Update Demo
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:136
should scale a replication controller [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:125
------------------------------
[BeforeEach] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 22:51:35.699: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-s7i2b
Oct 23 22:51:35.729: INFO: Service account default in ns e2e-tests-services-s7i2b had 0 secrets, ignoring for 2s: <nil>
Oct 23 22:51:37.731: INFO: Service account default in ns e2e-tests-services-s7i2b with secrets found. (2.031942064s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 22:51:37.731: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-s7i2b
Oct 23 22:51:37.733: INFO: Service account default in ns e2e-tests-services-s7i2b with secrets found. (1.675323ms)
[BeforeEach] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:54
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
[It] should be able to create a functioning NodePort service
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:402
STEP: creating service nodeportservice-test with type=NodePort in namespace e2e-tests-services-s7i2b
STEP: creating pod to be part of service nodeportservice-test
Oct 23 22:51:37.837: INFO: Pod name webserver: Found 1 pods out of 1
STEP: ensuring each pod is running
Oct 23 22:51:37.837: INFO: Waiting up to 5m0s for pod webserver-ss1be status to be running
Oct 23 22:51:37.843: INFO: Waiting for pod webserver-ss1be in namespace 'e2e-tests-services-s7i2b' status to be 'running'(found phase: "Pending", readiness: false) (5.148638ms elapsed)
Oct 23 22:51:39.845: INFO: Waiting for pod webserver-ss1be in namespace 'e2e-tests-services-s7i2b' status to be 'running'(found phase: "Pending", readiness: false) (2.007830586s elapsed)
Oct 23 22:51:41.848: INFO: Found pod 'webserver-ss1be' on node 'pull-e2e-0-minion-l2bc'
STEP: trying to dial each unique pod
Oct 23 22:51:41.856: INFO: Controller webserver: Got non-empty result from replica 1 [webserver-ss1be]: "<pre>\n<a href=\"test-webserver\">test-webserver</a>\n<a href=\"dev/\">dev/</a>\n<a href=\"proc/\">proc/</a>\n<a href=\"etc/\">etc/</a>\n<a href=\"sys/\">sys/</a>\n<a href=\".dockerenv\">.dockerenv</a>\n<a href=\".dockerinit\">.dockerinit</a>\n<a href=\"var/\">var/</a>\n</pre>\n", 1 of 1 required successes so far
STEP: hitting the pod through the service's NodePort
STEP: Waiting up to 5m0s for the url http://10.240.0.5:30964 to be reachable
Oct 23 22:51:41.898: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (36.654177ms)
Oct 23 22:51:43.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (2.073544996s)
Oct 23 22:51:45.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (4.073529049s)
Oct 23 22:51:47.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (6.07350684s)
Oct 23 22:51:49.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (8.073703963s)
Oct 23 22:51:51.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (10.073775977s)
Oct 23 22:51:53.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (12.073547524s)
Oct 23 22:51:55.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (14.073602085s)
Oct 23 22:51:57.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (16.073283265s)
Oct 23 22:51:59.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (18.073620852s)
Oct 23 22:52:01.936: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (20.07397999s)
Oct 23 22:52:03.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (22.073683142s)
Oct 23 22:52:05.936: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (24.073977737s)
Oct 23 22:52:07.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (26.073728978s)
Oct 23 22:52:09.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (28.073836108s)
Oct 23 22:52:11.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (30.07355118s)
Oct 23 22:52:13.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (32.073411282s)
Oct 23 22:52:15.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (34.07389645s)
Oct 23 22:52:17.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (36.07376979s)
Oct 23 22:52:19.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (38.07362522s)
Oct 23 22:52:21.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (40.073513685s)
Oct 23 22:52:23.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (42.073887055s)
Oct 23 22:52:25.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (44.073584212s)
Oct 23 22:52:27.936: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (46.073952483s)
Oct 23 22:52:29.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (48.073389059s)
Oct 23 22:52:31.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (50.073612505s)
Oct 23 22:52:33.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (52.073643063s)
Oct 23 22:52:35.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (54.073646273s)
Oct 23 22:52:37.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (56.073621843s)
Oct 23 22:52:39.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (58.073616482s)
Oct 23 22:52:41.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (1m0.073344748s)
Oct 23 22:52:43.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (1m2.073841668s)
Oct 23 22:52:45.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (1m4.073889314s)
Oct 23 22:52:47.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (1m6.07370234s)
Oct 23 22:52:49.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (1m8.073686806s)
Oct 23 22:52:51.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (1m10.073207634s)
Oct 23 22:52:53.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (1m12.073787938s)
Oct 23 22:52:55.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (1m14.073332983s)
Oct 23 22:52:57.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (1m16.073707559s)
Oct 23 22:52:59.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (1m18.07357372s)
Oct 23 22:53:01.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (1m20.073357401s)
Oct 23 22:53:03.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (1m22.073414653s)
Oct 23 22:53:05.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (1m24.0736608s)
Oct 23 22:53:07.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (1m26.073701568s)
Oct 23 22:53:09.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (1m28.073816572s)
Oct 23 22:53:11.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (1m30.073500243s)
Oct 23 22:53:13.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (1m32.073659484s)
Oct 23 22:53:15.936: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (1m34.073912578s)
Oct 23 22:53:17.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (1m36.073263883s)
Oct 23 22:53:19.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (1m38.07382248s)
Oct 23 22:53:21.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (1m40.073254644s)
Oct 23 22:53:23.936: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (1m42.073999397s)
Oct 23 22:53:25.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (1m44.073411006s)
Oct 23 22:53:27.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (1m46.073855033s)
Oct 23 22:53:29.936: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (1m48.074137921s)
Oct 23 22:53:31.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (1m50.073663929s)
Oct 23 22:53:33.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (1m52.073282442s)
Oct 23 22:53:35.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (1m54.073892685s)
Oct 23 22:53:37.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (1m56.073497341s)
Oct 23 22:53:39.936: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (1m58.073964509s)
Oct 23 22:53:41.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (2m0.073355542s)
Oct 23 22:53:43.936: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (2m2.074019185s)
Oct 23 22:53:45.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (2m4.073291861s)
Oct 23 22:53:47.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (2m6.073562104s)
Oct 23 22:53:49.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (2m8.073514221s)
Oct 23 22:53:51.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (2m10.073349266s)
Oct 23 22:53:53.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (2m12.073639081s)
Oct 23 22:53:55.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (2m14.073627777s)
Oct 23 22:53:57.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (2m16.073851148s)
Oct 23 22:53:59.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (2m18.073663085s)
Oct 23 22:54:01.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (2m20.073652483s)
Oct 23 22:54:03.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (2m22.073746015s)
Oct 23 22:54:05.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (2m24.073666686s)
Oct 23 22:54:07.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (2m26.0737672s)
[... the same connection-refused retry, repeated every 2s, elided ...]
Oct 23 22:56:41.935: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (5m0.073238656s)
Oct 23 22:56:41.972: INFO: Got error waiting for reachability of http://10.240.0.5:30964: Get http://10.240.0.5:30964: dial tcp 10.240.0.5:30964: connection refused (5m0.110246968s)
STEP: deleting service nodeportservice-test in namespace e2e-tests-services-s7i2b
STEP: stopping RC webserver in namespace e2e-tests-services-s7i2b
[AfterEach] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
STEP: Collecting events from namespace "e2e-tests-services-s7i2b".
Oct 23 22:56:42.042: INFO: event for nodeportservice-test: {service-controller } DeletingLoadBalancer: Deleting load balancer
Oct 23 22:56:42.042: INFO: event for webserver-ss1be: {scheduler } Scheduled: Successfully assigned webserver-ss1be to pull-e2e-0-minion-l2bc
Oct 23 22:56:42.042: INFO: event for webserver-ss1be: {kubelet pull-e2e-0-minion-l2bc} Pulling: pulling image "beta.gcr.io/google_containers/pause:2.0"
Oct 23 22:56:42.042: INFO: event for webserver-ss1be: {kubelet pull-e2e-0-minion-l2bc} Pulled: Successfully pulled image "beta.gcr.io/google_containers/pause:2.0"
Oct 23 22:56:42.042: INFO: event for webserver-ss1be: {kubelet pull-e2e-0-minion-l2bc} Created: Created with docker id 64230cb8a957
Oct 23 22:56:42.042: INFO: event for webserver-ss1be: {kubelet pull-e2e-0-minion-l2bc} Started: Started with docker id 64230cb8a957
Oct 23 22:56:42.042: INFO: event for webserver-ss1be: {kubelet pull-e2e-0-minion-l2bc} Pulling: pulling image "gcr.io/google_containers/test-webserver"
Oct 23 22:56:42.042: INFO: event for webserver-ss1be: {kubelet pull-e2e-0-minion-l2bc} Pulled: Successfully pulled image "gcr.io/google_containers/test-webserver"
Oct 23 22:56:42.042: INFO: event for webserver-ss1be: {kubelet pull-e2e-0-minion-l2bc} Created: Created with docker id 3bae85ae737b
Oct 23 22:56:42.042: INFO: event for webserver-ss1be: {kubelet pull-e2e-0-minion-l2bc} Started: Started with docker id 3bae85ae737b
Oct 23 22:56:42.042: INFO: event for webserver: {replication-controller } SuccessfulCreate: Created pod: webserver-ss1be
Oct 23 22:56:42.049: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 23 22:56:42.049: INFO: webserver-ss1be pull-e2e-0-minion-l2bc Running 30s [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:51:40 +0000 UTC }]
Oct 23 22:56:42.049: INFO: elasticsearch-logging-v1-3gvtt pull-e2e-0-minion-djcb Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:50:28 +0000 UTC }]
Oct 23 22:56:42.049: INFO: elasticsearch-logging-v1-t59p5 pull-e2e-0-minion-1dli Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:50:10 +0000 UTC }]
Oct 23 22:56:42.049: INFO: heapster-v10-u3l0u pull-e2e-0-minion-zr43 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:49:57 +0000 UTC }]
Oct 23 22:56:42.049: INFO: kibana-logging-v1-skk6z pull-e2e-0-minion-djcb Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:51:00 +0000 UTC }]
Oct 23 22:56:42.049: INFO: kube-dns-v9-6u0vh pull-e2e-0-minion-djcb Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:51:04 +0000 UTC }]
Oct 23 22:56:42.049: INFO: kube-ui-v3-laljg pull-e2e-0-minion-1dli Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:49:49 +0000 UTC }]
Oct 23 22:56:42.049: INFO: monitoring-influxdb-grafana-v2-bpbto pull-e2e-0-minion-zr43 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:50:57 +0000 UTC }]
Oct 23 22:56:42.049: INFO:
Oct 23 22:56:42.049: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 22:56:42.053: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 22:56:42.053: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 22:56:42.053: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 22:56:42.053: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 22:56:42.053: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 22:56:42.053: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 22:56:42.053: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 22:56:42.053: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 22:56:42.053: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 22:56:42.053: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 22:56:42.053: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 22:56:42.053: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-services-s7i2b" for this suite.
[AfterEach] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:64
• Failure [311.375 seconds]
Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:871
should be able to create a functioning NodePort service [It]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:402
Error waiting for the url http://10.240.0.5:30964 to be reachable
Expected error:
<*errors.errorString | 0xc20815b360>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1186
------------------------------
[BeforeEach] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 22:56:47.077: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-xvm0b
Oct 23 22:56:47.103: INFO: Service account default in ns e2e-tests-pods-xvm0b had 0 secrets, ignoring for 2s: <nil>
Oct 23 22:56:49.106: INFO: Service account default in ns e2e-tests-pods-xvm0b with secrets found. (2.029008875s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 22:56:49.106: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-xvm0b
Oct 23 22:56:49.108: INFO: Service account default in ns e2e-tests-pods-xvm0b with secrets found. (2.004382ms)
[It] should be restarted with a /healthz http liveness probe [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:620
STEP: Creating pod liveness-http in namespace e2e-tests-pods-xvm0b
Oct 23 22:56:49.115: INFO: Waiting up to 5m0s for pod liveness-http status to be !pending
Oct 23 22:56:49.143: INFO: Waiting for pod liveness-http in namespace 'e2e-tests-pods-xvm0b' status to be '!pending'(found phase: "Pending", readiness: false) (27.275979ms elapsed)
Oct 23 22:56:51.146: INFO: Saw pod 'liveness-http' in namespace 'e2e-tests-pods-xvm0b' out of pending state (found '"Running"')
STEP: Started pod liveness-http in namespace e2e-tests-pods-xvm0b
STEP: checking the pod's current state and verifying that restartCount is present
STEP: Initial restart count of pod liveness-http is 0
STEP: Restart count of pod e2e-tests-pods-xvm0b/liveness-http is now 1 (20.038999711s elapsed)
STEP: deleting the pod
[AfterEach] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 22:57:11.198: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 22:57:11.228: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 22:57:11.228: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 22:57:11.228: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 22:57:11.228: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 22:57:11.228: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 22:57:11.228: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 22:57:11.228: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 22:57:11.228: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 22:57:11.228: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 22:57:11.228: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 22:57:11.228: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 22:57:11.228: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-pods-xvm0b" for this suite.
• [SLOW TEST:29.169 seconds]
Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1187
should be restarted with a /healthz http liveness probe [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:620
------------------------------
S
------------------------------
[BeforeEach] Docker Containers
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:41
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
Oct 23 22:57:16.249: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-containers-j1fdc
Oct 23 22:57:16.277: INFO: Service account default in ns e2e-tests-containers-j1fdc had 0 secrets, ignoring for 2s: <nil>
Oct 23 22:57:18.280: INFO: Service account default in ns e2e-tests-containers-j1fdc with secrets found. (2.030988748s)
[It] should be able to override the image's default commmand (docker entrypoint) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:73
STEP: Creating a pod to test override command
Oct 23 22:57:18.285: INFO: Waiting up to 5m0s for pod client-containers-69195c09-79d9-11e5-ba1c-42010af00002 status to be success or failure
Oct 23 22:57:18.315: INFO: No Status.Info for container 'test-container' in pod 'client-containers-69195c09-79d9-11e5-ba1c-42010af00002' yet
Oct 23 22:57:18.315: INFO: Waiting for pod client-containers-69195c09-79d9-11e5-ba1c-42010af00002 in namespace 'e2e-tests-containers-j1fdc' status to be 'success or failure'(found phase: "Pending", readiness: false) (29.754327ms elapsed)
Oct 23 22:57:20.318: INFO: No Status.Info for container 'test-container' in pod 'client-containers-69195c09-79d9-11e5-ba1c-42010af00002' yet
Oct 23 22:57:20.318: INFO: Waiting for pod client-containers-69195c09-79d9-11e5-ba1c-42010af00002 in namespace 'e2e-tests-containers-j1fdc' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.032677725s elapsed)
Oct 23 22:57:22.321: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-69195c09-79d9-11e5-ba1c-42010af00002' in namespace 'e2e-tests-containers-j1fdc' so far
Oct 23 22:57:22.321: INFO: Waiting for pod client-containers-69195c09-79d9-11e5-ba1c-42010af00002 in namespace 'e2e-tests-containers-j1fdc' status to be 'success or failure'(found phase: "Running", readiness: true) (4.036022112s elapsed)
Oct 23 22:57:24.325: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-69195c09-79d9-11e5-ba1c-42010af00002' in namespace 'e2e-tests-containers-j1fdc' so far
Oct 23 22:57:24.325: INFO: Waiting for pod client-containers-69195c09-79d9-11e5-ba1c-42010af00002 in namespace 'e2e-tests-containers-j1fdc' status to be 'success or failure'(found phase: "Running", readiness: true) (6.039756139s elapsed)
Oct 23 22:57:26.328: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-69195c09-79d9-11e5-ba1c-42010af00002' in namespace 'e2e-tests-containers-j1fdc' so far
Oct 23 22:57:26.328: INFO: Waiting for pod client-containers-69195c09-79d9-11e5-ba1c-42010af00002 in namespace 'e2e-tests-containers-j1fdc' status to be 'success or failure'(found phase: "Running", readiness: true) (8.042857876s elapsed)
Oct 23 22:57:28.332: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-69195c09-79d9-11e5-ba1c-42010af00002' in namespace 'e2e-tests-containers-j1fdc' so far
Oct 23 22:57:28.332: INFO: Waiting for pod client-containers-69195c09-79d9-11e5-ba1c-42010af00002 in namespace 'e2e-tests-containers-j1fdc' status to be 'success or failure'(found phase: "Running", readiness: true) (10.046544253s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node pull-e2e-0-minion-l2bc pod client-containers-69195c09-79d9-11e5-ba1c-42010af00002 container test-container: <nil>
STEP: Successfully fetched pod logs:[/ep-2]
[AfterEach] Docker Containers
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:47
• [SLOW TEST:19.163 seconds]
Docker Containers
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:84
should be able to override the image's default commmand (docker entrypoint) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:73
------------------------------
[BeforeEach] Downward API
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 22:57:35.422: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-downward-api-3xhhq
Oct 23 22:57:35.453: INFO: Service account default in ns e2e-tests-downward-api-3xhhq had 0 secrets, ignoring for 2s: <nil>
Oct 23 22:57:37.455: INFO: Service account default in ns e2e-tests-downward-api-3xhhq with secrets found. (2.032984821s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 22:57:37.455: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-downward-api-3xhhq
Oct 23 22:57:37.457: INFO: Service account default in ns e2e-tests-downward-api-3xhhq with secrets found. (1.748995ms)
[It] should provide pod name and namespace as env vars [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/downward_api.go:60
STEP: Creating a pod to test downward api env vars
Oct 23 22:57:37.462: INFO: Waiting up to 5m0s for pod downward-api-74879d49-79d9-11e5-ba1c-42010af00002 status to be success or failure
Oct 23 22:57:37.489: INFO: No Status.Info for container 'dapi-container' in pod 'downward-api-74879d49-79d9-11e5-ba1c-42010af00002' yet
Oct 23 22:57:37.489: INFO: Waiting for pod downward-api-74879d49-79d9-11e5-ba1c-42010af00002 in namespace 'e2e-tests-downward-api-3xhhq' status to be 'success or failure'(found phase: "Pending", readiness: false) (26.745269ms elapsed)
Oct 23 22:57:39.492: INFO: Nil State.Terminated for container 'dapi-container' in pod 'downward-api-74879d49-79d9-11e5-ba1c-42010af00002' in namespace 'e2e-tests-downward-api-3xhhq' so far
Oct 23 22:57:39.492: INFO: Waiting for pod downward-api-74879d49-79d9-11e5-ba1c-42010af00002 in namespace 'e2e-tests-downward-api-3xhhq' status to be 'success or failure'(found phase: "Running", readiness: true) (2.029793541s elapsed)
Oct 23 22:57:41.495: INFO: Nil State.Terminated for container 'dapi-container' in pod 'downward-api-74879d49-79d9-11e5-ba1c-42010af00002' in namespace 'e2e-tests-downward-api-3xhhq' so far
Oct 23 22:57:41.495: INFO: Waiting for pod downward-api-74879d49-79d9-11e5-ba1c-42010af00002 in namespace 'e2e-tests-downward-api-3xhhq' status to be 'success or failure'(found phase: "Running", readiness: true) (4.032870786s elapsed)
Oct 23 22:57:43.498: INFO: Nil State.Terminated for container 'dapi-container' in pod 'downward-api-74879d49-79d9-11e5-ba1c-42010af00002' in namespace 'e2e-tests-downward-api-3xhhq' so far
Oct 23 22:57:43.498: INFO: Waiting for pod downward-api-74879d49-79d9-11e5-ba1c-42010af00002 in namespace 'e2e-tests-downward-api-3xhhq' status to be 'success or failure'(found phase: "Running", readiness: true) (6.035944266s elapsed)
Oct 23 22:57:45.501: INFO: Nil State.Terminated for container 'dapi-container' in pod 'downward-api-74879d49-79d9-11e5-ba1c-42010af00002' in namespace 'e2e-tests-downward-api-3xhhq' so far
Oct 23 22:57:45.501: INFO: Waiting for pod downward-api-74879d49-79d9-11e5-ba1c-42010af00002 in namespace 'e2e-tests-downward-api-3xhhq' status to be 'success or failure'(found phase: "Running", readiness: true) (8.039046647s elapsed)
Oct 23 22:57:47.504: INFO: Nil State.Terminated for container 'dapi-container' in pod 'downward-api-74879d49-79d9-11e5-ba1c-42010af00002' in namespace 'e2e-tests-downward-api-3xhhq' so far
Oct 23 22:57:47.504: INFO: Waiting for pod downward-api-74879d49-79d9-11e5-ba1c-42010af00002 in namespace 'e2e-tests-downward-api-3xhhq' status to be 'success or failure'(found phase: "Running", readiness: true) (10.042241919s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node pull-e2e-0-minion-l2bc pod downward-api-74879d49-79d9-11e5-ba1c-42010af00002 container dapi-container: <nil>
STEP: Successfully fetched pod logs:KUBERNETES_PORT=tcp://10.0.0.1:443
KUBERNETES_SERVICE_PORT=443
HOSTNAME=downward-api-74879d49-79d9-11e5-ba1c-42010af00002
SHLVL=1
HOME=/root
KUBERNETES_PORT_443_TCP_ADDR=10.0.0.1
POD_NAME=downward-api-74879d49-79d9-11e5-ba1c-42010af00002
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP=tcp://10.0.0.1:443
KUBERNETES_SERVICE_PORT_HTTPS=443
POD_NAMESPACE=e2e-tests-downward-api-3xhhq
PWD=/
KUBERNETES_SERVICE_HOST=10.0.0.1
[AfterEach] Downward API
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 22:57:49.524: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 22:57:49.554: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 22:57:49.554: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 22:57:49.555: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 22:57:49.555: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 22:57:49.555: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 22:57:49.555: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 22:57:49.555: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 22:57:49.555: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 22:57:49.555: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 22:57:49.555: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 22:57:49.555: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 22:57:49.555: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-downward-api-3xhhq" for this suite.
• [SLOW TEST:19.159 seconds]
Downward API
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/downward_api.go:82
should provide pod name and namespace as env vars [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/downward_api.go:60
------------------------------
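The env dump above (POD_NAME, POD_NAMESPACE alongside the injected KUBERNETES_* service variables) is what the downward-API test asserts. A minimal sketch of a pod exposing those two values via `fieldRef` — the container name matches the log's `dapi-container`, but the image and command are assumptions, not the test's actual manifest:

```yaml
# Hedged sketch of a downward-API pod like the one exercised above.
# Image and command are assumptions; only the env wiring is the point.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox            # assumed image
    command: ["sh", "-c", "env"]   # prints the env block seen in the log
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
```

With `restartPolicy: Never` the container runs `env` once and terminates, which is why the log polls for `State.Terminated` until the pod reaches "success or failure".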
S
------------------------------
[BeforeEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 22:57:54.578: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-hru3g
Oct 23 22:57:54.639: INFO: Service account default in ns e2e-tests-emptydir-hru3g with secrets found. (60.790886ms)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 22:57:54.639: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-hru3g
Oct 23 22:57:54.641: INFO: Service account default in ns e2e-tests-emptydir-hru3g with secrets found. (2.054254ms)
[It] should support (non-root,0666,default) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:90
STEP: Creating a pod to test emptydir 0666 on node default medium
Oct 23 22:57:54.646: INFO: Waiting up to 5m0s for pod pod-7ec59c68-79d9-11e5-ba1c-42010af00002 status to be success or failure
Oct 23 22:57:54.673: INFO: No Status.Info for container 'test-container' in pod 'pod-7ec59c68-79d9-11e5-ba1c-42010af00002' yet
Oct 23 22:57:54.673: INFO: Waiting for pod pod-7ec59c68-79d9-11e5-ba1c-42010af00002 in namespace 'e2e-tests-emptydir-hru3g' status to be 'success or failure'(found phase: "Pending", readiness: false) (27.529466ms elapsed)
Oct 23 22:57:56.691: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-7ec59c68-79d9-11e5-ba1c-42010af00002' in namespace 'e2e-tests-emptydir-hru3g' so far
Oct 23 22:57:56.691: INFO: Waiting for pod pod-7ec59c68-79d9-11e5-ba1c-42010af00002 in namespace 'e2e-tests-emptydir-hru3g' status to be 'success or failure'(found phase: "Running", readiness: true) (2.045365116s elapsed)
Oct 23 22:57:58.700: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-7ec59c68-79d9-11e5-ba1c-42010af00002' in namespace 'e2e-tests-emptydir-hru3g' so far
Oct 23 22:57:58.701: INFO: Waiting for pod pod-7ec59c68-79d9-11e5-ba1c-42010af00002 in namespace 'e2e-tests-emptydir-hru3g' status to be 'success or failure'(found phase: "Running", readiness: true) (4.054870392s elapsed)
Oct 23 22:58:00.704: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-7ec59c68-79d9-11e5-ba1c-42010af00002' in namespace 'e2e-tests-emptydir-hru3g' so far
Oct 23 22:58:00.704: INFO: Waiting for pod pod-7ec59c68-79d9-11e5-ba1c-42010af00002 in namespace 'e2e-tests-emptydir-hru3g' status to be 'success or failure'(found phase: "Running", readiness: true) (6.058465768s elapsed)
Oct 23 22:58:02.708: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-7ec59c68-79d9-11e5-ba1c-42010af00002' in namespace 'e2e-tests-emptydir-hru3g' so far
Oct 23 22:58:02.708: INFO: Waiting for pod pod-7ec59c68-79d9-11e5-ba1c-42010af00002 in namespace 'e2e-tests-emptydir-hru3g' status to be 'success or failure'(found phase: "Running", readiness: true) (8.062102011s elapsed)
Oct 23 22:58:04.711: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-7ec59c68-79d9-11e5-ba1c-42010af00002' in namespace 'e2e-tests-emptydir-hru3g' so far
Oct 23 22:58:04.712: INFO: Waiting for pod pod-7ec59c68-79d9-11e5-ba1c-42010af00002 in namespace 'e2e-tests-emptydir-hru3g' status to be 'success or failure'(found phase: "Running", readiness: true) (10.065868036s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node pull-e2e-0-minion-l2bc pod pod-7ec59c68-79d9-11e5-ba1c-42010af00002 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": 61267
content of file "/test-volume/test-file": mount-tester new file
perms of file "/test-volume/test-file": -rw-rw-rw-
[AfterEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 22:58:06.742: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 22:58:06.771: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 22:58:06.771: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 22:58:06.771: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 22:58:06.771: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 22:58:06.771: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 22:58:06.771: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 22:58:06.771: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 22:58:06.771: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 22:58:06.771: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 22:58:06.771: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 22:58:06.771: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 22:58:06.771: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-emptydir-hru3g" for this suite.
• [SLOW TEST:17.212 seconds]
EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:95
should support (non-root,0666,default) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:90
------------------------------
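The emptydir test above checks the mount-tester output `perms of file "/test-volume/test-file": -rw-rw-rw-` for the 0666 case. That `ls -l`-style rendering of a numeric mode can be reproduced with Python's `stat.filemode` (a sketch of the rendering, not the test's own code):

```python
import stat

# Render a numeric mode the way `ls -l` (and the mount-tester line above)
# displays it. S_IFREG marks a regular file, giving the leading '-'.
def perms_string(mode: int) -> str:
    return stat.filemode(stat.S_IFREG | mode)

print(perms_string(0o666))  # -> -rw-rw-rw-, matching the test's expected perms
```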
[BeforeEach] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:39
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 22:56:45.347: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-container-probe-9oxb7
Oct 23 22:56:45.391: INFO: Service account default in ns e2e-tests-container-probe-9oxb7 had 0 secrets, ignoring for 2s: <nil>
Oct 23 22:56:47.394: INFO: Service account default in ns e2e-tests-container-probe-9oxb7 with secrets found. (2.046691546s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 22:56:47.394: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-container-probe-9oxb7
Oct 23 22:56:47.396: INFO: Service account default in ns e2e-tests-container-probe-9oxb7 with secrets found. (1.95317ms)
[It] with readiness probe that fails should never be ready and never restart [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:99
[AfterEach] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:41
Oct 23 22:58:17.408: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 22:58:17.412: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 22:58:17.413: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 22:58:17.413: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 22:58:17.413: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 22:58:17.413: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 22:58:17.413: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 22:58:17.413: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 22:58:17.413: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 22:58:17.413: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 22:58:17.413: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 22:58:17.413: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 22:58:17.413: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-container-probe-9oxb7" for this suite.
• [SLOW TEST:97.084 seconds]
Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:101
with readiness probe that fails should never be ready and never restart [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:99
------------------------------
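The readiness-probe test above runs for ~97 seconds with no pod events in the log because a pod whose readiness probe always fails simply never becomes ready — the test watches it and asserts nothing changes. A hedged sketch of such a pod (container name, image, and probe command are assumptions):

```yaml
# Sketch of a pod whose readiness probe always fails, per the test above.
# Image and commands are assumptions, not the suite's actual manifest.
apiVersion: v1
kind: Pod
metadata:
  name: readiness-always-fails
spec:
  containers:
  - name: probe-test
    image: busybox            # assumed image
    command: ["sleep", "600"]
    readinessProbe:
      exec:
        command: ["false"]    # exit code 1 -> probe never succeeds
      initialDelaySeconds: 5
      periodSeconds: 5
```

Note the "never restart" half of the assertion: a failing readiness probe only marks the pod unready; unlike a liveness probe, it never causes the kubelet to restart the container.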
S
------------------------------
[BeforeEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:89
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
Oct 23 22:58:11.792: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-32dr0
Oct 23 22:58:11.820: INFO: Service account default in ns e2e-tests-kubectl-32dr0 had 0 secrets, ignoring for 2s: <nil>
Oct 23 22:58:13.822: INFO: Service account default in ns e2e-tests-kubectl-32dr0 with secrets found. (2.029852268s)
[BeforeEach] Kubectl label
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:655
STEP: creating the pod
Oct 23 22:58:13.822: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config create -f /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/docs/user-guide/pod.yaml --namespace=e2e-tests-kubectl-32dr0'
Oct 23 22:58:14.034: INFO: pod "nginx" created
Oct 23 22:58:14.034: INFO: Waiting up to 5m0s for the following 1 pods to be running and ready: [nginx]
Oct 23 22:58:14.034: INFO: Waiting up to 5m0s for pod nginx status to be running and ready
Oct 23 22:58:14.037: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-32dr0' status to be 'running and ready'(found phase: "Pending", readiness: false) (2.586727ms elapsed)
Oct 23 22:58:16.053: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-32dr0' status to be 'running and ready'(found phase: "Pending", readiness: false) (2.018415373s elapsed)
Oct 23 22:58:18.056: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-32dr0' status to be 'running and ready'(found phase: "Pending", readiness: false) (4.021208489s elapsed)
Oct 23 22:58:20.059: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-32dr0' status to be 'running and ready'(found phase: "Pending", readiness: false) (6.024562259s elapsed)
Oct 23 22:58:22.062: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-32dr0' status to be 'running and ready'(found phase: "Pending", readiness: false) (8.027986642s elapsed)
Oct 23 22:58:24.066: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [nginx]
[It] should update the label on a resource [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:679
STEP: adding the label testing-label with value testing-label-value to a pod
Oct 23 22:58:24.066: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config label pods nginx testing-label=testing-label-value --namespace=e2e-tests-kubectl-32dr0'
Oct 23 22:58:24.245: INFO: pod "nginx" labeled
STEP: verifying the pod has the label testing-label with the value testing-label-value
Oct 23 22:58:24.245: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pod nginx -L testing-label --namespace=e2e-tests-kubectl-32dr0'
Oct 23 22:58:24.415: INFO: NAME READY STATUS RESTARTS AGE TESTING-LABEL
nginx 1/1 Running 0 10s testing-label-value
STEP: removing the label testing-label of a pod
Oct 23 22:58:24.415: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config label pods nginx testing-label- --namespace=e2e-tests-kubectl-32dr0'
Oct 23 22:58:24.593: INFO: pod "nginx" labeled
STEP: verifying the pod doesn't have the label testing-label
Oct 23 22:58:24.593: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pod nginx -L testing-label --namespace=e2e-tests-kubectl-32dr0'
Oct 23 22:58:24.762: INFO: NAME READY STATUS RESTARTS AGE TESTING-LABEL
nginx 1/1 Running 0 10s <none>
[AfterEach] Kubectl label
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:658
STEP: using delete to clean up resources
Oct 23 22:58:24.762: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config stop --grace-period=0 -f /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/docs/user-guide/pod.yaml --namespace=e2e-tests-kubectl-32dr0'
Oct 23 22:58:24.943: INFO: pod "nginx" deleted
Oct 23 22:58:24.943: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-32dr0'
Oct 23 22:58:25.121: INFO:
Oct 23 22:58:25.121: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-32dr0 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Oct 23 22:58:25.295: INFO:
[AfterEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:96
STEP: Destroying namespace for this suite e2e-tests-kubectl-32dr0
• [SLOW TEST:18.521 seconds]
Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:925
Kubectl label
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:680
should update the label on a resource [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:679
------------------------------
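Each kubectl invocation in the label test above is logged in full as `INFO: Running '<command>'`. A small sketch that pulls the interesting arguments back out of such a line — the regex and filtering are fitted to this log format and are not part of the test framework:

```python
import re
import shlex

# Match the e2e log's "INFO: Running '<command>'" shape.
RUNNING = re.compile(r"INFO: Running '(?P<cmd>[^']+)'")

def kubectl_args(log_line: str) -> list:
    """Return the kubectl subcommand and its args from a Running line,
    dropping the binary path and the per-run --server/--kubeconfig flags."""
    m = RUNNING.search(log_line)
    if m is None:
        raise ValueError("not a Running line")
    argv = shlex.split(m.group("cmd"))
    return [a for a in argv[1:]
            if not a.startswith(("--server=", "--kubeconfig="))]

line = ("Oct 23 22:58:24.066: INFO: Running '/path/kubectl "
        "--server=https://104.196.0.155 --kubeconfig=/tmp/config "
        "label pods nginx testing-label=testing-label-value "
        "--namespace=e2e-tests-kubectl-32dr0'")
print(kubectl_args(line))
# ['label', 'pods', 'nginx', 'testing-label=testing-label-value',
#  '--namespace=e2e-tests-kubectl-32dr0']
```

The removal step logged above uses the trailing-dash form (`testing-label-`), which is kubectl's syntax for deleting a label rather than setting one.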
S
------------------------------
[BeforeEach] kube-ui
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 22:58:22.434: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kube-ui-hw4p2
Oct 23 22:58:22.464: INFO: Service account default in ns e2e-tests-kube-ui-hw4p2 had 0 secrets, ignoring for 2s: <nil>
Oct 23 22:58:24.466: INFO: Service account default in ns e2e-tests-kube-ui-hw4p2 with secrets found. (2.032157527s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 22:58:24.466: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kube-ui-hw4p2
Oct 23 22:58:24.468: INFO: Service account default in ns e2e-tests-kube-ui-hw4p2 with secrets found. (1.776934ms)
[It] should check that the kube-ui instance is alive [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kube-ui.go:85
STEP: Checking the kube-ui service exists.
Oct 23 22:58:24.470: INFO: Service kube-ui in namespace kube-system found.
STEP: Checking to make sure the kube-ui pods are running
STEP: Checking to make sure we get a response from the kube-ui.
STEP: Checking that the ApiServer /ui endpoint redirects to a valid server.
[AfterEach] kube-ui
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 22:58:31.507: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 22:58:31.510: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 22:58:31.510: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 22:58:31.510: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 22:58:31.510: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 22:58:31.510: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 22:58:31.510: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 22:58:31.510: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 22:58:31.510: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 22:58:31.510: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 22:58:31.510: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 22:58:31.510: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 22:58:31.510: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-kube-ui-hw4p2" for this suite.
• [SLOW TEST:14.095 seconds]
kube-ui
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kube-ui.go:86
should check that the kube-ui instance is alive [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kube-ui.go:85
------------------------------
S
------------------------------
[BeforeEach] version v1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 22:58:30.318: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-k42hl
Oct 23 22:58:30.345: INFO: Service account default in ns e2e-tests-proxy-k42hl had 0 secrets, ignoring for 2s: <nil>
Oct 23 22:58:32.348: INFO: Service account default in ns e2e-tests-proxy-k42hl with secrets found. (2.030022198s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 22:58:32.348: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-k42hl
Oct 23 22:58:32.350: INFO: Service account default in ns e2e-tests-proxy-k42hl with secrets found. (2.025198ms)
[It] should proxy through a service and a pod [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:218
STEP: creating replication controller proxy-service-oj8x5 in namespace e2e-tests-proxy-k42hl
Oct 23 22:58:32.398: INFO: Created replication controller with name: proxy-service-oj8x5, namespace: e2e-tests-proxy-k42hl, replica count: 1
Oct 23 22:58:33.398: INFO: proxy-service-oj8x5 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 23 22:58:34.398: INFO: proxy-service-oj8x5 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 23 22:58:35.399: INFO: proxy-service-oj8x5 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 23 22:58:36.399: INFO: proxy-service-oj8x5 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 23 22:58:37.399: INFO: proxy-service-oj8x5 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 23 22:58:38.399: INFO: proxy-service-oj8x5 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 23 22:58:39.400: INFO: proxy-service-oj8x5 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 23 22:58:40.400: INFO: proxy-service-oj8x5 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 23 22:58:41.400: INFO: proxy-service-oj8x5 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 23 22:58:42.400: INFO: proxy-service-oj8x5 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 23 22:58:43.400: INFO: proxy-service-oj8x5 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 23 22:58:44.401: INFO: proxy-service-oj8x5 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 23 22:58:45.401: INFO: proxy-service-oj8x5 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 23 22:58:46.401: INFO: proxy-service-oj8x5 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 23 22:58:47.401: INFO: proxy-service-oj8x5 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 23 22:58:48.401: INFO: proxy-service-oj8x5 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 23 22:58:49.402: INFO: proxy-service-oj8x5 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 23 22:58:50.402: INFO: proxy-service-oj8x5 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 23 22:58:51.402: INFO: proxy-service-oj8x5 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 23 22:58:52.402: INFO: proxy-service-oj8x5 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 23 22:58:53.402: INFO: proxy-service-oj8x5 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 23 22:58:54.403: INFO: proxy-service-oj8x5 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 23 22:58:55.403: INFO: proxy-service-oj8x5 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 23 22:58:56.403: INFO: proxy-service-oj8x5 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 23 22:58:57.403: INFO: proxy-service-oj8x5 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 23 22:58:58.403: INFO: proxy-service-oj8x5 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 23 22:58:58.436: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/proxy-service-oj8x5:portname1/: foo (200; 3.534429ms)
Oct 23 22:58:58.638: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:162/: bar (200; 5.44376ms)
Oct 23 22:58:58.836: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:162/: bar (200; 3.162523ms)
Oct 23 22:58:59.049: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf/proxy/rewriteme"... (200; 16.000899ms)
Oct 23 22:58:59.245: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:462/proxy/: tls qux (200; 11.993222ms)
Oct 23 22:58:59.436: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/proxy-service-oj8x5:portname2/: bar (200; 3.030572ms)
Oct 23 22:58:59.636: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/http:proxy-service-oj8x5:portname1/: foo (200; 2.928468ms)
Oct 23 22:58:59.837: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/https:proxy-service-oj8x5:tlsportname2/: tls qux (200; 2.79215ms)
Oct 23 22:59:00.037: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/proxy/rewrite... (200; 2.869102ms)
Oct 23 22:59:00.237: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:160/proxy/: foo (200; 2.854863ms)
Oct 23 22:59:00.446: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:443/proxy/... (200; 11.666452ms)
Oct 23 22:59:00.637: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/http:proxy-service-oj8x5:portname2/: bar (200; 2.663119ms)
Oct 23 22:59:00.845: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/https:proxy-service-oj8x5:tlsportname1/: tls baz (200; 10.952336ms)
Oct 23 22:59:01.037: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:160/: foo (200; 2.876918ms)
Oct 23 22:59:01.238: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:160/: foo (200; 3.281218ms)
Oct 23 22:59:01.438: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/proxy/re... (200; 2.970152ms)
Oct 23 22:59:01.638: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:460/proxy/: tls baz (200; 2.986739ms)
Oct 23 22:59:01.838: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/rewrite... (200; 2.624711ms)
Oct 23 22:59:02.039: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/re... (200; 3.187992ms)
Oct 23 22:59:02.239: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:160/proxy/: foo (200; 3.056337ms)
Oct 23 22:59:02.439: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:162/proxy/: bar (200; 2.875921ms)
Oct 23 22:59:02.639: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:162/proxy/: bar (200; 2.869899ms)
Oct 23 22:59:02.839: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:162/proxy/: bar (200; 3.121992ms)
Oct 23 22:59:03.039: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:162/proxy/: bar (200; 3.052982ms)
Oct 23 22:59:03.239: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:460/proxy/: tls baz (200; 2.766567ms)
Oct 23 22:59:03.439: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/rewrite... (200; 2.708953ms)
Oct 23 22:59:03.640: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/re... (200; 3.012961ms)
Oct 23 22:59:03.840: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:160/proxy/: foo (200; 2.87596ms)
Oct 23 22:59:04.041: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf/proxy/rewriteme"... (200; 4.037244ms)
Oct 23 22:59:04.240: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/proxy-service-oj8x5:portname1/: foo (200; 2.81911ms)
Oct 23 22:59:04.441: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:162/: bar (200; 3.16865ms)
Oct 23 22:59:04.640: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:162/: bar (200; 2.564499ms)
Oct 23 22:59:04.841: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/proxy/rewrite... (200; 3.249468ms)
Oct 23 22:59:05.041: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:160/proxy/: foo (200; 2.902451ms)
Oct 23 22:59:05.241: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:462/proxy/: tls qux (200; 3.213474ms)
Oct 23 22:59:05.441: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/proxy-service-oj8x5:portname2/: bar (200; 2.814646ms)
Oct 23 22:59:05.641: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/http:proxy-service-oj8x5:portname1/: foo (200; 2.824477ms)
Oct 23 22:59:05.841: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/https:proxy-service-oj8x5:tlsportname2/: tls qux (200; 2.69268ms)
Oct 23 22:59:06.042: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:160/: foo (200; 3.152689ms)
Oct 23 22:59:06.242: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/proxy/re... (200; 2.827864ms)
Oct 23 22:59:06.443: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:443/proxy/... (200; 3.664969ms)
Oct 23 22:59:06.652: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/http:proxy-service-oj8x5:portname2/: bar (200; 12.228598ms)
Oct 23 22:59:06.842: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/https:proxy-service-oj8x5:tlsportname1/: tls baz (200; 2.643543ms)
Oct 23 22:59:07.042: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:160/: foo (200; 2.765767ms)
Oct 23 22:59:07.243: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/proxy-service-oj8x5:portname1/: foo (200; 2.732505ms)
Oct 23 22:59:07.443: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:162/: bar (200; 3.098059ms)
Oct 23 22:59:07.644: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:162/: bar (200; 3.471783ms)
Oct 23 22:59:07.843: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf/proxy/rewriteme"... (200; 2.934476ms)
Oct 23 22:59:08.043: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/proxy-service-oj8x5:portname2/: bar (200; 2.801936ms)
Oct 23 22:59:08.244: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/http:proxy-service-oj8x5:portname1/: foo (200; 3.024578ms)
Oct 23 22:59:08.444: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/https:proxy-service-oj8x5:tlsportname2/: tls qux (200; 2.796758ms)
Oct 23 22:59:08.644: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/proxy/rewrite... (200; 2.797145ms)
Oct 23 22:59:08.844: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:160/proxy/: foo (200; 2.845026ms)
Oct 23 22:59:09.044: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:462/proxy/: tls qux (200; 3.063902ms)
Oct 23 22:59:09.244: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/http:proxy-service-oj8x5:portname2/: bar (200; 2.487377ms)
Oct 23 22:59:09.444: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/https:proxy-service-oj8x5:tlsportname1/: tls baz (200; 2.548886ms)
Oct 23 22:59:09.645: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:160/: foo (200; 2.792942ms)
Oct 23 22:59:09.845: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:160/: foo (200; 2.966708ms)
Oct 23 22:59:10.045: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/proxy/re... (200; 3.002372ms)
Oct 23 22:59:10.245: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:443/proxy/... (200; 2.78027ms)
Oct 23 22:59:10.445: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/rewrite... (200; 2.961353ms)
Oct 23 22:59:10.645: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/re... (200; 2.904845ms)
Oct 23 22:59:10.846: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:160/proxy/: foo (200; 3.161978ms)
Oct 23 22:59:11.046: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:162/proxy/: bar (200; 2.826682ms)
Oct 23 22:59:11.246: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:162/proxy/: bar (200; 3.097337ms)
Oct 23 22:59:11.446: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:460/proxy/: tls baz (200; 2.854099ms)
Oct 23 22:59:11.646: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/proxy-service-oj8x5:portname2/: bar (200; 3.048472ms)
Oct 23 22:59:11.847: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/http:proxy-service-oj8x5:portname1/: foo (200; 3.239363ms)
Oct 23 22:59:12.046: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/https:proxy-service-oj8x5:tlsportname2/: tls qux (200; 2.730376ms)
Oct 23 22:59:12.247: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/proxy/rewrite... (200; 2.935194ms)
Oct 23 22:59:12.447: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:160/proxy/: foo (200; 3.235783ms)
Oct 23 22:59:12.648: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:462/proxy/: tls qux (200; 3.377974ms)
Oct 23 22:59:12.847: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/http:proxy-service-oj8x5:portname2/: bar (200; 2.760135ms)
Oct 23 22:59:13.047: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/https:proxy-service-oj8x5:tlsportname1/: tls baz (200; 2.915662ms)
Oct 23 22:59:13.247: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:160/: foo (200; 2.8229ms)
Oct 23 22:59:13.448: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:160/: foo (200; 2.967966ms)
Oct 23 22:59:13.648: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/proxy/re... (200; 3.277725ms)
Oct 23 22:59:13.848: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:443/proxy/... (200; 3.246632ms)
Oct 23 22:59:14.048: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/rewrite... (200; 3.058786ms)
Oct 23 22:59:14.248: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/re... (200; 2.723851ms)
Oct 23 22:59:14.448: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:160/proxy/: foo (200; 2.731706ms)
Oct 23 22:59:14.648: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:162/proxy/: bar (200; 2.526699ms)
Oct 23 22:59:14.848: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:162/proxy/: bar (200; 2.498438ms)
Oct 23 22:59:15.049: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:460/proxy/: tls baz (200; 2.926545ms)
Oct 23 22:59:15.249: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/proxy-service-oj8x5:portname1/: foo (200; 2.677024ms)
Oct 23 22:59:15.449: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:162/: bar (200; 3.022435ms)
Oct 23 22:59:15.650: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:162/: bar (200; 3.463878ms)
Oct 23 22:59:15.850: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf/proxy/rewriteme"... (200; 3.283284ms)
Oct 23 22:59:16.050: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/rewrite... (200; 3.067579ms)
Oct 23 22:59:16.250: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/re... (200; 2.871287ms)
Oct 23 22:59:16.450: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:160/proxy/: foo (200; 2.782379ms)
Oct 23 22:59:16.650: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:162/proxy/: bar (200; 2.472034ms)
Oct 23 22:59:16.850: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:162/proxy/: bar (200; 2.631205ms)
Oct 23 22:59:17.050: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:460/proxy/: tls baz (200; 2.945249ms)
Oct 23 22:59:17.250: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/proxy-service-oj8x5:portname1/: foo (200; 2.88902ms)
Oct 23 22:59:17.450: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:162/: bar (200; 2.763182ms)
Oct 23 22:59:17.651: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:162/: bar (200; 3.125055ms)
Oct 23 22:59:17.851: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf/proxy/rewriteme"... (200; 3.238423ms)
Oct 23 22:59:18.051: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/proxy-service-oj8x5:portname2/: bar (200; 2.853836ms)
Oct 23 22:59:18.251: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/http:proxy-service-oj8x5:portname1/: foo (200; 2.795977ms)
Oct 23 22:59:18.452: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/https:proxy-service-oj8x5:tlsportname2/: tls qux (200; 3.118583ms)
Oct 23 22:59:18.653: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/proxy/rewrite... (200; 4.418722ms)
Oct 23 22:59:18.852: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:160/proxy/: foo (200; 2.782455ms)
Oct 23 22:59:19.051: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:462/proxy/: tls qux (200; 2.576094ms)
Oct 23 22:59:19.252: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/http:proxy-service-oj8x5:portname2/: bar (200; 3.037423ms)
Oct 23 22:59:19.452: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/https:proxy-service-oj8x5:tlsportname1/: tls baz (200; 2.752697ms)
Oct 23 22:59:19.657: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:160/: foo (200; 7.645285ms)
Oct 23 22:59:19.852: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:160/: foo (200; 2.737476ms)
Oct 23 22:59:20.053: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/proxy/re... (200; 2.936666ms)
Oct 23 22:59:20.253: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:443/proxy/... (200; 3.454735ms)
Oct 23 22:59:20.456: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:162/proxy/: bar (200; 5.866566ms)
Oct 23 22:59:20.653: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:162/proxy/: bar (200; 2.635755ms)
Oct 23 22:59:20.853: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:460/proxy/: tls baz (200; 2.940749ms)
Oct 23 22:59:21.053: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/rewrite... (200; 3.038744ms)
Oct 23 22:59:21.253: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/re... (200; 2.572242ms)
Oct 23 22:59:21.454: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:160/proxy/: foo (200; 2.947145ms)
Oct 23 22:59:21.654: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf/proxy/rewriteme"... (200; 2.976892ms)
Oct 23 22:59:21.854: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/proxy-service-oj8x5:portname1/: foo (200; 2.81096ms)
Oct 23 22:59:22.054: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:162/: bar (200; 2.819048ms)
Oct 23 22:59:22.254: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:162/: bar (200; 2.677495ms)
Oct 23 22:59:22.454: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/proxy/rewrite... (200; 2.941866ms)
Oct 23 22:59:22.654: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:160/proxy/: foo (200; 2.824595ms)
Oct 23 22:59:22.855: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:462/proxy/: tls qux (200; 3.059786ms)
Oct 23 22:59:23.054: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/proxy-service-oj8x5:portname2/: bar (200; 2.508965ms)
Oct 23 22:59:23.255: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/http:proxy-service-oj8x5:portname1/: foo (200; 2.589864ms)
Oct 23 22:59:23.455: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/https:proxy-service-oj8x5:tlsportname2/: tls qux (200; 2.609207ms)
Oct 23 22:59:23.655: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:160/: foo (200; 2.996826ms)
Oct 23 22:59:23.855: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/proxy/re... (200; 2.764951ms)
Oct 23 22:59:24.055: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:443/proxy/... (200; 2.863196ms)
Oct 23 22:59:24.255: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/http:proxy-service-oj8x5:portname2/: bar (200; 2.608255ms)
Oct 23 22:59:24.455: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/https:proxy-service-oj8x5:tlsportname1/: tls baz (200; 2.34463ms)
Oct 23 22:59:24.656: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:160/: foo (200; 2.856643ms)
Oct 23 22:59:24.856: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:460/proxy/: tls baz (200; 2.932869ms)
Oct 23 22:59:25.056: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/rewrite... (200; 2.694115ms)
Oct 23 22:59:25.256: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/re... (200; 2.428474ms)
Oct 23 22:59:25.456: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:160/proxy/: foo (200; 2.787727ms)
Oct 23 22:59:25.657: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:162/proxy/: bar (200; 2.872552ms)
Oct 23 22:59:25.857: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:162/proxy/: bar (200; 2.729591ms)
Oct 23 22:59:26.057: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/proxy-service-oj8x5:portname1/: foo (200; 2.68238ms)
Oct 23 22:59:26.257: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:162/: bar (200; 2.570679ms)
Oct 23 22:59:26.457: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:162/: bar (200; 2.662475ms)
Oct 23 22:59:26.657: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf/proxy/rewriteme"... (200; 2.898856ms)
Oct 23 22:59:26.858: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:462/proxy/: tls qux (200; 3.15579ms)
Oct 23 22:59:27.058: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/proxy-service-oj8x5:portname2/: bar (200; 2.851353ms)
Oct 23 22:59:27.257: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/http:proxy-service-oj8x5:portname1/: foo (200; 2.478817ms)
Oct 23 22:59:27.484: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/https:proxy-service-oj8x5:tlsportname2/: tls qux (200; 28.661355ms)
Oct 23 22:59:27.659: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/proxy/rewrite... (200; 3.374641ms)
Oct 23 22:59:27.859: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:160/proxy/: foo (200; 3.100028ms)
Oct 23 22:59:28.059: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:443/proxy/... (200; 3.56916ms)
Oct 23 22:59:28.258: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/http:proxy-service-oj8x5:portname2/: bar (200; 2.444685ms)
Oct 23 22:59:28.494: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/https:proxy-service-oj8x5:tlsportname1/: tls baz (200; 37.675744ms)
Oct 23 22:59:28.659: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:160/: foo (200; 2.714395ms)
Oct 23 22:59:28.859: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:160/: foo (200; 2.873538ms)
Oct 23 22:59:29.059: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/proxy/re... (200; 2.69976ms)
Oct 23 22:59:29.259: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/rewrite... (200; 2.58997ms)
Oct 23 22:59:29.483: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/re... (200; 26.057635ms)
Oct 23 22:59:29.665: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:160/proxy/: foo (200; 8.317253ms)
Oct 23 22:59:29.860: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:162/proxy/: bar (200; 2.981876ms)
Oct 23 22:59:30.060: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:162/proxy/: bar (200; 3.026546ms)
Oct 23 22:59:30.260: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:460/proxy/: tls baz (200; 2.909256ms)
Oct 23 22:59:30.460: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/proxy-service-oj8x5:portname1/: foo (200; 2.705479ms)
Oct 23 22:59:30.660: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:162/: bar (200; 2.813482ms)
Oct 23 22:59:30.860: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:162/: bar (200; 2.773383ms)
Oct 23 22:59:31.061: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf/proxy/rewriteme"... (200; 3.247042ms)
Oct 23 22:59:31.261: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/proxy-service-oj8x5:portname2/: bar (200; 2.565198ms)
Oct 23 22:59:31.461: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/http:proxy-service-oj8x5:portname1/: foo (200; 2.863698ms)
Oct 23 22:59:31.661: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/https:proxy-service-oj8x5:tlsportname2/: tls qux (200; 2.612339ms)
Oct 23 22:59:31.861: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/proxy/rewrite... (200; 3.013654ms)
Oct 23 22:59:32.064: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:160/proxy/: foo (200; 5.321566ms)
Oct 23 22:59:32.274: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:462/proxy/: tls qux (200; 15.141975ms)
Oct 23 22:59:32.468: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/http:proxy-service-oj8x5:portname2/: bar (200; 8.931941ms)
Oct 23 22:59:32.665: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/https:proxy-service-oj8x5:tlsportname1/: tls baz (200; 6.113546ms)
Oct 23 22:59:32.868: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:160/: foo (200; 8.596651ms)
Oct 23 22:59:33.064: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:160/: foo (200; 4.4914ms)
Oct 23 22:59:33.265: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/proxy/re... (200; 5.836532ms)
Oct 23 22:59:33.472: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:443/proxy/... (200; 11.90352ms)
Oct 23 22:59:33.667: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/proxy/rewrite... (200; 6.802247ms)
Oct 23 22:59:33.869: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:160/proxy/: foo (200; 9.106548ms)
Oct 23 22:59:34.063: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:462/proxy/: tls qux (200; 3.28388ms)
Oct 23 22:59:34.263: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/proxy-service-oj8x5:portname2/: bar (200; 2.835223ms)
Oct 23 22:59:34.465: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/http:proxy-service-oj8x5:portname1/: foo (200; 4.591849ms)
Oct 23 22:59:34.663: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/https:proxy-service-oj8x5:tlsportname2/: tls qux (200; 2.810229ms)
Oct 23 22:59:34.864: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:160/: foo (200; 3.658826ms)
Oct 23 22:59:35.065: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/proxy/re... (200; 3.748065ms)
Oct 23 22:59:35.265: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:443/proxy/... (200; 3.786118ms)
Oct 23 22:59:35.464: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/http:proxy-service-oj8x5:portname2/: bar (200; 3.020353ms)
Oct 23 22:59:35.664: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/https:proxy-service-oj8x5:tlsportname1/: tls baz (200; 2.939727ms)
Oct 23 22:59:35.865: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:160/: foo (200; 3.139553ms)
Oct 23 22:59:36.065: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:162/proxy/: bar (200; 3.243269ms)
Oct 23 22:59:36.265: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:162/proxy/: bar (200; 3.399745ms)
Oct 23 22:59:36.486: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:460/proxy/: tls baz (200; 24.154316ms)
Oct 23 22:59:36.665: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/rewrite... (200; 3.072048ms)
Oct 23 22:59:36.865: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/re... (200; 3.003819ms)
Oct 23 22:59:37.066: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:160/proxy/: foo (200; 3.402956ms)
Oct 23 22:59:37.266: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf/proxy/rewriteme"... (200; 3.092098ms)
Oct 23 22:59:37.486: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/proxy-service-oj8x5:portname1/: foo (200; 23.295204ms)
Oct 23 22:59:37.666: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:162/: bar (200; 3.302913ms)
Oct 23 22:59:37.866: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:162/: bar (200; 3.283139ms)
Oct 23 22:59:38.067: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:460/proxy/: tls baz (200; 3.500935ms)
Oct 23 22:59:38.267: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/rewrite... (200; 3.438242ms)
Oct 23 22:59:38.483: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/re... (200; 19.456544ms)
Oct 23 22:59:38.668: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:160/proxy/: foo (200; 4.142267ms)
Oct 23 22:59:38.867: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:162/proxy/: bar (200; 2.973185ms)
Oct 23 22:59:39.067: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:162/proxy/: bar (200; 3.180044ms)
Oct 23 22:59:39.267: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/proxy-service-oj8x5:portname1/: foo (200; 2.851442ms)
Oct 23 22:59:39.467: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:162/: bar (200; 2.890233ms)
Oct 23 22:59:39.667: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:162/: bar (200; 2.994455ms)
Oct 23 22:59:39.868: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf/proxy/rewriteme"... (200; 3.366576ms)
Oct 23 22:59:40.068: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:462/proxy/: tls qux (200; 3.641275ms)
Oct 23 22:59:40.268: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/proxy-service-oj8x5:portname2/: bar (200; 2.83519ms)
Oct 23 22:59:40.468: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/http:proxy-service-oj8x5:portname1/: foo (200; 3.376262ms)
Oct 23 22:59:40.668: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/https:proxy-service-oj8x5:tlsportname2/: tls qux (200; 2.690889ms)
Oct 23 22:59:40.869: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/proxy/rewrite... (200; 3.407008ms)
Oct 23 22:59:41.069: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:160/proxy/: foo (200; 3.358319ms)
Oct 23 22:59:41.270: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:443/proxy/... (200; 4.187543ms)
Oct 23 22:59:41.469: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/http:proxy-service-oj8x5:portname2/: bar (200; 3.416285ms)
Oct 23 22:59:41.669: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/https:proxy-service-oj8x5:tlsportname1/: tls baz (200; 3.026017ms)
Oct 23 22:59:41.869: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:160/: foo (200; 3.169909ms)
Oct 23 22:59:42.070: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:160/: foo (200; 3.229354ms)
Oct 23 22:59:42.270: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/proxy/re... (200; 3.137845ms)
Oct 23 22:59:42.474: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/proxy-service-oj8x5:portname1/: foo (200; 7.114788ms)
Oct 23 22:59:42.670: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:162/: bar (200; 3.370443ms)
Oct 23 22:59:42.870: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:162/: bar (200; 3.368793ms)
Oct 23 22:59:43.071: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf/proxy/rewriteme"... (200; 3.788878ms)
Oct 23 22:59:43.272: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:462/proxy/: tls qux (200; 4.080503ms)
Oct 23 22:59:43.471: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/proxy-service-oj8x5:portname2/: bar (200; 3.086441ms)
Oct 23 22:59:43.671: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/http:proxy-service-oj8x5:portname1/: foo (200; 3.110097ms)
Oct 23 22:59:43.871: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/https:proxy-service-oj8x5:tlsportname2/: tls qux (200; 3.328068ms)
Oct 23 22:59:44.072: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/proxy/rewrite... (200; 3.304972ms)
Oct 23 22:59:44.272: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:160/proxy/: foo (200; 3.151506ms)
Oct 23 22:59:44.472: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:443/proxy/... (200; 3.746028ms)
Oct 23 22:59:44.672: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/http:proxy-service-oj8x5:portname2/: bar (200; 2.866553ms)
Oct 23 22:59:44.872: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/https:proxy-service-oj8x5:tlsportname1/: tls baz (200; 2.766727ms)
Oct 23 22:59:45.072: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:160/: foo (200; 3.012903ms)
Oct 23 22:59:45.272: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:160/: foo (200; 3.007593ms)
Oct 23 22:59:45.472: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/proxy/re... (200; 3.112418ms)
Oct 23 22:59:45.673: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:460/proxy/: tls baz (200; 3.193719ms)
Oct 23 22:59:45.873: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/rewrite... (200; 3.081072ms)
Oct 23 22:59:46.073: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/re... (200; 3.060883ms)
Oct 23 22:59:46.273: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:160/proxy/: foo (200; 2.862535ms)
Oct 23 22:59:46.482: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:162/proxy/: bar (200; 12.410222ms)
Oct 23 22:59:46.673: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:162/proxy/: bar (200; 2.747653ms)
Oct 23 22:59:46.873: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:162/proxy/: bar (200; 2.935583ms)
Oct 23 22:59:47.073: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:162/proxy/: bar (200; 2.755286ms)
Oct 23 22:59:47.273: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:460/proxy/: tls baz (200; 2.777438ms)
Oct 23 22:59:47.473: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/rewrite... (200; 2.721557ms)
Oct 23 22:59:47.674: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/re... (200; 3.210043ms)
Oct 23 22:59:47.874: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:160/proxy/: foo (200; 3.057892ms)
Oct 23 22:59:48.074: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf/proxy/rewriteme"... (200; 3.062201ms)
Oct 23 22:59:48.274: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/proxy-service-oj8x5:portname1/: foo (200; 2.666646ms)
Oct 23 22:59:48.475: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:162/: bar (200; 3.297035ms)
Oct 23 22:59:48.674: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:162/: bar (200; 2.694523ms)
Oct 23 22:59:48.875: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/proxy/rewrite... (200; 2.796397ms)
Oct 23 22:59:49.075: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:160/proxy/: foo (200; 3.266566ms)
Oct 23 22:59:49.275: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:462/proxy/: tls qux (200; 3.357225ms)
Oct 23 22:59:49.475: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/proxy-service-oj8x5:portname2/: bar (200; 3.23728ms)
Oct 23 22:59:49.675: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/http:proxy-service-oj8x5:portname1/: foo (200; 3.023658ms)
Oct 23 22:59:49.875: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/https:proxy-service-oj8x5:tlsportname2/: tls qux (200; 2.96899ms)
Oct 23 22:59:50.076: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:160/: foo (200; 2.893854ms)
Oct 23 22:59:50.275: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/proxy/re... (200; 2.611547ms)
Oct 23 22:59:50.485: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:443/proxy/... (200; 12.483081ms)
Oct 23 22:59:50.676: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/http:proxy-service-oj8x5:portname2/: bar (200; 2.704894ms)
Oct 23 22:59:50.876: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/https:proxy-service-oj8x5:tlsportname1/: tls baz (200; 2.595393ms)
Oct 23 22:59:51.076: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:160/: foo (200; 2.805976ms)
Oct 23 22:59:51.276: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/re... (200; 2.390349ms)
Oct 23 22:59:51.524: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:160/proxy/: foo (200; 50.20457ms)
Oct 23 22:59:51.677: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:162/proxy/: bar (200; 3.092251ms)
Oct 23 22:59:51.877: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:162/proxy/: bar (200; 2.672878ms)
Oct 23 22:59:52.077: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:460/proxy/: tls baz (200; 3.209425ms)
Oct 23 22:59:52.277: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/rewrite... (200; 2.452337ms)
Oct 23 22:59:52.483: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:162/: bar (200; 8.036581ms)
Oct 23 22:59:52.677: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:162/: bar (200; 2.754293ms)
Oct 23 22:59:52.878: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf/proxy/rewriteme"... (200; 3.098594ms)
Oct 23 22:59:53.077: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/proxy-service-oj8x5:portname1/: foo (200; 2.51506ms)
Oct 23 22:59:53.278: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/http:proxy-service-oj8x5:portname1/: foo (200; 2.524149ms)
Oct 23 22:59:53.485: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/https:proxy-service-oj8x5:tlsportname2/: tls qux (200; 10.146653ms)
Oct 23 22:59:53.679: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/proxy/rewrite... (200; 3.384974ms)
Oct 23 22:59:53.878: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:160/proxy/: foo (200; 2.755708ms)
Oct 23 22:59:54.078: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:462/proxy/: tls qux (200; 2.624532ms)
Oct 23 22:59:54.279: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/proxy-service-oj8x5:portname2/: bar (200; 2.721505ms)
Oct 23 22:59:54.486: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/https:proxy-service-oj8x5:tlsportname1/: tls baz (200; 9.845744ms)
Oct 23 22:59:54.679: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:160/: foo (200; 2.6077ms)
Oct 23 22:59:54.879: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:160/: foo (200; 2.752581ms)
Oct 23 22:59:55.080: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/proxy/re... (200; 3.057736ms)
Oct 23 22:59:55.279: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:443/proxy/... (200; 2.595107ms)
Oct 23 22:59:55.481: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/http:proxy-service-oj8x5:portname2/: bar (200; 4.561732ms)
Oct 23 22:59:55.680: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/proxy-service-oj8x5:portname1/: foo (200; 3.190881ms)
Oct 23 22:59:55.880: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:162/: bar (200; 2.599782ms)
Oct 23 22:59:56.081: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:162/: bar (200; 3.300994ms)
Oct 23 22:59:56.280: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf/proxy/rewriteme"... (200; 2.90758ms)
Oct 23 22:59:56.483: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:462/proxy/: tls qux (200; 5.153033ms)
Oct 23 22:59:56.680: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/proxy-service-oj8x5:portname2/: bar (200; 2.523248ms)
Oct 23 22:59:56.880: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/http:proxy-service-oj8x5:portname1/: foo (200; 2.387008ms)
Oct 23 22:59:57.081: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/https:proxy-service-oj8x5:tlsportname2/: tls qux (200; 2.819907ms)
Oct 23 22:59:57.281: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/proxy/rewrite... (200; 2.575884ms)
Oct 23 22:59:57.482: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:160/proxy/: foo (200; 3.101667ms)
Oct 23 22:59:57.682: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:443/proxy/... (200; 3.075133ms)
Oct 23 22:59:57.892: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/http:proxy-service-oj8x5:portname2/: bar (200; 12.938001ms)
Oct 23 22:59:58.082: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/https:proxy-service-oj8x5:tlsportname1/: tls baz (200; 3.030355ms)
Oct 23 22:59:58.282: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:160/: foo (200; 2.945219ms)
Oct 23 22:59:58.482: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:160/: foo (200; 3.026944ms)
Oct 23 22:59:58.682: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/proxy/re... (200; 2.928209ms)
Oct 23 22:59:58.882: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:460/proxy/: tls baz (200; 2.801119ms)
Oct 23 22:59:59.083: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/rewrite... (200; 2.993819ms)
Oct 23 22:59:59.283: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/re... (200; 2.88377ms)
Oct 23 22:59:59.483: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:160/proxy/: foo (200; 2.874323ms)
Oct 23 22:59:59.683: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:162/proxy/: bar (200; 3.027928ms)
Oct 23 22:59:59.884: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:162/proxy/: bar (200; 3.13598ms)
Oct 23 23:00:00.084: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/http:proxy-service-oj8x5:portname1/: foo (200; 2.966795ms)
Oct 23 23:00:00.283: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/https:proxy-service-oj8x5:tlsportname2/: tls qux (200; 2.644248ms)
Oct 23 23:00:00.484: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/proxy/rewrite... (200; 2.926532ms)
Oct 23 23:00:00.684: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:160/proxy/: foo (200; 3.001237ms)
Oct 23 23:00:00.884: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:462/proxy/: tls qux (200; 2.567384ms)
Oct 23 23:00:01.084: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/proxy-service-oj8x5:portname2/: bar (200; 2.706701ms)
Oct 23 23:00:01.284: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/https:proxy-service-oj8x5:tlsportname1/: tls baz (200; 2.601618ms)
Oct 23 23:00:01.485: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:160/: foo (200; 2.827681ms)
Oct 23 23:00:01.685: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:160/: foo (200; 3.074707ms)
Oct 23 23:00:01.885: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/proxy/re... (200; 2.841532ms)
Oct 23 23:00:02.085: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:443/proxy/... (200; 3.257991ms)
Oct 23 23:00:02.285: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/http:proxy-service-oj8x5:portname2/: bar (200; 2.938385ms)
Oct 23 23:00:02.485: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/re... (200; 2.827776ms)
Oct 23 23:00:02.686: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:160/proxy/: foo (200; 3.196283ms)
Oct 23 23:00:02.886: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:162/proxy/: bar (200; 2.849075ms)
Oct 23 23:00:03.086: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:162/proxy/: bar (200; 2.760326ms)
Oct 23 23:00:03.286: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:460/proxy/: tls baz (200; 3.035612ms)
Oct 23 23:00:03.486: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/rewrite... (200; 2.916893ms)
Oct 23 23:00:03.686: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:162/: bar (200; 2.860853ms)
Oct 23 23:00:03.887: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:162/: bar (200; 3.188516ms)
Oct 23 23:00:04.087: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf/proxy/rewriteme"... (200; 3.610611ms)
Oct 23 23:00:04.286: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/proxy-service-oj8x5:portname1/: foo (200; 2.490264ms)
Oct 23 23:00:04.486: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/proxy-service-oj8x5:portname1/: foo (200; 2.430151ms)
Oct 23 23:00:04.687: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:162/: bar (200; 2.758855ms)
Oct 23 23:00:04.887: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:162/: bar (200; 2.962708ms)
Oct 23 23:00:05.088: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf/proxy/rewriteme"... (200; 3.272655ms)
Oct 23 23:00:05.287: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/proxy-service-oj8x5:portname2/: bar (200; 2.666637ms)
Oct 23 23:00:05.488: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/http:proxy-service-oj8x5:portname1/: foo (200; 2.678815ms)
Oct 23 23:00:05.688: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/https:proxy-service-oj8x5:tlsportname2/: tls qux (200; 2.946797ms)
Oct 23 23:00:05.888: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/proxy/rewrite... (200; 3.122245ms)
Oct 23 23:00:06.088: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:160/proxy/: foo (200; 3.154699ms)
Oct 23 23:00:06.289: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:462/proxy/: tls qux (200; 3.083815ms)
Oct 23 23:00:06.489: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/http:proxy-service-oj8x5:portname2/: bar (200; 2.956852ms)
Oct 23 23:00:06.688: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/https:proxy-service-oj8x5:tlsportname1/: tls baz (200; 2.6482ms)
Oct 23 23:00:06.889: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:160/: foo (200; 2.985902ms)
Oct 23 23:00:07.089: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:160/: foo (200; 2.953438ms)
Oct 23 23:00:07.289: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/proxy/re... (200; 2.768509ms)
Oct 23 23:00:07.490: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:443/proxy/... (200; 3.548297ms)
Oct 23 23:00:07.689: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/rewrite... (200; 2.895722ms)
Oct 23 23:00:07.889: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/re... (200; 2.649794ms)
Oct 23 23:00:08.090: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:160/proxy/: foo (200; 3.1392ms)
Oct 23 23:00:08.290: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:162/proxy/: bar (200; 3.288004ms)
Oct 23 23:00:08.490: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:162/proxy/: bar (200; 3.089408ms)
Oct 23 23:00:08.690: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:460/proxy/: tls baz (200; 3.041096ms)
Oct 23 23:00:08.891: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/proxy-service-oj8x5:portname1/: foo (200; 3.928611ms)
Oct 23 23:00:09.091: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:162/: bar (200; 3.000765ms)
Oct 23 23:00:09.291: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:162/: bar (200; 2.780433ms)
Oct 23 23:00:09.491: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf/proxy/rewriteme"... (200; 3.194099ms)
Oct 23 23:00:09.691: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:462/proxy/: tls qux (200; 3.12565ms)
Oct 23 23:00:09.891: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/proxy-service-oj8x5:portname2/: bar (200; 2.928567ms)
Oct 23 23:00:10.091: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/http:proxy-service-oj8x5:portname1/: foo (200; 2.977699ms)
Oct 23 23:00:10.292: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/https:proxy-service-oj8x5:tlsportname2/: tls qux (200; 3.172633ms)
Oct 23 23:00:10.492: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/proxy/rewrite... (200; 3.289961ms)
Oct 23 23:00:10.692: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:160/proxy/: foo (200; 3.041365ms)
Oct 23 23:00:10.892: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:443/proxy/... (200; 3.001166ms)
Oct 23 23:00:11.092: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/http:proxy-service-oj8x5:portname2/: bar (200; 2.66057ms)
Oct 23 23:00:11.292: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/https:proxy-service-oj8x5:tlsportname1/: tls baz (200; 2.339186ms)
Oct 23 23:00:11.492: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:160/: foo (200; 2.737636ms)
Oct 23 23:00:11.693: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:160/: foo (200; 2.751622ms)
Oct 23 23:00:11.893: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/proxy/re... (200; 2.780544ms)
Oct 23 23:00:12.093: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:460/proxy/: tls baz (200; 3.061432ms)
Oct 23 23:00:12.294: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/rewrite... (200; 3.146372ms)
Oct 23 23:00:12.494: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/re... (200; 3.521348ms)
Oct 23 23:00:12.694: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:160/proxy/: foo (200; 3.402391ms)
Oct 23 23:00:12.894: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:162/proxy/: bar (200; 3.101266ms)
Oct 23 23:00:13.094: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:162/proxy/: bar (200; 3.195507ms)
Oct 23 23:00:13.294: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:162/proxy/: bar (200; 2.896968ms)
Oct 23 23:00:13.494: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:162/proxy/: bar (200; 2.987761ms)
Oct 23 23:00:13.695: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:460/proxy/: tls baz (200; 3.747584ms)
Oct 23 23:00:13.895: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/rewrite... (200; 3.035834ms)
Oct 23 23:00:14.095: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/re... (200; 3.292281ms)
Oct 23 23:00:14.295: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:160/proxy/: foo (200; 2.635187ms)
Oct 23 23:00:14.495: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf/proxy/rewriteme"... (200; 3.031817ms)
Oct 23 23:00:14.695: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/proxy-service-oj8x5:portname1/: foo (200; 2.781815ms)
Oct 23 23:00:14.895: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:162/: bar (200; 2.882712ms)
Oct 23 23:00:15.095: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:162/: bar (200; 2.720199ms)
Oct 23 23:00:15.296: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/proxy/rewrite... (200; 2.672779ms)
Oct 23 23:00:15.496: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:160/proxy/: foo (200; 3.084365ms)
Oct 23 23:00:15.696: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:462/proxy/: tls qux (200; 3.120884ms)
Oct 23 23:00:15.896: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/proxy-service-oj8x5:portname2/: bar (200; 2.660776ms)
Oct 23 23:00:16.096: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/http:proxy-service-oj8x5:portname1/: foo (200; 2.420719ms)
Oct 23 23:00:16.296: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/https:proxy-service-oj8x5:tlsportname2/: tls qux (200; 2.894023ms)
Oct 23 23:00:16.497: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:160/: foo (200; 3.066728ms)
Oct 23 23:00:16.697: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/proxy/re... (200; 2.694736ms)
Oct 23 23:00:16.898: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:443/proxy/... (200; 3.570758ms)
Oct 23 23:00:17.097: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/http:proxy-service-oj8x5:portname2/: bar (200; 2.557771ms)
Oct 23 23:00:17.297: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/https:proxy-service-oj8x5:tlsportname1/: tls baz (200; 2.642146ms)
Oct 23 23:00:17.498: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:160/: foo (200; 3.10302ms)
Oct 23 23:00:17.698: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/http:proxy-service-oj8x5:portname1/: foo (200; 2.978003ms)
Oct 23 23:00:17.898: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/https:proxy-service-oj8x5:tlsportname2/: tls qux (200; 3.010965ms)
Oct 23 23:00:18.098: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/proxy/rewrite... (200; 2.773364ms)
Oct 23 23:00:18.298: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:160/proxy/: foo (200; 3.22898ms)
Oct 23 23:00:18.499: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:462/proxy/: tls qux (200; 3.121629ms)
Oct 23 23:00:18.698: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/proxy-service-oj8x5:portname2/: bar (200; 2.851213ms)
Oct 23 23:00:18.898: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/https:proxy-service-oj8x5:tlsportname1/: tls baz (200; 2.562415ms)
Oct 23 23:00:19.099: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:160/: foo (200; 3.14549ms)
Oct 23 23:00:19.299: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:160/: foo (200; 2.853025ms)
Oct 23 23:00:19.499: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/proxy/re... (200; 3.246286ms)
Oct 23 23:00:19.700: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:443/proxy/... (200; 3.239989ms)
Oct 23 23:00:19.899: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/http:proxy-service-oj8x5:portname2/: bar (200; 2.878546ms)
Oct 23 23:00:20.100: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/re... (200; 2.921877ms)
Oct 23 23:00:20.299: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:160/proxy/: foo (200; 2.454379ms)
Oct 23 23:00:20.500: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:162/proxy/: bar (200; 2.997257ms)
Oct 23 23:00:20.700: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:162/proxy/: bar (200; 2.939113ms)
Oct 23 23:00:20.900: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:460/proxy/: tls baz (200; 2.980247ms)
Oct 23 23:00:21.100: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/rewrite... (200; 2.792448ms)
Oct 23 23:00:21.300: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:162/: bar (200; 2.523553ms)
Oct 23 23:00:21.501: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:162/: bar (200; 3.115113ms)
Oct 23 23:00:21.701: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf/proxy/rewriteme"... (200; 2.994682ms)
Oct 23 23:00:21.901: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/proxy-service-oj8x5:portname1/: foo (200; 2.758771ms)
Oct 23 23:00:22.101: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:162/: bar (200; 3.320892ms)
Oct 23 23:00:22.301: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf/proxy/rewriteme"... (200; 2.672718ms)
Oct 23 23:00:22.502: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/proxy-service-oj8x5:portname1/: foo (200; 3.02927ms)
Oct 23 23:00:22.702: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:162/: bar (200; 3.46269ms)
Oct 23 23:00:22.902: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/https:proxy-service-oj8x5:tlsportname2/: tls qux (200; 3.041597ms)
Oct 23 23:00:23.102: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/proxy/rewrite... (200; 2.980547ms)
Oct 23 23:00:23.302: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:160/proxy/: foo (200; 2.969549ms)
Oct 23 23:00:23.502: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:462/proxy/: tls qux (200; 2.882549ms)
Oct 23 23:00:23.702: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/proxy-service-oj8x5:portname2/: bar (200; 2.739803ms)
Oct 23 23:00:23.903: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/http:proxy-service-oj8x5:portname1/: foo (200; 3.011717ms)
Oct 23 23:00:24.103: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:160/: foo (200; 2.902951ms)
Oct 23 23:00:24.303: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:160/: foo (200; 3.172687ms)
Oct 23 23:00:24.503: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/proxy/re... (200; 2.932276ms)
Oct 23 23:00:24.703: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:443/proxy/... (200; 2.849661ms)
Oct 23 23:00:24.903: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/http:proxy-service-oj8x5:portname2/: bar (200; 2.496302ms)
Oct 23 23:00:25.103: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/services/https:proxy-service-oj8x5:tlsportname1/: tls baz (200; 2.629266ms)
Oct 23 23:00:25.303: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:160/proxy/: foo (200; 2.602353ms)
Oct 23 23:00:25.504: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:162/proxy/: bar (200; 2.837598ms)
Oct 23 23:00:25.704: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:162/proxy/: bar (200; 3.076083ms)
Oct 23 23:00:25.904: INFO: /api/v1/namespaces/e2e-tests-proxy-k42hl/pods/https:proxy-service-oj8x5-pewmf:460/proxy/: tls baz (200; 3.349165ms)
Oct 23 23:00:26.104: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/proxy-service-oj8x5-pewmf:80/rewrite... (200; 3.139948ms)
Oct 23 23:00:26.304: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-k42hl/pods/http:proxy-service-oj8x5-pewmf:80/re... (200; 2.83678ms)
STEP: deleting replication controller proxy-service-oj8x5 in namespace e2e-tests-proxy-k42hl
Oct 23 23:00:28.558: INFO: Deleting RC proxy-service-oj8x5 took: 2.053866505s
Oct 23 23:00:38.564: INFO: Terminating RC proxy-service-oj8x5 pods took: 10.006039266s
[AfterEach] version v1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:00:38.602: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:00:38.607: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:00:38.607: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:00:38.607: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:00:38.607: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:00:38.607: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:00:38.607: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:00:38.607: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:00:38.607: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:00:38.607: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:00:38.607: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:00:38.607: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:00:38.607: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-proxy-k42hl" for this suite.
• [SLOW TEST:133.311 seconds]
Proxy
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:41
version v1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:40
should proxy through a service and a pod [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:218
------------------------------
[BeforeEach] Port forwarding
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:00:43.636: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-port-forwarding-doee3
Oct 23 23:00:43.663: INFO: Service account default in ns e2e-tests-port-forwarding-doee3 with secrets found. (27.105731ms)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:00:43.663: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-port-forwarding-doee3
Oct 23 23:00:43.665: INFO: Service account default in ns e2e-tests-port-forwarding-doee3 with secrets found. (2.001552ms)
[It] should support a client that connects, sends data, and disconnects [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:201
STEP: creating the target pod
Oct 23 23:00:43.670: INFO: Waiting up to 5m0s for pod pfpod status to be running
Oct 23 23:00:43.700: INFO: Waiting for pod pfpod in namespace 'e2e-tests-port-forwarding-doee3' status to be 'running'(found phase: "Pending", readiness: false) (30.569598ms elapsed)
Oct 23 23:00:45.703: INFO: Found pod 'pfpod' on node 'pull-e2e-0-minion-l2bc'
STEP: Running 'kubectl port-forward'
Oct 23 23:00:45.703: INFO: starting port-forward command and streaming output
Oct 23 23:00:45.703: INFO: Asynchronously running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config port-forward --namespace=e2e-tests-port-forwarding-doee3 pfpod :80'
Oct 23 23:00:45.704: INFO: reading from `kubectl port-forward` command's stderr
STEP: Dialing the local port
STEP: Sending the expected data to the local port
STEP: Closing the write half of the client's connection
STEP: Reading data from the local port
STEP: Closing the connection to the local port
[AfterEach] Port forwarding
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
STEP: Collecting events from namespace "e2e-tests-port-forwarding-doee3".
Oct 23 23:00:46.070: INFO: event for pfpod: {scheduler } Scheduled: Successfully assigned pfpod to pull-e2e-0-minion-l2bc
Oct 23 23:00:46.070: INFO: event for pfpod: {kubelet pull-e2e-0-minion-l2bc} Pulled: Container image "beta.gcr.io/google_containers/pause:2.0" already present on machine
Oct 23 23:00:46.070: INFO: event for pfpod: {kubelet pull-e2e-0-minion-l2bc} Created: Created with docker id e3935c74ac72
Oct 23 23:00:46.070: INFO: event for pfpod: {kubelet pull-e2e-0-minion-l2bc} Started: Started with docker id e3935c74ac72
Oct 23 23:00:46.070: INFO: event for pfpod: {kubelet pull-e2e-0-minion-l2bc} Pulling: pulling image "gcr.io/google_containers/portforwardtester:1.0"
Oct 23 23:00:46.070: INFO: event for pfpod: {kubelet pull-e2e-0-minion-l2bc} Pulled: Successfully pulled image "gcr.io/google_containers/portforwardtester:1.0"
Oct 23 23:00:46.070: INFO: event for pfpod: {kubelet pull-e2e-0-minion-l2bc} Created: Created with docker id 29f02e48dac8
Oct 23 23:00:46.071: INFO: event for pfpod: {kubelet pull-e2e-0-minion-l2bc} Started: Started with docker id 29f02e48dac8
Oct 23 23:00:46.079: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 23 23:00:46.079: INFO: rand-local-1bibd pull-e2e-0-minion-dp0i Running [{Ready False 0001-01-01 00:00:00 +0000 UTC 2015-10-23 23:00:08 +0000 UTC ContainersNotReady containers with unready status: [c]}]
Oct 23 23:00:46.080: INFO: rand-local-gz1ps pull-e2e-0-minion-n5ko Succeeded [{Ready False 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:59:18 +0000 UTC ContainersNotReady containers with unready status: [c]}]
Oct 23 23:00:46.080: INFO: rand-local-rsego pull-e2e-0-minion-n5ko Succeeded [{Ready False 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:59:49 +0000 UTC ContainersNotReady containers with unready status: [c]}]
Oct 23 23:00:46.080: INFO: rand-local-u6bwn pull-e2e-0-minion-dp0i Succeeded [{Ready False 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:59:28 +0000 UTC ContainersNotReady containers with unready status: [c]}]
Oct 23 23:00:46.080: INFO: pfpod pull-e2e-0-minion-l2bc Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 23:00:45 +0000 UTC }]
Oct 23 23:00:46.080: INFO: elasticsearch-logging-v1-3gvtt pull-e2e-0-minion-djcb Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:50:28 +0000 UTC }]
Oct 23 23:00:46.080: INFO: elasticsearch-logging-v1-t59p5 pull-e2e-0-minion-1dli Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:50:10 +0000 UTC }]
Oct 23 23:00:46.080: INFO: heapster-v10-u3l0u pull-e2e-0-minion-zr43 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:49:57 +0000 UTC }]
Oct 23 23:00:46.080: INFO: kibana-logging-v1-skk6z pull-e2e-0-minion-djcb Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:51:00 +0000 UTC }]
Oct 23 23:00:46.080: INFO: kube-dns-v9-6u0vh pull-e2e-0-minion-djcb Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:51:04 +0000 UTC }]
Oct 23 23:00:46.080: INFO: kube-ui-v3-laljg pull-e2e-0-minion-1dli Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:49:49 +0000 UTC }]
Oct 23 23:00:46.080: INFO: monitoring-influxdb-grafana-v2-bpbto pull-e2e-0-minion-zr43 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:50:57 +0000 UTC }]
Oct 23 23:00:46.080: INFO:
Oct 23 23:00:46.080: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:00:46.084: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:00:46.084: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:00:46.084: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:00:46.084: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:00:46.084: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:00:46.084: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:00:46.084: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:00:46.084: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:00:46.084: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:00:46.084: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:00:46.084: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:00:46.084: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-port-forwarding-doee3" for this suite.
• Failure [7.465 seconds]
Port forwarding
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:239
With a server that expects a client request
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:202
should support a client that connects, sends data, and disconnects [Conformance] [It]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:201
Oct 23 23:00:46.067: Expected "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" from server, got ""
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:194
------------------------------
[BeforeEach] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 22:58:36.531: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-job-5x2o2
Oct 23 22:58:36.557: INFO: Service account default in ns e2e-tests-job-5x2o2 had 0 secrets, ignoring for 2s: <nil>
Oct 23 22:58:38.560: INFO: Service account default in ns e2e-tests-job-5x2o2 with secrets found. (2.029041801s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 22:58:38.561: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-job-5x2o2
Oct 23 22:58:38.562: INFO: Service account default in ns e2e-tests-job-5x2o2 with secrets found. (1.774977ms)
[It] should run a job to completion when tasks sometimes fail and are locally restarted
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:75
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:01:00.571: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:01:00.575: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:01:00.575: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:01:00.575: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:01:00.575: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:01:00.575: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:01:00.575: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:01:00.575: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:01:00.575: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:01:00.575: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:01:00.575: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:01:00.575: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:01:00.575: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-job-5x2o2" for this suite.
• [SLOW TEST:149.062 seconds]
Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:183
should run a job to completion when tasks sometimes fail and are locally restarted
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:75
------------------------------
S
------------------------------
[BeforeEach] hostPath
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/host_path.go:53
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:00:51.104: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-hostpath-aftpv
Oct 23 23:00:51.133: INFO: Service account default in ns e2e-tests-hostpath-aftpv with secrets found. (29.755616ms)
[It] should support r/w [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/host_path.go:103
STEP: Creating a pod to test hostPath r/w
Oct 23 23:00:51.138: INFO: Waiting up to 5m0s for pod pod-host-path-test status to be success or failure
Oct 23 23:00:51.167: INFO: No Status.Info for container 'test-container-1' in pod 'pod-host-path-test' yet
Oct 23 23:00:51.167: INFO: Waiting for pod pod-host-path-test in namespace 'e2e-tests-hostpath-aftpv' status to be 'success or failure'(found phase: "Pending", readiness: false) (29.424929ms elapsed)
STEP: Saw pod success
Oct 23 23:00:53.171: INFO: Waiting up to 5m0s for pod pod-host-path-test status to be success or failure
Oct 23 23:00:53.174: INFO: Nil State.Terminated for container 'test-container-2' in pod 'pod-host-path-test' in namespace 'e2e-tests-hostpath-aftpv' so far
Oct 23 23:00:53.174: INFO: Waiting for pod pod-host-path-test in namespace 'e2e-tests-hostpath-aftpv' status to be 'success or failure'(found phase: "Running", readiness: false) (2.550855ms elapsed)
Oct 23 23:00:55.177: INFO: Nil State.Terminated for container 'test-container-2' in pod 'pod-host-path-test' in namespace 'e2e-tests-hostpath-aftpv' so far
Oct 23 23:00:55.177: INFO: Waiting for pod pod-host-path-test in namespace 'e2e-tests-hostpath-aftpv' status to be 'success or failure'(found phase: "Running", readiness: false) (2.005972246s elapsed)
Oct 23 23:00:57.181: INFO: Nil State.Terminated for container 'test-container-2' in pod 'pod-host-path-test' in namespace 'e2e-tests-hostpath-aftpv' so far
Oct 23 23:00:57.181: INFO: Waiting for pod pod-host-path-test in namespace 'e2e-tests-hostpath-aftpv' status to be 'success or failure'(found phase: "Running", readiness: false) (4.009593367s elapsed)
Oct 23 23:00:59.184: INFO: Nil State.Terminated for container 'test-container-2' in pod 'pod-host-path-test' in namespace 'e2e-tests-hostpath-aftpv' so far
Oct 23 23:00:59.184: INFO: Waiting for pod pod-host-path-test in namespace 'e2e-tests-hostpath-aftpv' status to be 'success or failure'(found phase: "Running", readiness: false) (6.013103804s elapsed)
Oct 23 23:01:01.187: INFO: Nil State.Terminated for container 'test-container-2' in pod 'pod-host-path-test' in namespace 'e2e-tests-hostpath-aftpv' so far
Oct 23 23:01:01.187: INFO: Waiting for pod pod-host-path-test in namespace 'e2e-tests-hostpath-aftpv' status to be 'success or failure'(found phase: "Running", readiness: false) (8.016288082s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node pull-e2e-0-minion-l2bc pod pod-host-path-test container test-container-2: <nil>
STEP: Successfully fetched pod logs:content of file "/test-volume/test-file": mount-tester new file
[AfterEach] hostPath
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/host_path.go:60
STEP: Destroying namespace for this suite e2e-tests-hostpath-aftpv
• [SLOW TEST:17.147 seconds]
hostPath
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/host_path.go:104
should support r/w [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/host_path.go:103
------------------------------
S
------------------------------
[BeforeEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:01:08.253: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-nui42
Oct 23 23:01:08.281: INFO: Service account default in ns e2e-tests-emptydir-nui42 had 0 secrets, ignoring for 2s: <nil>
Oct 23 23:01:10.283: INFO: Service account default in ns e2e-tests-emptydir-nui42 with secrets found. (2.029916728s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:01:10.283: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-nui42
Oct 23 23:01:10.285: INFO: Service account default in ns e2e-tests-emptydir-nui42 with secrets found. (1.940046ms)
[It] volume on default medium should have the correct mode [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:70
STEP: Creating a pod to test emptydir volume type on node default medium
Oct 23 23:01:10.292: INFO: Waiting up to 5m0s for pod pod-f3629d4d-79d9-11e5-ba1c-42010af00002 status to be success or failure
Oct 23 23:01:10.322: INFO: No Status.Info for container 'test-container' in pod 'pod-f3629d4d-79d9-11e5-ba1c-42010af00002' yet
Oct 23 23:01:10.322: INFO: Waiting for pod pod-f3629d4d-79d9-11e5-ba1c-42010af00002 in namespace 'e2e-tests-emptydir-nui42' status to be 'success or failure'(found phase: "Pending", readiness: false) (30.354609ms elapsed)
Oct 23 23:01:12.326: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-f3629d4d-79d9-11e5-ba1c-42010af00002' in namespace 'e2e-tests-emptydir-nui42' so far
Oct 23 23:01:12.326: INFO: Waiting for pod pod-f3629d4d-79d9-11e5-ba1c-42010af00002 in namespace 'e2e-tests-emptydir-nui42' status to be 'success or failure'(found phase: "Running", readiness: true) (2.033969454s elapsed)
Oct 23 23:01:14.329: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-f3629d4d-79d9-11e5-ba1c-42010af00002' in namespace 'e2e-tests-emptydir-nui42' so far
Oct 23 23:01:14.329: INFO: Waiting for pod pod-f3629d4d-79d9-11e5-ba1c-42010af00002 in namespace 'e2e-tests-emptydir-nui42' status to be 'success or failure'(found phase: "Running", readiness: true) (4.037091924s elapsed)
Oct 23 23:01:16.332: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-f3629d4d-79d9-11e5-ba1c-42010af00002' in namespace 'e2e-tests-emptydir-nui42' so far
Oct 23 23:01:16.332: INFO: Waiting for pod pod-f3629d4d-79d9-11e5-ba1c-42010af00002 in namespace 'e2e-tests-emptydir-nui42' status to be 'success or failure'(found phase: "Running", readiness: true) (6.040376401s elapsed)
Oct 23 23:01:18.335: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-f3629d4d-79d9-11e5-ba1c-42010af00002' in namespace 'e2e-tests-emptydir-nui42' so far
Oct 23 23:01:18.335: INFO: Waiting for pod pod-f3629d4d-79d9-11e5-ba1c-42010af00002 in namespace 'e2e-tests-emptydir-nui42' status to be 'success or failure'(found phase: "Running", readiness: true) (8.043809326s elapsed)
Oct 23 23:01:20.339: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-f3629d4d-79d9-11e5-ba1c-42010af00002' in namespace 'e2e-tests-emptydir-nui42' so far
Oct 23 23:01:20.339: INFO: Waiting for pod pod-f3629d4d-79d9-11e5-ba1c-42010af00002 in namespace 'e2e-tests-emptydir-nui42' status to be 'success or failure'(found phase: "Running", readiness: true) (10.047423649s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node pull-e2e-0-minion-n5ko pod pod-f3629d4d-79d9-11e5-ba1c-42010af00002 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": 61267
perms of file "/test-volume": -rwxrwxrwx
[AfterEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:01:22.372: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:01:22.401: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:01:22.401: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:01:22.401: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:01:22.401: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:01:22.401: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:01:22.401: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:01:22.401: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:01:22.401: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:01:22.401: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:01:22.401: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:01:22.401: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:01:22.401: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-emptydir-nui42" for this suite.
• [SLOW TEST:19.165 seconds]
EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:95
volume on default medium should have the correct mode [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:70
------------------------------
SSSS
------------------------------
[BeforeEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:01:27.425: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-ij8a5
Oct 23 23:01:27.468: INFO: Service account default in ns e2e-tests-emptydir-ij8a5 with secrets found. (43.131805ms)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:01:27.468: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-ij8a5
Oct 23 23:01:27.470: INFO: Service account default in ns e2e-tests-emptydir-ij8a5 with secrets found. (1.769799ms)
[It] should support (root,0666,tmpfs) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:50
STEP: Creating a pod to test emptydir 0666 on tmpfs
Oct 23 23:01:27.475: INFO: Waiting up to 5m0s for pod pod-fda0c775-79d9-11e5-ba1c-42010af00002 status to be success or failure
Oct 23 23:01:27.506: INFO: No Status.Info for container 'test-container' in pod 'pod-fda0c775-79d9-11e5-ba1c-42010af00002' yet
Oct 23 23:01:27.506: INFO: Waiting for pod pod-fda0c775-79d9-11e5-ba1c-42010af00002 in namespace 'e2e-tests-emptydir-ij8a5' status to be 'success or failure'(found phase: "Pending", readiness: false) (31.208665ms elapsed)
Oct 23 23:01:29.554: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-fda0c775-79d9-11e5-ba1c-42010af00002' in namespace 'e2e-tests-emptydir-ij8a5' so far
Oct 23 23:01:29.554: INFO: Waiting for pod pod-fda0c775-79d9-11e5-ba1c-42010af00002 in namespace 'e2e-tests-emptydir-ij8a5' status to be 'success or failure'(found phase: "Running", readiness: true) (2.079500567s elapsed)
Oct 23 23:01:31.557: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-fda0c775-79d9-11e5-ba1c-42010af00002' in namespace 'e2e-tests-emptydir-ij8a5' so far
Oct 23 23:01:31.557: INFO: Waiting for pod pod-fda0c775-79d9-11e5-ba1c-42010af00002 in namespace 'e2e-tests-emptydir-ij8a5' status to be 'success or failure'(found phase: "Running", readiness: true) (4.082684383s elapsed)
Oct 23 23:01:33.560: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-fda0c775-79d9-11e5-ba1c-42010af00002' in namespace 'e2e-tests-emptydir-ij8a5' so far
Oct 23 23:01:33.560: INFO: Waiting for pod pod-fda0c775-79d9-11e5-ba1c-42010af00002 in namespace 'e2e-tests-emptydir-ij8a5' status to be 'success or failure'(found phase: "Running", readiness: true) (6.085881912s elapsed)
Oct 23 23:01:35.564: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-fda0c775-79d9-11e5-ba1c-42010af00002' in namespace 'e2e-tests-emptydir-ij8a5' so far
Oct 23 23:01:35.564: INFO: Waiting for pod pod-fda0c775-79d9-11e5-ba1c-42010af00002 in namespace 'e2e-tests-emptydir-ij8a5' status to be 'success or failure'(found phase: "Running", readiness: true) (8.089214411s elapsed)
Oct 23 23:01:37.567: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-fda0c775-79d9-11e5-ba1c-42010af00002' in namespace 'e2e-tests-emptydir-ij8a5' so far
Oct 23 23:01:37.567: INFO: Waiting for pod pod-fda0c775-79d9-11e5-ba1c-42010af00002 in namespace 'e2e-tests-emptydir-ij8a5' status to be 'success or failure'(found phase: "Running", readiness: true) (10.092829437s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node pull-e2e-0-minion-dp0i pod pod-fda0c775-79d9-11e5-ba1c-42010af00002 container test-container: <nil>
STEP: Successfully fetched pod logs: mount type of "/test-volume": tmpfs
content of file "/test-volume/test-file": mount-tester new file
perms of file "/test-volume/test-file": -rw-rw-rw-
[AfterEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:01:39.590: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:01:39.620: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:01:39.620: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:01:39.620: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:01:39.620: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:01:39.620: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:01:39.620: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:01:39.620: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:01:39.620: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:01:39.620: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:01:39.620: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:01:39.620: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:01:39.620: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-emptydir-ij8a5" for this suite.
• [SLOW TEST:17.213 seconds]
EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:95
should support (root,0666,tmpfs) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:50
------------------------------
S
------------------------------
[BeforeEach] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:01:44.641: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-m1jbw
Oct 23 23:01:44.714: INFO: Service account default in ns e2e-tests-services-m1jbw with secrets found. (73.476593ms)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:01:44.714: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-m1jbw
Oct 23 23:01:44.716: INFO: Service account default in ns e2e-tests-services-m1jbw with secrets found. (1.624266ms)
[BeforeEach] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:54
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
[It] should provide secure master service [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:71
[AfterEach] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:01:44.719: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:01:44.723: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:01:44.723: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:01:44.723: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:01:44.723: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:01:44.723: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:01:44.723: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:01:44.723: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:01:44.723: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:01:44.723: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:01:44.723: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:01:44.723: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:01:44.723: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-services-m1jbw" for this suite.
[AfterEach] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:64
• [SLOW TEST:5.101 seconds]
Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:871
should provide secure master service [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:71
------------------------------
[BeforeEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:89
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
Oct 23 23:01:49.745: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-3b7ds
Oct 23 23:01:49.772: INFO: Service account default in ns e2e-tests-kubectl-3b7ds with secrets found. (27.264061ms)
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:483
STEP: validating cluster-info
Oct 23 23:01:49.772: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config cluster-info'
Oct 23 23:01:49.960: INFO: Kubernetes master is running at https://104.196.0.155
Elasticsearch is running at https://104.196.0.155/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging
Heapster is running at https://104.196.0.155/api/v1/proxy/namespaces/kube-system/services/heapster
Kibana is running at https://104.196.0.155/api/v1/proxy/namespaces/kube-system/services/kibana-logging
KubeDNS is running at https://104.196.0.155/api/v1/proxy/namespaces/kube-system/services/kube-dns
KubeUI is running at https://104.196.0.155/api/v1/proxy/namespaces/kube-system/services/kube-ui
Grafana is running at https://104.196.0.155/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
InfluxDB is running at https://104.196.0.155/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb
[AfterEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:96
STEP: Destroying namespace for this suite e2e-tests-kubectl-3b7ds
• [SLOW TEST:5.235 seconds]
Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:925
Kubectl cluster-info
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:484
should check if Kubernetes master services is included in cluster-info [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:483
------------------------------
[BeforeEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:89
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
Oct 23 23:01:54.981: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-fm7lf
Oct 23 23:01:55.007: INFO: Service account default in ns e2e-tests-kubectl-fm7lf had 0 secrets, ignoring for 2s: <nil>
Oct 23 23:01:57.010: INFO: Service account default in ns e2e-tests-kubectl-fm7lf with secrets found. (2.028708104s)
[BeforeEach] Update Demo
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:104
[It] should do a rolling update of a replication controller [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:135
STEP: creating the initial replication controller
Oct 23 23:01:57.010: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config create -f /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/docs/user-guide/update-demo/nautilus-rc.yaml --namespace=e2e-tests-kubectl-fm7lf'
Oct 23 23:01:57.225: INFO: replicationcontroller "update-demo-nautilus" created
STEP: waiting for all containers in name=update-demo pods to come up.
Oct 23 23:01:57.225: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-fm7lf'
Oct 23 23:01:57.412: INFO: update-demo-nautilus-fkkca update-demo-nautilus-wpf46
Oct 23 23:01:57.412: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods update-demo-nautilus-fkkca -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-fm7lf'
Oct 23 23:01:57.597: INFO: true
Oct 23 23:01:57.597: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods update-demo-nautilus-fkkca -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-fm7lf'
Oct 23 23:01:57.774: INFO: gcr.io/google_containers/update-demo:nautilus
Oct 23 23:01:57.774: INFO: validating pod update-demo-nautilus-fkkca
Oct 23 23:01:57.777: INFO: got data: {
"image": "nautilus.jpg"
}
Oct 23 23:01:57.777: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Oct 23 23:01:57.777: INFO: update-demo-nautilus-fkkca is verified up and running
Oct 23 23:01:57.777: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods update-demo-nautilus-wpf46 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-fm7lf'
Oct 23 23:01:57.957: INFO: true
Oct 23 23:01:57.957: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods update-demo-nautilus-wpf46 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-fm7lf'
Oct 23 23:01:58.145: INFO: gcr.io/google_containers/update-demo:nautilus
Oct 23 23:01:58.145: INFO: validating pod update-demo-nautilus-wpf46
Oct 23 23:01:58.148: INFO: got data: {
"image": "nautilus.jpg"
}
Oct 23 23:01:58.148: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Oct 23 23:01:58.148: INFO: update-demo-nautilus-wpf46 is verified up and running
STEP: rolling-update to new replication controller
Oct 23 23:01:58.148: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config rolling-update update-demo-nautilus --update-period=1s -f /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/docs/user-guide/update-demo/kitten-rc.yaml --namespace=e2e-tests-kubectl-fm7lf'
Oct 23 23:02:33.699: INFO: Created update-demo-kitten
Scaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)
Scaling update-demo-kitten up to 1
Scaling update-demo-nautilus down to 1
Scaling update-demo-kitten up to 2
Scaling update-demo-nautilus down to 0
Update succeeded. Deleting update-demo-nautilus
replicationcontroller "update-demo-nautilus" rolling updated to "update-demo-kitten"
STEP: waiting for all containers in name=update-demo pods to come up.
Oct 23 23:02:33.700: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-fm7lf'
Oct 23 23:02:33.882: INFO: update-demo-kitten-dkdgl update-demo-kitten-s0hs3
Oct 23 23:02:33.882: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods update-demo-kitten-dkdgl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-fm7lf'
Oct 23 23:02:34.062: INFO: true
Oct 23 23:02:34.062: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods update-demo-kitten-dkdgl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-fm7lf'
Oct 23 23:02:34.252: INFO: gcr.io/google_containers/update-demo:kitten
Oct 23 23:02:34.252: INFO: validating pod update-demo-kitten-dkdgl
Oct 23 23:02:34.257: INFO: got data: {
"image": "kitten.jpg"
}
Oct 23 23:02:34.257: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Oct 23 23:02:34.257: INFO: update-demo-kitten-dkdgl is verified up and running
Oct 23 23:02:34.257: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods update-demo-kitten-s0hs3 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-fm7lf'
Oct 23 23:02:34.443: INFO: true
Oct 23 23:02:34.443: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods update-demo-kitten-s0hs3 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-fm7lf'
Oct 23 23:02:34.630: INFO: gcr.io/google_containers/update-demo:kitten
Oct 23 23:02:34.630: INFO: validating pod update-demo-kitten-s0hs3
Oct 23 23:02:34.634: INFO: got data: {
"image": "kitten.jpg"
}
Oct 23 23:02:34.634: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Oct 23 23:02:34.634: INFO: update-demo-kitten-s0hs3 is verified up and running
[AfterEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:96
STEP: Destroying namespace for this suite e2e-tests-kubectl-fm7lf
• [SLOW TEST:44.671 seconds]
Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:925
Update Demo
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:136
should do a rolling update of a replication controller [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:135
------------------------------
S
------------------------------
[BeforeEach] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:02:39.655: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-job-aixq8
Oct 23 23:02:39.684: INFO: Service account default in ns e2e-tests-job-aixq8 had 0 secrets, ignoring for 2s: <nil>
Oct 23 23:02:41.687: INFO: Service account default in ns e2e-tests-job-aixq8 with secrets found. (2.03103612s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:02:41.687: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-job-aixq8
Oct 23 23:02:41.688: INFO: Service account default in ns e2e-tests-job-aixq8 with secrets found. (1.909885ms)
[It] should scale a job down
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:158
STEP: Creating a job
STEP: Ensuring active pods == startParallelism
STEP: scale job down
STEP: Ensuring active pods == endParallelism
[AfterEach] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:03:19.767: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:03:19.771: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:03:19.771: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:03:19.771: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:03:19.771: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:03:19.771: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:03:19.771: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:03:19.771: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:03:19.771: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:03:19.771: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:03:19.771: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:03:19.771: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:03:19.771: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-job-aixq8" for this suite.
• [SLOW TEST:45.134 seconds]
Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:183
should scale a job down
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:158
------------------------------
[BeforeEach] version v1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:03:24.792: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-t3nz2
Oct 23 23:03:24.820: INFO: Service account default in ns e2e-tests-proxy-t3nz2 had 0 secrets, ignoring for 2s: <nil>
Oct 23 23:03:26.823: INFO: Service account default in ns e2e-tests-proxy-t3nz2 with secrets found. (2.031782067s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:03:26.823: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-t3nz2
Oct 23 23:03:26.825: INFO: Service account default in ns e2e-tests-proxy-t3nz2 with secrets found. (1.757678ms)
[It] should proxy logs on node with explicit kubelet port [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:56
Oct 23 23:03:26.846: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli:10250/logs/: <pre>
<a href="wtmp">wtmp</a>
<a href="google.log">google.log</a>
<a href="btmp">btmp</a>
<a href... (200; 17.078503ms)
Oct 23 23:03:26.849: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli:10250/logs/: <pre>
<a href="wtmp">wtmp</a>
<a href="google.log">google.log</a>
<a href="btmp">btmp</a>
<a href... (200; 2.608935ms)
Oct 23 23:03:26.851: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli:10250/logs/: <pre>
<a href="wtmp">wtmp</a>
<a href="google.log">google.log</a>
<a href="btmp">btmp</a>
<a href... (200; 2.420167ms)
Oct 23 23:03:26.854: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli:10250/logs/: <pre>
<a href="wtmp">wtmp</a>
<a href="google.log">google.log</a>
<a href="btmp">btmp</a>
<a href... (200; 2.31537ms)
Oct 23 23:03:26.856: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli:10250/logs/: <pre>
<a href="wtmp">wtmp</a>
<a href="google.log">google.log</a>
<a href="btmp">btmp</a>
<a href... (200; 2.372937ms)
Oct 23 23:03:26.858: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli:10250/logs/: <pre>
<a href="wtmp">wtmp</a>
<a href="google.log">google.log</a>
<a href="btmp">btmp</a>
<a href... (200; 2.123692ms)
Oct 23 23:03:26.861: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli:10250/logs/: <pre>
<a href="wtmp">wtmp</a>
<a href="google.log">google.log</a>
<a href="btmp">btmp</a>
<a href... (200; 2.535066ms)
Oct 23 23:03:26.991: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli:10250/logs/: <pre>
<a href="wtmp">wtmp</a>
<a href="google.log">google.log</a>
<a href="btmp">btmp</a>
<a href... (200; 129.903874ms)
Oct 23 23:03:27.191: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli:10250/logs/: <pre>
<a href="wtmp">wtmp</a>
<a href="google.log">google.log</a>
<a href="btmp">btmp</a>
<a href... (200; 200.19276ms)
Oct 23 23:03:27.390: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli:10250/logs/: <pre>
<a href="wtmp">wtmp</a>
<a href="google.log">google.log</a>
<a href="btmp">btmp</a>
<a href... (200; 198.746604ms)
Oct 23 23:03:27.590: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli:10250/logs/: <pre>
<a href="wtmp">wtmp</a>
<a href="google.log">google.log</a>
<a href="btmp">btmp</a>
<a href... (200; 199.807268ms)
Oct 23 23:03:27.790: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli:10250/logs/: <pre>
<a href="wtmp">wtmp</a>
<a href="google.log">google.log</a>
<a href="btmp">btmp</a>
<a href... (200; 200.101092ms)
Oct 23 23:03:27.990: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli:10250/logs/: <pre>
<a href="wtmp">wtmp</a>
<a href="google.log">google.log</a>
<a href="btmp">btmp</a>
<a href... (200; 200.323482ms)
Oct 23 23:03:28.190: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli:10250/logs/: <pre>
<a href="wtmp">wtmp</a>
<a href="google.log">google.log</a>
<a href="btmp">btmp</a>
<a href... (200; 199.793772ms)
Oct 23 23:03:28.390: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli:10250/logs/: <pre>
<a href="wtmp">wtmp</a>
<a href="google.log">google.log</a>
<a href="btmp">btmp</a>
<a href... (200; 199.992015ms)
Oct 23 23:03:28.591: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli:10250/logs/: <pre>
<a href="wtmp">wtmp</a>
<a href="google.log">google.log</a>
<a href="btmp">btmp</a>
<a href... (200; 200.689305ms)
Oct 23 23:03:28.789: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli:10250/logs/: <pre>
<a href="wtmp">wtmp</a>
<a href="google.log">google.log</a>
<a href="btmp">btmp</a>
<a href... (200; 198.784462ms)
Oct 23 23:03:28.990: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli:10250/logs/: <pre>
<a href="wtmp">wtmp</a>
<a href="google.log">google.log</a>
<a href="btmp">btmp</a>
<a href... (200; 200.402146ms)
Oct 23 23:03:29.190: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli:10250/logs/: <pre>
<a href="wtmp">wtmp</a>
<a href="google.log">google.log</a>
<a href="btmp">btmp</a>
<a href... (200; 200.362549ms)
Oct 23 23:03:29.390: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli:10250/logs/: <pre>
<a href="wtmp">wtmp</a>
<a href="google.log">google.log</a>
<a href="btmp">btmp</a>
<a href... (200; 199.45558ms)
[AfterEach] version v1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:03:29.390: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:03:29.591: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:03:29.591: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:03:29.591: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:03:29.591: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:03:29.591: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:03:29.591: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:03:29.591: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:03:29.591: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:03:29.591: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:03:29.591: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:03:29.591: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:03:29.591: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-proxy-t3nz2" for this suite.
• [SLOW TEST:5.404 seconds]
Proxy
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:41
version v1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:40
should proxy logs on node with explicit kubelet port [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:56
------------------------------
[BeforeEach] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:01:05.596: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-deployment-b46n7
Oct 23 23:01:05.624: INFO: Get service account default in ns e2e-tests-deployment-b46n7 failed, ignoring for 2s: serviceaccounts "default" not found
Oct 23 23:01:07.627: INFO: Service account default in ns e2e-tests-deployment-b46n7 with secrets found. (2.031792042s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:01:07.627: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-deployment-b46n7
Oct 23 23:01:07.629: INFO: Service account default in ns e2e-tests-deployment-b46n7 with secrets found. (1.952368ms)
[It] deployment should delete old pods and create new ones
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:37
Oct 23 23:01:07.689: INFO: Pod name sample-pod: Found 3 pods out of 3
STEP: ensuring each pod is running
Oct 23 23:01:07.689: INFO: Waiting up to 5m0s for pod nginx-controller-5bq58 status to be running
Oct 23 23:01:07.691: INFO: Waiting for pod nginx-controller-5bq58 in namespace 'e2e-tests-deployment-b46n7' status to be 'running'(found phase: "Pending", readiness: false) (2.329281ms elapsed)
Oct 23 23:01:09.695: INFO: Found pod 'nginx-controller-5bq58' on node 'pull-e2e-0-minion-dp0i'
Oct 23 23:01:09.695: INFO: Waiting up to 5m0s for pod nginx-controller-7zh2f status to be running
Oct 23 23:01:09.697: INFO: Found pod 'nginx-controller-7zh2f' on node 'pull-e2e-0-minion-l2bc'
Oct 23 23:01:09.697: INFO: Waiting up to 5m0s for pod nginx-controller-pjqp2 status to be running
Oct 23 23:01:09.699: INFO: Found pod 'nginx-controller-pjqp2' on node 'pull-e2e-0-minion-n5ko'
STEP: trying to dial each unique pod
Oct 23 23:01:09.708: INFO: Controller sample-pod: Got non-empty result from replica 1 [nginx-controller-5bq58]: "<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\n body {\n width: 35em;\n margin: 0 auto;\n font-family: Tahoma, Verdana, Arial, sans-serif;\n }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href=\"http://nginx.org/\">nginx.org</a>.<br/>\nCommercial support is available at\n<a href=\"http://nginx.com/\">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n", 1 of 3 required successes so far
Oct 23 23:01:09.713: INFO: Controller sample-pod: Got non-empty result from replica 2 [nginx-controller-7zh2f]: "<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\n body {\n width: 35em;\n margin: 0 auto;\n font-family: Tahoma, Verdana, Arial, sans-serif;\n }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href=\"http://nginx.org/\">nginx.org</a>.<br/>\nCommercial support is available at\n<a href=\"http://nginx.com/\">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n", 2 of 3 required successes so far
Oct 23 23:01:09.718: INFO: Controller sample-pod: Got non-empty result from replica 3 [nginx-controller-pjqp2]: "<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\n body {\n width: 35em;\n margin: 0 auto;\n font-family: Tahoma, Verdana, Arial, sans-serif;\n }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href=\"http://nginx.org/\">nginx.org</a>.<br/>\nCommercial support is available at\n<a href=\"http://nginx.com/\">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n", 3 of 3 required successes so far
Oct 23 23:01:09.718: INFO: Creating deployment redis-deployment
Oct 23 23:03:31.743: INFO: deleting deployment redis-deployment
Oct 23 23:03:31.747: INFO: deleting replication controller nginx-controller
[AfterEach] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:03:31.751: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:03:31.755: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:03:31.755: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:03:31.755: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:03:31.755: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:03:31.755: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:03:31.755: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:03:31.755: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:03:31.755: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:03:31.755: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:03:31.755: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:03:31.755: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:03:31.755: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-deployment-b46n7" for this suite.
• [SLOW TEST:151.178 seconds]
Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:41
deployment should delete old pods and create new ones
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:37
------------------------------
[BeforeEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:89
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
Oct 23 23:03:36.775: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-59jqh
Oct 23 23:03:36.802: INFO: Service account default in ns e2e-tests-kubectl-59jqh with secrets found. (27.180591ms)
[BeforeEach] Simple pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:164
STEP: creating the pod
Oct 23 23:03:36.802: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config create -f /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/docs/user-guide/pod.yaml --namespace=e2e-tests-kubectl-59jqh'
Oct 23 23:03:37.014: INFO: pod "nginx" created
Oct 23 23:03:37.014: INFO: Waiting up to 5m0s for the following 1 pods to be running and ready: [nginx]
Oct 23 23:03:37.014: INFO: Waiting up to 5m0s for pod nginx status to be running and ready
Oct 23 23:03:37.017: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-59jqh' status to be 'running and ready'(found phase: "Pending", readiness: false) (2.961455ms elapsed)
Oct 23 23:03:39.020: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [nginx]
[It] should support port-forward
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:434
STEP: forwarding the container port to a local port
Oct 23 23:03:39.020: INFO: starting port-forward command and streaming output
Oct 23 23:03:39.021: INFO: Asynchronously running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config port-forward --namespace=e2e-tests-kubectl-59jqh nginx :80'
Oct 23 23:03:39.021: INFO: reading from `kubectl port-forward` command's stderr
STEP: curling local port output
Oct 23 23:03:39.387: INFO: got:
[AfterEach] Simple pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:167
STEP: using delete to clean up resources
Oct 23 23:03:39.387: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config stop --grace-period=0 -f /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/docs/user-guide/pod.yaml --namespace=e2e-tests-kubectl-59jqh'
Oct 23 23:03:39.594: INFO: pod "nginx" deleted
Oct 23 23:03:39.594: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-59jqh'
Oct 23 23:03:39.774: INFO:
Oct 23 23:03:39.774: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-59jqh -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Oct 23 23:03:39.952: INFO:
[AfterEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:96
STEP: Destroying namespace for this suite e2e-tests-kubectl-59jqh
• Failure [8.195 seconds]
Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:925
Simple pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:435
should support port-forward [It]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:434
Oct 23 23:03:39.387: Failed http.Get of forwarded port (http://localhost:46381): Get http://localhost:46381: EOF
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:429
------------------------------
SS
------------------------------
[BeforeEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:03:30.204: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-nlpss
Oct 23 23:03:30.232: INFO: Service account default in ns e2e-tests-emptydir-nlpss had 0 secrets, ignoring for 2s: <nil>
Oct 23 23:03:32.235: INFO: Service account default in ns e2e-tests-emptydir-nlpss with secrets found. (2.03040489s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:03:32.235: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-nlpss
Oct 23 23:03:32.237: INFO: Service account default in ns e2e-tests-emptydir-nlpss with secrets found. (1.756489ms)
[It] should support (non-root,0777,tmpfs) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:66
STEP: Creating a pod to test emptydir 0777 on tmpfs
Oct 23 23:03:32.241: INFO: Waiting up to 5m0s for pod pod-47fea541-79da-11e5-ba1c-42010af00002 status to be success or failure
Oct 23 23:03:32.270: INFO: No Status.Info for container 'test-container' in pod 'pod-47fea541-79da-11e5-ba1c-42010af00002' yet
Oct 23 23:03:32.270: INFO: Waiting for pod pod-47fea541-79da-11e5-ba1c-42010af00002 in namespace 'e2e-tests-emptydir-nlpss' status to be 'success or failure'(found phase: "Pending", readiness: false) (29.0973ms elapsed)
Oct 23 23:03:34.273: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-47fea541-79da-11e5-ba1c-42010af00002' in namespace 'e2e-tests-emptydir-nlpss' so far
Oct 23 23:03:34.273: INFO: Waiting for pod pod-47fea541-79da-11e5-ba1c-42010af00002 in namespace 'e2e-tests-emptydir-nlpss' status to be 'success or failure'(found phase: "Running", readiness: true) (2.032331559s elapsed)
Oct 23 23:03:36.277: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-47fea541-79da-11e5-ba1c-42010af00002' in namespace 'e2e-tests-emptydir-nlpss' so far
Oct 23 23:03:36.277: INFO: Waiting for pod pod-47fea541-79da-11e5-ba1c-42010af00002 in namespace 'e2e-tests-emptydir-nlpss' status to be 'success or failure'(found phase: "Running", readiness: true) (4.035666938s elapsed)
Oct 23 23:03:38.280: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-47fea541-79da-11e5-ba1c-42010af00002' in namespace 'e2e-tests-emptydir-nlpss' so far
Oct 23 23:03:38.280: INFO: Waiting for pod pod-47fea541-79da-11e5-ba1c-42010af00002 in namespace 'e2e-tests-emptydir-nlpss' status to be 'success or failure'(found phase: "Running", readiness: true) (6.039014408s elapsed)
Oct 23 23:03:40.283: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-47fea541-79da-11e5-ba1c-42010af00002' in namespace 'e2e-tests-emptydir-nlpss' so far
Oct 23 23:03:40.283: INFO: Waiting for pod pod-47fea541-79da-11e5-ba1c-42010af00002 in namespace 'e2e-tests-emptydir-nlpss' status to be 'success or failure'(found phase: "Running", readiness: true) (8.042307858s elapsed)
Oct 23 23:03:42.287: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-47fea541-79da-11e5-ba1c-42010af00002' in namespace 'e2e-tests-emptydir-nlpss' so far
Oct 23 23:03:42.287: INFO: Waiting for pod pod-47fea541-79da-11e5-ba1c-42010af00002 in namespace 'e2e-tests-emptydir-nlpss' status to be 'success or failure'(found phase: "Running", readiness: true) (10.04605378s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node pull-e2e-0-minion-n5ko pod pod-47fea541-79da-11e5-ba1c-42010af00002 container test-container: <nil>
STEP: Successfully fetched pod logs: mount type of "/test-volume": tmpfs
content of file "/test-volume/test-file": mount-tester new file
perms of file "/test-volume/test-file": -rwxrwxrwx
[AfterEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:03:44.308: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:03:44.336: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:03:44.336: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:03:44.336: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:03:44.336: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:03:44.336: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:03:44.336: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:03:44.336: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:03:44.336: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:03:44.336: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:03:44.336: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:03:44.336: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:03:44.336: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-emptydir-nlpss" for this suite.
• [SLOW TEST:19.157 seconds]
EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:95
should support (non-root,0777,tmpfs) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:66
------------------------------
S
------------------------------
[BeforeEach] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:03:49.361: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-job-zt75o
Oct 23 23:03:49.389: INFO: Service account default in ns e2e-tests-job-zt75o with secrets found. (27.962276ms)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:03:49.389: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-job-zt75o
Oct 23 23:03:49.391: INFO: Service account default in ns e2e-tests-job-zt75o with secrets found. (1.803963ms)
[It] should run a job to completion when tasks succeed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:61
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:04:11.398: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:04:11.402: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:04:11.402: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:04:11.402: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:04:11.402: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:04:11.402: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:04:11.402: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:04:11.402: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:04:11.402: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:04:11.402: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:04:11.402: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:04:11.402: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:04:11.402: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-job-zt75o" for this suite.
• [SLOW TEST:27.063 seconds]
Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:183
should run a job to completion when tasks succeed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:61
------------------------------
[BeforeEach] Mesos
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:04:16.424: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-mj7sa
Oct 23 23:04:16.452: INFO: Service account default in ns e2e-tests-pods-mj7sa had 0 secrets, ignoring for 2s: <nil>
Oct 23 23:04:18.454: INFO: Service account default in ns e2e-tests-pods-mj7sa with secrets found. (2.030013438s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:04:18.454: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-mj7sa
Oct 23 23:04:18.456: INFO: Service account default in ns e2e-tests-pods-mj7sa with secrets found. (1.880249ms)
[BeforeEach] Mesos
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/mesos.go:33
[AfterEach] Mesos
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:04:18.457: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:04:18.461: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:04:18.461: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:04:18.461: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:04:18.461: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:04:18.461: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:04:18.461: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:04:18.461: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:04:18.461: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:04:18.461: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:04:18.461: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:04:18.461: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:04:18.461: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-pods-mj7sa" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [7.055 seconds]
Mesos
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/mesos.go:53
applies slave attributes as labels [BeforeEach]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/mesos.go:52
Oct 23 23:04:18.456: Only supported for providers [mesos/docker] (not gce)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/util.go:216
------------------------------
S
------------------------------
[BeforeEach] Monitoring
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:44
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
[It] should verify monitoring pods and all cluster nodes are available on influxdb using heapster.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:48
•SS
------------------------------
[BeforeEach] ReplicationController
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:04:23.621: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-replication-controller-kjw4j
Oct 23 23:04:23.648: INFO: Service account default in ns e2e-tests-replication-controller-kjw4j had 0 secrets, ignoring for 2s: <nil>
Oct 23 23:04:25.651: INFO: Service account default in ns e2e-tests-replication-controller-kjw4j with secrets found. (2.029498769s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:04:25.651: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-replication-controller-kjw4j
Oct 23 23:04:25.653: INFO: Service account default in ns e2e-tests-replication-controller-kjw4j with secrets found. (2.168243ms)
[It] should serve a basic image on each replica with a private image
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:45
STEP: Creating replication controller my-hostname-private-67d561af-79da-11e5-ba1c-42010af00002
Oct 23 23:04:25.688: INFO: Pod name my-hostname-private-67d561af-79da-11e5-ba1c-42010af00002: Found 2 pods out of 2
STEP: Ensuring each pod is running
Oct 23 23:04:25.688: INFO: Waiting up to 5m0s for pod my-hostname-private-67d561af-79da-11e5-ba1c-42010af00002-2cczd status to be running
Oct 23 23:04:25.690: INFO: Waiting for pod my-hostname-private-67d561af-79da-11e5-ba1c-42010af00002-2cczd in namespace 'e2e-tests-replication-controller-kjw4j' status to be 'running'(found phase: "Pending", readiness: false) (2.523623ms elapsed)
Oct 23 23:04:27.694: INFO: Waiting for pod my-hostname-private-67d561af-79da-11e5-ba1c-42010af00002-2cczd in namespace 'e2e-tests-replication-controller-kjw4j' status to be 'running'(found phase: "Pending", readiness: false) (2.005957303s elapsed)
Oct 23 23:04:29.697: INFO: Found pod 'my-hostname-private-67d561af-79da-11e5-ba1c-42010af00002-2cczd' on node 'pull-e2e-0-minion-n5ko'
Oct 23 23:04:29.697: INFO: Waiting up to 5m0s for pod my-hostname-private-67d561af-79da-11e5-ba1c-42010af00002-don1i status to be running
Oct 23 23:04:29.700: INFO: Found pod 'my-hostname-private-67d561af-79da-11e5-ba1c-42010af00002-don1i' on node 'pull-e2e-0-minion-l2bc'
STEP: Trying to dial each unique pod
Oct 23 23:04:34.709: INFO: Controller my-hostname-private-67d561af-79da-11e5-ba1c-42010af00002: Got expected result from replica 1 [my-hostname-private-67d561af-79da-11e5-ba1c-42010af00002-2cczd]: "my-hostname-private-67d561af-79da-11e5-ba1c-42010af00002-2cczd", 1 of 2 required successes so far
Oct 23 23:04:34.714: INFO: Controller my-hostname-private-67d561af-79da-11e5-ba1c-42010af00002: Got expected result from replica 2 [my-hostname-private-67d561af-79da-11e5-ba1c-42010af00002-don1i]: "my-hostname-private-67d561af-79da-11e5-ba1c-42010af00002-don1i", 2 of 2 required successes so far
STEP: deleting replication controller my-hostname-private-67d561af-79da-11e5-ba1c-42010af00002 in namespace e2e-tests-replication-controller-kjw4j
Oct 23 23:04:36.765: INFO: Deleting RC my-hostname-private-67d561af-79da-11e5-ba1c-42010af00002 took: 2.048888039s
Oct 23 23:04:46.771: INFO: Terminating RC my-hostname-private-67d561af-79da-11e5-ba1c-42010af00002 pods took: 10.005839756s
[AfterEach] ReplicationController
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:04:46.771: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:04:46.775: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:04:46.775: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:04:46.775: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:04:46.775: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:04:46.775: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:04:46.775: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:04:46.775: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:04:46.775: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:04:46.775: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:04:46.775: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:04:46.775: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:04:46.775: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-replication-controller-kjw4j" for this suite.
• [SLOW TEST:28.195 seconds]
ReplicationController
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:46
should serve a basic image on each replica with a private image
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:45
------------------------------
[BeforeEach] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:03:44.975: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-4uru9
Oct 23 23:03:45.002: INFO: Service account default in ns e2e-tests-pods-4uru9 with secrets found. (27.82562ms)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:03:45.002: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-4uru9
Oct 23 23:03:45.005: INFO: Service account default in ns e2e-tests-pods-4uru9 with secrets found. (2.073079ms)
[It] should not back-off restarting a container on LivenessProbe failure
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:952
STEP: submitting the pod to kubernetes
Oct 23 23:03:45.010: INFO: Waiting up to 5m0s for pod pod-back-off-liveness status to be running
Oct 23 23:03:45.040: INFO: Waiting for pod pod-back-off-liveness in namespace 'e2e-tests-pods-4uru9' status to be 'running'(found phase: "Pending", readiness: false) (30.171572ms elapsed)
Oct 23 23:03:47.044: INFO: Found pod 'pod-back-off-liveness' on node 'pull-e2e-0-minion-l2bc'
STEP: verifying the pod is in kubernetes
STEP: getting restart delay-0
Oct 23 23:05:06.126: INFO: getRestartDelay: finishedAt=2015-10-23 23:05:05 +0000 UTC restartedAt=2015-10-23 23:05:05 +0000 UTC (0)
STEP: getting restart delay-1
Oct 23 23:05:45.261: INFO: getRestartDelay: finishedAt=2015-10-23 23:05:45 +0000 UTC restartedAt=2015-10-23 23:05:45 +0000 UTC (0)
STEP: getting restart delay-2
Oct 23 23:06:25.411: INFO: getRestartDelay: finishedAt=2015-10-23 23:06:25 +0000 UTC restartedAt=2015-10-23 23:06:25 +0000 UTC (0)
STEP: deleting the pod
[AfterEach] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:06:25.421: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:06:25.449: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:06:25.449: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:06:25.449: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:06:25.449: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:06:25.449: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:06:25.449: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:06:25.449: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:06:25.449: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:06:25.449: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:06:25.449: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:06:25.449: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:06:25.449: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-pods-4uru9" for this suite.
• [SLOW TEST:165.493 seconds]
Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1187
should not back-off restarting a container on LivenessProbe failure
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:952
------------------------------
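The "getRestartDelay" lines above report the gap between a container's `finishedAt` and its `restartedAt`; the test passes because that gap stays at 0, i.e. liveness-probe restarts are not subjected to the exponential crash-loop back-off applied to ordinary crashing containers. A small sketch of both quantities (the 10s base and 5m cap are the kubelet defaults of that era, shown here only for contrast):

```python
from datetime import datetime, timezone

def restart_delay(finished_at, restarted_at):
    """Seconds the kubelet waited between container exit and restart.

    This is the trailing "(0)" value in the getRestartDelay log lines.
    """
    return (restarted_at - finished_at).total_seconds()

def crashloop_backoff(restart_count, base=10.0, cap=300.0):
    """Ordinary crash-loop back-off schedule: 10s, 20s, 40s, ... capped
    at 5m. The test asserts liveness-probe restarts skip this schedule.
    """
    return min(base * (2 ** restart_count), cap)
```
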
S
------------------------------
[BeforeEach] Port forwarding
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:06:30.473: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-port-forwarding-pc72x
Oct 23 23:06:30.506: INFO: Service account default in ns e2e-tests-port-forwarding-pc72x with secrets found. (33.125687ms)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:06:30.506: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-port-forwarding-pc72x
Oct 23 23:06:30.508: INFO: Service account default in ns e2e-tests-port-forwarding-pc72x with secrets found. (1.912399ms)
[It] should support a client that connects, sends no data, and disconnects [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:237
STEP: creating the target pod
Oct 23 23:06:30.513: INFO: Waiting up to 5m0s for pod pfpod status to be running
Oct 23 23:06:30.555: INFO: Waiting for pod pfpod in namespace 'e2e-tests-port-forwarding-pc72x' status to be 'running'(found phase: "Pending", readiness: false) (41.977994ms elapsed)
Oct 23 23:06:32.558: INFO: Found pod 'pfpod' on node 'pull-e2e-0-minion-dp0i'
STEP: Running 'kubectl port-forward'
Oct 23 23:06:32.558: INFO: starting port-forward command and streaming output
Oct 23 23:06:32.559: INFO: Asynchronously running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config port-forward --namespace=e2e-tests-port-forwarding-pc72x pfpod :80'
Oct 23 23:06:32.560: INFO: reading from `kubectl port-forward` command's stderr
STEP: Dialing the local port
STEP: Reading data from the local port
STEP: Closing the connection to the local port
[AfterEach] Port forwarding
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
STEP: Collecting events from namespace "e2e-tests-port-forwarding-pc72x".
Oct 23 23:06:32.919: INFO: event for pfpod: {scheduler } Scheduled: Successfully assigned pfpod to pull-e2e-0-minion-dp0i
Oct 23 23:06:32.919: INFO: event for pfpod: {kubelet pull-e2e-0-minion-dp0i} Pulled: Container image "beta.gcr.io/google_containers/pause:2.0" already present on machine
Oct 23 23:06:32.919: INFO: event for pfpod: {kubelet pull-e2e-0-minion-dp0i} Created: Created with docker id 95d7eac7ba69
Oct 23 23:06:32.919: INFO: event for pfpod: {kubelet pull-e2e-0-minion-dp0i} Started: Started with docker id 95d7eac7ba69
Oct 23 23:06:32.919: INFO: event for pfpod: {kubelet pull-e2e-0-minion-dp0i} Pulling: pulling image "gcr.io/google_containers/portforwardtester:1.0"
Oct 23 23:06:32.919: INFO: event for pfpod: {kubelet pull-e2e-0-minion-dp0i} Pulled: Successfully pulled image "gcr.io/google_containers/portforwardtester:1.0"
Oct 23 23:06:32.919: INFO: event for pfpod: {kubelet pull-e2e-0-minion-dp0i} Created: Created with docker id 70536f351093
Oct 23 23:06:32.919: INFO: event for pfpod: {kubelet pull-e2e-0-minion-dp0i} Started: Started with docker id 70536f351093
Oct 23 23:06:32.927: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 23 23:06:32.927: INFO: liveness-exec pull-e2e-0-minion-n5ko Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 23:04:54 +0000 UTC }]
Oct 23 23:06:32.927: INFO: pfpod pull-e2e-0-minion-dp0i Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 23:06:32 +0000 UTC }]
Oct 23 23:06:32.927: INFO: elasticsearch-logging-v1-3gvtt pull-e2e-0-minion-djcb Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:50:28 +0000 UTC }]
Oct 23 23:06:32.927: INFO: elasticsearch-logging-v1-t59p5 pull-e2e-0-minion-1dli Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:50:10 +0000 UTC }]
Oct 23 23:06:32.927: INFO: heapster-v10-u3l0u pull-e2e-0-minion-zr43 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:49:57 +0000 UTC }]
Oct 23 23:06:32.927: INFO: kibana-logging-v1-skk6z pull-e2e-0-minion-djcb Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:51:00 +0000 UTC }]
Oct 23 23:06:32.927: INFO: kube-dns-v9-6u0vh pull-e2e-0-minion-djcb Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:51:04 +0000 UTC }]
Oct 23 23:06:32.927: INFO: kube-ui-v3-laljg pull-e2e-0-minion-1dli Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:49:49 +0000 UTC }]
Oct 23 23:06:32.927: INFO: monitoring-influxdb-grafana-v2-bpbto pull-e2e-0-minion-zr43 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:50:57 +0000 UTC }]
Oct 23 23:06:32.927: INFO:
Oct 23 23:06:32.927: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:06:32.931: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:06:32.931: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:06:32.931: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:06:32.931: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:06:32.931: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:06:32.931: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:06:32.931: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:06:32.931: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:06:32.931: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:06:32.931: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:06:32.931: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:06:32.931: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-port-forwarding-pc72x" for this suite.
• Failure [7.482 seconds]
Port forwarding
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:239
With a server that expects no client request
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:238
should support a client that connects, sends no data, and disconnects [Conformance] [It]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:237
Oct 23 23:06:32.915: Expected "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" from server, got ""
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:231
------------------------------
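The failure above is in the "sends no data" port-forward case: the portforwardtester server is expected to write 100 "x" bytes to any client that connects without sending anything, but through the forwarded port the test read an empty string. The exchange itself, stripped of kubectl, is just this (a self-contained sketch over a local socket; function names are illustrative):

```python
import socket
import threading

PAYLOAD = b"x" * 100  # the e2e test expects exactly 100 "x" bytes back

def serve_once(listener):
    """Accept one client, write the payload without waiting for input, close."""
    conn, _ = listener.accept()
    conn.sendall(PAYLOAD)
    conn.close()

def connect_send_nothing_and_read(port):
    """Client side of the failing case: connect, send no data, read until
    EOF, disconnect. The test failed because this returned b"" in CI."""
    with socket.create_connection(("127.0.0.1", port)) as s:
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)
```
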
[BeforeEach] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:06:37.956: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-phkxt
Oct 23 23:06:37.986: INFO: Service account default in ns e2e-tests-services-phkxt with secrets found. (30.175325ms)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:06:37.986: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-phkxt
Oct 23 23:06:37.988: INFO: Service account default in ns e2e-tests-services-phkxt with secrets found. (1.765014ms)
[BeforeEach] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:54
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
[It] should prevent NodePort collisions
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:648
STEP: creating service nodeport-collision-1 with type NodePort in namespace e2e-tests-services-phkxt
STEP: creating service nodeport-collision-2 with conflicting NodePort
STEP: deleting service nodeport-collision-1 to release NodePort
STEP: creating service nodeport-collision-2 with no-longer-conflicting NodePort
STEP: deleting service nodeport-collision-2 in namespace e2e-tests-services-phkxt
[AfterEach] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:06:38.157: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:06:38.161: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:06:38.161: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:06:38.161: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:06:38.161: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:06:38.161: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:06:38.161: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:06:38.161: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:06:38.161: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:06:38.161: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:06:38.161: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:06:38.161: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:06:38.161: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-services-phkxt" for this suite.
[AfterEach] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:64
• [SLOW TEST:5.225 seconds]
Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:871
should prevent NodePort collisions
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:648
------------------------------
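The NodePort-collision steps above exercise a simple invariant: a requested NodePort already held by another service is rejected, and becomes allocatable again once the holder is deleted. A toy allocator showing that behavior (illustrative only, not the apiserver's allocator; 30000-32767 is the default NodePort range):

```python
class NodePortAllocator:
    """Tracks NodePorts in use; rejects collisions until release."""

    def __init__(self, lo=30000, hi=32767):
        self.lo, self.hi = lo, hi
        self.in_use = set()

    def allocate(self, port):
        if not (self.lo <= port <= self.hi):
            raise ValueError("port %d outside NodePort range" % port)
        if port in self.in_use:
            # Corresponds to the "conflicting NodePort" step in the test.
            raise ValueError("provided port is already allocated")
        self.in_use.add(port)

    def release(self, port):
        # Deleting a service releases its NodePort for reuse.
        self.in_use.discard(port)
```
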
SS
------------------------------
[BeforeEach] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:04:51.796: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-rwocv
Oct 23 23:04:51.822: INFO: Service account default in ns e2e-tests-pods-rwocv had 0 secrets, ignoring for 2s: <nil>
Oct 23 23:04:53.825: INFO: Service account default in ns e2e-tests-pods-rwocv with secrets found. (2.029405944s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:04:53.825: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-rwocv
Oct 23 23:04:53.827: INFO: Service account default in ns e2e-tests-pods-rwocv with secrets found. (1.722881ms)
[It] should *not* be restarted with a docker exec "cat /tmp/health" liveness probe [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:593
STEP: Creating pod liveness-exec in namespace e2e-tests-pods-rwocv
Oct 23 23:04:53.832: INFO: Waiting up to 5m0s for pod liveness-exec status to be !pending
Oct 23 23:04:53.884: INFO: Waiting for pod liveness-exec in namespace 'e2e-tests-pods-rwocv' status to be '!pending'(found phase: "Pending", readiness: false) (51.751409ms elapsed)
Oct 23 23:04:55.887: INFO: Saw pod 'liveness-exec' in namespace 'e2e-tests-pods-rwocv' out of pending state (found '"Running"')
STEP: Started pod liveness-exec in namespace e2e-tests-pods-rwocv
STEP: checking the pod's current state and verifying that restartCount is present
STEP: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:06:56.163: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:06:56.196: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:06:56.196: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:06:56.196: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:06:56.196: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:06:56.196: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:06:56.196: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:06:56.196: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:06:56.196: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:06:56.196: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:06:56.196: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:06:56.196: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:06:56.196: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-pods-rwocv" for this suite.
• [SLOW TEST:129.419 seconds]
Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1187
should *not* be restarted with a docker exec "cat /tmp/health" liveness probe [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:593
------------------------------
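The liveness-exec test above relies on the exec-probe contract: the kubelet runs the configured command (`cat /tmp/health`) inside the container, treats exit code 0 as healthy, and restarts the container otherwise. Since the pod keeps `/tmp/health` in place for the observation window, restartCount stays 0. The contract in miniature (a sketch run on the host, assuming a Unix `cat`):

```python
import os
import subprocess
import tempfile

def exec_probe(command):
    """An exec liveness probe succeeds iff the command exits 0.

    In the test, `command` is ["cat", "/tmp/health"]: it passes while the
    file exists and would fail (triggering a restart) once it is removed.
    """
    result = subprocess.run(command, capture_output=True)
    return result.returncode == 0
```
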
S
------------------------------
[BeforeEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:89
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
Oct 23 23:07:01.218: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-bgare
Oct 23 23:07:01.246: INFO: Service account default in ns e2e-tests-kubectl-bgare with secrets found. (28.430022ms)
[BeforeEach] Kubectl logs
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:694
STEP: creating an rc
Oct 23 23:07:01.246: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config create -f /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/examples/guestbook-go/redis-master-controller.json --namespace=e2e-tests-kubectl-bgare'
Oct 23 23:07:01.478: INFO: replicationcontroller "redis-master" created
[It] should be able to retrieve and filter logs [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:735
Oct 23 23:07:03.484: INFO: Waiting up to 5m0s for pod redis-master-z1bv7 status to be running
Oct 23 23:07:03.486: INFO: Waiting for pod redis-master-z1bv7 in namespace 'e2e-tests-kubectl-bgare' status to be 'running'(found phase: "Pending", readiness: false) (2.229135ms elapsed)
Oct 23 23:07:05.489: INFO: Waiting for pod redis-master-z1bv7 in namespace 'e2e-tests-kubectl-bgare' status to be 'running'(found phase: "Pending", readiness: false) (2.005442359s elapsed)
Oct 23 23:07:07.493: INFO: Waiting for pod redis-master-z1bv7 in namespace 'e2e-tests-kubectl-bgare' status to be 'running'(found phase: "Pending", readiness: false) (4.008605293s elapsed)
Oct 23 23:07:09.496: INFO: Waiting for pod redis-master-z1bv7 in namespace 'e2e-tests-kubectl-bgare' status to be 'running'(found phase: "Pending", readiness: false) (6.011682889s elapsed)
Oct 23 23:07:11.499: INFO: Found pod 'redis-master-z1bv7' on node 'pull-e2e-0-minion-n5ko'
STEP: checking for matching strings
Oct 23 23:07:11.499: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config log redis-master-z1bv7 redis-master --namespace=e2e-tests-kubectl-bgare'
Oct 23 23:07:11.678: INFO: 1:C 23 Oct 23:07:10.914 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
_._
_.-``__ ''-._
_.-`` `. `_. ''-._ Redis 3.0.5 (00000000/0) 64 bit
.-`` .-```. ```\/ _.,_ ''-._
( ' , .-` | `, ) Running in standalone mode
|`-._`-...-` __...-.``-._|'` _.-'| Port: 6379
| `-._ `._ / _.-' | PID: 1
`-._ `-._ `-./ _.-' _.-'
|`-._`-._ `-.__.-' _.-'_.-'|
| `-._`-._ _.-'_.-' | http://redis.io
`-._ `-._`-.__.-'_.-' _.-'
|`-._`-._ `-.__.-' _.-'_.-'|
| `-._`-._ _.-'_.-' |
`-._ `-._`-.__.-'_.-' _.-'
`-._ `-.__.-' _.-'
`-._ _.-'
`-.__.-'
1:M 23 Oct 23:07:10.915 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1:M 23 Oct 23:07:10.915 # Server started, Redis version 3.0.5
1:M 23 Oct 23:07:10.915 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
1:M 23 Oct 23:07:10.915 * The server is now ready to accept connections on port 6379
STEP: limiting log lines
Oct 23 23:07:11.678: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config log redis-master-z1bv7 redis-master --namespace=e2e-tests-kubectl-bgare --tail=1'
Oct 23 23:07:11.855: INFO: 1:M 23 Oct 23:07:10.915 * The server is now ready to accept connections on port 6379
STEP: limiting log bytes
Oct 23 23:07:11.855: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config log redis-master-z1bv7 redis-master --namespace=e2e-tests-kubectl-bgare --limit-bytes=1'
Oct 23 23:07:12.037: INFO: 1
STEP: exposing timestamps
Oct 23 23:07:12.037: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config log redis-master-z1bv7 redis-master --namespace=e2e-tests-kubectl-bgare --tail=1 --timestamps'
Oct 23 23:07:12.215: INFO: 2015-10-23T23:07:10.934817418Z 1:M 23 Oct 23:07:10.915 * The server is now ready to accept connections on port 6379
STEP: restricting to a time range
Oct 23 23:07:13.716: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config log redis-master-z1bv7 redis-master --namespace=e2e-tests-kubectl-bgare --since=1s'
Oct 23 23:07:13.907: INFO:
Oct 23 23:07:13.907: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config log redis-master-z1bv7 redis-master --namespace=e2e-tests-kubectl-bgare --since=24h'
Oct 23 23:07:14.088: INFO: 1:C 23 Oct 23:07:10.914 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
_._
_.-``__ ''-._
_.-`` `. `_. ''-._ Redis 3.0.5 (00000000/0) 64 bit
.-`` .-```. ```\/ _.,_ ''-._
( ' , .-` | `, ) Running in standalone mode
|`-._`-...-` __...-.``-._|'` _.-'| Port: 6379
| `-._ `._ / _.-' | PID: 1
`-._ `-._ `-./ _.-' _.-'
|`-._`-._ `-.__.-' _.-'_.-'|
| `-._`-._ _.-'_.-' | http://redis.io
`-._ `-._`-.__.-'_.-' _.-'
|`-._`-._ `-.__.-' _.-'_.-'|
| `-._`-._ _.-'_.-' |
`-._ `-._`-.__.-'_.-' _.-'
`-._ `-.__.-' _.-'
`-._ _.-'
`-.__.-'
1:M 23 Oct 23:07:10.915 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1:M 23 Oct 23:07:10.915 # Server started, Redis version 3.0.5
1:M 23 Oct 23:07:10.915 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
1:M 23 Oct 23:07:10.915 * The server is now ready to accept connections on port 6379
[AfterEach] Kubectl logs
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:697
STEP: using delete to clean up resources
Oct 23 23:07:14.088: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config stop --grace-period=0 -f /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/examples/guestbook-go/redis-master-controller.json --namespace=e2e-tests-kubectl-bgare'
Oct 23 23:07:16.319: INFO: replicationcontroller "redis-master" deleted
Oct 23 23:07:16.319: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-bgare'
Oct 23 23:07:16.501: INFO:
Oct 23 23:07:16.501: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-bgare -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Oct 23 23:07:16.722: INFO:
[AfterEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:96
STEP: Destroying namespace for this suite e2e-tests-kubectl-bgare
• [SLOW TEST:20.524 seconds]
Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:925
Kubectl logs
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:736
should be able to retrieve and filter logs [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:735
------------------------------
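The kubectl-logs test above walks through the main log-filtering flags: `--tail=1` (last N lines), `--limit-bytes=1` (truncate to N bytes, hence the lone "1" in the output), `--timestamps`, and `--since=1s`/`--since=24h` (age cutoff). The real filtering happens server-side in the kubelet; a rough client-side approximation over pre-parsed `(timestamp, text)` pairs, for illustration only:

```python
def filter_logs(lines, tail=None, limit_bytes=None,
                since_seconds=None, now=None):
    """Approximate kubectl's --since, --tail, and --limit-bytes on a list
    of (timestamp, text) pairs. Timestamps are seconds; `now` is the
    reference time for the --since cutoff."""
    if since_seconds is not None and now is not None:
        lines = [(ts, txt) for ts, txt in lines if now - ts <= since_seconds]
    if tail is not None:
        lines = lines[-tail:]          # keep only the last `tail` lines
    out = "\n".join(txt for _, txt in lines)
    if limit_bytes is not None:
        out = out[:limit_bytes]        # truncate like --limit-bytes
    return out
```

This reproduces the behavior seen in the log: `--tail=1` yields only the "ready to accept connections" line, `--limit-bytes=1` yields a single character, and `--since=1s` yields nothing because the startup banner is older than one second.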
[BeforeEach] Events
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:07:21.742: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-events-6i3es
Oct 23 23:07:21.770: INFO: Service account default in ns e2e-tests-events-6i3es had 0 secrets, ignoring for 2s: <nil>
Oct 23 23:07:23.773: INFO: Service account default in ns e2e-tests-events-6i3es with secrets found. (2.030514197s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:07:23.773: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-events-6i3es
Oct 23 23:07:23.775: INFO: Service account default in ns e2e-tests-events-6i3es with secrets found. (2.531288ms)
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/events.go:127
STEP: creating the pod
STEP: submitting the pod to kubernetes
Oct 23 23:07:23.780: INFO: Waiting up to 5m0s for pod send-events-d200ae08-79da-11e5-ba1c-42010af00002 status to be running
Oct 23 23:07:23.813: INFO: Waiting for pod send-events-d200ae08-79da-11e5-ba1c-42010af00002 in namespace 'e2e-tests-events-6i3es' status to be 'running'(found phase: "Pending", readiness: false) (32.450242ms elapsed)
Oct 23 23:07:25.816: INFO: Found pod 'send-events-d200ae08-79da-11e5-ba1c-42010af00002' on node 'pull-e2e-0-minion-l2bc'
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
&{TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:send-events-d200ae08-79da-11e5-ba1c-42010af00002 GenerateName: Namespace:e2e-tests-events-6i3es SelfLink:/api/v1/namespaces/e2e-tests-events-6i3es/pods/send-events-d200ae08-79da-11e5-ba1c-42010af00002 UID:d2036562-79da-11e5-b1b8-42010af00002 ResourceVersion:2928 Generation:0 CreationTimestamp:2015-10-23 23:07:23 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[time:775956519 name:foo] Annotations:map[]} Spec:{Volumes:[{Name:default-token-w2koe VolumeSource:{HostPath:<nil> EmptyDir:<nil> GCEPersistentDisk:<nil> AWSElasticBlockStore:<nil> GitRepo:<nil> Secret:0xc208bdcfd0 NFS:<nil> ISCSI:<nil> Glusterfs:<nil> PersistentVolumeClaim:<nil> RBD:<nil> Cinder:<nil> CephFS:<nil> Flocker:<nil> DownwardAPI:<nil> FC:<nil>}}] Containers:[{Name:p Image:gcr.io/google_containers/serve_hostname:1.1 Command:[] Args:[] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:80 Protocol:TCP HostIP:}] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:default-token-w2koe ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount}] LivenessProbe:<nil> ReadinessProbe:<nil> Lifecycle:<nil> TerminationMessagePath:/dev/termination-log ImagePullPolicy:IfNotPresent SecurityContext:<nil> Stdin:false StdinOnce:false TTY:false}] RestartPolicy:Always TerminationGracePeriodSeconds:0xc208bdd000 ActiveDeadlineSeconds:<nil> DNSPolicy:ClusterFirst NodeSelector:map[] ServiceAccountName:default NodeName:pull-e2e-0-minion-l2bc SecurityContext:0xc208addc80 ImagePullSecrets:[]} Status:{Phase:Running Conditions:[{Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2015-10-23 23:07:24 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.240.0.4 PodIP:10.245.2.21 StartTime:2015-10-23 23:07:23 +0000 UTC ContainerStatuses:[{Name:p State:{Waiting:<nil> Running:0xc20877a620 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:true RestartCount:0 Image:gcr.io/google_containers/serve_hostname:1.1 ImageID:docker://00619279d4083019321e4865829a65a550a23c677d76cbb44274ade0d92ca7a9 ContainerID:docker://1f25eb6267ab0522fdeaff9bf78d7cca911a7011b113caeddb851e7da907a460}]}}
STEP: checking for scheduler event about the pod
Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] Events
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:07:29.834: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:07:29.838: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:07:29.838: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:07:29.838: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:07:29.838: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:07:29.838: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:07:29.838: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:07:29.838: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:07:29.838: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:07:29.838: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:07:29.838: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:07:29.838: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:07:29.838: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-events-6i3es" for this suite.
• [SLOW TEST:13.114 seconds]
Events
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/events.go:128
should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/events.go:127
------------------------------
SSS
------------------------------
[BeforeEach] PrivilegedPod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:07:34.860: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-e2e-privilegedpod-kye2y
Oct 23 23:07:34.887: INFO: Service account default in ns e2e-tests-e2e-privilegedpod-kye2y with secrets found. (27.409339ms)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:07:34.887: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-e2e-privilegedpod-kye2y
Oct 23 23:07:34.889: INFO: Service account default in ns e2e-tests-e2e-privilegedpod-kye2y with secrets found. (1.799824ms)
[It] should test privileged pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/privileged.go:73
STEP: Getting ssh-able hosts
[AfterEach] PrivilegedPod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
STEP: Collecting events from namespace "e2e-tests-e2e-privilegedpod-kye2y".
Oct 23 23:07:34.904: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 23 23:07:34.904: INFO: dns-test-b9d4de92-79da-11e5-9772-42010af00002 pull-e2e-0-minion-dp0i Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 23:07:21 +0000 UTC }]
Oct 23 23:07:34.904: INFO: elasticsearch-logging-v1-3gvtt pull-e2e-0-minion-djcb Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:50:28 +0000 UTC }]
Oct 23 23:07:34.904: INFO: elasticsearch-logging-v1-t59p5 pull-e2e-0-minion-1dli Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:50:10 +0000 UTC }]
Oct 23 23:07:34.904: INFO: heapster-v10-u3l0u pull-e2e-0-minion-zr43 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:49:57 +0000 UTC }]
Oct 23 23:07:34.904: INFO: kibana-logging-v1-skk6z pull-e2e-0-minion-djcb Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:51:00 +0000 UTC }]
Oct 23 23:07:34.904: INFO: kube-dns-v9-6u0vh pull-e2e-0-minion-djcb Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:51:04 +0000 UTC }]
Oct 23 23:07:34.904: INFO: kube-ui-v3-laljg pull-e2e-0-minion-1dli Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:49:49 +0000 UTC }]
Oct 23 23:07:34.904: INFO: monitoring-influxdb-grafana-v2-bpbto pull-e2e-0-minion-zr43 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:50:57 +0000 UTC }]
Oct 23 23:07:34.904: INFO:
Oct 23 23:07:34.904: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:07:34.907: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:07:34.907: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:07:34.907: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:07:34.907: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:07:34.907: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:07:34.907: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:07:34.907: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:07:34.907: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:07:34.907: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:07:34.907: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:07:34.907: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:07:34.907: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-e2e-privilegedpod-kye2y" for this suite.
• Failure [5.064 seconds]
PrivilegedPod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/privileged.go:74
should test privileged pod [It]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/privileged.go:73
Expected error:
<*errors.errorString | 0xc2083495f0>: {
s: "only found 0 external IPs on nodes, but found 6 nodes. Nodelist: &{{ } {/api/v1/nodes 2946} [{{ } {pull-e2e-0-minion-1dli /api/v1/nodes/pull-e2e-0-minion-1dli 473e964e-79d8-11e5-b1b8-42010af00002 2930 0 2015-10-23 22:49:11 +0000 UTC <nil> <nil> map[kubernetes.io/hostname:pull-e2e-0-minion-1dli] map[]} {10.245.0.0/24 pull-e2e-0-minion-1dli false} {map[cpu:{2.000 DecimalSI} memory:{7848468480.000 BinarySI} pods:{40.000 DecimalSI}] [{Ready True 2015-10-23 23:07:25 +0000 UTC 2015-10-23 22:49:42 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 10.240.0.5} {InternalIP 10.240.0.5}] {{10250}} {06067045f01b156572deffb722c59e21 06067045-F01B-1565-72DE-FFB722C59E21 c1b93dc7-9f9c-4317-81d7-1d063eba1c00 4.1.7-coreos CoreOS 766.4.0 docker://1.7.1 v1.2.0-alpha.2.246+f93f77766dd24b-dirty v1.2.0-alpha.2.246+f93f77766dd24b-dirty}}} {{ } {pull-e2e-0-minion-djcb /api/v1/nodes/pull-e2e-0-minion-djcb 48872259-79d8-11e5-b1b8-42010af00002 2931 0 2015-10-23 22:49:14 +0000 UTC <nil> <nil> map[kubernetes.io/hostname:pull-e2e-0-minion-djcb] map[]} {10.245.1.0/24 pull-e2e-0-minion-djcb false} {map[cpu:{2.000 DecimalSI} memory:{7848468480.000 BinarySI} pods:{40.000 DecimalSI}] [{Ready True 2015-10-23 23:07:27 +0000 UTC 2015-10-23 22:49:44 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 10.240.0.6} {InternalIP 10.240.0.6}] {{10250}} {b1aa0c82a3d85173231543dc6bda5cc2 B1AA0C82-A3D8-5173-2315-43DC6BDA5CC2 3aa605f8-62d1-433f-a90d-ef2ac6c1745e 4.1.7-coreos CoreOS 766.4.0 docker://1.7.1 v1.2.0-alpha.2.246+f93f77766dd24b-dirty v1.2.0-alpha.2.246+f93f77766dd24b-dirty}}} {{ } {pull-e2e-0-minion-dp0i /api/v1/nodes/pull-e2e-0-minion-dp0i 4a304db6-79d8-11e5-b1b8-42010af00002 2941 0 2015-10-23 22:49:16 +0000 UTC <nil> <nil> map[kubernetes.io/hostname:pull-e2e-0-minion-dp0i] map[]} {10.245.4.0/24 pull-e2e-0-minion-dp0i false} {map[cpu:{2.000 DecimalSI} memory:{7848468480.000 BinarySI} pods:{40.000 DecimalSI}] [{Ready True 2015-10-23 23:07:31 +0000 UTC 
2015-10-23 22:50:07 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 10.240.0.8} {InternalIP 10.240.0.8}] {{10250}} {804cf75d9249d4fc0a14982a39e001fe 804CF75D-9249-D4FC-0A14-982A39E001FE a506e0e8-5682-4acb-b54c-91020168ed4f 4.1.7-coreos CoreOS 766.4.0 docker://1.7.1 v1.2.0-alpha.2.246+f93f77766dd24b-dirty v1.2.0-alpha.2.246+f93f77766dd24b-dirty}}} {{ } {pull-e2e-0-minion-l2bc /api/v1/nodes/pull-e2e-0-minion-l2bc 48f64a03-79d8-11e5-b1b8-42010af00002 2932 0 2015-10-23 22:49:14 +0000 UTC <nil> <nil> map[kubernetes.io/hostname:pull-e2e-0-minion-l2bc] map[]} {10.245.2.0/24 pull-e2e-0-minion-l2bc false} {map[cpu:{2.000 DecimalSI} memory:{7848468480.000 BinarySI} pods:{40.000 DecimalSI}] [{Ready True 2015-10-23 23:07:27 +0000 UTC 2015-10-23 22:49:55 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 10.240.0.4} {InternalIP 10.240.0.4}] {{10250}} {dfc1c1a2abfb382eacf83a4da9490f83 DFC1C1A2-ABFB-382E-ACF8-3A4DA9490F83 258d6dfd-ef7f-41e7-b15f-60faf4203bd1 4.1.7-coreos CoreOS 766.4.0 docker://1.7.1 v1.2.0-alpha.2.246+f93f77766dd24b-dirty v1.2.0-alpha.2.246+f93f77766dd24b-dirty}}} {{ } {pull-e2e-0-minion-n5ko /api/v1/nodes/pull-e2e-0-minion-n5ko 4725d39b-79d8-11e5-b1b8-42010af00002 2929 0 2015-10-23 22:49:11 +0000 UTC <nil> <nil> map[kubernetes.io/hostname:pull-e2e-0-minion-n5ko] map[]} {10.245.3.0/24 pull-e2e-0-minion-n5ko false} {map[cpu:{2.000 DecimalSI} memory:{7848468480.000 BinarySI} pods:{40.000 DecimalSI}] [{Ready True 2015-10-23 23:07:25 +0000 UTC 2015-10-23 22:49:52 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 10.240.0.3} {InternalIP 10.240.0.3}] {{10250}} {4445e34b51378526c1736108982220c7 4445E34B-5137-8526-C173-6108982220C7 4bfd0525-d31e-425c-8eab-0d8868fea7a7 4.1.7-coreos CoreOS 766.4.0 docker://1.7.1 v1.2.0-alpha.2.246+f93f77766dd24b-dirty v1.2.0-alpha.2.246+f93f77766dd24b-dirty}}} {{ } {pull-e2e-0-minion-zr43 /api/v1/nodes/pull-e2e-0-minion-zr43 46d4a2a4-79d8-11e5-b1b8-42010af00002 2942 0 
2015-10-23 22:49:11 +0000 UTC <nil> <nil> map[kubernetes.io/hostname:pull-e2e-0-minion-zr43] map[]} {10.245.5.0/24 pull-e2e-0-minion-zr43 false} {map[memory:{7848468480.000 BinarySI} pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI}] [{Ready True 2015-10-23 23:07:34 +0000 UTC 2015-10-23 22:49:41 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 10.240.0.7} {InternalIP 10.240.0.7}] {{10250}} {6d724240d0b36bd02e7ac95bb50c19a5 6D724240-D0B3-6BD0-2E7A-C95BB50C19A5 baa0ce0e-dcdc-428c-b65d-89e0a0e33cb3 4.1.7-coreos CoreOS 766.4.0 docker://1.7.1 v1.2.0-alpha.2.246+f93f77766dd24b-dirty v1.2.0-alpha.2.246+f93f77766dd24b-dirty}}}]}",
}
only found 0 external IPs on nodes, but found 6 nodes. Nodelist: &{{ } {/api/v1/nodes 2946} [{{ } {pull-e2e-0-minion-1dli /api/v1/nodes/pull-e2e-0-minion-1dli 473e964e-79d8-11e5-b1b8-42010af00002 2930 0 2015-10-23 22:49:11 +0000 UTC <nil> <nil> map[kubernetes.io/hostname:pull-e2e-0-minion-1dli] map[]} {10.245.0.0/24 pull-e2e-0-minion-1dli false} {map[cpu:{2.000 DecimalSI} memory:{7848468480.000 BinarySI} pods:{40.000 DecimalSI}] [{Ready True 2015-10-23 23:07:25 +0000 UTC 2015-10-23 22:49:42 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 10.240.0.5} {InternalIP 10.240.0.5}] {{10250}} {06067045f01b156572deffb722c59e21 06067045-F01B-1565-72DE-FFB722C59E21 c1b93dc7-9f9c-4317-81d7-1d063eba1c00 4.1.7-coreos CoreOS 766.4.0 docker://1.7.1 v1.2.0-alpha.2.246+f93f77766dd24b-dirty v1.2.0-alpha.2.246+f93f77766dd24b-dirty}}} {{ } {pull-e2e-0-minion-djcb /api/v1/nodes/pull-e2e-0-minion-djcb 48872259-79d8-11e5-b1b8-42010af00002 2931 0 2015-10-23 22:49:14 +0000 UTC <nil> <nil> map[kubernetes.io/hostname:pull-e2e-0-minion-djcb] map[]} {10.245.1.0/24 pull-e2e-0-minion-djcb false} {map[cpu:{2.000 DecimalSI} memory:{7848468480.000 BinarySI} pods:{40.000 DecimalSI}] [{Ready True 2015-10-23 23:07:27 +0000 UTC 2015-10-23 22:49:44 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 10.240.0.6} {InternalIP 10.240.0.6}] {{10250}} {b1aa0c82a3d85173231543dc6bda5cc2 B1AA0C82-A3D8-5173-2315-43DC6BDA5CC2 3aa605f8-62d1-433f-a90d-ef2ac6c1745e 4.1.7-coreos CoreOS 766.4.0 docker://1.7.1 v1.2.0-alpha.2.246+f93f77766dd24b-dirty v1.2.0-alpha.2.246+f93f77766dd24b-dirty}}} {{ } {pull-e2e-0-minion-dp0i /api/v1/nodes/pull-e2e-0-minion-dp0i 4a304db6-79d8-11e5-b1b8-42010af00002 2941 0 2015-10-23 22:49:16 +0000 UTC <nil> <nil> map[kubernetes.io/hostname:pull-e2e-0-minion-dp0i] map[]} {10.245.4.0/24 pull-e2e-0-minion-dp0i false} {map[cpu:{2.000 DecimalSI} memory:{7848468480.000 BinarySI} pods:{40.000 DecimalSI}] [{Ready True 2015-10-23 23:07:31 +0000 UTC 2015-10-23 
22:50:07 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 10.240.0.8} {InternalIP 10.240.0.8}] {{10250}} {804cf75d9249d4fc0a14982a39e001fe 804CF75D-9249-D4FC-0A14-982A39E001FE a506e0e8-5682-4acb-b54c-91020168ed4f 4.1.7-coreos CoreOS 766.4.0 docker://1.7.1 v1.2.0-alpha.2.246+f93f77766dd24b-dirty v1.2.0-alpha.2.246+f93f77766dd24b-dirty}}} {{ } {pull-e2e-0-minion-l2bc /api/v1/nodes/pull-e2e-0-minion-l2bc 48f64a03-79d8-11e5-b1b8-42010af00002 2932 0 2015-10-23 22:49:14 +0000 UTC <nil> <nil> map[kubernetes.io/hostname:pull-e2e-0-minion-l2bc] map[]} {10.245.2.0/24 pull-e2e-0-minion-l2bc false} {map[cpu:{2.000 DecimalSI} memory:{7848468480.000 BinarySI} pods:{40.000 DecimalSI}] [{Ready True 2015-10-23 23:07:27 +0000 UTC 2015-10-23 22:49:55 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 10.240.0.4} {InternalIP 10.240.0.4}] {{10250}} {dfc1c1a2abfb382eacf83a4da9490f83 DFC1C1A2-ABFB-382E-ACF8-3A4DA9490F83 258d6dfd-ef7f-41e7-b15f-60faf4203bd1 4.1.7-coreos CoreOS 766.4.0 docker://1.7.1 v1.2.0-alpha.2.246+f93f77766dd24b-dirty v1.2.0-alpha.2.246+f93f77766dd24b-dirty}}} {{ } {pull-e2e-0-minion-n5ko /api/v1/nodes/pull-e2e-0-minion-n5ko 4725d39b-79d8-11e5-b1b8-42010af00002 2929 0 2015-10-23 22:49:11 +0000 UTC <nil> <nil> map[kubernetes.io/hostname:pull-e2e-0-minion-n5ko] map[]} {10.245.3.0/24 pull-e2e-0-minion-n5ko false} {map[cpu:{2.000 DecimalSI} memory:{7848468480.000 BinarySI} pods:{40.000 DecimalSI}] [{Ready True 2015-10-23 23:07:25 +0000 UTC 2015-10-23 22:49:52 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 10.240.0.3} {InternalIP 10.240.0.3}] {{10250}} {4445e34b51378526c1736108982220c7 4445E34B-5137-8526-C173-6108982220C7 4bfd0525-d31e-425c-8eab-0d8868fea7a7 4.1.7-coreos CoreOS 766.4.0 docker://1.7.1 v1.2.0-alpha.2.246+f93f77766dd24b-dirty v1.2.0-alpha.2.246+f93f77766dd24b-dirty}}} {{ } {pull-e2e-0-minion-zr43 /api/v1/nodes/pull-e2e-0-minion-zr43 46d4a2a4-79d8-11e5-b1b8-42010af00002 2942 0 2015-10-23 
22:49:11 +0000 UTC <nil> <nil> map[kubernetes.io/hostname:pull-e2e-0-minion-zr43] map[]} {10.245.5.0/24 pull-e2e-0-minion-zr43 false} {map[memory:{7848468480.000 BinarySI} pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI}] [{Ready True 2015-10-23 23:07:34 +0000 UTC 2015-10-23 22:49:41 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 10.240.0.7} {InternalIP 10.240.0.7}] {{10250}} {6d724240d0b36bd02e7ac95bb50c19a5 6D724240-D0B3-6BD0-2E7A-C95BB50C19A5 baa0ce0e-dcdc-428c-b65d-89e0a0e33cb3 4.1.7-coreos CoreOS 766.4.0 docker://1.7.1 v1.2.0-alpha.2.246+f93f77766dd24b-dirty v1.2.0-alpha.2.246+f93f77766dd24b-dirty}}}]}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/privileged.go:59
------------------------------
[BeforeEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:89
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
Oct 23 23:07:39.927: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-fnsd9
Oct 23 23:07:39.970: INFO: Service account default in ns e2e-tests-kubectl-fnsd9 had 0 secrets, ignoring for 2s: <nil>
Oct 23 23:07:41.972: INFO: Service account default in ns e2e-tests-kubectl-fnsd9 with secrets found. (2.045405076s)
[BeforeEach] Simple pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:164
STEP: creating the pod
Oct 23 23:07:41.973: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config create -f /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/docs/user-guide/pod.yaml --namespace=e2e-tests-kubectl-fnsd9'
Oct 23 23:07:42.182: INFO: pod "nginx" created
Oct 23 23:07:42.182: INFO: Waiting up to 5m0s for the following 1 pods to be running and ready: [nginx]
Oct 23 23:07:42.183: INFO: Waiting up to 5m0s for pod nginx status to be running and ready
Oct 23 23:07:42.186: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-fnsd9' status to be 'running and ready'(found phase: "Pending", readiness: false) (2.53267ms elapsed)
Oct 23 23:07:44.189: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [nginx]
[It] should support exec
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:199
STEP: executing a command in the container
Oct 23 23:07:44.189: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config exec --namespace=e2e-tests-kubectl-fnsd9 nginx echo running in container'
Oct 23 23:07:44.566: INFO: running in container
STEP: executing a command in the container with noninteractive stdin
Oct 23 23:07:44.566: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config exec --namespace=e2e-tests-kubectl-fnsd9 -i nginx cat'
Oct 23 23:07:44.941: INFO: abcd1234
STEP: executing a command in the container with pseudo-interactive stdin
Oct 23 23:07:44.941: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config exec --namespace=e2e-tests-kubectl-fnsd9 -i nginx bash'
Oct 23 23:07:45.325: INFO: hi
[AfterEach] Simple pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:167
STEP: using delete to clean up resources
Oct 23 23:07:45.325: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config stop --grace-period=0 -f /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/docs/user-guide/pod.yaml --namespace=e2e-tests-kubectl-fnsd9'
Oct 23 23:07:45.517: INFO: pod "nginx" deleted
Oct 23 23:07:45.517: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-fnsd9'
Oct 23 23:07:45.697: INFO:
Oct 23 23:07:45.697: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-fnsd9 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Oct 23 23:07:45.873: INFO:
[AfterEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:96
STEP: Destroying namespace for this suite e2e-tests-kubectl-fnsd9
• [SLOW TEST:10.964 seconds]
Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:925
Simple pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:435
should support exec
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:199
------------------------------
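The three exec runs above exercise stdin handling: a plain exec, a noninteractive `-i` (where `cat` echoes back "abcd1234"), and a pseudo-interactive `-i` into bash. The `-i` flag forwards the client's stdin to the container process; the same pattern in plain Go with os/exec, as a sketch of what the test verifies rather than how kubectl implements it:

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

// pipeThrough feeds input to a command's stdin and returns its stdout,
// mirroring what `kubectl exec -i` does with the container's stdin.
func pipeThrough(name, input string) (string, error) {
	cmd := exec.Command(name)
	cmd.Stdin = strings.NewReader(input)
	var out bytes.Buffer
	cmd.Stdout = &out
	err := cmd.Run()
	return out.String(), err
}

func main() {
	// In the log, `kubectl exec -i nginx cat` echoed back "abcd1234".
	got, err := pipeThrough("cat", "abcd1234")
	if err != nil {
		panic(err)
	}
	fmt.Println(got) // prints abcd1234
}
```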
SS
------------------------------
[BeforeEach] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:07:50.893: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-job-0x3t2
Oct 23 23:07:50.927: INFO: Service account default in ns e2e-tests-job-0x3t2 with secrets found. (33.470289ms)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:07:50.927: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-job-0x3t2
Oct 23 23:07:50.929: INFO: Service account default in ns e2e-tests-job-0x3t2 with secrets found. (1.84452ms)
[It] should keep restarting failed pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:106
STEP: Creating a job
STEP: Ensuring job shows many failures
[AfterEach] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:08:16.937: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:08:16.941: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:08:16.941: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:08:16.941: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:08:16.941: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:08:16.941: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:08:16.941: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:08:16.941: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:08:16.941: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:08:16.941: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:08:16.941: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:08:16.941: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:08:16.941: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-job-0x3t2" for this suite.
• [SLOW TEST:31.067 seconds]
Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:183
should keep restarting failed pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:106
------------------------------
SS
------------------------------
[BeforeEach] version v1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:08:21.963: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-lc077
Oct 23 23:08:21.993: INFO: Service account default in ns e2e-tests-proxy-lc077 with secrets found. (29.711225ms)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:08:21.993: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-lc077
Oct 23 23:08:21.995: INFO: Service account default in ns e2e-tests-proxy-lc077 with secrets found. (1.78807ms)
[It] should proxy logs on node [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:58
Oct 23 23:08:22.002: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli/logs/: <pre>
<a href="wtmp">wtmp</a>
<a href="google.log">google.log</a>
<a href="btmp">btmp</a>
<a href... (200; 3.113284ms)
Oct 23 23:08:22.004: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli/logs/: <pre>
<a href="wtmp">wtmp</a>
<a href="google.log">google.log</a>
<a href="btmp">btmp</a>
<a href... (200; 2.502783ms)
Oct 23 23:08:22.006: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli/logs/: <pre>
<a href="wtmp">wtmp</a>
<a href="google.log">google.log</a>
<a href="btmp">btmp</a>
<a href... (200; 2.175147ms)
Oct 23 23:08:22.009: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli/logs/: <pre>
<a href="wtmp">wtmp</a>
<a href="google.log">google.log</a>
<a href="btmp">btmp</a>
<a href... (200; 2.320464ms)
Oct 23 23:08:22.011: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli/logs/: <pre>
<a href="wtmp">wtmp</a>
<a href="google.log">google.log</a>
<a href="btmp">btmp</a>
<a href... (200; 2.530402ms)
Oct 23 23:08:22.013: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli/logs/: <pre>
<a href="wtmp">wtmp</a>
<a href="google.log">google.log</a>
<a href="btmp">btmp</a>
<a href... (200; 2.031401ms)
Oct 23 23:08:22.162: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli/logs/: <pre>
<a href="wtmp">wtmp</a>
<a href="google.log">google.log</a>
<a href="btmp">btmp</a>
<a href... (200; 148.812741ms)
Oct 23 23:08:22.363: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli/logs/: <pre>
<a href="wtmp">wtmp</a>
<a href="google.log">google.log</a>
<a href="btmp">btmp</a>
<a href... (200; 201.337378ms)
Oct 23 23:08:22.562: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli/logs/: <pre>
<a href="wtmp">wtmp</a>
<a href="google.log">google.log</a>
<a href="btmp">btmp</a>
<a href... (200; 198.307104ms)
Oct 23 23:08:22.761: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli/logs/: <pre>
<a href="wtmp">wtmp</a>
<a href="google.log">google.log</a>
<a href="btmp">btmp</a>
<a href... (200; 199.222126ms)
Oct 23 23:08:22.962: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli/logs/: <pre>
<a href="wtmp">wtmp</a>
<a href="google.log">google.log</a>
<a href="btmp">btmp</a>
<a href... (200; 200.824413ms)
Oct 23 23:08:23.162: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli/logs/: <pre>
<a href="wtmp">wtmp</a>
<a href="google.log">google.log</a>
<a href="btmp">btmp</a>
<a href... (200; 199.719337ms)
Oct 23 23:08:23.361: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli/logs/: <pre>
<a href="wtmp">wtmp</a>
<a href="google.log">google.log</a>
<a href="btmp">btmp</a>
<a href... (200; 199.446348ms)
Oct 23 23:08:23.561: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli/logs/: <pre>
<a href="wtmp">wtmp</a>
<a href="google.log">google.log</a>
<a href="btmp">btmp</a>
<a href... (200; 199.853771ms)
Oct 23 23:08:23.761: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli/logs/: <pre>
<a href="wtmp">wtmp</a>
<a href="google.log">google.log</a>
<a href="btmp">btmp</a>
<a href... (200; 199.994333ms)
Oct 23 23:08:23.961: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli/logs/: <pre>
<a href="wtmp">wtmp</a>
<a href="google.log">google.log</a>
<a href="btmp">btmp</a>
<a href... (200; 200.188158ms)
Oct 23 23:08:24.161: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli/logs/: <pre>
<a href="wtmp">wtmp</a>
<a href="google.log">google.log</a>
<a href="btmp">btmp</a>
<a href... (200; 199.962315ms)
Oct 23 23:08:24.364: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli/logs/: <pre>
<a href="wtmp">wtmp</a>
<a href="google.log">google.log</a>
<a href="btmp">btmp</a>
<a href... (200; 202.530677ms)
Oct 23 23:08:24.561: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli/logs/: <pre>
<a href="wtmp">wtmp</a>
<a href="google.log">google.log</a>
<a href="btmp">btmp</a>
<a href... (200; 197.342143ms)
Oct 23 23:08:24.763: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli/logs/: <pre>
<a href="wtmp">wtmp</a>
<a href="google.log">google.log</a>
<a href="btmp">btmp</a>
<a href... (200; 201.851166ms)
[AfterEach] version v1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:08:24.763: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:08:24.963: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:08:24.963: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:08:24.963: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:08:24.963: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:08:24.963: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:08:24.963: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:08:24.963: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:08:24.963: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:08:24.963: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:08:24.963: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:08:24.963: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:08:24.963: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-proxy-lc077" for this suite.
•
------------------------------
[BeforeEach] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:06:43.187: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-dns-wn7zo
Oct 23 23:06:43.214: INFO: Service account default in ns e2e-tests-dns-wn7zo with secrets found. (26.935018ms)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:06:43.214: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-dns-wn7zo
Oct 23 23:06:43.216: INFO: Service account default in ns e2e-tests-dns-wn7zo with secrets found. (1.573725ms)
[It] should provide DNS for the cluster
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:223
STEP: Waiting for DNS Service to be Running
Oct 23 23:06:43.220: INFO: Waiting up to 5m0s for pod kube-dns-v9-6u0vh status to be running
Oct 23 23:06:43.223: INFO: Found pod 'kube-dns-v9-6u0vh' on node 'pull-e2e-0-minion-djcb'
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
Oct 23 23:06:43.229: INFO: Waiting up to 5m0s for pod dns-test-b9d4de92-79da-11e5-9772-42010af00002 status to be running
Oct 23 23:06:43.257: INFO: Waiting for pod dns-test-b9d4de92-79da-11e5-9772-42010af00002 in namespace 'e2e-tests-dns-wn7zo' status to be 'running'(found phase: "Pending", readiness: false) (27.763412ms elapsed)
Oct 23 23:06:45.260: INFO: Waiting for pod dns-test-b9d4de92-79da-11e5-9772-42010af00002 in namespace 'e2e-tests-dns-wn7zo' status to be 'running'(found phase: "Pending", readiness: false) (2.030991136s elapsed)
Oct 23 23:06:47.264: INFO: Waiting for pod dns-test-b9d4de92-79da-11e5-9772-42010af00002 in namespace 'e2e-tests-dns-wn7zo' status to be 'running'(found phase: "Pending", readiness: false) (4.035283519s elapsed)
Oct 23 23:06:49.267: INFO: Waiting for pod dns-test-b9d4de92-79da-11e5-9772-42010af00002 in namespace 'e2e-tests-dns-wn7zo' status to be 'running'(found phase: "Pending", readiness: false) (6.038599921s elapsed)
Oct 23 23:06:51.271: INFO: Waiting for pod dns-test-b9d4de92-79da-11e5-9772-42010af00002 in namespace 'e2e-tests-dns-wn7zo' status to be 'running'(found phase: "Pending", readiness: false) (8.042178489s elapsed)
Oct 23 23:06:53.275: INFO: Waiting for pod dns-test-b9d4de92-79da-11e5-9772-42010af00002 in namespace 'e2e-tests-dns-wn7zo' status to be 'running'(found phase: "Pending", readiness: false) (10.045694105s elapsed)
Oct 23 23:06:55.278: INFO: Waiting for pod dns-test-b9d4de92-79da-11e5-9772-42010af00002 in namespace 'e2e-tests-dns-wn7zo' status to be 'running'(found phase: "Pending", readiness: false) (12.048898892s elapsed)
Oct 23 23:06:57.282: INFO: Waiting for pod dns-test-b9d4de92-79da-11e5-9772-42010af00002 in namespace 'e2e-tests-dns-wn7zo' status to be 'running'(found phase: "Pending", readiness: false) (14.052839598s elapsed)
Oct 23 23:06:59.285: INFO: Waiting for pod dns-test-b9d4de92-79da-11e5-9772-42010af00002 in namespace 'e2e-tests-dns-wn7zo' status to be 'running'(found phase: "Pending", readiness: false) (16.055973713s elapsed)
Oct 23 23:07:01.288: INFO: Waiting for pod dns-test-b9d4de92-79da-11e5-9772-42010af00002 in namespace 'e2e-tests-dns-wn7zo' status to be 'running'(found phase: "Pending", readiness: false) (18.059043873s elapsed)
Oct 23 23:07:03.292: INFO: Waiting for pod dns-test-b9d4de92-79da-11e5-9772-42010af00002 in namespace 'e2e-tests-dns-wn7zo' status to be 'running'(found phase: "Pending", readiness: false) (20.062837598s elapsed)
Oct 23 23:07:05.295: INFO: Waiting for pod dns-test-b9d4de92-79da-11e5-9772-42010af00002 in namespace 'e2e-tests-dns-wn7zo' status to be 'running'(found phase: "Pending", readiness: false) (22.065879641s elapsed)
Oct 23 23:07:07.299: INFO: Waiting for pod dns-test-b9d4de92-79da-11e5-9772-42010af00002 in namespace 'e2e-tests-dns-wn7zo' status to be 'running'(found phase: "Pending", readiness: false) (24.069772643s elapsed)
Oct 23 23:07:09.302: INFO: Waiting for pod dns-test-b9d4de92-79da-11e5-9772-42010af00002 in namespace 'e2e-tests-dns-wn7zo' status to be 'running'(found phase: "Pending", readiness: false) (26.073398062s elapsed)
Oct 23 23:07:11.306: INFO: Waiting for pod dns-test-b9d4de92-79da-11e5-9772-42010af00002 in namespace 'e2e-tests-dns-wn7zo' status to be 'running'(found phase: "Pending", readiness: false) (28.0769214s elapsed)
Oct 23 23:07:13.309: INFO: Waiting for pod dns-test-b9d4de92-79da-11e5-9772-42010af00002 in namespace 'e2e-tests-dns-wn7zo' status to be 'running'(found phase: "Pending", readiness: false) (30.080644546s elapsed)
Oct 23 23:07:15.313: INFO: Waiting for pod dns-test-b9d4de92-79da-11e5-9772-42010af00002 in namespace 'e2e-tests-dns-wn7zo' status to be 'running'(found phase: "Pending", readiness: false) (32.083716663s elapsed)
Oct 23 23:07:17.317: INFO: Waiting for pod dns-test-b9d4de92-79da-11e5-9772-42010af00002 in namespace 'e2e-tests-dns-wn7zo' status to be 'running'(found phase: "Pending", readiness: false) (34.087675144s elapsed)
Oct 23 23:07:19.320: INFO: Waiting for pod dns-test-b9d4de92-79da-11e5-9772-42010af00002 in namespace 'e2e-tests-dns-wn7zo' status to be 'running'(found phase: "Pending", readiness: false) (36.091036006s elapsed)
Oct 23 23:07:21.323: INFO: Found pod 'dns-test-b9d4de92-79da-11e5-9772-42010af00002' on node 'pull-e2e-0-minion-dp0i'
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Oct 23 23:07:23.362: INFO: Unable to read wheezy_udp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:07:23.364: INFO: Unable to read wheezy_tcp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:07:24.984: INFO: Unable to read jessie_udp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:07:25.184: INFO: Unable to read jessie_tcp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:07:25.184: INFO: Lookups using dns-test-b9d4de92-79da-11e5-9772-42010af00002 failed for: [wheezy_udp@metadata wheezy_tcp@metadata jessie_udp@metadata jessie_tcp@metadata]
Oct 23 23:07:26.984: INFO: Unable to read wheezy_udp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:07:27.184: INFO: Unable to read wheezy_tcp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:07:28.995: INFO: Unable to read jessie_udp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:07:29.184: INFO: Unable to read jessie_tcp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:07:29.184: INFO: Lookups using dns-test-b9d4de92-79da-11e5-9772-42010af00002 failed for: [wheezy_udp@metadata wheezy_tcp@metadata jessie_udp@metadata jessie_tcp@metadata]
Oct 23 23:07:30.984: INFO: Unable to read wheezy_udp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:07:31.184: INFO: Unable to read wheezy_tcp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:07:32.984: INFO: Unable to read jessie_udp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:07:33.183: INFO: Unable to read jessie_tcp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:07:33.183: INFO: Lookups using dns-test-b9d4de92-79da-11e5-9772-42010af00002 failed for: [wheezy_udp@metadata wheezy_tcp@metadata jessie_udp@metadata jessie_tcp@metadata]
Oct 23 23:07:34.984: INFO: Unable to read wheezy_udp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:07:35.184: INFO: Unable to read wheezy_tcp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:07:36.984: INFO: Unable to read jessie_udp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:07:37.184: INFO: Unable to read jessie_tcp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:07:37.184: INFO: Lookups using dns-test-b9d4de92-79da-11e5-9772-42010af00002 failed for: [wheezy_udp@metadata wheezy_tcp@metadata jessie_udp@metadata jessie_tcp@metadata]
Oct 23 23:07:38.984: INFO: Unable to read wheezy_udp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:07:39.184: INFO: Unable to read wheezy_tcp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:07:40.984: INFO: Unable to read jessie_udp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:07:41.193: INFO: Unable to read jessie_tcp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:07:41.193: INFO: Lookups using dns-test-b9d4de92-79da-11e5-9772-42010af00002 failed for: [wheezy_udp@metadata wheezy_tcp@metadata jessie_udp@metadata jessie_tcp@metadata]
Oct 23 23:07:42.984: INFO: Unable to read wheezy_udp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:07:43.184: INFO: Unable to read wheezy_tcp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:07:44.984: INFO: Unable to read jessie_udp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:07:45.184: INFO: Unable to read jessie_tcp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:07:45.184: INFO: Lookups using dns-test-b9d4de92-79da-11e5-9772-42010af00002 failed for: [wheezy_udp@metadata wheezy_tcp@metadata jessie_udp@metadata jessie_tcp@metadata]
Oct 23 23:07:46.984: INFO: Unable to read wheezy_udp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:07:47.184: INFO: Unable to read wheezy_tcp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:07:48.984: INFO: Unable to read jessie_udp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:07:49.184: INFO: Unable to read jessie_tcp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:07:49.184: INFO: Lookups using dns-test-b9d4de92-79da-11e5-9772-42010af00002 failed for: [wheezy_udp@metadata wheezy_tcp@metadata jessie_udp@metadata jessie_tcp@metadata]
Oct 23 23:07:50.988: INFO: Unable to read wheezy_udp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:07:51.184: INFO: Unable to read wheezy_tcp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:07:52.984: INFO: Unable to read jessie_udp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:07:53.184: INFO: Unable to read jessie_tcp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:07:53.184: INFO: Lookups using dns-test-b9d4de92-79da-11e5-9772-42010af00002 failed for: [wheezy_udp@metadata wheezy_tcp@metadata jessie_udp@metadata jessie_tcp@metadata]
Oct 23 23:07:54.984: INFO: Unable to read wheezy_udp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:07:55.184: INFO: Unable to read wheezy_tcp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:07:56.985: INFO: Unable to read jessie_udp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:07:57.184: INFO: Unable to read jessie_tcp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:07:57.184: INFO: Lookups using dns-test-b9d4de92-79da-11e5-9772-42010af00002 failed for: [wheezy_udp@metadata wheezy_tcp@metadata jessie_udp@metadata jessie_tcp@metadata]
Oct 23 23:07:58.984: INFO: Unable to read wheezy_udp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:07:59.184: INFO: Unable to read wheezy_tcp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:08:00.984: INFO: Unable to read jessie_udp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:08:01.185: INFO: Unable to read jessie_tcp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:08:01.185: INFO: Lookups using dns-test-b9d4de92-79da-11e5-9772-42010af00002 failed for: [wheezy_udp@metadata wheezy_tcp@metadata jessie_udp@metadata jessie_tcp@metadata]
Oct 23 23:08:02.984: INFO: Unable to read wheezy_udp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:08:03.184: INFO: Unable to read wheezy_tcp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:08:04.984: INFO: Unable to read jessie_udp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:08:05.184: INFO: Unable to read jessie_tcp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:08:05.184: INFO: Lookups using dns-test-b9d4de92-79da-11e5-9772-42010af00002 failed for: [wheezy_udp@metadata wheezy_tcp@metadata jessie_udp@metadata jessie_tcp@metadata]
Oct 23 23:08:06.984: INFO: Unable to read wheezy_udp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:08:07.184: INFO: Unable to read wheezy_tcp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:08:08.984: INFO: Unable to read jessie_udp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:08:09.184: INFO: Unable to read jessie_tcp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:08:09.184: INFO: Lookups using dns-test-b9d4de92-79da-11e5-9772-42010af00002 failed for: [wheezy_udp@metadata wheezy_tcp@metadata jessie_udp@metadata jessie_tcp@metadata]
Oct 23 23:08:10.985: INFO: Unable to read wheezy_udp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:08:11.184: INFO: Unable to read wheezy_tcp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:08:12.984: INFO: Unable to read jessie_udp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:08:13.184: INFO: Unable to read jessie_tcp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:08:13.184: INFO: Lookups using dns-test-b9d4de92-79da-11e5-9772-42010af00002 failed for: [wheezy_udp@metadata wheezy_tcp@metadata jessie_udp@metadata jessie_tcp@metadata]
Oct 23 23:08:14.984: INFO: Unable to read wheezy_udp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:08:15.186: INFO: Unable to read wheezy_tcp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:08:16.986: INFO: Unable to read jessie_udp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:08:17.184: INFO: Unable to read jessie_tcp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:08:17.184: INFO: Lookups using dns-test-b9d4de92-79da-11e5-9772-42010af00002 failed for: [wheezy_udp@metadata wheezy_tcp@metadata jessie_udp@metadata jessie_tcp@metadata]
Oct 23 23:08:18.984: INFO: Unable to read wheezy_udp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:08:19.184: INFO: Unable to read wheezy_tcp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:08:20.984: INFO: Unable to read jessie_udp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:08:21.196: INFO: Unable to read jessie_tcp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:08:21.196: INFO: Lookups using dns-test-b9d4de92-79da-11e5-9772-42010af00002 failed for: [wheezy_udp@metadata wheezy_tcp@metadata jessie_udp@metadata jessie_tcp@metadata]
Oct 23 23:08:22.984: INFO: Unable to read wheezy_udp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:08:23.198: INFO: Unable to read wheezy_tcp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:08:24.984: INFO: Unable to read jessie_udp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:08:25.184: INFO: Unable to read jessie_tcp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:08:25.184: INFO: Lookups using dns-test-b9d4de92-79da-11e5-9772-42010af00002 failed for: [wheezy_udp@metadata wheezy_tcp@metadata jessie_udp@metadata jessie_tcp@metadata]
Oct 23 23:08:26.984: INFO: Unable to read wheezy_udp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:08:27.184: INFO: Unable to read wheezy_tcp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:08:28.984: INFO: Unable to read jessie_udp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:08:29.184: INFO: Unable to read jessie_tcp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:08:29.184: INFO: Lookups using dns-test-b9d4de92-79da-11e5-9772-42010af00002 failed for: [wheezy_udp@metadata wheezy_tcp@metadata jessie_udp@metadata jessie_tcp@metadata]
Oct 23 23:08:30.984: INFO: Unable to read wheezy_udp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:08:31.184: INFO: Unable to read wheezy_tcp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:08:32.984: INFO: Unable to read jessie_udp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:08:33.184: INFO: Unable to read jessie_tcp@metadata from pod dns-test-b9d4de92-79da-11e5-9772-42010af00002: the server could not find the requested resource (get pods dns-test-b9d4de92-79da-11e5-9772-42010af00002)
Oct 23 23:08:33.184: INFO: Lookups using dns-test-b9d4de92-79da-11e5-9772-42010af00002 failed for: [wheezy_udp@metadata wheezy_tcp@metadata jessie_udp@metadata jessie_tcp@metadata]
STEP: deleting the pod
[AfterEach] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
STEP: Collecting events from namespace "e2e-tests-dns-wn7zo".
Oct 23 23:08:33.613: INFO: event for dns-test-b9d4de92-79da-11e5-9772-42010af00002: {scheduler } Scheduled: Successfully assigned dns-test-b9d4de92-79da-11e5-9772-42010af00002 to pull-e2e-0-minion-dp0i
Oct 23 23:08:33.613: INFO: event for dns-test-b9d4de92-79da-11e5-9772-42010af00002: {kubelet pull-e2e-0-minion-dp0i} Pulled: Container image "beta.gcr.io/google_containers/pause:2.0" already present on machine
Oct 23 23:08:33.613: INFO: event for dns-test-b9d4de92-79da-11e5-9772-42010af00002: {kubelet pull-e2e-0-minion-dp0i} Created: Created with docker id a2ec6941d877
Oct 23 23:08:33.613: INFO: event for dns-test-b9d4de92-79da-11e5-9772-42010af00002: {kubelet pull-e2e-0-minion-dp0i} Started: Started with docker id a2ec6941d877
Oct 23 23:08:33.613: INFO: event for dns-test-b9d4de92-79da-11e5-9772-42010af00002: {kubelet pull-e2e-0-minion-dp0i} Pulled: Container image "gcr.io/google_containers/test-webserver" already present on machine
Oct 23 23:08:33.613: INFO: event for dns-test-b9d4de92-79da-11e5-9772-42010af00002: {kubelet pull-e2e-0-minion-dp0i} Created: Created with docker id 0dddc6ef63ca
Oct 23 23:08:33.613: INFO: event for dns-test-b9d4de92-79da-11e5-9772-42010af00002: {kubelet pull-e2e-0-minion-dp0i} Started: Started with docker id 0dddc6ef63ca
Oct 23 23:08:33.613: INFO: event for dns-test-b9d4de92-79da-11e5-9772-42010af00002: {kubelet pull-e2e-0-minion-dp0i} Pulling: pulling image "gcr.io/google_containers/dnsutils"
Oct 23 23:08:33.613: INFO: event for dns-test-b9d4de92-79da-11e5-9772-42010af00002: {kubelet pull-e2e-0-minion-dp0i} Pulled: Successfully pulled image "gcr.io/google_containers/dnsutils"
Oct 23 23:08:33.613: INFO: event for dns-test-b9d4de92-79da-11e5-9772-42010af00002: {kubelet pull-e2e-0-minion-dp0i} Created: Created with docker id b1dfde7e6ba1
Oct 23 23:08:33.613: INFO: event for dns-test-b9d4de92-79da-11e5-9772-42010af00002: {kubelet pull-e2e-0-minion-dp0i} Started: Started with docker id b1dfde7e6ba1
Oct 23 23:08:33.613: INFO: event for dns-test-b9d4de92-79da-11e5-9772-42010af00002: {kubelet pull-e2e-0-minion-dp0i} Pulling: pulling image "gcr.io/google_containers/jessie-dnsutils"
Oct 23 23:08:33.613: INFO: event for dns-test-b9d4de92-79da-11e5-9772-42010af00002: {kubelet pull-e2e-0-minion-dp0i} Pulled: Successfully pulled image "gcr.io/google_containers/jessie-dnsutils"
Oct 23 23:08:33.613: INFO: event for dns-test-b9d4de92-79da-11e5-9772-42010af00002: {kubelet pull-e2e-0-minion-dp0i} Created: Created with docker id 32b90feb17ff
Oct 23 23:08:33.613: INFO: event for dns-test-b9d4de92-79da-11e5-9772-42010af00002: {kubelet pull-e2e-0-minion-dp0i} Started: Started with docker id 32b90feb17ff
Oct 23 23:08:33.613: INFO: event for dns-test-b9d4de92-79da-11e5-9772-42010af00002: {kubelet pull-e2e-0-minion-dp0i} Killing: Killing with docker id 0dddc6ef63ca
Oct 23 23:08:33.789: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 23 23:08:33.789: INFO: netexec pull-e2e-0-minion-l2bc Pending []
Oct 23 23:08:33.789: INFO: nginx pull-e2e-0-minion-n5ko Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 23:08:28 +0000 UTC }]
Oct 23 23:08:33.789: INFO: elasticsearch-logging-v1-3gvtt pull-e2e-0-minion-djcb Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:50:28 +0000 UTC }]
Oct 23 23:08:33.789: INFO: elasticsearch-logging-v1-t59p5 pull-e2e-0-minion-1dli Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:50:10 +0000 UTC }]
Oct 23 23:08:33.789: INFO: heapster-v10-u3l0u pull-e2e-0-minion-zr43 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:49:57 +0000 UTC }]
Oct 23 23:08:33.789: INFO: kibana-logging-v1-skk6z pull-e2e-0-minion-djcb Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:51:00 +0000 UTC }]
Oct 23 23:08:33.789: INFO: kube-dns-v9-6u0vh pull-e2e-0-minion-djcb Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:51:04 +0000 UTC }]
Oct 23 23:08:33.789: INFO: kube-ui-v3-laljg pull-e2e-0-minion-1dli Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:49:49 +0000 UTC }]
Oct 23 23:08:33.789: INFO: monitoring-influxdb-grafana-v2-bpbto pull-e2e-0-minion-zr43 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:50:57 +0000 UTC }]
Oct 23 23:08:33.789: INFO:
Oct 23 23:08:33.789: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:08:33.985: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:08:33.985: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:08:33.985: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:08:33.985: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:08:33.985: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:08:33.985: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:08:33.985: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:08:33.985: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:08:33.985: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:08:33.985: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:08:33.985: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:08:33.985: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-dns-wn7zo" for this suite.
• Failure [111.404 seconds]
DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:309
should provide DNS for the cluster [It]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:223
Expected error:
<*errors.errorString | 0xc20813f380>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:151
------------------------------
S
------------------------------
[BeforeEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:89
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
Oct 23 23:08:25.568: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-7wsyt
Oct 23 23:08:25.596: INFO: Service account default in ns e2e-tests-kubectl-7wsyt had 0 secrets, ignoring for 2s: <nil>
Oct 23 23:08:27.598: INFO: Service account default in ns e2e-tests-kubectl-7wsyt with secrets found. (2.029800633s)
[BeforeEach] Simple pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:164
STEP: creating the pod
Oct 23 23:08:27.598: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config create -f /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/docs/user-guide/pod.yaml --namespace=e2e-tests-kubectl-7wsyt'
Oct 23 23:08:27.817: INFO: pod "nginx" created
Oct 23 23:08:27.817: INFO: Waiting up to 5m0s for the following 1 pods to be running and ready: [nginx]
Oct 23 23:08:27.817: INFO: Waiting up to 5m0s for pod nginx status to be running and ready
Oct 23 23:08:27.819: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-7wsyt' status to be 'running and ready'(found phase: "Pending", readiness: false) (2.642596ms elapsed)
Oct 23 23:08:29.822: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [nginx]
[It] should support exec through an HTTP proxy
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:371
STEP: Finding a static kubectl for upload
STEP: Using the kubectl in /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/386/kubectl
Oct 23 23:08:29.822: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config create -f /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/test/images/netexec/pod.yaml --namespace=e2e-tests-kubectl-7wsyt'
Oct 23 23:08:30.047: INFO: pod "netexec" created
Oct 23 23:08:30.047: INFO: Waiting up to 5m0s for the following 1 pods to be running and ready: [netexec]
Oct 23 23:08:30.047: INFO: Waiting up to 5m0s for pod netexec status to be running and ready
Oct 23 23:08:30.049: INFO: Waiting for pod netexec in namespace 'e2e-tests-kubectl-7wsyt' status to be 'running and ready'(found phase: "Pending", readiness: false) (2.376514ms elapsed)
Oct 23 23:08:32.053: INFO: Waiting for pod netexec in namespace 'e2e-tests-kubectl-7wsyt' status to be 'running and ready'(found phase: "Pending", readiness: false) (2.005602319s elapsed)
Oct 23 23:08:34.055: INFO: Waiting for pod netexec in namespace 'e2e-tests-kubectl-7wsyt' status to be 'running and ready'(found phase: "Pending", readiness: false) (4.008493651s elapsed)
Oct 23 23:08:36.059: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [netexec]
STEP: uploading kubeconfig to netexec
STEP: uploading kubectl to netexec
STEP: Running kubectl in netexec via an HTTP proxy using https_proxy
Oct 23 23:08:36.497: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config create -f /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/test/images/goproxy/pod.yaml --namespace=e2e-tests-kubectl-7wsyt'
Oct 23 23:08:36.709: INFO: pod "goproxy" created
Oct 23 23:08:36.709: INFO: Waiting up to 5m0s for the following 1 pods to be running and ready: [goproxy]
Oct 23 23:08:36.709: INFO: Waiting up to 5m0s for pod goproxy status to be running and ready
Oct 23 23:08:36.745: INFO: Waiting for pod goproxy in namespace 'e2e-tests-kubectl-7wsyt' status to be 'running and ready'(found phase: "Pending", readiness: false) (36.206516ms elapsed)
Oct 23 23:08:38.748: INFO: Waiting for pod goproxy in namespace 'e2e-tests-kubectl-7wsyt' status to be 'running and ready'(found phase: "Pending", readiness: false) (2.039021267s elapsed)
Oct 23 23:08:40.752: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [goproxy]
Oct 23 23:08:41.552: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config log goproxy --namespace=e2e-tests-kubectl-7wsyt'
Oct 23 23:08:41.940: INFO: 2015/10/23 23:08:41 [001] INFO: Running 0 CONNECT handlers
2015/10/23 23:08:41 [001] INFO: Accepting CONNECT to 104.196.0.155:443
2015/10/23 23:08:41 [002] INFO: Running 0 CONNECT handlers
2015/10/23 23:08:41 [002] INFO: Accepting CONNECT to 104.196.0.155:443
2015/10/23 23:08:41 [002] WARN: Error copying to client: read tcp 10.245.4.16:36243->104.196.0.155:443: read tcp 10.245.4.16:8080->10.245.2.30:58658: read: connection reset by peer
STEP: using delete to clean up resources
Oct 23 23:08:41.940: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config stop --grace-period=0 -f /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/test/images/goproxy/pod.yaml --namespace=e2e-tests-kubectl-7wsyt'
Oct 23 23:08:42.161: INFO: pod "goproxy" deleted
Oct 23 23:08:42.161: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get rc,svc -l name=goproxy --no-headers --namespace=e2e-tests-kubectl-7wsyt'
Oct 23 23:08:42.350: INFO:
Oct 23 23:08:42.350: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods -l name=goproxy --namespace=e2e-tests-kubectl-7wsyt -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Oct 23 23:08:42.531: INFO:
STEP: Running kubectl in netexec via an HTTP proxy using HTTPS_PROXY
Oct 23 23:08:42.531: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config create -f /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/test/images/goproxy/pod.yaml --namespace=e2e-tests-kubectl-7wsyt'
Oct 23 23:08:42.748: INFO: pod "goproxy" created
Oct 23 23:08:42.748: INFO: Waiting up to 5m0s for the following 1 pods to be running and ready: [goproxy]
Oct 23 23:08:42.748: INFO: Waiting up to 5m0s for pod goproxy status to be running and ready
Oct 23 23:08:42.751: INFO: Waiting for pod goproxy in namespace 'e2e-tests-kubectl-7wsyt' status to be 'running and ready'(found phase: "Pending", readiness: false) (3.466459ms elapsed)
Oct 23 23:08:44.755: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [goproxy]
Oct 23 23:08:45.561: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config log goproxy --namespace=e2e-tests-kubectl-7wsyt'
Oct 23 23:08:45.743: INFO: 2015/10/23 23:08:45 [001] INFO: Running 0 CONNECT handlers
2015/10/23 23:08:45 [001] INFO: Accepting CONNECT to 104.196.0.155:443
2015/10/23 23:08:45 [002] INFO: Running 0 CONNECT handlers
2015/10/23 23:08:45 [002] INFO: Accepting CONNECT to 104.196.0.155:443
STEP: using delete to clean up resources
Oct 23 23:08:45.743: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config stop --grace-period=0 -f /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/test/images/goproxy/pod.yaml --namespace=e2e-tests-kubectl-7wsyt'
Oct 23 23:08:45.940: INFO: pod "goproxy" deleted
Oct 23 23:08:45.940: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get rc,svc -l name=goproxy --no-headers --namespace=e2e-tests-kubectl-7wsyt'
Oct 23 23:08:46.121: INFO:
Oct 23 23:08:46.121: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods -l name=goproxy --namespace=e2e-tests-kubectl-7wsyt -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Oct 23 23:08:46.306: INFO:
STEP: using delete to clean up resources
Oct 23 23:08:46.306: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config stop --grace-period=0 -f /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/test/images/netexec/pod.yaml --namespace=e2e-tests-kubectl-7wsyt'
Oct 23 23:08:46.502: INFO: pod "netexec" deleted
Oct 23 23:08:46.502: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get rc,svc -l name=netexec --no-headers --namespace=e2e-tests-kubectl-7wsyt'
Oct 23 23:08:46.690: INFO:
Oct 23 23:08:46.690: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods -l name=netexec --namespace=e2e-tests-kubectl-7wsyt -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Oct 23 23:08:46.874: INFO:
[AfterEach] Simple pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:167
STEP: using delete to clean up resources
Oct 23 23:08:46.875: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config stop --grace-period=0 -f /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/docs/user-guide/pod.yaml --namespace=e2e-tests-kubectl-7wsyt'
Oct 23 23:08:47.097: INFO: pod "nginx" deleted
Oct 23 23:08:47.097: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-7wsyt'
Oct 23 23:08:47.280: INFO:
Oct 23 23:08:47.281: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-7wsyt -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Oct 23 23:08:47.466: INFO:
[AfterEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:96
STEP: Destroying namespace for this suite e2e-tests-kubectl-7wsyt
• [SLOW TEST:26.921 seconds]
Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:925
Simple pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:435
should support exec through an HTTP proxy
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:371
------------------------------
S
------------------------------
[BeforeEach] KubeletManagedEtcHosts
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:08:52.495: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-e2e-kubelet-etc-hosts-amg2e
Oct 23 23:08:52.527: INFO: Service account default in ns e2e-tests-e2e-kubelet-etc-hosts-amg2e with secrets found. (31.626032ms)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:08:52.527: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-e2e-kubelet-etc-hosts-amg2e
Oct 23 23:08:52.529: INFO: Service account default in ns e2e-tests-e2e-kubelet-etc-hosts-amg2e with secrets found. (1.896011ms)
[It] should test kubelet managed /etc/hosts file
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_etc_hosts.go:54
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
Oct 23 23:08:52.535: INFO: Waiting up to 5m0s for pod test-pod status to be running
Oct 23 23:08:52.564: INFO: Waiting for pod test-pod in namespace 'e2e-tests-e2e-kubelet-etc-hosts-amg2e' status to be 'running'(found phase: "Pending", readiness: false) (28.495518ms elapsed)
Oct 23 23:08:54.568: INFO: Waiting for pod test-pod in namespace 'e2e-tests-e2e-kubelet-etc-hosts-amg2e' status to be 'running'(found phase: "Pending", readiness: false) (2.032201719s elapsed)
Oct 23 23:08:56.571: INFO: Waiting for pod test-pod in namespace 'e2e-tests-e2e-kubelet-etc-hosts-amg2e' status to be 'running'(found phase: "Pending", readiness: false) (4.035325558s elapsed)
Oct 23 23:08:58.574: INFO: Found pod 'test-pod' on node 'pull-e2e-0-minion-n5ko'
STEP: Creating hostNetwork=true pod
Oct 23 23:08:58.584: INFO: Waiting up to 5m0s for pod test-host-network-pod status to be running
Oct 23 23:08:58.614: INFO: Waiting for pod test-host-network-pod in namespace 'e2e-tests-e2e-kubelet-etc-hosts-amg2e' status to be 'running'(found phase: "Pending", readiness: false) (30.083358ms elapsed)
Oct 23 23:09:00.618: INFO: Waiting for pod test-host-network-pod in namespace 'e2e-tests-e2e-kubelet-etc-hosts-amg2e' status to be 'running'(found phase: "Pending", readiness: false) (2.03355224s elapsed)
Oct 23 23:09:02.621: INFO: Found pod 'test-host-network-pod' on node 'pull-e2e-0-minion-dp0i'
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Oct 23 23:09:02.624: INFO: Asynchronously running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config exec --namespace=e2e-tests-e2e-kubelet-etc-hosts-amg2e test-pod -c busybox-1 cat /etc/hosts'
Oct 23 23:09:02.625: INFO: reading from `kubectl exec` command's stdout
[AfterEach] KubeletManagedEtcHosts
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
STEP: Collecting events from namespace "e2e-tests-e2e-kubelet-etc-hosts-amg2e".
Oct 23 23:09:03.017: INFO: event for test-host-network-pod: {scheduler } Scheduled: Successfully assigned test-host-network-pod to pull-e2e-0-minion-dp0i
Oct 23 23:09:03.017: INFO: event for test-host-network-pod: {kubelet pull-e2e-0-minion-dp0i} Pulled: Container image "beta.gcr.io/google_containers/pause:2.0" already present on machine
Oct 23 23:09:03.017: INFO: event for test-host-network-pod: {kubelet pull-e2e-0-minion-dp0i} Created: Created with docker id c39063c3d63a
Oct 23 23:09:03.017: INFO: event for test-host-network-pod: {kubelet pull-e2e-0-minion-dp0i} Started: Started with docker id c39063c3d63a
Oct 23 23:09:03.017: INFO: event for test-host-network-pod: {kubelet pull-e2e-0-minion-dp0i} Pulling: pulling image "gcr.io/google_containers/netexec:1.0"
Oct 23 23:09:03.017: INFO: event for test-host-network-pod: {kubelet pull-e2e-0-minion-dp0i} Pulled: Successfully pulled image "gcr.io/google_containers/netexec:1.0"
Oct 23 23:09:03.017: INFO: event for test-host-network-pod: {kubelet pull-e2e-0-minion-dp0i} Created: Created with docker id b11bb0ebb06e
Oct 23 23:09:03.017: INFO: event for test-host-network-pod: {kubelet pull-e2e-0-minion-dp0i} Started: Started with docker id b11bb0ebb06e
Oct 23 23:09:03.017: INFO: event for test-host-network-pod: {kubelet pull-e2e-0-minion-dp0i} Pulled: Container image "gcr.io/google_containers/netexec:1.0" already present on machine
Oct 23 23:09:03.017: INFO: event for test-host-network-pod: {kubelet pull-e2e-0-minion-dp0i} Created: Created with docker id 8262395d9be9
Oct 23 23:09:03.017: INFO: event for test-host-network-pod: {kubelet pull-e2e-0-minion-dp0i} Started: Started with docker id 8262395d9be9
Oct 23 23:09:03.017: INFO: event for test-pod: {scheduler } Scheduled: Successfully assigned test-pod to pull-e2e-0-minion-n5ko
Oct 23 23:09:03.017: INFO: event for test-pod: {kubelet pull-e2e-0-minion-n5ko} Pulled: Container image "beta.gcr.io/google_containers/pause:2.0" already present on machine
Oct 23 23:09:03.017: INFO: event for test-pod: {kubelet pull-e2e-0-minion-n5ko} Created: Created with docker id e0b0dc87f5e6
Oct 23 23:09:03.017: INFO: event for test-pod: {kubelet pull-e2e-0-minion-n5ko} Started: Started with docker id e0b0dc87f5e6
Oct 23 23:09:03.017: INFO: event for test-pod: {kubelet pull-e2e-0-minion-n5ko} Pulling: pulling image "gcr.io/google_containers/netexec:1.0"
Oct 23 23:09:03.017: INFO: event for test-pod: {kubelet pull-e2e-0-minion-n5ko} Pulled: Successfully pulled image "gcr.io/google_containers/netexec:1.0"
Oct 23 23:09:03.017: INFO: event for test-pod: {kubelet pull-e2e-0-minion-n5ko} Created: Created with docker id bff04193116c
Oct 23 23:09:03.017: INFO: event for test-pod: {kubelet pull-e2e-0-minion-n5ko} Started: Started with docker id bff04193116c
Oct 23 23:09:03.017: INFO: event for test-pod: {kubelet pull-e2e-0-minion-n5ko} Pulled: Container image "gcr.io/google_containers/netexec:1.0" already present on machine
Oct 23 23:09:03.017: INFO: event for test-pod: {kubelet pull-e2e-0-minion-n5ko} Created: Created with docker id 1c61146455aa
Oct 23 23:09:03.017: INFO: event for test-pod: {kubelet pull-e2e-0-minion-n5ko} Started: Started with docker id 1c61146455aa
Oct 23 23:09:03.017: INFO: event for test-pod: {kubelet pull-e2e-0-minion-n5ko} Created: Created with docker id 6576debe06fc
Oct 23 23:09:03.017: INFO: event for test-pod: {kubelet pull-e2e-0-minion-n5ko} Started: Started with docker id 6576debe06fc
Oct 23 23:09:03.025: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 23 23:09:03.025: INFO: test-host-network-pod pull-e2e-0-minion-dp0i Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 23:09:02 +0000 UTC }]
Oct 23 23:09:03.025: INFO: test-pod pull-e2e-0-minion-n5ko Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 23:08:56 +0000 UTC }]
Oct 23 23:09:03.025: INFO: pod-back-off-exponentially pull-e2e-0-minion-l2bc Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 23:08:36 +0000 UTC }]
Oct 23 23:09:03.025: INFO: elasticsearch-logging-v1-3gvtt pull-e2e-0-minion-djcb Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:50:28 +0000 UTC }]
Oct 23 23:09:03.025: INFO: elasticsearch-logging-v1-t59p5 pull-e2e-0-minion-1dli Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:50:10 +0000 UTC }]
Oct 23 23:09:03.025: INFO: heapster-v10-u3l0u pull-e2e-0-minion-zr43 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:49:57 +0000 UTC }]
Oct 23 23:09:03.025: INFO: kibana-logging-v1-skk6z pull-e2e-0-minion-djcb Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:51:00 +0000 UTC }]
Oct 23 23:09:03.025: INFO: kube-dns-v9-6u0vh pull-e2e-0-minion-djcb Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:51:04 +0000 UTC }]
Oct 23 23:09:03.025: INFO: kube-ui-v3-laljg pull-e2e-0-minion-1dli Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:49:49 +0000 UTC }]
Oct 23 23:09:03.025: INFO: monitoring-influxdb-grafana-v2-bpbto pull-e2e-0-minion-zr43 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:50:57 +0000 UTC }]
Oct 23 23:09:03.025: INFO:
Oct 23 23:09:03.025: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:09:03.029: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:09:03.029: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:09:03.029: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:09:03.029: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:09:03.029: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:09:03.029: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:09:03.029: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:09:03.029: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:09:03.029: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:09:03.029: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:09:03.029: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:09:03.029: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-amg2e" for this suite.
• Failure [15.553 seconds]
KubeletManagedEtcHosts
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_etc_hosts.go:55
should test kubelet managed /etc/hosts file [It]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_etc_hosts.go:54
Oct 23 23:09:03.009: /etc/hosts file should be kubelet managed, but is not: "10.245.3.24\ttest-pod\n127.0.0.1\tlocalhost\n::1\tlocalhost ip6-localhost ip6-loopback\nfe00::0\tip6-localnet\nff00::0\tip6-mcastprefix\nff02::1\tip6-allnodes\nff02::2\tip6-allrouters\n"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_etc_hosts.go:113
------------------------------
[BeforeEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:89
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
Oct 23 23:09:08.051: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-e25c5
Oct 23 23:09:08.093: INFO: Service account default in ns e2e-tests-kubectl-e25c5 with secrets found. (42.449966ms)
[BeforeEach] Guestbook application
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:143
[It] should create and stop a working application [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:153
STEP: creating all guestbook components
Oct 23 23:09:08.094: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config create -f /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/examples/guestbook --namespace=e2e-tests-kubectl-e25c5'
Oct 23 23:09:08.685: INFO: replicationcontroller "frontend" created
service "frontend" created
replicationcontroller "redis-master" created
service "redis-master" created
replicationcontroller "redis-slave" created
service "redis-slave" created
STEP: validating guestbook app
Oct 23 23:09:08.686: INFO: Waiting for frontend to serve content.
Oct 23 23:09:08.689: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Oct 23 23:09:13.692: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Oct 23 23:09:18.695: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Oct 23 23:09:23.698: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Oct 23 23:09:28.700: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Oct 23 23:09:33.703: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Oct 23 23:09:38.724: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Oct 23 23:09:43.726: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Oct 23 23:09:48.741: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>: Uncaught exception 'Predis\Connection\ConnectionException' with message 'Error while reading line from the server. [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(210): Predis\Connection\AbstractConnection-&gt;onConnectionError('Error while rea...')
#1 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(133): Predis\Connection\StreamConnection-&gt;read()
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(125): Predis\Connection\AbstractConnection-&gt;readResponse(Object(Predis\Command\StringGet))
#3 /usr/local/lib/php/Predis/Client.php(326): Predis\Connection\AbstractConnection-&gt;executeCommand(Object(Predis\Command\StringGet))
#4 /usr/local/lib/php/Predis/Client.php(310): Predis\Client-&gt;executeCommand(Object(Predis\Command\StringGet))
#5 /var/www/html/guestbook.php(38): Predis\Client-&gt;__call('get', Array)
#6 /var/www/html/guestbook.php(38): Predis\Client-&gt;get('messages')
#7 {main}
in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />
Oct 23 23:09:53.751: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>: Uncaught exception 'Predis\Connection\ConnectionException' with message 'Error while reading line from the server. [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(210): Predis\Connection\AbstractConnection-&gt;onConnectionError('Error while rea...')
#1 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(133): Predis\Connection\StreamConnection-&gt;read()
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(125): Predis\Connection\AbstractConnection-&gt;readResponse(Object(Predis\Command\StringGet))
#3 /usr/local/lib/php/Predis/Client.php(326): Predis\Connection\AbstractConnection-&gt;executeCommand(Object(Predis\Command\StringGet))
#4 /usr/local/lib/php/Predis/Client.php(310): Predis\Client-&gt;executeCommand(Object(Predis\Command\StringGet))
#5 /var/www/html/guestbook.php(38): Predis\Client-&gt;__call('get', Array)
#6 /var/www/html/guestbook.php(38): Predis\Client-&gt;get('messages')
#7 {main}
in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />
Oct 23 23:09:58.760: INFO: Trying to add a new entry to the guestbook.
Oct 23 23:09:58.768: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Oct 23 23:09:58.776: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config stop --grace-period=0 -f /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/examples/guestbook --namespace=e2e-tests-kubectl-e25c5'
Oct 23 23:10:05.191: INFO: replicationcontroller "frontend" deleted
service "frontend" deleted
replicationcontroller "redis-master" deleted
service "redis-master" deleted
replicationcontroller "redis-slave" deleted
service "redis-slave" deleted
Oct 23 23:10:05.192: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get rc,svc -l name=frontend --no-headers --namespace=e2e-tests-kubectl-e25c5'
Oct 23 23:10:05.382: INFO:
Oct 23 23:10:05.382: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods -l name=frontend --namespace=e2e-tests-kubectl-e25c5 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Oct 23 23:10:05.571: INFO:
Oct 23 23:10:05.571: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get rc,svc -l name=redis-master --no-headers --namespace=e2e-tests-kubectl-e25c5'
Oct 23 23:10:05.750: INFO:
Oct 23 23:10:05.750: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods -l name=redis-master --namespace=e2e-tests-kubectl-e25c5 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Oct 23 23:10:05.933: INFO:
Oct 23 23:10:05.933: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get rc,svc -l name=redis-slave --no-headers --namespace=e2e-tests-kubectl-e25c5'
Oct 23 23:10:06.116: INFO:
Oct 23 23:10:06.116: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods -l name=redis-slave --namespace=e2e-tests-kubectl-e25c5 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Oct 23 23:10:06.301: INFO:
[AfterEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:96
STEP: Destroying namespace for this suite e2e-tests-kubectl-e25c5
• [SLOW TEST:63.273 seconds]
Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:925
Guestbook application
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:154
should create and stop a working application [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:153
------------------------------
[BeforeEach] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:10:11.324: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-7hk2p
Oct 23 23:10:11.358: INFO: Service account default in ns e2e-tests-pods-7hk2p with secrets found. (34.236228ms)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:10:11.358: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-7hk2p
Oct 23 23:10:11.361: INFO: Service account default in ns e2e-tests-pods-7hk2p with secrets found. (2.274926ms)
[It] should contain environment variables for services [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:541
Oct 23 23:10:11.371: INFO: Waiting up to 5m0s for pod server-envvars-35e427ca-79db-11e5-ba1c-42010af00002 status to be running
Oct 23 23:10:11.404: INFO: Waiting for pod server-envvars-35e427ca-79db-11e5-ba1c-42010af00002 in namespace 'e2e-tests-pods-7hk2p' status to be 'running'(found phase: "Pending", readiness: false) (32.16789ms elapsed)
Oct 23 23:10:13.407: INFO: Found pod 'server-envvars-35e427ca-79db-11e5-ba1c-42010af00002' on node 'pull-e2e-0-minion-l2bc'
STEP: Creating a pod to test service env
Oct 23 23:10:13.450: INFO: Waiting up to 5m0s for pod client-envvars-371e462a-79db-11e5-ba1c-42010af00002 status to be success or failure
Oct 23 23:10:13.483: INFO: No Status.Info for container 'env3cont' in pod 'client-envvars-371e462a-79db-11e5-ba1c-42010af00002' yet
Oct 23 23:10:13.483: INFO: Waiting for pod client-envvars-371e462a-79db-11e5-ba1c-42010af00002 in namespace 'e2e-tests-pods-7hk2p' status to be 'success or failure'(found phase: "Pending", readiness: false) (33.130025ms elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node pull-e2e-0-minion-n5ko pod client-envvars-371e462a-79db-11e5-ba1c-42010af00002 container env3cont: <nil>
STEP: Successfully fetched pod logs:KUBERNETES_PORT=tcp://10.0.0.1:443
KUBERNETES_SERVICE_PORT=443
FOOSERVICE_PORT_8765_TCP_PORT=8765
FOOSERVICE_PORT_8765_TCP_PROTO=tcp
HOSTNAME=client-envvars-371e462a-79db-11e5-ba1c-42010af00002
SHLVL=1
HOME=/root
FOOSERVICE_PORT_8765_TCP=tcp://10.0.45.14:8765
KUBERNETES_PORT_443_TCP_ADDR=10.0.0.1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
FOOSERVICE_SERVICE_HOST=10.0.45.14
KUBERNETES_PORT_443_TCP=tcp://10.0.0.1:443
KUBERNETES_SERVICE_PORT_HTTPS=443
PWD=/
KUBERNETES_SERVICE_HOST=10.0.0.1
FOOSERVICE_SERVICE_PORT=8765
FOOSERVICE_PORT=tcp://10.0.45.14:8765
FOOSERVICE_PORT_8765_TCP_ADDR=10.0.45.14
[AfterEach] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:10:15.707: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:10:15.735: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:10:15.735: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:10:15.735: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:10:15.735: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:10:15.735: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:10:15.735: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:10:15.735: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:10:15.735: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:10:15.735: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:10:15.735: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:10:15.735: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:10:15.735: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-pods-7hk2p" for this suite.
• [SLOW TEST:9.430 seconds]
Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1187
should contain environment variables for services [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:541
------------------------------
[BeforeEach] Cadvisor
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cadvisor.go:43
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
[It] should be healthy on every node.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cadvisor.go:47
STEP: getting list of nodes
STEP: Querying stats from node pull-e2e-0-minion-1dli using url api/v1/proxy/nodes/pull-e2e-0-minion-1dli/stats/
STEP: Querying stats from node pull-e2e-0-minion-djcb using url api/v1/proxy/nodes/pull-e2e-0-minion-djcb/stats/
STEP: Querying stats from node pull-e2e-0-minion-dp0i using url api/v1/proxy/nodes/pull-e2e-0-minion-dp0i/stats/
STEP: Querying stats from node pull-e2e-0-minion-l2bc using url api/v1/proxy/nodes/pull-e2e-0-minion-l2bc/stats/
STEP: Querying stats from node pull-e2e-0-minion-n5ko using url api/v1/proxy/nodes/pull-e2e-0-minion-n5ko/stats/
STEP: Querying stats from node pull-e2e-0-minion-zr43 using url api/v1/proxy/nodes/pull-e2e-0-minion-zr43/stats/
•SSSSS
------------------------------
[BeforeEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:10:20.993: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-apitg
Oct 23 23:10:21.023: INFO: Service account default in ns e2e-tests-emptydir-apitg with secrets found. (29.492112ms)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:10:21.023: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-apitg
Oct 23 23:10:21.025: INFO: Service account default in ns e2e-tests-emptydir-apitg with secrets found. (1.765389ms)
[It] should support (root,0644,default) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:74
STEP: Creating a pod to test emptydir 0644 on node default medium
Oct 23 23:10:21.029: INFO: Waiting up to 5m0s for pod pod-3ba6c24b-79db-11e5-ba1c-42010af00002 status to be success or failure
Oct 23 23:10:21.054: INFO: No Status.Info for container 'test-container' in pod 'pod-3ba6c24b-79db-11e5-ba1c-42010af00002' yet
Oct 23 23:10:21.054: INFO: Waiting for pod pod-3ba6c24b-79db-11e5-ba1c-42010af00002 in namespace 'e2e-tests-emptydir-apitg' status to be 'success or failure'(found phase: "Pending", readiness: false) (25.008384ms elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node pull-e2e-0-minion-dp0i pod pod-3ba6c24b-79db-11e5-ba1c-42010af00002 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": 61267
content of file "/test-volume/test-file": mount-tester new file
perms of file "/test-volume/test-file": -rw-r--r--
[AfterEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:10:23.074: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:10:23.101: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:10:23.101: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:10:23.101: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:10:23.101: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:10:23.101: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:10:23.101: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:10:23.101: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:10:23.101: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:10:23.101: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:10:23.101: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:10:23.101: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:10:23.101: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-emptydir-apitg" for this suite.
• [SLOW TEST:7.127 seconds]
EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:95
should support (root,0644,default) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:74
------------------------------
SSSSS
------------------------------
[BeforeEach] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:10:28.125: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-job-3l2xl
Oct 23 23:10:28.153: INFO: Service account default in ns e2e-tests-job-3l2xl with secrets found. (27.38997ms)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:10:28.153: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-job-3l2xl
Oct 23 23:10:28.154: INFO: Service account default in ns e2e-tests-job-3l2xl with secrets found. (1.76579ms)
[It] should stop a job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:182
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: scale job down
STEP: Ensuring job was deleted
[AfterEach] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:10:32.265: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:10:32.269: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:10:32.269: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:10:32.269: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:10:32.269: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:10:32.269: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:10:32.269: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:10:32.269: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:10:32.269: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:10:32.269: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:10:32.269: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:10:32.269: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:10:32.269: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-job-3l2xl" for this suite.
• [SLOW TEST:9.162 seconds]
Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:183
should stop a job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:182
------------------------------
SS
------------------------------
[BeforeEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:89
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
Oct 23 23:10:37.297: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-vw1k5
Oct 23 23:10:37.324: INFO: Service account default in ns e2e-tests-kubectl-vw1k5 had 0 secrets, ignoring for 2s: <nil>
Oct 23 23:10:39.326: INFO: Service account default in ns e2e-tests-kubectl-vw1k5 with secrets found. (2.028933164s)
[It] should support proxy with --port 0 [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:895
STEP: starting the proxy server
Oct 23 23:10:39.326: INFO: Asynchronously running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config proxy -p 0'
STEP: curling proxy /api/ output
[AfterEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:96
STEP: Destroying namespace for this suite e2e-tests-kubectl-vw1k5
• [SLOW TEST:7.226 seconds]
Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:925
Proxy server
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:924
should support proxy with --port 0 [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:895
------------------------------
[BeforeEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:10:44.518: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-0ppg1
Oct 23 23:10:44.546: INFO: Service account default in ns e2e-tests-emptydir-0ppg1 with secrets found. (27.799621ms)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:10:44.546: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-0ppg1
Oct 23 23:10:44.548: INFO: Service account default in ns e2e-tests-emptydir-0ppg1 with secrets found. (1.76532ms)
[It] should support (non-root,0666,tmpfs) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:62
STEP: Creating a pod to test emptydir 0666 on tmpfs
Oct 23 23:10:44.552: INFO: Waiting up to 5m0s for pod pod-49ac221c-79db-11e5-ba1c-42010af00002 status to be success or failure
Oct 23 23:10:44.582: INFO: No Status.Info for container 'test-container' in pod 'pod-49ac221c-79db-11e5-ba1c-42010af00002' yet
Oct 23 23:10:44.582: INFO: Waiting for pod pod-49ac221c-79db-11e5-ba1c-42010af00002 in namespace 'e2e-tests-emptydir-0ppg1' status to be 'success or failure'(found phase: "Pending", readiness: false) (29.996801ms elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node pull-e2e-0-minion-l2bc pod pod-49ac221c-79db-11e5-ba1c-42010af00002 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": tmpfs
content of file "/test-volume/test-file": mount-tester new file
perms of file "/test-volume/test-file": -rw-rw-rw-
[AfterEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:10:46.604: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:10:46.631: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:10:46.631: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:10:46.631: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:10:46.631: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:10:46.631: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:10:46.631: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:10:46.631: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:10:46.631: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:10:46.631: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:10:46.631: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:10:46.631: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:10:46.631: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-emptydir-0ppg1" for this suite.
• [SLOW TEST:7.131 seconds]
EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:95
should support (non-root,0666,tmpfs) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:62
------------------------------
[BeforeEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:89
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
Oct 23 23:10:51.650: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-a9dih
Oct 23 23:10:51.678: INFO: Service account default in ns e2e-tests-kubectl-a9dih had 0 secrets, ignoring for 2s: <nil>
Oct 23 23:10:53.680: INFO: Service account default in ns e2e-tests-kubectl-a9dih with secrets found. (2.030247716s)
[BeforeEach] Kubectl run rc
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:786
[It] should create an rc from an image [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:818
STEP: running the image nginx
Oct 23 23:10:53.680: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config run e2e-test-nginx-rc --image=nginx --namespace=e2e-tests-kubectl-a9dih'
Oct 23 23:10:53.861: INFO: replicationcontroller "e2e-test-nginx-rc" created
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
[AfterEach] Kubectl run rc
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:790
Oct 23 23:10:53.866: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config stop rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-a9dih'
Oct 23 23:10:56.103: INFO: replicationcontroller "e2e-test-nginx-rc" deleted
[AfterEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:96
STEP: Destroying namespace for this suite e2e-tests-kubectl-a9dih
• [SLOW TEST:9.472 seconds]
Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:925
Kubectl run rc
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:820
should create an rc from an image [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:818
------------------------------
[BeforeEach] Networking
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:11:01.125: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nettest-8ixfz
Oct 23 23:11:01.153: INFO: Service account default in ns e2e-tests-nettest-8ixfz with secrets found. (28.819732ms)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:11:01.153: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nettest-8ixfz
Oct 23 23:11:01.155: INFO: Service account default in ns e2e-tests-nettest-8ixfz with secrets found. (1.847258ms)
[BeforeEach] Networking
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:52
STEP: Executing a successful http request from the external internet
[It] should provide unchanging, static URL paths for kubernetes api services [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:103
STEP: testing: /validate
STEP: testing: /healthz
[AfterEach] Networking
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:11:01.205: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:11:01.209: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:11:01.209: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:11:01.209: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:11:01.209: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:11:01.209: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:11:01.209: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:11:01.209: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:11:01.209: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:11:01.209: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:11:01.209: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:11:01.209: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:11:01.209: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-nettest-8ixfz" for this suite.
• [SLOW TEST:5.103 seconds]
Networking
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:249
should provide unchanging, static URL paths for kubernetes api services [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:103
------------------------------
[BeforeEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:11:06.230: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-jy43t
Oct 23 23:11:06.260: INFO: Service account default in ns e2e-tests-emptydir-jy43t with secrets found. (30.646516ms)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:11:06.260: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-jy43t
Oct 23 23:11:06.263: INFO: Service account default in ns e2e-tests-emptydir-jy43t with secrets found. (2.081495ms)
[It] should support (non-root,0777,default) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:94
STEP: Creating a pod to test emptydir 0777 on node default medium
Oct 23 23:11:06.267: INFO: Waiting up to 5m0s for pod pod-569d8595-79db-11e5-ba1c-42010af00002 status to be success or failure
Oct 23 23:11:06.293: INFO: No Status.Info for container 'test-container' in pod 'pod-569d8595-79db-11e5-ba1c-42010af00002' yet
Oct 23 23:11:06.293: INFO: Waiting for pod pod-569d8595-79db-11e5-ba1c-42010af00002 in namespace 'e2e-tests-emptydir-jy43t' status to be 'success or failure'(found phase: "Pending", readiness: false) (26.272156ms elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node pull-e2e-0-minion-n5ko pod pod-569d8595-79db-11e5-ba1c-42010af00002 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": 61267
content of file "/test-volume/test-file": mount-tester new file
perms of file "/test-volume/test-file": -rwxrwxrwx
[AfterEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:11:08.314: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:11:08.343: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:11:08.343: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:11:08.343: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:11:08.343: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:11:08.343: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:11:08.343: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:11:08.343: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:11:08.343: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:11:08.343: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:11:08.343: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:11:08.343: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:11:08.343: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-emptydir-jy43t" for this suite.
• [SLOW TEST:7.140 seconds]
EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:95
should support (non-root,0777,default) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:94
------------------------------
[BeforeEach] Downward API
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:11:13.373: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-downward-api-fmlic
Oct 23 23:11:13.406: INFO: Service account default in ns e2e-tests-downward-api-fmlic had 0 secrets, ignoring for 2s: <nil>
Oct 23 23:11:15.409: INFO: Service account default in ns e2e-tests-downward-api-fmlic with secrets found. (2.035835109s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:11:15.409: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-downward-api-fmlic
Oct 23 23:11:15.411: INFO: Service account default in ns e2e-tests-downward-api-fmlic with secrets found. (2.016984ms)
[It] should provide pod IP as an env var
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/downward_api.go:81
STEP: Creating a pod to test downward api env vars
Oct 23 23:11:15.416: INFO: Waiting up to 5m0s for pod downward-api-5c1171a0-79db-11e5-ba1c-42010af00002 status to be success or failure
Oct 23 23:11:15.443: INFO: No Status.Info for container 'dapi-container' in pod 'downward-api-5c1171a0-79db-11e5-ba1c-42010af00002' yet
Oct 23 23:11:15.443: INFO: Waiting for pod downward-api-5c1171a0-79db-11e5-ba1c-42010af00002 in namespace 'e2e-tests-downward-api-fmlic' status to be 'success or failure'(found phase: "Pending", readiness: false) (26.882868ms elapsed)
Oct 23 23:11:17.446: INFO: Nil State.Terminated for container 'dapi-container' in pod 'downward-api-5c1171a0-79db-11e5-ba1c-42010af00002' in namespace 'e2e-tests-downward-api-fmlic' so far
Oct 23 23:11:17.446: INFO: Waiting for pod downward-api-5c1171a0-79db-11e5-ba1c-42010af00002 in namespace 'e2e-tests-downward-api-fmlic' status to be 'success or failure'(found phase: "Running", readiness: true) (2.030045028s elapsed)
Oct 23 23:11:19.449: INFO: Nil State.Terminated for container 'dapi-container' in pod 'downward-api-5c1171a0-79db-11e5-ba1c-42010af00002' in namespace 'e2e-tests-downward-api-fmlic' so far
Oct 23 23:11:19.449: INFO: Waiting for pod downward-api-5c1171a0-79db-11e5-ba1c-42010af00002 in namespace 'e2e-tests-downward-api-fmlic' status to be 'success or failure'(found phase: "Running", readiness: true) (4.033576822s elapsed)
Oct 23 23:11:21.453: INFO: Nil State.Terminated for container 'dapi-container' in pod 'downward-api-5c1171a0-79db-11e5-ba1c-42010af00002' in namespace 'e2e-tests-downward-api-fmlic' so far
Oct 23 23:11:21.453: INFO: Waiting for pod downward-api-5c1171a0-79db-11e5-ba1c-42010af00002 in namespace 'e2e-tests-downward-api-fmlic' status to be 'success or failure'(found phase: "Running", readiness: true) (6.03694662s elapsed)
Oct 23 23:11:23.456: INFO: Nil State.Terminated for container 'dapi-container' in pod 'downward-api-5c1171a0-79db-11e5-ba1c-42010af00002' in namespace 'e2e-tests-downward-api-fmlic' so far
Oct 23 23:11:23.456: INFO: Waiting for pod downward-api-5c1171a0-79db-11e5-ba1c-42010af00002 in namespace 'e2e-tests-downward-api-fmlic' status to be 'success or failure'(found phase: "Running", readiness: true) (8.040493182s elapsed)
Oct 23 23:11:25.459: INFO: Nil State.Terminated for container 'dapi-container' in pod 'downward-api-5c1171a0-79db-11e5-ba1c-42010af00002' in namespace 'e2e-tests-downward-api-fmlic' so far
Oct 23 23:11:25.459: INFO: Waiting for pod downward-api-5c1171a0-79db-11e5-ba1c-42010af00002 in namespace 'e2e-tests-downward-api-fmlic' status to be 'success or failure'(found phase: "Running", readiness: true) (10.043804456s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node pull-e2e-0-minion-dp0i pod downward-api-5c1171a0-79db-11e5-ba1c-42010af00002 container dapi-container: <nil>
STEP: Successfully fetched pod logs:POD_IP=10.245.4.22
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.0.0.1:443
HOSTNAME=downward-api-5c1171a0-79db-11e5-ba1c-42010af00002
SHLVL=1
HOME=/root
KUBERNETES_PORT_443_TCP_ADDR=10.0.0.1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.0.0.1:443
PWD=/
KUBERNETES_SERVICE_HOST=10.0.0.1
[AfterEach] Downward API
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:11:27.494: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:11:27.526: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:11:27.526: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:11:27.526: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:11:27.526: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:11:27.526: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:11:27.526: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:11:27.526: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:11:27.526: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:11:27.526: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:11:27.526: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:11:27.526: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:11:27.526: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-downward-api-fmlic" for this suite.
• [SLOW TEST:19.173 seconds]
Downward API
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/downward_api.go:82
should provide pod IP as an env var
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/downward_api.go:81
------------------------------
[BeforeEach] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:08:34.594: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-2zl0y
Oct 23 23:08:34.621: INFO: Service account default in ns e2e-tests-pods-2zl0y had 0 secrets, ignoring for 2s: <nil>
Oct 23 23:08:36.624: INFO: Service account default in ns e2e-tests-pods-2zl0y with secrets found. (2.030464179s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:08:36.624: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-2zl0y
Oct 23 23:08:36.626: INFO: Service account default in ns e2e-tests-pods-2zl0y with secrets found. (1.998215ms)
[It] should have their container restart back-off timer increase exponentially
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:861
STEP: submitting the pod to kubernetes
Oct 23 23:08:36.631: INFO: Waiting up to 5m0s for pod pod-back-off-exponentially status to be running
Oct 23 23:08:36.660: INFO: Waiting for pod pod-back-off-exponentially in namespace 'e2e-tests-pods-2zl0y' status to be 'running'(found phase: "Pending", readiness: false) (28.957282ms elapsed)
Oct 23 23:08:38.663: INFO: Found pod 'pod-back-off-exponentially' on node 'pull-e2e-0-minion-l2bc'
STEP: verifying the pod is in kubernetes
STEP: getting restart delay-0
Oct 23 23:09:49.716: INFO: getRestartDelay: finishedAt=2015-10-23 23:09:23 +0000 UTC restartedAt=2015-10-23 23:09:48 +0000 UTC (25s)
STEP: getting restart delay-1
Oct 23 23:10:42.904: INFO: getRestartDelay: finishedAt=2015-10-23 23:09:53 +0000 UTC restartedAt=2015-10-23 23:10:42 +0000 UTC (49s)
STEP: getting restart delay-2
Oct 23 23:12:30.512: INFO: getRestartDelay: finishedAt=2015-10-23 23:10:47 +0000 UTC restartedAt=2015-10-23 23:12:22 +0000 UTC (1m35s)
STEP: deleting the pod
[AfterEach] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:12:30.527: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:12:30.558: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:12:30.558: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:12:30.558: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:12:30.558: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:12:30.558: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:12:30.558: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:12:30.558: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:12:30.558: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:12:30.558: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:12:30.558: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:12:30.558: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:12:30.558: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-pods-2zl0y" for this suite.
• [SLOW TEST:240.984 seconds]
Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1187
should have their container restart back-off timer increase exponentially
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:861
------------------------------
SS
------------------------------
[BeforeEach] kubelet
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:11:32.545: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubelet-siafk
Oct 23 23:11:32.572: INFO: Service account default in ns e2e-tests-kubelet-siafk had 0 secrets, ignoring for 2s: <nil>
Oct 23 23:11:34.574: INFO: Service account default in ns e2e-tests-kubelet-siafk with secrets found. (2.029269637s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:11:34.574: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubelet-siafk
Oct 23 23:11:34.576: INFO: Service account default in ns e2e-tests-kubelet-siafk with secrets found. (1.98993ms)
[BeforeEach] kubelet
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:107
[It] kubelet should be able to delete 10 pods per node in 1m0s.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:159
STEP: Creating a RC of 60 pods and wait until all pods of this RC are running
STEP: creating replication controller cleanup60-677f0902-79db-11e5-ba1c-42010af00002 in namespace e2e-tests-kubelet-siafk
Oct 23 23:11:34.618: INFO: Created replication controller with name: cleanup60-677f0902-79db-11e5-ba1c-42010af00002, namespace: e2e-tests-kubelet-siafk, replica count: 60
Oct 23 23:11:34.688: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-dp0i"
Oct 23 23:11:34.705: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-l2bc"
Oct 23 23:11:34.717: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-n5ko"
Oct 23 23:11:34.730: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-1dli"
Oct 23 23:11:34.749: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-djcb"
Oct 23 23:11:34.852: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-zr43"
Oct 23 23:11:35.763: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-dp0i"
Oct 23 23:11:35.791: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-l2bc"
Oct 23 23:11:35.843: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-n5ko"
Oct 23 23:11:36.031: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-1dli"
Oct 23 23:11:36.251: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-djcb"
Oct 23 23:11:36.438: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-zr43"
Oct 23 23:11:36.832: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-dp0i"
Oct 23 23:11:36.886: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-l2bc"
Oct 23 23:11:37.006: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-n5ko"
Oct 23 23:11:37.218: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-1dli"
Oct 23 23:11:37.428: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-djcb"
Oct 23 23:11:37.635: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-zr43"
Oct 23 23:11:37.905: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-dp0i"
Oct 23 23:11:38.011: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-l2bc"
Oct 23 23:11:38.243: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-n5ko"
Oct 23 23:11:38.428: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-1dli"
Oct 23 23:11:38.609: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-djcb"
Oct 23 23:11:38.821: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-zr43"
Oct 23 23:11:39.005: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-dp0i"
Oct 23 23:11:39.206: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-l2bc"
Oct 23 23:11:39.438: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-n5ko"
Oct 23 23:11:39.607: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-1dli"
Oct 23 23:11:39.823: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-djcb"
Oct 23 23:11:40.003: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-zr43"
Oct 23 23:11:40.208: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-dp0i"
Oct 23 23:11:40.423: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-l2bc"
Oct 23 23:11:40.644: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-n5ko"
Oct 23 23:11:40.824: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-1dli"
Oct 23 23:11:41.010: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-djcb"
Oct 23 23:11:41.216: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-zr43"
Oct 23 23:11:41.441: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-dp0i"
Oct 23 23:11:41.631: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-l2bc"
Oct 23 23:11:41.872: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-n5ko"
Oct 23 23:11:42.034: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-1dli"
Oct 23 23:11:42.234: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-djcb"
Oct 23 23:11:42.439: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-zr43"
Oct 23 23:11:42.610: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-dp0i"
Oct 23 23:11:42.826: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-l2bc"
Oct 23 23:11:43.012: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-n5ko"
Oct 23 23:11:43.205: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-1dli"
Oct 23 23:11:43.466: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-djcb"
Oct 23 23:11:43.607: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-zr43"
Oct 23 23:11:43.804: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-dp0i"
Oct 23 23:11:44.003: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-l2bc"
Oct 23 23:11:44.213: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-n5ko"
Oct 23 23:11:44.427: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-1dli"
Oct 23 23:11:44.608: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-djcb"
Oct 23 23:11:44.618: INFO: cleanup60-677f0902-79db-11e5-ba1c-42010af00002 Pods: 60 out of 60 created, 28 running, 32 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 23 23:11:44.800: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-zr43"
Oct 23 23:11:45.023: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-dp0i"
Oct 23 23:11:45.210: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-l2bc"
Oct 23 23:11:45.424: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-n5ko"
Oct 23 23:11:45.601: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-1dli"
Oct 23 23:11:45.809: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-djcb"
Oct 23 23:11:46.016: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-zr43"
Oct 23 23:11:46.202: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-dp0i"
Oct 23 23:11:46.449: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-l2bc"
Oct 23 23:11:46.638: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-n5ko"
Oct 23 23:11:46.804: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-1dli"
Oct 23 23:11:47.027: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-djcb"
Oct 23 23:11:47.202: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-zr43"
Oct 23 23:11:47.464: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-dp0i"
Oct 23 23:11:47.633: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-l2bc"
Oct 23 23:11:47.830: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-n5ko"
Oct 23 23:11:48.088: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-1dli"
Oct 23 23:11:48.234: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-djcb"
Oct 23 23:11:48.441: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-zr43"
Oct 23 23:11:48.607: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-dp0i"
Oct 23 23:11:48.810: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-l2bc"
Oct 23 23:11:49.038: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-n5ko"
Oct 23 23:11:49.216: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-1dli"
Oct 23 23:11:49.422: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-djcb"
Oct 23 23:11:49.609: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-zr43"
Oct 23 23:11:49.807: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-dp0i"
Oct 23 23:11:50.008: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-l2bc"
Oct 23 23:11:50.218: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-n5ko"
Oct 23 23:11:50.422: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-1dli"
Oct 23 23:11:50.613: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-djcb"
Oct 23 23:11:50.811: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-zr43"
Oct 23 23:11:51.039: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-dp0i"
Oct 23 23:11:51.213: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-l2bc"
Oct 23 23:11:51.412: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-n5ko"
Oct 23 23:11:51.600: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-1dli"
Oct 23 23:11:51.804: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-djcb"
Oct 23 23:11:52.002: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-zr43"
Oct 23 23:11:52.199: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-dp0i"
Oct 23 23:11:52.425: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-l2bc"
Oct 23 23:11:52.608: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-n5ko"
Oct 23 23:11:52.804: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-1dli"
Oct 23 23:11:53.004: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-djcb"
Oct 23 23:11:53.225: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-zr43"
Oct 23 23:11:53.426: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-dp0i"
Oct 23 23:11:53.647: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-l2bc"
Oct 23 23:11:53.841: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-n5ko"
Oct 23 23:11:54.025: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-1dli"
Oct 23 23:11:54.232: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-djcb"
Oct 23 23:11:54.423: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-zr43"
Oct 23 23:11:54.600: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-dp0i"
Oct 23 23:11:54.619: INFO: cleanup60-677f0902-79db-11e5-ba1c-42010af00002 Pods: 60 out of 60 created, 59 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 23 23:11:54.800: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-l2bc"
Oct 23 23:11:55.003: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-n5ko"
Oct 23 23:11:55.206: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-1dli"
Oct 23 23:11:55.407: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-djcb"
Oct 23 23:11:55.602: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-zr43"
Oct 23 23:11:55.828: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-dp0i"
Oct 23 23:11:56.000: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-l2bc"
Oct 23 23:11:56.207: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-n5ko"
Oct 23 23:11:56.417: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-1dli"
Oct 23 23:11:56.641: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-djcb"
Oct 23 23:11:56.798: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-zr43"
Oct 23 23:11:57.002: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-dp0i"
Oct 23 23:11:57.201: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-l2bc"
Oct 23 23:11:57.418: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-n5ko"
Oct 23 23:11:57.599: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-1dli"
Oct 23 23:11:57.798: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-djcb"
Oct 23 23:11:58.002: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-zr43"
Oct 23 23:11:58.198: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-dp0i"
Oct 23 23:11:58.413: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-l2bc"
Oct 23 23:11:58.602: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-n5ko"
Oct 23 23:11:58.800: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-1dli"
Oct 23 23:11:59.001: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-djcb"
Oct 23 23:11:59.201: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-zr43"
Oct 23 23:11:59.457: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-dp0i"
Oct 23 23:11:59.620: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-l2bc"
Oct 23 23:11:59.829: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-n5ko"
Oct 23 23:12:00.017: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-1dli"
Oct 23 23:12:00.204: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-djcb"
Oct 23 23:12:00.438: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-zr43"
Oct 23 23:12:00.622: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-dp0i"
Oct 23 23:12:00.802: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-l2bc"
Oct 23 23:12:01.004: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-n5ko"
Oct 23 23:12:01.206: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-1dli"
Oct 23 23:12:01.407: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-djcb"
Oct 23 23:12:01.601: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-zr43"
Oct 23 23:12:01.799: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-dp0i"
Oct 23 23:12:02.004: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-l2bc"
Oct 23 23:12:02.205: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-n5ko"
Oct 23 23:12:02.425: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-1dli"
Oct 23 23:12:02.600: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-djcb"
Oct 23 23:12:02.799: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-zr43"
Oct 23 23:12:03.002: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-dp0i"
Oct 23 23:12:03.202: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-l2bc"
Oct 23 23:12:03.415: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-n5ko"
Oct 23 23:12:03.603: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-1dli"
Oct 23 23:12:03.799: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-djcb"
Oct 23 23:12:04.003: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-zr43"
Oct 23 23:12:04.200: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-dp0i"
Oct 23 23:12:04.413: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-l2bc"
Oct 23 23:12:04.601: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-n5ko"
Oct 23 23:12:04.619: INFO: cleanup60-677f0902-79db-11e5-ba1c-42010af00002 Pods: 60 out of 60 created, 60 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 23 23:12:04.805: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-1dli"
Oct 23 23:12:05.026: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-djcb"
Oct 23 23:12:05.335: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-zr43"
Oct 23 23:12:05.432: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-dp0i"
Oct 23 23:12:05.624: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-l2bc"
Oct 23 23:12:05.624: INFO: Checking pods on node pull-e2e-0-minion-1dli via /runningpods endpoint
Oct 23 23:12:05.624: INFO: Checking pods on node pull-e2e-0-minion-djcb via /runningpods endpoint
Oct 23 23:12:05.624: INFO: Checking pods on node pull-e2e-0-minion-dp0i via /runningpods endpoint
Oct 23 23:12:05.624: INFO: Checking pods on node pull-e2e-0-minion-l2bc via /runningpods endpoint
Oct 23 23:12:05.624: INFO: Checking pods on node pull-e2e-0-minion-n5ko via /runningpods endpoint
Oct 23 23:12:05.624: INFO: Checking pods on node pull-e2e-0-minion-zr43 via /runningpods endpoint
Oct 23 23:12:05.830: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-n5ko"
Oct 23 23:12:06.948: INFO: Resource usage on node "pull-e2e-0-minion-l2bc" is not ready yet
Oct 23 23:12:06.948: INFO: Resource usage on node "pull-e2e-0-minion-n5ko" is not ready yet
Oct 23 23:12:06.948: INFO: Resource usage on node "pull-e2e-0-minion-zr43" is not ready yet
Oct 23 23:12:06.948: INFO: Resource usage on node "pull-e2e-0-minion-1dli" is not ready yet
Oct 23 23:12:06.948: INFO: Resource usage on node "pull-e2e-0-minion-djcb" is not ready yet
Oct 23 23:12:06.948: INFO: Resource usage on node "pull-e2e-0-minion-dp0i" is not ready yet
STEP: Deleting the RC
STEP: deleting replication controller cleanup60-677f0902-79db-11e5-ba1c-42010af00002 in namespace e2e-tests-kubelet-siafk
Oct 23 23:12:07.202: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-1dli"
Oct 23 23:12:07.403: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-djcb"
Oct 23 23:12:07.603: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-zr43"
Oct 23 23:12:07.794: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-dp0i"
Oct 23 23:12:07.999: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-l2bc"
Oct 23 23:12:08.199: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-n5ko"
Oct 23 23:12:13.577: INFO: Deleting RC cleanup60-677f0902-79db-11e5-ba1c-42010af00002 took: 5.198056926s
Oct 23 23:12:28.570: INFO: Terminating RC cleanup60-677f0902-79db-11e5-ba1c-42010af00002 pods took: 14.993513926s
Oct 23 23:12:29.571: INFO: Checking pods on node pull-e2e-0-minion-1dli via /runningpods endpoint
Oct 23 23:12:29.571: INFO: Checking pods on node pull-e2e-0-minion-djcb via /runningpods endpoint
Oct 23 23:12:29.571: INFO: Checking pods on node pull-e2e-0-minion-dp0i via /runningpods endpoint
Oct 23 23:12:29.571: INFO: Checking pods on node pull-e2e-0-minion-l2bc via /runningpods endpoint
Oct 23 23:12:29.571: INFO: Checking pods on node pull-e2e-0-minion-n5ko via /runningpods endpoint
Oct 23 23:12:29.571: INFO: Checking pods on node pull-e2e-0-minion-zr43 via /runningpods endpoint
Oct 23 23:12:30.947: INFO: Deleting 60 pods on 6 nodes completed in 2.376096684s after the RC was deleted
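The cleanup test above confirms deletion by querying each kubelet's /runningpods debug endpoint and checking that none of the RC's pods are still reported. A minimal sketch of the filter applied to the returned PodList (field names follow the Kubernetes API; the helper itself is hypothetical, not the test's actual Go code):

```python
def still_running(pod_list, name_prefix):
    """Given a parsed PodList from a kubelet's /runningpods debug
    endpoint, return the names of pods whose name starts with
    name_prefix -- i.e. test pods the node has not cleaned up yet."""
    return [p["metadata"]["name"]
            for p in pod_list.get("items", [])
            if p["metadata"]["name"].startswith(name_prefix)]
```

The test passes once this list is empty on every node; here that took about 2.4s after the RC was deleted.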
Oct 23 23:12:30.947: INFO:
CPU usage of containers on node "pull-e2e-0-minion-1dli":
container 5th% 20th% 50th% 70th% 90th% 95th% 99th%
"/" 0.121 0.157 0.214 0.326 0.414 0.495 0.834
"/docker-daemon" 0.000 0.000 0.007 0.019 0.090 0.117 0.271
"/kubelet" 0.100 0.142 0.192 0.263 0.312 0.350 0.382
"/kube-proxy" 0.000 0.000 0.000 0.000 0.000 0.000 0.033
"/system" 0.000 0.000 0.000 0.000 0.000 0.000 0.000
Oct 23 23:12:30.947: INFO:
CPU usage of containers on node "pull-e2e-0-minion-djcb":
container 5th% 20th% 50th% 70th% 90th% 95th% 99th%
"/" 0.105 0.146 0.210 0.255 0.403 0.480 0.716
"/docker-daemon" 0.000 0.000 0.003 0.013 0.066 0.099 0.132
"/kubelet" 0.100 0.133 0.180 0.218 0.322 0.336 0.345
"/kube-proxy" 0.000 0.000 0.000 0.000 0.000 0.000 0.020
"/system" 0.000 0.000 0.000 0.000 0.000 0.000 0.000
Oct 23 23:12:30.947: INFO:
CPU usage of containers on node "pull-e2e-0-minion-dp0i":
container 5th% 20th% 50th% 70th% 90th% 95th% 99th%
"/" 0.098 0.142 0.195 0.279 0.456 0.643 0.856
"/docker-daemon" 0.000 0.000 0.008 0.032 0.119 0.164 0.302
"/kubelet" 0.095 0.123 0.182 0.230 0.281 0.302 0.323
"/kube-proxy" 0.000 0.000 0.000 0.000 0.000 0.008 0.020
"/system" 0.000 0.000 0.000 0.000 0.000 0.000 0.000
Oct 23 23:12:30.947: INFO:
CPU usage of containers on node "pull-e2e-0-minion-l2bc":
container 5th% 20th% 50th% 70th% 90th% 95th% 99th%
"/" 0.123 0.172 0.223 0.298 0.572 0.635 1.082
"/docker-daemon" 0.000 0.000 0.014 0.057 0.167 0.235 0.437
"/kubelet" 0.106 0.138 0.175 0.245 0.307 0.338 0.386
"/kube-proxy" 0.000 0.000 0.000 0.000 0.000 0.000 0.019
"/system" 0.000 0.000 0.000 0.000 0.000 0.000 0.000
Oct 23 23:12:30.947: INFO:
CPU usage of containers on node "pull-e2e-0-minion-n5ko":
container 5th% 20th% 50th% 70th% 90th% 95th% 99th%
"/" 0.151 0.186 0.254 0.329 0.490 0.679 1.080
"/docker-daemon" 0.000 0.000 0.016 0.047 0.158 0.202 0.446
"/kubelet" 0.113 0.151 0.193 0.276 0.329 0.354 0.364
"/kube-proxy" NaN 0.000 0.000 0.000 0.000 0.000 0.000
"/system" 0.000 0.000 0.000 0.000 0.000 0.000 0.000
Oct 23 23:12:30.947: INFO:
CPU usage of containers on node "pull-e2e-0-minion-zr43":
container 5th% 20th% 50th% 70th% 90th% 95th% 99th%
"/" 0.155 0.182 0.336 1.478 1.544 1.554 1.571
"/docker-daemon" 0.000 0.000 0.003 0.021 0.049 0.149 0.197
"/kubelet" 0.127 0.164 0.212 0.253 0.287 0.315 0.368
"/kube-proxy" 0.000 0.000 0.002 0.002 0.004 0.006 0.025
"/system" 0.000 0.000 0.000 0.000 0.000 0.000 0.000
[AfterEach] kubelet
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:12:30.947: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:12:32.384: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:12:32.384: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:12:32.384: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:12:32.384: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:12:32.384: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:12:32.384: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:12:32.384: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:12:32.384: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:12:32.384: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:12:32.384: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:12:32.384: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:12:32.384: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
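Each node's readiness above is read from the Ready condition in its status. A sketch of that check against a node object as returned by the API (a minimal stand-in, not the test's actual Go helper):

```python
def node_ready(node):
    """True if the node's status carries a Ready condition whose
    status is "True", mirroring the readiness checks logged above."""
    for cond in node.get("status", {}).get("conditions", []):
        if cond["type"] == "Ready":
            return cond["status"] == "True"
    return False
```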
STEP: Destroying namespace "e2e-tests-kubelet-siafk" for this suite.
[AfterEach] kubelet
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:111
• [SLOW TEST:91.632 seconds]
kubelet
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:162
Clean up pods on node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:161
kubelet should be able to delete 10 pods per node in 1m0s.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:159
------------------------------
SS
------------------------------
[BeforeEach] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:12:35.579: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-69qti
Oct 23 23:12:35.612: INFO: Get service account default in ns e2e-tests-pods-69qti failed, ignoring for 2s: serviceaccounts "default" not found
Oct 23 23:12:37.624: INFO: Service account default in ns e2e-tests-pods-69qti with secrets found. (2.044706709s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:12:37.624: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-69qti
Oct 23 23:12:37.662: INFO: Service account default in ns e2e-tests-pods-69qti with secrets found. (37.545716ms)
[It] should *not* be restarted with a /healthz http liveness probe [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:680
STEP: Creating pod liveness-http in namespace e2e-tests-pods-69qti
Oct 23 23:12:37.667: INFO: Waiting up to 5m0s for pod liveness-http status to be !pending
Oct 23 23:12:37.695: INFO: Waiting for pod liveness-http in namespace 'e2e-tests-pods-69qti' status to be '!pending'(found phase: "Pending", readiness: false) (28.292655ms elapsed)
Oct 23 23:12:39.698: INFO: Waiting for pod liveness-http in namespace 'e2e-tests-pods-69qti' status to be '!pending'(found phase: "Pending", readiness: false) (2.03097281s elapsed)
Oct 23 23:12:41.701: INFO: Saw pod 'liveness-http' in namespace 'e2e-tests-pods-69qti' out of pending state (found '"Running"')
STEP: Started pod liveness-http in namespace e2e-tests-pods-69qti
STEP: checking the pod's current state and verifying that restartCount is present
STEP: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:14:41.925: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:14:41.953: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:14:41.953: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:14:41.953: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:14:41.953: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:14:41.953: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:14:41.953: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:14:41.953: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:14:41.953: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:14:41.953: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:14:41.953: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:14:41.953: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:14:41.953: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-pods-69qti" for this suite.
• [SLOW TEST:131.398 seconds]
Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1187
should *not* be restarted with a /healthz http liveness probe [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:680
------------------------------
[BeforeEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:14:46.980: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-83cdm
Oct 23 23:14:47.006: INFO: Service account default in ns e2e-tests-emptydir-83cdm had 0 secrets, ignoring for 2s: <nil>
Oct 23 23:14:49.009: INFO: Service account default in ns e2e-tests-emptydir-83cdm with secrets found. (2.029485801s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:14:49.009: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-83cdm
Oct 23 23:14:49.011: INFO: Service account default in ns e2e-tests-emptydir-83cdm with secrets found. (1.999819ms)
[It] should support (root,0777,tmpfs) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:54
STEP: Creating a pod to test emptydir 0777 on tmpfs
Oct 23 23:14:49.017: INFO: Waiting up to 5m0s for pod pod-db624835-79db-11e5-9772-42010af00002 status to be success or failure
Oct 23 23:14:49.045: INFO: No Status.Info for container 'test-container' in pod 'pod-db624835-79db-11e5-9772-42010af00002' yet
Oct 23 23:14:49.045: INFO: Waiting for pod pod-db624835-79db-11e5-9772-42010af00002 in namespace 'e2e-tests-emptydir-83cdm' status to be 'success or failure'(found phase: "Pending", readiness: false) (28.099058ms elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node pull-e2e-0-minion-l2bc pod pod-db624835-79db-11e5-9772-42010af00002 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": tmpfs
content of file "/test-volume/test-file": mount-tester new file
perms of file "/test-volume/test-file": -rwxrwxrwx
[AfterEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:14:51.069: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:14:51.096: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:14:51.096: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:14:51.096: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:14:51.096: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:14:51.096: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:14:51.096: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:14:51.096: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:14:51.096: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:14:51.096: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:14:51.096: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:14:51.096: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:14:51.096: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-emptydir-83cdm" for this suite.
• [SLOW TEST:9.135 seconds]
EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:95
should support (root,0777,tmpfs) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:54
------------------------------
[BeforeEach] Docker Containers
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:41
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
Oct 23 23:14:56.120: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-containers-m2dg0
Oct 23 23:14:56.157: INFO: Service account default in ns e2e-tests-containers-m2dg0 with secrets found. (36.424356ms)
[It] should be able to override the image's default command and arguments [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:83
STEP: Creating a pod to test override all
Oct 23 23:14:56.161: INFO: Waiting up to 5m0s for pod client-containers-dfa49512-79db-11e5-9772-42010af00002 status to be success or failure
Oct 23 23:14:56.189: INFO: No Status.Info for container 'test-container' in pod 'client-containers-dfa49512-79db-11e5-9772-42010af00002' yet
Oct 23 23:14:56.189: INFO: Waiting for pod client-containers-dfa49512-79db-11e5-9772-42010af00002 in namespace 'e2e-tests-containers-m2dg0' status to be 'success or failure'(found phase: "Pending", readiness: false) (27.630328ms elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node pull-e2e-0-minion-l2bc pod client-containers-dfa49512-79db-11e5-9772-42010af00002 container test-container: <nil>
STEP: Successfully fetched pod logs:[/ep-2 override arguments]
[AfterEach] Docker Containers
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:47
• [SLOW TEST:7.135 seconds]
Docker Containers
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:84
should be able to override the image's default command and arguments [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:83
------------------------------
[BeforeEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:89
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
Oct 23 23:15:03.253: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-omjxd
Oct 23 23:15:03.302: INFO: Service account default in ns e2e-tests-kubectl-omjxd with secrets found. (49.54245ms)
[It] should check is all data is printed [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:776
Oct 23 23:15:03.302: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config version'
Oct 23 23:15:03.477: INFO: Client Version: version.Info{Major:"1", Minor:"2+", GitVersion:"v1.2.0-alpha.2.246+f93f77766dd24b-dirty", GitCommit:"f93f77766dd24bdffc4d046df0bc0360978e2f2a", GitTreeState:"dirty"}
Server Version: version.Info{Major:"1", Minor:"2+", GitVersion:"v1.2.0-alpha.2.246+f93f77766dd24b-dirty", GitCommit:"f93f77766dd24bdffc4d046df0bc0360978e2f2a", GitTreeState:"dirty"}
[AfterEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:96
STEP: Destroying namespace for this suite e2e-tests-kubectl-omjxd
• [SLOW TEST:5.242 seconds]
Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:925
Kubectl version
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:777
should check is all data is printed [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:776
------------------------------
[BeforeEach] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:15:08.502: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-wbpfj
Oct 23 23:15:08.529: INFO: Service account default in ns e2e-tests-pods-wbpfj had 0 secrets, ignoring for 2s: <nil>
Oct 23 23:15:10.532: INFO: Service account default in ns e2e-tests-pods-wbpfj with secrets found. (2.029994773s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:15:10.532: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-wbpfj
Oct 23 23:15:10.534: INFO: Service account default in ns e2e-tests-pods-wbpfj with secrets found. (1.83082ms)
[It] should be restarted with a docker exec "cat /tmp/health" liveness probe [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:567
STEP: Creating pod liveness-exec in namespace e2e-tests-pods-wbpfj
Oct 23 23:15:10.541: INFO: Waiting up to 5m0s for pod liveness-exec status to be !pending
Oct 23 23:15:10.578: INFO: Waiting for pod liveness-exec in namespace 'e2e-tests-pods-wbpfj' status to be '!pending'(found phase: "Pending", readiness: false) (37.145496ms elapsed)
Oct 23 23:15:12.582: INFO: Saw pod 'liveness-exec' in namespace 'e2e-tests-pods-wbpfj' out of pending state (found '"Running"')
STEP: Started pod liveness-exec in namespace e2e-tests-pods-wbpfj
STEP: checking the pod's current state and verifying that restartCount is present
STEP: Initial restart count of pod liveness-exec is 0
STEP: Restart count of pod e2e-tests-pods-wbpfj/liveness-exec is now 1 (50.080554653s elapsed)
STEP: deleting the pod
[AfterEach] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:16:02.676: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:16:02.704: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:16:02.704: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:16:02.704: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:16:02.704: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:16:02.704: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:16:02.704: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:16:02.704: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:16:02.704: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:16:02.704: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:16:02.704: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:16:02.704: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:16:02.704: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-pods-wbpfj" for this suite.
• [SLOW TEST:59.225 seconds]
Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1187
should be restarted with a docker exec "cat /tmp/health" liveness probe [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:567
------------------------------
S
------------------------------
[BeforeEach] Variable Expansion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:16:07.726: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-var-expansion-3yj5c
Oct 23 23:16:07.755: INFO: Service account default in ns e2e-tests-var-expansion-3yj5c with secrets found. (29.173008ms)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:16:07.755: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-var-expansion-3yj5c
Oct 23 23:16:07.757: INFO: Service account default in ns e2e-tests-var-expansion-3yj5c with secrets found. (1.911186ms)
[It] should allow substituting values in a container's command [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/expansion.go:97
STEP: Creating a pod to test substitution in container's command
Oct 23 23:16:07.762: INFO: Waiting up to 5m0s for pod var-expansion-0a51e16b-79dc-11e5-9772-42010af00002 status to be success or failure
Oct 23 23:16:07.788: INFO: No Status.Info for container 'dapi-container' in pod 'var-expansion-0a51e16b-79dc-11e5-9772-42010af00002' yet
Oct 23 23:16:07.788: INFO: Waiting for pod var-expansion-0a51e16b-79dc-11e5-9772-42010af00002 in namespace 'e2e-tests-var-expansion-3yj5c' status to be 'success or failure'(found phase: "Pending", readiness: false) (26.321161ms elapsed)
Oct 23 23:16:09.792: INFO: Nil State.Terminated for container 'dapi-container' in pod 'var-expansion-0a51e16b-79dc-11e5-9772-42010af00002' in namespace 'e2e-tests-var-expansion-3yj5c' so far
Oct 23 23:16:09.792: INFO: Waiting for pod var-expansion-0a51e16b-79dc-11e5-9772-42010af00002 in namespace 'e2e-tests-var-expansion-3yj5c' status to be 'success or failure'(found phase: "Running", readiness: true) (2.029914781s elapsed)
Oct 23 23:16:11.794: INFO: Nil State.Terminated for container 'dapi-container' in pod 'var-expansion-0a51e16b-79dc-11e5-9772-42010af00002' in namespace 'e2e-tests-var-expansion-3yj5c' so far
Oct 23 23:16:11.794: INFO: Waiting for pod var-expansion-0a51e16b-79dc-11e5-9772-42010af00002 in namespace 'e2e-tests-var-expansion-3yj5c' status to be 'success or failure'(found phase: "Running", readiness: true) (4.032752458s elapsed)
Oct 23 23:16:13.797: INFO: Nil State.Terminated for container 'dapi-container' in pod 'var-expansion-0a51e16b-79dc-11e5-9772-42010af00002' in namespace 'e2e-tests-var-expansion-3yj5c' so far
Oct 23 23:16:13.797: INFO: Waiting for pod var-expansion-0a51e16b-79dc-11e5-9772-42010af00002 in namespace 'e2e-tests-var-expansion-3yj5c' status to be 'success or failure'(found phase: "Running", readiness: true) (6.035658488s elapsed)
Oct 23 23:16:15.800: INFO: Nil State.Terminated for container 'dapi-container' in pod 'var-expansion-0a51e16b-79dc-11e5-9772-42010af00002' in namespace 'e2e-tests-var-expansion-3yj5c' so far
Oct 23 23:16:15.800: INFO: Waiting for pod var-expansion-0a51e16b-79dc-11e5-9772-42010af00002 in namespace 'e2e-tests-var-expansion-3yj5c' status to be 'success or failure'(found phase: "Running", readiness: true) (8.038495588s elapsed)
Oct 23 23:16:17.803: INFO: Nil State.Terminated for container 'dapi-container' in pod 'var-expansion-0a51e16b-79dc-11e5-9772-42010af00002' in namespace 'e2e-tests-var-expansion-3yj5c' so far
Oct 23 23:16:17.803: INFO: Waiting for pod var-expansion-0a51e16b-79dc-11e5-9772-42010af00002 in namespace 'e2e-tests-var-expansion-3yj5c' status to be 'success or failure'(found phase: "Running", readiness: true) (10.041271652s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node pull-e2e-0-minion-dp0i pod var-expansion-0a51e16b-79dc-11e5-9772-42010af00002 container dapi-container: <nil>
STEP: Successfully fetched pod logs:test-value
[AfterEach] Variable Expansion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:16:19.822: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:16:19.851: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:16:19.851: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:16:19.851: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:16:19.851: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:16:19.851: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:16:19.851: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:16:19.851: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:16:19.851: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:16:19.851: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:16:19.851: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:16:19.851: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:16:19.851: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-var-expansion-3yj5c" for this suite.
• [SLOW TEST:17.144 seconds]
Variable Expansion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/expansion.go:129
should allow substituting values in a container's command [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/expansion.go:97
------------------------------
S
------------------------------
[BeforeEach] Addon update
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:16:24.873: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-addon-update-test-8mek5
Oct 23 23:16:24.910: INFO: Service account default in ns e2e-tests-addon-update-test-8mek5 with secrets found. (36.315943ms)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:16:24.910: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-addon-update-test-8mek5
Oct 23 23:16:24.911: INFO: Service account default in ns e2e-tests-addon-update-test-8mek5 with secrets found. (1.682415ms)
[BeforeEach] Addon update
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/addon_update.go:214
Oct 23 23:16:25.207: INFO: Executing 'sudo TEST_ADDON_CHECK_INTERVAL_SEC=1 /etc/init.d/kube-addons restart' on 104.196.0.155:22
[It] should propagate add-on file changes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/addon_update.go:322
Oct 23 23:16:25.225: INFO: Executing 'mkdir -p addon-test-dir/e2e-tests-addon-update-test-8mek5' on 104.196.0.155:22
Oct 23 23:16:25.232: INFO: Writing remote file 'addon-test-dir/e2e-tests-addon-update-test-8mek5/addon-controller-v1.yaml' on 104.196.0.155:22
Oct 23 23:16:25.249: INFO: Writing remote file 'addon-test-dir/e2e-tests-addon-update-test-8mek5/addon-controller-v2.yaml' on 104.196.0.155:22
Oct 23 23:16:25.252: INFO: Writing remote file 'addon-test-dir/e2e-tests-addon-update-test-8mek5/addon-service-v1.yaml' on 104.196.0.155:22
Oct 23 23:16:25.255: INFO: Writing remote file 'addon-test-dir/e2e-tests-addon-update-test-8mek5/addon-service-v2.yaml' on 104.196.0.155:22
Oct 23 23:16:25.259: INFO: Writing remote file 'addon-test-dir/e2e-tests-addon-update-test-8mek5/invalid-addon-controller-v1.yaml' on 104.196.0.155:22
Oct 23 23:16:25.263: INFO: Writing remote file 'addon-test-dir/e2e-tests-addon-update-test-8mek5/invalid-addon-service-v1.yaml' on 104.196.0.155:22
Oct 23 23:16:25.266: INFO: Executing 'sudo rm -rf /etc/kubernetes/addons/addon-test-dir' on 104.196.0.155:22
Oct 23 23:16:25.275: INFO: Executing 'sudo mkdir -p /etc/kubernetes/addons/addon-test-dir/e2e-tests-addon-update-test-8mek5' on 104.196.0.155:22
STEP: copy invalid manifests to the destination dir (without kubernetes.io/cluster-service label)
Oct 23 23:16:25.280: INFO: Executing 'sudo cp addon-test-dir/e2e-tests-addon-update-test-8mek5/invalid-addon-controller-v1.yaml /etc/kubernetes/addons/addon-test-dir/e2e-tests-addon-update-test-8mek5/invalid-addon-controller-v1.yaml' on 104.196.0.155:22
Oct 23 23:16:25.288: INFO: Executing 'sudo cp addon-test-dir/e2e-tests-addon-update-test-8mek5/invalid-addon-service-v1.yaml /etc/kubernetes/addons/addon-test-dir/e2e-tests-addon-update-test-8mek5/invalid-addon-service-v1.yaml' on 104.196.0.155:22
STEP: copy new manifests
Oct 23 23:16:25.297: INFO: Executing 'sudo cp addon-test-dir/e2e-tests-addon-update-test-8mek5/addon-controller-v1.yaml /etc/kubernetes/addons/addon-test-dir/e2e-tests-addon-update-test-8mek5/addon-controller-v1.yaml' on 104.196.0.155:22
Oct 23 23:16:25.303: INFO: Executing 'sudo cp addon-test-dir/e2e-tests-addon-update-test-8mek5/addon-service-v1.yaml /etc/kubernetes/addons/addon-test-dir/e2e-tests-addon-update-test-8mek5/addon-service-v1.yaml' on 104.196.0.155:22
Oct 23 23:16:31.325: INFO: Service addon-test in namespace e2e-tests-addon-update-test-8mek5 found.
Oct 23 23:16:31.333: INFO: ReplicationController addon-test-v1 in namespace default found.
STEP: update manifests
Oct 23 23:16:31.333: INFO: Executing 'sudo cp addon-test-dir/e2e-tests-addon-update-test-8mek5/addon-controller-v2.yaml /etc/kubernetes/addons/addon-test-dir/e2e-tests-addon-update-test-8mek5/addon-controller-v2.yaml' on 104.196.0.155:22
Oct 23 23:16:31.371: INFO: Executing 'sudo cp addon-test-dir/e2e-tests-addon-update-test-8mek5/addon-service-v2.yaml /etc/kubernetes/addons/addon-test-dir/e2e-tests-addon-update-test-8mek5/addon-service-v2.yaml' on 104.196.0.155:22
Oct 23 23:16:31.404: INFO: Executing 'sudo rm /etc/kubernetes/addons/addon-test-dir/e2e-tests-addon-update-test-8mek5/addon-controller-v1.yaml' on 104.196.0.155:22
Oct 23 23:16:31.411: INFO: Executing 'sudo rm /etc/kubernetes/addons/addon-test-dir/e2e-tests-addon-update-test-8mek5/addon-service-v1.yaml' on 104.196.0.155:22
Oct 23 23:16:40.463: INFO: Service addon-test-updated in namespace e2e-tests-addon-update-test-8mek5 found.
Oct 23 23:16:40.490: INFO: ReplicationController addon-test-v2 in namespace e2e-tests-addon-update-test-8mek5 found.
Oct 23 23:16:40.505: INFO: Service addon-test in namespace e2e-tests-addon-update-test-8mek5 disappeared.
Oct 23 23:16:40.525: INFO: Get ReplicationController addon-test-v1 in namespace default failed (replicationControllers "addon-test-v1" not found).
STEP: remove manifests
Oct 23 23:16:40.525: INFO: Executing 'sudo rm /etc/kubernetes/addons/addon-test-dir/e2e-tests-addon-update-test-8mek5/addon-controller-v2.yaml' on 104.196.0.155:22
Oct 23 23:16:40.564: INFO: Executing 'sudo rm /etc/kubernetes/addons/addon-test-dir/e2e-tests-addon-update-test-8mek5/addon-service-v2.yaml' on 104.196.0.155:22
Oct 23 23:16:46.600: INFO: Service addon-test-updated in namespace e2e-tests-addon-update-test-8mek5 disappeared.
Oct 23 23:16:46.603: INFO: Get ReplicationController addon-test-v2 in namespace e2e-tests-addon-update-test-8mek5 failed (replicationControllers "addon-test-v2" not found).
STEP: verify invalid API addons weren't created
Oct 23 23:16:46.620: INFO: Executing 'sudo rm -rf /etc/kubernetes/addons/addon-test-dir' on 104.196.0.155:22
Oct 23 23:16:46.637: INFO: Executing 'rm -rf addon-test-dir' on 104.196.0.155:22
[AfterEach] Addon update
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:16:46.642: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:16:46.650: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:16:46.650: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:16:46.650: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:16:46.650: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:16:46.650: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:16:46.650: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:16:46.650: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:16:46.650: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:16:46.650: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:16:46.650: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:16:46.650: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:16:46.650: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-addon-update-test-8mek5" for this suite.
[AfterEach] Addon update
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/addon_update.go:222
Oct 23 23:16:51.706: INFO: Executing 'sudo /etc/init.d/kube-addons restart' on 104.196.0.155:22
• [SLOW TEST:26.954 seconds]
Addon update
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/addon_update.go:323
should propagate add-on file changes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/addon_update.go:322
------------------------------
SS
------------------------------
P [PENDING]
Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1187
should get a host IP [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:227
------------------------------
P [PENDING]
Namespaces
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/namespace.go:120
should always delete fast (ALL of 100 namespaces in 150 seconds)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/namespace.go:119
------------------------------
[BeforeEach] Port forwarding
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:16:51.833: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-port-forwarding-cdgf2
Oct 23 23:16:51.882: INFO: Service account default in ns e2e-tests-port-forwarding-cdgf2 had 0 secrets, ignoring for 2s: <nil>
Oct 23 23:16:53.887: INFO: Service account default in ns e2e-tests-port-forwarding-cdgf2 with secrets found. (2.054337742s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:16:53.887: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-port-forwarding-cdgf2
Oct 23 23:16:53.900: INFO: Service account default in ns e2e-tests-port-forwarding-cdgf2 with secrets found. (13.306547ms)
[It] should support a client that connects, sends no data, and disconnects [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:155
STEP: creating the target pod
Oct 23 23:16:53.913: INFO: Waiting up to 5m0s for pod pfpod status to be running
Oct 23 23:16:54.006: INFO: Waiting for pod pfpod in namespace 'e2e-tests-port-forwarding-cdgf2' status to be 'running'(found phase: "Pending", readiness: false) (92.835886ms elapsed)
Oct 23 23:16:56.014: INFO: Found pod 'pfpod' on node 'pull-e2e-0-minion-l2bc'
STEP: Running 'kubectl port-forward'
Oct 23 23:16:56.014: INFO: starting port-forward command and streaming output
Oct 23 23:16:56.014: INFO: Asynchronously running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config port-forward --namespace=e2e-tests-port-forwarding-cdgf2 pfpod :80'
Oct 23 23:16:56.015: INFO: reading from `kubectl port-forward` command's stderr
STEP: Dialing the local port
STEP: Closing the connection to the local port
Oct 23 23:16:56.483: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config logs --namespace=e2e-tests-port-forwarding-cdgf2 -f pfpod'
[AfterEach] Port forwarding
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
STEP: Collecting events from namespace "e2e-tests-port-forwarding-cdgf2".
Oct 23 23:17:26.490: INFO: event for pfpod: {scheduler } Scheduled: Successfully assigned pfpod to pull-e2e-0-minion-l2bc
Oct 23 23:17:26.490: INFO: event for pfpod: {kubelet pull-e2e-0-minion-l2bc} Pulled: Container image "beta.gcr.io/google_containers/pause:2.0" already present on machine
Oct 23 23:17:26.490: INFO: event for pfpod: {kubelet pull-e2e-0-minion-l2bc} Created: Created with docker id 38a027310ad1
Oct 23 23:17:26.490: INFO: event for pfpod: {kubelet pull-e2e-0-minion-l2bc} Started: Started with docker id 38a027310ad1
Oct 23 23:17:26.490: INFO: event for pfpod: {kubelet pull-e2e-0-minion-l2bc} Pulled: Container image "gcr.io/google_containers/portforwardtester:1.0" already present on machine
Oct 23 23:17:26.490: INFO: event for pfpod: {kubelet pull-e2e-0-minion-l2bc} Created: Created with docker id c458ee9bac40
Oct 23 23:17:26.490: INFO: event for pfpod: {kubelet pull-e2e-0-minion-l2bc} Started: Started with docker id c458ee9bac40
Oct 23 23:17:26.528: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 23 23:17:26.528: INFO: pod-back-off-image pull-e2e-0-minion-n5ko Running [{Ready False 0001-01-01 00:00:00 +0000 UTC 2015-10-23 23:16:44 +0000 UTC ContainersNotReady containers with unready status: [back-off]}]
Oct 23 23:17:26.528: INFO: pfpod pull-e2e-0-minion-l2bc Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 23:16:54 +0000 UTC }]
Oct 23 23:17:26.528: INFO: elasticsearch-logging-v1-3gvtt pull-e2e-0-minion-djcb Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:50:28 +0000 UTC }]
Oct 23 23:17:26.528: INFO: elasticsearch-logging-v1-t59p5 pull-e2e-0-minion-1dli Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:50:10 +0000 UTC }]
Oct 23 23:17:26.528: INFO: heapster-v10-u3l0u pull-e2e-0-minion-zr43 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:49:57 +0000 UTC }]
Oct 23 23:17:26.528: INFO: kibana-logging-v1-skk6z pull-e2e-0-minion-djcb Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:51:00 +0000 UTC }]
Oct 23 23:17:26.528: INFO: kube-dns-v9-6u0vh pull-e2e-0-minion-djcb Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:51:04 +0000 UTC }]
Oct 23 23:17:26.528: INFO: kube-ui-v3-laljg pull-e2e-0-minion-1dli Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:49:49 +0000 UTC }]
Oct 23 23:17:26.528: INFO: monitoring-influxdb-grafana-v2-bpbto pull-e2e-0-minion-zr43 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:50:57 +0000 UTC }]
Oct 23 23:17:26.528: INFO:
Oct 23 23:17:26.528: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:17:26.532: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:17:26.532: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:17:26.532: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:17:26.532: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:17:26.532: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:17:26.532: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:17:26.532: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:17:26.532: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:17:26.532: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:17:26.532: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:17:26.532: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:17:26.532: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-port-forwarding-cdgf2" for this suite.
Oct 23 23:17:26.574: INFO:
• Failure [39.719 seconds]
Port forwarding
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:239
With a server that expects a client request
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:202
should support a client that connects, sends no data, and disconnects [Conformance] [It]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:155
Oct 23 23:17:26.483: kubectl timed out
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:122
------------------------------
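A note on the `kubectl timed out` failure above: the asynchronously-run command logged earlier contains a doubled token (`.../kubectl kubectl --server=...`), which would make the binary parse `kubectl` as its subcommand and never start forwarding — consistent with the timeout. A typical invocation, reconstructed from the flags in this log (not an exact reproduction of the test harness), would be:

```shell
# Sketch of the port-forward the spec drives; server, namespace, and pod name
# are taken from the log above, the kubeconfig path is an assumption.
kubectl --server=https://104.196.0.155 \
        --kubeconfig=$HOME/.kube/config \
        port-forward --namespace=e2e-tests-port-forwarding-cdgf2 pfpod :80
# ":80" lets kubectl pick a free local port and forward it to pod port 80.
```

This requires a live cluster matching the flags, so it is illustrative only.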
[BeforeEach] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:17:31.553: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-pizbs
Oct 23 23:17:31.579: INFO: Service account default in ns e2e-tests-pods-pizbs had 0 secrets, ignoring for 2s: <nil>
Oct 23 23:17:33.582: INFO: Service account default in ns e2e-tests-pods-pizbs with secrets found. (2.029031989s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:17:33.582: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-pizbs
Oct 23 23:17:33.584: INFO: Service account default in ns e2e-tests-pods-pizbs with secrets found. (2.133252ms)
[It] should be schedule with cpu and memory limits [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:264
STEP: creating the pod
Oct 23 23:17:33.589: INFO: Waiting up to 5m0s for pod pod-update-3d7a18cd-79dc-11e5-9772-42010af00002 status to be running
Oct 23 23:17:33.620: INFO: Waiting for pod pod-update-3d7a18cd-79dc-11e5-9772-42010af00002 in namespace 'e2e-tests-pods-pizbs' status to be 'running'(found phase: "Pending", readiness: false) (30.688648ms elapsed)
Oct 23 23:17:35.623: INFO: Found pod 'pod-update-3d7a18cd-79dc-11e5-9772-42010af00002' on node 'pull-e2e-0-minion-dp0i'
[AfterEach] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:17:35.630: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:17:35.635: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:17:35.635: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:17:35.635: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:17:35.635: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:17:35.635: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:17:35.635: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:17:35.635: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:17:35.635: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:17:35.635: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:17:35.635: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:17:35.635: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:17:35.635: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-pods-pizbs" for this suite.
• [SLOW TEST:9.099 seconds]
Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1187
should be schedule with cpu and memory limits [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:264
------------------------------
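The "should be schedule with cpu and memory limits" spec above creates a pod that carries resource limits and merely verifies it schedules and runs. A minimal manifest of that shape — the image is borrowed from events earlier in this log, the limit values and names are assumptions — looks like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-limits-example   # the test generates a unique pod-update-<uuid> name
spec:
  containers:
  - name: pause
    image: beta.gcr.io/google_containers/pause:2.0
    resources:
      limits:
        cpu: 100m      # assumed values; the spec only checks the pod schedules
        memory: 64Mi
```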
S
------------------------------
[BeforeEach] ReplicationController
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:17:40.655: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-replication-controller-pdw2s
Oct 23 23:17:40.682: INFO: Service account default in ns e2e-tests-replication-controller-pdw2s had 0 secrets, ignoring for 2s: <nil>
Oct 23 23:17:42.684: INFO: Service account default in ns e2e-tests-replication-controller-pdw2s with secrets found. (2.029658094s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:17:42.684: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-replication-controller-pdw2s
Oct 23 23:17:42.686: INFO: Service account default in ns e2e-tests-replication-controller-pdw2s with secrets found. (1.667476ms)
[It] should serve a basic image on each replica with a public image [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:38
STEP: Creating replication controller my-hostname-basic-42e6f413-79dc-11e5-9772-42010af00002
Oct 23 23:17:42.734: INFO: Pod name my-hostname-basic-42e6f413-79dc-11e5-9772-42010af00002: Found 2 pods out of 2
STEP: Ensuring each pod is running
Oct 23 23:17:42.734: INFO: Waiting up to 5m0s for pod my-hostname-basic-42e6f413-79dc-11e5-9772-42010af00002-sq9jd status to be running
Oct 23 23:17:42.737: INFO: Waiting for pod my-hostname-basic-42e6f413-79dc-11e5-9772-42010af00002-sq9jd in namespace 'e2e-tests-replication-controller-pdw2s' status to be 'running'(found phase: "Pending", readiness: false) (2.593293ms elapsed)
Oct 23 23:17:44.740: INFO: Found pod 'my-hostname-basic-42e6f413-79dc-11e5-9772-42010af00002-sq9jd' on node 'pull-e2e-0-minion-l2bc'
Oct 23 23:17:44.740: INFO: Waiting up to 5m0s for pod my-hostname-basic-42e6f413-79dc-11e5-9772-42010af00002-w71qy status to be running
Oct 23 23:17:44.743: INFO: Found pod 'my-hostname-basic-42e6f413-79dc-11e5-9772-42010af00002-w71qy' on node 'pull-e2e-0-minion-n5ko'
STEP: Trying to dial each unique pod
Oct 23 23:17:49.752: INFO: Controller my-hostname-basic-42e6f413-79dc-11e5-9772-42010af00002: Got expected result from replica 1 [my-hostname-basic-42e6f413-79dc-11e5-9772-42010af00002-sq9jd]: "my-hostname-basic-42e6f413-79dc-11e5-9772-42010af00002-sq9jd", 1 of 2 required successes so far
Oct 23 23:17:49.757: INFO: Controller my-hostname-basic-42e6f413-79dc-11e5-9772-42010af00002: Got expected result from replica 2 [my-hostname-basic-42e6f413-79dc-11e5-9772-42010af00002-w71qy]: "my-hostname-basic-42e6f413-79dc-11e5-9772-42010af00002-w71qy", 2 of 2 required successes so far
STEP: deleting replication controller my-hostname-basic-42e6f413-79dc-11e5-9772-42010af00002 in namespace e2e-tests-replication-controller-pdw2s
Oct 23 23:17:51.808: INFO: Deleting RC my-hostname-basic-42e6f413-79dc-11e5-9772-42010af00002 took: 2.048326923s
Oct 23 23:18:01.813: INFO: Terminating RC my-hostname-basic-42e6f413-79dc-11e5-9772-42010af00002 pods took: 10.005702168s
[AfterEach] ReplicationController
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:18:01.814: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:18:01.818: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:18:01.818: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:18:01.818: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:18:01.818: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:18:01.818: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:18:01.818: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:18:01.818: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:18:01.818: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:18:01.818: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:18:01.818: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:18:01.818: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:18:01.818: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-replication-controller-pdw2s" for this suite.
• [SLOW TEST:26.195 seconds]
ReplicationController
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:46
should serve a basic image on each replica with a public image [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:38
------------------------------
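The ReplicationController spec above brings up 2 replicas, dials each pod, and expects the pod's own name back ("Got expected result from replica 1 [...]"), i.e. a hostname-echoing server. A hedged sketch of such an RC — the image, port, and label key are assumptions, the name is abbreviated from the log:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic
spec:
  replicas: 2
  selector:
    name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/google_containers/serve_hostname:1.1  # assumed image/tag
        ports:
        - containerPort: 9376                               # assumed port
```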
S
------------------------------
[BeforeEach] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:18:06.854: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-ku8l3
Oct 23 23:18:06.880: INFO: Service account default in ns e2e-tests-pods-ku8l3 had 0 secrets, ignoring for 2s: <nil>
Oct 23 23:18:08.882: INFO: Service account default in ns e2e-tests-pods-ku8l3 with secrets found. (2.028382235s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:18:08.882: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-ku8l3
Oct 23 23:18:08.886: INFO: Service account default in ns e2e-tests-pods-ku8l3 with secrets found. (3.377188ms)
[It] should be updated [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:454
STEP: creating the pod
STEP: submitting the pod to kubernetes
Oct 23 23:18:08.891: INFO: Waiting up to 5m0s for pod pod-update-5284ac26-79dc-11e5-9772-42010af00002 status to be running
Oct 23 23:18:08.928: INFO: Waiting for pod pod-update-5284ac26-79dc-11e5-9772-42010af00002 in namespace 'e2e-tests-pods-ku8l3' status to be 'running'(found phase: "Pending", readiness: false) (37.495123ms elapsed)
Oct 23 23:18:10.932: INFO: Found pod 'pod-update-5284ac26-79dc-11e5-9772-42010af00002' on node 'pull-e2e-0-minion-l2bc'
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Oct 23 23:18:11.437: INFO: Conflicting update to pod, re-get and re-update: pods "pod-update-5284ac26-79dc-11e5-9772-42010af00002" cannot be updated: the object has been modified; please apply your changes to the latest version and try again
STEP: updating the pod
Oct 23 23:18:11.968: INFO: Successfully updated pod
Oct 23 23:18:11.968: INFO: Waiting up to 5m0s for pod pod-update-5284ac26-79dc-11e5-9772-42010af00002 status to be running
Oct 23 23:18:11.996: INFO: Found pod 'pod-update-5284ac26-79dc-11e5-9772-42010af00002' on node 'pull-e2e-0-minion-l2bc'
STEP: verifying the updated pod is in kubernetes
Oct 23 23:18:11.998: INFO: Pod update OK
STEP: deleting the pod
[AfterEach] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:18:12.010: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:18:12.037: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:18:12.037: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:18:12.037: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:18:12.037: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:18:12.037: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:18:12.037: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:18:12.037: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:18:12.037: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:18:12.037: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:18:12.037: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:18:12.037: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:18:12.037: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-pods-ku8l3" for this suite.
• [SLOW TEST:10.203 seconds]
Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1187
should be updated [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:454
------------------------------
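The "Conflicting update to pod, re-get and re-update" line above is Kubernetes optimistic concurrency at work: every object carries a `resourceVersion`, a write against a stale version is rejected with a conflict, and the client re-reads and reapplies its change. The same loop at the CLI, sketched (pod name from the log; the edit step itself is an assumption, not what the test changes):

```shell
# Re-get / re-update on conflict. kubectl replace fails on a stale
# resourceVersion, in which case we re-read the latest object and retry.
while true; do
  kubectl get pod pod-update-5284ac26-79dc-11e5-9772-42010af00002 -o yaml > /tmp/pod.yaml
  # ...apply the desired edit to /tmp/pod.yaml, leaving metadata.resourceVersion intact...
  kubectl replace -f /tmp/pod.yaml && break
done
```

Requires a live cluster; shown only to illustrate the retry pattern the log records.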
S
------------------------------
[BeforeEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:18:17.059: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-iv2mb
Oct 23 23:18:17.087: INFO: Service account default in ns e2e-tests-emptydir-iv2mb with secrets found. (27.909411ms)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:18:17.087: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-iv2mb
Oct 23 23:18:17.089: INFO: Service account default in ns e2e-tests-emptydir-iv2mb with secrets found. (2.167181ms)
[It] volume on tmpfs should have the correct mode [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:42
STEP: Creating a pod to test emptydir volume type on tmpfs
Oct 23 23:18:17.094: INFO: Waiting up to 5m0s for pod pod-57687ec3-79dc-11e5-9772-42010af00002 status to be success or failure
Oct 23 23:18:17.124: INFO: No Status.Info for container 'test-container' in pod 'pod-57687ec3-79dc-11e5-9772-42010af00002' yet
Oct 23 23:18:17.124: INFO: Waiting for pod pod-57687ec3-79dc-11e5-9772-42010af00002 in namespace 'e2e-tests-emptydir-iv2mb' status to be 'success or failure'(found phase: "Pending", readiness: false) (30.165046ms elapsed)
Oct 23 23:18:19.128: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-57687ec3-79dc-11e5-9772-42010af00002' in namespace 'e2e-tests-emptydir-iv2mb' so far
Oct 23 23:18:19.128: INFO: Waiting for pod pod-57687ec3-79dc-11e5-9772-42010af00002 in namespace 'e2e-tests-emptydir-iv2mb' status to be 'success or failure'(found phase: "Running", readiness: true) (2.033599619s elapsed)
Oct 23 23:18:21.131: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-57687ec3-79dc-11e5-9772-42010af00002' in namespace 'e2e-tests-emptydir-iv2mb' so far
Oct 23 23:18:21.131: INFO: Waiting for pod pod-57687ec3-79dc-11e5-9772-42010af00002 in namespace 'e2e-tests-emptydir-iv2mb' status to be 'success or failure'(found phase: "Running", readiness: true) (4.036887865s elapsed)
Oct 23 23:18:23.135: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-57687ec3-79dc-11e5-9772-42010af00002' in namespace 'e2e-tests-emptydir-iv2mb' so far
Oct 23 23:18:23.135: INFO: Waiting for pod pod-57687ec3-79dc-11e5-9772-42010af00002 in namespace 'e2e-tests-emptydir-iv2mb' status to be 'success or failure'(found phase: "Running", readiness: true) (6.040230903s elapsed)
Oct 23 23:18:25.138: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-57687ec3-79dc-11e5-9772-42010af00002' in namespace 'e2e-tests-emptydir-iv2mb' so far
Oct 23 23:18:25.138: INFO: Waiting for pod pod-57687ec3-79dc-11e5-9772-42010af00002 in namespace 'e2e-tests-emptydir-iv2mb' status to be 'success or failure'(found phase: "Running", readiness: true) (8.043458076s elapsed)
Oct 23 23:18:27.141: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-57687ec3-79dc-11e5-9772-42010af00002' in namespace 'e2e-tests-emptydir-iv2mb' so far
Oct 23 23:18:27.141: INFO: Waiting for pod pod-57687ec3-79dc-11e5-9772-42010af00002 in namespace 'e2e-tests-emptydir-iv2mb' status to be 'success or failure'(found phase: "Running", readiness: true) (10.047111327s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node pull-e2e-0-minion-l2bc pod pod-57687ec3-79dc-11e5-9772-42010af00002 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": tmpfs
perms of file "/test-volume": -rwxrwxrwx
[AfterEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:18:29.165: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:18:29.192: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:18:29.192: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:18:29.192: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:18:29.192: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:18:29.192: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:18:29.192: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:18:29.192: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:18:29.192: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:18:29.192: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:18:29.192: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:18:29.192: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:18:29.192: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-emptydir-iv2mb" for this suite.
• [SLOW TEST:17.151 seconds]
EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:95
volume on tmpfs should have the correct mode [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:42
------------------------------
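The EmptyDir spec above verifies exactly what its fetched pod logs show: mount type `tmpfs` and perms `-rwxrwxrwx` on `/test-volume`. That behavior comes from `emptyDir.medium: Memory`, which backs the volume with tmpfs. A hedged manifest sketch — the image and args are assumptions standing in for whatever test container the suite uses:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs   # the test generates a unique pod-<uuid> name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: gcr.io/google_containers/mounttest:0.4      # assumed image/tag
    args: ["--fs_type=/test-volume", "--file_perm=/test-volume"]  # assumed
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory   # tmpfs-backed, matching the "mount type: tmpfs" log line
```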
S
------------------------------
[BeforeEach] PreStop
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:18:34.214: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-prestop-kxams
Oct 23 23:18:34.242: INFO: Service account default in ns e2e-tests-prestop-kxams with secrets found. (27.860852ms)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:18:34.242: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-prestop-kxams
Oct 23 23:18:34.244: INFO: Service account default in ns e2e-tests-prestop-kxams with secrets found. (1.985552ms)
[It] should call prestop when killing a pod [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:149
STEP: Creating server pod server in namespace e2e-tests-prestop-kxams
STEP: Waiting for pods to come up.
Oct 23 23:18:34.248: INFO: Waiting up to 5m0s for pod server status to be running
Oct 23 23:18:34.275: INFO: Waiting for pod server in namespace 'e2e-tests-prestop-kxams' status to be 'running'(found phase: "Pending", readiness: false) (26.356789ms elapsed)
Oct 23 23:18:36.278: INFO: Found pod 'server' on node 'pull-e2e-0-minion-n5ko'
STEP: Creating tester pod server in namespace e2e-tests-prestop-kxams
Oct 23 23:18:36.286: INFO: Waiting up to 5m0s for pod tester status to be running
Oct 23 23:18:36.315: INFO: Waiting for pod tester in namespace 'e2e-tests-prestop-kxams' status to be 'running'(found phase: "Pending", readiness: false) (29.870876ms elapsed)
Oct 23 23:18:38.319: INFO: Found pod 'tester' on node 'pull-e2e-0-minion-l2bc'
STEP: Deleting pre-stop pod
Oct 23 23:18:43.331: INFO: Saw: {
"Hostname": "server",
"Sent": null,
"Received": {
"prestop": 1
},
"Errors": null,
"Log": [
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again."
],
"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] PreStop
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:18:43.337: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:18:43.342: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:18:43.342: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:18:43.342: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:18:43.342: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:18:43.342: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:18:43.342: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:18:43.342: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:18:43.342: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:18:43.342: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:18:43.342: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:18:43.342: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:18:43.342: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-prestop-kxams" for this suite.
• [SLOW TEST:14.145 seconds]
PreStop
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:150
should call prestop when killing a pod [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:149
------------------------------
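The `Saw:` payload above is the tester pod's observation report for the pre-stop server. A minimal sketch of the check this test effectively performs on it (field names are taken from the log; the helper function and pass condition are assumptions, not the test's actual code):

```python
# Observation report as printed by the tester pod in the log above
# (rendered here as a plain dict; field names come from the "Saw:" output).
report = {
    "Hostname": "server",
    "Sent": None,
    "Received": {"prestop": 1},
    "Errors": None,
    "Log": ['Unable to read the endpoints for default/nettest: '
            'endpoints "nettest" not found; will try again.'],
    "StillContactingPeers": True,
}

def prestop_hook_fired(r):
    """Pass when the server recorded at least one /prestop hit and no errors."""
    return r.get("Received", {}).get("prestop", 0) >= 1 and not r.get("Errors")

print(prestop_hook_fired(report))  # → True
```

The `Received: {"prestop": 1}` entry is what proves the preStop hook was called before the pod was killed.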
S
------------------------------
[BeforeEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:89
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
Oct 23 23:18:48.363: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-pn9nu
Oct 23 23:18:48.401: INFO: Service account default in ns e2e-tests-kubectl-pn9nu with secrets found. (38.784893ms)
[It] should add annotations for pods in rc [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:764
STEP: creating Redis RC
Oct 23 23:18:48.402: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config create -f /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/examples/guestbook-go/redis-master-controller.json --namespace=e2e-tests-kubectl-pn9nu'
Oct 23 23:18:48.623: INFO: replicationcontroller "redis-master" created
STEP: patching all pods
Oct 23 23:18:50.629: INFO: Waiting up to 5m0s for pod redis-master-hppb8 status to be running
Oct 23 23:18:50.631: INFO: Found pod 'redis-master-hppb8' on node 'pull-e2e-0-minion-l2bc'
Oct 23 23:18:50.631: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config patch pod redis-master-hppb8 --namespace=e2e-tests-kubectl-pn9nu -p {"metadata":{"annotations":{"x":"y"}}}'
Oct 23 23:18:50.817: INFO: "redis-master-hppb8" patched
STEP: checking annotations
Oct 23 23:18:50.821: INFO: Waiting up to 5m0s for pod redis-master-hppb8 status to be running
Oct 23 23:18:50.824: INFO: Found pod 'redis-master-hppb8' on node 'pull-e2e-0-minion-l2bc'
[AfterEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:96
STEP: Destroying namespace for this suite e2e-tests-kubectl-pn9nu
• [SLOW TEST:7.479 seconds]
Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:925
Kubectl patch
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:765
should add annotations for pods in rc [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:764
------------------------------
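The patch test above passes `-p {"metadata":{"annotations":{"x":"y"}}}` on the kubectl command line. A small sketch of building that strategic-merge-patch body programmatically (the helper name is hypothetical; the JSON shape matches the logged command):

```python
import json

def annotation_patch(annotations):
    """Build the strategic-merge-patch body used by `kubectl patch -p ...`."""
    return json.dumps({"metadata": {"annotations": annotations}},
                      separators=(",", ":"))

print(annotation_patch({"x": "y"}))  # → {"metadata":{"annotations":{"x":"y"}}}
```

Because this is a merge patch, it adds the `x: y` annotation without disturbing any annotations the pod already carries.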
S
------------------------------
[BeforeEach] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:39
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:18:55.844: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-container-probe-lcybd
Oct 23 23:18:55.870: INFO: Service account default in ns e2e-tests-container-probe-lcybd with secrets found. (25.966132ms)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:18:55.870: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-container-probe-lcybd
Oct 23 23:18:55.872: INFO: Service account default in ns e2e-tests-container-probe-lcybd with secrets found. (1.746328ms)
[It] with readiness probe should not be ready before initial delay and never restart [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:74
Oct 23 23:18:57.904: INFO: pod is not yet ready; pod has phase "Running".
Oct 23 23:18:59.883: INFO: pod is not yet ready; pod has phase "Running".
Oct 23 23:19:01.881: INFO: pod is not yet ready; pod has phase "Running".
Oct 23 23:19:03.880: INFO: pod is not yet ready; pod has phase "Running".
Oct 23 23:19:05.880: INFO: pod is not yet ready; pod has phase "Running".
Oct 23 23:19:07.880: INFO: pod is not yet ready; pod has phase "Running".
Oct 23 23:19:09.881: INFO: pod is not yet ready; pod has phase "Running".
Oct 23 23:19:11.880: INFO: pod is not yet ready; pod has phase "Running".
Oct 23 23:19:13.880: INFO: pod is not yet ready; pod has phase "Running".
Oct 23 23:19:15.882: INFO: pod is not yet ready; pod has phase "Running".
Oct 23 23:19:17.880: INFO: pod is not yet ready; pod has phase "Running".
Oct 23 23:19:19.880: INFO: pod is not yet ready; pod has phase "Running".
Oct 23 23:19:21.880: INFO: pod is not yet ready; pod has phase "Running".
Oct 23 23:19:23.883: INFO: pod is not yet ready; pod has phase "Running".
Oct 23 23:19:25.880: INFO: pod is not yet ready; pod has phase "Running".
Oct 23 23:19:27.880: INFO: pod is not yet ready; pod has phase "Running".
Oct 23 23:19:29.880: INFO: pod is not yet ready; pod has phase "Running".
Oct 23 23:19:31.880: INFO: pod is not yet ready; pod has phase "Running".
Oct 23 23:19:33.880: INFO: pod is not yet ready; pod has phase "Running".
Oct 23 23:19:35.881: INFO: pod is not yet ready; pod has phase "Running".
[AfterEach] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:41
Oct 23 23:19:37.886: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:19:37.889: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:19:37.890: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:19:37.890: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:19:37.890: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:19:37.890: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:19:37.890: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:19:37.890: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:19:37.890: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:19:37.890: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:19:37.890: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:19:37.890: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:19:37.890: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-container-probe-lcybd" for this suite.
• [SLOW TEST:47.064 seconds]
Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:101
with readiness probe should not be ready before initial delay and never restart [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:74
------------------------------
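The ~40 seconds of "pod is not yet ready" polls above are exactly what this test wants: the pod is Running immediately, but readiness stays false until the probe's initial delay elapses. A toy timeline sketch (the 30s delay is an assumption; the probe's actual `initialDelaySeconds` is not shown in the log):

```python
# Readiness timeline this test checks: the pod runs right away but is
# reported not-ready until initialDelaySeconds passes, and never restarts.
INITIAL_DELAY = 30  # seconds -- assumed probe initialDelaySeconds

def ready_at(t):
    return t >= INITIAL_DELAY

print([ready_at(t) for t in (0, 10, 29, 30, 45)])  # → [False, False, False, True, True]
```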
[BeforeEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:89
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
Oct 23 23:19:42.910: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-6xd26
Oct 23 23:19:42.936: INFO: Service account default in ns e2e-tests-kubectl-6xd26 had 0 secrets, ignoring for 2s: <nil>
Oct 23 23:19:44.939: INFO: Service account default in ns e2e-tests-kubectl-6xd26 with secrets found. (2.02882948s)
[BeforeEach] Kubectl run pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:829
[It] should create a pod from an image when restart is Never [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:871
STEP: running the image nginx
Oct 23 23:19:44.939: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config run e2e-test-nginx-pod --restart=Never --image=nginx --namespace=e2e-tests-kubectl-6xd26'
Oct 23 23:19:45.129: INFO: pod "e2e-test-nginx-pod" created
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] Kubectl run pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:833
Oct 23 23:19:45.133: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config stop pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-6xd26'
Oct 23 23:19:45.322: INFO: pod "e2e-test-nginx-pod" deleted
[AfterEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:96
STEP: Destroying namespace for this suite e2e-tests-kubectl-6xd26
• [SLOW TEST:7.429 seconds]
Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:925
Kubectl run pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:873
should create a pod from an image when restart is Never [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:871
------------------------------
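The run-pod test above confirms that `kubectl run --restart=Never` creates a bare Pod rather than a controller-managed workload. A tiny sketch of that policy-to-kind mapping (only the `Never` → Pod row is confirmed by this log; the other rows reflect later kubectl behavior and are assumptions here):

```python
# Resource kind kubectl run picks from the restart policy.
# Only "Never" -> "Pod" is demonstrated by the test output above.
KIND_FOR_RESTART = {"Always": "Deployment", "OnFailure": "Job", "Never": "Pod"}

print(KIND_FOR_RESTART["Never"])  # → Pod
```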
[BeforeEach] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:13:04.182: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-isifu
Oct 23 23:13:04.210: INFO: Service account default in ns e2e-tests-pods-isifu with secrets found. (27.88884ms)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:13:04.210: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-isifu
Oct 23 23:13:04.212: INFO: Service account default in ns e2e-tests-pods-isifu with secrets found. (1.840875ms)
[It] should have their auto-restart back-off timer reset on image update
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:912
STEP: submitting the pod to kubernetes
Oct 23 23:13:04.217: INFO: Waiting up to 5m0s for pod pod-back-off-image status to be running
Oct 23 23:13:04.245: INFO: Waiting for pod pod-back-off-image in namespace 'e2e-tests-pods-isifu' status to be 'running'(found phase: "Pending", readiness: false) (28.472932ms elapsed)
Oct 23 23:13:04.364: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-zr43"
Oct 23 23:13:04.588: INFO: Missing info/stats for container "/system" on node "pull-e2e-0-minion-dp0i"
Oct 23 23:13:06.273: INFO: Found pod 'pod-back-off-image' on node 'pull-e2e-0-minion-n5ko'
STEP: verifying the pod is in kubernetes
STEP: getting restart delay-0
Oct 23 23:14:52.438: INFO: getRestartDelay: finishedAt=2015-10-23 23:14:09 +0000 UTC restartedAt=2015-10-23 23:14:52 +0000 UTC (43s)
STEP: getting restart delay-1
Oct 23 23:16:22.781: INFO: getRestartDelay: finishedAt=2015-10-23 23:14:57 +0000 UTC restartedAt=2015-10-23 23:16:22 +0000 UTC (1m25s)
STEP: getting restart delay-2
Oct 23 23:19:14.503: INFO: getRestartDelay: finishedAt=2015-10-23 23:16:27 +0000 UTC restartedAt=2015-10-23 23:19:13 +0000 UTC (2m46s)
STEP: updating the image
Oct 23 23:19:24.512: INFO: Waiting up to 5m0s for pod pod-back-off-image status to be running
Oct 23 23:19:24.538: INFO: Found pod 'pod-back-off-image' on node 'pull-e2e-0-minion-n5ko'
STEP: get restart delay after image update
Oct 23 23:19:45.620: INFO: getRestartDelay: finishedAt=2015-10-23 23:19:29 +0000 UTC restartedAt=2015-10-23 23:19:44 +0000 UTC (15s)
STEP: deleting the pod
[AfterEach] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:19:45.631: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:19:45.659: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:19:45.659: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:19:45.659: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:19:45.659: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:19:45.659: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:19:45.659: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:19:45.659: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:19:45.659: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:19:45.659: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:19:45.659: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:19:45.659: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:19:45.659: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-pods-isifu" for this suite.
• [SLOW TEST:406.494 seconds]
Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1187
should have their auto-restart back-off timer reset on image update
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:912
------------------------------
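The delays logged above (43s, 1m25s, 2m46s, then 15s after the image update) show the kubelet's crash-loop back-off roughly doubling per restart and resetting on image change. A sketch of that doubling-with-cap behavior (the base and cap values are assumptions, not read from the log):

```python
# Sketch of a kubelet-style crash-loop back-off: the delay doubles on
# every restart, up to a cap, and resets when the image is updated.
BASE, CAP = 10, 300  # seconds -- assumed values

def restart_delays(crashes):
    delay, out = BASE, []
    for _ in range(crashes):
        out.append(delay)
        delay = min(delay * 2, CAP)
    return out

print(restart_delays(5))  # → [10, 20, 40, 80, 160]
# An image update resets the timer, which matches the 15s delay seen after
# "updating the image" versus the 2m46s delay just before it.
```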
[BeforeEach] Docker Containers
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:41
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
Oct 23 23:19:50.677: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-containers-1dsqe
Oct 23 23:19:50.705: INFO: Service account default in ns e2e-tests-containers-1dsqe had 0 secrets, ignoring for 2s: <nil>
Oct 23 23:19:52.707: INFO: Service account default in ns e2e-tests-containers-1dsqe with secrets found. (2.029793373s)
[It] should be able to override the image's default arguments (docker cmd) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:62
STEP: Creating a pod to test override arguments
Oct 23 23:19:52.713: INFO: Waiting up to 5m0s for pod client-containers-90669c62-79dc-11e5-ba1c-42010af00002 status to be success or failure
Oct 23 23:19:52.742: INFO: No Status.Info for container 'test-container' in pod 'client-containers-90669c62-79dc-11e5-ba1c-42010af00002' yet
Oct 23 23:19:52.742: INFO: Waiting for pod client-containers-90669c62-79dc-11e5-ba1c-42010af00002 in namespace 'e2e-tests-containers-1dsqe' status to be 'success or failure'(found phase: "Pending", readiness: false) (28.992561ms elapsed)
Oct 23 23:19:54.745: INFO: No Status.Info for container 'test-container' in pod 'client-containers-90669c62-79dc-11e5-ba1c-42010af00002' yet
Oct 23 23:19:54.745: INFO: Waiting for pod client-containers-90669c62-79dc-11e5-ba1c-42010af00002 in namespace 'e2e-tests-containers-1dsqe' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.03180385s elapsed)
Oct 23 23:19:56.748: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-90669c62-79dc-11e5-ba1c-42010af00002' in namespace 'e2e-tests-containers-1dsqe' so far
Oct 23 23:19:56.748: INFO: Waiting for pod client-containers-90669c62-79dc-11e5-ba1c-42010af00002 in namespace 'e2e-tests-containers-1dsqe' status to be 'success or failure'(found phase: "Running", readiness: true) (4.035323635s elapsed)
Oct 23 23:19:58.752: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-90669c62-79dc-11e5-ba1c-42010af00002' in namespace 'e2e-tests-containers-1dsqe' so far
Oct 23 23:19:58.752: INFO: Waiting for pod client-containers-90669c62-79dc-11e5-ba1c-42010af00002 in namespace 'e2e-tests-containers-1dsqe' status to be 'success or failure'(found phase: "Running", readiness: true) (6.038733779s elapsed)
Oct 23 23:20:00.755: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-90669c62-79dc-11e5-ba1c-42010af00002' in namespace 'e2e-tests-containers-1dsqe' so far
Oct 23 23:20:00.755: INFO: Waiting for pod client-containers-90669c62-79dc-11e5-ba1c-42010af00002 in namespace 'e2e-tests-containers-1dsqe' status to be 'success or failure'(found phase: "Running", readiness: true) (8.04181461s elapsed)
Oct 23 23:20:02.758: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-90669c62-79dc-11e5-ba1c-42010af00002' in namespace 'e2e-tests-containers-1dsqe' so far
Oct 23 23:20:02.758: INFO: Waiting for pod client-containers-90669c62-79dc-11e5-ba1c-42010af00002 in namespace 'e2e-tests-containers-1dsqe' status to be 'success or failure'(found phase: "Running", readiness: true) (10.044910965s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node pull-e2e-0-minion-n5ko pod client-containers-90669c62-79dc-11e5-ba1c-42010af00002 container test-container: <nil>
STEP: Successfully fetched pod logs:[/ep override arguments]
[AfterEach] Docker Containers
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:47
• [SLOW TEST:19.147 seconds]
Docker Containers
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:84
should be able to override the image's default arguments (docker cmd) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:62
------------------------------
[BeforeEach] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:19:50.340: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-dns-7rb4n
Oct 23 23:19:50.375: INFO: Service account default in ns e2e-tests-dns-7rb4n with secrets found. (35.198509ms)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:19:50.375: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-dns-7rb4n
Oct 23 23:19:50.378: INFO: Service account default in ns e2e-tests-dns-7rb4n with secrets found. (2.511674ms)
[It] should provide DNS for services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:307
STEP: Waiting for DNS Service to be Running
Oct 23 23:19:50.392: INFO: Waiting up to 5m0s for pod kube-dns-v9-6u0vh status to be running
Oct 23 23:19:50.395: INFO: Found pod 'kube-dns-v9-6u0vh' on node 'pull-e2e-0-minion-djcb'
STEP: Creating a test headless service
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
Oct 23 23:19:50.471: INFO: Waiting up to 5m0s for pod dns-test-8f0cb148-79dc-11e5-9772-42010af00002 status to be running
Oct 23 23:19:50.499: INFO: Waiting for pod dns-test-8f0cb148-79dc-11e5-9772-42010af00002 in namespace 'e2e-tests-dns-7rb4n' status to be 'running'(found phase: "Pending", readiness: false) (28.323088ms elapsed)
Oct 23 23:19:52.502: INFO: Waiting for pod dns-test-8f0cb148-79dc-11e5-9772-42010af00002 in namespace 'e2e-tests-dns-7rb4n' status to be 'running'(found phase: "Pending", readiness: false) (2.03174782s elapsed)
Oct 23 23:19:54.507: INFO: Waiting for pod dns-test-8f0cb148-79dc-11e5-9772-42010af00002 in namespace 'e2e-tests-dns-7rb4n' status to be 'running'(found phase: "Pending", readiness: false) (4.036599124s elapsed)
Oct 23 23:19:56.511: INFO: Waiting for pod dns-test-8f0cb148-79dc-11e5-9772-42010af00002 in namespace 'e2e-tests-dns-7rb4n' status to be 'running'(found phase: "Pending", readiness: false) (6.040508558s elapsed)
Oct 23 23:19:58.515: INFO: Waiting for pod dns-test-8f0cb148-79dc-11e5-9772-42010af00002 in namespace 'e2e-tests-dns-7rb4n' status to be 'running'(found phase: "Pending", readiness: false) (8.044179263s elapsed)
Oct 23 23:20:00.518: INFO: Waiting for pod dns-test-8f0cb148-79dc-11e5-9772-42010af00002 in namespace 'e2e-tests-dns-7rb4n' status to be 'running'(found phase: "Pending", readiness: false) (10.047828271s elapsed)
Oct 23 23:20:02.522: INFO: Waiting for pod dns-test-8f0cb148-79dc-11e5-9772-42010af00002 in namespace 'e2e-tests-dns-7rb4n' status to be 'running'(found phase: "Pending", readiness: false) (12.051539927s elapsed)
Oct 23 23:20:04.527: INFO: Waiting for pod dns-test-8f0cb148-79dc-11e5-9772-42010af00002 in namespace 'e2e-tests-dns-7rb4n' status to be 'running'(found phase: "Pending", readiness: false) (14.056678074s elapsed)
Oct 23 23:20:06.531: INFO: Waiting for pod dns-test-8f0cb148-79dc-11e5-9772-42010af00002 in namespace 'e2e-tests-dns-7rb4n' status to be 'running'(found phase: "Pending", readiness: false) (16.060187346s elapsed)
Oct 23 23:20:08.535: INFO: Found pod 'dns-test-8f0cb148-79dc-11e5-9772-42010af00002' on node 'pull-e2e-0-minion-l2bc'
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Oct 23 23:20:12.539: INFO: DNS probes using dns-test-8f0cb148-79dc-11e5-9772-42010af00002 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:20:13.149: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:20:13.342: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:20:13.342: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:20:13.342: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:20:13.342: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:20:13.342: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:20:13.342: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:20:13.342: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:20:13.342: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:20:13.342: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:20:13.342: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:20:13.342: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:20:13.342: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-dns-7rb4n" for this suite.
• [SLOW TEST:28.412 seconds]
DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:309
should provide DNS for services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:307
------------------------------
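The DNS test above launches a probe pod and checks that service names resolve at each level of the cluster search path. A sketch of the names such a probe targets (the service name here is hypothetical, and `cluster.local` is the assumed default cluster domain):

```python
def service_dns_names(service, namespace, cluster_domain="cluster.local"):
    """Names a DNS probe pod would try to resolve for a service
    (cluster domain is an assumption; the kube-dns default is shown)."""
    return [
        service,
        f"{service}.{namespace}",
        f"{service}.{namespace}.svc",
        f"{service}.{namespace}.svc.{cluster_domain}",
    ]

print(service_dns_names("test-service", "e2e-tests-dns-7rb4n")[-1])
# → test-service.e2e-tests-dns-7rb4n.svc.cluster.local
```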
SSS
------------------------------
[BeforeEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:89
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
Oct 23 23:20:09.827: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-f7jn0
Oct 23 23:20:09.857: INFO: Service account default in ns e2e-tests-kubectl-f7jn0 with secrets found. (30.758668ms)
[It] should create services for rc [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:643
STEP: creating Redis RC
Oct 23 23:20:09.857: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config create -f /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/examples/guestbook-go/redis-master-controller.json --namespace=e2e-tests-kubectl-f7jn0'
Oct 23 23:20:10.075: INFO: replicationcontroller "redis-master" created
Oct 23 23:20:12.081: INFO: Waiting up to 5m0s for pod redis-master-bypi1 status to be running
Oct 23 23:20:12.083: INFO: Found pod 'redis-master-bypi1' on node 'pull-e2e-0-minion-dp0i'
Oct 23 23:20:12.083: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config log redis-master-bypi1 redis-master --namespace=e2e-tests-kubectl-f7jn0'
Oct 23 23:20:12.261: INFO: 1:C 23 Oct 23:20:10.384 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
                _._
           _.-``__ ''-._
      _.-``    `.  `_.  ''-._           Redis 3.0.5 (00000000/0) 64 bit
  .-`` .-```.  ```\/    _.,_ ''-._
 (    '      ,       .-`  | `,    )     Running in standalone mode
 |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379
 |    `-._   `._    /     _.-'    |     PID: 1
  `-._    `-._  `-./  _.-'    _.-'
 |`-._`-._    `-.__.-'    _.-'_.-'|
 |    `-._`-._        _.-'_.-'    |           http://redis.io
  `-._    `-._`-.__.-'_.-'    _.-'
 |`-._`-._    `-.__.-'    _.-'_.-'|
 |    `-._`-._        _.-'_.-'    |
  `-._    `-._`-.__.-'_.-'    _.-'
      `-._    `-.__.-'    _.-'
          `-._        _.-'
              `-.__.-'
1:M 23 Oct 23:20:10.386 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1:M 23 Oct 23:20:10.386 # Server started, Redis version 3.0.5
1:M 23 Oct 23:20:10.386 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
1:M 23 Oct 23:20:10.386 * The server is now ready to accept connections on port 6379
STEP: exposing RC
Oct 23 23:20:12.261: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-f7jn0'
Oct 23 23:20:12.452: INFO: service "rm2" exposed
Oct 23 23:20:12.456: INFO: Service rm2 in namespace e2e-tests-kubectl-f7jn0 found.
STEP: exposing service
Oct 23 23:20:14.463: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-f7jn0'
Oct 23 23:20:14.653: INFO: service "rm3" exposed
Oct 23 23:20:14.656: INFO: Service rm3 in namespace e2e-tests-kubectl-f7jn0 found.
[AfterEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:96
STEP: Destroying namespace for this suite e2e-tests-kubectl-f7jn0
• [SLOW TEST:11.853 seconds]
Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:925
Kubectl expose
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:644
should create services for rc [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:643
------------------------------
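The two `kubectl expose` calls above chain services onto the same backend: `rm2` maps port 1234 to the pod's 6379, and `rm3` (exposed from `rm2`) maps 2345 to 6379 as well. A sketch of that port chain (values copied from the logged commands):

```python
# Port mappings created by the two `kubectl expose` commands in the log.
services = {
    "rm2": {"port": 1234, "targetPort": 6379},
    "rm3": {"port": 2345, "targetPort": 6379},
}

def backend_port(name):
    return services[name]["targetPort"]

print({n: backend_port(n) for n in services})  # → {'rm2': 6379, 'rm3': 6379}
```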
[BeforeEach] SSH
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ssh.go:39
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
[It] should SSH to all nodes and run commands
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ssh.go:97
STEP: Getting all nodes' SSH-able IP addresses
• Failure [0.006 seconds]
SSH
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ssh.go:98
should SSH to all nodes and run commands [It]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ssh.go:97
Oct 23 23:20:21.680: Error getting node hostnames: only found 0 external IPs on nodes, but found 6 nodes. Nodelist: &{{ } {/api/v1/nodes 4865} [{{ } {pull-e2e-0-minion-1dli /api/v1/nodes/pull-e2e-0-minion-1dli 473e964e-79d8-11e5-b1b8-42010af00002 4819 0 2015-10-23 22:49:11 +0000 UTC <nil> <nil> map[kubernetes.io/hostname:pull-e2e-0-minion-1dli] map[]} {10.245.0.0/24 pull-e2e-0-minion-1dli false} {map[cpu:{2.000 DecimalSI} memory:{7848468480.000 BinarySI} pods:{40.000 DecimalSI}] [{Ready True 2015-10-23 23:20:11 +0000 UTC 2015-10-23 22:49:42 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 10.240.0.5} {InternalIP 10.240.0.5}] {{10250}} {06067045f01b156572deffb722c59e21 06067045-F01B-1565-72DE-FFB722C59E21 c1b93dc7-9f9c-4317-81d7-1d063eba1c00 4.1.7-coreos CoreOS 766.4.0 docker://1.7.1 v1.2.0-alpha.2.246+f93f77766dd24b-dirty v1.2.0-alpha.2.246+f93f77766dd24b-dirty}}} {{ } {pull-e2e-0-minion-djcb /api/v1/nodes/pull-e2e-0-minion-djcb 48872259-79d8-11e5-b1b8-42010af00002 4818 0 2015-10-23 22:49:14 +0000 UTC <nil> <nil> map[kubernetes.io/hostname:pull-e2e-0-minion-djcb] map[]} {10.245.1.0/24 pull-e2e-0-minion-djcb false} {map[cpu:{2.000 DecimalSI} memory:{7848468480.000 BinarySI} pods:{40.000 DecimalSI}] [{Ready True 2015-10-23 23:20:11 +0000 UTC 2015-10-23 22:49:44 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 10.240.0.6} {InternalIP 10.240.0.6}] {{10250}} {b1aa0c82a3d85173231543dc6bda5cc2 B1AA0C82-A3D8-5173-2315-43DC6BDA5CC2 3aa605f8-62d1-433f-a90d-ef2ac6c1745e 4.1.7-coreos CoreOS 766.4.0 docker://1.7.1 v1.2.0-alpha.2.246+f93f77766dd24b-dirty v1.2.0-alpha.2.246+f93f77766dd24b-dirty}}} {{ } {pull-e2e-0-minion-dp0i /api/v1/nodes/pull-e2e-0-minion-dp0i 4a304db6-79d8-11e5-b1b8-42010af00002 4856 0 2015-10-23 22:49:16 +0000 UTC <nil> <nil> map[kubernetes.io/hostname:pull-e2e-0-minion-dp0i] map[]} {10.245.4.0/24 pull-e2e-0-minion-dp0i false} {map[cpu:{2.000 DecimalSI} memory:{7848468480.000 BinarySI} pods:{40.000 DecimalSI}] 
[{Ready True 2015-10-23 23:20:17 +0000 UTC 2015-10-23 22:50:07 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 10.240.0.8} {InternalIP 10.240.0.8}] {{10250}} {804cf75d9249d4fc0a14982a39e001fe 804CF75D-9249-D4FC-0A14-982A39E001FE a506e0e8-5682-4acb-b54c-91020168ed4f 4.1.7-coreos CoreOS 766.4.0 docker://1.7.1 v1.2.0-alpha.2.246+f93f77766dd24b-dirty v1.2.0-alpha.2.246+f93f77766dd24b-dirty}}} {{ } {pull-e2e-0-minion-l2bc /api/v1/nodes/pull-e2e-0-minion-l2bc 48f64a03-79d8-11e5-b1b8-42010af00002 4838 0 2015-10-23 22:49:14 +0000 UTC <nil> <nil> map[kubernetes.io/hostname:pull-e2e-0-minion-l2bc] map[]} {10.245.2.0/24 pull-e2e-0-minion-l2bc false} {map[pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI} memory:{7848468480.000 BinarySI}] [{Ready True 2015-10-23 23:20:13 +0000 UTC 2015-10-23 22:49:55 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 10.240.0.4} {InternalIP 10.240.0.4}] {{10250}} {dfc1c1a2abfb382eacf83a4da9490f83 DFC1C1A2-ABFB-382E-ACF8-3A4DA9490F83 258d6dfd-ef7f-41e7-b15f-60faf4203bd1 4.1.7-coreos CoreOS 766.4.0 docker://1.7.1 v1.2.0-alpha.2.246+f93f77766dd24b-dirty v1.2.0-alpha.2.246+f93f77766dd24b-dirty}}} {{ } {pull-e2e-0-minion-n5ko /api/v1/nodes/pull-e2e-0-minion-n5ko 4725d39b-79d8-11e5-b1b8-42010af00002 4820 0 2015-10-23 22:49:11 +0000 UTC <nil> <nil> map[kubernetes.io/hostname:pull-e2e-0-minion-n5ko] map[]} {10.245.3.0/24 pull-e2e-0-minion-n5ko false} {map[cpu:{2.000 DecimalSI} memory:{7848468480.000 BinarySI} pods:{40.000 DecimalSI}] [{Ready True 2015-10-23 23:20:12 +0000 UTC 2015-10-23 22:49:52 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 10.240.0.3} {InternalIP 10.240.0.3}] {{10250}} {4445e34b51378526c1736108982220c7 4445E34B-5137-8526-C173-6108982220C7 4bfd0525-d31e-425c-8eab-0d8868fea7a7 4.1.7-coreos CoreOS 766.4.0 docker://1.7.1 v1.2.0-alpha.2.246+f93f77766dd24b-dirty v1.2.0-alpha.2.246+f93f77766dd24b-dirty}}} {{ } {pull-e2e-0-minion-zr43 /api/v1/nodes/pull-e2e-0-minion-zr43 
46d4a2a4-79d8-11e5-b1b8-42010af00002 4861 0 2015-10-23 22:49:11 +0000 UTC <nil> <nil> map[kubernetes.io/hostname:pull-e2e-0-minion-zr43] map[]} {10.245.5.0/24 pull-e2e-0-minion-zr43 false} {map[pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI} memory:{7848468480.000 BinarySI}] [{Ready True 2015-10-23 23:20:19 +0000 UTC 2015-10-23 22:49:41 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 10.240.0.7} {InternalIP 10.240.0.7}] {{10250}} {6d724240d0b36bd02e7ac95bb50c19a5 6D724240-D0B3-6BD0-2E7A-C95BB50C19A5 baa0ce0e-dcdc-428c-b65d-89e0a0e33cb3 4.1.7-coreos CoreOS 766.4.0 docker://1.7.1 v1.2.0-alpha.2.246+f93f77766dd24b-dirty v1.2.0-alpha.2.246+f93f77766dd24b-dirty}}}]}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ssh.go:46
------------------------------
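Editor's note: the SSH test above failed because no node in the dump reports an address of type ExternalIP — each node lists only LegacyHostIP and InternalIP entries (10.240.0.x). A minimal sketch of the address filtering behind the "only found 0 external IPs" error, assuming a simplified node/address shape (this is not the actual e2e helper, which lives in test/e2e and is written in Go):

```python
# Hypothetical sketch: collect ExternalIP addresses from a node list,
# mirroring the filtering that produced "only found 0 external IPs".
def external_ips(nodes):
    """Return all addresses of type ExternalIP across the given nodes."""
    ips = []
    for node in nodes:
        for addr in node.get("addresses", []):
            if addr["type"] == "ExternalIP":
                ips.append(addr["address"])
    return ips

# Mirrors the dump above: nodes expose only LegacyHostIP/InternalIP,
# so the filter finds nothing.
nodes = [
    {"name": "pull-e2e-0-minion-1dli",
     "addresses": [{"type": "LegacyHostIP", "address": "10.240.0.5"},
                   {"type": "InternalIP", "address": "10.240.0.5"}]},
]
print(len(external_ips(nodes)))  # prints 0
```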
[BeforeEach] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:20:18.756: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-rd0qs
Oct 23 23:20:18.782: INFO: Service account default in ns e2e-tests-pods-rd0qs had 0 secrets, ignoring for 2s: <nil>
Oct 23 23:20:20.785: INFO: Service account default in ns e2e-tests-pods-rd0qs with secrets found. (2.029277074s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:20:20.785: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-rd0qs
Oct 23 23:20:20.787: INFO: Service account default in ns e2e-tests-pods-rd0qs with secrets found. (1.704264ms)
[It] should support retrieving logs from the container over websockets
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:828
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
Oct 23 23:20:20.794: INFO: Waiting up to 5m0s for pod pod-logs-websocket-a1236d9f-79dc-11e5-9772-42010af00002 status to be running
Oct 23 23:20:20.823: INFO: Waiting for pod pod-logs-websocket-a1236d9f-79dc-11e5-9772-42010af00002 in namespace 'e2e-tests-pods-rd0qs' status to be 'running'(found phase: "Pending", readiness: false) (28.593211ms elapsed)
Oct 23 23:20:22.826: INFO: Found pod 'pod-logs-websocket-a1236d9f-79dc-11e5-9772-42010af00002' on node 'pull-e2e-0-minion-dp0i'
STEP: deleting the pod
[AfterEach] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:20:22.990: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:20:23.020: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:20:23.021: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:20:23.021: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:20:23.021: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:20:23.021: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:20:23.021: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:20:23.021: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:20:23.021: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:20:23.021: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:20:23.021: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:20:23.021: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:20:23.021: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-pods-rd0qs" for this suite.
• [SLOW TEST:9.290 seconds]
Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1187
should support retrieving logs from the container over websockets
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:828
------------------------------
[BeforeEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:89
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
Oct 23 23:20:21.689: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-1k4o6
Oct 23 23:20:21.717: INFO: Service account default in ns e2e-tests-kubectl-1k4o6 had 0 secrets, ignoring for 2s: <nil>
Oct 23 23:20:23.720: INFO: Service account default in ns e2e-tests-kubectl-1k4o6 with secrets found. (2.031048256s)
[It] should apply a new configuration to an existing RC
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:466
STEP: creating Redis RC
Oct 23 23:20:23.720: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config create -f /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/examples/guestbook-go/redis-master-controller.json --namespace=e2e-tests-kubectl-1k4o6'
Oct 23 23:20:23.929: INFO: replicationcontroller "redis-master" created
STEP: applying a modified configuration
Oct 23 23:20:23.930: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config apply -f - --namespace=e2e-tests-kubectl-1k4o6'
Oct 23 23:20:24.213: INFO: replicationcontroller "redis-master" configured
STEP: checking the result
[AfterEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:96
STEP: Destroying namespace for this suite e2e-tests-kubectl-1k4o6
• [SLOW TEST:7.630 seconds]
Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:925
Kubectl apply
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:467
should apply a new configuration to an existing RC
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:466
------------------------------
[BeforeEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:20:28.053: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-fv458
Oct 23 23:20:28.100: INFO: Service account default in ns e2e-tests-emptydir-fv458 had 0 secrets, ignoring for 2s: <nil>
Oct 23 23:20:30.102: INFO: Service account default in ns e2e-tests-emptydir-fv458 with secrets found. (2.049197536s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:20:30.102: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-fv458
Oct 23 23:20:30.104: INFO: Service account default in ns e2e-tests-emptydir-fv458 with secrets found. (1.84839ms)
[It] should support (root,0644,tmpfs) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:46
STEP: Creating a pod to test emptydir 0644 on tmpfs
Oct 23 23:20:30.110: INFO: Waiting up to 5m0s for pod pod-a6b0e656-79dc-11e5-9772-42010af00002 status to be success or failure
Oct 23 23:20:30.173: INFO: No Status.Info for container 'test-container' in pod 'pod-a6b0e656-79dc-11e5-9772-42010af00002' yet
Oct 23 23:20:30.173: INFO: Waiting for pod pod-a6b0e656-79dc-11e5-9772-42010af00002 in namespace 'e2e-tests-emptydir-fv458' status to be 'success or failure'(found phase: "Pending", readiness: false) (63.038005ms elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node pull-e2e-0-minion-l2bc pod pod-a6b0e656-79dc-11e5-9772-42010af00002 container test-container: <nil>
STEP: Successfully fetched pod logs: mount type of "/test-volume": tmpfs
content of file "/test-volume/test-file": mount-tester new file
perms of file "/test-volume/test-file": -rw-r--r--
[AfterEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:20:32.248: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:20:32.287: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:20:32.287: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:20:32.287: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:20:32.287: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:20:32.287: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:20:32.287: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:20:32.287: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:20:32.287: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:20:32.287: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:20:32.287: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:20:32.287: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:20:32.287: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-emptydir-fv458" for this suite.
• [SLOW TEST:9.261 seconds]
EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:95
should support (root,0644,tmpfs) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:46
------------------------------
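Editor's note: the mount-tester output above shows perms "-rw-r--r--", which is the rendering of mode 0644 on a regular file — the mode the (root,0644,tmpfs) test name refers to. Python's stdlib can reproduce the mapping:

```python
import stat

# Mode 0644 on a regular file renders as the "-rw-r--r--" string
# the mount-tester pod printed above.
mode = stat.S_IFREG | 0o644
print(stat.filemode(mode))  # prints -rw-r--r--
```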
[BeforeEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:20:29.321: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-ztew1
Oct 23 23:20:29.366: INFO: Service account default in ns e2e-tests-emptydir-ztew1 had 0 secrets, ignoring for 2s: <nil>
Oct 23 23:20:31.375: INFO: Service account default in ns e2e-tests-emptydir-ztew1 with secrets found. (2.053779421s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:20:31.375: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-ztew1
Oct 23 23:20:31.388: INFO: Service account default in ns e2e-tests-emptydir-ztew1 with secrets found. (13.241941ms)
[It] should support (non-root,0644,default) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:86
STEP: Creating a pod to test emptydir 0644 on node default medium
Oct 23 23:20:31.408: INFO: Waiting up to 5m0s for pod pod-a774db6e-79dc-11e5-ba1c-42010af00002 status to be success or failure
Oct 23 23:20:31.444: INFO: No Status.Info for container 'test-container' in pod 'pod-a774db6e-79dc-11e5-ba1c-42010af00002' yet
Oct 23 23:20:31.444: INFO: Waiting for pod pod-a774db6e-79dc-11e5-ba1c-42010af00002 in namespace 'e2e-tests-emptydir-ztew1' status to be 'success or failure'(found phase: "Pending", readiness: false) (36.688385ms elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node pull-e2e-0-minion-n5ko pod pod-a774db6e-79dc-11e5-ba1c-42010af00002 container test-container: <nil>
STEP: Successfully fetched pod logs: mount type of "/test-volume": 61267
content of file "/test-volume/test-file": mount-tester new file
perms of file "/test-volume/test-file": -rw-r--r--
[AfterEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:20:33.475: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:20:33.512: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:20:33.512: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:20:33.512: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:20:33.512: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:20:33.512: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:20:33.512: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:20:33.512: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:20:33.512: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:20:33.512: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:20:33.512: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:20:33.512: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:20:33.512: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-emptydir-ztew1" for this suite.
• [SLOW TEST:9.213 seconds]
EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:95
should support (non-root,0644,default) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:86
------------------------------
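Editor's note: unlike the tmpfs test above, this default-medium run prints a numeric mount type, 61267. The mount-tester appears to print the raw statfs f_type when it has no name for it; 61267 is 0xEF53, the ext2/3/4 superblock magic, consistent with a disk-backed (default-medium) emptyDir rather than tmpfs:

```python
# The "mount type" 61267 printed above is the raw statfs f_type;
# 0xEF53 is the ext2/3/4 superblock magic number.
EXT4_SUPER_MAGIC = 0xEF53
print(EXT4_SUPER_MAGIC)  # prints 61267
print(hex(61267))        # prints 0xef53
```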
[BeforeEach] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:20:38.536: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-0z4we
Oct 23 23:20:38.582: INFO: Service account default in ns e2e-tests-services-0z4we had 0 secrets, ignoring for 2s: <nil>
Oct 23 23:20:40.585: INFO: Service account default in ns e2e-tests-services-0z4we with secrets found. (2.048639236s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:20:40.585: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-0z4we
Oct 23 23:20:40.587: INFO: Service account default in ns e2e-tests-services-0z4we with secrets found. (2.065152ms)
[BeforeEach] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:54
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
[It] should release NodePorts on delete
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:774
STEP: creating service nodeport-reuse with type NodePort in namespace e2e-tests-services-0z4we
STEP: deleting original service nodeport-reuse
[AfterEach] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
STEP: Collecting events from namespace "e2e-tests-services-0z4we".
Oct 23 23:20:40.751: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 23 23:20:40.751: INFO: update-demo-nautilus-eio23 pull-e2e-0-minion-dp0i Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 23:20:40 +0000 UTC }]
Oct 23 23:20:40.751: INFO: update-demo-nautilus-oumga pull-e2e-0-minion-n5ko Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 23:20:40 +0000 UTC }]
Oct 23 23:20:40.751: INFO: elasticsearch-logging-v1-3gvtt pull-e2e-0-minion-djcb Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:50:28 +0000 UTC }]
Oct 23 23:20:40.751: INFO: elasticsearch-logging-v1-t59p5 pull-e2e-0-minion-1dli Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:50:10 +0000 UTC }]
Oct 23 23:20:40.751: INFO: heapster-v10-u3l0u pull-e2e-0-minion-zr43 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:49:57 +0000 UTC }]
Oct 23 23:20:40.751: INFO: kibana-logging-v1-skk6z pull-e2e-0-minion-djcb Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:51:00 +0000 UTC }]
Oct 23 23:20:40.751: INFO: kube-dns-v9-6u0vh pull-e2e-0-minion-djcb Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:51:04 +0000 UTC }]
Oct 23 23:20:40.751: INFO: kube-ui-v3-laljg pull-e2e-0-minion-1dli Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:49:49 +0000 UTC }]
Oct 23 23:20:40.751: INFO: monitoring-influxdb-grafana-v2-bpbto pull-e2e-0-minion-zr43 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-23 22:50:57 +0000 UTC }]
Oct 23 23:20:40.751: INFO:
Oct 23 23:20:40.751: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:20:40.755: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:20:40.755: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:20:40.755: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:20:40.755: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:20:40.755: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:20:40.755: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:20:40.755: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:20:40.755: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:20:40.755: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:20:40.755: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:20:40.755: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:20:40.755: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-services-0z4we" for this suite.
[AfterEach] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:64
• Failure [7.240 seconds]
Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:871
should release NodePorts on delete [It]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:774
Expected error:
<*errors.errorString | 0xc208392d80>: {
s: "only found 0 external IPs on nodes, but found 6 nodes. Nodelist: &{{ } {/api/v1/nodes 4968} [{{ } {pull-e2e-0-minion-1dli /api/v1/nodes/pull-e2e-0-minion-1dli 473e964e-79d8-11e5-b1b8-42010af00002 4921 0 2015-10-23 22:49:11 +0000 UTC <nil> <nil> map[kubernetes.io/hostname:pull-e2e-0-minion-1dli] map[]} {10.245.0.0/24 pull-e2e-0-minion-1dli false} {map[memory:{7848468480.000 BinarySI} pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI}] [{Ready True 2015-10-23 23:20:31 +0000 UTC 2015-10-23 22:49:42 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 10.240.0.5} {InternalIP 10.240.0.5}] {{10250}} {06067045f01b156572deffb722c59e21 06067045-F01B-1565-72DE-FFB722C59E21 c1b93dc7-9f9c-4317-81d7-1d063eba1c00 4.1.7-coreos CoreOS 766.4.0 docker://1.7.1 v1.2.0-alpha.2.246+f93f77766dd24b-dirty v1.2.0-alpha.2.246+f93f77766dd24b-dirty}}} {{ } {pull-e2e-0-minion-djcb /api/v1/nodes/pull-e2e-0-minion-djcb 48872259-79d8-11e5-b1b8-42010af00002 4922 0 2015-10-23 22:49:14 +0000 UTC <nil> <nil> map[kubernetes.io/hostname:pull-e2e-0-minion-djcb] map[]} {10.245.1.0/24 pull-e2e-0-minion-djcb false} {map[cpu:{2.000 DecimalSI} memory:{7848468480.000 BinarySI} pods:{40.000 DecimalSI}] [{Ready True 2015-10-23 23:20:31 +0000 UTC 2015-10-23 22:49:44 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 10.240.0.6} {InternalIP 10.240.0.6}] {{10250}} {b1aa0c82a3d85173231543dc6bda5cc2 B1AA0C82-A3D8-5173-2315-43DC6BDA5CC2 3aa605f8-62d1-433f-a90d-ef2ac6c1745e 4.1.7-coreos CoreOS 766.4.0 docker://1.7.1 v1.2.0-alpha.2.246+f93f77766dd24b-dirty v1.2.0-alpha.2.246+f93f77766dd24b-dirty}}} {{ } {pull-e2e-0-minion-dp0i /api/v1/nodes/pull-e2e-0-minion-dp0i 4a304db6-79d8-11e5-b1b8-42010af00002 4943 0 2015-10-23 22:49:16 +0000 UTC <nil> <nil> map[kubernetes.io/hostname:pull-e2e-0-minion-dp0i] map[]} {10.245.4.0/24 pull-e2e-0-minion-dp0i false} {map[cpu:{2.000 DecimalSI} memory:{7848468480.000 BinarySI} pods:{40.000 DecimalSI}] [{Ready True 2015-10-23 23:20:38 +0000 UTC 
2015-10-23 22:50:07 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 10.240.0.8} {InternalIP 10.240.0.8}] {{10250}} {804cf75d9249d4fc0a14982a39e001fe 804CF75D-9249-D4FC-0A14-982A39E001FE a506e0e8-5682-4acb-b54c-91020168ed4f 4.1.7-coreos CoreOS 766.4.0 docker://1.7.1 v1.2.0-alpha.2.246+f93f77766dd24b-dirty v1.2.0-alpha.2.246+f93f77766dd24b-dirty}}} {{ } {pull-e2e-0-minion-l2bc /api/v1/nodes/pull-e2e-0-minion-l2bc 48f64a03-79d8-11e5-b1b8-42010af00002 4938 0 2015-10-23 22:49:14 +0000 UTC <nil> <nil> map[kubernetes.io/hostname:pull-e2e-0-minion-l2bc] map[]} {10.245.2.0/24 pull-e2e-0-minion-l2bc false} {map[memory:{7848468480.000 BinarySI} pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI}] [{Ready True 2015-10-23 23:20:33 +0000 UTC 2015-10-23 22:49:55 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 10.240.0.4} {InternalIP 10.240.0.4}] {{10250}} {dfc1c1a2abfb382eacf83a4da9490f83 DFC1C1A2-ABFB-382E-ACF8-3A4DA9490F83 258d6dfd-ef7f-41e7-b15f-60faf4203bd1 4.1.7-coreos CoreOS 766.4.0 docker://1.7.1 v1.2.0-alpha.2.246+f93f77766dd24b-dirty v1.2.0-alpha.2.246+f93f77766dd24b-dirty}}} {{ } {pull-e2e-0-minion-n5ko /api/v1/nodes/pull-e2e-0-minion-n5ko 4725d39b-79d8-11e5-b1b8-42010af00002 4929 0 2015-10-23 22:49:11 +0000 UTC <nil> <nil> map[kubernetes.io/hostname:pull-e2e-0-minion-n5ko] map[]} {10.245.3.0/24 pull-e2e-0-minion-n5ko false} {map[cpu:{2.000 DecimalSI} memory:{7848468480.000 BinarySI} pods:{40.000 DecimalSI}] [{Ready True 2015-10-23 23:20:32 +0000 UTC 2015-10-23 22:49:52 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 10.240.0.3} {InternalIP 10.240.0.3}] {{10250}} {4445e34b51378526c1736108982220c7 4445E34B-5137-8526-C173-6108982220C7 4bfd0525-d31e-425c-8eab-0d8868fea7a7 4.1.7-coreos CoreOS 766.4.0 docker://1.7.1 v1.2.0-alpha.2.246+f93f77766dd24b-dirty v1.2.0-alpha.2.246+f93f77766dd24b-dirty}}} {{ } {pull-e2e-0-minion-zr43 /api/v1/nodes/pull-e2e-0-minion-zr43 46d4a2a4-79d8-11e5-b1b8-42010af00002 4948 0 
2015-10-23 22:49:11 +0000 UTC <nil> <nil> map[kubernetes.io/hostname:pull-e2e-0-minion-zr43] map[]} {10.245.5.0/24 pull-e2e-0-minion-zr43 false} {map[pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI} memory:{7848468480.000 BinarySI}] [{Ready True 2015-10-23 23:20:39 +0000 UTC 2015-10-23 22:49:41 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 10.240.0.7} {InternalIP 10.240.0.7}] {{10250}} {6d724240d0b36bd02e7ac95bb50c19a5 6D724240-D0B3-6BD0-2E7A-C95BB50C19A5 baa0ce0e-dcdc-428c-b65d-89e0a0e33cb3 4.1.7-coreos CoreOS 766.4.0 docker://1.7.1 v1.2.0-alpha.2.246+f93f77766dd24b-dirty v1.2.0-alpha.2.246+f93f77766dd24b-dirty}}}]}",
}
only found 0 external IPs on nodes, but found 6 nodes. Nodelist: &{{ } {/api/v1/nodes 4968} [{{ } {pull-e2e-0-minion-1dli /api/v1/nodes/pull-e2e-0-minion-1dli 473e964e-79d8-11e5-b1b8-42010af00002 4921 0 2015-10-23 22:49:11 +0000 UTC <nil> <nil> map[kubernetes.io/hostname:pull-e2e-0-minion-1dli] map[]} {10.245.0.0/24 pull-e2e-0-minion-1dli false} {map[memory:{7848468480.000 BinarySI} pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI}] [{Ready True 2015-10-23 23:20:31 +0000 UTC 2015-10-23 22:49:42 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 10.240.0.5} {InternalIP 10.240.0.5}] {{10250}} {06067045f01b156572deffb722c59e21 06067045-F01B-1565-72DE-FFB722C59E21 c1b93dc7-9f9c-4317-81d7-1d063eba1c00 4.1.7-coreos CoreOS 766.4.0 docker://1.7.1 v1.2.0-alpha.2.246+f93f77766dd24b-dirty v1.2.0-alpha.2.246+f93f77766dd24b-dirty}}} {{ } {pull-e2e-0-minion-djcb /api/v1/nodes/pull-e2e-0-minion-djcb 48872259-79d8-11e5-b1b8-42010af00002 4922 0 2015-10-23 22:49:14 +0000 UTC <nil> <nil> map[kubernetes.io/hostname:pull-e2e-0-minion-djcb] map[]} {10.245.1.0/24 pull-e2e-0-minion-djcb false} {map[cpu:{2.000 DecimalSI} memory:{7848468480.000 BinarySI} pods:{40.000 DecimalSI}] [{Ready True 2015-10-23 23:20:31 +0000 UTC 2015-10-23 22:49:44 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 10.240.0.6} {InternalIP 10.240.0.6}] {{10250}} {b1aa0c82a3d85173231543dc6bda5cc2 B1AA0C82-A3D8-5173-2315-43DC6BDA5CC2 3aa605f8-62d1-433f-a90d-ef2ac6c1745e 4.1.7-coreos CoreOS 766.4.0 docker://1.7.1 v1.2.0-alpha.2.246+f93f77766dd24b-dirty v1.2.0-alpha.2.246+f93f77766dd24b-dirty}}} {{ } {pull-e2e-0-minion-dp0i /api/v1/nodes/pull-e2e-0-minion-dp0i 4a304db6-79d8-11e5-b1b8-42010af00002 4943 0 2015-10-23 22:49:16 +0000 UTC <nil> <nil> map[kubernetes.io/hostname:pull-e2e-0-minion-dp0i] map[]} {10.245.4.0/24 pull-e2e-0-minion-dp0i false} {map[cpu:{2.000 DecimalSI} memory:{7848468480.000 BinarySI} pods:{40.000 DecimalSI}] [{Ready True 2015-10-23 23:20:38 +0000 UTC 2015-10-23 22:50:07 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 10.240.0.8} {InternalIP 10.240.0.8}] {{10250}} {804cf75d9249d4fc0a14982a39e001fe 804CF75D-9249-D4FC-0A14-982A39E001FE a506e0e8-5682-4acb-b54c-91020168ed4f 4.1.7-coreos CoreOS 766.4.0 docker://1.7.1 v1.2.0-alpha.2.246+f93f77766dd24b-dirty v1.2.0-alpha.2.246+f93f77766dd24b-dirty}}} {{ } {pull-e2e-0-minion-l2bc /api/v1/nodes/pull-e2e-0-minion-l2bc 48f64a03-79d8-11e5-b1b8-42010af00002 4938 0 2015-10-23 22:49:14 +0000 UTC <nil> <nil> map[kubernetes.io/hostname:pull-e2e-0-minion-l2bc] map[]} {10.245.2.0/24 pull-e2e-0-minion-l2bc false} {map[memory:{7848468480.000 BinarySI} pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI}] [{Ready True 2015-10-23 23:20:33 +0000 UTC 2015-10-23 22:49:55 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 10.240.0.4} {InternalIP 10.240.0.4}] {{10250}} {dfc1c1a2abfb382eacf83a4da9490f83 DFC1C1A2-ABFB-382E-ACF8-3A4DA9490F83 258d6dfd-ef7f-41e7-b15f-60faf4203bd1 4.1.7-coreos CoreOS 766.4.0 docker://1.7.1 v1.2.0-alpha.2.246+f93f77766dd24b-dirty v1.2.0-alpha.2.246+f93f77766dd24b-dirty}}} {{ } {pull-e2e-0-minion-n5ko /api/v1/nodes/pull-e2e-0-minion-n5ko 4725d39b-79d8-11e5-b1b8-42010af00002 4929 0 2015-10-23 22:49:11 +0000 UTC <nil> <nil> map[kubernetes.io/hostname:pull-e2e-0-minion-n5ko] map[]} {10.245.3.0/24 pull-e2e-0-minion-n5ko false} {map[cpu:{2.000 DecimalSI} memory:{7848468480.000 BinarySI} pods:{40.000 DecimalSI}] [{Ready True 2015-10-23 23:20:32 +0000 UTC 2015-10-23 22:49:52 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 10.240.0.3} {InternalIP 10.240.0.3}] {{10250}} {4445e34b51378526c1736108982220c7 4445E34B-5137-8526-C173-6108982220C7 4bfd0525-d31e-425c-8eab-0d8868fea7a7 4.1.7-coreos CoreOS 766.4.0 docker://1.7.1 v1.2.0-alpha.2.246+f93f77766dd24b-dirty v1.2.0-alpha.2.246+f93f77766dd24b-dirty}}} {{ } {pull-e2e-0-minion-zr43 /api/v1/nodes/pull-e2e-0-minion-zr43 46d4a2a4-79d8-11e5-b1b8-42010af00002 4948 0 2015-10-23 22:49:11 +0000 UTC <nil> <nil> map[kubernetes.io/hostname:pull-e2e-0-minion-zr43] map[]} {10.245.5.0/24 pull-e2e-0-minion-zr43 false} {map[pods:{40.000 DecimalSI} cpu:{2.000 DecimalSI} memory:{7848468480.000 BinarySI}] [{Ready True 2015-10-23 23:20:39 +0000 UTC 2015-10-23 22:49:41 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 10.240.0.7} {InternalIP 10.240.0.7}] {{10250}} {6d724240d0b36bd02e7ac95bb50c19a5 6D724240-D0B3-6BD0-2E7A-C95BB50C19A5 baa0ce0e-dcdc-428c-b65d-89e0a0e33cb3 4.1.7-coreos CoreOS 766.4.0 docker://1.7.1 v1.2.0-alpha.2.246+f93f77766dd24b-dirty v1.2.0-alpha.2.246+f93f77766dd24b-dirty}}}]}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:759
------------------------------
S
------------------------------
[BeforeEach] version v1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:20:45.779: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-edh7q
Oct 23 23:20:45.824: INFO: Service account default in ns e2e-tests-proxy-edh7q had 0 secrets, ignoring for 2s: <nil>
Oct 23 23:20:47.862: INFO: Service account default in ns e2e-tests-proxy-edh7q with secrets found. (2.083679239s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:20:47.862: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-edh7q
Oct 23 23:20:47.875: INFO: Service account default in ns e2e-tests-proxy-edh7q with secrets found. (12.587441ms)
[It] should proxy to cadvisor [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:60
Oct 23 23:20:47.886: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 6.874444ms)
Oct 23 23:20:47.889: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 3.650024ms)
Oct 23 23:20:47.892: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 3.029157ms)
Oct 23 23:20:47.895: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 3.029878ms)
Oct 23 23:20:47.898: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 2.992682ms)
Oct 23 23:20:47.902: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 3.391611ms)
Oct 23 23:20:47.905: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 3.056697ms)
Oct 23 23:20:47.977: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 72.472351ms)
Oct 23 23:20:48.194: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 216.670224ms)
Oct 23 23:20:48.385: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 190.389736ms)
Oct 23 23:20:48.579: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 194.196337ms)
Oct 23 23:20:48.829: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 250.222159ms)
Oct 23 23:20:48.979: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 149.889709ms)
Oct 23 23:20:49.219: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 240.375795ms)
Oct 23 23:20:49.384: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 165.032695ms)
Oct 23 23:20:49.578: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 193.057139ms)
Oct 23 23:20:49.778: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 199.991798ms)
Oct 23 23:20:49.978: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 199.922281ms)
Oct 23 23:20:50.179: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 201.218054ms)
Oct 23 23:20:50.381: INFO: /api/v1/proxy/nodes/pull-e2e-0-minion-1dli:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 201.780128ms)
[AfterEach] version v1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:20:50.381: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:20:50.579: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:20:50.579: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:20:50.579: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:20:50.579: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:20:50.579: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:20:50.579: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:20:50.579: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:20:50.579: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:20:50.579: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:20:50.579: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:20:50.579: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:20:50.579: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-proxy-edh7q" for this suite.
• [SLOW TEST:5.404 seconds]
Proxy
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:41
version v1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:40
should proxy to cadvisor [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:60
------------------------------
[BeforeEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:89
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
Oct 23 23:20:37.311: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-9vy31
Oct 23 23:20:37.351: INFO: Service account default in ns e2e-tests-kubectl-9vy31 had 0 secrets, ignoring for 2s: <nil>
Oct 23 23:20:39.355: INFO: Service account default in ns e2e-tests-kubectl-9vy31 with secrets found. (2.044316482s)
[BeforeEach] Update Demo
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:104
[It] should create and stop a replication controller [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:111
STEP: creating a replication controller
Oct 23 23:20:39.355: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config create -f /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/docs/user-guide/update-demo/nautilus-rc.yaml --namespace=e2e-tests-kubectl-9vy31'
Oct 23 23:20:39.577: INFO: replicationcontroller "update-demo-nautilus" created
STEP: waiting for all containers in name=update-demo pods to come up.
Oct 23 23:20:39.578: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-9vy31'
Oct 23 23:20:39.838: INFO: update-demo-nautilus-eio23 update-demo-nautilus-oumga
Oct 23 23:20:39.838: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods update-demo-nautilus-eio23 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-9vy31'
Oct 23 23:20:40.090: INFO:
Oct 23 23:20:40.090: INFO: update-demo-nautilus-eio23 is created but not running
Oct 23 23:20:45.090: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-9vy31'
Oct 23 23:20:45.286: INFO: update-demo-nautilus-eio23 update-demo-nautilus-oumga
Oct 23 23:20:45.286: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods update-demo-nautilus-eio23 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-9vy31'
Oct 23 23:20:45.519: INFO: true
Oct 23 23:20:45.519: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods update-demo-nautilus-eio23 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-9vy31'
Oct 23 23:20:45.708: INFO: gcr.io/google_containers/update-demo:nautilus
Oct 23 23:20:45.708: INFO: validating pod update-demo-nautilus-eio23
Oct 23 23:20:45.713: INFO: got data: {
"image": "nautilus.jpg"
}
Oct 23 23:20:45.713: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Oct 23 23:20:45.713: INFO: update-demo-nautilus-eio23 is verified up and running
Oct 23 23:20:45.713: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods update-demo-nautilus-oumga -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-9vy31'
Oct 23 23:20:45.949: INFO: true
Oct 23 23:20:45.949: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods update-demo-nautilus-oumga -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-9vy31'
Oct 23 23:20:46.141: INFO: gcr.io/google_containers/update-demo:nautilus
Oct 23 23:20:46.141: INFO: validating pod update-demo-nautilus-oumga
Oct 23 23:20:46.145: INFO: got data: {
"image": "nautilus.jpg"
}
Oct 23 23:20:46.145: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Oct 23 23:20:46.145: INFO: update-demo-nautilus-oumga is verified up and running
STEP: using delete to clean up resources
Oct 23 23:20:46.145: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config stop --grace-period=0 -f /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/docs/user-guide/update-demo/nautilus-rc.yaml --namespace=e2e-tests-kubectl-9vy31'
Oct 23 23:20:48.429: INFO: replicationcontroller "update-demo-nautilus" deleted
Oct 23 23:20:48.429: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-9vy31'
Oct 23 23:20:48.624: INFO:
Oct 23 23:20:48.624: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-9vy31 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Oct 23 23:20:48.857: INFO:
[AfterEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:96
STEP: Destroying namespace for this suite e2e-tests-kubectl-9vy31
• [SLOW TEST:16.568 seconds]
Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:925
Update Demo
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:136
should create and stop a replication controller [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:111
------------------------------
[BeforeEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:20:53.881: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-orwg3
Oct 23 23:20:53.922: INFO: Service account default in ns e2e-tests-emptydir-orwg3 had 0 secrets, ignoring for 2s: <nil>
Oct 23 23:20:55.925: INFO: Service account default in ns e2e-tests-emptydir-orwg3 with secrets found. (2.04340632s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:20:55.925: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-orwg3
Oct 23 23:20:55.951: INFO: Service account default in ns e2e-tests-emptydir-orwg3 with secrets found. (26.12448ms)
[It] should support (root,0777,default) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:82
STEP: Creating a pod to test emptydir 0777 on node default medium
Oct 23 23:20:55.957: INFO: Waiting up to 5m0s for pod pod-b618d606-79dc-11e5-9772-42010af00002 status to be success or failure
Oct 23 23:20:55.995: INFO: No Status.Info for container 'test-container' in pod 'pod-b618d606-79dc-11e5-9772-42010af00002' yet
Oct 23 23:20:55.995: INFO: Waiting for pod pod-b618d606-79dc-11e5-9772-42010af00002 in namespace 'e2e-tests-emptydir-orwg3' status to be 'success or failure'(found phase: "Pending", readiness: false) (38.560474ms elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node pull-e2e-0-minion-l2bc pod pod-b618d606-79dc-11e5-9772-42010af00002 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": 61267
content of file "/test-volume/test-file": mount-tester new file
perms of file "/test-volume/test-file": -rwxrwxrwx
[AfterEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:20:58.076: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:20:58.112: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:20:58.112: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:20:58.112: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:20:58.112: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:20:58.112: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:20:58.112: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:20:58.112: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:20:58.112: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:20:58.112: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:20:58.112: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:20:58.112: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:20:58.112: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-emptydir-orwg3" for this suite.
• [SLOW TEST:9.256 seconds]
EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:95
should support (root,0777,default) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:82
------------------------------
[BeforeEach] Downward API volume
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:20:51.190: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-downward-api-7t01i
Oct 23 23:20:51.232: INFO: Service account default in ns e2e-tests-downward-api-7t01i had 0 secrets, ignoring for 2s: <nil>
Oct 23 23:20:53.235: INFO: Service account default in ns e2e-tests-downward-api-7t01i with secrets found. (2.045086287s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:20:53.235: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-downward-api-7t01i
Oct 23 23:20:53.240: INFO: Service account default in ns e2e-tests-downward-api-7t01i with secrets found. (5.277205ms)
[It] should provide labels and annotations files [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:94
STEP: Creating a pod to test downward API volume plugin
Oct 23 23:20:53.246: INFO: Waiting up to 5m0s for pod metadata-volume-b47b3795-79dc-11e5-ba1c-42010af00002 status to be success or failure
Oct 23 23:20:53.286: INFO: No Status.Info for container 'client-container' in pod 'metadata-volume-b47b3795-79dc-11e5-ba1c-42010af00002' yet
Oct 23 23:20:53.286: INFO: Waiting for pod metadata-volume-b47b3795-79dc-11e5-ba1c-42010af00002 in namespace 'e2e-tests-downward-api-7t01i' status to be 'success or failure'(found phase: "Pending", readiness: false) (39.167782ms elapsed)
Oct 23 23:20:55.289: INFO: Nil State.Terminated for container 'client-container' in pod 'metadata-volume-b47b3795-79dc-11e5-ba1c-42010af00002' in namespace 'e2e-tests-downward-api-7t01i' so far
Oct 23 23:20:55.289: INFO: Waiting for pod metadata-volume-b47b3795-79dc-11e5-ba1c-42010af00002 in namespace 'e2e-tests-downward-api-7t01i' status to be 'success or failure'(found phase: "Running", readiness: true) (2.042681952s elapsed)
Oct 23 23:20:57.293: INFO: Nil State.Terminated for container 'client-container' in pod 'metadata-volume-b47b3795-79dc-11e5-ba1c-42010af00002' in namespace 'e2e-tests-downward-api-7t01i' so far
Oct 23 23:20:57.293: INFO: Waiting for pod metadata-volume-b47b3795-79dc-11e5-ba1c-42010af00002 in namespace 'e2e-tests-downward-api-7t01i' status to be 'success or failure'(found phase: "Running", readiness: true) (4.046669249s elapsed)
Oct 23 23:20:59.306: INFO: Nil State.Terminated for container 'client-container' in pod 'metadata-volume-b47b3795-79dc-11e5-ba1c-42010af00002' in namespace 'e2e-tests-downward-api-7t01i' so far
Oct 23 23:20:59.306: INFO: Waiting for pod metadata-volume-b47b3795-79dc-11e5-ba1c-42010af00002 in namespace 'e2e-tests-downward-api-7t01i' status to be 'success or failure'(found phase: "Running", readiness: true) (6.059889219s elapsed)
Oct 23 23:21:01.311: INFO: Nil State.Terminated for container 'client-container' in pod 'metadata-volume-b47b3795-79dc-11e5-ba1c-42010af00002' in namespace 'e2e-tests-downward-api-7t01i' so far
Oct 23 23:21:01.311: INFO: Waiting for pod metadata-volume-b47b3795-79dc-11e5-ba1c-42010af00002 in namespace 'e2e-tests-downward-api-7t01i' status to be 'success or failure'(found phase: "Running", readiness: true) (8.064391405s elapsed)
Oct 23 23:21:03.314: INFO: Nil State.Terminated for container 'client-container' in pod 'metadata-volume-b47b3795-79dc-11e5-ba1c-42010af00002' in namespace 'e2e-tests-downward-api-7t01i' so far
Oct 23 23:21:03.314: INFO: Waiting for pod metadata-volume-b47b3795-79dc-11e5-ba1c-42010af00002 in namespace 'e2e-tests-downward-api-7t01i' status to be 'success or failure'(found phase: "Running", readiness: true) (10.068000901s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node pull-e2e-0-minion-dp0i pod metadata-volume-b47b3795-79dc-11e5-ba1c-42010af00002 container client-container: <nil>
STEP: Successfully fetched pod logs:
cluster="rack10"
builder="john-doe"
kubernetes.io/config.seen="2015-10-23T23:20:53.280319454Z"
kubernetes.io/config.source="api"metadata-volume-b47b3795-79dc-11e5-ba1c-42010af00002
[AfterEach] Downward API volume
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:21:05.492: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:21:05.597: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:21:05.597: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:21:05.597: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:21:05.597: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:21:05.597: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:21:05.597: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:21:05.597: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:21:05.597: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:21:05.597: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:21:05.597: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:21:05.597: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:21:05.597: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-downward-api-7t01i" for this suite.
• [SLOW TEST:19.444 seconds]
Downward API volume
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:95
should provide labels and annotations files [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:94
------------------------------
SSS
------------------------------
[BeforeEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:89
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
Oct 23 23:21:10.633: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-q530j
Oct 23 23:21:10.675: INFO: Service account default in ns e2e-tests-kubectl-q530j had 0 secrets, ignoring for 2s: <nil>
Oct 23 23:21:12.709: INFO: Service account default in ns e2e-tests-kubectl-q530j with secrets found. (2.076371503s)
[It] should support --unix-socket=/path [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:923
STEP: Starting the proxy
Oct 23 23:21:12.710: INFO: Asynchronously running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix435444171/test'
STEP: retrieving proxy /api/ output
[AfterEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:96
STEP: Destroying namespace for this suite e2e-tests-kubectl-q530j
• [SLOW TEST:7.284 seconds]
Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:925
Proxy server
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:924
should support --unix-socket=/path [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:923
------------------------------
S
------------------------------
[BeforeEach] Networking
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:21:03.138: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nettest-6l181
Oct 23 23:21:03.237: INFO: Service account default in ns e2e-tests-nettest-6l181 had 0 secrets, ignoring for 2s: <nil>
Oct 23 23:21:05.243: INFO: Service account default in ns e2e-tests-nettest-6l181 with secrets found. (2.10580444s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:21:05.244: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nettest-6l181
Oct 23 23:21:05.257: INFO: Service account default in ns e2e-tests-nettest-6l181 with secrets found. (13.695714ms)
[BeforeEach] Networking
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:52
STEP: Executing a successful http request from the external internet
[It] should provide Internet connection for containers [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:82
STEP: Running container which tries to wget google.com
STEP: Verify that the pod succeeds
Oct 23 23:21:05.309: INFO: Waiting up to 5m0s for pod wget-test status to be success or failure
Oct 23 23:21:05.433: INFO: No Status.Info for container 'wget-test-container' in pod 'wget-test' yet
Oct 23 23:21:05.433: INFO: Waiting for pod wget-test in namespace 'e2e-tests-nettest-6l181' status to be 'success or failure'(found phase: "Pending", readiness: false) (124.917377ms elapsed)
Oct 23 23:21:07.437: INFO: Nil State.Terminated for container 'wget-test-container' in pod 'wget-test' in namespace 'e2e-tests-nettest-6l181' so far
Oct 23 23:21:07.437: INFO: Waiting for pod wget-test in namespace 'e2e-tests-nettest-6l181' status to be 'success or failure'(found phase: "Running", readiness: true) (2.128630026s elapsed)
Oct 23 23:21:09.483: INFO: Nil State.Terminated for container 'wget-test-container' in pod 'wget-test' in namespace 'e2e-tests-nettest-6l181' so far
Oct 23 23:21:09.483: INFO: Waiting for pod wget-test in namespace 'e2e-tests-nettest-6l181' status to be 'success or failure'(found phase: "Running", readiness: true) (4.174436862s elapsed)
Oct 23 23:21:11.487: INFO: Nil State.Terminated for container 'wget-test-container' in pod 'wget-test' in namespace 'e2e-tests-nettest-6l181' so far
Oct 23 23:21:11.487: INFO: Waiting for pod wget-test in namespace 'e2e-tests-nettest-6l181' status to be 'success or failure'(found phase: "Running", readiness: true) (6.178411044s elapsed)
Oct 23 23:21:13.491: INFO: Nil State.Terminated for container 'wget-test-container' in pod 'wget-test' in namespace 'e2e-tests-nettest-6l181' so far
Oct 23 23:21:13.491: INFO: Waiting for pod wget-test in namespace 'e2e-tests-nettest-6l181' status to be 'success or failure'(found phase: "Running", readiness: true) (8.182236045s elapsed)
Oct 23 23:21:15.576: INFO: Nil State.Terminated for container 'wget-test-container' in pod 'wget-test' in namespace 'e2e-tests-nettest-6l181' so far
Oct 23 23:21:15.576: INFO: Waiting for pod wget-test in namespace 'e2e-tests-nettest-6l181' status to be 'success or failure'(found phase: "Running", readiness: true) (10.267155688s elapsed)
STEP: Saw pod success
[AfterEach] Networking
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:21:17.596: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:21:17.635: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:21:17.635: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:21:17.635: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:21:17.635: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:21:17.635: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:21:17.635: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:21:17.635: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:21:17.635: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:21:17.635: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:21:17.635: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:21:17.635: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:21:17.635: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-nettest-6l181" for this suite.
• [SLOW TEST:19.531 seconds]
Networking
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:249
should provide Internet connection for containers [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:82
------------------------------
S
------------------------------
[BeforeEach] Docker Containers
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:41
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
Oct 23 23:21:22.672: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-containers-dxj3f
Oct 23 23:21:22.712: INFO: Service account default in ns e2e-tests-containers-dxj3f had 0 secrets, ignoring for 2s: <nil>
Oct 23 23:21:24.719: INFO: Service account default in ns e2e-tests-containers-dxj3f with secrets found. (2.04776048s)
[It] should use the image defaults if command and args are blank [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:53
STEP: Creating a pod to test use defaults
Oct 23 23:21:24.728: INFO: Waiting up to 5m0s for pod client-containers-c73e8c7f-79dc-11e5-9772-42010af00002 status to be success or failure
Oct 23 23:21:24.776: INFO: No Status.Info for container 'test-container' in pod 'client-containers-c73e8c7f-79dc-11e5-9772-42010af00002' yet
Oct 23 23:21:24.776: INFO: Waiting for pod client-containers-c73e8c7f-79dc-11e5-9772-42010af00002 in namespace 'e2e-tests-containers-dxj3f' status to be 'success or failure'(found phase: "Pending", readiness: false) (48.030026ms elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node pull-e2e-0-minion-n5ko pod client-containers-c73e8c7f-79dc-11e5-9772-42010af00002 container test-container: <nil>
STEP: Successfully fetched pod logs:[/ep default arguments]
[AfterEach] Docker Containers
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:47
• [SLOW TEST:9.190 seconds]
Docker Containers
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:84
should use the image defaults if command and args are blank [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:53
------------------------------
[BeforeEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:21:31.862: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-5ux7r
Oct 23 23:21:31.901: INFO: Service account default in ns e2e-tests-emptydir-5ux7r had 0 secrets, ignoring for 2s: <nil>
Oct 23 23:21:33.908: INFO: Service account default in ns e2e-tests-emptydir-5ux7r with secrets found. (2.045317267s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:21:33.908: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-5ux7r
Oct 23 23:21:33.911: INFO: Service account default in ns e2e-tests-emptydir-5ux7r with secrets found. (3.497689ms)
[It] should support (root,0666,default) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:78
STEP: Creating a pod to test emptydir 0666 on node default medium
Oct 23 23:21:33.924: INFO: Waiting up to 5m0s for pod pod-ccb91cc0-79dc-11e5-9772-42010af00002 status to be success or failure
Oct 23 23:21:33.971: INFO: No Status.Info for container 'test-container' in pod 'pod-ccb91cc0-79dc-11e5-9772-42010af00002' yet
Oct 23 23:21:33.971: INFO: Waiting for pod pod-ccb91cc0-79dc-11e5-9772-42010af00002 in namespace 'e2e-tests-emptydir-5ux7r' status to be 'success or failure'(found phase: "Pending", readiness: false) (47.253826ms elapsed)
Oct 23 23:21:35.977: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-ccb91cc0-79dc-11e5-9772-42010af00002' in namespace 'e2e-tests-emptydir-5ux7r' so far
Oct 23 23:21:35.977: INFO: Waiting for pod pod-ccb91cc0-79dc-11e5-9772-42010af00002 in namespace 'e2e-tests-emptydir-5ux7r' status to be 'success or failure'(found phase: "Running", readiness: true) (2.053468238s elapsed)
Oct 23 23:21:37.981: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-ccb91cc0-79dc-11e5-9772-42010af00002' in namespace 'e2e-tests-emptydir-5ux7r' so far
Oct 23 23:21:37.981: INFO: Waiting for pod pod-ccb91cc0-79dc-11e5-9772-42010af00002 in namespace 'e2e-tests-emptydir-5ux7r' status to be 'success or failure'(found phase: "Running", readiness: true) (4.057346095s elapsed)
Oct 23 23:21:39.985: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-ccb91cc0-79dc-11e5-9772-42010af00002' in namespace 'e2e-tests-emptydir-5ux7r' so far
Oct 23 23:21:39.985: INFO: Waiting for pod pod-ccb91cc0-79dc-11e5-9772-42010af00002 in namespace 'e2e-tests-emptydir-5ux7r' status to be 'success or failure'(found phase: "Running", readiness: true) (6.061394573s elapsed)
Oct 23 23:21:41.989: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-ccb91cc0-79dc-11e5-9772-42010af00002' in namespace 'e2e-tests-emptydir-5ux7r' so far
Oct 23 23:21:41.989: INFO: Waiting for pod pod-ccb91cc0-79dc-11e5-9772-42010af00002 in namespace 'e2e-tests-emptydir-5ux7r' status to be 'success or failure'(found phase: "Running", readiness: true) (8.065260061s elapsed)
Oct 23 23:21:43.997: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-ccb91cc0-79dc-11e5-9772-42010af00002' in namespace 'e2e-tests-emptydir-5ux7r' so far
Oct 23 23:21:43.997: INFO: Waiting for pod pod-ccb91cc0-79dc-11e5-9772-42010af00002 in namespace 'e2e-tests-emptydir-5ux7r' status to be 'success or failure'(found phase: "Running", readiness: true) (10.072942686s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node pull-e2e-0-minion-n5ko pod pod-ccb91cc0-79dc-11e5-9772-42010af00002 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": 61267
content of file "/test-volume/test-file": mount-tester new file
perms of file "/test-volume/test-file": -rw-rw-rw-
[AfterEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:21:46.024: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:21:46.127: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:21:46.127: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:21:46.127: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:21:46.127: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:21:46.127: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:21:46.127: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:21:46.127: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:21:46.127: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:21:46.127: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:21:46.127: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:21:46.127: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:21:46.127: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-emptydir-5ux7r" for this suite.
• [SLOW TEST:19.355 seconds]
EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:95
should support (root,0666,default) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:78
------------------------------
S
------------------------------
[BeforeEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:89
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
Oct 23 23:21:51.220: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-o6xjw
Oct 23 23:21:51.266: INFO: Service account default in ns e2e-tests-kubectl-o6xjw had 0 secrets, ignoring for 2s: <nil>
Oct 23 23:21:53.269: INFO: Service account default in ns e2e-tests-kubectl-o6xjw with secrets found. (2.049414975s)
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:573
Oct 23 23:21:53.269: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config create -f /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/examples/guestbook-go/redis-master-controller.json --namespace=e2e-tests-kubectl-o6xjw'
Oct 23 23:21:53.501: INFO: replicationcontroller "redis-master" created
Oct 23 23:21:53.501: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config create -f /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/examples/guestbook-go/redis-master-service.json --namespace=e2e-tests-kubectl-o6xjw'
Oct 23 23:21:53.800: INFO: service "redis-master" created
Oct 23 23:21:53.810: INFO: Waiting up to 5m0s for pod redis-master-ok0ev status to be running
Oct 23 23:21:53.820: INFO: Waiting for pod redis-master-ok0ev in namespace 'e2e-tests-kubectl-o6xjw' status to be 'running'(found phase: "Pending", readiness: false) (9.462083ms elapsed)
Oct 23 23:21:55.825: INFO: Found pod 'redis-master-ok0ev' on node 'pull-e2e-0-minion-dp0i'
Oct 23 23:21:55.826: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config describe pod redis-master-ok0ev --namespace=e2e-tests-kubectl-o6xjw'
Oct 23 23:21:56.021: INFO: Name: redis-master-ok0ev
Namespace: e2e-tests-kubectl-o6xjw
Image(s): redis
Node: pull-e2e-0-minion-dp0i/10.240.0.8
Start Time: Fri, 23 Oct 2015 23:21:53 +0000
Labels: app=redis,role=master
Status: Running
Reason:
Message:
IP: 10.245.4.42
Replication Controllers: redis-master (1/1 replicas created)
Containers:
redis-master:
Container ID: docker://7ff1a99ac1386edd8e797fb4f07bc7b8b6c510a78de88a16348c1c3255bcd663
Image: redis
Image ID: docker://c08dd1f8fad9ff2622a1b5d74650a8e494ee380b74030e21584fea05079c2818
QoS Tier:
cpu: BestEffort
memory: BestEffort
State: Running
Started: Fri, 23 Oct 2015 23:21:53 +0000
Ready: True
Restart Count: 0
Environment Variables:
Conditions:
Type Status
Ready True
Volumes:
default-token-9x2jm:
Type: Secret (a secret that should populate this volume)
SecretName: default-token-9x2jm
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
───────── ──────── ───── ──── ───────────── ────── ───────
3s 3s 1 {scheduler } Scheduled Successfully assigned redis-master-ok0ev to pull-e2e-0-minion-dp0i
3s 3s 1 {kubelet pull-e2e-0-minion-dp0i} implicitly required container POD Pulled Container image "beta.gcr.io/google_containers/pause:2.0" already present on machine
3s 3s 1 {kubelet pull-e2e-0-minion-dp0i} implicitly required container POD Created Created with docker id 5b02518a2da7
3s 3s 1 {kubelet pull-e2e-0-minion-dp0i} implicitly required container POD Started Started with docker id 5b02518a2da7
3s 3s 1 {kubelet pull-e2e-0-minion-dp0i} spec.containers{redis-master} Pulled Container image "redis" already present on machine
3s 3s 1 {kubelet pull-e2e-0-minion-dp0i} spec.containers{redis-master} Created Created with docker id 7ff1a99ac138
3s 3s 1 {kubelet pull-e2e-0-minion-dp0i} spec.containers{redis-master} Started Started with docker id 7ff1a99ac138
Oct 23 23:21:56.021: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-o6xjw'
Oct 23 23:21:56.233: INFO: Name: redis-master
Namespace: e2e-tests-kubectl-o6xjw
Image(s): redis
Selector: app=redis,role=master
Labels: app=redis,role=master
Replicas: 1 current / 1 desired
Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
───────── ──────── ───── ──── ───────────── ────── ───────
3s 3s 1 {replication-controller } SuccessfulCreate Created pod: redis-master-ok0ev
Oct 23 23:21:56.233: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-o6xjw'
Oct 23 23:21:56.498: INFO: Name: redis-master
Namespace: e2e-tests-kubectl-o6xjw
Labels: app=redis,role=master
Selector: app=redis,role=master
Type: ClusterIP
IP: 10.0.17.213
Port: <unnamed> 6379/TCP
Endpoints: 10.245.4.42:6379
Session Affinity: None
No events.
Oct 23 23:21:56.513: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config describe node pull-e2e-0-minion-1dli'
Oct 23 23:21:56.745: INFO: Name: pull-e2e-0-minion-1dli
Labels: kubernetes.io/hostname=pull-e2e-0-minion-1dli
CreationTimestamp: Fri, 23 Oct 2015 22:49:11 +0000
Phase:
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
──── ────── ───────────────── ────────────────── ────── ───────
Ready True Fri, 23 Oct 2015 23:21:52 +0000 Fri, 23 Oct 2015 22:49:42 +0000 KubeletReady kubelet is posting ready status
Addresses: 10.240.0.5,10.240.0.5
Capacity:
memory: 7664520Ki
pods: 40
cpu: 2
System Info:
Machine ID: 06067045f01b156572deffb722c59e21
System UUID: 06067045-F01B-1565-72DE-FFB722C59E21
Boot ID: c1b93dc7-9f9c-4317-81d7-1d063eba1c00
Kernel Version: 4.1.7-coreos
OS Image: CoreOS 766.4.0
Container Runtime Version: docker://1.7.1
Kubelet Version: v1.2.0-alpha.2.246+f93f77766dd24b-dirty
Kube-Proxy Version: v1.2.0-alpha.2.246+f93f77766dd24b-dirty
PodCIDR: 10.245.0.0/24
ExternalID: pull-e2e-0-minion-1dli
Non-terminated Pods: (2 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
───────── ──── ──────────── ────────── ─────────────── ─────────────
kube-system elasticsearch-logging-v1-t59p5 100m (5%) 100m (5%) 0 (0%) 0 (0%)
kube-system kube-ui-v3-laljg 100m (5%) 100m (5%) 50Mi (0%) 50Mi (0%)
Allocated resources:
(Total limits may be over 100%, i.e., overcommitted. More info: http://releases.k8s.io/HEAD/docs/user-guide/compute-resources.md)
CPU Requests CPU Limits Memory Requests Memory Limits
──────────── ────────── ─────────────── ─────────────
200m (10%) 200m (10%) 50Mi (0%) 50Mi (0%)
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
───────── ──────── ───── ──── ───────────── ────── ───────
32m 32m 5 {kubelet pull-e2e-0-minion-1dli} NodeNotReady Node pull-e2e-0-minion-1dli status is now: NodeNotReady
32m 32m 1 {kube-proxy pull-e2e-0-minion-1dli} Starting Starting kube-proxy.
32m 32m 1 {controllermanager } RegisteredNode Node pull-e2e-0-minion-1dli event: Registered Node pull-e2e-0-minion-1dli in NodeController
32m 32m 1 {kubelet pull-e2e-0-minion-1dli} Starting Starting kubelet.
32m 32m 1 {kubelet pull-e2e-0-minion-1dli} NodeReady Node pull-e2e-0-minion-1dli status is now: NodeReady
Oct 23 23:21:56.745: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config describe namespace e2e-tests-kubectl-o6xjw'
Oct 23 23:21:56.989: INFO: Name: e2e-tests-kubectl-o6xjw
Labels: <none>
Status: Active
No resource quota.
No resource limits.
[AfterEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:96
STEP: Destroying namespace for this suite e2e-tests-kubectl-o6xjw
• [SLOW TEST:10.790 seconds]
Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:925
Kubectl describe
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:574
should check if kubectl describe prints relevant information for rc and pods [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:573
------------------------------
SSSSS
------------------------------
[BeforeEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:89
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
Oct 23 23:22:02.016: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-rlejn
Oct 23 23:22:02.069: INFO: Service account default in ns e2e-tests-kubectl-rlejn had 0 secrets, ignoring for 2s: <nil>
Oct 23 23:22:04.129: INFO: Service account default in ns e2e-tests-kubectl-rlejn with secrets found. (2.113447375s)
[It] should check if v1 is in available api versions [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:447
STEP: validating api versions
Oct 23 23:22:04.129: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config api-versions'
Oct 23 23:22:04.319: INFO: Available Server Api Versions: v1
[AfterEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:96
STEP: Destroying namespace for this suite e2e-tests-kubectl-rlejn
• [SLOW TEST:7.326 seconds]
Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:925
Kubectl api-versions
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:448
should check if v1 is in available api versions [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:447
------------------------------
S
------------------------------
[BeforeEach] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:22:09.344: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-job-qllz8
Oct 23 23:22:09.396: INFO: Service account default in ns e2e-tests-job-qllz8 had 0 secrets, ignoring for 2s: <nil>
Oct 23 23:22:11.400: INFO: Service account default in ns e2e-tests-job-qllz8 with secrets found. (2.055653918s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:22:11.400: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-job-qllz8
Oct 23 23:22:11.402: INFO: Service account default in ns e2e-tests-job-qllz8 with secrets found. (2.448814ms)
[It] should scale a job up
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:132
STEP: Creating a job
STEP: Ensuring active pods == startParallelism
STEP: scale job up
STEP: Ensuring active pods == endParallelism
[AfterEach] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:22:25.515: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:22:25.519: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:22:25.519: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:22:25.519: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:22:25.519: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:22:25.519: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:22:25.519: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:22:25.519: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:22:25.519: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:22:25.519: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:22:25.519: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:22:25.519: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:22:25.519: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-job-qllz8" for this suite.
• [SLOW TEST:21.199 seconds]
Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:183
should scale a job up
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:132
------------------------------
[BeforeEach] Variable Expansion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:22:30.546: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-var-expansion-km4f2
Oct 23 23:22:30.616: INFO: Service account default in ns e2e-tests-var-expansion-km4f2 had 0 secrets, ignoring for 2s: <nil>
Oct 23 23:22:32.619: INFO: Service account default in ns e2e-tests-var-expansion-km4f2 with secrets found. (2.072769259s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:22:32.619: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-var-expansion-km4f2
Oct 23 23:22:32.622: INFO: Service account default in ns e2e-tests-var-expansion-km4f2 with secrets found. (2.379428ms)
[It] should allow composing env vars into new env vars [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/expansion.go:67
STEP: Creating a pod to test env composition
Oct 23 23:22:32.627: INFO: Waiting up to 5m0s for pod var-expansion-efb79854-79dc-11e5-9772-42010af00002 status to be success or failure
Oct 23 23:22:32.668: INFO: No Status.Info for container 'dapi-container' in pod 'var-expansion-efb79854-79dc-11e5-9772-42010af00002' yet
Oct 23 23:22:32.668: INFO: Waiting for pod var-expansion-efb79854-79dc-11e5-9772-42010af00002 in namespace 'e2e-tests-var-expansion-km4f2' status to be 'success or failure'(found phase: "Pending", readiness: false) (40.690385ms elapsed)
Oct 23 23:22:34.672: INFO: Nil State.Terminated for container 'dapi-container' in pod 'var-expansion-efb79854-79dc-11e5-9772-42010af00002' in namespace 'e2e-tests-var-expansion-km4f2' so far
Oct 23 23:22:34.672: INFO: Waiting for pod var-expansion-efb79854-79dc-11e5-9772-42010af00002 in namespace 'e2e-tests-var-expansion-km4f2' status to be 'success or failure'(found phase: "Running", readiness: true) (2.044454995s elapsed)
Oct 23 23:22:36.676: INFO: Nil State.Terminated for container 'dapi-container' in pod 'var-expansion-efb79854-79dc-11e5-9772-42010af00002' in namespace 'e2e-tests-var-expansion-km4f2' so far
Oct 23 23:22:36.676: INFO: Waiting for pod var-expansion-efb79854-79dc-11e5-9772-42010af00002 in namespace 'e2e-tests-var-expansion-km4f2' status to be 'success or failure'(found phase: "Running", readiness: true) (4.048406614s elapsed)
Oct 23 23:22:38.691: INFO: Nil State.Terminated for container 'dapi-container' in pod 'var-expansion-efb79854-79dc-11e5-9772-42010af00002' in namespace 'e2e-tests-var-expansion-km4f2' so far
Oct 23 23:22:38.691: INFO: Waiting for pod var-expansion-efb79854-79dc-11e5-9772-42010af00002 in namespace 'e2e-tests-var-expansion-km4f2' status to be 'success or failure'(found phase: "Running", readiness: true) (6.063966513s elapsed)
Oct 23 23:22:40.695: INFO: Nil State.Terminated for container 'dapi-container' in pod 'var-expansion-efb79854-79dc-11e5-9772-42010af00002' in namespace 'e2e-tests-var-expansion-km4f2' so far
Oct 23 23:22:40.695: INFO: Waiting for pod var-expansion-efb79854-79dc-11e5-9772-42010af00002 in namespace 'e2e-tests-var-expansion-km4f2' status to be 'success or failure'(found phase: "Running", readiness: true) (8.067961777s elapsed)
Oct 23 23:22:42.699: INFO: Nil State.Terminated for container 'dapi-container' in pod 'var-expansion-efb79854-79dc-11e5-9772-42010af00002' in namespace 'e2e-tests-var-expansion-km4f2' so far
Oct 23 23:22:42.699: INFO: Waiting for pod var-expansion-efb79854-79dc-11e5-9772-42010af00002 in namespace 'e2e-tests-var-expansion-km4f2' status to be 'success or failure'(found phase: "Running", readiness: true) (10.071870589s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node pull-e2e-0-minion-l2bc pod var-expansion-efb79854-79dc-11e5-9772-42010af00002 container dapi-container: <nil>
STEP: Successfully fetched pod logs:KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.0.0.1:443
FOOBAR=foo-value;;bar-value
HOSTNAME=var-expansion-efb79854-79dc-11e5-9772-42010af00002
SHLVL=1
HOME=/root
KUBERNETES_PORT_443_TCP_ADDR=10.0.0.1
BAR=bar-value
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
FOO=foo-value
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.0.0.1:443
PWD=/
KUBERNETES_SERVICE_HOST=10.0.0.1
[AfterEach] Variable Expansion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:22:44.727: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:22:44.775: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:22:44.775: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:22:44.775: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:22:44.775: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:22:44.775: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:22:44.775: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:22:44.775: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:22:44.775: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:22:44.775: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:22:44.775: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:22:44.775: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:22:44.775: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-var-expansion-km4f2" for this suite.
• [SLOW TEST:19.254 seconds]
Variable Expansion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/expansion.go:129
should allow composing env vars into new env vars [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/expansion.go:67
------------------------------
[BeforeEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:89
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
Oct 23 23:22:49.800: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-9g1n5
Oct 23 23:22:49.840: INFO: Service account default in ns e2e-tests-kubectl-9g1n5 had 0 secrets, ignoring for 2s: <nil>
Oct 23 23:22:51.843: INFO: Service account default in ns e2e-tests-kubectl-9g1n5 with secrets found. (2.043190916s)
[BeforeEach] Kubectl run pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:829
[It] should create a pod from an image when restart is OnFailure [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:852
STEP: running the image nginx
Oct 23 23:22:51.843: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config run e2e-test-nginx-pod --restart=OnFailure --image=nginx --namespace=e2e-tests-kubectl-9g1n5'
Oct 23 23:22:52.030: INFO: pod "e2e-test-nginx-pod" created
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] Kubectl run pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:833
Oct 23 23:22:52.037: INFO: Running '/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/platforms/linux/amd64/kubectl --server=https://104.196.0.155 --kubeconfig=/home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config stop pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-9g1n5'
Oct 23 23:22:52.251: INFO: pod "e2e-test-nginx-pod" deleted
[AfterEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:96
STEP: Destroying namespace for this suite e2e-tests-kubectl-9g1n5
• [SLOW TEST:107.518 seconds]
Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:925
Kubectl run pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:873
should create a pod from an image when restart is OnFailure [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:852
------------------------------
S
------------------------------
[BeforeEach] hostPath
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/host_path.go:53
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:24:37.319: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-hostpath-hchcb
Oct 23 23:24:37.357: INFO: Get service account default in ns e2e-tests-hostpath-hchcb failed, ignoring for 2s: serviceaccounts "default" not found
Oct 23 23:24:39.366: INFO: Service account default in ns e2e-tests-hostpath-hchcb with secrets found. (2.046947496s)
[It] should give a volume the correct mode [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/host_path.go:77
STEP: Creating a pod to test hostPath mode
Oct 23 23:24:39.389: INFO: Waiting up to 5m0s for pod pod-host-path-test status to be success or failure
Oct 23 23:24:39.445: INFO: No Status.Info for container 'test-container-1' in pod 'pod-host-path-test' yet
Oct 23 23:24:39.445: INFO: Waiting for pod pod-host-path-test in namespace 'e2e-tests-hostpath-hchcb' status to be 'success or failure'(found phase: "Pending", readiness: false) (56.660421ms elapsed)
STEP: Saw pod success
Oct 23 23:24:41.449: INFO: Waiting up to 5m0s for pod pod-host-path-test status to be success or failure
STEP: Saw pod success
STEP: Trying to get logs from node pull-e2e-0-minion-n5ko pod pod-host-path-test container test-container-1: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": tmpfs
mode of file "/test-volume": dtrwxrwxrwx
[AfterEach] hostPath
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/host_path.go:60
STEP: Destroying namespace for this suite e2e-tests-hostpath-hchcb
• [SLOW TEST:9.255 seconds]
hostPath
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/host_path.go:104
should give a volume the correct mode [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/host_path.go:77
------------------------------
[BeforeEach] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:24:46.576: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-qpnm4
Oct 23 23:24:46.630: INFO: Service account default in ns e2e-tests-services-qpnm4 had 0 secrets, ignoring for 2s: <nil>
Oct 23 23:24:48.635: INFO: Service account default in ns e2e-tests-services-qpnm4 with secrets found. (2.059076727s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:24:48.635: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-qpnm4
Oct 23 23:24:48.639: INFO: Service account default in ns e2e-tests-services-qpnm4 with secrets found. (3.736492ms)
[BeforeEach] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:54
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
[It] should check NodePort out-of-range
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:714
STEP: creating service nodeport-range-test with type NodePort in namespace e2e-tests-services-qpnm4
STEP: changing service nodeport-range-test to out-of-range NodePort 23690
STEP: deleting original service nodeport-range-test
STEP: creating service nodeport-range-test with out-of-range NodePort 23690
[AfterEach] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:24:48.869: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:24:48.904: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:24:48.904: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:24:48.904: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:24:48.904: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:24:48.904: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:24:48.904: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:24:48.904: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:24:48.904: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:24:48.904: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:24:48.904: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:24:48.904: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:24:48.904: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-services-qpnm4" for this suite.
[AfterEach] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:64
• [SLOW TEST:7.407 seconds]
Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:871
should check NodePort out-of-range
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:714
------------------------------
SS
------------------------------
[BeforeEach] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:24:53.987: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-job-ckiiw
Oct 23 23:24:54.033: INFO: Service account default in ns e2e-tests-job-ckiiw had 0 secrets, ignoring for 2s: <nil>
Oct 23 23:24:56.036: INFO: Service account default in ns e2e-tests-job-ckiiw with secrets found. (2.049239295s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:24:56.036: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-job-ckiiw
Oct 23 23:24:56.039: INFO: Service account default in ns e2e-tests-job-ckiiw with secrets found. (2.198876ms)
[It] should run a job to completion when tasks sometimes fail and are not locally restarted
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:89
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:25:28.049: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:25:28.057: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:25:28.057: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:25:28.057: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:25:28.057: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:25:28.057: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:25:28.057: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:25:28.057: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:25:28.057: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:25:28.057: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:25:28.057: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:25:28.057: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:25:28.057: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-job-ckiiw" for this suite.
• [SLOW TEST:39.108 seconds]
Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:183
should run a job to completion when tasks sometimes fail and are not locally restarted
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:89
------------------------------
[BeforeEach] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: Building a namespace api object
Oct 23 23:25:33.095: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-4fvrs
Oct 23 23:25:33.182: INFO: Service account default in ns e2e-tests-pods-4fvrs had 0 secrets, ignoring for 2s: <nil>
Oct 23 23:25:35.193: INFO: Service account default in ns e2e-tests-pods-4fvrs with secrets found. (2.097743496s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 23 23:25:35.193: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-4fvrs
Oct 23 23:25:35.196: INFO: Service account default in ns e2e-tests-pods-4fvrs with secrets found. (2.511583ms)
[It] should support remote command execution over websockets
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:759
>>> testContext.KubeConfig: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
Oct 23 23:25:35.209: INFO: Waiting up to 5m0s for pod pod-exec-websocket-5c8a6850-79dd-11e5-9772-42010af00002 status to be running
Oct 23 23:25:35.250: INFO: Waiting for pod pod-exec-websocket-5c8a6850-79dd-11e5-9772-42010af00002 in namespace 'e2e-tests-pods-4fvrs' status to be 'running'(found phase: "Pending", readiness: false) (40.576125ms elapsed)
Oct 23 23:25:37.253: INFO: Found pod 'pod-exec-websocket-5c8a6850-79dd-11e5-9772-42010af00002' on node 'pull-e2e-0-minion-dp0i'
STEP: deleting the pod
[AfterEach] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 23 23:25:37.759: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 23:25:37.797: INFO: Node pull-e2e-0-minion-1dli condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:42 +0000 UTC
Oct 23 23:25:37.797: INFO: Successfully found node pull-e2e-0-minion-1dli readiness to be true
Oct 23 23:25:37.797: INFO: Node pull-e2e-0-minion-djcb condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:44 +0000 UTC
Oct 23 23:25:37.797: INFO: Successfully found node pull-e2e-0-minion-djcb readiness to be true
Oct 23 23:25:37.797: INFO: Node pull-e2e-0-minion-dp0i condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:50:07 +0000 UTC
Oct 23 23:25:37.797: INFO: Successfully found node pull-e2e-0-minion-dp0i readiness to be true
Oct 23 23:25:37.797: INFO: Node pull-e2e-0-minion-l2bc condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:55 +0000 UTC
Oct 23 23:25:37.797: INFO: Successfully found node pull-e2e-0-minion-l2bc readiness to be true
Oct 23 23:25:37.797: INFO: Node pull-e2e-0-minion-n5ko condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:52 +0000 UTC
Oct 23 23:25:37.797: INFO: Successfully found node pull-e2e-0-minion-n5ko readiness to be true
Oct 23 23:25:37.797: INFO: Node pull-e2e-0-minion-zr43 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-23 22:49:41 +0000 UTC
Oct 23 23:25:37.797: INFO: Successfully found node pull-e2e-0-minion-zr43 readiness to be true
STEP: Destroying namespace "e2e-tests-pods-4fvrs" for this suite.
• [SLOW TEST:9.723 seconds]
Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1187
should support remote command execution over websockets
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:759
------------------------------
SS
Summarizing 10 Failures:
[Fail] Services [It] should be able to create a functioning NodePort service 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1186
[Fail] Port forwarding With a server that expects a client request [It] should support a client that connects, sends data, and disconnects [Conformance] 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:194
[Fail] Kubectl client Simple pod [It] should support port-forward 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:429
[Fail] Port forwarding With a server that expects no client request [It] should support a client that connects, sends no data, and disconnects [Conformance] 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:231
[Fail] PrivilegedPod [It] should test privileged pod 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/privileged.go:59
[Fail] DNS [It] should provide DNS for the cluster 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:151
[Fail] KubeletManagedEtcHosts [It] should test kubelet managed /etc/hosts file 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_etc_hosts.go:113
[Fail] Port forwarding With a server that expects a client request [It] should support a client that connects, sends no data, and disconnects [Conformance] 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:122
[Fail] SSH [It] should SSH to all nodes and run commands 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ssh.go:46
[Fail] Services [It] should release NodePorts on delete 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:759
Ran 103 of 190 Specs in 2047.131 seconds
FAIL! -- 93 Passed | 10 Failed | 2 Pending | 85 Skipped
Ginkgo ran 1 suite in 34m7.520735595s
Test Suite Failed
!!! Error in /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/hack/ginkgo-e2e.sh:94
'"${ginkgo}" "${ginkgo_args[@]:+${ginkgo_args[@]}}" "${e2e_test}" -- "${auth_config[@]:+${auth_config[@]}}" --host="${KUBE_MASTER_URL}" --provider="${KUBERNETES_PROVIDER}" --gce-project="${PROJECT:-}" --gce-zone="${ZONE:-}" --gke-cluster="${CLUSTER_NAME:-}" --kube-master="${KUBE_MASTER:-}" --cluster-tag="${CLUSTER_ID:-}" --repo-root="${KUBE_VERSION_ROOT}" --node-instance-group="${NODE_INSTANCE_GROUP:-}" --num-nodes="${NUM_MINIONS:-}" --prefix="${KUBE_GCE_INSTANCE_PREFIX:-e2e}" ${E2E_MIN_STARTUP_PODS:+"--minStartupPods=${E2E_MIN_STARTUP_PODS}"} ${E2E_REPORT_DIR:+"--report-dir=${E2E_REPORT_DIR}"} "${@:-}"' exited with status 1
Call stack:
1: /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/kubernetes/hack/ginkgo-e2e.sh:94 main(...)
Exiting with status 1
2015/10/23 23:25:42 e2e.go:309: Error running Ginkgo tests: exit status 1
2015/10/23 23:25:42 e2e.go:305: Step 'Ginkgo tests' finished in 34m8.672128772s
exit status 1
+ exitcode=1
+ [[ '' == \t\r\u\e ]]
+ [[ '' == \t\r\u\e ]]
+ [[ true == \t\r\u\e ]]
+ sleep 30
+ go run ./hack/e2e.go -v --down
2015/10/23 23:26:13 e2e.go:303: Running: teardown
Project: coreos-gce-testing
Zone: us-east1-b
Shutting down test cluster in background.
Deleted [https://www.googleapis.com/compute/v1/projects/coreos-gce-testing/global/firewalls/pull-e2e-0-minion-pull-e2e-0-http-alt].
Deleted [https://www.googleapis.com/compute/v1/projects/coreos-gce-testing/global/firewalls/pull-e2e-0-minion-pull-e2e-0-nodeports].
Bringing down cluster using provider: gce
WARNING: Component [preview] no longer exists.
All components are up to date.
All components are up to date.
All components are up to date.
Project: coreos-gce-testing
Zone: us-east1-b
Bringing down cluster
Deleted [https://www.googleapis.com/compute/v1/projects/coreos-gce-testing/zones/us-east1-b/instanceGroupManagers/pull-e2e-0-minion-group].
Deleted [https://www.googleapis.com/compute/v1/projects/coreos-gce-testing/global/instanceTemplates/pull-e2e-0-minion-template].
Updated [https://www.googleapis.com/compute/v1/projects/coreos-gce-testing/zones/us-east1-b/instances/pull-e2e-0-master].
Deleted [https://www.googleapis.com/compute/v1/projects/coreos-gce-testing/zones/us-east1-b/instances/pull-e2e-0-master].
Deleted [https://www.googleapis.com/compute/v1/projects/coreos-gce-testing/global/firewalls/pull-e2e-0-master-https].
Deleted [https://www.googleapis.com/compute/v1/projects/coreos-gce-testing/global/firewalls/pull-e2e-0-minion-all].
Deleting routes pull-e2e-0-46d4a2a4-79d8-11e5-b1b8-42010af00002 pull-e2e-0-4725d39b-79d8-11e5-b1b8-42010af00002 pull-e2e-0-48f64a03-79d8-11e5-b1b8-42010af00002 pull-e2e-0-4a304db6-79d8-11e5-b1b8-42010af00002
Deleted [https://www.googleapis.com/compute/v1/projects/coreos-gce-testing/global/routes/pull-e2e-0-46d4a2a4-79d8-11e5-b1b8-42010af00002].
Deleted [https://www.googleapis.com/compute/v1/projects/coreos-gce-testing/global/routes/pull-e2e-0-48f64a03-79d8-11e5-b1b8-42010af00002].
Deleted [https://www.googleapis.com/compute/v1/projects/coreos-gce-testing/global/routes/pull-e2e-0-4a304db6-79d8-11e5-b1b8-42010af00002].
Deleted [https://www.googleapis.com/compute/v1/projects/coreos-gce-testing/global/routes/pull-e2e-0-4725d39b-79d8-11e5-b1b8-42010af00002].
Deleted [https://www.googleapis.com/compute/v1/projects/coreos-gce-testing/regions/us-east1/addresses/pull-e2e-0-master-ip].
property "clusters.coreos-gce-testing_pull-e2e-0" unset.
property "users.coreos-gce-testing_pull-e2e-0" unset.
property "users.coreos-gce-testing_pull-e2e-0-basic-auth" unset.
property "contexts.coreos-gce-testing_pull-e2e-0" unset.
property "current-context" unset.
Cleared config for coreos-gce-testing_pull-e2e-0 from /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/.kube/config
Done
2015/10/23 23:30:25 e2e.go:305: Step 'teardown' finished in 4m12.422868869s
+ [[ true == \t\r\u\e ]]
+ ./cluster/gce/list-resources.sh
+ [[ -f /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/_artifacts/gcp-resources-before.txt ]]
+ [[ -f /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/_artifacts/gcp-resources-after.txt ]]
+ diff -sw -U0 '-F^\[.*\]$' /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/_artifacts/gcp-resources-before.txt /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/_artifacts/gcp-resources-after.txt
Files /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/_artifacts/gcp-resources-before.txt and /home/jenkins/workspace/kubernetes-e2e-gce-coreos-docker/_artifacts/gcp-resources-after.txt are identical
Finished: SUCCESS