@maiamcc
Created October 16, 2019 16:26
==> Docker <==
-- Logs begin at Wed 2019-10-16 16:14:33 UTC, end at Wed 2019-10-16 16:25:55 UTC. --
Oct 16 16:14:50 minikube dockerd[2386]: time="2019-10-16T16:14:50.185609072Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n"
Oct 16 16:14:50 minikube dockerd[2386]: time="2019-10-16T16:14:50.186014338Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH"
Oct 16 16:14:50 minikube dockerd[2386]: time="2019-10-16T16:14:50.194803701Z" level=info msg="Docker daemon" commit=039a7df9ba graphdriver(s)=overlay2 version=18.09.9
Oct 16 16:14:50 minikube dockerd[2386]: time="2019-10-16T16:14:50.194999858Z" level=info msg="Daemon has completed initialization"
Oct 16 16:14:50 minikube systemd[1]: Started Docker Application Container Engine.
Oct 16 16:14:50 minikube dockerd[2386]: time="2019-10-16T16:14:50.224838704Z" level=info msg="API listen on /var/run/docker.sock"
Oct 16 16:14:50 minikube dockerd[2386]: time="2019-10-16T16:14:50.225447344Z" level=info msg="API listen on [::]:2376"
Oct 16 16:15:59 minikube dockerd[2386]: time="2019-10-16T16:15:59.643225568Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n"
Oct 16 16:15:59 minikube dockerd[2386]: time="2019-10-16T16:15:59.647437483Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH"
Oct 16 16:15:59 minikube dockerd[2386]: time="2019-10-16T16:15:59.746788943Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n"
Oct 16 16:15:59 minikube dockerd[2386]: time="2019-10-16T16:15:59.747676550Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH"
Oct 16 16:15:59 minikube dockerd[2386]: time="2019-10-16T16:15:59.864964131Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n"
Oct 16 16:15:59 minikube dockerd[2386]: time="2019-10-16T16:15:59.865771471Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH"
Oct 16 16:16:00 minikube dockerd[2386]: time="2019-10-16T16:16:00.251353574Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n"
Oct 16 16:16:00 minikube dockerd[2386]: time="2019-10-16T16:16:00.251929407Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH"
Oct 16 16:16:04 minikube dockerd[2386]: time="2019-10-16T16:16:04.532417549Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n"
Oct 16 16:16:04 minikube dockerd[2386]: time="2019-10-16T16:16:04.533249723Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH"
Oct 16 16:16:04 minikube dockerd[2386]: time="2019-10-16T16:16:04.541047042Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n"
Oct 16 16:16:04 minikube dockerd[2386]: time="2019-10-16T16:16:04.541405682Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH"
Oct 16 16:16:04 minikube dockerd[2386]: time="2019-10-16T16:16:04.575221763Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n"
Oct 16 16:16:04 minikube dockerd[2386]: time="2019-10-16T16:16:04.575829255Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH"
Oct 16 16:16:04 minikube dockerd[2386]: time="2019-10-16T16:16:04.585627590Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n"
Oct 16 16:16:04 minikube dockerd[2386]: time="2019-10-16T16:16:04.586437387Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH"
Oct 16 16:16:09 minikube dockerd[2386]: time="2019-10-16T16:16:09.669035263Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n"
Oct 16 16:16:09 minikube dockerd[2386]: time="2019-10-16T16:16:09.669551664Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH"
Oct 16 16:16:10 minikube dockerd[2386]: time="2019-10-16T16:16:10.554379148Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8bf98c4f43828c3cd44550ea4234fcd38911165aec8cfd1c076bab90b1427d08/shim.sock" debug=false pid=3414
Oct 16 16:16:10 minikube dockerd[2386]: time="2019-10-16T16:16:10.557197756Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c75b4d5ce8e8a33b2317452d53d7b386b0563e8a30c761b852f94e885c379ac6/shim.sock" debug=false pid=3419
Oct 16 16:16:10 minikube dockerd[2386]: time="2019-10-16T16:16:10.561819064Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1930af6e57d002d1b924fd2c546bdabd0a3a628e31d0b438407a79c00f2e6136/shim.sock" debug=false pid=3413
Oct 16 16:16:10 minikube dockerd[2386]: time="2019-10-16T16:16:10.573048407Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/23a7478f3d8849ad40ae5828a0b1a2a55e6828182c457fa615f68e5438c5cb79/shim.sock" debug=false pid=3449
Oct 16 16:16:10 minikube dockerd[2386]: time="2019-10-16T16:16:10.580731647Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c7ed36c65037e42e5802a3aa279e773c34f7147ceb762ad21a14490852cfc23b/shim.sock" debug=false pid=3459
Oct 16 16:16:11 minikube dockerd[2386]: time="2019-10-16T16:16:11.062821861Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/dc8cf00c193b030d5ef6db343da6187521338b640840782cf432f71c6afb846a/shim.sock" debug=false pid=3635
Oct 16 16:16:11 minikube dockerd[2386]: time="2019-10-16T16:16:11.151779211Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f41240aa8344a4c8853100e8788e4eba855e1d465371a5be2f9f630805731f47/shim.sock" debug=false pid=3654
Oct 16 16:16:11 minikube dockerd[2386]: time="2019-10-16T16:16:11.156261783Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ffe609959bda83e65f62b7ac40e9024d5d3b33249ffdc4668f8151ecfd3ccb16/shim.sock" debug=false pid=3661
Oct 16 16:16:11 minikube dockerd[2386]: time="2019-10-16T16:16:11.157255630Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/fb437c1f28bf43307f429cd5b82163acbbbe3202e5fb5f3bde6612ef5d559a2b/shim.sock" debug=false pid=3662
Oct 16 16:16:11 minikube dockerd[2386]: time="2019-10-16T16:16:11.172073994Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/85d9faf593af5354f531bf95d6d57502961321d131c3a8956f21d89d0a4fa65b/shim.sock" debug=false pid=3690
Oct 16 16:16:27 minikube dockerd[2386]: time="2019-10-16T16:16:27.414745753Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c0c7b1ccc9efad6538cdf585f4bbadd07e10072be330032d8a580ca5defd967a/shim.sock" debug=false pid=4096
Oct 16 16:16:27 minikube dockerd[2386]: time="2019-10-16T16:16:27.425137782Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/47eacc7354a46b166a71be5b3d75dd6a7ea64179f3ef4ed1c80dec8db9236a58/shim.sock" debug=false pid=4119
Oct 16 16:16:27 minikube dockerd[2386]: time="2019-10-16T16:16:27.449310655Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8b6c70fb358f5d09723c2609103e58b3d1a9a5dda63d7503c755ed7a8018d569/shim.sock" debug=false pid=4134
Oct 16 16:16:27 minikube dockerd[2386]: time="2019-10-16T16:16:27.786712508Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/14ed640d4b7a840983850e1e87fa78e29883aaceb479a785bdacc2d5b6706e53/shim.sock" debug=false pid=4240
Oct 16 16:16:27 minikube dockerd[2386]: time="2019-10-16T16:16:27.969843609Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d620285536231d09ca4fdd7715407d0f45e9242e93dd48575d131dcd505767e5/shim.sock" debug=false pid=4296
Oct 16 16:16:28 minikube dockerd[2386]: time="2019-10-16T16:16:28.605097061Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/13ef383cf9cc358935484298ea9e74b602577d0376db980ad6799cf36def6d52/shim.sock" debug=false pid=4391
Oct 16 16:16:29 minikube dockerd[2386]: time="2019-10-16T16:16:29.088862158Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a0fa04669c38f46a89e4d2803229c1c1bf247239e5d570ea90f37b6854942feb/shim.sock" debug=false pid=4495
Oct 16 16:16:29 minikube dockerd[2386]: time="2019-10-16T16:16:29.257258381Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/7d0037532c258fe86445483eddfe143ddcc106c3bf2cf104b29498ee939e2436/shim.sock" debug=false pid=4540
Oct 16 16:16:33 minikube dockerd[2386]: time="2019-10-16T16:16:33.474675537Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5603989f76250599b68ed51a3d97af6dc394223b5effea466706a739b180031c/shim.sock" debug=false pid=4632
Oct 16 16:16:33 minikube dockerd[2386]: time="2019-10-16T16:16:33.549188690Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b6944388a21b152141a3c879950e630dcf38e4e1c72b027a4b6dbbfc9d3f7760/shim.sock" debug=false pid=4662
Oct 16 16:16:34 minikube dockerd[2386]: time="2019-10-16T16:16:34.027789315Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3b8a74f3011291d88eb3212b711f3a6a9b6d4f91ef05487db9c1e936351f324f/shim.sock" debug=false pid=4774
Oct 16 16:16:38 minikube dockerd[2386]: time="2019-10-16T16:16:38.010208667Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c33ca4477e9f6e5b664e5f7e8d881bd55c60a7aaacdd3581e8a5eabe5ffe5ea1/shim.sock" debug=false pid=4896
Oct 16 16:22:45 minikube dockerd[2386]: time="2019-10-16T16:22:45.601315826Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n"
Oct 16 16:22:45 minikube dockerd[2386]: time="2019-10-16T16:22:45.601814301Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH"
Oct 16 16:22:55 minikube dockerd[2386]: time="2019-10-16T16:22:55.870288357Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/da1566a0bf568399f08e8d5825701230e96b44d6c781aad8365862e598852af9/shim.sock" debug=false pid=7791
Oct 16 16:22:56 minikube dockerd[2386]: time="2019-10-16T16:22:56.276891787Z" level=info msg="shim reaped" id=da1566a0bf568399f08e8d5825701230e96b44d6c781aad8365862e598852af9
Oct 16 16:22:56 minikube dockerd[2386]: time="2019-10-16T16:22:56.287728064Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 16 16:22:59 minikube dockerd[2386]: time="2019-10-16T16:22:59.169833630Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6ef06767630d4f67acb5b60a87e164b5b613e916e34baef0c539e44eff45fcd2/shim.sock" debug=false pid=7927
Oct 16 16:22:59 minikube dockerd[2386]: time="2019-10-16T16:22:59.565236188Z" level=info msg="shim reaped" id=6ef06767630d4f67acb5b60a87e164b5b613e916e34baef0c539e44eff45fcd2
Oct 16 16:22:59 minikube dockerd[2386]: time="2019-10-16T16:22:59.577141869Z" level=error msg="stream copy error: reading from a closed fifo"
Oct 16 16:22:59 minikube dockerd[2386]: time="2019-10-16T16:22:59.577465502Z" level=error msg="stream copy error: reading from a closed fifo"
Oct 16 16:22:59 minikube dockerd[2386]: time="2019-10-16T16:22:59.652610585Z" level=error msg="6ef06767630d4f67acb5b60a87e164b5b613e916e34baef0c539e44eff45fcd2 cleanup: failed to delete container from containerd: no such container"
Oct 16 16:23:27 minikube dockerd[2386]: time="2019-10-16T16:23:27.055773230Z" level=warning msg="38e70a4b51569dfecdd040edfda18a9e17b34484ab410f6327d663f24899ee7c cleanup: failed to unmount IPC: umount /var/lib/docker/containers/38e70a4b51569dfecdd040edfda18a9e17b34484ab410f6327d663f24899ee7c/mounts/shm, flags: 0x2: no such file or directory"
Oct 16 16:23:27 minikube dockerd[2386]: time="2019-10-16T16:23:27.067294601Z" level=error msg="38e70a4b51569dfecdd040edfda18a9e17b34484ab410f6327d663f24899ee7c cleanup: failed to delete container from containerd: no such container"
Oct 16 16:23:27 minikube dockerd[2386]: time="2019-10-16T16:23:27.067406343Z" level=error msg="Handler for POST /v1.39/containers/38e70a4b51569dfecdd040edfda18a9e17b34484ab410f6327d663f24899ee7c/start returned error: exec: \"docker-init\": executable file not found in $PATH"
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
c33ca4477e9f6 kubernetesui/metrics-scraper@sha256:35fcae4fd9232a541a8cb08f2853117ba7231750b75c2cb3b6a58a2aaa57f878 9 minutes ago Running dashboard-metrics-scraper 0 5603989f76250
3b8a74f301129 6802d83967b99 9 minutes ago Running kubernetes-dashboard 0 b6944388a21b1
7d0037532c258 4689081edb103 9 minutes ago Running storage-provisioner 0 a0fa04669c38f
13ef383cf9cc3 bf261d1579144 9 minutes ago Running coredns 0 8b6c70fb358f5
d620285536231 bf261d1579144 9 minutes ago Running coredns 0 c0c7b1ccc9efa
14ed640d4b7a8 c21b0c7400f98 9 minutes ago Running kube-proxy 0 47eacc7354a46
ffe609959bda8 301ddc62b80b1 9 minutes ago Running kube-scheduler 0 c7ed36c65037e
85d9faf593af5 b2756210eeabf 9 minutes ago Running etcd 0 1930af6e57d00
f41240aa8344a b305571ca60a5 9 minutes ago Running kube-apiserver 0 8bf98c4f43828
fb437c1f28bf4 06a629a7e51cd 9 minutes ago Running kube-controller-manager 0 23a7478f3d884
dc8cf00c193b0 bd12a212f9dcb 9 minutes ago Running kube-addon-manager 0 c75b4d5ce8e8a
==> coredns [13ef383cf9cc] <==
.:53
2019-10-16T16:16:33.859Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
2019-10-16T16:16:33.859Z [INFO] CoreDNS-1.6.2
2019-10-16T16:16:33.859Z [INFO] linux/amd64, go1.12.8, 795a3eb
CoreDNS-1.6.2
linux/amd64, go1.12.8, 795a3eb
2019-10-16T16:16:35.658Z [INFO] plugin/ready: Still waiting on: "kubernetes"
2019-10-16T16:16:45.658Z [INFO] plugin/ready: Still waiting on: "kubernetes"
2019-10-16T16:16:55.657Z [INFO] plugin/ready: Still waiting on: "kubernetes"
I1016 16:16:58.859996       1 trace.go:82] Trace[1735014813]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2019-10-16 16:16:28.858971817 +0000 UTC m=+0.070298117) (total time: 30.000973273s):
Trace[1735014813]: [30.000973273s] [30.000973273s] END
E1016 16:16:58.860028       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I1016 16:16:58.860617       1 trace.go:82] Trace[245703382]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2019-10-16 16:16:28.859207224 +0000 UTC m=+0.070533508) (total time: 30.00138487s):
Trace[245703382]: [30.00138487s] [30.00138487s] END
E1016 16:16:58.860633       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I1016 16:16:58.860740       1 trace.go:82] Trace[798474526]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2019-10-16 16:16:28.858962223 +0000 UTC m=+0.070288528) (total time: 30.001764576s):
Trace[798474526]: [30.001764576s] [30.001764576s] END
E1016 16:16:58.860747       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
==> coredns [d62028553623] <==
.:53
2019-10-16T16:16:29.146Z [INFO] plugin/ready: Still waiting on: "kubernetes"
2019-10-16T16:16:33.397Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
2019-10-16T16:16:33.397Z [INFO] CoreDNS-1.6.2
2019-10-16T16:16:33.397Z [INFO] linux/amd64, go1.12.8, 795a3eb
CoreDNS-1.6.2
linux/amd64, go1.12.8, 795a3eb
2019-10-16T16:16:39.147Z [INFO] plugin/ready: Still waiting on: "kubernetes"
2019-10-16T16:16:49.146Z [INFO] plugin/ready: Still waiting on: "kubernetes"
I1016 16:16:58.399064 1 trace.go:82] Trace[1730934251]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2019-10-16 16:16:28.398354752 +0000 UTC m=+0.246232725) (total time: 30.000685019s):
Trace[1730934251]: [30.000685019s] [30.000685019s] END
E1016 16:16:58.399330 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I1016 16:16:58.399726 1 trace.go:82] Trace[1837498302]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2019-10-16 16:16:28.39811384 +0000 UTC m=+0.245991844) (total time: 30.001597545s):
Trace[1837498302]: [30.001597545s] [30.001597545s] END
E1016 16:16:58.399868 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I1016 16:16:58.401797 1 trace.go:82] Trace[1132386054]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2019-10-16 16:16:28.400379356 +0000 UTC m=+0.248257364) (total time: 30.001396635s):
Trace[1132386054]: [30.001396635s] [30.001396635s] END
E1016 16:16:58.402174 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
2019-10-16T16:16:59.147Z [INFO] plugin/ready: Still waiting on: "kubernetes"
==> dmesg <==
[Oct16 16:21] hpet1: lost 318 rtc interrupts
[ +5.010506] hpet1: lost 319 rtc interrupts
[ +5.004952] hpet1: lost 318 rtc interrupts
[ +5.004908] hpet1: lost 318 rtc interrupts
[ +5.003295] hpet1: lost 319 rtc interrupts
[ +5.003950] hpet1: lost 319 rtc interrupts
[ +5.003200] hpet1: lost 319 rtc interrupts
[ +5.004488] hpet1: lost 319 rtc interrupts
[ +5.003092] hpet1: lost 319 rtc interrupts
[ +5.003260] hpet1: lost 318 rtc interrupts
[ +5.003922] hpet1: lost 318 rtc interrupts
[ +5.002801] hpet1: lost 318 rtc interrupts
[Oct16 16:22] hpet1: lost 319 rtc interrupts
[ +5.002924] hpet1: lost 318 rtc interrupts
[ +5.003575] hpet1: lost 318 rtc interrupts
[ +4.603555] hrtimer: interrupt took 3882474 ns
[ +0.400894] hpet1: lost 318 rtc interrupts
[ +5.002555] hpet1: lost 319 rtc interrupts
[ +5.003636] hpet1: lost 318 rtc interrupts
[ +5.002850] hpet1: lost 318 rtc interrupts
[ +5.003761] hpet1: lost 318 rtc interrupts
[ +5.003583] hpet1: lost 318 rtc interrupts
[ +5.004437] hpet1: lost 319 rtc interrupts
[ +5.004330] hpet1: lost 318 rtc interrupts
[ +5.003911] hpet1: lost 318 rtc interrupts
[Oct16 16:23] hpet1: lost 319 rtc interrupts
[ +5.003320] hpet1: lost 318 rtc interrupts
[ +5.003293] hpet1: lost 318 rtc interrupts
[ +5.001520] hpet1: lost 318 rtc interrupts
[ +5.001129] hpet1: lost 318 rtc interrupts
[ +5.002795] hpet1: lost 318 rtc interrupts
[ +5.017545] hpet1: lost 320 rtc interrupts
[ +5.001794] hpet1: lost 318 rtc interrupts
[ +5.003012] hpet1: lost 318 rtc interrupts
[ +5.007731] hpet1: lost 318 rtc interrupts
[ +5.005504] hpet1: lost 319 rtc interrupts
[ +5.003732] hpet1: lost 318 rtc interrupts
[Oct16 16:24] hpet1: lost 318 rtc interrupts
[ +5.004346] hpet1: lost 319 rtc interrupts
[ +5.003806] hpet1: lost 318 rtc interrupts
[ +5.003207] hpet1: lost 318 rtc interrupts
[ +5.003267] hpet1: lost 318 rtc interrupts
[ +5.003672] hpet1: lost 318 rtc interrupts
[ +5.005446] hpet1: lost 319 rtc interrupts
[ +5.000944] hpet1: lost 319 rtc interrupts
[ +5.004566] hpet1: lost 318 rtc interrupts
[ +5.002545] hpet1: lost 318 rtc interrupts
[ +5.004216] hpet1: lost 320 rtc interrupts
[ +5.004480] hpet1: lost 318 rtc interrupts
[Oct16 16:25] hpet1: lost 319 rtc interrupts
[ +5.003181] hpet1: lost 318 rtc interrupts
[ +5.004160] hpet1: lost 319 rtc interrupts
[ +5.001906] hpet1: lost 319 rtc interrupts
[ +5.003856] hpet1: lost 318 rtc interrupts
[ +5.005056] hpet1: lost 318 rtc interrupts
[ +5.002607] hpet1: lost 319 rtc interrupts
[ +5.001896] hpet1: lost 320 rtc interrupts
[ +5.002608] hpet1: lost 318 rtc interrupts
[ +5.004198] hpet1: lost 318 rtc interrupts
[ +5.003640] hpet1: lost 318 rtc interrupts
==> kernel <==
16:25:55 up 11 min, 0 users, load average: 0.64, 0.53, 0.40
Linux minikube 4.15.0 #1 SMP Wed Sep 18 07:44:58 PDT 2019 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2018.05.3"
==> kube-addon-manager [dc8cf00c193b] <==
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
deployment.apps/dashboard-metrics-scraper unchanged
deployment.apps/kubernetes-dashboard unchanged
namespace/kubernetes-dashboard unchanged
role.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
serviceaccount/kubernetes-dashboard unchanged
secret/kubernetes-dashboard-certs unchanged
secret/kubernetes-dashboard-csrf unchanged
secret/kubernetes-dashboard-key-holder unchanged
service/kubernetes-dashboard unchanged
service/dashboard-metrics-scraper unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-10-16T16:25:42+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-10-16T16:25:44+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
configmap/kubernetes-dashboard-settings unchanged
deployment.apps/dashboard-metrics-scraper unchanged
deployment.apps/kubernetes-dashboard unchanged
namespace/kubernetes-dashboard unchanged
role.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
serviceaccount/kubernetes-dashboard unchanged
secret/kubernetes-dashboard-certs unchanged
secret/kubernetes-dashboard-csrf unchanged
secret/kubernetes-dashboard-key-holder unchanged
service/kubernetes-dashboard unchanged
service/dashboard-metrics-scraper unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-10-16T16:25:48+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-10-16T16:25:48+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
configmap/kubernetes-dashboard-settings unchanged
deployment.apps/dashboard-metrics-scraper unchanged
deployment.apps/kubernetes-dashboard unchanged
namespace/kubernetes-dashboard unchanged
role.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
serviceaccount/kubernetes-dashboard unchanged
secret/kubernetes-dashboard-certs unchanged
secret/kubernetes-dashboard-csrf unchanged
secret/kubernetes-dashboard-key-holder unchanged
service/kubernetes-dashboard unchanged
service/dashboard-metrics-scraper unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-10-16T16:25:52+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-10-16T16:25:54+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
==> kube-apiserver [f41240aa8344] <==
I1016 16:16:13.651453 1 client.go:361] parsed scheme: "endpoint"
I1016 16:16:13.651480 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]
I1016 16:16:13.664473 1 client.go:361] parsed scheme: "endpoint"
I1016 16:16:13.664598 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]
I1016 16:16:13.674641 1 client.go:361] parsed scheme: "endpoint"
I1016 16:16:13.674767 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]
I1016 16:16:13.683887 1 client.go:361] parsed scheme: "endpoint"
I1016 16:16:13.683967 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]
W1016 16:16:13.820073 1 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
W1016 16:16:13.834823 1 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W1016 16:16:13.850352 1 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W1016 16:16:13.852887 1 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W1016 16:16:13.865365 1 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W1016 16:16:13.886529 1 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W1016 16:16:13.886716 1 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I1016 16:16:13.897033 1 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
I1016 16:16:13.897116 1 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
I1016 16:16:13.898949 1 client.go:361] parsed scheme: "endpoint"
I1016 16:16:13.899028 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]
I1016 16:16:13.907932 1 client.go:361] parsed scheme: "endpoint"
I1016 16:16:13.907959 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]
I1016 16:16:15.783559 1 secure_serving.go:123] Serving securely on [::]:8443
I1016 16:16:15.783706 1 autoregister_controller.go:140] Starting autoregister controller
I1016 16:16:15.783718 1 cache.go:32] Waiting for caches to sync for autoregister controller
I1016 16:16:15.786612 1 crd_finalizer.go:274] Starting CRDFinalizer
I1016 16:16:15.808100 1 controller.go:85] Starting OpenAPI controller
I1016 16:16:15.808214 1 customresource_discovery_controller.go:208] Starting DiscoveryController
I1016 16:16:15.808248 1 naming_controller.go:288] Starting NamingConditionController
I1016 16:16:15.808265 1 establishing_controller.go:73] Starting EstablishingController
I1016 16:16:15.808278 1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
I1016 16:16:15.808298 1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
I1016 16:16:15.786640 1 controller.go:81] Starting OpenAPI AggregationController
I1016 16:16:15.786653 1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I1016 16:16:15.808571 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I1016 16:16:15.786657 1 available_controller.go:383] Starting AvailableConditionController
I1016 16:16:15.808795 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I1016 16:16:15.786665 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I1016 16:16:15.808810 1 shared_informer.go:197] Waiting for caches to sync for crd-autoregister
E1016 16:16:15.840527 1 controller.go:154] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.99.100, ResourceVersion: 0, AdditionalErrorMsg:
I1016 16:16:15.985596 1 cache.go:39] Caches are synced for autoregister controller
I1016 16:16:16.009709 1 shared_informer.go:204] Caches are synced for crd-autoregister
I1016 16:16:16.010083 1 cache.go:39] Caches are synced for AvailableConditionController controller
I1016 16:16:16.010384 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1016 16:16:16.783863 1 controller.go:107] OpenAPI AggregationController: Processing item
I1016 16:16:16.783908 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I1016 16:16:16.783920 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1016 16:16:16.796169 1 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I1016 16:16:16.812980 1 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I1016 16:16:16.813004 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I1016 16:16:18.568592 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1016 16:16:18.848023 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I1016 16:16:19.062876 1 controller.go:606] quota admission added evaluator for: endpoints
W1016 16:16:19.147112 1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.99.100]
I1016 16:16:19.270731 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I1016 16:16:19.733094 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I1016 16:16:20.757138 1 controller.go:606] quota admission added evaluator for: deployments.apps
I1016 16:16:21.053198 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I1016 16:16:26.749134 1 controller.go:606] quota admission added evaluator for: replicasets.apps
I1016 16:16:26.823548 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I1016 16:16:26.827490 1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
==> kube-controller-manager [fb437c1f28bf] <==
I1016 16:16:25.428307 1 node_lifecycle_controller.go:421] Controller will reconcile labels.
I1016 16:16:25.428401 1 node_lifecycle_controller.go:434] Controller will taint node by condition.
I1016 16:16:25.428500 1 controllermanager.go:534] Started "nodelifecycle"
I1016 16:16:25.428636 1 node_lifecycle_controller.go:458] Starting node controller
I1016 16:16:25.428661 1 shared_informer.go:197] Waiting for caches to sync for taint
E1016 16:16:25.677209 1 core.go:78] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W1016 16:16:25.677778 1 controllermanager.go:526] Skipping "service"
I1016 16:16:25.929705 1 controllermanager.go:534] Started "persistentvolume-binder"
W1016 16:16:25.929897 1 controllermanager.go:526] Skipping "root-ca-cert-publisher"
I1016 16:16:25.929963 1 pv_controller_base.go:282] Starting persistent volume controller
I1016 16:16:25.930212 1 shared_informer.go:197] Waiting for caches to sync for persistent volume
I1016 16:16:26.177408 1 controllermanager.go:534] Started "replicationcontroller"
I1016 16:16:26.177772 1 shared_informer.go:197] Waiting for caches to sync for resource quota
I1016 16:16:26.177910 1 replica_set.go:182] Starting replicationcontroller controller
I1016 16:16:26.177983 1 shared_informer.go:197] Waiting for caches to sync for ReplicationController
W1016 16:16:26.193334 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I1016 16:16:26.228259 1 shared_informer.go:204] Caches are synced for PV protection
I1016 16:16:26.228642 1 shared_informer.go:204] Caches are synced for certificate
I1016 16:16:26.239506 1 shared_informer.go:204] Caches are synced for certificate
I1016 16:16:26.240402 1 shared_informer.go:204] Caches are synced for TTL
I1016 16:16:26.279284 1 shared_informer.go:197] Waiting for caches to sync for garbage collector
I1016 16:16:26.279642 1 shared_informer.go:204] Caches are synced for expand
I1016 16:16:26.279881 1 shared_informer.go:204] Caches are synced for ClusterRoleAggregator
I1016 16:16:26.291894 1 shared_informer.go:204] Caches are synced for namespace
E1016 16:16:26.297669 1 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
I1016 16:16:26.379834 1 shared_informer.go:204] Caches are synced for service account
I1016 16:16:26.529943 1 shared_informer.go:204] Caches are synced for bootstrap_signer
I1016 16:16:26.730854 1 shared_informer.go:204] Caches are synced for persistent volume
I1016 16:16:26.732436 1 shared_informer.go:204] Caches are synced for resource quota
I1016 16:16:26.747483 1 shared_informer.go:204] Caches are synced for deployment
I1016 16:16:26.753686 1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"31193b21-4ef1-4d97-aeb9-4dfdfbfb0d59", APIVersion:"apps/v1", ResourceVersion:"209", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-5644d7b6d9 to 2
I1016 16:16:26.762262 1 shared_informer.go:204] Caches are synced for PVC protection
I1016 16:16:26.776636 1 shared_informer.go:204] Caches are synced for GC
I1016 16:16:26.778152 1 shared_informer.go:204] Caches are synced for resource quota
I1016 16:16:26.778241 1 shared_informer.go:204] Caches are synced for endpoint
I1016 16:16:26.778385 1 shared_informer.go:204] Caches are synced for ReplicationController
I1016 16:16:26.779787 1 shared_informer.go:204] Caches are synced for garbage collector
I1016 16:16:26.780970 1 shared_informer.go:204] Caches are synced for stateful set
I1016 16:16:26.782326 1 shared_informer.go:204] Caches are synced for garbage collector
I1016 16:16:26.782545 1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I1016 16:16:26.786202 1 shared_informer.go:204] Caches are synced for ReplicaSet
I1016 16:16:26.811912 1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"eb495fde-3444-436f-b6a7-ece5dfda6d67", APIVersion:"apps/v1", ResourceVersion:"320", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5644d7b6d9-hszmm
I1016 16:16:26.819758 1 shared_informer.go:204] Caches are synced for daemon sets
I1016 16:16:26.827765 1 shared_informer.go:204] Caches are synced for job
I1016 16:16:26.828817 1 shared_informer.go:204] Caches are synced for attach detach
I1016 16:16:26.829106 1 shared_informer.go:204] Caches are synced for HPA
I1016 16:16:26.829227 1 shared_informer.go:204] Caches are synced for disruption
I1016 16:16:26.829369 1 disruption.go:341] Sending events to api server.
I1016 16:16:26.829660 1 shared_informer.go:204] Caches are synced for taint
I1016 16:16:26.829792 1 node_lifecycle_controller.go:1208] Initializing eviction metric for zone:
W1016 16:16:26.829921 1 node_lifecycle_controller.go:903] Missing timestamp for Node minikube. Assuming now as a timestamp.
I1016 16:16:26.830050 1 node_lifecycle_controller.go:1108] Controller detected that zone is now in state Normal.
I1016 16:16:26.830093 1 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"42230749-56a2-47e1-9989-4b28589cb35e", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
I1016 16:16:26.830235 1 taint_manager.go:186] Starting NoExecuteTaintManager
I1016 16:16:26.865303 1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"eb495fde-3444-436f-b6a7-ece5dfda6d67", APIVersion:"apps/v1", ResourceVersion:"320", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5644d7b6d9-vzl2b
I1016 16:16:26.886559 1 event.go:255] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"76d65b96-3fde-4bee-9abf-a18ff66974d2", APIVersion:"apps/v1", ResourceVersion:"223", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-p28ld
I1016 16:16:32.176368 1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper", UID:"870ca939-ad11-4c57-90aa-d428c13684b9", APIVersion:"apps/v1", ResourceVersion:"408", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set dashboard-metrics-scraper-76585494d8 to 1
I1016 16:16:32.190421 1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-76585494d8", UID:"66bb8c6e-4d0d-40e8-915c-989a5c77c0ae", APIVersion:"apps/v1", ResourceVersion:"409", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-76585494d8-c8k7p
I1016 16:16:32.250267 1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard", UID:"c07e73cb-9449-48aa-9007-cdef590d4857", APIVersion:"apps/v1", ResourceVersion:"419", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kubernetes-dashboard-57f4cb4545 to 1
I1016 16:16:32.268990 1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-57f4cb4545", UID:"0fc75133-3b8d-4de5-8b55-1e5ac3997b83", APIVersion:"apps/v1", ResourceVersion:"421", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-57f4cb4545-s2t9k
==> kube-proxy [14ed640d4b7a] <==
W1016 16:16:28.801882 1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
I1016 16:16:28.840043 1 node.go:135] Successfully retrieved node IP: 10.0.2.15
I1016 16:16:28.840230 1 server_others.go:149] Using iptables Proxier.
W1016 16:16:28.841415 1 proxier.go:287] clusterCIDR not specified, unable to distinguish between internal and external traffic
I1016 16:16:28.843413 1 server.go:529] Version: v1.16.0
I1016 16:16:28.847364 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I1016 16:16:28.847393 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I1016 16:16:28.849182 1 conntrack.go:83] Setting conntrack hashsize to 32768
I1016 16:16:28.855000 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I1016 16:16:28.855103 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I1016 16:16:28.856686 1 config.go:131] Starting endpoints config controller
I1016 16:16:28.856715 1 shared_informer.go:197] Waiting for caches to sync for endpoints config
I1016 16:16:28.856739 1 config.go:313] Starting service config controller
I1016 16:16:28.856746 1 shared_informer.go:197] Waiting for caches to sync for service config
I1016 16:16:28.957390 1 shared_informer.go:204] Caches are synced for service config
I1016 16:16:28.957476 1 shared_informer.go:204] Caches are synced for endpoints config
==> kube-scheduler [ffe609959bda] <==
I1016 16:16:12.696451 1 serving.go:319] Generated self-signed cert in-memory
W1016 16:16:15.898164 1 authentication.go:262] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1016 16:16:15.898519 1 authentication.go:199] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1016 16:16:15.898717 1 authentication.go:200] Continuing without authentication configuration. This may treat all requests as anonymous.
W1016 16:16:15.898860 1 authentication.go:201] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1016 16:16:15.912448 1 server.go:143] Version: v1.16.0
I1016 16:16:15.913016 1 defaults.go:91] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
W1016 16:16:15.937490 1 authorization.go:47] Authorization is disabled
W1016 16:16:15.940512 1 authentication.go:79] Authentication is disabled
I1016 16:16:15.940887 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I1016 16:16:15.941741 1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
E1016 16:16:16.028080 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1016 16:16:16.028359 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1016 16:16:16.028509 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1016 16:16:16.028603 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1016 16:16:16.028679 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1016 16:16:16.028747 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1016 16:16:16.028836 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1016 16:16:16.028912 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1016 16:16:16.030184 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1016 16:16:16.030374 1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1016 16:16:16.030531 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1016 16:16:17.030443 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1016 16:16:17.031801 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1016 16:16:17.033698 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1016 16:16:17.037052 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1016 16:16:17.038423 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1016 16:16:17.038470 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1016 16:16:17.041756 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1016 16:16:17.042941 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1016 16:16:17.042989 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1016 16:16:17.045696 1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1016 16:16:17.047033 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
I1016 16:16:19.051995 1 leaderelection.go:241] attempting to acquire leader lease kube-system/kube-scheduler...
I1016 16:16:19.065834 1 leaderelection.go:251] successfully acquired lease kube-system/kube-scheduler
==> kubelet <==
-- Logs begin at Wed 2019-10-16 16:14:33 UTC, end at Wed 2019-10-16 16:25:55 UTC. --
Oct 16 16:16:14 minikube kubelet[3323]: E1016 16:16:14.997226 3323 kubelet.go:2267] node "minikube" not found
Oct 16 16:16:15 minikube kubelet[3323]: E1016 16:16:15.097742 3323 kubelet.go:2267] node "minikube" not found
Oct 16 16:16:15 minikube kubelet[3323]: E1016 16:16:15.198123 3323 kubelet.go:2267] node "minikube" not found
Oct 16 16:16:15 minikube kubelet[3323]: E1016 16:16:15.299074 3323 kubelet.go:2267] node "minikube" not found
Oct 16 16:16:15 minikube kubelet[3323]: E1016 16:16:15.399489 3323 kubelet.go:2267] node "minikube" not found
Oct 16 16:16:15 minikube kubelet[3323]: E1016 16:16:15.499752 3323 kubelet.go:2267] node "minikube" not found
Oct 16 16:16:15 minikube kubelet[3323]: E1016 16:16:15.599958 3323 kubelet.go:2267] node "minikube" not found
Oct 16 16:16:15 minikube kubelet[3323]: E1016 16:16:15.701319 3323 kubelet.go:2267] node "minikube" not found
Oct 16 16:16:15 minikube kubelet[3323]: E1016 16:16:15.804048 3323 kubelet.go:2267] node "minikube" not found
Oct 16 16:16:15 minikube kubelet[3323]: E1016 16:16:15.904399 3323 kubelet.go:2267] node "minikube" not found
Oct 16 16:16:16 minikube kubelet[3323]: E1016 16:16:16.004384 3323 controller.go:220] failed to get node "minikube" when trying to set owner ref to the node lease: nodes "minikube" not found
Oct 16 16:16:16 minikube kubelet[3323]: E1016 16:16:16.004581 3323 kubelet.go:2267] node "minikube" not found
Oct 16 16:16:16 minikube kubelet[3323]: I1016 16:16:16.016321 3323 reconciler.go:154] Reconciler: start to sync state
Oct 16 16:16:16 minikube kubelet[3323]: I1016 16:16:16.060726 3323 kubelet_node_status.go:75] Successfully registered node minikube
Oct 16 16:16:16 minikube kubelet[3323]: E1016 16:16:16.063297 3323 controller.go:135] failed to ensure node lease exists, will retry in 3.2s, error: namespaces "kube-node-lease" not found
Oct 16 16:16:16 minikube kubelet[3323]: E1016 16:16:16.107201 3323 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15ce2ccad286d8a3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf61ee72654f3ea3, ext:5171834609, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf61ee72654f3ea3, ext:5171834609, loc:(*time.Location)(0x797f100)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Oct 16 16:16:16 minikube kubelet[3323]: E1016 16:16:16.161871 3323 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15ce2ccadae9b3e2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf61ee726db219e2, ext:5312530925, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf61ee726db219e2, ext:5312530925, loc:(*time.Location)(0x797f100)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Oct 16 16:16:16 minikube kubelet[3323]: E1016 16:16:16.219573 3323 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15ce2ccadae9c7b4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf61ee726db22db4, ext:5312536000, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf61ee726db22db4, ext:5312536000, loc:(*time.Location)(0x797f100)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Oct 16 16:16:16 minikube kubelet[3323]: E1016 16:16:16.274305 3323 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15ce2ccadae9cffd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf61ee726db235fd, ext:5312538120, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf61ee726db235fd, ext:5312538120, loc:(*time.Location)(0x797f100)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Oct 16 16:16:16 minikube kubelet[3323]: E1016 16:16:16.334046 3323 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15ce2ccadae9cffd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf61ee726db235fd, ext:5312538120, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf61ee726da0316f, ext:5311357306, loc:(*time.Location)(0x797f100)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Oct 16 16:16:16 minikube kubelet[3323]: E1016 16:16:16.391784 3323 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15ce2ccadae9b3e2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf61ee726db219e2, ext:5312530925, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf61ee726da014c6, ext:5311349970, loc:(*time.Location)(0x797f100)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Oct 16 16:16:16 minikube kubelet[3323]: E1016 16:16:16.454178 3323 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15ce2ccadae9c7b4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf61ee726db22db4, ext:5312536000, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf61ee726da0290a, ext:5311355157, loc:(*time.Location)(0x797f100)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Oct 16 16:16:16 minikube kubelet[3323]: E1016 16:16:16.507745 3323 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15ce2ccadbf19368", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf61ee726eb9f968, ext:5329824115, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf61ee726eb9f968, ext:5329824115, loc:(*time.Location)(0x797f100)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Oct 16 16:16:16 minikube kubelet[3323]: E1016 16:16:16.565660 3323 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15ce2ccadae9b3e2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf61ee726db219e2, ext:5312530925, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf61ee72789d09f4, ext:5495699971, loc:(*time.Location)(0x797f100)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Oct 16 16:16:16 minikube kubelet[3323]: E1016 16:16:16.964106 3323 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15ce2ccadae9c7b4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf61ee726db22db4, ext:5312536000, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf61ee72789d1f8b, ext:5495705498, loc:(*time.Location)(0x797f100)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Oct 16 16:16:17 minikube kubelet[3323]: E1016 16:16:17.363916 3323 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15ce2ccadae9cffd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf61ee726db235fd, ext:5312538120, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf61ee72789d2902, ext:5495707921, loc:(*time.Location)(0x797f100)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Oct 16 16:16:17 minikube kubelet[3323]: E1016 16:16:17.765707 3323 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15ce2ccadae9b3e2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf61ee726db219e2, ext:5312530925, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf61ee7278e3e97d, ext:5500344716, loc:(*time.Location)(0x797f100)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Oct 16 16:16:18 minikube kubelet[3323]: E1016 16:16:18.164079 3323 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15ce2ccadae9c7b4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf61ee726db22db4, ext:5312536000, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf61ee7278e3f937, ext:5500348743, loc:(*time.Location)(0x797f100)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Oct 16 16:16:18 minikube kubelet[3323]: E1016 16:16:18.563814 3323 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15ce2ccadae9cffd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf61ee726db235fd, ext:5312538120, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf61ee7278e401c0, ext:5500350928, loc:(*time.Location)(0x797f100)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Oct 16 16:16:18 minikube kubelet[3323]: E1016 16:16:18.963081 3323 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15ce2ccadae9cffd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf61ee726db235fd, ext:5312538120, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf61ee727930a00b, ext:5505372183, loc:(*time.Location)(0x797f100)}}, Count:5, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Oct 16 16:16:26 minikube kubelet[3323]: I1016 16:16:26.984358 3323 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6fbecfb6-722e-41e8-9cb1-8a2edfc3a086-config-volume") pod "coredns-5644d7b6d9-hszmm" (UID: "6fbecfb6-722e-41e8-9cb1-8a2edfc3a086")
Oct 16 16:16:26 minikube kubelet[3323]: I1016 16:16:26.984845 3323 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8cdd297c-7e67-4733-8dbd-bb32e77a5b88-config-volume") pod "coredns-5644d7b6d9-vzl2b" (UID: "8cdd297c-7e67-4733-8dbd-bb32e77a5b88")
Oct 16 16:16:26 minikube kubelet[3323]: I1016 16:16:26.984972 3323 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-tcmql" (UniqueName: "kubernetes.io/secret/8cdd297c-7e67-4733-8dbd-bb32e77a5b88-coredns-token-tcmql") pod "coredns-5644d7b6d9-vzl2b" (UID: "8cdd297c-7e67-4733-8dbd-bb32e77a5b88")
Oct 16 16:16:26 minikube kubelet[3323]: I1016 16:16:26.985078 3323 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/76d305b7-f70c-4ed6-a404-a0f2c3e94a21-xtables-lock") pod "kube-proxy-p28ld" (UID: "76d305b7-f70c-4ed6-a404-a0f2c3e94a21")
Oct 16 16:16:26 minikube kubelet[3323]: I1016 16:16:26.985177 3323 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-f6fjd" (UniqueName: "kubernetes.io/secret/76d305b7-f70c-4ed6-a404-a0f2c3e94a21-kube-proxy-token-f6fjd") pod "kube-proxy-p28ld" (UID: "76d305b7-f70c-4ed6-a404-a0f2c3e94a21")
Oct 16 16:16:26 minikube kubelet[3323]: I1016 16:16:26.985238 3323 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-tcmql" (UniqueName: "kubernetes.io/secret/6fbecfb6-722e-41e8-9cb1-8a2edfc3a086-coredns-token-tcmql") pod "coredns-5644d7b6d9-hszmm" (UID: "6fbecfb6-722e-41e8-9cb1-8a2edfc3a086")
Oct 16 16:16:26 minikube kubelet[3323]: I1016 16:16:26.985340 3323 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/76d305b7-f70c-4ed6-a404-a0f2c3e94a21-kube-proxy") pod "kube-proxy-p28ld" (UID: "76d305b7-f70c-4ed6-a404-a0f2c3e94a21")
Oct 16 16:16:26 minikube kubelet[3323]: I1016 16:16:26.985399 3323 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/76d305b7-f70c-4ed6-a404-a0f2c3e94a21-lib-modules") pod "kube-proxy-p28ld" (UID: "76d305b7-f70c-4ed6-a404-a0f2c3e94a21")
Oct 16 16:16:27 minikube kubelet[3323]: W1016 16:16:27.906965 3323 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-5644d7b6d9-hszmm through plugin: invalid network status for
Oct 16 16:16:28 minikube kubelet[3323]: W1016 16:16:28.305921 3323 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-5644d7b6d9-vzl2b through plugin: invalid network status for
Oct 16 16:16:28 minikube kubelet[3323]: W1016 16:16:28.308121 3323 pod_container_deletor.go:75] Container "8b6c70fb358f5d09723c2609103e58b3d1a9a5dda63d7503c755ed7a8018d569" not found in pod's containers
Oct 16 16:16:28 minikube kubelet[3323]: W1016 16:16:28.313896 3323 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-5644d7b6d9-vzl2b through plugin: invalid network status for
Oct 16 16:16:28 minikube kubelet[3323]: W1016 16:16:28.317986 3323 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-5644d7b6d9-hszmm through plugin: invalid network status for
Oct 16 16:16:28 minikube kubelet[3323]: W1016 16:16:28.377179 3323 pod_container_deletor.go:75] Container "c0c7b1ccc9efad6538cdf585f4bbadd07e10072be330032d8a580ca5defd967a" not found in pod's containers
Oct 16 16:16:28 minikube kubelet[3323]: W1016 16:16:28.406224 3323 pod_container_deletor.go:75] Container "47eacc7354a46b166a71be5b3d75dd6a7ea64179f3ef4ed1c80dec8db9236a58" not found in pod's containers
Oct 16 16:16:28 minikube kubelet[3323]: I1016 16:16:28.828970 3323 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-w48v6" (UniqueName: "kubernetes.io/secret/79377bff-7cd3-45dd-ba54-a1cab12dcafe-storage-provisioner-token-w48v6") pod "storage-provisioner" (UID: "79377bff-7cd3-45dd-ba54-a1cab12dcafe")
Oct 16 16:16:28 minikube kubelet[3323]: I1016 16:16:28.829065 3323 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/79377bff-7cd3-45dd-ba54-a1cab12dcafe-tmp") pod "storage-provisioner" (UID: "79377bff-7cd3-45dd-ba54-a1cab12dcafe")
Oct 16 16:16:29 minikube kubelet[3323]: W1016 16:16:29.426927 3323 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-5644d7b6d9-hszmm through plugin: invalid network status for
Oct 16 16:16:29 minikube kubelet[3323]: W1016 16:16:29.447514 3323 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-5644d7b6d9-vzl2b through plugin: invalid network status for
Oct 16 16:16:32 minikube kubelet[3323]: E1016 16:16:32.196659 3323 reflector.go:123] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-t8gc8": Failed to list *v1.Secret: secrets "kubernetes-dashboard-token-t8gc8" is forbidden: User "system:node:minikube" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node "minikube" and this object
Oct 16 16:16:32 minikube kubelet[3323]: I1016 16:16:32.247967 3323 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/004ca19a-ae11-469b-9b52-920da71edf66-tmp-volume") pod "dashboard-metrics-scraper-76585494d8-c8k7p" (UID: "004ca19a-ae11-469b-9b52-920da71edf66")
Oct 16 16:16:32 minikube kubelet[3323]: I1016 16:16:32.248015 3323 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-t8gc8" (UniqueName: "kubernetes.io/secret/004ca19a-ae11-469b-9b52-920da71edf66-kubernetes-dashboard-token-t8gc8") pod "dashboard-metrics-scraper-76585494d8-c8k7p" (UID: "004ca19a-ae11-469b-9b52-920da71edf66")
Oct 16 16:16:32 minikube kubelet[3323]: I1016 16:16:32.350158 3323 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-t8gc8" (UniqueName: "kubernetes.io/secret/a53a1cfb-e59b-4a8b-bd9a-b9661f92b8ef-kubernetes-dashboard-token-t8gc8") pod "kubernetes-dashboard-57f4cb4545-s2t9k" (UID: "a53a1cfb-e59b-4a8b-bd9a-b9661f92b8ef")
Oct 16 16:16:32 minikube kubelet[3323]: I1016 16:16:32.350466 3323 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/a53a1cfb-e59b-4a8b-bd9a-b9661f92b8ef-tmp-volume") pod "kubernetes-dashboard-57f4cb4545-s2t9k" (UID: "a53a1cfb-e59b-4a8b-bd9a-b9661f92b8ef")
Oct 16 16:16:33 minikube kubelet[3323]: W1016 16:16:33.873248 3323 pod_container_deletor.go:75] Container "5603989f76250599b68ed51a3d97af6dc394223b5effea466706a739b180031c" not found in pod's containers
Oct 16 16:16:33 minikube kubelet[3323]: W1016 16:16:33.877490 3323 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-76585494d8-c8k7p through plugin: invalid network status for
Oct 16 16:16:33 minikube kubelet[3323]: W1016 16:16:33.948279 3323 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-57f4cb4545-s2t9k through plugin: invalid network status for
Oct 16 16:16:34 minikube kubelet[3323]: W1016 16:16:34.890735 3323 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-57f4cb4545-s2t9k through plugin: invalid network status for
Oct 16 16:16:34 minikube kubelet[3323]: W1016 16:16:34.922776 3323 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-76585494d8-c8k7p through plugin: invalid network status for
Oct 16 16:16:38 minikube kubelet[3323]: W1016 16:16:38.974625 3323 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-76585494d8-c8k7p through plugin: invalid network status for
==> kubernetes-dashboard [3b8a74f30112] <==
2019/10/16 16:16:34 Starting overwatch
2019/10/16 16:16:34 Using namespace: kubernetes-dashboard
2019/10/16 16:16:34 Using in-cluster config to connect to apiserver
2019/10/16 16:16:34 Using secret token for csrf signing
2019/10/16 16:16:34 Initializing csrf token from kubernetes-dashboard-csrf secret
2019/10/16 16:16:34 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2019/10/16 16:16:34 Successful initial request to the apiserver, version: v1.16.0
2019/10/16 16:16:34 Generating JWE encryption key
2019/10/16 16:16:34 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2019/10/16 16:16:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2019/10/16 16:16:34 Initializing JWE encryption key from synchronized object
2019/10/16 16:16:34 Creating in-cluster Sidecar client
2019/10/16 16:16:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2019/10/16 16:16:34 Serving insecurely on HTTP port: 9090
2019/10/16 16:17:04 Successful request to sidecar
==> storage-provisioner [7d0037532c25] <==