Created June 3, 2019 10:22
Gist: lukasheinrich/c92b2ff88c002a92b7b44f3f3775a8f1
2019-06-03T10:21:18.219819018Z stdout F hostIP = 172.17.0.2
2019-06-03T10:21:18.219840157Z stdout F podIP = 172.17.0.2
2019-06-03T10:21:48.213045063Z stderr F panic: Get https://10.96.0.1:443/api/v1/nodes: dial tcp 10.96.0.1:443: i/o timeout
2019-06-03T10:21:48.213157104Z stderr F
2019-06-03T10:21:48.213164689Z stderr F goroutine 1 [running]:
2019-06-03T10:21:48.213169299Z stderr F main.main()
2019-06-03T10:21:48.213174648Z stderr F /src/main.go:84 +0x423
2019-06-03T10:21:50.113805916Z stdout F hostIP = 172.17.0.2
2019-06-03T10:21:50.113869562Z stdout F podIP = 172.17.0.2
2019-06-03T10:21:50.222991854Z stdout F Handling node with IP: 172.17.0.4
2019-06-03T10:21:50.223026017Z stdout F Node kind-control-plane has CIDR 10.244.0.0/24
2019-06-03T10:21:50.224991705Z stdout F Adding route {Ifindex: 0 Dst: 10.244.0.0/24 Src: <nil> Gw: 172.17.0.4 Flags: [] Table: 0}
2019-06-03T10:21:50.225013428Z stdout F Handling node with IP: 172.17.0.2
2019-06-03T10:21:50.225019051Z stdout F handling current node
2019-06-03T10:21:50.230065812Z stdout F Handling node with IP: 172.17.0.3
2019-06-03T10:21:50.230086882Z stdout F Node kind-worker2 has CIDR 10.244.2.0/24
2019-06-03T10:21:50.230092244Z stdout F Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 172.17.0.3 Flags: [] Table: 0}
2019-06-03T10:22:00.233650479Z stdout F Handling node with IP: 172.17.0.4
2019-06-03T10:22:00.233681533Z stdout F Node kind-control-plane has CIDR 10.244.0.0/24
2019-06-03T10:22:00.233687819Z stdout F Handling node with IP: 172.17.0.2
2019-06-03T10:22:00.233692284Z stdout F handling current node
2019-06-03T10:22:00.233700353Z stdout F Handling node with IP: 172.17.0.3
2019-06-03T10:22:00.233704558Z stdout F Node kind-worker2 has CIDR 10.244.2.0/24
-- Logs begin at Mon 2019-06-03 10:20:11 UTC, end at Mon 2019-06-03 10:22:02 UTC. --
Jun 03 10:20:11 kind-worker systemd[1]: Starting containerd container runtime...
Jun 03 10:20:11 kind-worker systemd[1]: Started containerd container runtime.
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.458848831Z" level=info msg="starting containerd" revision= version=1.2.6-0ubuntu1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.460345368Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.460738379Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.460826538Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.461233215Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.461951002Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.462128112Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.462353010Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.462651642Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.462993598Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.463076365Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.463127770Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.463180579Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.463226308Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.463275315Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.463373688Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.463451280Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.465298352Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.465426785Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.465523697Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.465572384Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.465617595Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.465659897Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.465720169Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.465763568Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.465805401Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.465849697Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.465893946Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.465997425Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.466049155Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.466096650Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.466138891Z" level=info msg="loading plugin "io.containerd.grpc.v1.cri"..." type=io.containerd.grpc.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.466407227Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntime:{Type:io.containerd.runtime.v1.linux Engine: Root: Options:<nil>} UntrustedWorkloadRuntime:{Type: Engine: Root: Options:<nil>} Runtimes:map[] NoPivot:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginConfTemplate:} Registry:{Mirrors:map[docker.io:{Endpoints:[https://registry-1.docker.io]}] Auths:map[]} StreamServerAddress:127.0.0.1 StreamServerPort:0 EnableSelinux:false SandboxImage:k8s.gcr.io/pause:3.1 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.466503615Z" level=info msg="Connect containerd service"
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.466617701Z" level=info msg="Get image filesystem path "/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs""
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.468570267Z" level=error msg="Failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.469152387Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.469623173Z" level=info msg=serving... address="/run/containerd/containerd.sock"
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.469794691Z" level=info msg="containerd successfully booted in 0.039326s"
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.509143622Z" level=info msg="Start subscribing containerd event"
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.509762883Z" level=info msg="Start recovering state"
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.510856313Z" level=warning msg="The image docker.io/kindest/kindnetd:0.1.0 is not unpacked."
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.512860978Z" level=warning msg="The image k8s.gcr.io/coredns:1.3.1 is not unpacked."
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.515085573Z" level=warning msg="The image k8s.gcr.io/etcd:3.3.10 is not unpacked."
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.547039331Z" level=warning msg="The image k8s.gcr.io/ip-masq-agent:v2.4.1 is not unpacked."
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.547859217Z" level=warning msg="The image k8s.gcr.io/kube-apiserver:v1.14.2 is not unpacked."
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.548632367Z" level=warning msg="The image k8s.gcr.io/kube-controller-manager:v1.14.2 is not unpacked."
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.549205488Z" level=warning msg="The image k8s.gcr.io/kube-proxy:v1.14.2 is not unpacked."
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.553910800Z" level=warning msg="The image k8s.gcr.io/kube-scheduler:v1.14.2 is not unpacked."
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.554606381Z" level=warning msg="The image k8s.gcr.io/pause:3.1 is not unpacked."
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.556210263Z" level=warning msg="The image sha256:19bb968f77bba3a5b5f56b5c033d71f699c22bdc8bbe9412f0bfaf7f674a64cc is not unpacked."
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.559338487Z" level=warning msg="The image sha256:2c4adeb21b4ff8ed3309d0e42b6b4ae39872399f7b37e0856e673b13c4aba13d is not unpacked."
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.559865192Z" level=warning msg="The image sha256:5c24210246bb67af5f89150e947211a1c2a127fb3825eb18507c1039bc6e86f8 is not unpacked."
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.560246052Z" level=warning msg="The image sha256:5eeff402b659832b64b5634061eb3825008abb549e1d873faf3908beecea8dfc is not unpacked."
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.560600521Z" level=warning msg="The image sha256:8be94bdae1399076ac29223a7f10230011d195e355dfc7027fa02dc95d34065f is not unpacked."
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.560943892Z" level=warning msg="The image sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e is not unpacked."
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.561287745Z" level=warning msg="The image sha256:eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c is not unpacked."
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.580347326Z" level=warning msg="The image sha256:ee18f350636d8e51ebb3749d1d7a1928da1d6e6fc0051852a6686c19b706c57c is not unpacked."
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.580746775Z" level=warning msg="The image sha256:f227066bdc5f9aa2f8a9bb54854e5b7a23c6db8fce0f927e5c4feef8a9e74d46 is not unpacked."
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.580968028Z" level=info msg="Start event monitor"
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.580997306Z" level=info msg="Start snapshots syncer"
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.581003706Z" level=info msg="Start streaming server"
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.582003819Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:2c4adeb21b4ff8ed3309d0e42b6b4ae39872399f7b37e0856e673b13c4aba13d,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.582371065Z" level=info msg="ImageUpdate event &ImageUpdate{Name:k8s.gcr.io/etcd:3.3.10,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Jun 03 10:20:46 kind-worker containerd[51]: time="2019-06-03T10:20:46.793730628Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 03 10:21:03 kind-worker containerd[51]: time="2019-06-03T10:21:03.206068584Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 03 10:21:08 kind-worker containerd[51]: time="2019-06-03T10:21:08.468892897Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 03 10:21:13 kind-worker containerd[51]: time="2019-06-03T10:21:13.469838898Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 03 10:21:16 kind-worker containerd[51]: time="2019-06-03T10:21:16.113666463Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jun 03 10:21:16 kind-worker containerd[51]: time="2019-06-03T10:21:16.114273835Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 03 10:21:16 kind-worker containerd[51]: time="2019-06-03T10:21:16.371567119Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-proxy-kp2s9,Uid:51f164de-85e9-11e9-a310-0242ac110004,Namespace:kube-system,Attempt:0,}"
Jun 03 10:21:16 kind-worker containerd[51]: time="2019-06-03T10:21:16.422038975Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/eb60082817446f8d04db3da087fff56e5e5285b4c4e6880c7c92ff159261b093/shim.sock" debug=false pid=195
Jun 03 10:21:16 kind-worker containerd[51]: time="2019-06-03T10:21:16.477581857Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:ip-masq-agent-j4ssk,Uid:51f17b3e-85e9-11e9-a310-0242ac110004,Namespace:kube-system,Attempt:0,}"
Jun 03 10:21:16 kind-worker containerd[51]: time="2019-06-03T10:21:16.490963996Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kindnet-s2mkf,Uid:51f61a08-85e9-11e9-a310-0242ac110004,Namespace:kube-system,Attempt:0,}"
Jun 03 10:21:16 kind-worker containerd[51]: time="2019-06-03T10:21:16.491553732Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/1bcda36fce3673bc706148211d6c8b59647f136d14da8f03f73cc08fecafe9c6/shim.sock" debug=false pid=218
Jun 03 10:21:16 kind-worker containerd[51]: time="2019-06-03T10:21:16.503566831Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/aff44954e16e5fc6b0d3cf5b39c394c4c263812a077ce8412e0b553748a05eb5/shim.sock" debug=false pid=234
Jun 03 10:21:16 kind-worker containerd[51]: time="2019-06-03T10:21:16.574873164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kp2s9,Uid:51f164de-85e9-11e9-a310-0242ac110004,Namespace:kube-system,Attempt:0,} returns sandbox id "eb60082817446f8d04db3da087fff56e5e5285b4c4e6880c7c92ff159261b093""
Jun 03 10:21:16 kind-worker containerd[51]: time="2019-06-03T10:21:16.578613799Z" level=info msg="CreateContainer within sandbox "eb60082817446f8d04db3da087fff56e5e5285b4c4e6880c7c92ff159261b093" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jun 03 10:21:16 kind-worker containerd[51]: time="2019-06-03T10:21:16.672676348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:ip-masq-agent-j4ssk,Uid:51f17b3e-85e9-11e9-a310-0242ac110004,Namespace:kube-system,Attempt:0,} returns sandbox id "1bcda36fce3673bc706148211d6c8b59647f136d14da8f03f73cc08fecafe9c6""
Jun 03 10:21:16 kind-worker containerd[51]: time="2019-06-03T10:21:16.675141845Z" level=info msg="CreateContainer within sandbox "1bcda36fce3673bc706148211d6c8b59647f136d14da8f03f73cc08fecafe9c6" for container &ContainerMetadata{Name:ip-masq-agent,Attempt:0,}"
Jun 03 10:21:16 kind-worker containerd[51]: time="2019-06-03T10:21:16.846712975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-s2mkf,Uid:51f61a08-85e9-11e9-a310-0242ac110004,Namespace:kube-system,Attempt:0,} returns sandbox id "aff44954e16e5fc6b0d3cf5b39c394c4c263812a077ce8412e0b553748a05eb5""
Jun 03 10:21:16 kind-worker containerd[51]: time="2019-06-03T10:21:16.858813639Z" level=info msg="CreateContainer within sandbox "aff44954e16e5fc6b0d3cf5b39c394c4c263812a077ce8412e0b553748a05eb5" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
Jun 03 10:21:17 kind-worker containerd[51]: time="2019-06-03T10:21:17.729353573Z" level=info msg="CreateContainer within sandbox "aff44954e16e5fc6b0d3cf5b39c394c4c263812a077ce8412e0b553748a05eb5" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id "0c1882c769be39ccdd1cc482bea881c574c4621fb50c9f26286ca509ea51e7d5""
Jun 03 10:21:17 kind-worker containerd[51]: time="2019-06-03T10:21:17.738460150Z" level=info msg="StartContainer for "0c1882c769be39ccdd1cc482bea881c574c4621fb50c9f26286ca509ea51e7d5""
Jun 03 10:21:17 kind-worker containerd[51]: time="2019-06-03T10:21:17.742815877Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/0c1882c769be39ccdd1cc482bea881c574c4621fb50c9f26286ca509ea51e7d5/shim.sock" debug=false pid=361
Jun 03 10:21:18 kind-worker containerd[51]: time="2019-06-03T10:21:18.097116789Z" level=info msg="StartContainer for "0c1882c769be39ccdd1cc482bea881c574c4621fb50c9f26286ca509ea51e7d5" returns successfully"
Jun 03 10:21:18 kind-worker containerd[51]: time="2019-06-03T10:21:18.219503448Z" level=info msg="CreateContainer within sandbox "1bcda36fce3673bc706148211d6c8b59647f136d14da8f03f73cc08fecafe9c6" for &ContainerMetadata{Name:ip-masq-agent,Attempt:0,} returns container id "b20212643befb9ddba8c1f3820cd8fdf97b932373de9d493495d9689c01d2f91""
Jun 03 10:21:18 kind-worker containerd[51]: time="2019-06-03T10:21:18.220586055Z" level=info msg="StartContainer for "b20212643befb9ddba8c1f3820cd8fdf97b932373de9d493495d9689c01d2f91""
Jun 03 10:21:18 kind-worker containerd[51]: time="2019-06-03T10:21:18.221297651Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/b20212643befb9ddba8c1f3820cd8fdf97b932373de9d493495d9689c01d2f91/shim.sock" debug=false pid=413
Jun 03 10:21:18 kind-worker containerd[51]: time="2019-06-03T10:21:18.471093196Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 03 10:21:18 kind-worker containerd[51]: time="2019-06-03T10:21:18.494869041Z" level=info msg="StartContainer for "b20212643befb9ddba8c1f3820cd8fdf97b932373de9d493495d9689c01d2f91" returns successfully"
Jun 03 10:21:18 kind-worker containerd[51]: time="2019-06-03T10:21:18.703579165Z" level=info msg="CreateContainer within sandbox "eb60082817446f8d04db3da087fff56e5e5285b4c4e6880c7c92ff159261b093" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id "216901c038af5b9ea7e90c28a34dba5ed5798111fef9cb2f4701a0ec68bfff23""
Jun 03 10:21:18 kind-worker containerd[51]: time="2019-06-03T10:21:18.705293675Z" level=info msg="StartContainer for "216901c038af5b9ea7e90c28a34dba5ed5798111fef9cb2f4701a0ec68bfff23""
Jun 03 10:21:18 kind-worker containerd[51]: time="2019-06-03T10:21:18.708858899Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/216901c038af5b9ea7e90c28a34dba5ed5798111fef9cb2f4701a0ec68bfff23/shim.sock" debug=false pid=472
Jun 03 10:21:19 kind-worker containerd[51]: time="2019-06-03T10:21:19.326600760Z" level=info msg="StartContainer for "216901c038af5b9ea7e90c28a34dba5ed5798111fef9cb2f4701a0ec68bfff23" returns successfully"
Jun 03 10:21:23 kind-worker containerd[51]: time="2019-06-03T10:21:23.472425300Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 03 10:21:28 kind-worker containerd[51]: time="2019-06-03T10:21:28.473503092Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 03 10:21:33 kind-worker containerd[51]: time="2019-06-03T10:21:33.474557451Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 03 10:21:38 kind-worker containerd[51]: time="2019-06-03T10:21:38.475983340Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 03 10:21:43 kind-worker containerd[51]: time="2019-06-03T10:21:43.477191693Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 03 10:21:48 kind-worker containerd[51]: time="2019-06-03T10:21:48.231235474Z" level=info msg="Finish piping stderr of container "0c1882c769be39ccdd1cc482bea881c574c4621fb50c9f26286ca509ea51e7d5""
Jun 03 10:21:48 kind-worker containerd[51]: time="2019-06-03T10:21:48.231951351Z" level=info msg="Finish piping stdout of container "0c1882c769be39ccdd1cc482bea881c574c4621fb50c9f26286ca509ea51e7d5""
Jun 03 10:21:48 kind-worker containerd[51]: time="2019-06-03T10:21:48.272453869Z" level=info msg="TaskExit event &TaskExit{ContainerID:0c1882c769be39ccdd1cc482bea881c574c4621fb50c9f26286ca509ea51e7d5,ID:0c1882c769be39ccdd1cc482bea881c574c4621fb50c9f26286ca509ea51e7d5,Pid:379,ExitStatus:2,ExitedAt:2019-06-03 10:21:48.231824992 +0000 UTC,}"
Jun 03 10:21:48 kind-worker containerd[51]: time="2019-06-03T10:21:48.328015948Z" level=info msg="shim reaped" id=0c1882c769be39ccdd1cc482bea881c574c4621fb50c9f26286ca509ea51e7d5
Jun 03 10:21:48 kind-worker containerd[51]: time="2019-06-03T10:21:48.478591425Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 03 10:21:48 kind-worker containerd[51]: time="2019-06-03T10:21:48.798177502Z" level=info msg="CreateContainer within sandbox "aff44954e16e5fc6b0d3cf5b39c394c4c263812a077ce8412e0b553748a05eb5" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
Jun 03 10:21:49 kind-worker containerd[51]: time="2019-06-03T10:21:49.820243562Z" level=info msg="CreateContainer within sandbox "aff44954e16e5fc6b0d3cf5b39c394c4c263812a077ce8412e0b553748a05eb5" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id "88e666e8e4aff5a60087abd17e816e44f1bd0e43e616a65d1ec6978f262de45a""
Jun 03 10:21:49 kind-worker containerd[51]: time="2019-06-03T10:21:49.832841088Z" level=info msg="StartContainer for "88e666e8e4aff5a60087abd17e816e44f1bd0e43e616a65d1ec6978f262de45a""
Jun 03 10:21:49 kind-worker containerd[51]: time="2019-06-03T10:21:49.834039492Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/88e666e8e4aff5a60087abd17e816e44f1bd0e43e616a65d1ec6978f262de45a/shim.sock" debug=false pid=660
Jun 03 10:21:50 kind-worker containerd[51]: time="2019-06-03T10:21:50.134363502Z" level=info msg="StartContainer for "88e666e8e4aff5a60087abd17e816e44f1bd0e43e616a65d1ec6978f262de45a" returns successfully"
Jun 03 10:22:00 kind-worker containerd[51]: time="2019-06-03T10:22:00.204830712Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:hello-6d6586c69c-vvqlx,Uid:54dca26d-85e9-11e9-a310-0242ac110004,Namespace:default,Attempt:0,}"
Jun 03 10:22:00 kind-worker containerd[51]: time="2019-06-03T10:22:00.263049961Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/7e23ebf5d912b181c762d0f4f734178ec85ae890a0a76611a94d00ff38d620ed/shim.sock" debug=false pid=737
Jun 03 10:22:00 kind-worker containerd[51]: time="2019-06-03T10:22:00.410639457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:hello-6d6586c69c-vvqlx,Uid:54dca26d-85e9-11e9-a310-0242ac110004,Namespace:default,Attempt:0,} returns sandbox id "7e23ebf5d912b181c762d0f4f734178ec85ae890a0a76611a94d00ff38d620ed""
Jun 03 10:22:00 kind-worker containerd[51]: time="2019-06-03T10:22:00.414264663Z" level=info msg="PullImage "alpine:latest""
Jun 03 10:22:01 kind-worker containerd[51]: time="2019-06-03T10:22:01.553584163Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/library/alpine:latest,Labels:map[string]string{},}"
Jun 03 10:22:01 kind-worker containerd[51]: time="2019-06-03T10:22:01.561164261Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:055936d3920576da37aa9bc460d70c5f212028bda1c08c0879aedf03d7a66ea1,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Jun 03 10:22:01 kind-worker containerd[51]: time="2019-06-03T10:22:01.562052118Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/library/alpine:latest,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Jun 03 10:22:01 kind-worker containerd[51]: time="2019-06-03T10:22:01.709456287Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:055936d3920576da37aa9bc460d70c5f212028bda1c08c0879aedf03d7a66ea1,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Jun 03 10:22:01 kind-worker containerd[51]: time="2019-06-03T10:22:01.711844778Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/library/alpine:latest,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Jun 03 10:22:01 kind-worker containerd[51]: time="2019-06-03T10:22:01.713955263Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/library/alpine@sha256:769fddc7cc2f0a1c35abb2f91432e8beecf83916c421420e6a6da9f8975464b6,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Jun 03 10:22:01 kind-worker containerd[51]: time="2019-06-03T10:22:01.714349758Z" level=info msg="PullImage "alpine:latest" returns image reference "sha256:055936d3920576da37aa9bc460d70c5f212028bda1c08c0879aedf03d7a66ea1""
Jun 03 10:22:01 kind-worker containerd[51]: time="2019-06-03T10:22:01.719356458Z" level=info msg="CreateContainer within sandbox "7e23ebf5d912b181c762d0f4f734178ec85ae890a0a76611a94d00ff38d620ed" for container &ContainerMetadata{Name:hello,Attempt:0,}"
Jun 03 10:22:01 kind-worker containerd[51]: time="2019-06-03T10:22:01.770342126Z" level=info msg="CreateContainer within sandbox "7e23ebf5d912b181c762d0f4f734178ec85ae890a0a76611a94d00ff38d620ed" for &ContainerMetadata{Name:hello,Attempt:0,} returns container id "76d72017a2172ccfe1653ea0b44d5ed96b52a7f75b6fef0653af13afa90df59c""
Jun 03 10:22:01 kind-worker containerd[51]: time="2019-06-03T10:22:01.771556889Z" level=info msg="StartContainer for "76d72017a2172ccfe1653ea0b44d5ed96b52a7f75b6fef0653af13afa90df59c""
Jun 03 10:22:01 kind-worker containerd[51]: time="2019-06-03T10:22:01.772951939Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/76d72017a2172ccfe1653ea0b44d5ed96b52a7f75b6fef0653af13afa90df59c/shim.sock" debug=false pid=788
Jun 03 10:22:01 kind-worker containerd[51]: time="2019-06-03T10:22:01.928688408Z" level=info msg="StartContainer for "76d72017a2172ccfe1653ea0b44d5ed96b52a7f75b6fef0653af13afa90df59c" returns successfully"
Jun 03 10:22:02 kind-worker containerd[51]: time="2019-06-03T10:22:02.208916455Z" level=info msg="Attach for "76d72017a2172ccfe1653ea0b44d5ed96b52a7f75b6fef0653af13afa90df59c" with tty true and stdin true"
Jun 03 10:22:02 kind-worker containerd[51]: time="2019-06-03T10:22:02.209020697Z" level=info msg="Attach for "76d72017a2172ccfe1653ea0b44d5ed96b52a7f75b6fef0653af13afa90df59c" returns URL "http://127.0.0.1:35752/attach/UVD12SgG""
Jun 03 10:22:02 kind-worker containerd[51]: time="2019-06-03T10:22:02.424458004Z" level=info msg="Finish piping stdout of container "76d72017a2172ccfe1653ea0b44d5ed96b52a7f75b6fef0653af13afa90df59c""
Jun 03 10:22:02 kind-worker containerd[51]: time="2019-06-03T10:22:02.424700235Z" level=info msg="Attach stream "76d72017a2172ccfe1653ea0b44d5ed96b52a7f75b6fef0653af13afa90df59c-attach-d09da7617401192029166757ed95e5c6eac264bed66b4f0408af7728e420da6b-stdout" closed"
Jun 03 10:22:02 kind-worker containerd[51]: time="2019-06-03T10:22:02.424890193Z" level=info msg="Attach stream "76d72017a2172ccfe1653ea0b44d5ed96b52a7f75b6fef0653af13afa90df59c-attach-d09da7617401192029166757ed95e5c6eac264bed66b4f0408af7728e420da6b-stdin" closed"
Jun 03 10:22:02 kind-worker containerd[51]: time="2019-06-03T10:22:02.481653631Z" level=info msg="TaskExit event &TaskExit{ContainerID:76d72017a2172ccfe1653ea0b44d5ed96b52a7f75b6fef0653af13afa90df59c,ID:76d72017a2172ccfe1653ea0b44d5ed96b52a7f75b6fef0653af13afa90df59c,Pid:805,ExitStatus:0,ExitedAt:2019-06-03 10:22:02.423456752 +0000 UTC,}"
Jun 03 10:22:02 kind-worker containerd[51]: time="2019-06-03T10:22:02.543754510Z" level=info msg="shim reaped" id=76d72017a2172ccfe1653ea0b44d5ed96b52a7f75b6fef0653af13afa90df59c
2019-06-03T10:21:43.295920253Z stdout F .:53 | |
2019-06-03T10:21:43.295986183Z stdout F 2019-06-03T10:21:43.292Z [INFO] CoreDNS-1.3.1 | |
2019-06-03T10:21:43.295994294Z stdout F 2019-06-03T10:21:43.292Z [INFO] linux/amd64, go1.11.4, 6b56a9c | |
2019-06-03T10:21:43.296000629Z stdout F CoreDNS-1.3.1 | |
2019-06-03T10:21:43.296005262Z stdout F linux/amd64, go1.11.4, 6b56a9c | |
2019-06-03T10:21:43.296010809Z stdout F 2019-06-03T10:21:43.292Z [INFO] plugin/reload: Running configuration MD5 = 599b9eb76b8c147408aed6a0bbe0f669 |
2019-06-03T10:21:43.788392368Z stdout F .:53 | |
2019-06-03T10:21:43.788446146Z stdout F 2019-06-03T10:21:43.787Z [INFO] CoreDNS-1.3.1 | |
2019-06-03T10:21:43.788459589Z stdout F 2019-06-03T10:21:43.787Z [INFO] linux/amd64, go1.11.4, 6b56a9c | |
2019-06-03T10:21:43.7884673Z stdout F CoreDNS-1.3.1 | |
2019-06-03T10:21:43.788471826Z stdout F linux/amd64, go1.11.4, 6b56a9c | |
2019-06-03T10:21:43.788479432Z stdout F 2019-06-03T10:21:43.787Z [INFO] plugin/reload: Running configuration MD5 = 599b9eb76b8c147408aed6a0bbe0f669 |
Containers: 3 | |
Running: 3 | |
Paused: 0 | |
Stopped: 0 | |
Images: 12 | |
Server Version: 17.09.0-ce | |
Storage Driver: overlay2 | |
Backing Filesystem: extfs | |
Supports d_type: true | |
Native Overlay Diff: true | |
Logging Driver: json-file | |
Cgroup Driver: cgroupfs | |
Plugins: | |
Volume: local | |
Network: bridge host macvlan null overlay | |
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog | |
Swarm: inactive | |
Runtimes: runc | |
Default Runtime: runc | |
Init Binary: docker-init | |
containerd version: 06b9cb35161009dcb7123345749fef02f7cea8e0 | |
runc version: 3f2f8b84a77f73d38244dd690525642a72156c64 | |
init version: 949e6fa | |
Kernel Version: 4.4.0-101-generic | |
Operating System: Ubuntu 14.04.5 LTS | |
OSType: linux | |
Architecture: x86_64 | |
CPUs: 2 | |
Total Memory: 7.305GiB | |
Name: travis-job-ebdd1480-95f7-4860-af04-cccf93bafb1e | |
ID: DH3M:23FP:35CF:LCVT:ROBH:CV5W:C5W2:JSP4:7G7W:NH4L:6FOS:WJOW | |
Docker Root Dir: /var/lib/docker | |
Debug Mode (client): false | |
Debug Mode (server): false | |
Registry: https://index.docker.io/v1/ | |
Experimental: false | |
Insecure Registries: | |
127.0.0.0/8 | |
Live Restore Enabled: false | |
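The block above is the captured `docker info` for the CI host. Its indented `Key: Value` layout can be folded into a dict for scripted environment checks; the parser below is an illustrative sketch, not part of the capture, and assumes one space of indentation per nesting level, as in the output above.

```python
def parse_docker_info(text):
    """Parse `docker info`-style indented 'Key: Value' text into a nested dict.

    Lines ending in a bare colon (e.g. 'Plugins:') open a nested section;
    deeper indentation attaches keys to the most recent open section.
    """
    root, stack = {}, []  # stack holds (indent, dict) for open sections
    for line in text.splitlines():
        if not line.strip():
            continue
        indent = len(line) - len(line.lstrip())
        key, _, value = line.strip().partition(":")
        # close sections that are not ancestors of this line
        while stack and stack[-1][0] >= indent:
            stack.pop()
        parent = stack[-1][1] if stack else root
        if value.strip():
            parent[key] = value.strip()
        else:
            parent[key] = {}
            stack.append((indent, parent[key]))
    return root
```

For example, `parse_docker_info(captured_text)["Storage Driver"]` would return `overlay2` for the capture above.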
2019-06-03T10:20:35.849765708Z stderr F 2019-06-03 10:20:35.849572 I | etcdmain: etcd Version: 3.3.10 | |
2019-06-03T10:20:35.849873114Z stderr F 2019-06-03 10:20:35.849839 I | etcdmain: Git SHA: 27fc7e2 | |
2019-06-03T10:20:35.849943199Z stderr F 2019-06-03 10:20:35.849893 I | etcdmain: Go Version: go1.10.4 | |
2019-06-03T10:20:35.849984358Z stderr F 2019-06-03 10:20:35.849963 I | etcdmain: Go OS/Arch: linux/amd64 | |
2019-06-03T10:20:35.850050743Z stderr F 2019-06-03 10:20:35.850018 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2 | |
2019-06-03T10:20:35.850193043Z stderr F 2019-06-03 10:20:35.850160 I | embed: peerTLS: cert = /etc/kubernetes/pki/etcd/peer.crt, key = /etc/kubernetes/pki/etcd/peer.key, ca = , trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file = | |
2019-06-03T10:20:35.851539718Z stderr F 2019-06-03 10:20:35.851458 I | embed: listening for peers on https://172.17.0.4:2380 | |
2019-06-03T10:20:35.85167188Z stderr F 2019-06-03 10:20:35.851622 I | embed: listening for client requests on 127.0.0.1:2379 | |
2019-06-03T10:20:35.851752451Z stderr F 2019-06-03 10:20:35.851710 I | embed: listening for client requests on 172.17.0.4:2379 | |
2019-06-03T10:20:35.856492387Z stderr F 2019-06-03 10:20:35.856420 I | etcdserver: name = kind-control-plane | |
2019-06-03T10:20:35.856627734Z stderr F 2019-06-03 10:20:35.856580 I | etcdserver: data dir = /var/lib/etcd | |
2019-06-03T10:20:35.856676058Z stderr F 2019-06-03 10:20:35.856653 I | etcdserver: member dir = /var/lib/etcd/member | |
2019-06-03T10:20:35.856737446Z stderr F 2019-06-03 10:20:35.856711 I | etcdserver: heartbeat = 100ms | |
2019-06-03T10:20:35.856783617Z stderr F 2019-06-03 10:20:35.856754 I | etcdserver: election = 1000ms | |
2019-06-03T10:20:35.85683453Z stderr F 2019-06-03 10:20:35.856809 I | etcdserver: snapshot count = 10000 | |
2019-06-03T10:20:35.856904456Z stderr F 2019-06-03 10:20:35.856862 I | etcdserver: advertise client URLs = https://172.17.0.4:2379 | |
2019-06-03T10:20:35.856956982Z stderr F 2019-06-03 10:20:35.856931 I | etcdserver: initial advertise peer URLs = https://172.17.0.4:2380 | |
2019-06-03T10:20:35.857021334Z stderr F 2019-06-03 10:20:35.856978 I | etcdserver: initial cluster = kind-control-plane=https://172.17.0.4:2380 | |
2019-06-03T10:20:35.860960941Z stderr F 2019-06-03 10:20:35.860898 I | etcdserver: starting member 40fd14fa28910cab in cluster a6ea9ad1b116d02f | |
2019-06-03T10:20:35.86107481Z stderr F 2019-06-03 10:20:35.861034 I | raft: 40fd14fa28910cab became follower at term 0 | |
2019-06-03T10:20:35.861134481Z stderr F 2019-06-03 10:20:35.861103 I | raft: newRaft 40fd14fa28910cab [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0] | |
2019-06-03T10:20:35.861187836Z stderr F 2019-06-03 10:20:35.861162 I | raft: 40fd14fa28910cab became follower at term 1 | |
2019-06-03T10:20:35.871343105Z stderr F 2019-06-03 10:20:35.871266 W | auth: simple token is not cryptographically signed | |
2019-06-03T10:20:35.87541606Z stderr F 2019-06-03 10:20:35.875350 I | etcdserver: starting server... [version: 3.3.10, cluster version: to_be_decided] | |
2019-06-03T10:20:35.878683126Z stderr F 2019-06-03 10:20:35.878624 I | embed: ClientTLS: cert = /etc/kubernetes/pki/etcd/server.crt, key = /etc/kubernetes/pki/etcd/server.key, ca = , trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file = | |
2019-06-03T10:20:35.87887987Z stderr F 2019-06-03 10:20:35.878836 I | etcdserver: 40fd14fa28910cab as single-node; fast-forwarding 9 ticks (election ticks 10) | |
2019-06-03T10:20:35.879287206Z stderr F 2019-06-03 10:20:35.879231 I | etcdserver/membership: added member 40fd14fa28910cab [https://172.17.0.4:2380] to cluster a6ea9ad1b116d02f | |
2019-06-03T10:20:36.561675946Z stderr F 2019-06-03 10:20:36.561517 I | raft: 40fd14fa28910cab is starting a new election at term 1 | |
2019-06-03T10:20:36.561733915Z stderr F 2019-06-03 10:20:36.561560 I | raft: 40fd14fa28910cab became candidate at term 2 | |
2019-06-03T10:20:36.561740593Z stderr F 2019-06-03 10:20:36.561588 I | raft: 40fd14fa28910cab received MsgVoteResp from 40fd14fa28910cab at term 2 | |
2019-06-03T10:20:36.561745728Z stderr F 2019-06-03 10:20:36.561604 I | raft: 40fd14fa28910cab became leader at term 2 | |
2019-06-03T10:20:36.561815152Z stderr F 2019-06-03 10:20:36.561632 I | raft: raft.node: 40fd14fa28910cab elected leader 40fd14fa28910cab at term 2 | |
2019-06-03T10:20:36.561931668Z stderr F 2019-06-03 10:20:36.561864 I | etcdserver: setting up the initial cluster version to 3.3 | |
2019-06-03T10:20:36.563014064Z stderr F 2019-06-03 10:20:36.562927 N | etcdserver/membership: set the initial cluster version to 3.3 | |
2019-06-03T10:20:36.563031729Z stderr F 2019-06-03 10:20:36.562974 I | etcdserver/api: enabled capabilities for version 3.3 | |
2019-06-03T10:20:36.563081562Z stderr F 2019-06-03 10:20:36.563046 I | etcdserver: published {Name:kind-control-plane ClientURLs:[https://172.17.0.4:2379]} to cluster a6ea9ad1b116d02f | |
2019-06-03T10:20:36.563157962Z stderr F 2019-06-03 10:20:36.563105 I | embed: ready to serve client requests | |
2019-06-03T10:20:36.563412921Z stderr F 2019-06-03 10:20:36.563325 I | embed: ready to serve client requests | |
2019-06-03T10:20:36.565754499Z stderr F 2019-06-03 10:20:36.565580 I | embed: serving client requests on 172.17.0.4:2379 | |
2019-06-03T10:20:36.572647676Z stderr F 2019-06-03 10:20:36.572570 I | embed: serving client requests on 127.0.0.1:2379 | |
2019-06-03T10:20:40.994827346Z stderr F proto: no coders for int | |
2019-06-03T10:20:40.996364374Z stderr F proto: no encoder for ValueSize int [GetProperties] | |
2019-06-03T10:21:02.967911978Z stderr F 2019-06-03 10:21:02.911135 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (131.26183ms) to execute | |
2019-06-03T10:21:03.806502777Z stderr F 2019-06-03 10:21:03.771166 W | etcdserver: request "header:<ID:912941121148328131 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/ip-masq-agent-d84kg.15a4a9117b3e274c\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/ip-masq-agent-d84kg.15a4a9117b3e274c\" value_size:366 lease:912941121148327661 >> failure:<>>" with result "size:16" took too long (196.200526ms) to execute | |
2019-06-03T10:21:09.629250978Z stderr F 2019-06-03 10:21:09.629081 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:458" took too long (231.553758ms) to execute | |
2019-06-03T10:21:09.629344363Z stderr F 2019-06-03 10:21:09.629245 W | etcdserver: read-only range request "key:\"/registry/minions/kind-worker\" " with result "range_response_count:0 size:5" took too long (185.230267ms) to execute | |
2019-06-03T10:21:09.629579245Z stderr F 2019-06-03 10:21:09.629496 W | etcdserver: read-only range request "key:\"/registry/minions/kind-worker2\" " with result "range_response_count:0 size:5" took too long (197.808796ms) to execute | |
2019-06-03T10:21:10.18625562Z stderr F 2019-06-03 10:21:10.186114 W | etcdserver: read-only range request "key:\"/registry/minions/kind-worker\" " with result "range_response_count:0 size:5" took too long (242.317212ms) to execute | |
2019-06-03T10:21:10.190051964Z stderr F 2019-06-03 10:21:10.188199 W | etcdserver: read-only range request "key:\"/registry/minions/kind-worker2\" " with result "range_response_count:0 size:5" took too long (255.697386ms) to execute | |
2019-06-03T10:21:10.639472764Z stderr F 2019-06-03 10:21:10.639073 W | etcdserver: read-only range request "key:\"/registry/minions/kind-worker\" " with result "range_response_count:0 size:5" took too long (195.246824ms) to execute | |
2019-06-03T10:21:10.639910531Z stderr F 2019-06-03 10:21:10.639843 W | etcdserver: read-only range request "key:\"/registry/minions/kind-worker2\" " with result "range_response_count:0 size:5" took too long (208.508695ms) to execute | |
2019-06-03T10:21:12.214343279Z stderr F 2019-06-03 10:21:12.214200 W | etcdserver: read-only range request "key:\"/registry/minions/kind-worker2\" " with result "range_response_count:0 size:5" took too long (283.053152ms) to execute | |
2019-06-03T10:21:12.214806137Z stderr F 2019-06-03 10:21:12.214751 W | etcdserver: read-only range request "key:\"/registry/minions/kind-worker\" " with result "range_response_count:0 size:5" took too long (269.782812ms) to execute | |
2019-06-03T10:21:12.877071333Z stderr F 2019-06-03 10:21:12.876892 W | etcdserver: read-only range request "key:\"/registry/minions/kind-worker\" " with result "range_response_count:0 size:5" took too long (433.08992ms) to execute | |
2019-06-03T10:21:12.877179142Z stderr F 2019-06-03 10:21:12.877130 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (106.814013ms) to execute | |
2019-06-03T10:21:12.877676066Z stderr F 2019-06-03 10:21:12.877588 W | etcdserver: read-only range request "key:\"/registry/minions/kind-worker2\" " with result "range_response_count:0 size:5" took too long (445.238232ms) to execute | |
2019-06-03T10:21:49.371873489Z stderr F 2019-06-03 10:21:49.371705 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-fb8b8dccf-vpjb6\" " with result "range_response_count:1 size:1838" took too long (497.167785ms) to execute | |
2019-06-03T10:21:49.771017302Z stderr F 2019-06-03 10:21:49.770714 W | etcdserver: read-only range request "key:\"/registry/resourcequotas\" range_end:\"/registry/resourcequotat\" count_only:true " with result "range_response_count:0 size:5" took too long (394.398094ms) to execute | |
2019-06-03T10:21:49.7724428Z stderr F 2019-06-03 10:21:49.772343 W | etcdserver: request "header:<ID:912941121148328620 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/coredns-fb8b8dccf-vpjb6\" mod_revision:593 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-fb8b8dccf-vpjb6\" value_size:1645 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-fb8b8dccf-vpjb6\" > >>" with result "size:16" took too long (227.286715ms) to execute | |
2019-06-03T10:21:49.776252468Z stderr F 2019-06-03 10:21:49.775530 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/kindnet-zj8wn.15a4a914bdc89344\" " with result "range_response_count:1 size:473" took too long (387.729868ms) to execute | |
2019-06-03T10:21:49.779395547Z stderr F 2019-06-03 10:21:49.778877 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kindnet-zj8wn\" " with result "range_response_count:1 size:1912" took too long (390.617077ms) to execute |
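The repeated `took too long` entries above are etcd's slow-request warnings; when triaging disk or CPU pressure it helps to rank them by the reported latency. A minimal sketch for doing that (the helper name is illustrative, not from the logs):

```python
import re

# etcd embeds the measured latency as "took too long (<N>ms) to execute"
SLOW_RE = re.compile(r"took too long \(([\d.]+)ms\) to execute")

def slow_requests(lines):
    """Return (latency_ms, line) pairs for slow-request warnings, slowest first."""
    hits = [(float(m.group(1)), line)
            for line in lines
            if (m := SLOW_RE.search(line))]
    return sorted(hits, reverse=True)
```

Running this over the capture above would put the 497ms CoreDNS pod read at the top.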
2019-06-03T10:22:01.913529264Z stdout F fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/main/x86_64/APKINDEX.tar.gz | |
2019-06-03T10:22:02.064670275Z stdout F fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/community/x86_64/APKINDEX.tar.gz | |
2019-06-03T10:22:02.174816167Z stdout F (1/5) Installing ca-certificates (20190108-r0) | |
2019-06-03T10:22:02.207569349Z stdout F (2/5) Installing nghttp2-libs (1.35.1-r0) | |
2019-06-03T10:22:02.220471109Z stdout F (3/5) Installing libssh2 (1.8.2-r0) | |
2019-06-03T10:22:02.233324267Z stdout F (4/5) Installing libcurl (7.64.0-r1) | |
2019-06-03T10:22:02.249941559Z stdout F (5/5) Installing curl (7.64.0-r1) | |
2019-06-03T10:22:02.263465664Z stdout F Executing busybox-1.29.3-r10.trigger | |
2019-06-03T10:22:02.267596428Z stdout F Executing ca-certificates-20190108-r0.trigger | |
2019-06-03T10:22:02.300729438Z stdout F OK: 7 MiB in 19 packages | |
2019-06-03T10:22:02.409040998Z stdout F <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8"> | |
2019-06-03T10:22:02.409149631Z stdout F <TITLE>301 Moved</TITLE></HEAD><BODY> | |
2019-06-03T10:22:02.40915611Z stdout F <H1>301 Moved</H1> | |
2019-06-03T10:22:02.409160643Z stdout F The document has moved | |
2019-06-03T10:22:02.409163661Z stdout F <A HREF="http://www.google.com/">here</A>. | |
2019-06-03T10:22:02.409167442Z stdout F </BODY></HTML> |
[ | |
{ | |
"Id": "03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee", | |
"Created": "2019-06-03T10:19:19.707544134Z", | |
"Path": "/usr/local/bin/entrypoint", | |
"Args": [ | |
"/sbin/init" | |
], | |
"State": { | |
"Status": "running", | |
"Running": true, | |
"Paused": false, | |
"Restarting": false, | |
"OOMKilled": false, | |
"Dead": false, | |
"Pid": 11529, | |
"ExitCode": 0, | |
"Error": "", | |
"StartedAt": "2019-06-03T10:20:11.087841948Z", | |
"FinishedAt": "0001-01-01T00:00:00Z" | |
}, | |
"Image": "sha256:1714e84e31af8a0689f5e7a00fd4457df3cda616c56d25b095717332a57e29b5", | |
"ResolvConfPath": "/var/lib/docker/containers/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/resolv.conf", | |
"HostnamePath": "/var/lib/docker/containers/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/hostname", | |
"HostsPath": "/var/lib/docker/containers/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/hosts", | |
"LogPath": "/var/lib/docker/containers/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee-json.log", | |
"Name": "/kind-worker", | |
"RestartCount": 0, | |
"Driver": "overlay2", | |
"Platform": "linux", | |
"MountLabel": "", | |
"ProcessLabel": "", | |
"AppArmorProfile": "", | |
"ExecIDs": [ | |
"b018c533e34e7b3cdd5f5d6c0d8f9a55ec3fab361c2994b5f7d7a8cab4fba69c", | |
"8be8a1df6adae728041db7f219e9dc205491b0464b6c8540562cd20a3f54d919", | |
"715daa884b2019da78f81444ddefcefa35258b1191989c0d863071d719f94aba", | |
"07c89d896469a806722f968df0405d323806e9b099cba4c0a6bac76837975f7e" | |
], | |
"HostConfig": { | |
"Binds": [ | |
"/lib/modules:/lib/modules:ro" | |
], | |
"ContainerIDFile": "", | |
"LogConfig": { | |
"Type": "json-file", | |
"Config": {} | |
}, | |
"NetworkMode": "default", | |
"PortBindings": {}, | |
"RestartPolicy": { | |
"Name": "no", | |
"MaximumRetryCount": 0 | |
}, | |
"AutoRemove": false, | |
"VolumeDriver": "", | |
"VolumesFrom": null, | |
"CapAdd": null, | |
"CapDrop": null, | |
"Dns": [], | |
"DnsOptions": [], | |
"DnsSearch": [], | |
"ExtraHosts": null, | |
"GroupAdd": null, | |
"IpcMode": "shareable", | |
"Cgroup": "", | |
"Links": null, | |
"OomScoreAdj": 0, | |
"PidMode": "", | |
"Privileged": true, | |
"PublishAllPorts": false, | |
"ReadonlyRootfs": false, | |
"SecurityOpt": [ | |
"seccomp=unconfined", | |
"label=disable" | |
], | |
"Tmpfs": { | |
"/run": "", | |
"/tmp": "" | |
}, | |
"UTSMode": "", | |
"UsernsMode": "", | |
"ShmSize": 67108864, | |
"Runtime": "runc", | |
"ConsoleSize": [ | |
0, | |
0 | |
], | |
"Isolation": "", | |
"CpuShares": 0, | |
"Memory": 0, | |
"NanoCpus": 0, | |
"CgroupParent": "", | |
"BlkioWeight": 0, | |
"BlkioWeightDevice": [], | |
"BlkioDeviceReadBps": null, | |
"BlkioDeviceWriteBps": null, | |
"BlkioDeviceReadIOps": null, | |
"BlkioDeviceWriteIOps": null, | |
"CpuPeriod": 0, | |
"CpuQuota": 0, | |
"CpuRealtimePeriod": 0, | |
"CpuRealtimeRuntime": 0, | |
"CpusetCpus": "", | |
"CpusetMems": "", | |
"Devices": [], | |
"DeviceCgroupRules": null, | |
"DiskQuota": 0, | |
"KernelMemory": 0, | |
"MemoryReservation": 0, | |
"MemorySwap": 0, | |
"MemorySwappiness": null, | |
"OomKillDisable": false, | |
"PidsLimit": 0, | |
"Ulimits": null, | |
"CpuCount": 0, | |
"CpuPercent": 0, | |
"IOMaximumIOps": 0, | |
"IOMaximumBandwidth": 0 | |
}, | |
"GraphDriver": { | |
"Data": { | |
"LowerDir": "/var/lib/docker/overlay2/6c967903f46515bd6d908ecc0039e7453dc4de2766d98c2e7721f8cbef8e1ab1-init/diff:/var/lib/docker/overlay2/39fb75e9939427abf32246b837f4059df79a2f16d94825347116564763795451/diff:/var/lib/docker/overlay2/982840c333edee9c238785dcad0fe362568fa13e0ec6a965922cd774c06bb381/diff:/var/lib/docker/overlay2/fd78971bb9e8d761146c6cdcd2abe7aaeee36d3cfe287f5068ad023a53a8028f/diff:/var/lib/docker/overlay2/f283346766ed8bf9cb50c4e88040e0e1609f2fa6b08d636e29af6aeeacc754fc/diff:/var/lib/docker/overlay2/a1e768c939cd84e105c9ef231a294061ff3cac8562b5039e444e31f5f63e268d/diff:/var/lib/docker/overlay2/42c7f1140f56383aa6ce452644ff0d70cc67566853cbf060ae5ee8972fac79ea/diff:/var/lib/docker/overlay2/ae4ae2382314931059619deeba4a0112b4866d89c23006c84be3696934c5972c/diff:/var/lib/docker/overlay2/f9ec8e6d5e030b031e9af2a74dfcd0816f79a90c8f2f68332408e6c125a76da2/diff:/var/lib/docker/overlay2/991d51a8af5db3ce82655079367a9950736fd069327aa0dfa84868bd6166bc35/diff:/var/lib/docker/overlay2/21a850ebe5fbc1297788d35d0109df2fa0834d4c5dbc4a2b577194453ab2c2d7/diff:/var/lib/docker/overlay2/e1d2f39d2474320a79c1b12a0e0ee3cfb2942f5e7bb053d5352bdf42379463a8/diff:/var/lib/docker/overlay2/cf462d1cf0e5c8b659d84595de67c31930ccbb154116d7931ca4ec8d01b5dba0/diff:/var/lib/docker/overlay2/30e5a53cba8d39f7bb8e4f91aacdeb90f8eeab1198e51eb62e28ae19229ac6d0/diff", | |
"MergedDir": "/var/lib/docker/overlay2/6c967903f46515bd6d908ecc0039e7453dc4de2766d98c2e7721f8cbef8e1ab1/merged", | |
"UpperDir": "/var/lib/docker/overlay2/6c967903f46515bd6d908ecc0039e7453dc4de2766d98c2e7721f8cbef8e1ab1/diff", | |
"WorkDir": "/var/lib/docker/overlay2/6c967903f46515bd6d908ecc0039e7453dc4de2766d98c2e7721f8cbef8e1ab1/work" | |
}, | |
"Name": "overlay2" | |
}, | |
"Mounts": [ | |
{ | |
"Type": "bind", | |
"Source": "/lib/modules", | |
"Destination": "/lib/modules", | |
"Mode": "ro", | |
"RW": false, | |
"Propagation": "rprivate" | |
}, | |
{ | |
"Type": "volume", | |
"Name": "d2727358956ca7da2267ba7bbdbbd1d4f4b4cbd4cad333fca75209524aefb713", | |
"Source": "/var/lib/docker/volumes/d2727358956ca7da2267ba7bbdbbd1d4f4b4cbd4cad333fca75209524aefb713/_data", | |
"Destination": "/var/lib/containerd", | |
"Driver": "local", | |
"Mode": "", | |
"RW": true, | |
"Propagation": "" | |
} | |
], | |
"Config": { | |
"Hostname": "kind-worker", | |
"Domainname": "", | |
"User": "", | |
"AttachStdin": false, | |
"AttachStdout": false, | |
"AttachStderr": false, | |
"Tty": true, | |
"OpenStdin": false, | |
"StdinOnce": false, | |
"Env": [ | |
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", | |
"container=docker" | |
], | |
"Cmd": null, | |
"Image": "kindest/node:latest", | |
"Volumes": { | |
"/var/lib/containerd": {} | |
}, | |
"WorkingDir": "", | |
"Entrypoint": [ | |
"/usr/local/bin/entrypoint", | |
"/sbin/init" | |
], | |
"OnBuild": null, | |
"Labels": { | |
"io.k8s.sigs.kind.build": "2019-06-03T10:17:14.570951875Z", | |
"io.k8s.sigs.kind.cluster": "kind", | |
"io.k8s.sigs.kind.role": "worker" | |
}, | |
"StopSignal": "SIGRTMIN+3" | |
}, | |
"NetworkSettings": { | |
"Bridge": "", | |
"SandboxID": "15f6d793e5029694d7118276e6816bde07396acc76430e27c40ab15d73b32080", | |
"HairpinMode": false, | |
"LinkLocalIPv6Address": "", | |
"LinkLocalIPv6PrefixLen": 0, | |
"Ports": {}, | |
"SandboxKey": "/var/run/docker/netns/15f6d793e502", | |
"SecondaryIPAddresses": null, | |
"SecondaryIPv6Addresses": null, | |
"EndpointID": "defcefc4aeb6988de9ccb53a3dea2ce78225c0a6a29eefa3fa833612c19080f4", | |
"Gateway": "172.17.0.1", | |
"GlobalIPv6Address": "", | |
"GlobalIPv6PrefixLen": 0, | |
"IPAddress": "172.17.0.2", | |
"IPPrefixLen": 16, | |
"IPv6Gateway": "", | |
"MacAddress": "02:42:ac:11:00:02", | |
"Networks": { | |
"bridge": { | |
"IPAMConfig": null, | |
"Links": null, | |
"Aliases": null, | |
"NetworkID": "87013fc51355e0d7e372b12315c7a62585796e555edecf0ea33a1d4d92bd2a5b", | |
"EndpointID": "defcefc4aeb6988de9ccb53a3dea2ce78225c0a6a29eefa3fa833612c19080f4", | |
"Gateway": "172.17.0.1", | |
"IPAddress": "172.17.0.2", | |
"IPPrefixLen": 16, | |
"IPv6Gateway": "", | |
"GlobalIPv6Address": "", | |
"GlobalIPv6PrefixLen": 0, | |
"MacAddress": "02:42:ac:11:00:02", | |
"DriverOpts": null | |
} | |
} | |
} | |
} | |
] |
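The inspect output above is a JSON array with one object per container, so the fields most useful while debugging this networking issue (name, IP, state, image) can be pulled out with the stdlib `json` module. The field names below match the capture; the helper itself is a sketch, not part of the original gist.

```python
import json

def summarize(inspect_json):
    """Reduce `docker inspect` JSON to the fields relevant for network debugging."""
    return [
        {
            "name": c["Name"],
            "ip": c["NetworkSettings"]["IPAddress"],
            "status": c["State"]["Status"],
            "image": c["Config"]["Image"],
        }
        for c in json.loads(inspect_json)
    ]
```

Applied to the capture above this yields `/kind-worker` at `172.17.0.2`, which matches the `hostIP`/`podIP` lines in the pod logs.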
-- Logs begin at Mon 2019-06-03 10:20:11 UTC, end at Mon 2019-06-03 10:22:02 UTC. -- | |
Jun 03 10:20:11 kind-worker systemd-journald[42]: Journal started | |
Jun 03 10:20:11 kind-worker systemd-journald[42]: Runtime journal (/run/log/journal/faf8883bb54d41c0bc4dc6e73411d540) is 8.0M, max 373.9M, 365.9M free. | |
Jun 03 10:20:11 kind-worker systemd-sysusers[35]: Creating group systemd-coredump with gid 999. | |
Jun 03 10:20:11 kind-worker systemd-sysusers[35]: Creating user systemd-coredump (systemd Core Dumper) with uid 999 and gid 999. | |
Jun 03 10:20:11 kind-worker systemd-sysctl[40]: Couldn't write 'fq_codel' to 'net/core/default_qdisc', ignoring: No such file or directory | |
Jun 03 10:20:11 kind-worker systemd[1]: Starting Flush Journal to Persistent Storage... | |
Jun 03 10:20:11 kind-worker systemd[1]: Started Create System Users. | |
Jun 03 10:20:11 kind-worker systemd[1]: Starting Create Static Device Nodes in /dev... | |
Jun 03 10:20:11 kind-worker systemd[1]: Started Create Static Device Nodes in /dev. | |
Jun 03 10:20:11 kind-worker systemd[1]: Condition check resulted in udev Kernel Device Manager being skipped. | |
Jun 03 10:20:11 kind-worker systemd[1]: Reached target System Initialization. | |
Jun 03 10:20:11 kind-worker systemd[1]: Started Daily Cleanup of Temporary Directories. | |
Jun 03 10:20:11 kind-worker systemd[1]: Reached target Timers. | |
Jun 03 10:20:11 kind-worker systemd[1]: Reached target Basic System. | |
Jun 03 10:20:11 kind-worker systemd[1]: Starting containerd container runtime... | |
Jun 03 10:20:11 kind-worker systemd[1]: Started kubelet: The Kubernetes Node Agent. | |
Jun 03 10:20:11 kind-worker systemd-journald[42]: Runtime journal (/run/log/journal/faf8883bb54d41c0bc4dc6e73411d540) is 8.0M, max 373.9M, 365.9M free. | |
Jun 03 10:20:11 kind-worker systemd[1]: Started Flush Journal to Persistent Storage. | |
Jun 03 10:20:11 kind-worker systemd[1]: Started containerd container runtime. | |
Jun 03 10:20:11 kind-worker systemd[1]: Reached target Multi-User System. | |
Jun 03 10:20:11 kind-worker systemd[1]: Reached target Graphical Interface. | |
Jun 03 10:20:11 kind-worker systemd[1]: Starting Update UTMP about System Runlevel Changes... | |
Jun 03 10:20:11 kind-worker systemd[1]: systemd-update-utmp-runlevel.service: Succeeded. | |
Jun 03 10:20:11 kind-worker systemd[1]: Started Update UTMP about System Runlevel Changes. | |
Jun 03 10:20:11 kind-worker systemd[1]: Startup finished in 236ms. | |
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.458848831Z" level=info msg="starting containerd" revision= version=1.2.6-0ubuntu1 | |
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.460345368Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1 | |
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.460738379Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1 | |
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.460826538Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1 | |
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.461233215Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1 | |
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.461951002Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1 | |
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.462128112Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1 | |
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.462353010Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1 | |
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.462651642Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1 | |
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.462993598Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.463076365Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.463127770Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.463180579Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.463226308Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.463275315Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.463373688Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.463451280Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.465298352Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.465426785Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.465523697Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.465572384Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.465617595Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.465659897Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.465720169Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.465763568Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.465805401Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.465849697Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.465893946Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.465997425Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.466049155Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.466096650Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.466138891Z" level=info msg="loading plugin "io.containerd.grpc.v1.cri"..." type=io.containerd.grpc.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.466407227Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntime:{Type:io.containerd.runtime.v1.linux Engine: Root: Options:<nil>} UntrustedWorkloadRuntime:{Type: Engine: Root: Options:<nil>} Runtimes:map[] NoPivot:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginConfTemplate:} Registry:{Mirrors:map[docker.io:{Endpoints:[https://registry-1.docker.io]}] Auths:map[]} StreamServerAddress:127.0.0.1 StreamServerPort:0 EnableSelinux:false SandboxImage:k8s.gcr.io/pause:3.1 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.466503615Z" level=info msg="Connect containerd service"
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.466617701Z" level=info msg="Get image filesystem path "/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs""
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.468570267Z" level=error msg="Failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.469152387Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.469623173Z" level=info msg=serving... address="/run/containerd/containerd.sock"
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.469794691Z" level=info msg="containerd successfully booted in 0.039326s"
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.509143622Z" level=info msg="Start subscribing containerd event"
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.509762883Z" level=info msg="Start recovering state"
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.510856313Z" level=warning msg="The image docker.io/kindest/kindnetd:0.1.0 is not unpacked."
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.512860978Z" level=warning msg="The image k8s.gcr.io/coredns:1.3.1 is not unpacked."
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.515085573Z" level=warning msg="The image k8s.gcr.io/etcd:3.3.10 is not unpacked."
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.547039331Z" level=warning msg="The image k8s.gcr.io/ip-masq-agent:v2.4.1 is not unpacked."
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.547859217Z" level=warning msg="The image k8s.gcr.io/kube-apiserver:v1.14.2 is not unpacked."
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.548632367Z" level=warning msg="The image k8s.gcr.io/kube-controller-manager:v1.14.2 is not unpacked."
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.549205488Z" level=warning msg="The image k8s.gcr.io/kube-proxy:v1.14.2 is not unpacked."
Jun 03 10:20:11 kind-worker kubelet[46]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 03 10:20:11 kind-worker kubelet[46]: F0603 10:20:11.549405 46 server.go:193] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.553910800Z" level=warning msg="The image k8s.gcr.io/kube-scheduler:v1.14.2 is not unpacked."
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.554606381Z" level=warning msg="The image k8s.gcr.io/pause:3.1 is not unpacked."
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.556210263Z" level=warning msg="The image sha256:19bb968f77bba3a5b5f56b5c033d71f699c22bdc8bbe9412f0bfaf7f674a64cc is not unpacked."
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.559338487Z" level=warning msg="The image sha256:2c4adeb21b4ff8ed3309d0e42b6b4ae39872399f7b37e0856e673b13c4aba13d is not unpacked."
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.559865192Z" level=warning msg="The image sha256:5c24210246bb67af5f89150e947211a1c2a127fb3825eb18507c1039bc6e86f8 is not unpacked."
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.560246052Z" level=warning msg="The image sha256:5eeff402b659832b64b5634061eb3825008abb549e1d873faf3908beecea8dfc is not unpacked."
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.560600521Z" level=warning msg="The image sha256:8be94bdae1399076ac29223a7f10230011d195e355dfc7027fa02dc95d34065f is not unpacked."
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.560943892Z" level=warning msg="The image sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e is not unpacked."
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.561287745Z" level=warning msg="The image sha256:eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c is not unpacked."
Jun 03 10:20:11 kind-worker systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
Jun 03 10:20:11 kind-worker systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.580347326Z" level=warning msg="The image sha256:ee18f350636d8e51ebb3749d1d7a1928da1d6e6fc0051852a6686c19b706c57c is not unpacked."
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.580746775Z" level=warning msg="The image sha256:f227066bdc5f9aa2f8a9bb54854e5b7a23c6db8fce0f927e5c4feef8a9e74d46 is not unpacked."
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.580968028Z" level=info msg="Start event monitor"
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.580997306Z" level=info msg="Start snapshots syncer"
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.581003706Z" level=info msg="Start streaming server"
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.582003819Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:2c4adeb21b4ff8ed3309d0e42b6b4ae39872399f7b37e0856e673b13c4aba13d,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Jun 03 10:20:11 kind-worker containerd[51]: time="2019-06-03T10:20:11.582371065Z" level=info msg="ImageUpdate event &ImageUpdate{Name:k8s.gcr.io/etcd:3.3.10,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Jun 03 10:20:21 kind-worker systemd[1]: kubelet.service: Service RestartSec=10s expired, scheduling restart.
Jun 03 10:20:21 kind-worker systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jun 03 10:20:21 kind-worker systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Jun 03 10:20:21 kind-worker systemd[1]: Started kubelet: The Kubernetes Node Agent.
Jun 03 10:20:21 kind-worker kubelet[67]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 03 10:20:21 kind-worker kubelet[67]: F0603 10:20:21.831682 67 server.go:193] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
Jun 03 10:20:21 kind-worker systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
Jun 03 10:20:21 kind-worker systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 03 10:20:32 kind-worker systemd[1]: kubelet.service: Service RestartSec=10s expired, scheduling restart.
Jun 03 10:20:32 kind-worker systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jun 03 10:20:32 kind-worker systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Jun 03 10:20:32 kind-worker systemd[1]: Started kubelet: The Kubernetes Node Agent.
Jun 03 10:20:32 kind-worker kubelet[75]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 03 10:20:32 kind-worker kubelet[75]: F0603 10:20:32.119910 75 server.go:193] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
Jun 03 10:20:32 kind-worker systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
Jun 03 10:20:32 kind-worker systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 03 10:20:42 kind-worker systemd[1]: kubelet.service: Service RestartSec=10s expired, scheduling restart.
Jun 03 10:20:42 kind-worker systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jun 03 10:20:42 kind-worker systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Jun 03 10:20:42 kind-worker systemd[1]: Started kubelet: The Kubernetes Node Agent.
Jun 03 10:20:42 kind-worker kubelet[83]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 03 10:20:42 kind-worker kubelet[83]: F0603 10:20:42.354467 83 server.go:193] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
Jun 03 10:20:42 kind-worker systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
Jun 03 10:20:42 kind-worker systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 03 10:20:46 kind-worker containerd[51]: time="2019-06-03T10:20:46.793730628Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 03 10:20:52 kind-worker systemd[1]: kubelet.service: Service RestartSec=10s expired, scheduling restart.
Jun 03 10:20:52 kind-worker systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jun 03 10:20:52 kind-worker systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Jun 03 10:20:52 kind-worker systemd[1]: Started kubelet: The Kubernetes Node Agent.
Jun 03 10:20:52 kind-worker kubelet[122]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 03 10:20:52 kind-worker kubelet[122]: F0603 10:20:52.577058 122 server.go:193] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
Jun 03 10:20:52 kind-worker systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
Jun 03 10:20:52 kind-worker systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 03 10:21:02 kind-worker systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Jun 03 10:21:02 kind-worker systemd[1]: Reloading.
Jun 03 10:21:02 kind-worker systemd[1]: Configuration file /etc/systemd/system/containerd.service.d/10-restart.conf is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Jun 03 10:21:02 kind-worker systemd[1]: Started kubelet: The Kubernetes Node Agent.
Jun 03 10:21:02 kind-worker kubelet[156]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 03 10:21:02 kind-worker kubelet[156]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 03 10:21:03 kind-worker systemd[1]: Started Kubernetes systemd probe.
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.097582 156 server.go:417] Version: v1.14.2
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.097852 156 plugins.go:103] No cloud provider specified.
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.097877 156 server.go:754] Client rotation is on, will bootstrap in background
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.124382 156 server.go:625] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.124875 156 container_manager_linux.go:261] container manager verified user specified cgroup-root exists: []
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.124968 156 container_manager_linux.go:266] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms}
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.125094 156 container_manager_linux.go:286] Creating device plugin manager: true
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.125269 156 state_mem.go:36] [cpumanager] initializing new in-memory state store
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.133707 156 kubelet.go:279] Adding pod path: /etc/kubernetes/manifests
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.133958 156 kubelet.go:304] Watching apiserver
Jun 03 10:21:03 kind-worker kubelet[156]: W0603 10:21:03.182502 156 util_unix.go:77] Using "/run/containerd/containerd.sock" as endpoint is deprecated, please consider using full url format "unix:///run/containerd/containerd.sock".
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.186862 156 remote_runtime.go:62] parsed scheme: ""
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.187078 156 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
Jun 03 10:21:03 kind-worker kubelet[156]: W0603 10:21:03.187176 156 util_unix.go:77] Using "/run/containerd/containerd.sock" as endpoint is deprecated, please consider using full url format "unix:///run/containerd/containerd.sock".
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.187254 156 remote_image.go:50] parsed scheme: ""
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.187317 156 remote_image.go:50] scheme "" not registered, fallback to default scheme
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.187582 156 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{/run/containerd/containerd.sock 0 <nil>}]
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.187651 156 clientconn.go:796] ClientConn switching balancer to "pick_first"
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.187765 156 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc000232900, CONNECTING
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.188406 156 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc000232900, READY
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.188542 156 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{/run/containerd/containerd.sock 0 <nil>}]
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.188605 156 clientconn.go:796] ClientConn switching balancer to "pick_first"
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.188698 156 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc000232a90, CONNECTING
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.189215 156 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc000232a90, READY
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.190833 156 kuberuntime_manager.go:210] Container runtime containerd initialized, version: 1.2.6-0ubuntu1, apiVersion: v1alpha2
Jun 03 10:21:03 kind-worker kubelet[156]: W0603 10:21:03.191373 156 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.192001 156 server.go:1037] Started kubelet
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.196382 156 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.196545 156 status_manager.go:152] Starting to sync pod status with apiserver
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.196619 156 kubelet.go:1806] Starting kubelet main sync loop.
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.196694 156 kubelet.go:1823] skipping pod synchronization - [container runtime status check may not have completed yet., PLEG is not healthy: pleg has yet to be successful.]
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.196882 156 server.go:141] Starting to listen on 0.0.0.0:10250
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.199137 156 server.go:343] Adding debug handlers to kubelet server.
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.202685 156 volume_manager.go:248] Starting Kubelet Volume Manager
Jun 03 10:21:03 kind-worker containerd[51]: time="2019-06-03T10:21:03.206068584Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.212410 156 desired_state_of_world_populator.go:130] Desired state populator starts to run
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.214391 156 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.218329 156 clientconn.go:440] parsed scheme: "unix"
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.218456 156 clientconn.go:440] scheme "unix" not registered, fallback to default scheme
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.218565 156 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{unix:///run/containerd/containerd.sock 0 <nil>}]
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.218645 156 clientconn.go:796] ClientConn switching balancer to "pick_first"
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.234560 156 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc0007953a0, CONNECTING
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.235083 156 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc0007953a0, READY
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.310544 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.319930 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.320753 156 kubelet.go:1823] skipping pod synchronization - container runtime status check may not have completed yet.
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.320955 156 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.321755 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.375905 156 kubelet_node_status.go:72] Attempting to register node kind-worker
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.377119 156 controller.go:115] failed to ensure node lease exists, will retry in 200ms, error: leases.coordination.k8s.io "kind-worker" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.381945 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.382177 156 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.382341 156 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.386001 156 kubelet_node_status.go:94] Unable to register node "kind-worker" with API server: nodes is forbidden: User "system:anonymous" cannot create resource "nodes" in API group "" at the cluster scope
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.386747 156 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a9116770308f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03cb715a8f, ext:781637649, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03cb715a8f, ext:781637649, loc:(*time.Location)(0x8018900)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.397122 156 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.408320 156 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a9117265a2f5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node kind-worker status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d666ccf5, ext:965495066, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d666ccf5, ext:965495066, loc:(*time.Location)(0x8018900)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.415475 156 cpu_manager.go:155] [cpumanager] starting with none policy
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.415721 156 cpu_manager.go:156] [cpumanager] reconciling every 10s
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.415795 156 policy_none.go:42] [cpumanager] none policy: Start
Jun 03 10:21:03 kind-worker kubelet[156]: W0603 10:21:03.466521 156 manager.go:538] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.478066 156 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a9117265cf1a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node kind-worker status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d666f91a, ext:965506368, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d666f91a, ext:965506368, loc:(*time.Location)(0x8018900)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.480200 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.480360 156 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get node info: node "kind-worker" not found
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.491793 156 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a9117265e8b5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node kind-worker status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d66712b5, ext:965512924, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d66712b5, ext:965512924, loc:(*time.Location)(0x8018900)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.494660 156 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a9117265e8b5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node kind-worker status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d66712b5, ext:965512924, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d8c2ff2a, ext:1005091659, loc:(*time.Location)(0x8018900)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a9117265e8b5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.497631 156 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a9117265a2f5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node kind-worker status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d666ccf5, ext:965495066, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d8c2ac48, ext:1005070452, loc:(*time.Location)(0x8018900)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a9117265a2f5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.498930 156 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a9117265cf1a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node kind-worker status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d666f91a, ext:965506368, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d8c2e756, ext:1005085564, loc:(*time.Location)(0x8018900)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a9117265cf1a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.500004 156 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a9117843f532", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03dc451f32, ext:1063951205, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03dc451f32, ext:1063951205, loc:(*time.Location)(0x8018900)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.578888 156 controller.go:115] failed to ensure node lease exists, will retry in 400ms, error: leases.coordination.k8s.io "kind-worker" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" | |
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.580557 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.586669 156 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach | |
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.587877 156 kubelet_node_status.go:72] Attempting to register node kind-worker | |
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.590217 156 kubelet_node_status.go:94] Unable to register node "kind-worker" with API server: nodes is forbidden: User "system:anonymous" cannot create resource "nodes" in API group "" at the cluster scope | |
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.590481 156 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a9117265a2f5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node kind-worker status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d666ccf5, ext:965495066, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03e309baa2, ext:1177499338, loc:(*time.Location)(0x8018900)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a9117265a2f5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.591718 156 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a9117265e8b5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node kind-worker status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d66712b5, ext:965512924, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03e309ed92, ext:1177512375, loc:(*time.Location)(0x8018900)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a9117265e8b5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.593533 156 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a9117265cf1a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node kind-worker status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d666f91a, ext:965506368, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03e309da03, ext:1177507368, loc:(*time.Location)(0x8018900)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a9117265cf1a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.680828 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.781554 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.881724 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.981947 156 controller.go:115] failed to ensure node lease exists, will retry in 800ms, error: leases.coordination.k8s.io "kind-worker" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" | |
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.982279 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.990458 156 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach | |
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.992600 156 kubelet_node_status.go:72] Attempting to register node kind-worker | |
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.993963 156 kubelet_node_status.go:94] Unable to register node "kind-worker" with API server: nodes is forbidden: User "system:anonymous" cannot create resource "nodes" in API group "" at the cluster scope | |
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.994053 156 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a9117265a2f5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node kind-worker status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d666ccf5, ext:965495066, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03fb1bb45e, ext:1581330572, loc:(*time.Location)(0x8018900)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a9117265a2f5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.995132 156 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a9117265cf1a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node kind-worker status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d666f91a, ext:965506368, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03fb295419, ext:1582223411, loc:(*time.Location)(0x8018900)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a9117265cf1a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.996810 156 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a9117265e8b5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node kind-worker status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d66712b5, ext:965512924, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03fb297e2c, ext:1582234192, loc:(*time.Location)(0x8018900)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a9117265e8b5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 10:21:04 kind-worker kubelet[156]: E0603 10:21:04.083553 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:04 kind-worker kubelet[156]: E0603 10:21:04.183917 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:04 kind-worker kubelet[156]: E0603 10:21:04.284107 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:04 kind-worker kubelet[156]: E0603 10:21:04.313716 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope | |
Jun 03 10:21:04 kind-worker kubelet[156]: E0603 10:21:04.321335 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope | |
Jun 03 10:21:04 kind-worker kubelet[156]: E0603 10:21:04.383774 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope | |
Jun 03 10:21:04 kind-worker kubelet[156]: E0603 10:21:04.384262 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:04 kind-worker kubelet[156]: E0603 10:21:04.387787 156 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope | |
Jun 03 10:21:04 kind-worker kubelet[156]: E0603 10:21:04.398665 156 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope | |
Jun 03 10:21:04 kind-worker kubelet[156]: E0603 10:21:04.484551 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:04 kind-worker kubelet[156]: E0603 10:21:04.584750 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:04 kind-worker kubelet[156]: E0603 10:21:04.684923 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:04 kind-worker kubelet[156]: E0603 10:21:04.783434 156 controller.go:115] failed to ensure node lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "kind-worker" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" | |
Jun 03 10:21:04 kind-worker kubelet[156]: E0603 10:21:04.785096 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:04 kind-worker kubelet[156]: I0603 10:21:04.794137 156 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach | |
Jun 03 10:21:04 kind-worker kubelet[156]: I0603 10:21:04.795283 156 kubelet_node_status.go:72] Attempting to register node kind-worker | |
Jun 03 10:21:04 kind-worker kubelet[156]: E0603 10:21:04.796423 156 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a9117265a2f5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node kind-worker status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d666ccf5, ext:965495066, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355b042f666490, ext:2384898740, loc:(*time.Location)(0x8018900)}}, Count:5, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a9117265a2f5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 10:21:04 kind-worker kubelet[156]: E0603 10:21:04.796860 156 kubelet_node_status.go:94] Unable to register node "kind-worker" with API server: nodes is forbidden: User "system:anonymous" cannot create resource "nodes" in API group "" at the cluster scope | |
Jun 03 10:21:04 kind-worker kubelet[156]: E0603 10:21:04.797348 156 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a9117265cf1a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node kind-worker status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d666f91a, ext:965506368, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355b042f669c9b, ext:2384913087, loc:(*time.Location)(0x8018900)}}, Count:5, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a9117265cf1a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 10:21:04 kind-worker kubelet[156]: E0603 10:21:04.799176 156 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a9117265e8b5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node kind-worker status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d66712b5, ext:965512924, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355b042f66b161, ext:2384918400, loc:(*time.Location)(0x8018900)}}, Count:5, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a9117265e8b5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 10:21:04 kind-worker kubelet[156]: E0603 10:21:04.885321 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:04 kind-worker kubelet[156]: E0603 10:21:04.985488 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:05 kind-worker kubelet[156]: E0603 10:21:05.085737 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:05 kind-worker kubelet[156]: E0603 10:21:05.185906 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:05 kind-worker kubelet[156]: E0603 10:21:05.286113 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:05 kind-worker kubelet[156]: E0603 10:21:05.315717 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope | |
Jun 03 10:21:05 kind-worker kubelet[156]: E0603 10:21:05.322562 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope | |
Jun 03 10:21:05 kind-worker kubelet[156]: E0603 10:21:05.385322 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope | |
Jun 03 10:21:05 kind-worker kubelet[156]: E0603 10:21:05.386266 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:05 kind-worker kubelet[156]: E0603 10:21:05.388681 156 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope | |
Jun 03 10:21:05 kind-worker kubelet[156]: E0603 10:21:05.399681 156 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope | |
Jun 03 10:21:05 kind-worker kubelet[156]: E0603 10:21:05.486428 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:05 kind-worker kubelet[156]: E0603 10:21:05.586587 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:05 kind-worker kubelet[156]: E0603 10:21:05.686788 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:05 kind-worker kubelet[156]: E0603 10:21:05.786966 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:05 kind-worker kubelet[156]: E0603 10:21:05.887249 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:05 kind-worker kubelet[156]: E0603 10:21:05.987412 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:06 kind-worker kubelet[156]: E0603 10:21:06.087599 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:06 kind-worker kubelet[156]: E0603 10:21:06.188021 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:06 kind-worker kubelet[156]: E0603 10:21:06.288214 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:06 kind-worker kubelet[156]: E0603 10:21:06.317308 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope | |
Jun 03 10:21:06 kind-worker kubelet[156]: E0603 10:21:06.323585 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope | |
Jun 03 10:21:06 kind-worker kubelet[156]: E0603 10:21:06.390929 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:06 kind-worker kubelet[156]: E0603 10:21:06.391095 156 controller.go:115] failed to ensure node lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "kind-worker" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" | |
Jun 03 10:21:06 kind-worker kubelet[156]: E0603 10:21:06.391183 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope | |
Jun 03 10:21:06 kind-worker kubelet[156]: E0603 10:21:06.392449 156 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope | |
Jun 03 10:21:06 kind-worker kubelet[156]: I0603 10:21:06.397411 156 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach | |
Jun 03 10:21:06 kind-worker kubelet[156]: I0603 10:21:06.398449 156 kubelet_node_status.go:72] Attempting to register node kind-worker | |
Jun 03 10:21:06 kind-worker kubelet[156]: E0603 10:21:06.399383 156 kubelet_node_status.go:94] Unable to register node "kind-worker" with API server: nodes is forbidden: User "system:anonymous" cannot create resource "nodes" in API group "" at the cluster scope | |
Jun 03 10:21:06 kind-worker kubelet[156]: E0603 10:21:06.399677 156 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a9117265a2f5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node kind-worker status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d666ccf5, ext:965495066, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355b0497bf4505, ext:3988070187, loc:(*time.Location)(0x8018900)}}, Count:6, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a9117265a2f5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 10:21:06 kind-worker kubelet[156]: E0603 10:21:06.400556 156 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a9117265e8b5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node kind-worker status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d66712b5, ext:965512924, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355b0497bf81d8, ext:3988085749, loc:(*time.Location)(0x8018900)}}, Count:6, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a9117265e8b5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 10:21:06 kind-worker kubelet[156]: E0603 10:21:06.401438 156 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope | |
Jun 03 10:21:06 kind-worker kubelet[156]: E0603 10:21:06.401402 156 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a9117265cf1a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node kind-worker status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d666f91a, ext:965506368, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355b0497bf667f, ext:3988078749, loc:(*time.Location)(0x8018900)}}, Count:6, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a9117265cf1a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 10:21:06 kind-worker kubelet[156]: E0603 10:21:06.491354 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:06 kind-worker kubelet[156]: E0603 10:21:06.591594 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:06 kind-worker kubelet[156]: E0603 10:21:06.691803 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:06 kind-worker kubelet[156]: E0603 10:21:06.792070 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:06 kind-worker kubelet[156]: E0603 10:21:06.892379 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:06 kind-worker kubelet[156]: E0603 10:21:06.992858 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:07 kind-worker kubelet[156]: E0603 10:21:07.093093 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:07 kind-worker kubelet[156]: E0603 10:21:07.193560 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:07 kind-worker kubelet[156]: E0603 10:21:07.293811 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:07 kind-worker kubelet[156]: E0603 10:21:07.318826 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope | |
Jun 03 10:21:07 kind-worker kubelet[156]: E0603 10:21:07.324852 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope | |
Jun 03 10:21:07 kind-worker kubelet[156]: E0603 10:21:07.393058 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope | |
Jun 03 10:21:07 kind-worker kubelet[156]: E0603 10:21:07.393681 156 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope | |
Jun 03 10:21:07 kind-worker kubelet[156]: E0603 10:21:07.393953 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:07 kind-worker kubelet[156]: E0603 10:21:07.402488 156 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope | |
Jun 03 10:21:07 kind-worker kubelet[156]: E0603 10:21:07.494246 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:07 kind-worker kubelet[156]: E0603 10:21:07.594464 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:07 kind-worker kubelet[156]: E0603 10:21:07.694705 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:07 kind-worker kubelet[156]: E0603 10:21:07.795014 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:07 kind-worker kubelet[156]: E0603 10:21:07.895207 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:07 kind-worker kubelet[156]: E0603 10:21:07.995412 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:08 kind-worker kubelet[156]: E0603 10:21:08.095613 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:08 kind-worker kubelet[156]: E0603 10:21:08.195818 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:08 kind-worker kubelet[156]: E0603 10:21:08.296071 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:08 kind-worker kubelet[156]: E0603 10:21:08.320319 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope | |
Jun 03 10:21:08 kind-worker kubelet[156]: E0603 10:21:08.326178 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope | |
Jun 03 10:21:08 kind-worker kubelet[156]: E0603 10:21:08.394938 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope | |
Jun 03 10:21:08 kind-worker kubelet[156]: E0603 10:21:08.395654 156 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope | |
Jun 03 10:21:08 kind-worker kubelet[156]: E0603 10:21:08.396398 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:08 kind-worker kubelet[156]: E0603 10:21:08.403692 156 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope | |
Jun 03 10:21:08 kind-worker containerd[51]: time="2019-06-03T10:21:08.468892897Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" | |
Jun 03 10:21:08 kind-worker kubelet[156]: E0603 10:21:08.469143 156 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized | |
Jun 03 10:21:08 kind-worker kubelet[156]: E0603 10:21:08.496578 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:08 kind-worker kubelet[156]: E0603 10:21:08.596999 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:08 kind-worker kubelet[156]: E0603 10:21:08.697747 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:08 kind-worker kubelet[156]: E0603 10:21:08.797958 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:08 kind-worker kubelet[156]: E0603 10:21:08.898164 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:08 kind-worker kubelet[156]: E0603 10:21:08.998355 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:09 kind-worker kubelet[156]: E0603 10:21:09.098555 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:09 kind-worker kubelet[156]: E0603 10:21:09.198712 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:09 kind-worker kubelet[156]: E0603 10:21:09.298925 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:09 kind-worker kubelet[156]: E0603 10:21:09.321765 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope | |
Jun 03 10:21:09 kind-worker kubelet[156]: E0603 10:21:09.327442 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope | |
Jun 03 10:21:09 kind-worker kubelet[156]: E0603 10:21:09.396365 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope | |
Jun 03 10:21:09 kind-worker kubelet[156]: E0603 10:21:09.397348 156 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope | |
Jun 03 10:21:09 kind-worker kubelet[156]: E0603 10:21:09.399058 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:09 kind-worker kubelet[156]: E0603 10:21:09.404847 156 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope | |
Jun 03 10:21:09 kind-worker kubelet[156]: E0603 10:21:09.499254 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:09 kind-worker kubelet[156]: E0603 10:21:09.593401 156 controller.go:115] failed to ensure node lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "kind-worker" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" | |
Jun 03 10:21:09 kind-worker kubelet[156]: E0603 10:21:09.599414 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:09 kind-worker kubelet[156]: I0603 10:21:09.599628 156 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach | |
Jun 03 10:21:09 kind-worker kubelet[156]: I0603 10:21:09.600696 156 kubelet_node_status.go:72] Attempting to register node kind-worker | |
Jun 03 10:21:09 kind-worker kubelet[156]: E0603 10:21:09.601821 156 kubelet_node_status.go:94] Unable to register node "kind-worker" with API server: nodes is forbidden: User "system:anonymous" cannot create resource "nodes" in API group "" at the cluster scope | |
Jun 03 10:21:09 kind-worker kubelet[156]: E0603 10:21:09.602202 156 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a9117265a2f5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node kind-worker status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d666ccf5, ext:965495066, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355b0563cd5bec, ext:7190320132, loc:(*time.Location)(0x8018900)}}, Count:7, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a9117265a2f5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 10:21:09 kind-worker kubelet[156]: E0603 10:21:09.603099 156 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a9117265cf1a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node kind-worker status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d666f91a, ext:965506368, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355b0563cd784e, ext:7190327399, loc:(*time.Location)(0x8018900)}}, Count:7, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a9117265cf1a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 10:21:09 kind-worker kubelet[156]: E0603 10:21:09.603876 156 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a9117265e8b5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node kind-worker status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d66712b5, ext:965512924, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355b0563cd88d3, ext:7190331627, loc:(*time.Location)(0x8018900)}}, Count:7, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a9117265e8b5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 10:21:09 kind-worker kubelet[156]: E0603 10:21:09.699576 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:09 kind-worker kubelet[156]: E0603 10:21:09.799717 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:09 kind-worker kubelet[156]: E0603 10:21:09.899878 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:10 kind-worker kubelet[156]: E0603 10:21:10.000069 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:10 kind-worker kubelet[156]: E0603 10:21:10.100290 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:10 kind-worker kubelet[156]: E0603 10:21:10.200465 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:10 kind-worker kubelet[156]: E0603 10:21:10.300681 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:10 kind-worker kubelet[156]: E0603 10:21:10.323574 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope | |
Jun 03 10:21:10 kind-worker kubelet[156]: E0603 10:21:10.328958 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope | |
Jun 03 10:21:10 kind-worker kubelet[156]: E0603 10:21:10.399118 156 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope | |
Jun 03 10:21:10 kind-worker kubelet[156]: E0603 10:21:10.399686 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope | |
Jun 03 10:21:10 kind-worker kubelet[156]: E0603 10:21:10.400879 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:10 kind-worker kubelet[156]: E0603 10:21:10.406398 156 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope | |
Jun 03 10:21:10 kind-worker kubelet[156]: E0603 10:21:10.501030 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:10 kind-worker kubelet[156]: E0603 10:21:10.601198 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:10 kind-worker kubelet[156]: E0603 10:21:10.701358 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:10 kind-worker kubelet[156]: E0603 10:21:10.801536 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:10 kind-worker kubelet[156]: E0603 10:21:10.901717 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:11 kind-worker kubelet[156]: E0603 10:21:11.001860 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:11 kind-worker kubelet[156]: E0603 10:21:11.102035 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:11 kind-worker kubelet[156]: E0603 10:21:11.202151 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:11 kind-worker kubelet[156]: E0603 10:21:11.302330 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:11 kind-worker kubelet[156]: E0603 10:21:11.325189 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope | |
Jun 03 10:21:11 kind-worker kubelet[156]: E0603 10:21:11.330461 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope | |
Jun 03 10:21:11 kind-worker kubelet[156]: E0603 10:21:11.400541 156 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope | |
Jun 03 10:21:11 kind-worker kubelet[156]: E0603 10:21:11.401162 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope | |
Jun 03 10:21:11 kind-worker kubelet[156]: E0603 10:21:11.402684 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:11 kind-worker kubelet[156]: E0603 10:21:11.407683 156 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope | |
Jun 03 10:21:11 kind-worker kubelet[156]: E0603 10:21:11.502875 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:11 kind-worker kubelet[156]: E0603 10:21:11.603128 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:11 kind-worker kubelet[156]: E0603 10:21:11.703303 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:11 kind-worker kubelet[156]: E0603 10:21:11.803931 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:11 kind-worker kubelet[156]: E0603 10:21:11.904220 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:12 kind-worker kubelet[156]: E0603 10:21:12.004375 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:12 kind-worker kubelet[156]: E0603 10:21:12.104546 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:12 kind-worker kubelet[156]: E0603 10:21:12.204722 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:12 kind-worker kubelet[156]: E0603 10:21:12.304927 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:12 kind-worker kubelet[156]: E0603 10:21:12.328523 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope | |
Jun 03 10:21:12 kind-worker kubelet[156]: E0603 10:21:12.331738 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope | |
Jun 03 10:21:12 kind-worker kubelet[156]: E0603 10:21:12.402128 156 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope | |
Jun 03 10:21:12 kind-worker kubelet[156]: E0603 10:21:12.402488 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope | |
Jun 03 10:21:12 kind-worker kubelet[156]: E0603 10:21:12.405067 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:12 kind-worker kubelet[156]: E0603 10:21:12.408764 156 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope | |
Jun 03 10:21:12 kind-worker kubelet[156]: E0603 10:21:12.505219 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:12 kind-worker kubelet[156]: E0603 10:21:12.605504 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:12 kind-worker kubelet[156]: E0603 10:21:12.706760 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:12 kind-worker kubelet[156]: E0603 10:21:12.806941 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:12 kind-worker kubelet[156]: E0603 10:21:12.907103 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:13 kind-worker kubelet[156]: E0603 10:21:13.007281 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:13 kind-worker kubelet[156]: E0603 10:21:13.107469 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:13 kind-worker kubelet[156]: I0603 10:21:13.120783 156 transport.go:132] certificate rotation detected, shutting down client connections to start using new credentials | |
Jun 03 10:21:13 kind-worker kubelet[156]: E0603 10:21:13.207682 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:13 kind-worker kubelet[156]: E0603 10:21:13.307866 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:13 kind-worker kubelet[156]: E0603 10:21:13.408047 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:13 kind-worker containerd[51]: time="2019-06-03T10:21:13.469838898Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" | |
Jun 03 10:21:13 kind-worker kubelet[156]: E0603 10:21:13.470052 156 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Jun 03 10:21:13 kind-worker kubelet[156]: E0603 10:21:13.480607 156 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get node info: node "kind-worker" not found
Jun 03 10:21:13 kind-worker kubelet[156]: I0603 10:21:13.506209 156 reconciler.go:154] Reconciler: start to sync state
Jun 03 10:21:13 kind-worker kubelet[156]: E0603 10:21:13.508375 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:13 kind-worker kubelet[156]: E0603 10:21:13.608533 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:13 kind-worker kubelet[156]: E0603 10:21:13.708718 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:13 kind-worker kubelet[156]: E0603 10:21:13.808905 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:13 kind-worker kubelet[156]: E0603 10:21:13.909044 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:14 kind-worker kubelet[156]: E0603 10:21:14.009241 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:14 kind-worker kubelet[156]: E0603 10:21:14.109411 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:14 kind-worker kubelet[156]: E0603 10:21:14.209574 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:14 kind-worker kubelet[156]: E0603 10:21:14.309746 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:14 kind-worker kubelet[156]: E0603 10:21:14.409938 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:14 kind-worker kubelet[156]: E0603 10:21:14.510151 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:14 kind-worker kubelet[156]: E0603 10:21:14.610296 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:14 kind-worker kubelet[156]: E0603 10:21:14.710494 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:14 kind-worker kubelet[156]: E0603 10:21:14.810776 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:14 kind-worker kubelet[156]: E0603 10:21:14.910966 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:15 kind-worker kubelet[156]: E0603 10:21:15.011314 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:15 kind-worker kubelet[156]: E0603 10:21:15.111502 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:15 kind-worker kubelet[156]: E0603 10:21:15.211678 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:15 kind-worker kubelet[156]: E0603 10:21:15.311878 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:15 kind-worker kubelet[156]: E0603 10:21:15.412066 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:15 kind-worker kubelet[156]: E0603 10:21:15.512259 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:15 kind-worker kubelet[156]: E0603 10:21:15.612443 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:15 kind-worker kubelet[156]: E0603 10:21:15.712629 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:15 kind-worker kubelet[156]: E0603 10:21:15.812784 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:15 kind-worker kubelet[156]: E0603 10:21:15.912958 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:15 kind-worker kubelet[156]: E0603 10:21:15.997702 156 controller.go:194] failed to get node "kind-worker" when trying to set owner ref to the node lease: nodes "kind-worker" not found
Jun 03 10:21:16 kind-worker kubelet[156]: I0603 10:21:16.002013 156 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
Jun 03 10:21:16 kind-worker kubelet[156]: I0603 10:21:16.003161 156 kubelet_node_status.go:72] Attempting to register node kind-worker
Jun 03 10:21:16 kind-worker kubelet[156]: I0603 10:21:16.006713 156 kubelet_node_status.go:75] Successfully registered node kind-worker
Jun 03 10:21:16 kind-worker kubelet[156]: E0603 10:21:16.007686 156 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "kind-worker": nodes "kind-worker" not found
Jun 03 10:21:16 kind-worker kubelet[156]: I0603 10:21:16.113280 156 kuberuntime_manager.go:946] updating runtime config through cri with podcidr 10.244.1.0/24
Jun 03 10:21:16 kind-worker containerd[51]: time="2019-06-03T10:21:16.113666463Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jun 03 10:21:16 kind-worker kubelet[156]: I0603 10:21:16.114014 156 kubelet_network.go:77] Setting Pod CIDR: -> 10.244.1.0/24
Jun 03 10:21:16 kind-worker containerd[51]: time="2019-06-03T10:21:16.114273835Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 03 10:21:16 kind-worker kubelet[156]: E0603 10:21:16.114473 156 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Jun 03 10:21:16 kind-worker kubelet[156]: I0603 10:21:16.211425 156 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/51f164de-85e9-11e9-a310-0242ac110004-kube-proxy") pod "kube-proxy-kp2s9" (UID: "51f164de-85e9-11e9-a310-0242ac110004")
Jun 03 10:21:16 kind-worker kubelet[156]: I0603 10:21:16.211474 156 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/51f164de-85e9-11e9-a310-0242ac110004-xtables-lock") pod "kube-proxy-kp2s9" (UID: "51f164de-85e9-11e9-a310-0242ac110004")
Jun 03 10:21:16 kind-worker kubelet[156]: I0603 10:21:16.211510 156 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/51f164de-85e9-11e9-a310-0242ac110004-lib-modules") pod "kube-proxy-kp2s9" (UID: "51f164de-85e9-11e9-a310-0242ac110004")
Jun 03 10:21:16 kind-worker kubelet[156]: I0603 10:21:16.211543 156 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-b29kb" (UniqueName: "kubernetes.io/secret/51f164de-85e9-11e9-a310-0242ac110004-kube-proxy-token-b29kb") pod "kube-proxy-kp2s9" (UID: "51f164de-85e9-11e9-a310-0242ac110004")
Jun 03 10:21:16 kind-worker kubelet[156]: I0603 10:21:16.311789 156 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kindnet-token-c7lrr" (UniqueName: "kubernetes.io/secret/51f61a08-85e9-11e9-a310-0242ac110004-kindnet-token-c7lrr") pod "kindnet-s2mkf" (UID: "51f61a08-85e9-11e9-a310-0242ac110004")
Jun 03 10:21:16 kind-worker kubelet[156]: I0603 10:21:16.311940 156 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config" (UniqueName: "kubernetes.io/configmap/51f17b3e-85e9-11e9-a310-0242ac110004-config") pod "ip-masq-agent-j4ssk" (UID: "51f17b3e-85e9-11e9-a310-0242ac110004")
Jun 03 10:21:16 kind-worker kubelet[156]: I0603 10:21:16.311973 156 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "ip-masq-agent-token-njsg6" (UniqueName: "kubernetes.io/secret/51f17b3e-85e9-11e9-a310-0242ac110004-ip-masq-agent-token-njsg6") pod "ip-masq-agent-j4ssk" (UID: "51f17b3e-85e9-11e9-a310-0242ac110004")
Jun 03 10:21:16 kind-worker kubelet[156]: I0603 10:21:16.312002 156 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-cfg" (UniqueName: "kubernetes.io/host-path/51f61a08-85e9-11e9-a310-0242ac110004-cni-cfg") pod "kindnet-s2mkf" (UID: "51f61a08-85e9-11e9-a310-0242ac110004")
Jun 03 10:21:16 kind-worker containerd[51]: time="2019-06-03T10:21:16.371567119Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-proxy-kp2s9,Uid:51f164de-85e9-11e9-a310-0242ac110004,Namespace:kube-system,Attempt:0,}"
Jun 03 10:21:16 kind-worker systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount680475924.mount: Succeeded.
Jun 03 10:21:16 kind-worker containerd[51]: time="2019-06-03T10:21:16.422038975Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/eb60082817446f8d04db3da087fff56e5e5285b4c4e6880c7c92ff159261b093/shim.sock" debug=false pid=195
Jun 03 10:21:16 kind-worker systemd[1]: run-containerd-runc-k8s.io-eb60082817446f8d04db3da087fff56e5e5285b4c4e6880c7c92ff159261b093-runc.LPPWVg.mount: Succeeded.
Jun 03 10:21:16 kind-worker containerd[51]: time="2019-06-03T10:21:16.477581857Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:ip-masq-agent-j4ssk,Uid:51f17b3e-85e9-11e9-a310-0242ac110004,Namespace:kube-system,Attempt:0,}"
Jun 03 10:21:16 kind-worker containerd[51]: time="2019-06-03T10:21:16.490963996Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kindnet-s2mkf,Uid:51f61a08-85e9-11e9-a310-0242ac110004,Namespace:kube-system,Attempt:0,}"
Jun 03 10:21:16 kind-worker containerd[51]: time="2019-06-03T10:21:16.491553732Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/1bcda36fce3673bc706148211d6c8b59647f136d14da8f03f73cc08fecafe9c6/shim.sock" debug=false pid=218
Jun 03 10:21:16 kind-worker containerd[51]: time="2019-06-03T10:21:16.503566831Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/aff44954e16e5fc6b0d3cf5b39c394c4c263812a077ce8412e0b553748a05eb5/shim.sock" debug=false pid=234
Jun 03 10:21:16 kind-worker containerd[51]: time="2019-06-03T10:21:16.574873164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kp2s9,Uid:51f164de-85e9-11e9-a310-0242ac110004,Namespace:kube-system,Attempt:0,} returns sandbox id "eb60082817446f8d04db3da087fff56e5e5285b4c4e6880c7c92ff159261b093""
Jun 03 10:21:16 kind-worker containerd[51]: time="2019-06-03T10:21:16.578613799Z" level=info msg="CreateContainer within sandbox "eb60082817446f8d04db3da087fff56e5e5285b4c4e6880c7c92ff159261b093" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jun 03 10:21:16 kind-worker containerd[51]: time="2019-06-03T10:21:16.672676348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:ip-masq-agent-j4ssk,Uid:51f17b3e-85e9-11e9-a310-0242ac110004,Namespace:kube-system,Attempt:0,} returns sandbox id "1bcda36fce3673bc706148211d6c8b59647f136d14da8f03f73cc08fecafe9c6""
Jun 03 10:21:16 kind-worker containerd[51]: time="2019-06-03T10:21:16.675141845Z" level=info msg="CreateContainer within sandbox "1bcda36fce3673bc706148211d6c8b59647f136d14da8f03f73cc08fecafe9c6" for container &ContainerMetadata{Name:ip-masq-agent,Attempt:0,}"
Jun 03 10:21:16 kind-worker containerd[51]: time="2019-06-03T10:21:16.846712975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-s2mkf,Uid:51f61a08-85e9-11e9-a310-0242ac110004,Namespace:kube-system,Attempt:0,} returns sandbox id "aff44954e16e5fc6b0d3cf5b39c394c4c263812a077ce8412e0b553748a05eb5""
Jun 03 10:21:16 kind-worker containerd[51]: time="2019-06-03T10:21:16.858813639Z" level=info msg="CreateContainer within sandbox "aff44954e16e5fc6b0d3cf5b39c394c4c263812a077ce8412e0b553748a05eb5" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
Jun 03 10:21:17 kind-worker systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount030826032.mount: Succeeded.
Jun 03 10:21:17 kind-worker systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount752553466.mount: Succeeded.
Jun 03 10:21:17 kind-worker systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount317451490.mount: Succeeded.
Jun 03 10:21:17 kind-worker systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount632371417.mount: Succeeded.
Jun 03 10:21:17 kind-worker systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount662292843.mount: Succeeded.
Jun 03 10:21:17 kind-worker containerd[51]: time="2019-06-03T10:21:17.729353573Z" level=info msg="CreateContainer within sandbox "aff44954e16e5fc6b0d3cf5b39c394c4c263812a077ce8412e0b553748a05eb5" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id "0c1882c769be39ccdd1cc482bea881c574c4621fb50c9f26286ca509ea51e7d5""
Jun 03 10:21:17 kind-worker containerd[51]: time="2019-06-03T10:21:17.738460150Z" level=info msg="StartContainer for "0c1882c769be39ccdd1cc482bea881c574c4621fb50c9f26286ca509ea51e7d5""
Jun 03 10:21:17 kind-worker containerd[51]: time="2019-06-03T10:21:17.742815877Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/0c1882c769be39ccdd1cc482bea881c574c4621fb50c9f26286ca509ea51e7d5/shim.sock" debug=false pid=361
Jun 03 10:21:17 kind-worker systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount465789043.mount: Succeeded.
Jun 03 10:21:17 kind-worker systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount121846232.mount: Succeeded.
Jun 03 10:21:18 kind-worker containerd[51]: time="2019-06-03T10:21:18.097116789Z" level=info msg="StartContainer for "0c1882c769be39ccdd1cc482bea881c574c4621fb50c9f26286ca509ea51e7d5" returns successfully"
Jun 03 10:21:18 kind-worker systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount456014988.mount: Succeeded.
Jun 03 10:21:18 kind-worker systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount778363493.mount: Succeeded.
Jun 03 10:21:18 kind-worker systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount391981184.mount: Succeeded.
Jun 03 10:21:18 kind-worker containerd[51]: time="2019-06-03T10:21:18.219503448Z" level=info msg="CreateContainer within sandbox "1bcda36fce3673bc706148211d6c8b59647f136d14da8f03f73cc08fecafe9c6" for &ContainerMetadata{Name:ip-masq-agent,Attempt:0,} returns container id "b20212643befb9ddba8c1f3820cd8fdf97b932373de9d493495d9689c01d2f91""
Jun 03 10:21:18 kind-worker containerd[51]: time="2019-06-03T10:21:18.220586055Z" level=info msg="StartContainer for "b20212643befb9ddba8c1f3820cd8fdf97b932373de9d493495d9689c01d2f91""
Jun 03 10:21:18 kind-worker containerd[51]: time="2019-06-03T10:21:18.221297651Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/b20212643befb9ddba8c1f3820cd8fdf97b932373de9d493495d9689c01d2f91/shim.sock" debug=false pid=413
Jun 03 10:21:18 kind-worker systemd[1]: run-containerd-runc-k8s.io-b20212643befb9ddba8c1f3820cd8fdf97b932373de9d493495d9689c01d2f91-runc.cVQCn6.mount: Succeeded.
Jun 03 10:21:18 kind-worker containerd[51]: time="2019-06-03T10:21:18.471093196Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 03 10:21:18 kind-worker kubelet[156]: E0603 10:21:18.471623 156 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Jun 03 10:21:18 kind-worker containerd[51]: time="2019-06-03T10:21:18.494869041Z" level=info msg="StartContainer for "b20212643befb9ddba8c1f3820cd8fdf97b932373de9d493495d9689c01d2f91" returns successfully"
Jun 03 10:21:18 kind-worker systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount356559050.mount: Succeeded.
Jun 03 10:21:18 kind-worker systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount210082228.mount: Succeeded.
Jun 03 10:21:18 kind-worker systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount156161411.mount: Succeeded.
Jun 03 10:21:18 kind-worker containerd[51]: time="2019-06-03T10:21:18.703579165Z" level=info msg="CreateContainer within sandbox "eb60082817446f8d04db3da087fff56e5e5285b4c4e6880c7c92ff159261b093" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id "216901c038af5b9ea7e90c28a34dba5ed5798111fef9cb2f4701a0ec68bfff23""
Jun 03 10:21:18 kind-worker containerd[51]: time="2019-06-03T10:21:18.705293675Z" level=info msg="StartContainer for "216901c038af5b9ea7e90c28a34dba5ed5798111fef9cb2f4701a0ec68bfff23""
Jun 03 10:21:18 kind-worker containerd[51]: time="2019-06-03T10:21:18.708858899Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/216901c038af5b9ea7e90c28a34dba5ed5798111fef9cb2f4701a0ec68bfff23/shim.sock" debug=false pid=472
Jun 03 10:21:18 kind-worker systemd[1]: run-containerd-runc-k8s.io-216901c038af5b9ea7e90c28a34dba5ed5798111fef9cb2f4701a0ec68bfff23-runc.gY54XG.mount: Succeeded.
Jun 03 10:21:19 kind-worker containerd[51]: time="2019-06-03T10:21:19.326600760Z" level=info msg="StartContainer for "216901c038af5b9ea7e90c28a34dba5ed5798111fef9cb2f4701a0ec68bfff23" returns successfully"
Jun 03 10:21:23 kind-worker containerd[51]: time="2019-06-03T10:21:23.472425300Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 03 10:21:23 kind-worker kubelet[156]: E0603 10:21:23.472680 156 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Jun 03 10:21:23 kind-worker kubelet[156]: E0603 10:21:23.507716 156 summary_sys_containers.go:47] Failed to get system container stats for "/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/system.slice/kubelet.service": failed to get cgroup stats for "/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/system.slice/kubelet.service": failed to get container info for "/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/system.slice/kubelet.service": unknown container "/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/system.slice/kubelet.service"
Jun 03 10:21:28 kind-worker containerd[51]: time="2019-06-03T10:21:28.473503092Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 03 10:21:28 kind-worker kubelet[156]: E0603 10:21:28.473827 156 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Jun 03 10:21:33 kind-worker containerd[51]: time="2019-06-03T10:21:33.474557451Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 03 10:21:33 kind-worker kubelet[156]: E0603 10:21:33.474996 156 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Jun 03 10:21:33 kind-worker kubelet[156]: E0603 10:21:33.527584 156 summary_sys_containers.go:47] Failed to get system container stats for "/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/system.slice/kubelet.service": failed to get cgroup stats for "/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/system.slice/kubelet.service": failed to get container info for "/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/system.slice/kubelet.service": unknown container "/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/system.slice/kubelet.service"
Jun 03 10:21:38 kind-worker containerd[51]: time="2019-06-03T10:21:38.475983340Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 03 10:21:38 kind-worker kubelet[156]: E0603 10:21:38.476265 156 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Jun 03 10:21:43 kind-worker containerd[51]: time="2019-06-03T10:21:43.477191693Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 03 10:21:43 kind-worker kubelet[156]: E0603 10:21:43.477508 156 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Jun 03 10:21:43 kind-worker kubelet[156]: E0603 10:21:43.561059 156 summary_sys_containers.go:47] Failed to get system container stats for "/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/system.slice/kubelet.service": failed to get cgroup stats for "/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/system.slice/kubelet.service": failed to get container info for "/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/system.slice/kubelet.service": unknown container "/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/system.slice/kubelet.service"
Jun 03 10:21:48 kind-worker containerd[51]: time="2019-06-03T10:21:48.231235474Z" level=info msg="Finish piping stderr of container "0c1882c769be39ccdd1cc482bea881c574c4621fb50c9f26286ca509ea51e7d5""
Jun 03 10:21:48 kind-worker containerd[51]: time="2019-06-03T10:21:48.231951351Z" level=info msg="Finish piping stdout of container "0c1882c769be39ccdd1cc482bea881c574c4621fb50c9f26286ca509ea51e7d5""
Jun 03 10:21:48 kind-worker containerd[51]: time="2019-06-03T10:21:48.272453869Z" level=info msg="TaskExit event &TaskExit{ContainerID:0c1882c769be39ccdd1cc482bea881c574c4621fb50c9f26286ca509ea51e7d5,ID:0c1882c769be39ccdd1cc482bea881c574c4621fb50c9f26286ca509ea51e7d5,Pid:379,ExitStatus:2,ExitedAt:2019-06-03 10:21:48.231824992 +0000 UTC,}"
Jun 03 10:21:48 kind-worker systemd[1]: run-containerd-io.containerd.runtime.v1.linux-k8s.io-0c1882c769be39ccdd1cc482bea881c574c4621fb50c9f26286ca509ea51e7d5-rootfs.mount: Succeeded.
Jun 03 10:21:48 kind-worker containerd[51]: time="2019-06-03T10:21:48.328015948Z" level=info msg="shim reaped" id=0c1882c769be39ccdd1cc482bea881c574c4621fb50c9f26286ca509ea51e7d5
Jun 03 10:21:48 kind-worker containerd[51]: time="2019-06-03T10:21:48.478591425Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 03 10:21:48 kind-worker kubelet[156]: E0603 10:21:48.478908 156 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Jun 03 10:21:48 kind-worker containerd[51]: time="2019-06-03T10:21:48.798177502Z" level=info msg="CreateContainer within sandbox "aff44954e16e5fc6b0d3cf5b39c394c4c263812a077ce8412e0b553748a05eb5" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
Jun 03 10:21:49 kind-worker containerd[51]: time="2019-06-03T10:21:49.820243562Z" level=info msg="CreateContainer within sandbox "aff44954e16e5fc6b0d3cf5b39c394c4c263812a077ce8412e0b553748a05eb5" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id "88e666e8e4aff5a60087abd17e816e44f1bd0e43e616a65d1ec6978f262de45a""
Jun 03 10:21:49 kind-worker containerd[51]: time="2019-06-03T10:21:49.832841088Z" level=info msg="StartContainer for "88e666e8e4aff5a60087abd17e816e44f1bd0e43e616a65d1ec6978f262de45a""
Jun 03 10:21:49 kind-worker containerd[51]: time="2019-06-03T10:21:49.834039492Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/88e666e8e4aff5a60087abd17e816e44f1bd0e43e616a65d1ec6978f262de45a/shim.sock" debug=false pid=660
Jun 03 10:21:49 kind-worker systemd[1]: run-containerd-runc-k8s.io-88e666e8e4aff5a60087abd17e816e44f1bd0e43e616a65d1ec6978f262de45a-runc.NwQkg8.mount: Succeeded.
Jun 03 10:21:50 kind-worker containerd[51]: time="2019-06-03T10:21:50.134363502Z" level=info msg="StartContainer for "88e666e8e4aff5a60087abd17e816e44f1bd0e43e616a65d1ec6978f262de45a" returns successfully"
Jun 03 10:21:53 kind-worker kubelet[156]: E0603 10:21:53.581512 156 summary_sys_containers.go:47] Failed to get system container stats for "/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/system.slice/kubelet.service": failed to get cgroup stats for "/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/system.slice/kubelet.service": failed to get container info for "/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/system.slice/kubelet.service": unknown container "/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/system.slice/kubelet.service"
Jun 03 10:22:00 kind-worker kubelet[156]: I0603 10:22:00.011221 156 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-8d6cv" (UniqueName: "kubernetes.io/secret/54dca26d-85e9-11e9-a310-0242ac110004-default-token-8d6cv") pod "hello-6d6586c69c-vvqlx" (UID: "54dca26d-85e9-11e9-a310-0242ac110004")
Jun 03 10:22:00 kind-worker containerd[51]: time="2019-06-03T10:22:00.204830712Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:hello-6d6586c69c-vvqlx,Uid:54dca26d-85e9-11e9-a310-0242ac110004,Namespace:default,Attempt:0,}"
Jun 03 10:22:00 kind-worker containerd[51]: time="2019-06-03T10:22:00.263049961Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/7e23ebf5d912b181c762d0f4f734178ec85ae890a0a76611a94d00ff38d620ed/shim.sock" debug=false pid=737
Jun 03 10:22:00 kind-worker systemd[1]: run-containerd-runc-k8s.io-7e23ebf5d912b181c762d0f4f734178ec85ae890a0a76611a94d00ff38d620ed-runc.LJTvCQ.mount: Succeeded.
Jun 03 10:22:00 kind-worker containerd[51]: time="2019-06-03T10:22:00.410639457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:hello-6d6586c69c-vvqlx,Uid:54dca26d-85e9-11e9-a310-0242ac110004,Namespace:default,Attempt:0,} returns sandbox id "7e23ebf5d912b181c762d0f4f734178ec85ae890a0a76611a94d00ff38d620ed""
Jun 03 10:22:00 kind-worker containerd[51]: time="2019-06-03T10:22:00.414264663Z" level=info msg="PullImage "alpine:latest""
Jun 03 10:22:01 kind-worker containerd[51]: time="2019-06-03T10:22:01.553584163Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/library/alpine:latest,Labels:map[string]string{},}"
Jun 03 10:22:01 kind-worker containerd[51]: time="2019-06-03T10:22:01.561164261Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:055936d3920576da37aa9bc460d70c5f212028bda1c08c0879aedf03d7a66ea1,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Jun 03 10:22:01 kind-worker containerd[51]: time="2019-06-03T10:22:01.562052118Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/library/alpine:latest,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Jun 03 10:22:01 kind-worker systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount905440119.mount: Succeeded.
Jun 03 10:22:01 kind-worker containerd[51]: time="2019-06-03T10:22:01.709456287Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:055936d3920576da37aa9bc460d70c5f212028bda1c08c0879aedf03d7a66ea1,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Jun 03 10:22:01 kind-worker containerd[51]: time="2019-06-03T10:22:01.711844778Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/library/alpine:latest,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Jun 03 10:22:01 kind-worker containerd[51]: time="2019-06-03T10:22:01.713955263Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/library/alpine@sha256:769fddc7cc2f0a1c35abb2f91432e8beecf83916c421420e6a6da9f8975464b6,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Jun 03 10:22:01 kind-worker containerd[51]: time="2019-06-03T10:22:01.714349758Z" level=info msg="PullImage "alpine:latest" returns image reference "sha256:055936d3920576da37aa9bc460d70c5f212028bda1c08c0879aedf03d7a66ea1""
Jun 03 10:22:01 kind-worker containerd[51]: time="2019-06-03T10:22:01.719356458Z" level=info msg="CreateContainer within sandbox "7e23ebf5d912b181c762d0f4f734178ec85ae890a0a76611a94d00ff38d620ed" for container &ContainerMetadata{Name:hello,Attempt:0,}"
Jun 03 10:22:01 kind-worker systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount653545627.mount: Succeeded.
Jun 03 10:22:01 kind-worker containerd[51]: time="2019-06-03T10:22:01.770342126Z" level=info msg="CreateContainer within sandbox "7e23ebf5d912b181c762d0f4f734178ec85ae890a0a76611a94d00ff38d620ed" for &ContainerMetadata{Name:hello,Attempt:0,} returns container id "76d72017a2172ccfe1653ea0b44d5ed96b52a7f75b6fef0653af13afa90df59c""
Jun 03 10:22:01 kind-worker containerd[51]: time="2019-06-03T10:22:01.771556889Z" level=info msg="StartContainer for "76d72017a2172ccfe1653ea0b44d5ed96b52a7f75b6fef0653af13afa90df59c""
Jun 03 10:22:01 kind-worker containerd[51]: time="2019-06-03T10:22:01.772951939Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/76d72017a2172ccfe1653ea0b44d5ed96b52a7f75b6fef0653af13afa90df59c/shim.sock" debug=false pid=788
Jun 03 10:22:01 kind-worker systemd[1]: run-containerd-runc-k8s.io-76d72017a2172ccfe1653ea0b44d5ed96b52a7f75b6fef0653af13afa90df59c-runc.0MUZWw.mount: Succeeded.
Jun 03 10:22:01 kind-worker containerd[51]: time="2019-06-03T10:22:01.928688408Z" level=info msg="StartContainer for "76d72017a2172ccfe1653ea0b44d5ed96b52a7f75b6fef0653af13afa90df59c" returns successfully"
Jun 03 10:22:02 kind-worker containerd[51]: time="2019-06-03T10:22:02.208916455Z" level=info msg="Attach for "76d72017a2172ccfe1653ea0b44d5ed96b52a7f75b6fef0653af13afa90df59c" with tty true and stdin true"
Jun 03 10:22:02 kind-worker containerd[51]: time="2019-06-03T10:22:02.209020697Z" level=info msg="Attach for "76d72017a2172ccfe1653ea0b44d5ed96b52a7f75b6fef0653af13afa90df59c" returns URL "http://127.0.0.1:35752/attach/UVD12SgG""
Jun 03 10:22:02 kind-worker containerd[51]: time="2019-06-03T10:22:02.424458004Z" level=info msg="Finish piping stdout of container "76d72017a2172ccfe1653ea0b44d5ed96b52a7f75b6fef0653af13afa90df59c""
Jun 03 10:22:02 kind-worker containerd[51]: time="2019-06-03T10:22:02.424700235Z" level=info msg="Attach stream "76d72017a2172ccfe1653ea0b44d5ed96b52a7f75b6fef0653af13afa90df59c-attach-d09da7617401192029166757ed95e5c6eac264bed66b4f0408af7728e420da6b-stdout" closed"
Jun 03 10:22:02 kind-worker containerd[51]: time="2019-06-03T10:22:02.424890193Z" level=info msg="Attach stream "76d72017a2172ccfe1653ea0b44d5ed96b52a7f75b6fef0653af13afa90df59c-attach-d09da7617401192029166757ed95e5c6eac264bed66b4f0408af7728e420da6b-stdin" closed"
Jun 03 10:22:02 kind-worker kubelet[156]: E0603 10:22:02.429218 156 upgradeaware.go:384] Error proxying data from backend to client: read tcp 127.0.0.1:45862->127.0.0.1:35752: read: connection reset by peer
Jun 03 10:22:02 kind-worker containerd[51]: time="2019-06-03T10:22:02.481653631Z" level=info msg="TaskExit event &TaskExit{ContainerID:76d72017a2172ccfe1653ea0b44d5ed96b52a7f75b6fef0653af13afa90df59c,ID:76d72017a2172ccfe1653ea0b44d5ed96b52a7f75b6fef0653af13afa90df59c,Pid:805,ExitStatus:0,ExitedAt:2019-06-03 10:22:02.423456752 +0000 UTC,}"
Jun 03 10:22:02 kind-worker systemd[1]: run-containerd-io.containerd.runtime.v1.linux-k8s.io-76d72017a2172ccfe1653ea0b44d5ed96b52a7f75b6fef0653af13afa90df59c-rootfs.mount: Succeeded.
Jun 03 10:22:02 kind-worker containerd[51]: time="2019-06-03T10:22:02.543754510Z" level=info msg="shim reaped" id=76d72017a2172ccfe1653ea0b44d5ed96b52a7f75b6fef0653af13afa90df59c
2019-06-03T10:21:35.604729916Z stdout F hostIP = 172.17.0.4
2019-06-03T10:21:35.604777403Z stdout F podIP = 172.17.0.4
2019-06-03T10:21:35.616418174Z stdout F Handling node with IP: 172.17.0.4
2019-06-03T10:21:35.616450905Z stdout F handling current node
2019-06-03T10:21:35.621187056Z stdout F Handling node with IP: 172.17.0.2
2019-06-03T10:21:35.621306066Z stdout F Node kind-worker has CIDR 10.244.1.0/24
2019-06-03T10:21:35.621810876Z stdout F Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.17.0.2 Flags: [] Table: 0}
2019-06-03T10:21:35.622041495Z stdout F Handling node with IP: 172.17.0.3
2019-06-03T10:21:35.622053678Z stdout F Node kind-worker2 has CIDR 10.244.2.0/24
2019-06-03T10:21:35.622139547Z stdout F Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 172.17.0.3 Flags: [] Table: 0}
2019-06-03T10:21:45.626391249Z stdout F Handling node with IP: 172.17.0.4
2019-06-03T10:21:45.626430361Z stdout F handling current node
2019-06-03T10:21:45.626435273Z stdout F Handling node with IP: 172.17.0.2
2019-06-03T10:21:45.626438788Z stdout F Node kind-worker has CIDR 10.244.1.0/24
2019-06-03T10:21:45.62651614Z stdout F Handling node with IP: 172.17.0.3
2019-06-03T10:21:45.626524077Z stdout F Node kind-worker2 has CIDR 10.244.2.0/24
2019-06-03T10:21:55.630783773Z stdout F Handling node with IP: 172.17.0.4
2019-06-03T10:21:55.703044941Z stdout F handling current node
2019-06-03T10:21:55.703080274Z stdout F Handling node with IP: 172.17.0.2
2019-06-03T10:21:55.703108673Z stdout F Node kind-worker has CIDR 10.244.1.0/24
2019-06-03T10:21:55.703131316Z stdout F Handling node with IP: 172.17.0.3
2019-06-03T10:21:55.703137113Z stdout F Node kind-worker2 has CIDR 10.244.2.0/24
2019-06-03T10:21:04.213898382Z stdout F hostIP = 172.17.0.4
2019-06-03T10:21:04.213945881Z stdout F podIP = 172.17.0.4
2019-06-03T10:21:34.216562617Z stderr F panic: Get https://10.96.0.1:443/api/v1/nodes: dial tcp 10.96.0.1:443: i/o timeout
2019-06-03T10:21:34.216602266Z stderr F
2019-06-03T10:21:34.216609486Z stderr F goroutine 1 [running]:
2019-06-03T10:21:34.216615451Z stderr F main.main()
2019-06-03T10:21:34.216620434Z stderr F /src/main.go:84 +0x423
2019-06-03T10:21:18.219819018Z stdout F hostIP = 172.17.0.2
2019-06-03T10:21:18.219840157Z stdout F podIP = 172.17.0.2
2019-06-03T10:21:48.213045063Z stderr F panic: Get https://10.96.0.1:443/api/v1/nodes: dial tcp 10.96.0.1:443: i/o timeout
2019-06-03T10:21:48.213157104Z stderr F
2019-06-03T10:21:48.213164689Z stderr F goroutine 1 [running]:
2019-06-03T10:21:48.213169299Z stderr F main.main()
2019-06-03T10:21:48.213174648Z stderr F /src/main.go:84 +0x423
2019-06-03T10:21:50.113805916Z stdout F hostIP = 172.17.0.2
2019-06-03T10:21:50.113869562Z stdout F podIP = 172.17.0.2
2019-06-03T10:21:50.222991854Z stdout F Handling node with IP: 172.17.0.4
2019-06-03T10:21:50.223026017Z stdout F Node kind-control-plane has CIDR 10.244.0.0/24
2019-06-03T10:21:50.224991705Z stdout F Adding route {Ifindex: 0 Dst: 10.244.0.0/24 Src: <nil> Gw: 172.17.0.4 Flags: [] Table: 0}
2019-06-03T10:21:50.225013428Z stdout F Handling node with IP: 172.17.0.2
2019-06-03T10:21:50.225019051Z stdout F handling current node
2019-06-03T10:21:50.230065812Z stdout F Handling node with IP: 172.17.0.3
2019-06-03T10:21:50.230086882Z stdout F Node kind-worker2 has CIDR 10.244.2.0/24
2019-06-03T10:21:50.230092244Z stdout F Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 172.17.0.3 Flags: [] Table: 0}
2019-06-03T10:22:00.233650479Z stdout F Handling node with IP: 172.17.0.4
2019-06-03T10:22:00.233681533Z stdout F Node kind-control-plane has CIDR 10.244.0.0/24
2019-06-03T10:22:00.233687819Z stdout F Handling node with IP: 172.17.0.2
2019-06-03T10:22:00.233692284Z stdout F handling current node
2019-06-03T10:22:00.233700353Z stdout F Handling node with IP: 172.17.0.3
2019-06-03T10:22:00.233704558Z stdout F Node kind-worker2 has CIDR 10.244.2.0/24
2019-06-03T10:21:18.816534307Z stdout F hostIP = 172.17.0.3
2019-06-03T10:21:18.818771376Z stdout F podIP = 172.17.0.3
2019-06-03T10:21:48.82069408Z stderr F panic: Get https://10.96.0.1:443/api/v1/nodes: dial tcp 10.96.0.1:443: i/o timeout
2019-06-03T10:21:48.820739605Z stderr F
2019-06-03T10:21:48.820747214Z stderr F goroutine 1 [running]:
2019-06-03T10:21:48.820753008Z stderr F main.main()
2019-06-03T10:21:48.820761007Z stderr F /src/main.go:84 +0x423
2019-06-03T10:21:50.213765024Z stdout F hostIP = 172.17.0.3
2019-06-03T10:21:50.213852147Z stdout F podIP = 172.17.0.3
2019-06-03T10:21:50.308175534Z stdout F Handling node with IP: 172.17.0.4
2019-06-03T10:21:50.308216413Z stdout F Node kind-control-plane has CIDR 10.244.0.0/24
2019-06-03T10:21:50.308225535Z stdout F Adding route {Ifindex: 0 Dst: 10.244.0.0/24 Src: <nil> Gw: 172.17.0.4 Flags: [] Table: 0}
2019-06-03T10:21:50.308231846Z stdout F Handling node with IP: 172.17.0.2
2019-06-03T10:21:50.308236276Z stdout F Node kind-worker has CIDR 10.244.1.0/24
2019-06-03T10:21:50.308240713Z stdout F Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.17.0.2 Flags: [] Table: 0}
2019-06-03T10:21:50.308246023Z stdout F Handling node with IP: 172.17.0.3
2019-06-03T10:21:50.308250561Z stdout F handling current node
2019-06-03T10:22:00.403516194Z stdout F Handling node with IP: 172.17.0.4
2019-06-03T10:22:00.403566711Z stdout F Node kind-control-plane has CIDR 10.244.0.0/24
2019-06-03T10:22:00.403573152Z stdout F Handling node with IP: 172.17.0.2
2019-06-03T10:22:00.403577371Z stdout F Node kind-worker has CIDR 10.244.1.0/24
2019-06-03T10:22:00.403581658Z stdout F Handling node with IP: 172.17.0.3
2019-06-03T10:22:00.403586042Z stdout F handling current node
2019-06-03T10:20:34.869574736Z stderr F Flag --insecure-port has been deprecated, This flag will be removed in a future version. | |
2019-06-03T10:20:34.869929159Z stderr F I0603 10:20:34.869843 1 server.go:559] external host was not specified, using 172.17.0.4 | |
2019-06-03T10:20:34.870185739Z stderr F I0603 10:20:34.870154 1 server.go:146] Version: v1.14.2 | |
2019-06-03T10:20:35.808891833Z stderr F I0603 10:20:35.808734 1 plugins.go:158] Loaded 9 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook. | |
2019-06-03T10:20:35.808975315Z stderr F I0603 10:20:35.808840 1 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota. | |
2019-06-03T10:20:35.80994437Z stderr F E0603 10:20:35.809852 1 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted | |
2019-06-03T10:20:35.809961694Z stderr F E0603 10:20:35.809883 1 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted | |
2019-06-03T10:20:35.810022507Z stderr F E0603 10:20:35.809922 1 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted | |
2019-06-03T10:20:35.810038327Z stderr F E0603 10:20:35.809952 1 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted | |
2019-06-03T10:20:35.810043476Z stderr F E0603 10:20:35.809976 1 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted | |
2019-06-03T10:20:35.81004858Z stderr F E0603 10:20:35.809994 1 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted | |
2019-06-03T10:20:35.810094543Z stderr F I0603 10:20:35.810014 1 plugins.go:158] Loaded 9 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook. | |
2019-06-03T10:20:35.81010443Z stderr F I0603 10:20:35.810021 1 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota. | |
2019-06-03T10:20:35.813121287Z stderr F I0603 10:20:35.813049 1 client.go:352] parsed scheme: "" | |
2019-06-03T10:20:35.813138561Z stderr F I0603 10:20:35.813067 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T10:20:35.813203095Z stderr F I0603 10:20:35.813170 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T10:20:35.813291452Z stderr F I0603 10:20:35.813249 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:35.814018444Z stderr F W0603 10:20:35.813729 1 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... | |
2019-06-03T10:20:36.791478546Z stderr F I0603 10:20:36.791289 1 client.go:352] parsed scheme: "" | |
2019-06-03T10:20:36.791515278Z stderr F I0603 10:20:36.791317 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T10:20:36.79156654Z stderr F I0603 10:20:36.791368 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T10:20:36.79162313Z stderr F I0603 10:20:36.791452 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:36.803087198Z stderr F I0603 10:20:36.802941 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:36.826177917Z stderr F I0603 10:20:36.826039 1 client.go:352] parsed scheme: "" | |
2019-06-03T10:20:36.826264563Z stderr F I0603 10:20:36.826064 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T10:20:36.826273046Z stderr F I0603 10:20:36.826107 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T10:20:36.826347762Z stderr F I0603 10:20:36.826209 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:36.826356873Z stderr F I0603 10:20:36.826285 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:36.842693715Z stderr F I0603 10:20:36.842564 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:36.869089247Z stderr F I0603 10:20:36.868890 1 master.go:233] Using reconciler: lease | |
2019-06-03T10:20:36.869488067Z stderr F I0603 10:20:36.869376 1 client.go:352] parsed scheme: "" | |
2019-06-03T10:20:36.869519234Z stderr F I0603 10:20:36.869391 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T10:20:36.869528228Z stderr F I0603 10:20:36.869441 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T10:20:36.869609581Z stderr F I0603 10:20:36.869570 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:36.88142467Z stderr F I0603 10:20:36.881294 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:36.885349472Z stderr F I0603 10:20:36.885233 1 client.go:352] parsed scheme: "" | |
2019-06-03T10:20:36.885394134Z stderr F I0603 10:20:36.885254 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T10:20:36.886244786Z stderr F I0603 10:20:36.886177 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T10:20:36.888205872Z stderr F I0603 10:20:36.888111 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:36.899833249Z stderr F I0603 10:20:36.899707 1 client.go:352] parsed scheme: "" | |
2019-06-03T10:20:36.899894764Z stderr F I0603 10:20:36.899728 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T10:20:36.899903236Z stderr F I0603 10:20:36.899768 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T10:20:36.899931763Z stderr F I0603 10:20:36.899891 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:36.900274844Z stderr F I0603 10:20:36.900205 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:36.910879554Z stderr F I0603 10:20:36.910763 1 client.go:352] parsed scheme: "" | |
2019-06-03T10:20:36.910936116Z stderr F I0603 10:20:36.910784 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T10:20:36.910943573Z stderr F I0603 10:20:36.910823 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T10:20:36.911021429Z stderr F I0603 10:20:36.910964 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:36.911099788Z stderr F I0603 10:20:36.911049 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:36.922050375Z stderr F I0603 10:20:36.921932 1 client.go:352] parsed scheme: "" | |
2019-06-03T10:20:36.922128434Z stderr F I0603 10:20:36.921952 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T10:20:36.922139589Z stderr F I0603 10:20:36.921989 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T10:20:36.922185271Z stderr F I0603 10:20:36.922084 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:36.922376064Z stderr F I0603 10:20:36.922311 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:36.934963654Z stderr F I0603 10:20:36.934842 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:36.935091103Z stderr F I0603 10:20:36.934854 1 client.go:352] parsed scheme: "" | |
2019-06-03T10:20:36.93517821Z stderr F I0603 10:20:36.935127 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T10:20:36.935327741Z stderr F I0603 10:20:36.935270 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T10:20:36.93548429Z stderr F I0603 10:20:36.935416 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:36.949652601Z stderr F I0603 10:20:36.949500 1 client.go:352] parsed scheme: "" | |
2019-06-03T10:20:36.949677447Z stderr F I0603 10:20:36.949521 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T10:20:36.949691683Z stderr F I0603 10:20:36.949556 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T10:20:36.949949417Z stderr F I0603 10:20:36.949641 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:36.949968792Z stderr F I0603 10:20:36.949909 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:36.960644747Z stderr F I0603 10:20:36.960514 1 client.go:352] parsed scheme: "" | |
2019-06-03T10:20:36.960706086Z stderr F I0603 10:20:36.960535 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T10:20:36.96071321Z stderr F I0603 10:20:36.960570 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T10:20:36.960731347Z stderr F I0603 10:20:36.960663 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:36.960927748Z stderr F I0603 10:20:36.960855 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:36.973151516Z stderr F I0603 10:20:36.973027 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:36.974543686Z stderr F I0603 10:20:36.974449 1 client.go:352] parsed scheme: "" | |
2019-06-03T10:20:36.974642518Z stderr F I0603 10:20:36.974607 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T10:20:36.974808543Z stderr F I0603 10:20:36.974770 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T10:20:36.974963124Z stderr F I0603 10:20:36.974930 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:36.98814931Z stderr F I0603 10:20:36.988042 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:36.989142707Z stderr F I0603 10:20:36.989046 1 client.go:352] parsed scheme: "" | |
2019-06-03T10:20:36.989774013Z stderr F I0603 10:20:36.989712 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T10:20:36.989947763Z stderr F I0603 10:20:36.989876 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T10:20:36.990169006Z stderr F I0603 10:20:36.990105 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:37.001775576Z stderr F I0603 10:20:37.001660 1 client.go:352] parsed scheme: "" | |
2019-06-03T10:20:37.001797872Z stderr F I0603 10:20:37.001687 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T10:20:37.001818706Z stderr F I0603 10:20:37.001745 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T10:20:37.001959773Z stderr F I0603 10:20:37.001883 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:37.002203182Z stderr F I0603 10:20:37.002118 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:37.014311099Z stderr F I0603 10:20:37.014128 1 client.go:352] parsed scheme: "" | |
2019-06-03T10:20:37.014339533Z stderr F I0603 10:20:37.014149 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T10:20:37.0143468Z stderr F I0603 10:20:37.014186 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T10:20:37.01442452Z stderr F I0603 10:20:37.014274 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:37.014800838Z stderr F I0603 10:20:37.014662 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:37.027017829Z stderr F I0603 10:20:37.026831 1 client.go:352] parsed scheme: "" | |
2019-06-03T10:20:37.02704321Z stderr F I0603 10:20:37.026853 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T10:20:37.027089148Z stderr F I0603 10:20:37.026889 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T10:20:37.027105286Z stderr F I0603 10:20:37.026984 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:37.027227177Z stderr F I0603 10:20:37.027173 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:37.03882173Z stderr F I0603 10:20:37.038703 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:37.039499516Z stderr F I0603 10:20:37.039423 1 client.go:352] parsed scheme: "" | |
2019-06-03T10:20:37.039516148Z stderr F I0603 10:20:37.039440 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T10:20:37.039553931Z stderr F I0603 10:20:37.039491 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T10:20:37.039586347Z stderr F I0603 10:20:37.039556 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:37.051451597Z stderr F I0603 10:20:37.051337 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:37.052231438Z stderr F I0603 10:20:37.052161 1 client.go:352] parsed scheme: "" | |
2019-06-03T10:20:37.052325355Z stderr F I0603 10:20:37.052292 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T10:20:37.052480965Z stderr F I0603 10:20:37.052442 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T10:20:37.052610043Z stderr F I0603 10:20:37.052575 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:37.064829948Z stderr F I0603 10:20:37.064730 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:37.065498713Z stderr F I0603 10:20:37.065436 1 client.go:352] parsed scheme: "" | |
2019-06-03T10:20:37.065662313Z stderr F I0603 10:20:37.065620 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T10:20:37.065829584Z stderr F I0603 10:20:37.065765 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T10:20:37.065962722Z stderr F I0603 10:20:37.065916 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:37.077901851Z stderr F I0603 10:20:37.077770 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:37.078243948Z stderr F I0603 10:20:37.078194 1 client.go:352] parsed scheme: "" | |
2019-06-03T10:20:37.07829229Z stderr F I0603 10:20:37.078209 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T10:20:37.078308576Z stderr F I0603 10:20:37.078248 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T10:20:37.078380506Z stderr F I0603 10:20:37.078328 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:37.098309642Z stderr F I0603 10:20:37.098181 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:37.098978043Z stderr F I0603 10:20:37.098866 1 client.go:352] parsed scheme: "" | |
2019-06-03T10:20:37.098995783Z stderr F I0603 10:20:37.098897 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T10:20:37.099009863Z stderr F I0603 10:20:37.098955 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T10:20:37.09907317Z stderr F I0603 10:20:37.099036 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:37.109213339Z stderr F I0603 10:20:37.109097 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:37.182804829Z stderr F I0603 10:20:37.182650 1 client.go:352] parsed scheme: "" | |
2019-06-03T10:20:37.182913887Z stderr F I0603 10:20:37.182876 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T10:20:37.184351921Z stderr F I0603 10:20:37.184259 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T10:20:37.184571599Z stderr F I0603 10:20:37.184515 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:37.200374679Z stderr F I0603 10:20:37.200262 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:37.200883061Z stderr F I0603 10:20:37.200806 1 client.go:352] parsed scheme: "" | |
2019-06-03T10:20:37.20089925Z stderr F I0603 10:20:37.200826 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T10:20:37.200937904Z stderr F I0603 10:20:37.200890 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T10:20:37.201047057Z stderr F I0603 10:20:37.201010 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:37.212028167Z stderr F I0603 10:20:37.211924 1 client.go:352] parsed scheme: "" | |
2019-06-03T10:20:37.212051856Z stderr F I0603 10:20:37.211950 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T10:20:37.212107399Z stderr F I0603 10:20:37.212017 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T10:20:37.212230652Z stderr F I0603 10:20:37.212172 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:37.212389814Z stderr F I0603 10:20:37.212333 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:37.226167499Z stderr F I0603 10:20:37.226042 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:37.22723963Z stderr F I0603 10:20:37.227143 1 client.go:352] parsed scheme: "" | |
2019-06-03T10:20:37.227454755Z stderr F I0603 10:20:37.227339 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T10:20:37.227619443Z stderr F I0603 10:20:37.227553 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T10:20:37.227795629Z stderr F I0603 10:20:37.227727 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:37.239342054Z stderr F I0603 10:20:37.239159 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:37.240330038Z stderr F I0603 10:20:37.240225 1 client.go:352] parsed scheme: "" | |
2019-06-03T10:20:37.240461452Z stderr F I0603 10:20:37.240382 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T10:20:37.240603135Z stderr F I0603 10:20:37.240563 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T10:20:37.240752018Z stderr F I0603 10:20:37.240710 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:37.253139377Z stderr F I0603 10:20:37.252924 1 client.go:352] parsed scheme: "" | |
2019-06-03T10:20:37.253169705Z stderr F I0603 10:20:37.252951 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T10:20:37.253208756Z stderr F I0603 10:20:37.252990 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T10:20:37.253224102Z stderr F I0603 10:20:37.253085 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:37.25339114Z stderr F I0603 10:20:37.253313 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:37.265109507Z stderr F I0603 10:20:37.264909 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:37.265986061Z stderr F I0603 10:20:37.265912 1 client.go:352] parsed scheme: "" | |
2019-06-03T10:20:37.266090282Z stderr F I0603 10:20:37.266022 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T10:20:37.26610305Z stderr F I0603 10:20:37.266071 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T10:20:37.266158452Z stderr F I0603 10:20:37.266120 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:37.278442594Z stderr F I0603 10:20:37.278243 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T10:20:37.279357182Z stderr F I0603 10:20:37.279102 1 client.go:352] parsed scheme: "" | |
2019-06-03T10:20:37.279518294Z stderr F I0603 10:20:37.279446 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T10:20:37.279726812Z stderr F I0603 10:20:37.279658 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T10:20:37.279928463Z stderr F I0603 10:20:37.279850 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.291303215Z stderr F I0603 10:20:37.291128 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.292709775Z stderr F I0603 10:20:37.292561 1 client.go:352] parsed scheme: ""
2019-06-03T10:20:37.292842075Z stderr F I0603 10:20:37.292756 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T10:20:37.293040947Z stderr F I0603 10:20:37.292954 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T10:20:37.293272779Z stderr F I0603 10:20:37.293161 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.304021133Z stderr F I0603 10:20:37.303886 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.305190272Z stderr F I0603 10:20:37.305072 1 client.go:352] parsed scheme: ""
2019-06-03T10:20:37.305928018Z stderr F I0603 10:20:37.305835 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T10:20:37.306127783Z stderr F I0603 10:20:37.306059 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T10:20:37.306304426Z stderr F I0603 10:20:37.306228 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.317772329Z stderr F I0603 10:20:37.317640 1 client.go:352] parsed scheme: ""
2019-06-03T10:20:37.317797491Z stderr F I0603 10:20:37.317668 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T10:20:37.317805122Z stderr F I0603 10:20:37.317726 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T10:20:37.318007707Z stderr F I0603 10:20:37.317890 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.318240411Z stderr F I0603 10:20:37.318175 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.331015608Z stderr F I0603 10:20:37.330829 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.331174434Z stderr F I0603 10:20:37.330920 1 client.go:352] parsed scheme: ""
2019-06-03T10:20:37.331334676Z stderr F I0603 10:20:37.331235 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T10:20:37.331582771Z stderr F I0603 10:20:37.331458 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T10:20:37.331595049Z stderr F I0603 10:20:37.331544 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.342634097Z stderr F I0603 10:20:37.342489 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.34381594Z stderr F I0603 10:20:37.343717 1 client.go:352] parsed scheme: ""
2019-06-03T10:20:37.343985752Z stderr F I0603 10:20:37.343925 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T10:20:37.344141875Z stderr F I0603 10:20:37.344078 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T10:20:37.344310587Z stderr F I0603 10:20:37.344246 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.354684017Z stderr F I0603 10:20:37.354516 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.357025643Z stderr F I0603 10:20:37.356907 1 client.go:352] parsed scheme: ""
2019-06-03T10:20:37.357240303Z stderr F I0603 10:20:37.357160 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T10:20:37.357574194Z stderr F I0603 10:20:37.357473 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T10:20:37.357616799Z stderr F I0603 10:20:37.357570 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.369787478Z stderr F I0603 10:20:37.369672 1 client.go:352] parsed scheme: ""
2019-06-03T10:20:37.37043378Z stderr F I0603 10:20:37.369744 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T10:20:37.370455158Z stderr F I0603 10:20:37.369797 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T10:20:37.370476749Z stderr F I0603 10:20:37.369924 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.370770634Z stderr F I0603 10:20:37.370169 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.38525737Z stderr F I0603 10:20:37.385100 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.386809429Z stderr F I0603 10:20:37.386680 1 client.go:352] parsed scheme: ""
2019-06-03T10:20:37.38693172Z stderr F I0603 10:20:37.386884 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T10:20:37.387091687Z stderr F I0603 10:20:37.387029 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T10:20:37.387278249Z stderr F I0603 10:20:37.387214 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.400048307Z stderr F I0603 10:20:37.399889 1 client.go:352] parsed scheme: ""
2019-06-03T10:20:37.400117753Z stderr F I0603 10:20:37.399914 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T10:20:37.400143138Z stderr F I0603 10:20:37.400014 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T10:20:37.400689697Z stderr F I0603 10:20:37.400181 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.400853014Z stderr F I0603 10:20:37.400790 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.413239661Z stderr F I0603 10:20:37.413077 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.413977135Z stderr F I0603 10:20:37.413895 1 client.go:352] parsed scheme: ""
2019-06-03T10:20:37.414088913Z stderr F I0603 10:20:37.414045 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T10:20:37.414229613Z stderr F I0603 10:20:37.414175 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T10:20:37.414455399Z stderr F I0603 10:20:37.414361 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.427266369Z stderr F I0603 10:20:37.427106 1 client.go:352] parsed scheme: ""
2019-06-03T10:20:37.427306942Z stderr F I0603 10:20:37.427128 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T10:20:37.427314244Z stderr F I0603 10:20:37.427168 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T10:20:37.427331255Z stderr F I0603 10:20:37.427225 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.427736129Z stderr F I0603 10:20:37.427617 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.443102088Z stderr F I0603 10:20:37.442901 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.444331109Z stderr F I0603 10:20:37.444236 1 client.go:352] parsed scheme: ""
2019-06-03T10:20:37.444825858Z stderr F I0603 10:20:37.444757 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T10:20:37.44494547Z stderr F I0603 10:20:37.444899 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T10:20:37.445111732Z stderr F I0603 10:20:37.445052 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.456591686Z stderr F I0603 10:20:37.456480 1 client.go:352] parsed scheme: ""
2019-06-03T10:20:37.45664407Z stderr F I0603 10:20:37.456501 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T10:20:37.456661243Z stderr F I0603 10:20:37.456560 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T10:20:37.456798404Z stderr F I0603 10:20:37.456750 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.457086065Z stderr F I0603 10:20:37.457041 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.468663722Z stderr F I0603 10:20:37.468534 1 client.go:352] parsed scheme: ""
2019-06-03T10:20:37.468699834Z stderr F I0603 10:20:37.468559 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T10:20:37.468706693Z stderr F I0603 10:20:37.468617 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T10:20:37.468846891Z stderr F I0603 10:20:37.468786 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.469099543Z stderr F I0603 10:20:37.469055 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.481179065Z stderr F I0603 10:20:37.481032 1 client.go:352] parsed scheme: ""
2019-06-03T10:20:37.481210792Z stderr F I0603 10:20:37.481120 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T10:20:37.481234663Z stderr F I0603 10:20:37.481182 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T10:20:37.481301598Z stderr F I0603 10:20:37.481265 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.481698999Z stderr F I0603 10:20:37.481619 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.495144887Z stderr F I0603 10:20:37.495004 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.495625245Z stderr F I0603 10:20:37.495546 1 client.go:352] parsed scheme: ""
2019-06-03T10:20:37.495678894Z stderr F I0603 10:20:37.495644 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T10:20:37.495759372Z stderr F I0603 10:20:37.495710 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T10:20:37.495849466Z stderr F I0603 10:20:37.495800 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.508413229Z stderr F I0603 10:20:37.508238 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.509121934Z stderr F I0603 10:20:37.509027 1 client.go:352] parsed scheme: ""
2019-06-03T10:20:37.50923107Z stderr F I0603 10:20:37.509185 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T10:20:37.509436608Z stderr F I0603 10:20:37.509321 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T10:20:37.509606557Z stderr F I0603 10:20:37.509547 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.522318094Z stderr F I0603 10:20:37.522147 1 client.go:352] parsed scheme: ""
2019-06-03T10:20:37.522345788Z stderr F I0603 10:20:37.522173 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T10:20:37.522497588Z stderr F I0603 10:20:37.522365 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.522865267Z stderr F I0603 10:20:37.522795 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T10:20:37.522950195Z stderr F I0603 10:20:37.522896 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.534479893Z stderr F I0603 10:20:37.534314 1 client.go:352] parsed scheme: ""
2019-06-03T10:20:37.53451025Z stderr F I0603 10:20:37.534339 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T10:20:37.53453967Z stderr F I0603 10:20:37.534459 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T10:20:37.535287445Z stderr F I0603 10:20:37.535159 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.535643254Z stderr F I0603 10:20:37.535519 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.547513621Z stderr F I0603 10:20:37.547322 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.54818105Z stderr F I0603 10:20:37.548076 1 client.go:352] parsed scheme: ""
2019-06-03T10:20:37.548195364Z stderr F I0603 10:20:37.548094 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T10:20:37.548201882Z stderr F I0603 10:20:37.548131 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T10:20:37.548519585Z stderr F I0603 10:20:37.548429 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.574529134Z stderr F I0603 10:20:37.574334 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.578590433Z stderr F I0603 10:20:37.578406 1 client.go:352] parsed scheme: ""
2019-06-03T10:20:37.578618595Z stderr F I0603 10:20:37.578434 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T10:20:37.586953379Z stderr F I0603 10:20:37.586845 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T10:20:37.587339461Z stderr F I0603 10:20:37.587287 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.607602122Z stderr F I0603 10:20:37.607479 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.608256544Z stderr F I0603 10:20:37.608149 1 client.go:352] parsed scheme: ""
2019-06-03T10:20:37.608274599Z stderr F I0603 10:20:37.608168 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T10:20:37.608281254Z stderr F I0603 10:20:37.608207 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T10:20:37.608605708Z stderr F I0603 10:20:37.608515 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.620453386Z stderr F I0603 10:20:37.620295 1 client.go:352] parsed scheme: ""
2019-06-03T10:20:37.620477049Z stderr F I0603 10:20:37.620317 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T10:20:37.620484424Z stderr F I0603 10:20:37.620355 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T10:20:37.620573149Z stderr F I0603 10:20:37.620532 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.620793336Z stderr F I0603 10:20:37.620743 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.632179783Z stderr F I0603 10:20:37.632059 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.633164798Z stderr F I0603 10:20:37.633073 1 client.go:352] parsed scheme: ""
2019-06-03T10:20:37.633265291Z stderr F I0603 10:20:37.633222 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T10:20:37.6333484Z stderr F I0603 10:20:37.633317 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T10:20:37.633524308Z stderr F I0603 10:20:37.633471 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.645456617Z stderr F I0603 10:20:37.645288 1 client.go:352] parsed scheme: ""
2019-06-03T10:20:37.645499278Z stderr F I0603 10:20:37.645311 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T10:20:37.645507039Z stderr F I0603 10:20:37.645351 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T10:20:37.645565943Z stderr F I0603 10:20:37.645506 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.6457884Z stderr F I0603 10:20:37.645741 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.656213414Z stderr F I0603 10:20:37.656049 1 client.go:352] parsed scheme: ""
2019-06-03T10:20:37.65626144Z stderr F I0603 10:20:37.656075 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T10:20:37.656268672Z stderr F I0603 10:20:37.656117 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T10:20:37.656326786Z stderr F I0603 10:20:37.656213 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.656566652Z stderr F I0603 10:20:37.656474 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.669070859Z stderr F I0603 10:20:37.668941 1 client.go:352] parsed scheme: ""
2019-06-03T10:20:37.669109847Z stderr F I0603 10:20:37.668966 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T10:20:37.669125269Z stderr F I0603 10:20:37.669005 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T10:20:37.669188691Z stderr F I0603 10:20:37.669147 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.669308823Z stderr F I0603 10:20:37.669264 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.681534365Z stderr F I0603 10:20:37.681303 1 client.go:352] parsed scheme: ""
2019-06-03T10:20:37.681565259Z stderr F I0603 10:20:37.681351 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T10:20:37.681572377Z stderr F I0603 10:20:37.681447 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T10:20:37.681647341Z stderr F I0603 10:20:37.681542 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.681848501Z stderr F I0603 10:20:37.681737 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.693983184Z stderr F I0603 10:20:37.693797 1 client.go:352] parsed scheme: ""
2019-06-03T10:20:37.694010964Z stderr F I0603 10:20:37.693821 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T10:20:37.694018073Z stderr F I0603 10:20:37.693859 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T10:20:37.694086259Z stderr F I0603 10:20:37.693956 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.694208946Z stderr F I0603 10:20:37.694151 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.704108627Z stderr F I0603 10:20:37.703945 1 client.go:352] parsed scheme: ""
2019-06-03T10:20:37.704176204Z stderr F I0603 10:20:37.703969 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T10:20:37.704184574Z stderr F I0603 10:20:37.704007 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T10:20:37.70425469Z stderr F I0603 10:20:37.704097 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.70434773Z stderr F I0603 10:20:37.704276 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.72847465Z stderr F I0603 10:20:37.728245 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.729030327Z stderr F I0603 10:20:37.728937 1 client.go:352] parsed scheme: ""
2019-06-03T10:20:37.729046769Z stderr F I0603 10:20:37.728995 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T10:20:37.72908942Z stderr F I0603 10:20:37.729054 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T10:20:37.729188931Z stderr F I0603 10:20:37.729132 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.742014229Z stderr F I0603 10:20:37.741841 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.742655271Z stderr F I0603 10:20:37.742534 1 client.go:352] parsed scheme: ""
2019-06-03T10:20:37.742758001Z stderr F I0603 10:20:37.742553 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T10:20:37.742770758Z stderr F I0603 10:20:37.742591 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T10:20:37.742788033Z stderr F I0603 10:20:37.742651 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.755875956Z stderr F I0603 10:20:37.755727 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.757135239Z stderr F I0603 10:20:37.757020 1 client.go:352] parsed scheme: ""
2019-06-03T10:20:37.757185563Z stderr F I0603 10:20:37.757138 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T10:20:37.757271449Z stderr F I0603 10:20:37.757223 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T10:20:37.757427425Z stderr F I0603 10:20:37.757317 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.775289084Z stderr F I0603 10:20:37.775072 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.77629387Z stderr F I0603 10:20:37.776179 1 client.go:352] parsed scheme: ""
2019-06-03T10:20:37.776323243Z stderr F I0603 10:20:37.776262 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T10:20:37.776420885Z stderr F I0603 10:20:37.776336 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T10:20:37.77653461Z stderr F I0603 10:20:37.776478 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.788560556Z stderr F I0603 10:20:37.788426 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.789482394Z stderr F I0603 10:20:37.789403 1 client.go:352] parsed scheme: ""
2019-06-03T10:20:37.789619491Z stderr F I0603 10:20:37.789520 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T10:20:37.78964241Z stderr F I0603 10:20:37.789577 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T10:20:37.789722415Z stderr F I0603 10:20:37.789665 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.802024622Z stderr F I0603 10:20:37.801872 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.803238607Z stderr F I0603 10:20:37.803155 1 client.go:352] parsed scheme: ""
2019-06-03T10:20:37.803842534Z stderr F I0603 10:20:37.803698 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T10:20:37.803878637Z stderr F I0603 10:20:37.803773 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T10:20:37.803896799Z stderr F I0603 10:20:37.803821 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.814660281Z stderr F I0603 10:20:37.814562 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.815627473Z stderr F I0603 10:20:37.815523 1 client.go:352] parsed scheme: ""
2019-06-03T10:20:37.815690292Z stderr F I0603 10:20:37.815541 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T10:20:37.815701262Z stderr F I0603 10:20:37.815578 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T10:20:37.81571566Z stderr F I0603 10:20:37.815645 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.827852896Z stderr F I0603 10:20:37.827668 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.828493888Z stderr F I0603 10:20:37.828323 1 client.go:352] parsed scheme: ""
2019-06-03T10:20:37.828558892Z stderr F I0603 10:20:37.828394 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T10:20:37.828566587Z stderr F I0603 10:20:37.828435 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T10:20:37.82861164Z stderr F I0603 10:20:37.828479 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.841112582Z stderr F I0603 10:20:37.840935 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.842979878Z stderr F I0603 10:20:37.842887 1 client.go:352] parsed scheme: ""
2019-06-03T10:20:37.843064506Z stderr F I0603 10:20:37.843029 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T10:20:37.843196389Z stderr F I0603 10:20:37.843147 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T10:20:37.843509185Z stderr F I0603 10:20:37.843414 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.855230103Z stderr F I0603 10:20:37.855075 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.85622399Z stderr F I0603 10:20:37.856145 1 client.go:352] parsed scheme: ""
2019-06-03T10:20:37.856322317Z stderr F I0603 10:20:37.856288 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T10:20:37.856492637Z stderr F I0603 10:20:37.856444 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T10:20:37.856614001Z stderr F I0603 10:20:37.856571 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.867884162Z stderr F I0603 10:20:37.867668 1 client.go:352] parsed scheme: ""
2019-06-03T10:20:37.867906878Z stderr F I0603 10:20:37.867690 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T10:20:37.867940275Z stderr F I0603 10:20:37.867730 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T10:20:37.867969125Z stderr F I0603 10:20:37.867837 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.868158839Z stderr F I0603 10:20:37.868111 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.880528189Z stderr F I0603 10:20:37.880414 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.881147805Z stderr F I0603 10:20:37.881061 1 client.go:352] parsed scheme: ""
2019-06-03T10:20:37.881165674Z stderr F I0603 10:20:37.881077 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T10:20:37.881203473Z stderr F I0603 10:20:37.881139 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T10:20:37.881239791Z stderr F I0603 10:20:37.881211 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.895044062Z stderr F I0603 10:20:37.894791 1 client.go:352] parsed scheme: ""
2019-06-03T10:20:37.895073819Z stderr F I0603 10:20:37.894813 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T10:20:37.895080955Z stderr F I0603 10:20:37.894853 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T10:20:37.895119524Z stderr F I0603 10:20:37.894951 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.895312804Z stderr F I0603 10:20:37.895193 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.908588264Z stderr F I0603 10:20:37.908327 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.909650445Z stderr F I0603 10:20:37.909471 1 client.go:352] parsed scheme: ""
2019-06-03T10:20:37.909673896Z stderr F I0603 10:20:37.909490 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T10:20:37.909680917Z stderr F I0603 10:20:37.909526 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T10:20:37.909716283Z stderr F I0603 10:20:37.909590 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:37.927119464Z stderr F I0603 10:20:37.926953 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:38.063814088Z stderr F W0603 10:20:38.063540 1 genericapiserver.go:344] Skipping API batch/v2alpha1 because it has no resources.
2019-06-03T10:20:38.095963644Z stderr F W0603 10:20:38.095745 1 genericapiserver.go:344] Skipping API node.k8s.io/v1alpha1 because it has no resources.
2019-06-03T10:20:38.099737291Z stderr F W0603 10:20:38.099568 1 genericapiserver.go:344] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
2019-06-03T10:20:38.100754522Z stderr F W0603 10:20:38.100588 1 genericapiserver.go:344] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
2019-06-03T10:20:38.102932593Z stderr F W0603 10:20:38.102801 1 genericapiserver.go:344] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
2019-06-03T10:20:38.946169885Z stderr F E0603 10:20:38.945898 1 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
2019-06-03T10:20:38.946231962Z stderr F E0603 10:20:38.945959 1 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
2019-06-03T10:20:38.94623811Z stderr F E0603 10:20:38.946009 1 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
2019-06-03T10:20:38.946316123Z stderr F E0603 10:20:38.946043 1 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
2019-06-03T10:20:38.946327363Z stderr F E0603 10:20:38.946065 1 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
2019-06-03T10:20:38.946335683Z stderr F E0603 10:20:38.946086 1 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
2019-06-03T10:20:38.946345394Z stderr F I0603 10:20:38.946113 1 plugins.go:158] Loaded 9 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.
2019-06-03T10:20:38.946351383Z stderr F I0603 10:20:38.946121 1 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
2019-06-03T10:20:38.948317017Z stderr F I0603 10:20:38.948144 1 client.go:352] parsed scheme: ""
2019-06-03T10:20:38.948335335Z stderr F I0603 10:20:38.948167 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T10:20:38.948342675Z stderr F I0603 10:20:38.948209 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T10:20:38.948482791Z stderr F I0603 10:20:38.948426 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:38.959645201Z stderr F I0603 10:20:38.959485 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:38.960189427Z stderr F I0603 10:20:38.960116 1 client.go:352] parsed scheme: ""
2019-06-03T10:20:38.960332799Z stderr F I0603 10:20:38.960238 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T10:20:38.960459806Z stderr F I0603 10:20:38.960412 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T10:20:38.960592307Z stderr F I0603 10:20:38.960546 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:38.971770913Z stderr F I0603 10:20:38.971610 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T10:20:40.703513066Z stderr F I0603 10:20:40.703302 1 secure_serving.go:116] Serving securely on [::]:6443
2019-06-03T10:20:40.703550017Z stderr F I0603 10:20:40.703394 1 autoregister_controller.go:139] Starting autoregister controller
2019-06-03T10:20:40.703557299Z stderr F I0603 10:20:40.703402 1 cache.go:32] Waiting for caches to sync for autoregister controller
2019-06-03T10:20:40.706606258Z stderr F I0603 10:20:40.706385 1 apiservice_controller.go:94] Starting APIServiceRegistrationController | |
2019-06-03T10:20:40.706643985Z stderr F I0603 10:20:40.706562 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller | |
2019-06-03T10:20:40.70667401Z stderr F I0603 10:20:40.706645 1 controller.go:81] Starting OpenAPI AggregationController | |
2019-06-03T10:20:40.709445557Z stderr F I0603 10:20:40.709328 1 crd_finalizer.go:242] Starting CRDFinalizer | |
2019-06-03T10:20:40.709468248Z stderr F I0603 10:20:40.709373 1 available_controller.go:320] Starting AvailableConditionController | |
2019-06-03T10:20:40.709474589Z stderr F I0603 10:20:40.709388 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller | |
2019-06-03T10:20:40.709643346Z stderr F I0603 10:20:40.709553 1 crdregistration_controller.go:112] Starting crd-autoregister controller | |
2019-06-03T10:20:40.709697796Z stderr F I0603 10:20:40.709568 1 controller_utils.go:1027] Waiting for caches to sync for crd-autoregister controller | |
2019-06-03T10:20:40.812690868Z stderr F I0603 10:20:40.808582 1 customresource_discovery_controller.go:208] Starting DiscoveryController | |
2019-06-03T10:20:40.81273232Z stderr F I0603 10:20:40.808658 1 naming_controller.go:284] Starting NamingConditionController | |
2019-06-03T10:20:40.812745788Z stderr F I0603 10:20:40.808674 1 establishing_controller.go:73] Starting EstablishingController | |
2019-06-03T10:20:40.868784835Z stderr F E0603 10:20:40.868528 1 controller.go:148] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.17.0.4, ResourceVersion: 0, AdditionalErrorMsg: | |
2019-06-03T10:20:40.940757782Z stderr F I0603 10:20:40.940618 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller | |
2019-06-03T10:20:41.003609213Z stderr F I0603 10:20:41.003523 1 cache.go:39] Caches are synced for autoregister controller | |
2019-06-03T10:20:41.01345198Z stderr F I0603 10:20:41.013334 1 controller_utils.go:1034] Caches are synced for crd-autoregister controller | |
2019-06-03T10:20:41.013616977Z stderr F I0603 10:20:41.013577 1 cache.go:39] Caches are synced for AvailableConditionController controller | |
2019-06-03T10:20:41.701425463Z stderr F I0603 10:20:41.701252 1 controller.go:107] OpenAPI AggregationController: Processing item | |
2019-06-03T10:20:41.701465507Z stderr F I0603 10:20:41.701292 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue). | |
2019-06-03T10:20:41.701473479Z stderr F I0603 10:20:41.701308 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue). | |
2019-06-03T10:20:41.716672554Z stderr F I0603 10:20:41.716499 1 storage_scheduling.go:113] created PriorityClass system-node-critical with value 2000001000 | |
2019-06-03T10:20:41.725573912Z stderr F I0603 10:20:41.725412 1 storage_scheduling.go:113] created PriorityClass system-cluster-critical with value 2000000000 | |
2019-06-03T10:20:41.725599969Z stderr F I0603 10:20:41.725440 1 storage_scheduling.go:122] all system priority classes are created successfully or already exist. | |
2019-06-03T10:20:41.73623477Z stderr F I0603 10:20:41.736082 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/cluster-admin | |
2019-06-03T10:20:41.740313231Z stderr F I0603 10:20:41.740148 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:discovery | |
2019-06-03T10:20:41.744261255Z stderr F I0603 10:20:41.744058 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:basic-user | |
2019-06-03T10:20:41.747917577Z stderr F I0603 10:20:41.747718 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer | |
2019-06-03T10:20:41.752632141Z stderr F I0603 10:20:41.752472 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/admin | |
2019-06-03T10:20:41.757404774Z stderr F I0603 10:20:41.757198 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/edit | |
2019-06-03T10:20:41.760401718Z stderr F I0603 10:20:41.760282 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/view | |
2019-06-03T10:20:41.763922965Z stderr F I0603 10:20:41.763808 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin | |
2019-06-03T10:20:41.767797048Z stderr F I0603 10:20:41.767670 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit | |
2019-06-03T10:20:41.771794526Z stderr F I0603 10:20:41.771659 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view | |
2019-06-03T10:20:41.775475861Z stderr F I0603 10:20:41.775370 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:heapster | |
2019-06-03T10:20:41.779436257Z stderr F I0603 10:20:41.779322 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node | |
2019-06-03T10:20:41.78310658Z stderr F I0603 10:20:41.782944 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector | |
2019-06-03T10:20:41.786928876Z stderr F I0603 10:20:41.786781 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-proxier | |
2019-06-03T10:20:41.790792923Z stderr F I0603 10:20:41.790651 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin | |
2019-06-03T10:20:41.796983949Z stderr F I0603 10:20:41.796873 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper | |
2019-06-03T10:20:41.808635628Z stderr F I0603 10:20:41.808478 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator | |
2019-06-03T10:20:41.812785073Z stderr F I0603 10:20:41.812653 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator | |
2019-06-03T10:20:41.81665063Z stderr F I0603 10:20:41.816535 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager | |
2019-06-03T10:20:41.820962768Z stderr F I0603 10:20:41.820779 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler | |
2019-06-03T10:20:41.826545964Z stderr F I0603 10:20:41.826433 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-dns | |
2019-06-03T10:20:41.830143849Z stderr F I0603 10:20:41.829974 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner | |
2019-06-03T10:20:41.833678412Z stderr F I0603 10:20:41.833555 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher | |
2019-06-03T10:20:41.838257482Z stderr F I0603 10:20:41.838152 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aws-cloud-provider | |
2019-06-03T10:20:41.84175091Z stderr F I0603 10:20:41.841675 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient | |
2019-06-03T10:20:41.845474471Z stderr F I0603 10:20:41.845375 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient | |
2019-06-03T10:20:41.849674304Z stderr F I0603 10:20:41.849589 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler | |
2019-06-03T10:20:41.854833352Z stderr F I0603 10:20:41.854624 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner | |
2019-06-03T10:20:41.858324498Z stderr F I0603 10:20:41.858190 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller | |
2019-06-03T10:20:41.862390343Z stderr F I0603 10:20:41.862296 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller | |
2019-06-03T10:20:41.866827361Z stderr F I0603 10:20:41.866699 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller | |
2019-06-03T10:20:41.872067011Z stderr F I0603 10:20:41.871889 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller | |
2019-06-03T10:20:41.876731719Z stderr F I0603 10:20:41.876622 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller | |
2019-06-03T10:20:41.880361019Z stderr F I0603 10:20:41.880266 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller | |
2019-06-03T10:20:41.883921046Z stderr F I0603 10:20:41.883824 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller | |
2019-06-03T10:20:41.887530732Z stderr F I0603 10:20:41.887411 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller | |
2019-06-03T10:20:41.891303419Z stderr F I0603 10:20:41.891219 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector | |
2019-06-03T10:20:41.894969005Z stderr F I0603 10:20:41.894850 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler | |
2019-06-03T10:20:41.900154062Z stderr F I0603 10:20:41.900035 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller | |
2019-06-03T10:20:41.90485083Z stderr F I0603 10:20:41.904727 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller | |
2019-06-03T10:20:41.912560595Z stderr F I0603 10:20:41.912395 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller | |
2019-06-03T10:20:41.916772946Z stderr F I0603 10:20:41.916561 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder | |
2019-06-03T10:20:41.920400801Z stderr F I0603 10:20:41.920284 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector | |
2019-06-03T10:20:41.923861167Z stderr F I0603 10:20:41.923703 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller | |
2019-06-03T10:20:41.93057962Z stderr F I0603 10:20:41.930470 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller | |
2019-06-03T10:20:41.93718936Z stderr F I0603 10:20:41.937081 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller | |
2019-06-03T10:20:41.943337311Z stderr F I0603 10:20:41.943222 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller | |
2019-06-03T10:20:41.947070591Z stderr F I0603 10:20:41.946917 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller | |
2019-06-03T10:20:41.951182538Z stderr F I0603 10:20:41.951058 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller | |
2019-06-03T10:20:41.956344051Z stderr F I0603 10:20:41.956259 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller | |
2019-06-03T10:20:41.961819123Z stderr F I0603 10:20:41.961704 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller | |
2019-06-03T10:20:41.973534542Z stderr F I0603 10:20:41.973412 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller | |
2019-06-03T10:20:42.01789117Z stderr F I0603 10:20:42.017672 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller | |
2019-06-03T10:20:42.052562857Z stderr F I0603 10:20:42.052402 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller | |
2019-06-03T10:20:42.093544488Z stderr F I0603 10:20:42.093410 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin | |
2019-06-03T10:20:42.133448462Z stderr F I0603 10:20:42.133270 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery | |
2019-06-03T10:20:42.173251036Z stderr F I0603 10:20:42.173078 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user | |
2019-06-03T10:20:42.213076036Z stderr F I0603 10:20:42.212859 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer | |
2019-06-03T10:20:42.253025904Z stderr F I0603 10:20:42.252841 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier | |
2019-06-03T10:20:42.295572647Z stderr F I0603 10:20:42.295437 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager | |
2019-06-03T10:20:42.33441536Z stderr F I0603 10:20:42.334281 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns | |
2019-06-03T10:20:42.373287022Z stderr F I0603 10:20:42.373094 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler | |
2019-06-03T10:20:42.413239099Z stderr F I0603 10:20:42.413099 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:aws-cloud-provider | |
2019-06-03T10:20:42.45370556Z stderr F I0603 10:20:42.453564 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler | |
2019-06-03T10:20:42.493410094Z stderr F I0603 10:20:42.493273 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:node | |
2019-06-03T10:20:42.535040829Z stderr F I0603 10:20:42.534851 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller | |
2019-06-03T10:20:42.575813239Z stderr F I0603 10:20:42.575689 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller | |
2019-06-03T10:20:42.61374811Z stderr F I0603 10:20:42.613571 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller | |
2019-06-03T10:20:42.653088513Z stderr F I0603 10:20:42.652857 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller | |
2019-06-03T10:20:42.693356264Z stderr F I0603 10:20:42.693177 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller | |
2019-06-03T10:20:42.733107328Z stderr F I0603 10:20:42.732888 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller | |
2019-06-03T10:20:42.773377967Z stderr F I0603 10:20:42.773116 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller | |
2019-06-03T10:20:42.813382133Z stderr F I0603 10:20:42.813190 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller | |
2019-06-03T10:20:42.853363879Z stderr F I0603 10:20:42.853141 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector | |
2019-06-03T10:20:42.893259627Z stderr F I0603 10:20:42.893077 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler | |
2019-06-03T10:20:42.937362861Z stderr F I0603 10:20:42.937220 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller | |
2019-06-03T10:20:42.973701121Z stderr F I0603 10:20:42.973558 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller | |
2019-06-03T10:20:43.013025087Z stderr F I0603 10:20:43.012780 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller | |
2019-06-03T10:20:43.053281524Z stderr F I0603 10:20:43.053084 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder | |
2019-06-03T10:20:43.09352798Z stderr F I0603 10:20:43.093369 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector | |
2019-06-03T10:20:43.133032386Z stderr F I0603 10:20:43.132801 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller | |
2019-06-03T10:20:43.172957469Z stderr F I0603 10:20:43.172763 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller | |
2019-06-03T10:20:43.213519796Z stderr F I0603 10:20:43.213360 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller | |
2019-06-03T10:20:43.253140719Z stderr F I0603 10:20:43.252970 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller | |
2019-06-03T10:20:43.293533885Z stderr F I0603 10:20:43.293339 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller | |
2019-06-03T10:20:43.332807767Z stderr F I0603 10:20:43.332649 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller | |
2019-06-03T10:20:43.373105114Z stderr F I0603 10:20:43.372949 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller | |
2019-06-03T10:20:43.412763248Z stderr F I0603 10:20:43.412605 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller | |
2019-06-03T10:20:43.452593932Z stderr F I0603 10:20:43.452456 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller | |
2019-06-03T10:20:43.493255524Z stderr F I0603 10:20:43.493120 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller | |
2019-06-03T10:20:43.532989341Z stderr F I0603 10:20:43.532824 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller | |
2019-06-03T10:20:43.570868551Z stderr F I0603 10:20:43.570654 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io | |
2019-06-03T10:20:43.573373962Z stderr F I0603 10:20:43.573255 1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system | |
2019-06-03T10:20:43.613121598Z stderr F I0603 10:20:43.612981 1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system | |
2019-06-03T10:20:43.652684048Z stderr F I0603 10:20:43.652558 1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system | |
2019-06-03T10:20:43.692876978Z stderr F I0603 10:20:43.692720 1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system | |
2019-06-03T10:20:43.732899672Z stderr F I0603 10:20:43.732735 1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system | |
2019-06-03T10:20:43.773699247Z stderr F I0603 10:20:43.773518 1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system | |
2019-06-03T10:20:43.814493769Z stderr F I0603 10:20:43.814352 1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public | |
2019-06-03T10:20:43.85134952Z stderr F I0603 10:20:43.851168 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io | |
2019-06-03T10:20:43.853443425Z stderr F I0603 10:20:43.853321 1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public | |
2019-06-03T10:20:43.892714901Z stderr F I0603 10:20:43.892502 1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system | |
2019-06-03T10:20:43.932727781Z stderr F I0603 10:20:43.932616 1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system | |
2019-06-03T10:20:43.941004717Z stderr F I0603 10:20:43.940872 1 controller.go:606] quota admission added evaluator for: endpoints | |
2019-06-03T10:20:43.972934249Z stderr F I0603 10:20:43.972775 1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system | |
2019-06-03T10:20:44.012639549Z stderr F I0603 10:20:44.012482 1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system | |
2019-06-03T10:20:44.05219434Z stderr F I0603 10:20:44.052048 1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system | |
2019-06-03T10:20:44.093406608Z stderr F I0603 10:20:44.093023 1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system | |
2019-06-03T10:20:44.14928831Z stderr F W0603 10:20:44.149149 1 lease.go:222] Resetting endpoints for master service "kubernetes" to [172.17.0.4] | |
2019-06-03T10:20:44.959858098Z stderr F I0603 10:20:44.959740 1 controller.go:606] quota admission added evaluator for: serviceaccounts | |
2019-06-03T10:20:45.351010068Z stderr F I0603 10:20:45.350867 1 controller.go:606] quota admission added evaluator for: deployments.apps | |
2019-06-03T10:20:45.696297665Z stderr F I0603 10:20:45.696104 1 controller.go:606] quota admission added evaluator for: daemonsets.apps | |
2019-06-03T10:20:46.384336788Z stderr F I0603 10:20:46.384119 1 controller.go:606] quota admission added evaluator for: daemonsets.extensions | |
2019-06-03T10:20:47.452627241Z stderr F I0603 10:20:47.452427 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io | |
2019-06-03T10:21:00.870395657Z stderr F I0603 10:21:00.870275 1 controller.go:606] quota admission added evaluator for: replicasets.apps | |
2019-06-03T10:21:01.472721407Z stderr F I0603 10:21:01.472549 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps | |
2019-06-03T10:21:49.374650282Z stderr F I0603 10:21:49.374488 1 trace.go:81] Trace[806926521]: "GuaranteedUpdate etcd3: *apps.DaemonSet" (started: 2019-06-03 10:21:48.818982035 +0000 UTC m=+74.126503803) (total time: 555.446149ms): | |
2019-06-03T10:21:49.374701368Z stderr F Trace[806926521]: [555.296575ms] [554.770672ms] Transaction committed | |
2019-06-03T10:21:49.375691767Z stderr F I0603 10:21:49.375341 1 trace.go:81] Trace[158308343]: "Update /apis/apps/v1/namespaces/kube-system/daemonsets/kindnet/status" (started: 2019-06-03 10:21:48.818706369 +0000 UTC m=+74.126228123) (total time: 556.603821ms): | |
2019-06-03T10:21:49.375718802Z stderr F Trace[158308343]: [556.066951ms] [555.870773ms] Object stored in database | |
2019-06-03T10:21:49.381159531Z stderr F I0603 10:21:49.380979 1 trace.go:81] Trace[349882157]: "Get /api/v1/namespaces/kube-system/pods/coredns-fb8b8dccf-vpjb6" (started: 2019-06-03 10:21:48.873896065 +0000 UTC m=+74.181417832) (total time: 507.044572ms): | |
2019-06-03T10:21:49.381219656Z stderr F Trace[349882157]: [506.630358ms] [506.609229ms] About to write a response | |
2019-06-03T10:21:49.382302213Z stderr F I0603 10:21:49.382189 1 trace.go:81] Trace[1837854500]: "GuaranteedUpdate etcd3: *core.Event" (started: 2019-06-03 10:21:48.800248821 +0000 UTC m=+74.107770589) (total time: 581.911144ms): | |
2019-06-03T10:21:49.382334629Z stderr F Trace[1837854500]: [581.885491ms] [572.271825ms] Transaction committed | |
2019-06-03T10:21:49.383050263Z stderr F I0603 10:21:49.382941 1 trace.go:81] Trace[956650045]: "Patch /api/v1/namespaces/kube-system/events/kindnet-s2mkf.15a4a91495a5d6d3" (started: 2019-06-03 10:21:48.800112339 +0000 UTC m=+74.107634126) (total time: 582.788594ms): | |
2019-06-03T10:21:49.383070454Z stderr F Trace[956650045]: [582.101339ms] [572.513956ms] Object stored in database |
2019-06-03T10:20:35.334496279Z stderr F I0603 10:20:35.334249 1 serving.go:319] Generated self-signed cert in-memory
2019-06-03T10:20:36.133147143Z stderr F I0603 10:20:36.133006 1 controllermanager.go:155] Version: v1.14.2
2019-06-03T10:20:36.133734708Z stderr F I0603 10:20:36.133661 1 secure_serving.go:116] Serving securely on 127.0.0.1:10257
2019-06-03T10:20:36.134338154Z stderr F I0603 10:20:36.134277 1 deprecated_insecure_serving.go:51] Serving insecurely on [::]:10252
2019-06-03T10:20:36.13454932Z stderr F I0603 10:20:36.134509 1 leaderelection.go:217] attempting to acquire leader lease kube-system/kube-controller-manager...
2019-06-03T10:20:40.890006691Z stderr F E0603 10:20:40.889839 1 leaderelection.go:306] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "endpoints" in API group "" in the namespace "kube-system"
2019-06-03T10:20:44.746008762Z stderr F I0603 10:20:44.745847 1 leaderelection.go:227] successfully acquired lease kube-system/kube-controller-manager
2019-06-03T10:20:44.746482213Z stderr F I0603 10:20:44.746387 1 event.go:209] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"3f4da14c-85e9-11e9-a310-0242ac110004", APIVersion:"v1", ResourceVersion:"163", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kind-control-plane_3a2bf5f4-85e9-11e9-a11b-0242ac110004 became leader
2019-06-03T10:20:44.95205563Z stderr F I0603 10:20:44.951911 1 plugins.go:103] No cloud provider specified.
2019-06-03T10:20:44.954015812Z stderr F I0603 10:20:44.953902 1 controller_utils.go:1027] Waiting for caches to sync for tokens controller
2019-06-03T10:20:45.054235348Z stderr F I0603 10:20:45.054078 1 controller_utils.go:1034] Caches are synced for tokens controller
2019-06-03T10:20:45.068429838Z stderr F I0603 10:20:45.068226 1 controllermanager.go:497] Started "ttl"
2019-06-03T10:20:45.068509429Z stderr F I0603 10:20:45.068355 1 ttl_controller.go:116] Starting TTL controller
2019-06-03T10:20:45.068518278Z stderr F I0603 10:20:45.068379 1 controller_utils.go:1027] Waiting for caches to sync for TTL controller
2019-06-03T10:20:45.084959505Z stderr F I0603 10:20:45.084807 1 controllermanager.go:497] Started "bootstrapsigner"
2019-06-03T10:20:45.085233465Z stderr F I0603 10:20:45.085153 1 controller_utils.go:1027] Waiting for caches to sync for bootstrap_signer controller
2019-06-03T10:20:45.104085525Z stderr F I0603 10:20:45.103923 1 node_ipam_controller.go:99] Sending events to api server.
2019-06-03T10:20:55.113101397Z stderr F I0603 10:20:55.112947 1 range_allocator.go:78] Sending events to api server.
2019-06-03T10:20:55.113231897Z stderr F I0603 10:20:55.113156 1 range_allocator.go:99] No Service CIDR provided. Skipping filtering out service addresses.
2019-06-03T10:20:55.113266271Z stderr F I0603 10:20:55.113217 1 range_allocator.go:105] Node kind-control-plane has no CIDR, ignoring
2019-06-03T10:20:55.113346677Z stderr F I0603 10:20:55.113290 1 node_ipam_controller.go:167] Starting ipam controller
2019-06-03T10:20:55.113362865Z stderr F I0603 10:20:55.113332 1 controller_utils.go:1027] Waiting for caches to sync for node controller
2019-06-03T10:20:55.113411786Z stderr F I0603 10:20:55.113379 1 controllermanager.go:497] Started "nodeipam"
2019-06-03T10:20:55.113456047Z stderr F W0603 10:20:55.113422 1 core.go:175] configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes.
2019-06-03T10:20:55.113463848Z stderr F W0603 10:20:55.113433 1 controllermanager.go:489] Skipping "route"
2019-06-03T10:20:55.113476807Z stderr F W0603 10:20:55.113451 1 controllermanager.go:489] Skipping "root-ca-cert-publisher"
2019-06-03T10:20:55.335103486Z stderr F I0603 10:20:55.334951 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
2019-06-03T10:20:55.335157244Z stderr F I0603 10:20:55.335031 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for limitranges
2019-06-03T10:20:55.33519508Z stderr F I0603 10:20:55.335125 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.apps
2019-06-03T10:20:55.335214963Z stderr F I0603 10:20:55.335176 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for jobs.batch
2019-06-03T10:20:55.335311628Z stderr F W0603 10:20:55.335261 1 shared_informer.go:311] resyncPeriod 57250945460692 is smaller than resyncCheckPeriod 83228512835391 and the informer has already started. Changing it to 83228512835391
2019-06-03T10:20:55.33548656Z stderr F I0603 10:20:55.335421 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
2019-06-03T10:20:55.335496158Z stderr F W0603 10:20:55.335445 1 shared_informer.go:311] resyncPeriod 58644458508067 is smaller than resyncCheckPeriod 83228512835391 and the informer has already started. Changing it to 83228512835391
2019-06-03T10:20:55.335514409Z stderr F I0603 10:20:55.335475 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for serviceaccounts
2019-06-03T10:20:55.33557404Z stderr F I0603 10:20:55.335527 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
2019-06-03T10:20:55.335663098Z stderr F I0603 10:20:55.335622 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for podtemplates
2019-06-03T10:20:55.335762112Z stderr F I0603 10:20:55.335726 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.extensions
2019-06-03T10:20:55.335862481Z stderr F I0603 10:20:55.335823 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.apps
2019-06-03T10:20:55.335875155Z stderr F I0603 10:20:55.335859 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for events.events.k8s.io
2019-06-03T10:20:55.335942701Z stderr F I0603 10:20:55.335904 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpoints
2019-06-03T10:20:55.336046811Z stderr F I0603 10:20:55.336007 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for controllerrevisions.apps
2019-06-03T10:20:55.336110403Z stderr F I0603 10:20:55.336056 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
2019-06-03T10:20:55.336134777Z stderr F I0603 10:20:55.336104 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
2019-06-03T10:20:55.336186296Z stderr F I0603 10:20:55.336155 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy | |
2019-06-03T10:20:55.336257706Z stderr F I0603 10:20:55.336229 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io | |
2019-06-03T10:20:55.336319335Z stderr F I0603 10:20:55.336297 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.extensions | |
2019-06-03T10:20:55.336389184Z stderr F I0603 10:20:55.336356 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.apps | |
2019-06-03T10:20:55.336472473Z stderr F I0603 10:20:55.336436 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for statefulsets.apps | |
2019-06-03T10:20:55.336568731Z stderr F I0603 10:20:55.336507 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for cronjobs.batch | |
2019-06-03T10:20:55.336680792Z stderr F I0603 10:20:55.336604 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.extensions | |
2019-06-03T10:20:55.336693013Z stderr F I0603 10:20:55.336660 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.extensions | |
2019-06-03T10:20:55.336716789Z stderr F E0603 10:20:55.336681 1 resource_quota_controller.go:171] initial monitor sync has error: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies" | |
2019-06-03T10:20:55.336753163Z stderr F I0603 10:20:55.336710 1 controllermanager.go:497] Started "resourcequota" | |
2019-06-03T10:20:55.336825599Z stderr F I0603 10:20:55.336774 1 resource_quota_controller.go:276] Starting resource quota controller | |
2019-06-03T10:20:55.33690426Z stderr F I0603 10:20:55.336855 1 controller_utils.go:1027] Waiting for caches to sync for resource quota controller | |
2019-06-03T10:20:55.336973925Z stderr F I0603 10:20:55.336940 1 resource_quota_monitor.go:301] QuotaMonitor running | |
2019-06-03T10:20:55.361719817Z stderr F I0603 10:20:55.361554 1 controllermanager.go:497] Started "namespace" | |
2019-06-03T10:20:55.361977524Z stderr F I0603 10:20:55.361824 1 namespace_controller.go:186] Starting namespace controller | |
2019-06-03T10:20:55.362067639Z stderr F I0603 10:20:55.362025 1 controller_utils.go:1027] Waiting for caches to sync for namespace controller | |
2019-06-03T10:20:55.381013282Z stderr F I0603 10:20:55.380859 1 controllermanager.go:497] Started "csrsigning" | |
2019-06-03T10:20:55.38130914Z stderr F I0603 10:20:55.381129 1 certificate_controller.go:113] Starting certificate controller | |
2019-06-03T10:20:55.381425758Z stderr F I0603 10:20:55.381381 1 controller_utils.go:1027] Waiting for caches to sync for certificate controller | |
2019-06-03T10:20:55.387980092Z stderr F I0603 10:20:55.387870 1 controllermanager.go:497] Started "csrcleaner" | |
2019-06-03T10:20:55.388120509Z stderr F I0603 10:20:55.388052 1 cleaner.go:81] Starting CSR cleaner controller | |
2019-06-03T10:20:55.393636987Z stderr F I0603 10:20:55.393520 1 node_lifecycle_controller.go:292] Sending events to api server. | |
2019-06-03T10:20:55.394010566Z stderr F I0603 10:20:55.393941 1 node_lifecycle_controller.go:325] Controller is using taint based evictions. | |
2019-06-03T10:20:55.394030913Z stderr F I0603 10:20:55.394015 1 taint_manager.go:175] Sending events to api server. | |
2019-06-03T10:20:55.394423954Z stderr F I0603 10:20:55.394365 1 node_lifecycle_controller.go:390] Controller will reconcile labels. | |
2019-06-03T10:20:55.394539913Z stderr F I0603 10:20:55.394502 1 node_lifecycle_controller.go:403] Controller will taint node by condition. | |
2019-06-03T10:20:55.394633277Z stderr F I0603 10:20:55.394592 1 controllermanager.go:497] Started "nodelifecycle" | |
2019-06-03T10:20:55.394810984Z stderr F I0603 10:20:55.394651 1 node_lifecycle_controller.go:427] Starting node controller | |
2019-06-03T10:20:55.394946096Z stderr F I0603 10:20:55.394884 1 controller_utils.go:1027] Waiting for caches to sync for taint controller | |
2019-06-03T10:20:55.469900445Z stderr F I0603 10:20:55.469762 1 controllermanager.go:497] Started "endpoint" | |
2019-06-03T10:20:55.47014594Z stderr F I0603 10:20:55.470094 1 endpoints_controller.go:166] Starting endpoint controller | |
2019-06-03T10:20:55.470229475Z stderr F I0603 10:20:55.470190 1 controller_utils.go:1027] Waiting for caches to sync for endpoint controller | |
2019-06-03T10:20:55.493547382Z stderr F I0603 10:20:55.493401 1 controllermanager.go:497] Started "podgc" | |
2019-06-03T10:20:55.493907072Z stderr F I0603 10:20:55.493830 1 gc_controller.go:76] Starting GC controller | |
2019-06-03T10:20:55.498961657Z stderr F I0603 10:20:55.498850 1 controller_utils.go:1027] Waiting for caches to sync for GC controller | |
2019-06-03T10:20:55.517265895Z stderr F I0603 10:20:55.517114 1 controllermanager.go:497] Started "persistentvolume-binder" | |
2019-06-03T10:20:55.517339896Z stderr F I0603 10:20:55.517268 1 pv_controller_base.go:270] Starting persistent volume controller | |
2019-06-03T10:20:55.517476297Z stderr F I0603 10:20:55.517348 1 controller_utils.go:1027] Waiting for caches to sync for persistent volume controller | |
2019-06-03T10:20:55.666357212Z stderr F I0603 10:20:55.666208 1 controllermanager.go:497] Started "clusterrole-aggregation" | |
2019-06-03T10:20:55.666407936Z stderr F I0603 10:20:55.666271 1 clusterroleaggregation_controller.go:148] Starting ClusterRoleAggregator | |
2019-06-03T10:20:55.666415212Z stderr F I0603 10:20:55.666292 1 controller_utils.go:1027] Waiting for caches to sync for ClusterRoleAggregator controller | |
2019-06-03T10:20:56.365830242Z stderr F I0603 10:20:56.365654 1 controllermanager.go:497] Started "horizontalpodautoscaling" | |
2019-06-03T10:20:56.365929066Z stderr F I0603 10:20:56.365798 1 horizontal.go:156] Starting HPA controller | |
2019-06-03T10:20:56.365939778Z stderr F I0603 10:20:56.365825 1 controller_utils.go:1027] Waiting for caches to sync for HPA controller | |
2019-06-03T10:20:56.516198203Z stderr F E0603 10:20:56.516034 1 prometheus.go:138] failed to register depth metric certificate: duplicate metrics collector registration attempted | |
2019-06-03T10:20:56.51639273Z stderr F E0603 10:20:56.516331 1 prometheus.go:150] failed to register adds metric certificate: duplicate metrics collector registration attempted | |
2019-06-03T10:20:56.516443256Z stderr F E0603 10:20:56.516410 1 prometheus.go:162] failed to register latency metric certificate: duplicate metrics collector registration attempted | |
2019-06-03T10:20:56.516519065Z stderr F E0603 10:20:56.516489 1 prometheus.go:174] failed to register work_duration metric certificate: duplicate metrics collector registration attempted | |
2019-06-03T10:20:56.516583153Z stderr F E0603 10:20:56.516544 1 prometheus.go:189] failed to register unfinished_work_seconds metric certificate: duplicate metrics collector registration attempted | |
2019-06-03T10:20:56.516621339Z stderr F E0603 10:20:56.516588 1 prometheus.go:202] failed to register longest_running_processor_microseconds metric certificate: duplicate metrics collector registration attempted | |
2019-06-03T10:20:56.516706248Z stderr F E0603 10:20:56.516666 1 prometheus.go:214] failed to register retries metric certificate: duplicate metrics collector registration attempted | |
2019-06-03T10:20:56.516758555Z stderr F I0603 10:20:56.516730 1 controllermanager.go:497] Started "csrapproving" | |
2019-06-03T10:20:56.516836814Z stderr F I0603 10:20:56.516807 1 certificate_controller.go:113] Starting certificate controller | |
2019-06-03T10:20:56.516871832Z stderr F I0603 10:20:56.516851 1 controller_utils.go:1027] Waiting for caches to sync for certificate controller | |
2019-06-03T10:20:56.766332915Z stderr F W0603 10:20:56.766187 1 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. | |
2019-06-03T10:20:56.767117199Z stderr F I0603 10:20:56.766887 1 controllermanager.go:497] Started "attachdetach" | |
2019-06-03T10:20:56.767136677Z stderr F I0603 10:20:56.766961 1 attach_detach_controller.go:323] Starting attach detach controller | |
2019-06-03T10:20:56.767150017Z stderr F I0603 10:20:56.766969 1 controller_utils.go:1027] Waiting for caches to sync for attach detach controller | |
2019-06-03T10:20:57.016422465Z stderr F I0603 10:20:57.016219 1 controllermanager.go:497] Started "persistentvolume-expander" | |
2019-06-03T10:20:57.016484884Z stderr F I0603 10:20:57.016287 1 expand_controller.go:153] Starting expand controller | |
2019-06-03T10:20:57.016493912Z stderr F I0603 10:20:57.016309 1 controller_utils.go:1027] Waiting for caches to sync for expand controller | |
2019-06-03T10:20:57.268689644Z stderr F I0603 10:20:57.268532 1 controllermanager.go:497] Started "daemonset" | |
2019-06-03T10:20:57.268967295Z stderr F I0603 10:20:57.268870 1 daemon_controller.go:267] Starting daemon sets controller | |
2019-06-03T10:20:57.269071132Z stderr F I0603 10:20:57.269017 1 controller_utils.go:1027] Waiting for caches to sync for daemon sets controller | |
2019-06-03T10:20:57.516473731Z stderr F I0603 10:20:57.516330 1 controllermanager.go:497] Started "tokencleaner" | |
2019-06-03T10:20:57.516519001Z stderr F I0603 10:20:57.516405 1 tokencleaner.go:116] Starting token cleaner controller | |
2019-06-03T10:20:57.51657397Z stderr F I0603 10:20:57.516427 1 controller_utils.go:1027] Waiting for caches to sync for token_cleaner controller | |
2019-06-03T10:20:57.61670024Z stderr F I0603 10:20:57.616561 1 controller_utils.go:1034] Caches are synced for token_cleaner controller | |
2019-06-03T10:20:57.766808041Z stderr F I0603 10:20:57.766606 1 controllermanager.go:497] Started "job" | |
2019-06-03T10:20:57.766860723Z stderr F I0603 10:20:57.766672 1 job_controller.go:143] Starting job controller | |
2019-06-03T10:20:57.766867548Z stderr F I0603 10:20:57.766693 1 controller_utils.go:1027] Waiting for caches to sync for job controller | |
2019-06-03T10:20:58.01619691Z stderr F I0603 10:20:58.016065 1 controllermanager.go:497] Started "pvc-protection" | |
2019-06-03T10:20:58.016262718Z stderr F I0603 10:20:58.016156 1 pvc_protection_controller.go:99] Starting PVC protection controller | |
2019-06-03T10:20:58.016271618Z stderr F I0603 10:20:58.016180 1 controller_utils.go:1027] Waiting for caches to sync for PVC protection controller | |
2019-06-03T10:20:58.26634616Z stderr F I0603 10:20:58.266208 1 controllermanager.go:497] Started "pv-protection" | |
2019-06-03T10:20:58.266382227Z stderr F I0603 10:20:58.266279 1 pv_protection_controller.go:81] Starting PV protection controller | |
2019-06-03T10:20:58.266423724Z stderr F I0603 10:20:58.266308 1 controller_utils.go:1027] Waiting for caches to sync for PV protection controller | |
2019-06-03T10:20:58.51672898Z stderr F I0603 10:20:58.516594 1 controllermanager.go:497] Started "replicationcontroller" | |
2019-06-03T10:20:58.51680081Z stderr F I0603 10:20:58.516681 1 replica_set.go:182] Starting replicationcontroller controller | |
2019-06-03T10:20:58.516809242Z stderr F I0603 10:20:58.516706 1 controller_utils.go:1027] Waiting for caches to sync for ReplicationController controller | |
2019-06-03T10:20:58.766341656Z stderr F I0603 10:20:58.766184 1 controllermanager.go:497] Started "serviceaccount" | |
2019-06-03T10:20:58.76639286Z stderr F I0603 10:20:58.766243 1 serviceaccounts_controller.go:115] Starting service account controller | |
2019-06-03T10:20:58.766400369Z stderr F I0603 10:20:58.766266 1 controller_utils.go:1027] Waiting for caches to sync for service account controller | |
2019-06-03T10:20:59.017335848Z stderr F E0603 10:20:59.017128 1 core.go:77] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail | |
2019-06-03T10:20:59.017400702Z stderr F W0603 10:20:59.017160 1 controllermanager.go:489] Skipping "service" | |
2019-06-03T10:20:59.266556325Z stderr F I0603 10:20:59.266397 1 controllermanager.go:497] Started "disruption" | |
2019-06-03T10:20:59.266630675Z stderr F I0603 10:20:59.266477 1 disruption.go:286] Starting disruption controller | |
2019-06-03T10:20:59.266652338Z stderr F I0603 10:20:59.266500 1 controller_utils.go:1027] Waiting for caches to sync for disruption controller | |
2019-06-03T10:20:59.516814087Z stderr F I0603 10:20:59.516629 1 controllermanager.go:497] Started "statefulset" | |
2019-06-03T10:20:59.516887574Z stderr F I0603 10:20:59.516693 1 stateful_set.go:151] Starting stateful set controller | |
2019-06-03T10:20:59.516907362Z stderr F I0603 10:20:59.516715 1 controller_utils.go:1027] Waiting for caches to sync for stateful set controller | |
2019-06-03T10:21:00.326202144Z stderr F I0603 10:21:00.324662 1 garbagecollector.go:130] Starting garbage collector controller | |
2019-06-03T10:21:00.326251328Z stderr F I0603 10:21:00.324717 1 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller | |
2019-06-03T10:21:00.326258995Z stderr F I0603 10:21:00.324733 1 controllermanager.go:497] Started "garbagecollector" | |
2019-06-03T10:21:00.32628927Z stderr F I0603 10:21:00.324741 1 graph_builder.go:308] GraphBuilder running | |
2019-06-03T10:21:00.34886417Z stderr F I0603 10:21:00.348691 1 controllermanager.go:497] Started "replicaset" | |
2019-06-03T10:21:00.349165691Z stderr F I0603 10:21:00.349017 1 replica_set.go:182] Starting replicaset controller | |
2019-06-03T10:21:00.349184947Z stderr F I0603 10:21:00.349051 1 controller_utils.go:1027] Waiting for caches to sync for ReplicaSet controller | |
2019-06-03T10:21:00.355603206Z stderr F I0603 10:21:00.355452 1 node_lifecycle_controller.go:77] Sending events to api server | |
2019-06-03T10:21:00.355835286Z stderr F E0603 10:21:00.355767 1 core.go:161] failed to start cloud node lifecycle controller: no cloud provider provided | |
2019-06-03T10:21:00.355918776Z stderr F W0603 10:21:00.355882 1 controllermanager.go:489] Skipping "cloud-node-lifecycle" | |
2019-06-03T10:21:00.356038714Z stderr F W0603 10:21:00.355980 1 controllermanager.go:489] Skipping "ttl-after-finished" | |
2019-06-03T10:21:00.567530515Z stderr F I0603 10:21:00.567346 1 controllermanager.go:497] Started "deployment" | |
2019-06-03T10:21:00.567579613Z stderr F I0603 10:21:00.567416 1 deployment_controller.go:152] Starting deployment controller | |
2019-06-03T10:21:00.567588391Z stderr F I0603 10:21:00.567438 1 controller_utils.go:1027] Waiting for caches to sync for deployment controller | |
2019-06-03T10:21:00.816661792Z stderr F I0603 10:21:00.816482 1 controllermanager.go:497] Started "cronjob" | |
2019-06-03T10:21:00.818290423Z stderr F E0603 10:21:00.818207 1 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies" | |
2019-06-03T10:21:00.818325203Z stderr F I0603 10:21:00.818233 1 cronjob_controller.go:94] Starting CronJob Manager | |
2019-06-03T10:21:00.831218662Z stderr F W0603 10:21:00.831033 1 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="kind-control-plane" does not exist | |
2019-06-03T10:21:00.857393449Z stderr F I0603 10:21:00.857188 1 controller_utils.go:1034] Caches are synced for ReplicaSet controller | |
2019-06-03T10:21:00.868319575Z stderr F I0603 10:21:00.868002 1 controller_utils.go:1034] Caches are synced for deployment controller | |
2019-06-03T10:21:00.868677407Z stderr F I0603 10:21:00.868611 1 controller_utils.go:1034] Caches are synced for HPA controller | |
2019-06-03T10:21:00.868774975Z stderr F I0603 10:21:00.868002 1 controller_utils.go:1034] Caches are synced for namespace controller | |
2019-06-03T10:21:00.868975841Z stderr F I0603 10:21:00.868930 1 controller_utils.go:1034] Caches are synced for job controller | |
2019-06-03T10:21:00.870950512Z stderr F I0603 10:21:00.870870 1 controller_utils.go:1034] Caches are synced for service account controller | |
2019-06-03T10:21:00.871381163Z stderr F I0603 10:21:00.871315 1 controller_utils.go:1034] Caches are synced for disruption controller | |
2019-06-03T10:21:00.871471913Z stderr F I0603 10:21:00.871420 1 disruption.go:294] Sending events to api server. | |
2019-06-03T10:21:00.871621052Z stderr F I0603 10:21:00.871579 1 controller_utils.go:1034] Caches are synced for TTL controller | |
2019-06-03T10:21:00.880042596Z stderr F I0603 10:21:00.879853 1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"3faa3bf9-85e9-11e9-a310-0242ac110004", APIVersion:"apps/v1", ResourceVersion:"190", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-fb8b8dccf to 2 | |
2019-06-03T10:21:00.884393517Z stderr F I0603 10:21:00.884235 1 controller_utils.go:1034] Caches are synced for certificate controller | |
2019-06-03T10:21:00.886986178Z stderr F I0603 10:21:00.886799 1 controller_utils.go:1034] Caches are synced for bootstrap_signer controller | |
2019-06-03T10:21:00.899769509Z stderr F I0603 10:21:00.899606 1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-fb8b8dccf", UID:"48ea4ce4-85e9-11e9-a310-0242ac110004", APIVersion:"apps/v1", ResourceVersion:"328", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-fb8b8dccf-zslcr | |
2019-06-03T10:21:00.899876831Z stderr F I0603 10:21:00.899820 1 controller_utils.go:1034] Caches are synced for GC controller | |
2019-06-03T10:21:00.913646677Z stderr F I0603 10:21:00.913493 1 controller_utils.go:1034] Caches are synced for node controller | |
2019-06-03T10:21:00.913704552Z stderr F I0603 10:21:00.913529 1 range_allocator.go:157] Starting range CIDR allocator | |
2019-06-03T10:21:00.913710533Z stderr F I0603 10:21:00.913546 1 controller_utils.go:1027] Waiting for caches to sync for cidrallocator controller | |
2019-06-03T10:21:00.916943582Z stderr F I0603 10:21:00.916821 1 controller_utils.go:1034] Caches are synced for ReplicationController controller | |
2019-06-03T10:21:00.917653036Z stderr F I0603 10:21:00.917584 1 controller_utils.go:1034] Caches are synced for certificate controller | |
2019-06-03T10:21:00.918751076Z stderr F I0603 10:21:00.918621 1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-fb8b8dccf", UID:"48ea4ce4-85e9-11e9-a310-0242ac110004", APIVersion:"apps/v1", ResourceVersion:"328", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-fb8b8dccf-vpjb6 | |
2019-06-03T10:21:00.918879763Z stderr F I0603 10:21:00.918833 1 controller_utils.go:1034] Caches are synced for PVC protection controller | |
2019-06-03T10:21:00.96316042Z stderr F I0603 10:21:00.962977 1 log.go:172] [INFO] signed certificate with serial number 27792946466559472645016419288852042625230525076 | |
2019-06-03T10:21:01.013827794Z stderr F I0603 10:21:01.013660 1 controller_utils.go:1034] Caches are synced for cidrallocator controller | |
2019-06-03T10:21:01.024519263Z stderr F I0603 10:21:01.024368 1 range_allocator.go:310] Set node kind-control-plane PodCIDR to 10.244.0.0/24 | |
2019-06-03T10:21:01.195519649Z stderr F I0603 10:21:01.195327 1 controller_utils.go:1034] Caches are synced for taint controller | |
2019-06-03T10:21:01.195567409Z stderr F I0603 10:21:01.195432 1 taint_manager.go:198] Starting NoExecuteTaintManager | |
2019-06-03T10:21:01.196244722Z stderr F I0603 10:21:01.196158 1 node_lifecycle_controller.go:1159] Initializing eviction metric for zone: | |
2019-06-03T10:21:01.196338772Z stderr F W0603 10:21:01.196236 1 node_lifecycle_controller.go:833] Missing timestamp for Node kind-control-plane. Assuming now as a timestamp. | |
2019-06-03T10:21:01.196348839Z stderr F I0603 10:21:01.196301 1 node_lifecycle_controller.go:1009] Controller detected that all Nodes are not-Ready. Entering master disruption mode. | |
2019-06-03T10:21:01.196812384Z stderr F I0603 10:21:01.196676 1 event.go:209] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-control-plane", UID:"3d10399b-85e9-11e9-a310-0242ac110004", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node kind-control-plane event: Registered Node kind-control-plane in Controller | |
2019-06-03T10:21:01.27061235Z stderr F I0603 10:21:01.270396 1 controller_utils.go:1034] Caches are synced for endpoint controller | |
2019-06-03T10:21:01.417014919Z stderr F I0603 10:21:01.416890 1 controller_utils.go:1034] Caches are synced for stateful set controller | |
2019-06-03T10:21:01.469500601Z stderr F I0603 10:21:01.469365 1 controller_utils.go:1034] Caches are synced for daemon sets controller | |
2019-06-03T10:21:01.482823598Z stderr F I0603 10:21:01.482651 1 event.go:209] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"3fdeee70-85e9-11e9-a310-0242ac110004", APIVersion:"apps/v1", ResourceVersion:"200", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-ngzc6 | |
2019-06-03T10:21:01.504624204Z stderr F I0603 10:21:01.504411 1 event.go:209] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"4047e910-85e9-11e9-a310-0242ac110004", APIVersion:"apps/v1", ResourceVersion:"215", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-9fjgn | |
2019-06-03T10:21:01.517587233Z stderr F I0603 10:21:01.517474 1 controller_utils.go:1034] Caches are synced for expand controller | |
2019-06-03T10:21:01.517738698Z stderr F I0603 10:21:01.517698 1 controller_utils.go:1034] Caches are synced for persistent volume controller | |
2019-06-03T10:21:01.539433131Z stderr F I0603 10:21:01.539277 1 event.go:209] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"ip-masq-agent", UID:"404b9006-85e9-11e9-a310-0242ac110004", APIVersion:"apps/v1", ResourceVersion:"221", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ip-masq-agent-d84kg | |
2019-06-03T10:21:01.56700477Z stderr F I0603 10:21:01.566881 1 controller_utils.go:1034] Caches are synced for ClusterRoleAggregator controller | |
2019-06-03T10:21:01.567520905Z stderr F E0603 10:21:01.567415 1 daemon_controller.go:302] kube-system/ip-masq-agent failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-masq-agent", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/ip-masq-agent", UID:"404b9006-85e9-11e9-a310-0242ac110004", ResourceVersion:"221", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63695154046, loc:(*time.Location)(0x724ce00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"ip-masq-agent", "k8s-app":"ip-masq-agent", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001e0e920), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"ip-masq-agent", "k8s-app":"ip-masq-agent", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"config", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), 
Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001d77740), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"ip-masq-agent", Image:"k8s.gcr.io/ip-masq-agent:v2.4.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"config", ReadOnly:false, MountPath:"/etc/config", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001d65c70), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001e1c608), 
ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"ip-masq-agent", DeprecatedServiceAccount:"ip-masq-agent", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001d7f3e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"OnDelete", RollingUpdate:(*v1.RollingUpdateDaemonSet)(nil)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001e1c658)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "ip-masq-agent": the object has been modified; please apply your changes to the latest version and try again | |
2019-06-03T10:21:01.567998358Z stderr F I0603 10:21:01.567102 1 controller_utils.go:1034] Caches are synced for PV protection controller
2019-06-03T10:21:01.569011021Z stderr F I0603 10:21:01.567163 1 controller_utils.go:1034] Caches are synced for attach detach controller
2019-06-03T10:21:01.625099033Z stderr F I0603 10:21:01.624949 1 controller_utils.go:1034] Caches are synced for garbage collector controller
2019-06-03T10:21:01.625227867Z stderr F I0603 10:21:01.625037 1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
2019-06-03T10:21:01.637328835Z stderr F I0603 10:21:01.637164 1 controller_utils.go:1034] Caches are synced for resource quota controller
2019-06-03T10:21:01.817735159Z stderr F I0603 10:21:01.817562 1 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
2019-06-03T10:21:01.919274941Z stderr F I0603 10:21:01.919130 1 controller_utils.go:1034] Caches are synced for garbage collector controller
2019-06-03T10:21:03.484003606Z stderr F I0603 10:21:03.465994 1 log.go:172] [INFO] signed certificate with serial number 439073225066011088510789256988120411274886758415
2019-06-03T10:21:03.962462733Z stderr F I0603 10:21:03.959947 1 log.go:172] [INFO] signed certificate with serial number 100419201628949707653524756369728210732612034436
2019-06-03T10:21:16.011422876Z stderr F W0603 10:21:16.011272 1 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="kind-worker" does not exist
2019-06-03T10:21:16.032775393Z stderr F I0603 10:21:16.032630 1 event.go:209] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"ip-masq-agent", UID:"404b9006-85e9-11e9-a310-0242ac110004", APIVersion:"apps/v1", ResourceVersion:"413", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ip-masq-agent-j4ssk
2019-06-03T10:21:16.03309593Z stderr F I0603 10:21:16.033021 1 range_allocator.go:310] Set node kind-worker PodCIDR to 10.244.1.0/24
2019-06-03T10:21:16.038375979Z stderr F I0603 10:21:16.038214 1 event.go:209] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"3fdeee70-85e9-11e9-a310-0242ac110004", APIVersion:"apps/v1", ResourceVersion:"415", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-kp2s9
2019-06-03T10:21:16.075527905Z stderr F I0603 10:21:16.075370 1 event.go:209] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"4047e910-85e9-11e9-a310-0242ac110004", APIVersion:"apps/v1", ResourceVersion:"411", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-s2mkf
2019-06-03T10:21:16.197197426Z stderr F W0603 10:21:16.196989 1 node_lifecycle_controller.go:833] Missing timestamp for Node kind-worker. Assuming now as a timestamp.
2019-06-03T10:21:16.197500423Z stderr F I0603 10:21:16.197394 1 event.go:209] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"51efa4cf-85e9-11e9-a310-0242ac110004", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node kind-worker event: Registered Node kind-worker in Controller
2019-06-03T10:21:16.654677566Z stderr F W0603 10:21:16.654528 1 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="kind-worker2" does not exist
2019-06-03T10:21:16.696666592Z stderr F I0603 10:21:16.696504 1 event.go:209] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"3fdeee70-85e9-11e9-a310-0242ac110004", APIVersion:"apps/v1", ResourceVersion:"449", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-cgpvl
2019-06-03T10:21:16.705566913Z stderr F I0603 10:21:16.705373 1 event.go:209] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"4047e910-85e9-11e9-a310-0242ac110004", APIVersion:"apps/v1", ResourceVersion:"448", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-zj8wn
2019-06-03T10:21:16.780717395Z stderr F I0603 10:21:16.780529 1 range_allocator.go:310] Set node kind-worker2 PodCIDR to 10.244.2.0/24
2019-06-03T10:21:16.839383722Z stderr F I0603 10:21:16.839225 1 event.go:209] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"ip-masq-agent", UID:"404b9006-85e9-11e9-a310-0242ac110004", APIVersion:"apps/v1", ResourceVersion:"438", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ip-masq-agent-pfrg8
2019-06-03T10:21:20.900047329Z stderr F I0603 10:21:20.899880 1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello", UID:"54d857d5-85e9-11e9-a310-0242ac110004", APIVersion:"apps/v1", ResourceVersion:"528", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-6d6586c69c to 1
2019-06-03T10:21:20.924053447Z stderr F I0603 10:21:20.923908 1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-6d6586c69c", UID:"54d92fe2-85e9-11e9-a310-0242ac110004", APIVersion:"apps/v1", ResourceVersion:"529", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-6d6586c69c-vvqlx
2019-06-03T10:21:21.197534106Z stderr F W0603 10:21:21.197369 1 node_lifecycle_controller.go:833] Missing timestamp for Node kind-worker2. Assuming now as a timestamp.
2019-06-03T10:21:21.197762631Z stderr F I0603 10:21:21.197680 1 event.go:209] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker2", UID:"524e0cf2-85e9-11e9-a310-0242ac110004", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node kind-worker2 event: Registered Node kind-worker2 in Controller
2019-06-03T10:21:41.199214242Z stderr F I0603 10:21:41.199003 1 node_lifecycle_controller.go:1036] Controller detected that some Nodes are Ready. Exiting master disruption mode.
2019-06-03T10:21:20.193633245Z stderr F W0603 10:21:20.190058 1 server_others.go:267] Flag proxy-mode="" unknown, assuming iptables proxy
2019-06-03T10:21:20.206113161Z stderr F I0603 10:21:20.205643 1 server_others.go:146] Using iptables Proxier.
2019-06-03T10:21:20.233053222Z stderr F I0603 10:21:20.206425 1 server.go:562] Version: v1.14.2
2019-06-03T10:21:20.247782786Z stderr F I0603 10:21:20.245930 1 conntrack.go:52] Setting nf_conntrack_max to 131072
2019-06-03T10:21:20.247849396Z stderr F I0603 10:21:20.246013 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
2019-06-03T10:21:20.247856749Z stderr F I0603 10:21:20.246060 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
2019-06-03T10:21:20.247861924Z stderr F I0603 10:21:20.246196 1 config.go:102] Starting endpoints config controller
2019-06-03T10:21:20.247869057Z stderr F I0603 10:21:20.246228 1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
2019-06-03T10:21:20.247873969Z stderr F I0603 10:21:20.246247 1 config.go:202] Starting service config controller
2019-06-03T10:21:20.247909893Z stderr F I0603 10:21:20.246264 1 controller_utils.go:1027] Waiting for caches to sync for service config controller
2019-06-03T10:21:20.348477127Z stderr F I0603 10:21:20.346453 1 controller_utils.go:1034] Caches are synced for endpoints config controller
2019-06-03T10:21:20.348513967Z stderr F I0603 10:21:20.346550 1 controller_utils.go:1034] Caches are synced for service config controller
2019-06-03T10:21:19.345806693Z stderr F W0603 10:21:19.345171 1 server_others.go:267] Flag proxy-mode="" unknown, assuming iptables proxy
2019-06-03T10:21:19.36198005Z stderr F I0603 10:21:19.360846 1 server_others.go:146] Using iptables Proxier.
2019-06-03T10:21:19.362014967Z stderr F I0603 10:21:19.361135 1 server.go:562] Version: v1.14.2
2019-06-03T10:21:19.425219131Z stderr F I0603 10:21:19.412619 1 conntrack.go:52] Setting nf_conntrack_max to 131072
2019-06-03T10:21:19.443569868Z stderr F I0603 10:21:19.425595 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
2019-06-03T10:21:19.44388503Z stderr F I0603 10:21:19.443803 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
2019-06-03T10:21:19.444329291Z stderr F I0603 10:21:19.444126 1 config.go:202] Starting service config controller
2019-06-03T10:21:19.444419706Z stderr F I0603 10:21:19.444363 1 controller_utils.go:1027] Waiting for caches to sync for service config controller
2019-06-03T10:21:19.444690498Z stderr F I0603 10:21:19.444605 1 config.go:102] Starting endpoints config controller
2019-06-03T10:21:19.444785541Z stderr F I0603 10:21:19.444751 1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
2019-06-03T10:21:19.544755008Z stderr F I0603 10:21:19.544600 1 controller_utils.go:1034] Caches are synced for service config controller
2019-06-03T10:21:19.544994799Z stderr F I0603 10:21:19.544935 1 controller_utils.go:1034] Caches are synced for endpoints config controller
2019-06-03T10:21:04.30328681Z stderr F W0603 10:21:04.303116 1 server_others.go:267] Flag proxy-mode="" unknown, assuming iptables proxy
2019-06-03T10:21:04.315965015Z stderr F I0603 10:21:04.315833 1 server_others.go:146] Using iptables Proxier.
2019-06-03T10:21:04.31639797Z stderr F I0603 10:21:04.316330 1 server.go:562] Version: v1.14.2
2019-06-03T10:21:04.350199451Z stderr F I0603 10:21:04.350062 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
2019-06-03T10:21:04.350242807Z stderr F I0603 10:21:04.350100 1 conntrack.go:52] Setting nf_conntrack_max to 131072
2019-06-03T10:21:04.350305386Z stderr F I0603 10:21:04.350143 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
2019-06-03T10:21:04.350314602Z stderr F I0603 10:21:04.350176 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
2019-06-03T10:21:04.350350067Z stderr F I0603 10:21:04.350279 1 config.go:102] Starting endpoints config controller
2019-06-03T10:21:04.350356095Z stderr F I0603 10:21:04.350307 1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
2019-06-03T10:21:04.350394045Z stderr F I0603 10:21:04.350341 1 config.go:202] Starting service config controller
2019-06-03T10:21:04.350400959Z stderr F I0603 10:21:04.350359 1 controller_utils.go:1027] Waiting for caches to sync for service config controller
2019-06-03T10:21:04.451925876Z stderr F I0603 10:21:04.451763 1 controller_utils.go:1034] Caches are synced for service config controller
2019-06-03T10:21:04.452019886Z stderr F I0603 10:21:04.451763 1 controller_utils.go:1034] Caches are synced for endpoints config controller
2019-06-03T10:20:33.64121379Z stderr F I0603 10:20:33.625023 1 serving.go:319] Generated self-signed cert in-memory
2019-06-03T10:20:35.825952674Z stderr F W0603 10:20:35.825830 1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
2019-06-03T10:20:35.826062577Z stderr F W0603 10:20:35.826032 1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
2019-06-03T10:20:35.826146514Z stderr F W0603 10:20:35.826115 1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
2019-06-03T10:20:35.829719719Z stderr F I0603 10:20:35.829627 1 server.go:142] Version: v1.14.2
2019-06-03T10:20:35.829872876Z stderr F I0603 10:20:35.829826 1 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
2019-06-03T10:20:35.83493791Z stderr F W0603 10:20:35.834846 1 authorization.go:47] Authorization is disabled
2019-06-03T10:20:35.8350302Z stderr F W0603 10:20:35.835001 1 authentication.go:55] Authentication is disabled
2019-06-03T10:20:35.83511576Z stderr F I0603 10:20:35.835078 1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251
2019-06-03T10:20:35.835837409Z stderr F I0603 10:20:35.835792 1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
2019-06-03T10:20:40.868338857Z stderr F E0603 10:20:40.868023 1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
2019-06-03T10:20:40.869810964Z stderr F E0603 10:20:40.869443 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
2019-06-03T10:20:40.927236067Z stderr F E0603 10:20:40.927042 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
2019-06-03T10:20:40.927481364Z stderr F E0603 10:20:40.927425 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
2019-06-03T10:20:40.927790963Z stderr F E0603 10:20:40.927722 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
2019-06-03T10:20:40.927972268Z stderr F E0603 10:20:40.927931 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
2019-06-03T10:20:40.928246688Z stderr F E0603 10:20:40.928184 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
2019-06-03T10:20:40.928471383Z stderr F E0603 10:20:40.928410 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
2019-06-03T10:20:40.951402707Z stderr F E0603 10:20:40.951260 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
2019-06-03T10:20:40.951659036Z stderr F E0603 10:20:40.951536 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
2019-06-03T10:20:41.870307886Z stderr F E0603 10:20:41.870194 1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
2019-06-03T10:20:41.874171091Z stderr F E0603 10:20:41.874039 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
2019-06-03T10:20:41.928906223Z stderr F E0603 10:20:41.928789 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
2019-06-03T10:20:41.92965478Z stderr F E0603 10:20:41.929566 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
2019-06-03T10:20:41.932864958Z stderr F E0603 10:20:41.932784 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
2019-06-03T10:20:41.939546413Z stderr F E0603 10:20:41.939424 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
2019-06-03T10:20:41.940454924Z stderr F E0603 10:20:41.940357 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
2019-06-03T10:20:41.941774988Z stderr F E0603 10:20:41.941683 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
2019-06-03T10:20:41.95324795Z stderr F E0603 10:20:41.953119 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
2019-06-03T10:20:41.956109872Z stderr F E0603 10:20:41.955924 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
2019-06-03T10:20:43.837509072Z stderr F I0603 10:20:43.837342 1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
2019-06-03T10:20:43.937647804Z stderr F I0603 10:20:43.937547 1 controller_utils.go:1034] Caches are synced for scheduler controller
2019-06-03T10:20:43.937835773Z stderr F I0603 10:20:43.937777 1 leaderelection.go:217] attempting to acquire leader lease kube-system/kube-scheduler...
2019-06-03T10:20:43.942923014Z stderr F I0603 10:20:43.942830 1 leaderelection.go:227] successfully acquired lease kube-system/kube-scheduler
-- Logs begin at Mon 2019-06-03 10:20:11 UTC, end at Mon 2019-06-03 10:22:03 UTC. --
Jun 03 10:20:11 kind-worker systemd[1]: Started kubelet: The Kubernetes Node Agent.
Jun 03 10:20:11 kind-worker kubelet[46]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 03 10:20:11 kind-worker kubelet[46]: F0603 10:20:11.549405 46 server.go:193] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
Jun 03 10:20:11 kind-worker systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
Jun 03 10:20:11 kind-worker systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 03 10:20:21 kind-worker systemd[1]: kubelet.service: Service RestartSec=10s expired, scheduling restart.
Jun 03 10:20:21 kind-worker systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jun 03 10:20:21 kind-worker systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Jun 03 10:20:21 kind-worker systemd[1]: Started kubelet: The Kubernetes Node Agent.
Jun 03 10:20:21 kind-worker kubelet[67]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 03 10:20:21 kind-worker kubelet[67]: F0603 10:20:21.831682 67 server.go:193] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
Jun 03 10:20:21 kind-worker systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
Jun 03 10:20:21 kind-worker systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 03 10:20:32 kind-worker systemd[1]: kubelet.service: Service RestartSec=10s expired, scheduling restart.
Jun 03 10:20:32 kind-worker systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jun 03 10:20:32 kind-worker systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Jun 03 10:20:32 kind-worker systemd[1]: Started kubelet: The Kubernetes Node Agent.
Jun 03 10:20:32 kind-worker kubelet[75]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 03 10:20:32 kind-worker kubelet[75]: F0603 10:20:32.119910 75 server.go:193] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
Jun 03 10:20:32 kind-worker systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
Jun 03 10:20:32 kind-worker systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 03 10:20:42 kind-worker systemd[1]: kubelet.service: Service RestartSec=10s expired, scheduling restart.
Jun 03 10:20:42 kind-worker systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jun 03 10:20:42 kind-worker systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Jun 03 10:20:42 kind-worker systemd[1]: Started kubelet: The Kubernetes Node Agent.
Jun 03 10:20:42 kind-worker kubelet[83]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 03 10:20:42 kind-worker kubelet[83]: F0603 10:20:42.354467 83 server.go:193] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
Jun 03 10:20:42 kind-worker systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
Jun 03 10:20:42 kind-worker systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 03 10:20:52 kind-worker systemd[1]: kubelet.service: Service RestartSec=10s expired, scheduling restart.
Jun 03 10:20:52 kind-worker systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jun 03 10:20:52 kind-worker systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Jun 03 10:20:52 kind-worker systemd[1]: Started kubelet: The Kubernetes Node Agent.
Jun 03 10:20:52 kind-worker kubelet[122]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 03 10:20:52 kind-worker kubelet[122]: F0603 10:20:52.577058 122 server.go:193] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
Jun 03 10:20:52 kind-worker systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
Jun 03 10:20:52 kind-worker systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 03 10:21:02 kind-worker systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Jun 03 10:21:02 kind-worker systemd[1]: Started kubelet: The Kubernetes Node Agent.
Jun 03 10:21:02 kind-worker kubelet[156]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 03 10:21:02 kind-worker kubelet[156]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.097582 156 server.go:417] Version: v1.14.2
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.097852 156 plugins.go:103] No cloud provider specified.
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.097877 156 server.go:754] Client rotation is on, will bootstrap in background
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.124382 156 server.go:625] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.124875 156 container_manager_linux.go:261] container manager verified user specified cgroup-root exists: []
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.124968 156 container_manager_linux.go:266] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms}
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.125094 156 container_manager_linux.go:286] Creating device plugin manager: true
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.125269 156 state_mem.go:36] [cpumanager] initializing new in-memory state store
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.133707 156 kubelet.go:279] Adding pod path: /etc/kubernetes/manifests
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.133958 156 kubelet.go:304] Watching apiserver
Jun 03 10:21:03 kind-worker kubelet[156]: W0603 10:21:03.182502 156 util_unix.go:77] Using "/run/containerd/containerd.sock" as endpoint is deprecated, please consider using full url format "unix:///run/containerd/containerd.sock".
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.186862 156 remote_runtime.go:62] parsed scheme: ""
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.187078 156 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
Jun 03 10:21:03 kind-worker kubelet[156]: W0603 10:21:03.187176 156 util_unix.go:77] Using "/run/containerd/containerd.sock" as endpoint is deprecated, please consider using full url format "unix:///run/containerd/containerd.sock".
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.187254 156 remote_image.go:50] parsed scheme: ""
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.187317 156 remote_image.go:50] scheme "" not registered, fallback to default scheme
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.187582 156 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{/run/containerd/containerd.sock 0 <nil>}]
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.187651 156 clientconn.go:796] ClientConn switching balancer to "pick_first"
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.187765 156 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc000232900, CONNECTING
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.188406 156 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc000232900, READY
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.188542 156 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{/run/containerd/containerd.sock 0 <nil>}]
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.188605 156 clientconn.go:796] ClientConn switching balancer to "pick_first"
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.188698 156 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc000232a90, CONNECTING
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.189215 156 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc000232a90, READY
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.190833 156 kuberuntime_manager.go:210] Container runtime containerd initialized, version: 1.2.6-0ubuntu1, apiVersion: v1alpha2
Jun 03 10:21:03 kind-worker kubelet[156]: W0603 10:21:03.191373 156 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.192001 156 server.go:1037] Started kubelet
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.196382 156 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.196545 156 status_manager.go:152] Starting to sync pod status with apiserver
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.196619 156 kubelet.go:1806] Starting kubelet main sync loop.
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.196694 156 kubelet.go:1823] skipping pod synchronization - [container runtime status check may not have completed yet., PLEG is not healthy: pleg has yet to be successful.]
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.196882 156 server.go:141] Starting to listen on 0.0.0.0:10250
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.199137 156 server.go:343] Adding debug handlers to kubelet server.
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.202685 156 volume_manager.go:248] Starting Kubelet Volume Manager
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.212410 156 desired_state_of_world_populator.go:130] Desired state populator starts to run
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.214391 156 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.218329 156 clientconn.go:440] parsed scheme: "unix"
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.218456 156 clientconn.go:440] scheme "unix" not registered, fallback to default scheme
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.218565 156 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{unix:///run/containerd/containerd.sock 0 <nil>}]
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.218645 156 clientconn.go:796] ClientConn switching balancer to "pick_first"
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.234560 156 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc0007953a0, CONNECTING
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.235083 156 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc0007953a0, READY
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.310544 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.319930 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.320753 156 kubelet.go:1823] skipping pod synchronization - container runtime status check may not have completed yet.
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.320955 156 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.321755 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.375905 156 kubelet_node_status.go:72] Attempting to register node kind-worker
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.377119 156 controller.go:115] failed to ensure node lease exists, will retry in 200ms, error: leases.coordination.k8s.io "kind-worker" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.381945 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.382177 156 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.382341 156 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope | |
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.386001 156 kubelet_node_status.go:94] Unable to register node "kind-worker" with API server: nodes is forbidden: User "system:anonymous" cannot create resource "nodes" in API group "" at the cluster scope | |
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.386747 156 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a9116770308f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03cb715a8f, ext:781637649, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03cb715a8f, ext:781637649, loc:(*time.Location)(0x8018900)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.397122 156 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach | |
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.408320 156 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a9117265a2f5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node kind-worker status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d666ccf5, ext:965495066, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d666ccf5, ext:965495066, loc:(*time.Location)(0x8018900)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.415475 156 cpu_manager.go:155] [cpumanager] starting with none policy | |
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.415721 156 cpu_manager.go:156] [cpumanager] reconciling every 10s | |
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.415795 156 policy_none.go:42] [cpumanager] none policy: Start | |
Jun 03 10:21:03 kind-worker kubelet[156]: W0603 10:21:03.466521 156 manager.go:538] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found | |
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.478066 156 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a9117265cf1a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node kind-worker status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d666f91a, ext:965506368, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d666f91a, ext:965506368, loc:(*time.Location)(0x8018900)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.480200 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.480360 156 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get node info: node "kind-worker" not found | |
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.491793 156 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a9117265e8b5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node kind-worker status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d66712b5, ext:965512924, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d66712b5, ext:965512924, loc:(*time.Location)(0x8018900)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.494660 156 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a9117265e8b5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node kind-worker status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d66712b5, ext:965512924, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d8c2ff2a, ext:1005091659, loc:(*time.Location)(0x8018900)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a9117265e8b5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.497631 156 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a9117265a2f5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node kind-worker status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d666ccf5, ext:965495066, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d8c2ac48, ext:1005070452, loc:(*time.Location)(0x8018900)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a9117265a2f5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.498930 156 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a9117265cf1a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node kind-worker status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d666f91a, ext:965506368, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d8c2e756, ext:1005085564, loc:(*time.Location)(0x8018900)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a9117265cf1a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.500004 156 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a9117843f532", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03dc451f32, ext:1063951205, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03dc451f32, ext:1063951205, loc:(*time.Location)(0x8018900)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.578888 156 controller.go:115] failed to ensure node lease exists, will retry in 400ms, error: leases.coordination.k8s.io "kind-worker" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" | |
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.580557 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.586669 156 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach | |
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.587877 156 kubelet_node_status.go:72] Attempting to register node kind-worker | |
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.590217 156 kubelet_node_status.go:94] Unable to register node "kind-worker" with API server: nodes is forbidden: User "system:anonymous" cannot create resource "nodes" in API group "" at the cluster scope | |
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.590481 156 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a9117265a2f5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node kind-worker status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d666ccf5, ext:965495066, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03e309baa2, ext:1177499338, loc:(*time.Location)(0x8018900)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a9117265a2f5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.591718 156 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a9117265e8b5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node kind-worker status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d66712b5, ext:965512924, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03e309ed92, ext:1177512375, loc:(*time.Location)(0x8018900)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a9117265e8b5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.593533 156 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a9117265cf1a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node kind-worker status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d666f91a, ext:965506368, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03e309da03, ext:1177507368, loc:(*time.Location)(0x8018900)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a9117265cf1a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.680828 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.781554 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.881724 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.981947 156 controller.go:115] failed to ensure node lease exists, will retry in 800ms, error: leases.coordination.k8s.io "kind-worker" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" | |
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.982279 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.990458 156 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach | |
Jun 03 10:21:03 kind-worker kubelet[156]: I0603 10:21:03.992600 156 kubelet_node_status.go:72] Attempting to register node kind-worker | |
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.993963 156 kubelet_node_status.go:94] Unable to register node "kind-worker" with API server: nodes is forbidden: User "system:anonymous" cannot create resource "nodes" in API group "" at the cluster scope | |
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.994053 156 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a9117265a2f5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node kind-worker status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d666ccf5, ext:965495066, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03fb1bb45e, ext:1581330572, loc:(*time.Location)(0x8018900)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a9117265a2f5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.995132 156 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a9117265cf1a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node kind-worker status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d666f91a, ext:965506368, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03fb295419, ext:1582223411, loc:(*time.Location)(0x8018900)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a9117265cf1a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 10:21:03 kind-worker kubelet[156]: E0603 10:21:03.996810 156 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a9117265e8b5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node kind-worker status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d66712b5, ext:965512924, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03fb297e2c, ext:1582234192, loc:(*time.Location)(0x8018900)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a9117265e8b5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 10:21:04 kind-worker kubelet[156]: E0603 10:21:04.083553 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:04 kind-worker kubelet[156]: E0603 10:21:04.183917 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:04 kind-worker kubelet[156]: E0603 10:21:04.284107 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:04 kind-worker kubelet[156]: E0603 10:21:04.313716 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope | |
Jun 03 10:21:04 kind-worker kubelet[156]: E0603 10:21:04.321335 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope | |
Jun 03 10:21:04 kind-worker kubelet[156]: E0603 10:21:04.383774 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope | |
Jun 03 10:21:04 kind-worker kubelet[156]: E0603 10:21:04.384262 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:04 kind-worker kubelet[156]: E0603 10:21:04.387787 156 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope | |
Jun 03 10:21:04 kind-worker kubelet[156]: E0603 10:21:04.398665 156 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope | |
Jun 03 10:21:04 kind-worker kubelet[156]: E0603 10:21:04.484551 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:04 kind-worker kubelet[156]: E0603 10:21:04.584750 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:04 kind-worker kubelet[156]: E0603 10:21:04.684923 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:04 kind-worker kubelet[156]: E0603 10:21:04.783434 156 controller.go:115] failed to ensure node lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "kind-worker" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" | |
Jun 03 10:21:04 kind-worker kubelet[156]: E0603 10:21:04.785096 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:04 kind-worker kubelet[156]: I0603 10:21:04.794137 156 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach | |
Jun 03 10:21:04 kind-worker kubelet[156]: I0603 10:21:04.795283 156 kubelet_node_status.go:72] Attempting to register node kind-worker | |
Jun 03 10:21:04 kind-worker kubelet[156]: E0603 10:21:04.796423 156 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a9117265a2f5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node kind-worker status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d666ccf5, ext:965495066, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355b042f666490, ext:2384898740, loc:(*time.Location)(0x8018900)}}, Count:5, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a9117265a2f5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 10:21:04 kind-worker kubelet[156]: E0603 10:21:04.796860 156 kubelet_node_status.go:94] Unable to register node "kind-worker" with API server: nodes is forbidden: User "system:anonymous" cannot create resource "nodes" in API group "" at the cluster scope | |
Jun 03 10:21:04 kind-worker kubelet[156]: E0603 10:21:04.797348 156 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a9117265cf1a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node kind-worker status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d666f91a, ext:965506368, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355b042f669c9b, ext:2384913087, loc:(*time.Location)(0x8018900)}}, Count:5, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a9117265cf1a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 10:21:04 kind-worker kubelet[156]: E0603 10:21:04.799176 156 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a9117265e8b5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node kind-worker status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d66712b5, ext:965512924, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355b042f66b161, ext:2384918400, loc:(*time.Location)(0x8018900)}}, Count:5, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a9117265e8b5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 10:21:04 kind-worker kubelet[156]: E0603 10:21:04.885321 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:04 kind-worker kubelet[156]: E0603 10:21:04.985488 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:05 kind-worker kubelet[156]: E0603 10:21:05.085737 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:05 kind-worker kubelet[156]: E0603 10:21:05.185906 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:05 kind-worker kubelet[156]: E0603 10:21:05.286113 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:05 kind-worker kubelet[156]: E0603 10:21:05.315717 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope | |
Jun 03 10:21:05 kind-worker kubelet[156]: E0603 10:21:05.322562 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope | |
Jun 03 10:21:05 kind-worker kubelet[156]: E0603 10:21:05.385322 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope | |
Jun 03 10:21:05 kind-worker kubelet[156]: E0603 10:21:05.386266 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:05 kind-worker kubelet[156]: E0603 10:21:05.388681 156 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope | |
Jun 03 10:21:05 kind-worker kubelet[156]: E0603 10:21:05.399681 156 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope | |
Jun 03 10:21:05 kind-worker kubelet[156]: E0603 10:21:05.486428 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:05 kind-worker kubelet[156]: E0603 10:21:05.586587 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:05 kind-worker kubelet[156]: E0603 10:21:05.686788 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:05 kind-worker kubelet[156]: E0603 10:21:05.786966 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:05 kind-worker kubelet[156]: E0603 10:21:05.887249 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:05 kind-worker kubelet[156]: E0603 10:21:05.987412 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:06 kind-worker kubelet[156]: E0603 10:21:06.087599 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:06 kind-worker kubelet[156]: E0603 10:21:06.188021 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:06 kind-worker kubelet[156]: E0603 10:21:06.288214 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:06 kind-worker kubelet[156]: E0603 10:21:06.317308 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope | |
Jun 03 10:21:06 kind-worker kubelet[156]: E0603 10:21:06.323585 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope | |
Jun 03 10:21:06 kind-worker kubelet[156]: E0603 10:21:06.390929 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:06 kind-worker kubelet[156]: E0603 10:21:06.391095 156 controller.go:115] failed to ensure node lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "kind-worker" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" | |
Jun 03 10:21:06 kind-worker kubelet[156]: E0603 10:21:06.391183 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope | |
Jun 03 10:21:06 kind-worker kubelet[156]: E0603 10:21:06.392449 156 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope | |
Jun 03 10:21:06 kind-worker kubelet[156]: I0603 10:21:06.397411 156 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach | |
Jun 03 10:21:06 kind-worker kubelet[156]: I0603 10:21:06.398449 156 kubelet_node_status.go:72] Attempting to register node kind-worker | |
Jun 03 10:21:06 kind-worker kubelet[156]: E0603 10:21:06.399383 156 kubelet_node_status.go:94] Unable to register node "kind-worker" with API server: nodes is forbidden: User "system:anonymous" cannot create resource "nodes" in API group "" at the cluster scope | |
Jun 03 10:21:06 kind-worker kubelet[156]: E0603 10:21:06.399677 156 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a9117265a2f5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node kind-worker status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d666ccf5, ext:965495066, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355b0497bf4505, ext:3988070187, loc:(*time.Location)(0x8018900)}}, Count:6, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a9117265a2f5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 10:21:06 kind-worker kubelet[156]: E0603 10:21:06.400556 156 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a9117265e8b5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node kind-worker status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d66712b5, ext:965512924, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355b0497bf81d8, ext:3988085749, loc:(*time.Location)(0x8018900)}}, Count:6, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a9117265e8b5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 10:21:06 kind-worker kubelet[156]: E0603 10:21:06.401438 156 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope | |
Jun 03 10:21:06 kind-worker kubelet[156]: E0603 10:21:06.401402 156 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a9117265cf1a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node kind-worker status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d666f91a, ext:965506368, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355b0497bf667f, ext:3988078749, loc:(*time.Location)(0x8018900)}}, Count:6, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a9117265cf1a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 10:21:06 kind-worker kubelet[156]: E0603 10:21:06.491354 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:06 kind-worker kubelet[156]: E0603 10:21:06.591594 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:06 kind-worker kubelet[156]: E0603 10:21:06.691803 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:06 kind-worker kubelet[156]: E0603 10:21:06.792070 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:06 kind-worker kubelet[156]: E0603 10:21:06.892379 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:06 kind-worker kubelet[156]: E0603 10:21:06.992858 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:07 kind-worker kubelet[156]: E0603 10:21:07.093093 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:07 kind-worker kubelet[156]: E0603 10:21:07.193560 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:07 kind-worker kubelet[156]: E0603 10:21:07.293811 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:07 kind-worker kubelet[156]: E0603 10:21:07.318826 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope | |
Jun 03 10:21:07 kind-worker kubelet[156]: E0603 10:21:07.324852 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope | |
Jun 03 10:21:07 kind-worker kubelet[156]: E0603 10:21:07.393058 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope | |
Jun 03 10:21:07 kind-worker kubelet[156]: E0603 10:21:07.393681 156 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope | |
Jun 03 10:21:07 kind-worker kubelet[156]: E0603 10:21:07.393953 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:07 kind-worker kubelet[156]: E0603 10:21:07.402488 156 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope | |
Jun 03 10:21:07 kind-worker kubelet[156]: E0603 10:21:07.494246 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:07 kind-worker kubelet[156]: E0603 10:21:07.594464 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:07 kind-worker kubelet[156]: E0603 10:21:07.694705 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:07 kind-worker kubelet[156]: E0603 10:21:07.795014 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:07 kind-worker kubelet[156]: E0603 10:21:07.895207 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:07 kind-worker kubelet[156]: E0603 10:21:07.995412 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:08 kind-worker kubelet[156]: E0603 10:21:08.095613 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:08 kind-worker kubelet[156]: E0603 10:21:08.195818 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:08 kind-worker kubelet[156]: E0603 10:21:08.296071 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:08 kind-worker kubelet[156]: E0603 10:21:08.320319 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope | |
Jun 03 10:21:08 kind-worker kubelet[156]: E0603 10:21:08.326178 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope | |
Jun 03 10:21:08 kind-worker kubelet[156]: E0603 10:21:08.394938 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope | |
Jun 03 10:21:08 kind-worker kubelet[156]: E0603 10:21:08.395654 156 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope | |
Jun 03 10:21:08 kind-worker kubelet[156]: E0603 10:21:08.396398 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:08 kind-worker kubelet[156]: E0603 10:21:08.403692 156 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope | |
Jun 03 10:21:08 kind-worker kubelet[156]: E0603 10:21:08.469143 156 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized | |
Jun 03 10:21:08 kind-worker kubelet[156]: E0603 10:21:08.496578 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:08 kind-worker kubelet[156]: E0603 10:21:08.596999 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:08 kind-worker kubelet[156]: E0603 10:21:08.697747 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:08 kind-worker kubelet[156]: E0603 10:21:08.797958 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:08 kind-worker kubelet[156]: E0603 10:21:08.898164 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:08 kind-worker kubelet[156]: E0603 10:21:08.998355 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:09 kind-worker kubelet[156]: E0603 10:21:09.098555 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:09 kind-worker kubelet[156]: E0603 10:21:09.198712 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:09 kind-worker kubelet[156]: E0603 10:21:09.298925 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:09 kind-worker kubelet[156]: E0603 10:21:09.321765 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope | |
Jun 03 10:21:09 kind-worker kubelet[156]: E0603 10:21:09.327442 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope | |
Jun 03 10:21:09 kind-worker kubelet[156]: E0603 10:21:09.396365 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope | |
Jun 03 10:21:09 kind-worker kubelet[156]: E0603 10:21:09.397348 156 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope | |
Jun 03 10:21:09 kind-worker kubelet[156]: E0603 10:21:09.399058 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:09 kind-worker kubelet[156]: E0603 10:21:09.404847 156 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope | |
Jun 03 10:21:09 kind-worker kubelet[156]: E0603 10:21:09.499254 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:09 kind-worker kubelet[156]: E0603 10:21:09.593401 156 controller.go:115] failed to ensure node lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "kind-worker" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" | |
Jun 03 10:21:09 kind-worker kubelet[156]: E0603 10:21:09.599414 156 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 10:21:09 kind-worker kubelet[156]: I0603 10:21:09.599628 156 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach | |
Jun 03 10:21:09 kind-worker kubelet[156]: I0603 10:21:09.600696 156 kubelet_node_status.go:72] Attempting to register node kind-worker | |
Jun 03 10:21:09 kind-worker kubelet[156]: E0603 10:21:09.601821 156 kubelet_node_status.go:94] Unable to register node "kind-worker" with API server: nodes is forbidden: User "system:anonymous" cannot create resource "nodes" in API group "" at the cluster scope | |
Jun 03 10:21:09 kind-worker kubelet[156]: E0603 10:21:09.602202 156 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a9117265a2f5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node kind-worker status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d666ccf5, ext:965495066, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355b0563cd5bec, ext:7190320132, loc:(*time.Location)(0x8018900)}}, Count:7, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a9117265a2f5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 10:21:09 kind-worker kubelet[156]: E0603 10:21:09.603099 156 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a9117265cf1a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node kind-worker status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d666f91a, ext:965506368, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355b0563cd784e, ext:7190327399, loc:(*time.Location)(0x8018900)}}, Count:7, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a9117265cf1a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 10:21:09 kind-worker kubelet[156]: E0603 10:21:09.603876 156 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a9117265e8b5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node kind-worker status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf355b03d66712b5, ext:965512924, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355b0563cd88d3, ext:7190331627, loc:(*time.Location)(0x8018900)}}, Count:7, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a9117265e8b5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Jun 03 10:21:09 kind-worker kubelet[156]: E0603 10:21:09.699576 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:09 kind-worker kubelet[156]: E0603 10:21:09.799717 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:09 kind-worker kubelet[156]: E0603 10:21:09.899878 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:10 kind-worker kubelet[156]: E0603 10:21:10.000069 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:10 kind-worker kubelet[156]: E0603 10:21:10.100290 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:10 kind-worker kubelet[156]: E0603 10:21:10.200465 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:10 kind-worker kubelet[156]: E0603 10:21:10.300681 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:10 kind-worker kubelet[156]: E0603 10:21:10.323574 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jun 03 10:21:10 kind-worker kubelet[156]: E0603 10:21:10.328958 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope
Jun 03 10:21:10 kind-worker kubelet[156]: E0603 10:21:10.399118 156 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Jun 03 10:21:10 kind-worker kubelet[156]: E0603 10:21:10.399686 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jun 03 10:21:10 kind-worker kubelet[156]: E0603 10:21:10.400879 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:10 kind-worker kubelet[156]: E0603 10:21:10.406398 156 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Jun 03 10:21:10 kind-worker kubelet[156]: E0603 10:21:10.501030 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:10 kind-worker kubelet[156]: E0603 10:21:10.601198 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:10 kind-worker kubelet[156]: E0603 10:21:10.701358 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:10 kind-worker kubelet[156]: E0603 10:21:10.801536 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:10 kind-worker kubelet[156]: E0603 10:21:10.901717 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:11 kind-worker kubelet[156]: E0603 10:21:11.001860 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:11 kind-worker kubelet[156]: E0603 10:21:11.102035 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:11 kind-worker kubelet[156]: E0603 10:21:11.202151 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:11 kind-worker kubelet[156]: E0603 10:21:11.302330 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:11 kind-worker kubelet[156]: E0603 10:21:11.325189 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jun 03 10:21:11 kind-worker kubelet[156]: E0603 10:21:11.330461 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope
Jun 03 10:21:11 kind-worker kubelet[156]: E0603 10:21:11.400541 156 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Jun 03 10:21:11 kind-worker kubelet[156]: E0603 10:21:11.401162 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jun 03 10:21:11 kind-worker kubelet[156]: E0603 10:21:11.402684 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:11 kind-worker kubelet[156]: E0603 10:21:11.407683 156 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Jun 03 10:21:11 kind-worker kubelet[156]: E0603 10:21:11.502875 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:11 kind-worker kubelet[156]: E0603 10:21:11.603128 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:11 kind-worker kubelet[156]: E0603 10:21:11.703303 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:11 kind-worker kubelet[156]: E0603 10:21:11.803931 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:11 kind-worker kubelet[156]: E0603 10:21:11.904220 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:12 kind-worker kubelet[156]: E0603 10:21:12.004375 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:12 kind-worker kubelet[156]: E0603 10:21:12.104546 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:12 kind-worker kubelet[156]: E0603 10:21:12.204722 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:12 kind-worker kubelet[156]: E0603 10:21:12.304927 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:12 kind-worker kubelet[156]: E0603 10:21:12.328523 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jun 03 10:21:12 kind-worker kubelet[156]: E0603 10:21:12.331738 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope
Jun 03 10:21:12 kind-worker kubelet[156]: E0603 10:21:12.402128 156 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Jun 03 10:21:12 kind-worker kubelet[156]: E0603 10:21:12.402488 156 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jun 03 10:21:12 kind-worker kubelet[156]: E0603 10:21:12.405067 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:12 kind-worker kubelet[156]: E0603 10:21:12.408764 156 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Jun 03 10:21:12 kind-worker kubelet[156]: E0603 10:21:12.505219 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:12 kind-worker kubelet[156]: E0603 10:21:12.605504 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:12 kind-worker kubelet[156]: E0603 10:21:12.706760 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:12 kind-worker kubelet[156]: E0603 10:21:12.806941 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:12 kind-worker kubelet[156]: E0603 10:21:12.907103 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:13 kind-worker kubelet[156]: E0603 10:21:13.007281 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:13 kind-worker kubelet[156]: E0603 10:21:13.107469 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:13 kind-worker kubelet[156]: I0603 10:21:13.120783 156 transport.go:132] certificate rotation detected, shutting down client connections to start using new credentials
Jun 03 10:21:13 kind-worker kubelet[156]: E0603 10:21:13.207682 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:13 kind-worker kubelet[156]: E0603 10:21:13.307866 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:13 kind-worker kubelet[156]: E0603 10:21:13.408047 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:13 kind-worker kubelet[156]: E0603 10:21:13.470052 156 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Jun 03 10:21:13 kind-worker kubelet[156]: E0603 10:21:13.480607 156 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get node info: node "kind-worker" not found
Jun 03 10:21:13 kind-worker kubelet[156]: I0603 10:21:13.506209 156 reconciler.go:154] Reconciler: start to sync state
Jun 03 10:21:13 kind-worker kubelet[156]: E0603 10:21:13.508375 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:13 kind-worker kubelet[156]: E0603 10:21:13.608533 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:13 kind-worker kubelet[156]: E0603 10:21:13.708718 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:13 kind-worker kubelet[156]: E0603 10:21:13.808905 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:13 kind-worker kubelet[156]: E0603 10:21:13.909044 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:14 kind-worker kubelet[156]: E0603 10:21:14.009241 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:14 kind-worker kubelet[156]: E0603 10:21:14.109411 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:14 kind-worker kubelet[156]: E0603 10:21:14.209574 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:14 kind-worker kubelet[156]: E0603 10:21:14.309746 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:14 kind-worker kubelet[156]: E0603 10:21:14.409938 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:14 kind-worker kubelet[156]: E0603 10:21:14.510151 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:14 kind-worker kubelet[156]: E0603 10:21:14.610296 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:14 kind-worker kubelet[156]: E0603 10:21:14.710494 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:14 kind-worker kubelet[156]: E0603 10:21:14.810776 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:14 kind-worker kubelet[156]: E0603 10:21:14.910966 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:15 kind-worker kubelet[156]: E0603 10:21:15.011314 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:15 kind-worker kubelet[156]: E0603 10:21:15.111502 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:15 kind-worker kubelet[156]: E0603 10:21:15.211678 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:15 kind-worker kubelet[156]: E0603 10:21:15.311878 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:15 kind-worker kubelet[156]: E0603 10:21:15.412066 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:15 kind-worker kubelet[156]: E0603 10:21:15.512259 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:15 kind-worker kubelet[156]: E0603 10:21:15.612443 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:15 kind-worker kubelet[156]: E0603 10:21:15.712629 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:15 kind-worker kubelet[156]: E0603 10:21:15.812784 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:15 kind-worker kubelet[156]: E0603 10:21:15.912958 156 kubelet.go:2244] node "kind-worker" not found
Jun 03 10:21:15 kind-worker kubelet[156]: E0603 10:21:15.997702 156 controller.go:194] failed to get node "kind-worker" when trying to set owner ref to the node lease: nodes "kind-worker" not found
Jun 03 10:21:16 kind-worker kubelet[156]: I0603 10:21:16.002013 156 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
Jun 03 10:21:16 kind-worker kubelet[156]: I0603 10:21:16.003161 156 kubelet_node_status.go:72] Attempting to register node kind-worker
Jun 03 10:21:16 kind-worker kubelet[156]: I0603 10:21:16.006713 156 kubelet_node_status.go:75] Successfully registered node kind-worker
Jun 03 10:21:16 kind-worker kubelet[156]: E0603 10:21:16.007686 156 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "kind-worker": nodes "kind-worker" not found
Jun 03 10:21:16 kind-worker kubelet[156]: I0603 10:21:16.113280 156 kuberuntime_manager.go:946] updating runtime config through cri with podcidr 10.244.1.0/24
Jun 03 10:21:16 kind-worker kubelet[156]: I0603 10:21:16.114014 156 kubelet_network.go:77] Setting Pod CIDR:  -> 10.244.1.0/24
Jun 03 10:21:16 kind-worker kubelet[156]: E0603 10:21:16.114473 156 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Jun 03 10:21:16 kind-worker kubelet[156]: I0603 10:21:16.211425 156 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/51f164de-85e9-11e9-a310-0242ac110004-kube-proxy") pod "kube-proxy-kp2s9" (UID: "51f164de-85e9-11e9-a310-0242ac110004")
Jun 03 10:21:16 kind-worker kubelet[156]: I0603 10:21:16.211474 156 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/51f164de-85e9-11e9-a310-0242ac110004-xtables-lock") pod "kube-proxy-kp2s9" (UID: "51f164de-85e9-11e9-a310-0242ac110004")
Jun 03 10:21:16 kind-worker kubelet[156]: I0603 10:21:16.211510 156 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/51f164de-85e9-11e9-a310-0242ac110004-lib-modules") pod "kube-proxy-kp2s9" (UID: "51f164de-85e9-11e9-a310-0242ac110004")
Jun 03 10:21:16 kind-worker kubelet[156]: I0603 10:21:16.211543 156 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-b29kb" (UniqueName: "kubernetes.io/secret/51f164de-85e9-11e9-a310-0242ac110004-kube-proxy-token-b29kb") pod "kube-proxy-kp2s9" (UID: "51f164de-85e9-11e9-a310-0242ac110004")
Jun 03 10:21:16 kind-worker kubelet[156]: I0603 10:21:16.311789 156 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kindnet-token-c7lrr" (UniqueName: "kubernetes.io/secret/51f61a08-85e9-11e9-a310-0242ac110004-kindnet-token-c7lrr") pod "kindnet-s2mkf" (UID: "51f61a08-85e9-11e9-a310-0242ac110004")
Jun 03 10:21:16 kind-worker kubelet[156]: I0603 10:21:16.311940 156 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config" (UniqueName: "kubernetes.io/configmap/51f17b3e-85e9-11e9-a310-0242ac110004-config") pod "ip-masq-agent-j4ssk" (UID: "51f17b3e-85e9-11e9-a310-0242ac110004")
Jun 03 10:21:16 kind-worker kubelet[156]: I0603 10:21:16.311973 156 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "ip-masq-agent-token-njsg6" (UniqueName: "kubernetes.io/secret/51f17b3e-85e9-11e9-a310-0242ac110004-ip-masq-agent-token-njsg6") pod "ip-masq-agent-j4ssk" (UID: "51f17b3e-85e9-11e9-a310-0242ac110004")
Jun 03 10:21:16 kind-worker kubelet[156]: I0603 10:21:16.312002 156 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-cfg" (UniqueName: "kubernetes.io/host-path/51f61a08-85e9-11e9-a310-0242ac110004-cni-cfg") pod "kindnet-s2mkf" (UID: "51f61a08-85e9-11e9-a310-0242ac110004")
Jun 03 10:21:18 kind-worker kubelet[156]: E0603 10:21:18.471623 156 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Jun 03 10:21:23 kind-worker kubelet[156]: E0603 10:21:23.472680 156 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Jun 03 10:21:23 kind-worker kubelet[156]: E0603 10:21:23.507716 156 summary_sys_containers.go:47] Failed to get system container stats for "/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/system.slice/kubelet.service": failed to get cgroup stats for "/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/system.slice/kubelet.service": failed to get container info for "/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/system.slice/kubelet.service": unknown container "/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/system.slice/kubelet.service"
Jun 03 10:21:28 kind-worker kubelet[156]: E0603 10:21:28.473827 156 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Jun 03 10:21:33 kind-worker kubelet[156]: E0603 10:21:33.474996 156 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Jun 03 10:21:33 kind-worker kubelet[156]: E0603 10:21:33.527584 156 summary_sys_containers.go:47] Failed to get system container stats for "/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/system.slice/kubelet.service": failed to get cgroup stats for "/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/system.slice/kubelet.service": failed to get container info for "/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/system.slice/kubelet.service": unknown container "/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/system.slice/kubelet.service"
Jun 03 10:21:38 kind-worker kubelet[156]: E0603 10:21:38.476265 156 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Jun 03 10:21:43 kind-worker kubelet[156]: E0603 10:21:43.477508 156 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Jun 03 10:21:43 kind-worker kubelet[156]: E0603 10:21:43.561059 156 summary_sys_containers.go:47] Failed to get system container stats for "/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/system.slice/kubelet.service": failed to get cgroup stats for "/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/system.slice/kubelet.service": failed to get container info for "/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/system.slice/kubelet.service": unknown container "/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/system.slice/kubelet.service"
Jun 03 10:21:48 kind-worker kubelet[156]: E0603 10:21:48.478908 156 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Jun 03 10:21:53 kind-worker kubelet[156]: E0603 10:21:53.581512 156 summary_sys_containers.go:47] Failed to get system container stats for "/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/system.slice/kubelet.service": failed to get cgroup stats for "/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/system.slice/kubelet.service": failed to get container info for "/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/system.slice/kubelet.service": unknown container "/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/docker/03e39691ea7b2ecc9fdcadbc841f633ba8c722a949839d67524b97f0fe0d60ee/system.slice/kubelet.service"
Jun 03 10:22:00 kind-worker kubelet[156]: I0603 10:22:00.011221 156 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-8d6cv" (UniqueName: "kubernetes.io/secret/54dca26d-85e9-11e9-a310-0242ac110004-default-token-8d6cv") pod "hello-6d6586c69c-vvqlx" (UID: "54dca26d-85e9-11e9-a310-0242ac110004")
Jun 03 10:22:02 kind-worker kubelet[156]: E0603 10:22:02.429218 156 upgradeaware.go:384] Error proxying data from backend to client: read tcp 127.0.0.1:45862->127.0.0.1:35752: read: connection reset by peer
Initializing machine ID from random generator.
Inserted module 'autofs4'