@egernst
Last active August 9, 2018 04:26
$ sudo kata-runtime list
ID                                                                 PID         STATUS      BUNDLE                                                                                                                 CREATED                          OWNER
f9c27becce8ce678523313eca093ae64c45bf51d67a7493190de3048e3a45dac   22859       running     /run/containers/storage/overlay-containers/f9c27becce8ce678523313eca093ae64c45bf51d67a7493190de3048e3a45dac/userdata   2018-08-09T04:15:38.169158318Z   #0
296c0247716df5c89663c07fc0383f346933aa7214ab5234b2809f7eb66bef44   22972       stopped     /run/containers/storage/overlay-containers/296c0247716df5c89663c07fc0383f346933aa7214ab5234b2809f7eb66bef44/userdata   2018-08-09T04:15:39.411160174Z   #0
a64f8601f98eb3c9171505713aca2cce6e53b271a607690b8c72f4fd418e7863   24169       running     /run/containers/storage/overlay-containers/a64f8601f98eb3c9171505713aca2cce6e53b271a607690b8c72f4fd418e7863/userdata   2018-08-09T04:17:24.713274953Z   #0
f51cba19458669347714999d0da20d371230e79b88ce1972c7427b1296bc541a   22802       running     /run/containers/storage/overlay-containers/f51cba19458669347714999d0da20d371230e79b88ce1972c7427b1296bc541a/userdata   2018-08-09T04:15:37.923277214Z   #0
b5732d45ecf658cdd3303667d4d77570b7501d07b1605d4e1ccc20b12bb08bb0   22939       running     /run/containers/storage/overlay-containers/b5732d45ecf658cdd3303667d4d77570b7501d07b1605d4e1ccc20b12bb08bb0/userdata   2018-08-09T04:15:39.303822167Z   #0
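For scripting against a listing like this, the STATUS column can be filtered with awk. A minimal sketch; the `sample` variable below is a stand-in for live `sudo kata-runtime list` output (IDs and paths shortened here):

```shell
# Print the IDs of running containers from `kata-runtime list`-style output.
# `sample` mimics the table above; on a live host, pipe the real command
# (`sudo kata-runtime list`) instead of echoing a sample.
sample='f9c27becce8c 22859 running /run/containers/... 2018-08-09T04:15:38Z #0
296c0247716d 22972 stopped /run/containers/... 2018-08-09T04:15:39Z #0'
echo "$sample" | awk '$3 == "running" { print $1 }'
```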
$ cat /etc/crio/crio.conf | grep default_workload
# default_workload_trust is the default level of trust crio puts in container
default_workload_trust = "untrusted"
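With `default_workload_trust = "untrusted"`, CRI-O of this vintage treats any pod without an explicit trust annotation as untrusted and runs it under the runtime configured for untrusted workloads (Kata here), reserving the trusted runtime (typically runc) for annotated trusted pods. A hedged sketch of the related `crio.conf` keys; the binary paths are illustrative, so check them against your own installation:

```toml
[crio.runtime]
# Runtime used for trusted workloads (illustrative path).
runtime = "/usr/bin/runc"
# Runtime used for untrusted workloads; pointing this at kata-runtime
# puts such pods into VM-isolated sandboxes (illustrative path).
runtime_untrusted_workload = "/usr/bin/kata-runtime"
# With "untrusted" as the default, pods lacking a trust annotation
# are sent to runtime_untrusted_workload.
default_workload_trust = "untrusted"
```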

Now reload the unit files and restart CRI-O:

$ sudo systemctl daemon-reload
$ sudo systemctl restart crio
kata@kata-clear ~ $ ps -ae | grep kata
22745 ?        00:00:05 kata-qemu-lite-
22763 ?        00:00:05 kata-qemu-lite-
22771 ?        00:00:00 kata-proxy
22778 ?        00:00:00 kata-proxy
22802 ?        00:00:00 kata-shim
22859 ?        00:00:00 kata-shim
25609 ?        00:00:00 kata-shim
25669 ?        00:00:00 kata-shim
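As a rough sanity check, each Kata pod in this setup should contribute one VM process (kata-qemu) and one kata-proxy, plus one kata-shim per container (the sandbox container included). A quick tally over output like the listing above; the `sample` variable stands in for live `ps -ae | grep kata` output:

```shell
# Count kata component processes by command name (field 4 of `ps -ae`).
# `sample` mirrors the listing above.
sample='22745 ? 00:00:05 kata-qemu-lite-
22763 ? 00:00:05 kata-qemu-lite-
22771 ? 00:00:00 kata-proxy
22778 ? 00:00:00 kata-proxy
22802 ? 00:00:00 kata-shim
22859 ? 00:00:00 kata-shim
25609 ? 00:00:00 kata-shim
25669 ? 00:00:00 kata-shim'
echo "$sample" | awk '{ count[$4]++ } END { for (c in count) print c, count[c] }' | sort
```

Two pods, two containers each, gives 2 kata-qemu-lite-, 2 kata-proxy, and 4 kata-shim, matching the listing.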

Journal output following the restart:

Aug 09 04:20:17 kata-clear sudo[26220]:     kata : TTY=pts/1 ; PWD=/home/kata ; USER=root ; COMMAND=/usr/bin/systemctl restart crio
Aug 09 04:20:17 kata-clear sudo[26220]: pam_unix(sudo:session): session opened for user root by (uid=0)
Aug 09 04:20:17 kata-clear systemd[1]: Stopping Open Container Initiative Daemon...
Aug 09 04:20:17 kata-clear crio[21132]: time="2018-08-09 04:20:17.641376913Z" level=error msg="Failed to start streaming server: http: Server closed"
Aug 09 04:20:17 kata-clear systemd[1]: Stopped Open Container Initiative Daemon.
Aug 09 04:20:17 kata-clear systemd[1]: Starting Open Container Initiative Daemon...
Aug 09 04:20:17 kata-clear kernel: overlayfs: NFS export requires "redirect_dir=nofollow" on non-upper mount, falling back to nfs_export=off.
Aug 09 04:20:17 kata-clear kubelet[21572]: E0809 04:20:17.755635   21572 remote_runtime.go:169] ListPodSandbox with filter &PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},} from runtime service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
Aug 09 04:20:17 kata-clear kubelet[21572]: E0809 04:20:17.755710   21572 kuberuntime_sandbox.go:195] ListPodSandbox failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
Aug 09 04:20:17 kata-clear kubelet[21572]: E0809 04:20:17.755723   21572 kubelet_pods.go:1019] Error listing containers: &status.statusError{Code:14, Message:"grpc: the connection is unavailable", Details:[]*any.Any(nil)}
Aug 09 04:20:17 kata-clear kubelet[21572]: E0809 04:20:17.755750   21572 kubelet.go:1928] Failed cleaning pods: rpc error: code = Unavailable desc = grpc: the connection is unavailable
Aug 09 04:20:17 kata-clear crio[26225]: time="2018-08-09 04:20:17.934788553Z" level=warning msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Aug 09 04:20:17 kata-clear crio[26225]: time="2018-08-09 04:20:17.934835653Z" level=info msg="[graphdriver] using prior storage driver: overlay"
Aug 09 04:20:17 kata-clear crio[26225]: time="2018-08-09 04:20:17.937768461Z" level=warning msg="hooks path: "/usr/share/containers/oci/hooks.d" does not exist"
Aug 09 04:20:17 kata-clear crio[26225]: time="2018-08-09 04:20:17.937789762Z" level=warning msg="hooks path: "/etc/containers/oci/hooks.d" does not exist"
Aug 09 04:20:17 kata-clear crio[26225]: time="2018-08-09 04:20:17.940321869Z" level=info msg="CNI network cbr0 (type=flannel) is used from /etc/cni/net.d/10-flannel.conflist"
Aug 09 04:20:17 kata-clear crio[26225]: time="2018-08-09 04:20:17.940350769Z" level=info msg="Initial CNI setting succeeded"
Aug 09 04:20:17 kata-clear kubelet[21572]: E0809 04:20:17.961258   21572 remote_runtime.go:169] ListPodSandbox with filter nil from runtime service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
Aug 09 04:20:17 kata-clear kubelet[21572]: E0809 04:20:17.961310   21572 kuberuntime_sandbox.go:195] ListPodSandbox failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
Aug 09 04:20:17 kata-clear kubelet[21572]: E0809 04:20:17.961327   21572 generic.go:197] GenericPLEG: Unable to retrieve pods: rpc error: code = Unavailable desc = grpc: the connection is unavailable
Aug 09 04:20:18 kata-clear crio[26225]: time="2018-08-09 04:20:18.186964875Z" level=warning msg="could not restore sandbox IP for 4ca8055b049270e3f86c072beedaba3b6703b89176436678127c798b01c0a2f9: failed to get network status for pod sandbox k8s_coredns-78fcdf6894-lhd72_kube-system_fdc507bd-9b89-11e8-bf4d-000d3af78d7a_1(4ca8055b049270e3f86c072beedaba3b6703b89176436678127c798b01c0a2f9): Unexpected command output nsenter: cannot open /proc/19721/ns/net: No such file or directory
Aug 09 04:20:18 kata-clear crio[26225]:  with error: exit status 1"
Aug 09 04:20:18 kata-clear crio[26225]: time="2018-08-09 04:20:18.188992081Z" level=warning msg="could not restore sandbox IP for a97fbfec6d9585b58615eb780054c98c4f32ab27f7f698f5dc41542b735209ce: failed to get network status for pod sandbox k8s_coredns-78fcdf6894-lhd72_kube-system_fdc507bd-9b89-11e8-bf4d-000d3af78d7a_0(a97fbfec6d9585b58615eb780054c98c4f32ab27f7f698f5dc41542b735209ce): Unexpected command output nsenter: cannot open /proc/18382/ns/net: No such file or directory
Aug 09 04:20:18 kata-clear crio[26225]:  with error: exit status 1"
Aug 09 04:20:18 kata-clear crio[26225]: time="2018-08-09 04:20:18.193120792Z" level=warning msg="could not restore sandbox IP for a18a64985a07e07114f9939fab5ec7f7d51d35da0a3951f49fe0c864666701e5: failed to get network status for pod sandbox k8s_coredns-78fcdf6894-g5697_kube-system_fdd7de6a-9b89-11e8-bf4d-000d3af78d7a_1(a18a64985a07e07114f9939fab5ec7f7d51d35da0a3951f49fe0c864666701e5): Unexpected command output nsenter: cannot open /proc/19703/ns/net: No such file or directory
Aug 09 04:20:18 kata-clear crio[26225]:  with error: exit status 1"
Aug 09 04:20:18 kata-clear crio[26225]: time="2018-08-09 04:20:18.194629097Z" level=warning msg="could not restore sandbox IP for 125b9a21b435e4136136ba2522501151edbf18bea03fb47a82e7b46bc6c8d74d: failed to get network status for pod sandbox k8s_coredns-78fcdf6894-g5697_kube-system_fdd7de6a-9b89-11e8-bf4d-000d3af78d7a_0(125b9a21b435e4136136ba2522501151edbf18bea03fb47a82e7b46bc6c8d74d): Unexpected command output nsenter: cannot open /proc/18264/ns/net: No such file or directory
Aug 09 04:20:18 kata-clear crio[26225]:  with error: exit status 1"
Aug 09 04:20:18 kata-clear systemd[1]: Started Open Container Initiative Daemon.
Aug 09 04:20:18 kata-clear sudo[26220]: pam_unix(sudo:session): session closed for user root
Aug 09 04:20:18 kata-clear crio[26225]: time="2018-08-09 04:20:18.199097110Z" level=error msg="watcher.Add("/usr/share/containers/oci/hooks.d") failed: no such file or directory"
Aug 09 04:20:18 kata-clear kubelet[21572]: W0809 04:20:18.968658   21572 pod_container_deletor.go:75] Container "f9c27becce8ce678523313eca093ae64c45bf51d67a7493190de3048e3a45dac" not found in pod's containers
Aug 09 04:20:19 kata-clear kubelet[21572]: W0809 04:20:19.193887   21572 pod_container_deletor.go:75] Container "f51cba19458669347714999d0da20d371230e79b88ce1972c7427b1296bc541a" not found in pod's containers
Aug 09 04:20:19 kata-clear crio[26225]: time="2018-08-09 04:20:19.287629326Z" level=info msg="Got pod network {Name:coredns-78fcdf6894-klmq6 Namespace:kube-system ID:f9c27becce8ce678523313eca093ae64c45bf51d67a7493190de3048e3a45dac NetNS:/proc/22859/ns/net PortMappings:[]}"
Aug 09 04:20:19 kata-clear crio[26225]: time="2018-08-09 04:20:19.287680026Z" level=info msg="About to del CNI network cbr0 (type=flannel)"
Aug 09 04:20:19 kata-clear kernel: clr: call_modprobe: net-pf-10   2 
Aug 09 04:20:19 kata-clear systemd-networkd[2750]: vethefdfc1a6: Lost carrier
Aug 09 04:20:19 kata-clear kernel: cni0: port 6(vethefdfc1a6) entered disabled state
Aug 09 04:20:19 kata-clear kernel: device vethefdfc1a6 left promiscuous mode
Aug 09 04:20:19 kata-clear kernel: cni0: port 6(vethefdfc1a6) entered disabled state
Aug 09 04:20:19 kata-clear crio[26225]: time="2018-08-09 04:20:19.508974659Z" level=info msg="Got pod network {Name:coredns-78fcdf6894-fgmv6 Namespace:kube-system ID:f51cba19458669347714999d0da20d371230e79b88ce1972c7427b1296bc541a NetNS:/proc/22802/ns/net PortMappings:[]}"
Aug 09 04:20:19 kata-clear crio[26225]: time="2018-08-09 04:20:19.509020860Z" level=info msg="About to del CNI network cbr0 (type=flannel)"
Aug 09 04:20:19 kata-clear kernel: clr: call_modprobe: net-pf-10   2 
Aug 09 04:20:19 kata-clear systemd-networkd[2750]: veth060477c9: Lost carrier
Aug 09 04:20:19 kata-clear kernel: cni0: port 5(veth060477c9) entered disabled state
Aug 09 04:20:19 kata-clear kernel: device veth060477c9 left promiscuous mode
Aug 09 04:20:19 kata-clear kernel: cni0: port 5(veth060477c9) entered disabled state
Aug 09 04:20:19 kata-clear conmon[26555]: conmon 383884c6eb031047af02 <ninfo>: container PID: 26565
Aug 09 04:20:19 kata-clear conmon[26555]: conmon 383884c6eb031047af02 <ninfo>: attach sock path: /var/run/crio/383884c6eb031047af021ae21e2056e77e6f8ab13a732017e8b230fb486e65fc/attach
Aug 09 04:20:19 kata-clear conmon[26555]: conmon 383884c6eb031047af02 <ninfo>: addr{sun_family=AF_UNIX, sun_path=/var/run/crio/383884c6eb031047af021ae21e2056e77e6f8ab13a732017e8b230fb486e65fc/attach}
Aug 09 04:20:19 kata-clear conmon[26555]: conmon 383884c6eb031047af02 <ninfo>: ctl fifo path: /var/run/containers/storage/overlay-containers/383884c6eb031047af021ae21e2056e77e6f8ab13a732017e8b230fb486e65fc/userdata/ctl
Aug 09 04:20:19 kata-clear conmon[26555]: conmon 383884c6eb031047af02 <ninfo>: terminal_ctrl_fd: 15
Aug 09 04:20:19 kata-clear crio[26225]: time="2018-08-09 04:20:19.910314608Z" level=info msg="Got pod network {Name:coredns-78fcdf6894-klmq6 Namespace:kube-system ID:383884c6eb031047af021ae21e2056e77e6f8ab13a732017e8b230fb486e65fc NetNS:/proc/26565/ns/net PortMappings:[]}"
Aug 09 04:20:19 kata-clear crio[26225]: time="2018-08-09 04:20:19.910347708Z" level=info msg="About to add CNI network cni-loopback (type=loopback)"
Aug 09 04:20:19 kata-clear crio[26225]: time="2018-08-09 04:20:19.912878016Z" level=info msg="Got pod network {Name:coredns-78fcdf6894-klmq6 Namespace:kube-system ID:383884c6eb031047af021ae21e2056e77e6f8ab13a732017e8b230fb486e65fc NetNS:/proc/26565/ns/net PortMappings:[]}"
Aug 09 04:20:19 kata-clear crio[26225]: time="2018-08-09 04:20:19.912902716Z" level=info msg="About to add CNI network cbr0 (type=flannel)"
Aug 09 04:20:19 kata-clear systemd-udevd[26597]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Aug 09 04:20:19 kata-clear kernel: cni0: port 5(veth8cfcec7f) entered blocking state
Aug 09 04:20:19 kata-clear kernel: cni0: port 5(veth8cfcec7f) entered disabled state
Aug 09 04:20:19 kata-clear kernel: device veth8cfcec7f entered promiscuous mode
Aug 09 04:20:19 kata-clear kernel: cni0: port 5(veth8cfcec7f) entered blocking state
Aug 09 04:20:19 kata-clear kernel: cni0: port 5(veth8cfcec7f) entered forwarding state
Aug 09 04:20:19 kata-clear systemd-udevd[26597]: Could not generate persistent MAC address for veth8cfcec7f: No such file or directory
Aug 09 04:20:19 kata-clear systemd-networkd[2750]: veth8cfcec7f: Gained carrier
Aug 09 04:20:19 kata-clear kubelet[21572]: I0809 04:20:19.971509   21572 kuberuntime_manager.go:757] checking backoff for container "coredns" in pod "coredns-78fcdf6894-klmq6_kube-system(d0ab4633-9b8a-11e8-a1e4-000d3af78d7a)"
Aug 09 04:20:19 kata-clear kubelet[21572]: I0809 04:20:19.971715   21572 kuberuntime_manager.go:767] Back-off 10s restarting failed container=coredns pod=coredns-78fcdf6894-klmq6_kube-system(d0ab4633-9b8a-11e8-a1e4-000d3af78d7a)
Aug 09 04:20:19 kata-clear kubelet[21572]: E0809 04:20:19.971797   21572 pod_workers.go:186] Error syncing pod d0ab4633-9b8a-11e8-a1e4-000d3af78d7a ("coredns-78fcdf6894-klmq6_kube-system(d0ab4633-9b8a-11e8-a1e4-000d3af78d7a)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 10s restarting failed container=coredns pod=coredns-78fcdf6894-klmq6_kube-system(d0ab4633-9b8a-11e8-a1e4-000d3af78d7a)"
Aug 09 04:20:20 kata-clear conmon[26622]: conmon 511345d753f5c70bdafb <ninfo>: container PID: 26632
Aug 09 04:20:20 kata-clear conmon[26622]: conmon 511345d753f5c70bdafb <ninfo>: attach sock path: /var/run/crio/511345d753f5c70bdafb97854046c81c05e4fe0b0b9bfa55a82be0aa94c6fa40/attach
Aug 09 04:20:20 kata-clear conmon[26622]: conmon 511345d753f5c70bdafb <ninfo>: addr{sun_family=AF_UNIX, sun_path=/var/run/crio/511345d753f5c70bdafb97854046c81c05e4fe0b0b9bfa55a82be0aa94c6fa40/attach}
Aug 09 04:20:20 kata-clear conmon[26622]: conmon 511345d753f5c70bdafb <ninfo>: ctl fifo path: /var/run/containers/storage/overlay-containers/511345d753f5c70bdafb97854046c81c05e4fe0b0b9bfa55a82be0aa94c6fa40/userdata/ctl
Aug 09 04:20:20 kata-clear conmon[26622]: conmon 511345d753f5c70bdafb <ninfo>: terminal_ctrl_fd: 15
Aug 09 04:20:20 kata-clear crio[26225]: time="2018-08-09 04:20:20.176646271Z" level=info msg="Got pod network {Name:coredns-78fcdf6894-fgmv6 Namespace:kube-system ID:511345d753f5c70bdafb97854046c81c05e4fe0b0b9bfa55a82be0aa94c6fa40 NetNS:/proc/26632/ns/net PortMappings:[]}"
Aug 09 04:20:20 kata-clear crio[26225]: time="2018-08-09 04:20:20.176673371Z" level=info msg="About to add CNI network cni-loopback (type=loopback)"
Aug 09 04:20:20 kata-clear crio[26225]: time="2018-08-09 04:20:20.178831177Z" level=info msg="Got pod network {Name:coredns-78fcdf6894-fgmv6 Namespace:kube-system ID:511345d753f5c70bdafb97854046c81c05e4fe0b0b9bfa55a82be0aa94c6fa40 NetNS:/proc/26632/ns/net PortMappings:[]}"
Aug 09 04:20:20 kata-clear crio[26225]: time="2018-08-09 04:20:20.178853677Z" level=info msg="About to add CNI network cbr0 (type=flannel)"
Aug 09 04:20:20 kata-clear systemd-udevd[26664]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Aug 09 04:20:20 kata-clear systemd-udevd[26664]: Could not generate persistent MAC address for veth4290cf17: No such file or directory
Aug 09 04:20:20 kata-clear kernel: cni0: port 6(veth4290cf17) entered blocking state
Aug 09 04:20:20 kata-clear kernel: cni0: port 6(veth4290cf17) entered disabled state
Aug 09 04:20:20 kata-clear kernel: device veth4290cf17 entered promiscuous mode
Aug 09 04:20:20 kata-clear kernel: cni0: port 6(veth4290cf17) entered blocking state
Aug 09 04:20:20 kata-clear kernel: cni0: port 6(veth4290cf17) entered forwarding state
Aug 09 04:20:20 kata-clear systemd-networkd[2750]: veth4290cf17: Gained carrier
Aug 09 04:20:20 kata-clear kubelet[21572]: I0809 04:20:20.232531   21572 kuberuntime_manager.go:757] checking backoff for container "coredns" in pod "coredns-78fcdf6894-fgmv6_kube-system(d0ac56e1-9b8a-11e8-a1e4-000d3af78d7a)"
Aug 09 04:20:20 kata-clear kubelet[21572]: I0809 04:20:20.232670   21572 kuberuntime_manager.go:767] Back-off 10s restarting failed container=coredns pod=coredns-78fcdf6894-fgmv6_kube-system(d0ac56e1-9b8a-11e8-a1e4-000d3af78d7a)
Aug 09 04:20:20 kata-clear kubelet[21572]: E0809 04:20:20.232705   21572 pod_workers.go:186] Error syncing pod d0ac56e1-9b8a-11e8-a1e4-000d3af78d7a ("coredns-78fcdf6894-fgmv6_kube-system(d0ac56e1-9b8a-11e8-a1e4-000d3af78d7a)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 10s restarting failed container=coredns pod=coredns-78fcdf6894-fgmv6_kube-system(d0ac56e1-9b8a-11e8-a1e4-000d3af78d7a)"
Aug 09 04:20:20 kata-clear kubelet[21572]: I0809 04:20:20.498865   21572 kuberuntime_manager.go:513] Container {Name:coredns Image:k8s.gcr.io/coredns:1.1.3 Command:[] Args:[-conf /etc/coredns/Corefile] WorkingDir: Ports:[{Name:dns HostPort:0 ContainerPort:53 Protocol:UDP HostIP:} {Name:dns-tcp HostPort:0 ContainerPort:53 Protocol:TCP HostIP:} {Name:metrics HostPort:0 ContainerPort:9153 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[memory:{i:{value:178257920 scale:0} d:{Dec:<nil>} s:170Mi Format:BinarySI}] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:73400320 scale:0} d:{Dec:<nil>} s:70Mi Format:BinarySI}]} VolumeMounts:[{Name:config-volume ReadOnly:true MountPath:/etc/coredns SubPath: MountPropagation:<nil>} {Name:coredns-token-vs7jv ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:8080,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[all],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,} Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Aug 09 04:20:20 kata-clear kubelet[21572]: I0809 04:20:20.498984   21572 kuberuntime_manager.go:757] checking backoff for container "coredns" in pod "coredns-78fcdf6894-klmq6_kube-system(d0ab4633-9b8a-11e8-a1e4-000d3af78d7a)"
Aug 09 04:20:20 kata-clear kubelet[21572]: I0809 04:20:20.499191   21572 kuberuntime_manager.go:767] Back-off 10s restarting failed container=coredns pod=coredns-78fcdf6894-klmq6_kube-system(d0ab4633-9b8a-11e8-a1e4-000d3af78d7a)
Aug 09 04:20:20 kata-clear kubelet[21572]: E0809 04:20:20.499229   21572 pod_workers.go:186] Error syncing pod d0ab4633-9b8a-11e8-a1e4-000d3af78d7a ("coredns-78fcdf6894-klmq6_kube-system(d0ab4633-9b8a-11e8-a1e4-000d3af78d7a)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 10s restarting failed container=coredns pod=coredns-78fcdf6894-klmq6_kube-system(d0ab4633-9b8a-11e8-a1e4-000d3af78d7a)"
Aug 09 04:20:21 kata-clear kubelet[21572]: I0809 04:20:21.521502   21572 kuberuntime_manager.go:513] Container {Name:coredns Image:k8s.gcr.io/coredns:1.1.3 Command:[] Args:[-conf /etc/coredns/Corefile] WorkingDir: Ports:[{Name:dns HostPort:0 ContainerPort:53 Protocol:UDP HostIP:} {Name:dns-tcp HostPort:0 ContainerPort:53 Protocol:TCP HostIP:} {Name:metrics HostPort:0 ContainerPort:9153 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[memory:{i:{value:178257920 scale:0} d:{Dec:<nil>} s:170Mi Format:BinarySI}] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:73400320 scale:0} d:{Dec:<nil>} s:70Mi Format:BinarySI}]} VolumeMounts:[{Name:config-volume ReadOnly:true MountPath:/etc/coredns SubPath: MountPropagation:<nil>} {Name:coredns-token-vs7jv ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:8080,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[all],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,} Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Aug 09 04:20:21 kata-clear kubelet[21572]: I0809 04:20:21.521600   21572 kuberuntime_manager.go:757] checking backoff for container "coredns" in pod "coredns-78fcdf6894-klmq6_kube-system(d0ab4633-9b8a-11e8-a1e4-000d3af78d7a)"
Aug 09 04:20:21 kata-clear kubelet[21572]: I0809 04:20:21.521723   21572 kuberuntime_manager.go:767] Back-off 10s restarting failed container=coredns pod=coredns-78fcdf6894-klmq6_kube-system(d0ab4633-9b8a-11e8-a1e4-000d3af78d7a)
Aug 09 04:20:21 kata-clear kubelet[21572]: E0809 04:20:21.521753   21572 pod_workers.go:186] Error syncing pod d0ab4633-9b8a-11e8-a1e4-000d3af78d7a ("coredns-78fcdf6894-klmq6_kube-system(d0ab4633-9b8a-11e8-a1e4-000d3af78d7a)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 10s restarting failed container=coredns pod=coredns-78fcdf6894-klmq6_kube-system(d0ab4633-9b8a-11e8-a1e4-000d3af78d7a)"
Aug 09 04:20:21 kata-clear crio[26225]: time="2018-08-09 04:20:21.522473623Z" level=info msg="Got pod network {Name:coredns-78fcdf6894-fgmv6 Namespace:kube-system ID:511345d753f5c70bdafb97854046c81c05e4fe0b0b9bfa55a82be0aa94c6fa40 NetNS:/proc/26632/ns/net PortMappings:[]}"
Aug 09 04:20:21 kata-clear crio[26225]: time="2018-08-09 04:20:21.522499124Z" level=info msg="About to del CNI network cbr0 (type=flannel)"
Aug 09 04:20:21 kata-clear kernel: clr: call_modprobe: net-pf-10   2 
Aug 09 04:20:21 kata-clear systemd-networkd[2750]: veth4290cf17: Lost carrier
Aug 09 04:20:21 kata-clear kernel: cni0: port 6(veth4290cf17) entered disabled state
Aug 09 04:20:21 kata-clear kernel: device veth4290cf17 left promiscuous mode
Aug 09 04:20:21 kata-clear kernel: cni0: port 6(veth4290cf17) entered disabled state
Aug 09 04:20:22 kata-clear kubelet[21572]: E0809 04:20:22.260487   21572 manager.go:1130] Failed to create existing container: /kubepods/burstable/poda00c35e56ebd0bdfcd77d53674a5d2a1/crio-a5e71361d006b9ca54e51b846e2fa0cacce14304855c6c4b75436986027f9c2c: invalid character 'c' looking for beginning of value
Aug 09 04:20:22 kata-clear kubelet[21572]: W0809 04:20:22.286501   21572 pod_container_deletor.go:75] Container "511345d753f5c70bdafb97854046c81c05e4fe0b0b9bfa55a82be0aa94c6fa40" not found in pod's containers
Aug 09 04:20:22 kata-clear conmon[26762]: conmon 868877d65ec5b0a0a53d <ninfo>: container PID: 26772
Aug 09 04:20:22 kata-clear conmon[26762]: conmon 868877d65ec5b0a0a53d <ninfo>: attach sock path: /var/run/crio/868877d65ec5b0a0a53db79b6b6679921101d956f405798ea3f8c33befdf3220/attach
Aug 09 04:20:22 kata-clear conmon[26762]: conmon 868877d65ec5b0a0a53d <ninfo>: addr{sun_family=AF_UNIX, sun_path=/var/run/crio/868877d65ec5b0a0a53db79b6b6679921101d956f405798ea3f8c33befdf3220/attach}
Aug 09 04:20:22 kata-clear conmon[26762]: conmon 868877d65ec5b0a0a53d <ninfo>: ctl fifo path: /var/run/containers/storage/overlay-containers/868877d65ec5b0a0a53db79b6b6679921101d956f405798ea3f8c33befdf3220/userdata/ctl
Aug 09 04:20:22 kata-clear conmon[26762]: conmon 868877d65ec5b0a0a53d <ninfo>: terminal_ctrl_fd: 15
Aug 09 04:20:22 kata-clear crio[26225]: time="2018-08-09 04:20:22.453682489Z" level=info msg="Got pod network {Name:coredns-78fcdf6894-fgmv6 Namespace:kube-system ID:868877d65ec5b0a0a53db79b6b6679921101d956f405798ea3f8c33befdf3220 NetNS:/proc/26772/ns/net PortMappings:[]}"
Aug 09 04:20:22 kata-clear crio[26225]: time="2018-08-09 04:20:22.453710089Z" level=info msg="About to add CNI network cni-loopback (type=loopback)"
Aug 09 04:20:22 kata-clear crio[26225]: time="2018-08-09 04:20:22.456471597Z" level=info msg="Got pod network {Name:coredns-78fcdf6894-fgmv6 Namespace:kube-system ID:868877d65ec5b0a0a53db79b6b6679921101d956f405798ea3f8c33befdf3220 NetNS:/proc/26772/ns/net PortMappings:[]}"
Aug 09 04:20:22 kata-clear crio[26225]: time="2018-08-09 04:20:22.456493797Z" level=info msg="About to add CNI network cbr0 (type=flannel)"
Aug 09 04:20:22 kata-clear systemd-udevd[26804]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Aug 09 04:20:22 kata-clear systemd-udevd[26804]: Could not generate persistent MAC address for vethbdc6c319: No such file or directory
Aug 09 04:20:22 kata-clear systemd-networkd[2750]: vethbdc6c319: Gained carrier
Aug 09 04:20:22 kata-clear kernel: cni0: port 6(vethbdc6c319) entered blocking state
Aug 09 04:20:22 kata-clear kernel: cni0: port 6(vethbdc6c319) entered disabled state
Aug 09 04:20:22 kata-clear kernel: device vethbdc6c319 entered promiscuous mode
Aug 09 04:20:22 kata-clear kernel: cni0: port 6(vethbdc6c319) entered blocking state
Aug 09 04:20:22 kata-clear kernel: cni0: port 6(vethbdc6c319) entered forwarding state
Aug 09 04:20:22 kata-clear kubelet[21572]: I0809 04:20:22.511240   21572 kuberuntime_manager.go:757] checking backoff for container "coredns" in pod "coredns-78fcdf6894-fgmv6_kube-system(d0ac56e1-9b8a-11e8-a1e4-000d3af78d7a)"
Aug 09 04:20:22 kata-clear kubelet[21572]: I0809 04:20:22.511489   21572 kuberuntime_manager.go:767] Back-off 10s restarting failed container=coredns pod=coredns-78fcdf6894-fgmv6_kube-system(d0ac56e1-9b8a-11e8-a1e4-000d3af78d7a)
Aug 09 04:20:22 kata-clear kubelet[21572]: E0809 04:20:22.511524   21572 pod_workers.go:186] Error syncing pod d0ac56e1-9b8a-11e8-a1e4-000d3af78d7a ("coredns-78fcdf6894-fgmv6_kube-system(d0ac56e1-9b8a-11e8-a1e4-000d3af78d7a)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 10s restarting failed container=coredns pod=coredns-78fcdf6894-fgmv6_kube-system(d0ac56e1-9b8a-11e8-a1e4-000d3af78d7a)"
Aug 09 04:20:22 kata-clear kernel: clr: call_modprobe: net-pf-10   2 
Aug 09 04:20:22 kata-clear kernel: clr: call_modprobe: net-pf-10   2 
Aug 09 04:20:23 kata-clear kubelet[21572]: I0809 04:20:23.592259   21572 kuberuntime_manager.go:513] Container {Name:coredns Image:k8s.gcr.io/coredns:1.1.3 Command:[] Args:[-conf /etc/coredns/Corefile] WorkingDir: Ports:[{Name:dns HostPort:0 ContainerPort:53 Protocol:UDP HostIP:} {Name:dns-tcp HostPort:0 ContainerPort:53 Protocol:TCP HostIP:} {Name:metrics HostPort:0 ContainerPort:9153 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[memory:{i:{value:178257920 scale:0} d:{Dec:<nil>} s:170Mi Format:BinarySI}] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:73400320 scale:0} d:{Dec:<nil>} s:70Mi Format:BinarySI}]} VolumeMounts:[{Name:config-volume ReadOnly:true MountPath:/etc/coredns SubPath: MountPropagation:<nil>} {Name:coredns-token-vs7jv ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:8080,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[all],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,} Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Aug 09 04:20:23 kata-clear kubelet[21572]: I0809 04:20:23.592377   21572 kuberuntime_manager.go:757] checking backoff for container "coredns" in pod "coredns-78fcdf6894-fgmv6_kube-system(d0ac56e1-9b8a-11e8-a1e4-000d3af78d7a)"
Aug 09 04:20:23 kata-clear kubelet[21572]: I0809 04:20:23.592549   21572 kuberuntime_manager.go:767] Back-off 10s restarting failed container=coredns pod=coredns-78fcdf6894-fgmv6_kube-system(d0ac56e1-9b8a-11e8-a1e4-000d3af78d7a)
Aug 09 04:20:23 kata-clear kubelet[21572]: E0809 04:20:23.592579   21572 pod_workers.go:186] Error syncing pod d0ac56e1-9b8a-11e8-a1e4-000d3af78d7a ("coredns-78fcdf6894-fgmv6_kube-system(d0ac56e1-9b8a-11e8-a1e4-000d3af78d7a)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 10s restarting failed container=coredns pod=coredns-78fcdf6894-fgmv6_kube-system(d0ac56e1-9b8a-11e8-a1e4-000d3af78d7a)"
Aug 09 04:20:23 kata-clear kubelet[21572]: I0809 04:20:23.887630   21572 kuberuntime_manager.go:513] Container {Name:coredns Image:k8s.gcr.io/coredns:1.1.3 Command:[] Args:[-conf /etc/coredns/Corefile] WorkingDir: Ports:[{Name:dns HostPort:0 ContainerPort:53 Protocol:UDP HostIP:} {Name:dns-tcp HostPort:0 ContainerPort:53 Protocol:TCP HostIP:} {Name:metrics HostPort:0 ContainerPort:9153 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[memory:{i:{value:178257920 scale:0} d:{Dec:<nil>} s:170Mi Format:BinarySI}] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:73400320 scale:0} d:{Dec:<nil>} s:70Mi Format:BinarySI}]} VolumeMounts:[{Name:config-volume ReadOnly:true MountPath:/etc/coredns SubPath: MountPropagation:<nil>} {Name:coredns-token-vs7jv ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:8080,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[all],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,} Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Aug 09 04:20:23 kata-clear kubelet[21572]: I0809 04:20:23.887769   21572 kuberuntime_manager.go:757] checking backoff for container "coredns" in pod "coredns-78fcdf6894-klmq6_kube-system(d0ab4633-9b8a-11e8-a1e4-000d3af78d7a)"
Aug 09 04:20:23 kata-clear kubelet[21572]: I0809 04:20:23.887939   21572 kuberuntime_manager.go:767] Back-off 10s restarting failed container=coredns pod=coredns-78fcdf6894-klmq6_kube-system(d0ab4633-9b8a-11e8-a1e4-000d3af78d7a)
Aug 09 04:20:23 kata-clear kubelet[21572]: E0809 04:20:23.887968   21572 pod_workers.go:186] Error syncing pod d0ab4633-9b8a-11e8-a1e4-000d3af78d7a ("coredns-78fcdf6894-klmq6_kube-system(d0ab4633-9b8a-11e8-a1e4-000d3af78d7a)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 10s restarting failed container=coredns pod=coredns-78fcdf6894-klmq6_kube-system(d0ab4633-9b8a-11e8-a1e4-000d3af78d7a)"
Aug 09 04:20:24 kata-clear kubelet[21572]: I0809 04:20:24.594149   21572 kuberuntime_manager.go:513] Container {Name:coredns Image:k8s.gcr.io/coredns:1.1.3 Command:[] Args:[-conf /etc/coredns/Corefile] WorkingDir: Ports:[{Name:dns HostPort:0 ContainerPort:53 Protocol:UDP HostIP:} {Name:dns-tcp HostPort:0 ContainerPort:53 Protocol:TCP HostIP:} {Name:metrics HostPort:0 ContainerPort:9153 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[memory:{i:{value:178257920 scale:0} d:{Dec:<nil>} s:170Mi Format:BinarySI}] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:73400320 scale:0} d:{Dec:<nil>} s:70Mi Format:BinarySI}]} VolumeMounts:[{Name:config-volume ReadOnly:true MountPath:/etc/coredns SubPath: MountPropagation:<nil>} {Name:coredns-token-vs7jv ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:8080,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[all],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,} Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Aug 09 04:20:24 kata-clear kubelet[21572]: I0809 04:20:24.594311   21572 kuberuntime_manager.go:757] checking backoff for container "coredns" in pod "coredns-78fcdf6894-fgmv6_kube-system(d0ac56e1-9b8a-11e8-a1e4-000d3af78d7a)"
Aug 09 04:20:24 kata-clear kubelet[21572]: I0809 04:20:24.594453   21572 kuberuntime_manager.go:767] Back-off 10s restarting failed container=coredns pod=coredns-78fcdf6894-fgmv6_kube-system(d0ac56e1-9b8a-11e8-a1e4-000d3af78d7a)
Aug 09 04:20:24 kata-clear kubelet[21572]: E0809 04:20:24.594485   21572 pod_workers.go:186] Error syncing pod d0ac56e1-9b8a-11e8-a1e4-000d3af78d7a ("coredns-78fcdf6894-fgmv6_kube-system(d0ac56e1-9b8a-11e8-a1e4-000d3af78d7a)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 10s restarting failed container=coredns pod=coredns-78fcdf6894-fgmv6_kube-system(d0ac56e1-9b8a-11e8-a1e4-000d3af78d7a)"
Aug 09 04:20:25 kata-clear crio[26225]: time="2018-08-09 04:20:25.551148056Z" level=warning msg="unable to get stats for container %s0b8db3fe87f57d5f583e29533c65682bb8e2df8c9a2bcae41264a539d955f0a5"
Aug 09 04:20:25 kata-clear crio[26225]: time="2018-08-09 04:20:25.551249157Z" level=warning msg="unable to get stats for container %s276520d38be689790ef73e5ed3428531704719600b904783e44460f49478ae7c"
Aug 09 04:20:25 kata-clear crio[26225]: time="2018-08-09 04:20:25.565441497Z" level=warning msg="unable to get stats for container %s61a325be338cc3eca28f6961912c2a32743a1f2827115acf545931ef0e2cd4ab"
Aug 09 04:20:25 kata-clear crio[26225]: time="2018-08-09 04:20:25.565582798Z" level=warning msg="unable to get stats for container %s1f555d026ea1dd79fcf14350b0963b6796106196a469f2d7eab304e06e4bfcce"
Aug 09 04:20:25 kata-clear crio[26225]: time="2018-08-09 04:20:25.565698098Z" level=warning msg="unable to get stats for container %se353ab9f923d77714373629230319b3314e73f77ecc9db51fd26a20e4776958e"
Aug 09 04:20:25 kata-clear crio[26225]: time="2018-08-09 04:20:25.565790698Z" level=warning msg="unable to get stats for container %s87854df52a48a4246da0a6f93abb328f5eecb1e57e17d36376a9543de2b9356d"
Aug 09 04:20:28 kata-clear kubelet[21572]: I0809 04:20:28.495723   21572 kuberuntime_manager.go:513] Container {Name:coredns Image:k8s.gcr.io/coredns:1.1.3 Command:[] Args:[-conf /etc/coredns/Corefile] WorkingDir: Ports:[{Name:dns HostPort:0 ContainerPort:53 Protocol:UDP HostIP:} {Name:dns-tcp HostPort:0 ContainerPort:53 Protocol:TCP HostIP:} {Name:metrics HostPort:0 ContainerPort:9153 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[memory:{i:{value:178257920 scale:0} d:{Dec:<nil>} s:170Mi Format:BinarySI}] Requests:map[memory:{i:{value:73400320 scale:0} d:{Dec:<nil>} s:70Mi Format:BinarySI} cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:config-volume ReadOnly:true MountPath:/etc/coredns SubPath: MountPropagation:<nil>} {Name:coredns-token-vs7jv ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:8080,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[all],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,} Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Aug 09 04:20:28 kata-clear kubelet[21572]: I0809 04:20:28.495853   21572 kuberuntime_manager.go:757] checking backoff for container "coredns" in pod "coredns-78fcdf6894-fgmv6_kube-system(d0ac56e1-9b8a-11e8-a1e4-000d3af78d7a)"
Aug 09 04:20:28 kata-clear kubelet[21572]: I0809 04:20:28.496013   21572 kuberuntime_manager.go:767] Back-off 10s restarting failed container=coredns pod=coredns-78fcdf6894-fgmv6_kube-system(d0ac56e1-9b8a-11e8-a1e4-000d3af78d7a)
Aug 09 04:20:28 kata-clear kubelet[21572]: E0809 04:20:28.496050   21572 pod_workers.go:186] Error syncing pod d0ac56e1-9b8a-11e8-a1e4-000d3af78d7a ("coredns-78fcdf6894-fgmv6_kube-system(d0ac56e1-9b8a-11e8-a1e4-000d3af78d7a)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 10s restarting failed container=coredns pod=coredns-78fcdf6894-fgmv6_kube-system(d0ac56e1-9b8a-11e8-a1e4-000d3af78d7a)"
Aug 09 04:20:30 kata-clear crio[26225]: time="2018-08-09 04:20:30.087676443Z" level=info msg="Got pod network {Name:coredns-78fcdf6894-g5697 Namespace:kube-system ID:125b9a21b435e4136136ba2522501151edbf18bea03fb47a82e7b46bc6c8d74d NetNS:/proc/18264/ns/net PortMappings:[]}"
Aug 09 04:20:30 kata-clear crio[26225]: time="2018-08-09 04:20:30.087705943Z" level=info msg="About to del CNI network cbr0 (type=flannel)"
Aug 09 04:20:30 kata-clear kernel: clr: call_modprobe: net-pf-10   2 
Aug 09 04:20:30 kata-clear crio[26225]: time="2018-08-09 04:20:30.289673021Z" level=info msg="Got pod network {Name:coredns-78fcdf6894-g5697 Namespace:kube-system ID:a18a64985a07e07114f9939fab5ec7f7d51d35da0a3951f49fe0c864666701e5 NetNS:/proc/19703/ns/net PortMappings:[]}"
Aug 09 04:20:30 kata-clear crio[26225]: time="2018-08-09 04:20:30.289745122Z" level=info msg="About to del CNI network cbr0 (type=flannel)"
Aug 09 04:20:30 kata-clear kernel: clr: call_modprobe: net-pf-10   2 
Aug 09 04:20:30 kata-clear crio[26225]: time="2018-08-09 04:20:30.314793993Z" level=error msg="Error deleting network: failed to Statfs "/proc/19703/ns/net": no such file or directory"
Aug 09 04:20:30 kata-clear crio[26225]: time="2018-08-09 04:20:30.314821793Z" level=warning msg="failed to destroy network for pod sandbox k8s_coredns-78fcdf6894-g5697_kube-system_fdd7de6a-9b89-11e8-bf4d-000d3af78d7a_1(a18a64985a07e07114f9939fab5ec7f7d51d35da0a3951f49fe0c864666701e5): failed to Statfs "/proc/19703/ns/net": no such file or directory"
Aug 09 04:20:30 kata-clear crio[26225]: time="2018-08-09 04:20:30.665656998Z" level=info msg="Got pod network {Name:coredns-78fcdf6894-lhd72 Namespace:kube-system ID:a97fbfec6d9585b58615eb780054c98c4f32ab27f7f698f5dc41542b735209ce NetNS:/proc/18382/ns/net PortMappings:[]}"
Aug 09 04:20:30 kata-clear crio[26225]: time="2018-08-09 04:20:30.665718598Z" level=info msg="About to del CNI network cbr0 (type=flannel)"
Aug 09 04:20:30 kata-clear kernel: clr: call_modprobe: net-pf-10   2 
Aug 09 04:20:30 kata-clear crio[26225]: time="2018-08-09 04:20:30.848473521Z" level=info msg="Got pod network {Name:coredns-78fcdf6894-lhd72 Namespace:kube-system ID:4ca8055b049270e3f86c072beedaba3b6703b89176436678127c798b01c0a2f9 NetNS:/proc/19721/ns/net PortMappings:[]}"
Aug 09 04:20:30 kata-clear crio[26225]: time="2018-08-09 04:20:30.848504221Z" level=info msg="About to del CNI network cbr0 (type=flannel)"
Aug 09 04:20:30 kata-clear kernel: clr: call_modprobe: net-pf-10   2 
Aug 09 04:20:32 kata-clear kernel: clr: call_modprobe: net-pf-10   2 
Aug 09 04:20:32 kata-clear kernel: clr: call_modprobe: net-pf-10   2 
Aug 09 04:20:35 kata-clear crio[26225]: time="2018-08-09 04:20:35.640515839Z" level=warning msg="unable to get stats for container %s0b8db3fe87f57d5f583e29533c65682bb8e2df8c9a2bcae41264a539d955f0a5"
Aug 09 04:20:35 kata-clear crio[26225]: time="2018-08-09 04:20:35.640653640Z" level=warning msg="unable to get stats for container %s276520d38be689790ef73e5ed3428531704719600b904783e44460f49478ae7c"
Aug 09 04:20:40 kata-clear kubelet[21572]: I0809 04:20:40.051153   21572 kuberuntime_manager.go:513] Container {Name:coredns Image:k8s.gcr.io/coredns:1.1.3 Command:[] Args:[-conf /etc/coredns/Corefile] WorkingDir: Ports:[{Name:dns HostPort:0 ContainerPort:53 Protocol:UDP HostIP:} {Name:dns-tcp HostPort:0 ContainerPort:53 Protocol:TCP HostIP:} {Name:metrics HostPort:0 ContainerPort:9153 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[memory:{i:{value:178257920 scale:0} d:{Dec:<nil>} s:170Mi Format:BinarySI}] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:73400320 scale:0} d:{Dec:<nil>} s:70Mi Format:BinarySI}]} VolumeMounts:[{Name:config-volume ReadOnly:true MountPath:/etc/coredns SubPath: MountPropagation:<nil>} {Name:coredns-token-vs7jv ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:8080,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[all],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,} Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Aug 09 04:20:40 kata-clear kubelet[21572]: I0809 04:20:40.051331   21572 kuberuntime_manager.go:757] checking backoff for container "coredns" in pod "coredns-78fcdf6894-klmq6_kube-system(d0ab4633-9b8a-11e8-a1e4-000d3af78d7a)"
Aug 09 04:20:40 kata-clear kubelet[21572]: I0809 04:20:40.051794   21572 kuberuntime_manager.go:513] Container {Name:coredns Image:k8s.gcr.io/coredns:1.1.3 Command:[] Args:[-conf /etc/coredns/Corefile] WorkingDir: Ports:[{Name:dns HostPort:0 ContainerPort:53 Protocol:UDP HostIP:} {Name:dns-tcp HostPort:0 ContainerPort:53 Protocol:TCP HostIP:} {Name:metrics HostPort:0 ContainerPort:9153 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[memory:{i:{value:178257920 scale:0} d:{Dec:<nil>} s:170Mi Format:BinarySI}] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:73400320 scale:0} d:{Dec:<nil>} s:70Mi Format:BinarySI}]} VolumeMounts:[{Name:config-volume ReadOnly:true MountPath:/etc/coredns SubPath: MountPropagation:<nil>} {Name:coredns-token-vs7jv ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:8080,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[all],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,} Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Aug 09 04:20:40 kata-clear kubelet[21572]: I0809 04:20:40.051871   21572 kuberuntime_manager.go:757] checking backoff for container "coredns" in pod "coredns-78fcdf6894-fgmv6_kube-system(d0ac56e1-9b8a-11e8-a1e4-000d3af78d7a)"
Aug 09 04:20:40 kata-clear crio[26225]: time="2018-08-09 04:20:40.065511207Z" level=warning msg="requested logPath for ctr id 5dd86a05a2b9d7bb118e7b16dceb459be1581306ff946e1452fdd5576e916e79 is a relative path: coredns/3.log"
Aug 09 04:20:40 kata-clear crio[26225]: time="2018-08-09 04:20:40.065543107Z" level=warning msg="logPath from relative path is now absolute: /var/log/pods/d0ac56e1-9b8a-11e8-a1e4-000d3af78d7a/coredns/3.log"
Aug 09 04:20:40 kata-clear crio[26225]: time="2018-08-09 04:20:40.066067708Z" level=warning msg="requested logPath for ctr id d68d1c12cb2ccfcec83098827669d0ff15f5d28b882b15931110024f7d9c9269 is a relative path: coredns/3.log"
Aug 09 04:20:40 kata-clear crio[26225]: time="2018-08-09 04:20:40.066083808Z" level=warning msg="logPath from relative path is now absolute: /var/log/pods/d0ab4633-9b8a-11e8-a1e4-000d3af78d7a/coredns/3.log"
Aug 09 04:20:40 kata-clear crio[26225]: time="2018-08-09 04:20:40.283118930Z" level=warning msg="file "/etc/containers/mounts.conf" not found, skipping..."
Aug 09 04:20:40 kata-clear crio[26225]: time="2018-08-09 04:20:40.283164030Z" level=warning msg="file "/usr/share/containers/mounts.conf" not found, skipping..."
Aug 09 04:20:40 kata-clear conmon[27159]: conmon 5dd86a05a2b9d7bb118e <ninfo>: container PID: 27170
Aug 09 04:20:40 kata-clear conmon[27159]: conmon 5dd86a05a2b9d7bb118e <ninfo>: attach sock path: /var/run/crio/5dd86a05a2b9d7bb118e7b16dceb459be1581306ff946e1452fdd5576e916e79/attach
Aug 09 04:20:40 kata-clear conmon[27159]: conmon 5dd86a05a2b9d7bb118e <ninfo>: addr{sun_family=AF_UNIX, sun_path=/var/run/crio/5dd86a05a2b9d7bb118e7b16dceb459be1581306ff946e1452fdd5576e916e79/attach}
Aug 09 04:20:40 kata-clear conmon[27159]: conmon 5dd86a05a2b9d7bb118e <ninfo>: ctl fifo path: /var/run/containers/storage/overlay-containers/5dd86a05a2b9d7bb118e7b16dceb459be1581306ff946e1452fdd5576e916e79/userdata/ctl
Aug 09 04:20:40 kata-clear conmon[27159]: conmon 5dd86a05a2b9d7bb118e <ninfo>: terminal_ctrl_fd: 15
Aug 09 04:20:40 kata-clear kernel: clr: call_modprobe: net-pf-10   2 
Aug 09 04:20:40 kata-clear kernel: clr: call_modprobe: net-pf-10   2 
Aug 09 04:20:40 kata-clear crio[26225]: time="2018-08-09 04:20:40.482090399Z" level=warning msg="file "/etc/containers/mounts.conf" not found, skipping..."
Aug 09 04:20:40 kata-clear crio[26225]: time="2018-08-09 04:20:40.482118399Z" level=warning msg="file "/usr/share/containers/mounts.conf" not found, skipping..."
Aug 09 04:20:40 kata-clear conmon[27202]: conmon d68d1c12cb2ccfcec830 <ninfo>: container PID: 27213
Aug 09 04:20:40 kata-clear conmon[27202]: conmon d68d1c12cb2ccfcec830 <ninfo>: attach sock path: /var/run/crio/d68d1c12cb2ccfcec83098827669d0ff15f5d28b882b15931110024f7d9c9269/attach
Aug 09 04:20:40 kata-clear conmon[27202]: conmon d68d1c12cb2ccfcec830 <ninfo>: addr{sun_family=AF_UNIX, sun_path=/var/run/crio/d68d1c12cb2ccfcec83098827669d0ff15f5d28b882b15931110024f7d9c9269/attach}
Aug 09 04:20:40 kata-clear conmon[27202]: conmon d68d1c12cb2ccfcec830 <ninfo>: ctl fifo path: /var/run/containers/storage/overlay-containers/d68d1c12cb2ccfcec83098827669d0ff15f5d28b882b15931110024f7d9c9269/userdata/ctl
Aug 09 04:20:40 kata-clear conmon[27202]: conmon d68d1c12cb2ccfcec830 <ninfo>: terminal_ctrl_fd: 15
Aug 09 04:20:40 kata-clear kernel: clr: call_modprobe: net-pf-10   2 
Aug 09 04:20:40 kata-clear kernel: clr: call_modprobe: net-pf-10   2 
Aug 09 04:20:42 kata-clear systemd[1]: Started OpenSSH per-connection server daemon (86.42.91.227:20232).
Aug 09 04:20:42 kata-clear kernel: clr: call_modprobe: net-pf-10   2 
Aug 09 04:20:42 kata-clear sshd[27284]: error: Could not load host key: /etc/ssh/ssh_host_dsa_key
Aug 09 04:20:42 kata-clear sshd[27284]: error: Could not load host key: /etc/ssh/ssh_host_ecdsa_key
Aug 09 04:20:42 kata-clear sshd[27284]: error: Could not load host key: /etc/ssh/ssh_host_ed25519_key
Aug 09 04:20:42 kata-clear kernel: clr: call_modprobe: net-pf-10   2 
Aug 09 04:20:42 kata-clear kernel: clr: call_modprobe: net-pf-10   2 
Aug 09 04:20:45 kata-clear crio[26225]: time="2018-08-09 04:20:45.730390324Z" level=warning msg="unable to get stats for container %s0b8db3fe87f57d5f583e29533c65682bb8e2df8c9a2bcae41264a539d955f0a5"
Aug 09 04:20:45 kata-clear crio[26225]: time="2018-08-09 04:20:45.730519824Z" level=warning msg="unable to get stats for container %s276520d38be689790ef73e5ed3428531704719600b904783e44460f49478ae7c"
Aug 09 04:20:52 kata-clear kernel: clr: call_modprobe: net-pf-10   2 
Aug 09 04:20:52 kata-clear kernel: clr: call_modprobe: net-pf-10   2 
Aug 09 04:20:55 kata-clear crio[26225]: time="2018-08-09 04:20:55.887861201Z" level=warning msg="unable to get stats for container %s0b8db3fe87f57d5f583e29533c65682bb8e2df8c9a2bcae41264a539d955f0a5"
Aug 09 04:20:55 kata-clear crio[26225]: time="2018-08-09 04:20:55.888034302Z" level=warning msg="unable to get stats for container %s276520d38be689790ef73e5ed3428531704719600b904783e44460f49478ae7c"
Aug 09 04:20:57 kata-clear sshd[27284]: Connection closed by 86.42.91.227 port 20232 [preauth]
Aug 09 04:21:02 kata-clear kernel: clr: call_modprobe: net-pf-10   2 
Aug 09 04:21:02 kata-clear kernel: clr: call_modprobe: net-pf-10   2 
Aug 09 04:21:05 kata-clear crio[26225]: time="2018-08-09 04:21:05.973194772Z" level=warning msg="unable to get stats for container %s0b8db3fe87f57d5f583e29533c65682bb8e2df8c9a2bcae41264a539d955f0a5"
Aug 09 04:21:05 kata-clear crio[26225]: time="2018-08-09 04:21:05.973320373Z" level=warning msg="unable to get stats for container %s276520d38be689790ef73e5ed3428531704719600b904783e44460f49478ae7c"
Aug 09 04:21:12 kata-clear kernel: clr: call_modprobe: net-pf-10   2 
Aug 09 04:21:12 kata-clear kernel: clr: call_modprobe: net-pf-10   2 
Aug 09 04:21:16 kata-clear crio[26225]: time="2018-08-09 04:21:16.072042982Z" level=warning msg="unable to get stats for container %s0b8db3fe87f57d5f583e29533c65682bb8e2df8c9a2bcae41264a539d955f0a5"
Aug 09 04:21:16 kata-clear crio[26225]: time="2018-08-09 04:21:16.072169583Z" level=warning msg="unable to get stats for container %s276520d38be689790ef73e5ed3428531704719600b904783e44460f49478ae7c"
Aug 09 04:21:18 kata-clear sudo[27661]:     kata : TTY=pts/1 ; PWD=/home/kata ; USER=root ; COMMAND=/usr/bin/journalctl
Aug 09 04:21:18 kata-clear sudo[27661]: pam_unix(sudo:session): session opened for user root by (uid=0)