
@smarterclayton
Created January 26, 2018 15:45
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x393dad8]

goroutine 200 [running]:
github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/cm.(*qosContainerManagerImpl).setCPUCgroupConfig(0xc4202dd780, 0xc420f0f170, 0x4be4867, 0xa)
	/data/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/cm/qos_container_manager_linux.go:176 +0x58
github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/cm.(*qosContainerManagerImpl).UpdateCgroups(0xc4202dd780, 0x0, 0x0)
	/data/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/cm/qos_container_manager_linux.go:292 +0x254
github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/cm.(*containerManagerImpl).UpdateQOSCgroups(0xc420302240, 0xc42108ce00, 0xc420744a00)
	/data/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/cm/container_manager_linux.go:511 +0x3a
github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet.(*Kubelet).syncPod(0xc421064000, 0x0, 0xc42108ce00, 0x1, 0xc420316fc0, 0x0, 0xb, 0xc420316fc0)
	/data/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:1573 +0x1dde
github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet.(*Kubelet).runPod(0xc421064000, 0xc42108ce00, 0x3b9aca00, 0x0, 0x0)
	/data/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/runonce.go:129 +0x49d
github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet.(*Kubelet).runOnce.func1(0xc421064000, 0x3b9aca00, 0xc420124660, 0xc42108ce00)
	/data/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/runonce.go:83 +0x3f
created by github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet.(*Kubelet).runOnce
	/data/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/runonce.go:82 +0x1df
F0126 15:44:31.135601 10966 start_node.go:159] exit status 2
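The panic fires inside `setCPUCgroupConfig` when `UpdateQOSCgroups` is driven from the run-once (`--runonce=true`) sync path. One plausible reading (an assumption from the trace, not confirmed here) is that a callback field on the QoS container manager, normally wired up by the container manager's `Start` path, is still nil when run-once mode reaches `syncPod`, so invoking it dereferences a nil function value. A minimal standalone sketch of that failure class and a defensive guard (all names below are illustrative, not the actual kubelet types):

```go
package main

import (
	"errors"
	"fmt"
)

// qosManager mimics a manager whose activePods callback is only
// wired up in a Start method that a run-once code path may skip.
type qosManager struct {
	activePods func() []string // nil until start is called
}

// updateCgroups would SIGSEGV on m.activePods() if the field were
// still nil; the guard turns that crash into a returned error.
func (m *qosManager) updateCgroups() ([]string, error) {
	if m.activePods == nil {
		return nil, errors.New("qos manager not started: activePods is nil")
	}
	return m.activePods(), nil
}

// start wires up the callback, as the normal kubelet startup path would.
func (m *qosManager) start(pods func() []string) {
	m.activePods = pods
}

func main() {
	m := &qosManager{}
	if _, err := m.updateCgroups(); err != nil {
		fmt.Println("before start:", err)
	}
	m.start(func() []string { return []string{"openshift-master-api"} })
	pods, _ := m.updateCgroups()
	fmt.Println("after start:", pods)
}
```

Calling `m.activePods()` without the guard would reproduce the `invalid memory address or nil pointer dereference` seen above, since a nil `func` value cannot be invoked.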
@smarterclayton

# OPENSHIFT_ALLOW_UNSUPPORTED_KUBELET=kubelet openshift start node --enable=kubelet --bootstrap-config-name=node-config --kubeconfig /tmp/config/master/admin.kubeconfig  --config=/tmp/config/node-localhost.localdomain/node-config.yaml --loglevel=6
I0126 15:44:30.045748   10966 loader.go:357] Config loaded from file /tmp/config/node-localhost.localdomain/node.kubeconfig
I0126 15:44:30.057594   10966 round_trippers.go:436] GET https://10.0.2.15:8443/api/v1/namespaces/default/endpoints/kubernetes 200 OK in 10 milliseconds
I0126 15:44:30.057837   10966 start_node.go:265] Invoking run-once mode to launch static pods from /tmp/config/node-localhost.localdomain/static-pods
I0126 15:44:30.057848   10966 start_node.go:266] =========================
I0126 15:44:30.057854   10966 start_node.go:267] run-once kubelet args: [--register-node=true --healthz-port=0 --file-check-frequency=0s --pods-per-core=10 --authentication-token-webhook-cache-ttl=5m --hostname-override=localhost.localdomain --cadvisor-port=0 --host-network-sources=api --host-network-sources=file --tls-cert-file=/tmp/config/node-localhost.localdomain/server.crt --tls-private-key-file=/tmp/config/node-localhost.localdomain/server.key --allow-privileged=true --cluster-dns=10.0.2.15 --container-runtime-endpoint=/var/run/dockershim.sock --containerized=false --runonce=true --authorization-mode=AlwaysAllow --fail-swap-on=false --address=127.0.0.1 --cluster-domain=cluster.local --host-ipc-sources=api --host-ipc-sources=file --pod-infra-container-image=openshift/origin-pod:v3.9.0-alpha.3 --max-pods=250 --pod-manifest-path=/tmp/config/node-localhost.localdomain/static-pods --root-dir=/data/src/github.com/openshift/origin/openshift.local.volumes --http-check-frequency=0s --cgroup-driver=systemd --tls-min-version=VersionTLS12 --port=10250 --image-service-endpoint=/var/run/dockershim.sock --experimental-dockershim-root-directory=/var/lib/dockershim --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305 --tls-cipher-suites=TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256 --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA --tls-cipher-suites=TLS_RSA_WITH_AES_128_GCM_SHA256 --tls-cipher-suites=TLS_RSA_WITH_AES_256_GCM_SHA384 
--tls-cipher-suites=TLS_RSA_WITH_AES_128_CBC_SHA --tls-cipher-suites=TLS_RSA_WITH_AES_256_CBC_SHA --node-ip= --read-only-port=0 --host-pid-sources=api --host-pid-sources=file --authorization-webhook-cache-authorized-ttl=5m --authorization-webhook-cache-unauthorized-ttl=5m --healthz-bind-address= --anonymous-auth=true --client-ca-file=/tmp/config/node-localhost.localdomain/node-client-ca.crt --network-plugin=]
W0126 15:44:30.057971   10966 start_node.go:468] UNSUPPORTED: Executing a different Kubelet than the current binary is not supported: /data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubelet
I0126 15:44:30.705656   10979 server.go:236] Version: v1.9.1+a0ce1bc657
I0126 15:44:30.705885   10979 feature_gate.go:220] feature gates: &{{} map[]}
I0126 15:44:30.722396   10979 mount_linux.go:202] Detected OS with systemd
I0126 15:44:30.737506   10979 plugins.go:101] No cloud provider specified.
I0126 15:44:30.737525   10979 server.go:357] No cloud provider specified: "" from the config file: ""
W0126 15:44:30.737535   10979 server.go:382] standalone mode, no API client
I0126 15:44:30.737855   10979 manager.go:151] cAdvisor running in container: "/sys/fs/cgroup/cpu,cpuacct"
I0126 15:44:30.894043   10979 manager.go:163] Rkt not connected: rkt: cannot tcp Dial rkt api service: dial tcp [::1]:15441: getsockopt: connection refused
I0126 15:44:30.894241   10979 manager.go:174] CRI-O not connected: Get http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info: dial unix /var/run/crio/crio.sock: connect: no such file or directory
I0126 15:44:30.952682   10979 fs.go:140] Filesystem UUIDs: map[2018-01-15-13-59-52-30:/dev/sr0 4537d533-47ff-463c-bffc-7ce294d9c93a:/dev/dm-1 598bbfb9-027e-4f52-a5b3-c4d3d1fbc2b8:/dev/dm-0 8ffa0ee9-e1a8-4c03-acce-b65b342c6935:/dev/sda2]
I0126 15:44:30.952718   10979 fs.go:141] Filesystem partitions: map[tmpfs:{mountpoint:/dev/shm major:0 minor:17 fsType:tmpfs blockSize:0} /dev/mapper/VolGroup00-LogVol00:{mountpoint:/ major:253 minor:0 fsType:xfs blockSize:0} /dev/sda2:{mountpoint:/boot major:8 minor:2 fsType:xfs blockSize:0}]
I0126 15:44:30.957770   10979 manager.go:225] Machine: {NumCores:6 CpuFrequency:2593994 MemoryCapacity:6971113472 HugePages:[{PageSize:2048 NumPages:0}] MachineID:7a83cd34d0c94d68a635ce09817faa36 SystemUUID:7A83CD34-D0C9-4D68-A635-CE09817FAA36 BootID:6dd2e60d-0988-4d42-bcd4-ce04b8cb9f9e Filesystems:[{Device:tmpfs DeviceMajor:0 DeviceMinor:17 Capacity:3485556736 Type:vfs Inodes:850966 HasInodes:true} {Device:/dev/mapper/VolGroup00-LogVol00 DeviceMajor:253 DeviceMinor:0 Capacity:40212119552 Type:vfs Inodes:19644416 HasInodes:true} {Device:/dev/sda2 DeviceMajor:8 DeviceMinor:2 Capacity:1063256064 Type:vfs Inodes:524288 HasInodes:true}] DiskMap:map[253:0:{Name:dm-0 Major:253 Minor:0 Size:40231763968 Scheduler:none} 253:1:{Name:dm-1 Major:253 Minor:1 Size:1610612736 Scheduler:none} 8:0:{Name:sda Major:8 Minor:0 Size:42949672960 Scheduler:cfq}] NetworkDevices:[{Name:eth0 MacAddress:52:54:00:ca:e4:8b Speed:1000 Mtu:1500} {Name:eth1 MacAddress:08:00:27:d7:77:24 Speed:1000 Mtu:1500}] Topology:[{Id:0 Memory:7339565056 Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2} {Size:6291456 Type:Unified Level:3} {Size:134217728 Type:Unified Level:4}]} {Id:1 Threads:[1] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2} {Size:6291456 Type:Unified Level:3} {Size:134217728 Type:Unified Level:4}]} {Id:2 Threads:[2] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2} {Size:6291456 Type:Unified Level:3} {Size:134217728 Type:Unified Level:4}]} {Id:3 Threads:[3] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2} {Size:6291456 Type:Unified Level:3} {Size:134217728 Type:Unified Level:4}]} {Id:4 Threads:[4] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2} {Size:6291456 
Type:Unified Level:3} {Size:134217728 Type:Unified Level:4}]} {Id:5 Threads:[5] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2} {Size:6291456 Type:Unified Level:3} {Size:134217728 Type:Unified Level:4}]}] Caches:[]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
I0126 15:44:30.959533   10979 manager.go:231] Version: {KernelVersion:3.10.0-693.11.6.el7.x86_64 ContainerOsVersion:CentOS Linux 7 (Core) DockerVersion:1.12.6 DockerAPIVersion:1.24 CadvisorVersion: CadvisorRevision:}
W0126 15:44:30.960478   10979 server.go:290] No api server defined - no events will be sent to API server.
I0126 15:44:30.960493   10979 server.go:482] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
I0126 15:44:30.960939   10979 container_manager_linux.go:242] container manager verified user specified cgroup-root exists: /
I0126 15:44:30.960958   10979 container_manager_linux.go:247] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/data/src/github.com/openshift/origin/openshift.local.volumes ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>}]} ExperimentalQOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s}
I0126 15:44:30.961108   10979 container_manager_linux.go:266] Creating device plugin manager: false
I0126 15:44:30.961138   10979 oom_linux.go:65] attempting to set "/proc/self/oom_score_adj" to "-999"
I0126 15:44:30.961183   10979 server.go:756] Using root directory: /data/src/github.com/openshift/origin/openshift.local.volumes
I0126 15:44:30.961212   10979 kubelet.go:290] Adding manifest path: /tmp/config/node-localhost.localdomain/static-pods
I0126 15:44:30.961250   10979 file.go:52] Watching path "/tmp/config/node-localhost.localdomain/static-pods"
I0126 15:44:30.961412   10979 file.go:161] Reading manifest file "/tmp/config/node-localhost.localdomain/static-pods/api.yaml"
W0126 15:44:30.964558   10979 kubelet_network.go:139] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
I0126 15:44:30.964588   10979 kubelet.go:571] Hairpin mode set to "hairpin-veth"
I0126 15:44:30.967169   10979 common.go:61] Generated UID "1e3bacc586083520e66d5d36269951da" pod "openshift-master-api" from /tmp/config/node-localhost.localdomain/static-pods/api.yaml
I0126 15:44:30.967224   10979 common.go:65] Generated Name "openshift-master-api-localhost.localdomain" for UID "1e3bacc586083520e66d5d36269951da" from URL /tmp/config/node-localhost.localdomain/static-pods/api.yaml
I0126 15:44:30.967234   10979 common.go:70] Using namespace "kube-system" for pod "openshift-master-api-localhost.localdomain" from /tmp/config/node-localhost.localdomain/static-pods/api.yaml
I0126 15:44:30.967571   10979 config.go:297] Setting pods for source file
I0126 15:44:30.967614   10979 config.go:405] Receiving a new pod "openshift-master-api-localhost.localdomain_kube-system(1e3bacc586083520e66d5d36269951da)"
I0126 15:44:30.994027   10979 client.go:80] Connecting to docker on unix:///var/run/docker.sock
I0126 15:44:30.994080   10979 client.go:109] Start docker client with request timeout=2m0s
I0126 15:44:31.003102   10979 docker_service.go:232] Docker cri networking managed by kubernetes.io/no-op
I0126 15:44:31.051825   10979 docker_service.go:237] Docker Info: &{ID:EZRT:3QAS:NKRC:E6SM:MKXH:A2EW:3XWN:RCLX:CBDH:BKEB:VHFI:VQKM Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:168 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[host bridge null overlay] Authorization:[] Log:[]} MemoryLimit:true SwapLimit:true KernelMemory:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:16 OomKillDisable:true NGoroutines:26 SystemTime:2018-01-26T15:44:31.051133801Z LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:3.10.0-693.11.6.el7.x86_64 OperatingSystem:CentOS Linux 7 (Core) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc420924930 NCPU:6 MemTotal:6971113472 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:localhost.localdomain Labels:[] ExperimentalBuild:false ServerVersion:1.12.6 ClusterStore: ClusterAdvertise: Runtimes:map[docker-runc:{Path:/usr/libexec/docker/docker-runc-current Args:[]} runc:{Path:docker-runc Args:[]}] DefaultRuntime:docker-runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:0xc4206a77c0} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[seccomp selinux]}
I0126 15:44:31.051931   10979 docker_service.go:250] Setting cgroupDriver to systemd
I0126 15:44:31.051984   10979 kubelet.go:645] RemoteRuntimeEndpoint: "/var/run/dockershim.sock", RemoteImageEndpoint: "/var/run/dockershim.sock"
I0126 15:44:31.052101   10979 kubelet.go:648] Starting the GRPC server for the docker CRI shim.
I0126 15:44:31.052136   10979 docker_server.go:51] Start dockershim grpc server
W0126 15:44:31.052192   10979 util_unix.go:75] Using "/var/run/dockershim.sock" as endpoint is deprecated, please consider using full url format "unix:///var/run/dockershim.sock".
I0126 15:44:31.057760   10979 container_manager_linux.go:756] attempting to apply oom_score_adj of -999 to pid 1077
I0126 15:44:31.057782   10979 oom_linux.go:65] attempting to set "/proc/1077/oom_score_adj" to "-999"
I0126 15:44:31.057952   10979 container_manager_linux.go:756] attempting to apply oom_score_adj of -999 to pid 1095
I0126 15:44:31.057961   10979 oom_linux.go:65] attempting to set "/proc/1095/oom_score_adj" to "-999"
I0126 15:44:31.107513   10979 remote_runtime.go:43] Connecting to runtime service /var/run/dockershim.sock
W0126 15:44:31.107547   10979 util_unix.go:75] Using "/var/run/dockershim.sock" as endpoint is deprecated, please consider using full url format "unix:///var/run/dockershim.sock".
I0126 15:44:31.107664   10979 remote_image.go:40] Connecting to image service /var/run/dockershim.sock
W0126 15:44:31.107679   10979 util_unix.go:75] Using "/var/run/dockershim.sock" as endpoint is deprecated, please consider using full url format "unix:///var/run/dockershim.sock".
I0126 15:44:31.107760   10979 plugins.go:56] Registering credential provider: .dockercfg
I0126 15:44:31.107780   10979 azure_credentials.go:80] Azure config unspecified, disabling
I0126 15:44:31.110155   10979 kuberuntime_manager.go:186] Container runtime docker initialized, version: 1.12.6, apiVersion: 1.24.0
I0126 15:44:31.113204   10979 plugins.go:453] Loaded volume plugin "kubernetes.io/aws-ebs"
I0126 15:44:31.113226   10979 plugins.go:453] Loaded volume plugin "kubernetes.io/empty-dir"
I0126 15:44:31.113242   10979 plugins.go:453] Loaded volume plugin "kubernetes.io/gce-pd"
I0126 15:44:31.113273   10979 plugins.go:453] Loaded volume plugin "kubernetes.io/git-repo"
I0126 15:44:31.113289   10979 plugins.go:453] Loaded volume plugin "kubernetes.io/host-path"
I0126 15:44:31.113303   10979 plugins.go:453] Loaded volume plugin "kubernetes.io/nfs"
I0126 15:44:31.113321   10979 plugins.go:453] Loaded volume plugin "kubernetes.io/secret"
I0126 15:44:31.113335   10979 plugins.go:453] Loaded volume plugin "kubernetes.io/iscsi"
I0126 15:44:31.113352   10979 plugins.go:453] Loaded volume plugin "kubernetes.io/glusterfs"
I0126 15:44:31.113366   10979 plugins.go:453] Loaded volume plugin "kubernetes.io/rbd"
I0126 15:44:31.113382   10979 plugins.go:453] Loaded volume plugin "kubernetes.io/cinder"
I0126 15:44:31.113397   10979 plugins.go:453] Loaded volume plugin "kubernetes.io/quobyte"
I0126 15:44:31.113409   10979 plugins.go:453] Loaded volume plugin "kubernetes.io/cephfs"
I0126 15:44:31.113428   10979 plugins.go:453] Loaded volume plugin "kubernetes.io/downward-api"
I0126 15:44:31.113443   10979 plugins.go:453] Loaded volume plugin "kubernetes.io/fc"
I0126 15:44:31.113457   10979 plugins.go:453] Loaded volume plugin "kubernetes.io/flocker"
I0126 15:44:31.113469   10979 plugins.go:453] Loaded volume plugin "kubernetes.io/azure-file"
I0126 15:44:31.113481   10979 plugins.go:453] Loaded volume plugin "kubernetes.io/configmap"
I0126 15:44:31.113496   10979 plugins.go:453] Loaded volume plugin "kubernetes.io/vsphere-volume"
I0126 15:44:31.113510   10979 plugins.go:453] Loaded volume plugin "kubernetes.io/azure-disk"
I0126 15:44:31.113525   10979 plugins.go:453] Loaded volume plugin "kubernetes.io/photon-pd"
I0126 15:44:31.113537   10979 plugins.go:453] Loaded volume plugin "kubernetes.io/projected"
I0126 15:44:31.113549   10979 plugins.go:453] Loaded volume plugin "kubernetes.io/portworx-volume"
I0126 15:44:31.113565   10979 plugins.go:453] Loaded volume plugin "kubernetes.io/scaleio"
I0126 15:44:31.115321   10979 plugins.go:453] Loaded volume plugin "kubernetes.io/local-volume"
I0126 15:44:31.115426   10979 plugins.go:453] Loaded volume plugin "kubernetes.io/storageos"
I0126 15:44:31.117505   10979 mount_linux.go:647] Directory /data/src/github.com/openshift/origin/openshift.local.volumes is already on a shared mount
I0126 15:44:31.117610   10979 runonce.go:60] processing manifest with 1 pods
I0126 15:44:31.117655   10979 kubelet_node_status.go:296] Setting node annotation to enable volume controller attach/detach
I0126 15:44:31.117701   10979 server.go:285] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost.localdomain", UID:"localhost.localdomain", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'Starting' Starting kubelet.
I0126 15:44:31.117791   10979 config.go:99] Looking for [api file], have seen map[]
E0126 15:44:31.118414   10979 kubelet.go:1275] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data for container /
I0126 15:44:31.119374   10979 interface.go:360] Looking for default routes with IPv4 addresses
I0126 15:44:31.119388   10979 interface.go:365] Default route transits interface "eth0"
I0126 15:44:31.119511   10979 interface.go:174] Interface eth0 is up
I0126 15:44:31.119676   10979 interface.go:222] Interface "eth0" has 2 addresses :[10.0.2.15/24 fe80::5054:ff:feca:e48b/64].
I0126 15:44:31.119706   10979 interface.go:189] Checking addr  10.0.2.15/24.
I0126 15:44:31.119719   10979 interface.go:196] IP found 10.0.2.15
I0126 15:44:31.119748   10979 interface.go:228] Found valid IPv4 address 10.0.2.15 for interface "eth0".
I0126 15:44:31.120835   10979 interface.go:371] Found active IP 10.0.2.15
I0126 15:44:31.123012   10979 kubelet.go:1263] Container garbage collection succeeded
I0126 15:44:31.123402   10979 kubelet_node_status.go:454] Recording NodeHasSufficientDisk event message for node localhost.localdomain
I0126 15:44:31.123436   10979 kubelet_node_status.go:454] Recording NodeHasSufficientMemory event message for node localhost.localdomain
I0126 15:44:31.123453   10979 kubelet_node_status.go:454] Recording NodeHasNoDiskPressure event message for node localhost.localdomain
I0126 15:44:31.123481   10979 server.go:285] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost.localdomain", UID:"localhost.localdomain", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientDisk' Node localhost.localdomain status is now: NodeHasSufficientDisk
I0126 15:44:31.123509   10979 runonce.go:88] Waiting for 1 pods
I0126 15:44:31.123511   10979 server.go:285] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost.localdomain", UID:"localhost.localdomain", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node localhost.localdomain status is now: NodeHasSufficientMemory
I0126 15:44:31.123550   10979 server.go:285] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost.localdomain", UID:"localhost.localdomain", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node localhost.localdomain status is now: NodeHasNoDiskPressure
I0126 15:44:31.124649   10979 kuberuntime_manager.go:853] getSandboxIDByPodUID got sandbox IDs [] for pod "openshift-master-api-localhost.localdomain_kube-system(1e3bacc586083520e66d5d36269951da)"
I0126 15:44:31.125724   10979 runonce.go:153] Container "api" for pod "openshift-master-api-localhost.localdomain_kube-system(1e3bacc586083520e66d5d36269951da)" not running
I0126 15:44:31.125739   10979 runonce.go:122] pod "openshift-master-api-localhost.localdomain_kube-system(1e3bacc586083520e66d5d36269951da)" containers not running: syncing
I0126 15:44:31.125748   10979 runonce.go:124] Creating a mirror pod for static pod "openshift-master-api-localhost.localdomain_kube-system(1e3bacc586083520e66d5d36269951da)"
I0126 15:44:31.125779   10979 kubelet_pods.go:1349] Generating status for "openshift-master-api-localhost.localdomain_kube-system(1e3bacc586083520e66d5d36269951da)"
I0126 15:44:31.125843   10979 kubelet_pods.go:1314] pod waiting > 0, pending
I0126 15:44:31.125915   10979 status_manager.go:367] Status Manager: adding pod: "1e3bacc586083520e66d5d36269951da", with status: ('\x01', {Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-01-26 15:44:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-01-26 15:44:31 +0000 UTC ContainersNotReady containers with unready status: [api]}]     2018-01-26 15:44:31 +0000 UTC [] [{api {&ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 openshift/origin:v3.9.0-alpha.3  }] BestEffort}) to podStatusChannel
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x393dad8]

goroutine 200 [running]:
github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/cm.(*qosContainerManagerImpl).setCPUCgroupConfig(0xc4202dd780, 0xc420f0f170, 0x4be4867, 0xa)
	/data/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/cm/qos_container_manager_linux.go:176 +0x58
github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/cm.(*qosContainerManagerImpl).UpdateCgroups(0xc4202dd780, 0x0, 0x0)
	/data/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/cm/qos_container_manager_linux.go:292 +0x254
github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/cm.(*containerManagerImpl).UpdateQOSCgroups(0xc420302240, 0xc42108ce00, 0xc420744a00)
	/data/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/cm/container_manager_linux.go:511 +0x3a
github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet.(*Kubelet).syncPod(0xc421064000, 0x0, 0xc42108ce00, 0x1, 0xc420316fc0, 0x0, 0xb, 0xc420316fc0)
	/data/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:1573 +0x1dde
github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet.(*Kubelet).runPod(0xc421064000, 0xc42108ce00, 0x3b9aca00, 0x0, 0x0)
	/data/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/runonce.go:129 +0x49d
github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet.(*Kubelet).runOnce.func1(0xc421064000, 0x3b9aca00, 0xc420124660, 0xc42108ce00)
	/data/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/runonce.go:83 +0x3f
created by github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet.(*Kubelet).runOnce
	/data/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/runonce.go:82 +0x1df
F0126 15:44:31.135601   10966 start_node.go:159] exit status 2

@smarterclayton

hyperkube kubelet --register-node=true --healthz-port=0 --file-check-frequency=0s --pods-per-core=10 --authentication-token-webhook-cache-ttl=5m --hostname-override=localhost.localdomain --cadvisor-port=0 --host-network-sources=api --host-network-sources=file --tls-cert-file=/tmp/config/node-localhost.localdomain/server.crt --tls-private-key-file=/tmp/config/node-localhost.localdomain/server.key --allow-privileged=true --cluster-dns=10.0.2.15 --container-runtime-endpoint=/var/run/dockershim.sock --containerized=false --runonce=true --authorization-mode=AlwaysAllow --fail-swap-on=false --address=127.0.0.1 --cluster-domain=cluster.local --host-ipc-sources=api --host-ipc-sources=file --pod-infra-container-image=openshift/origin-pod:v3.9.0-alpha.3 --max-pods=250 --pod-manifest-path=/tmp/config/node-localhost.localdomain/static-pods --root-dir=/data/src/github.com/openshift/origin/openshift.local.volumes --http-check-frequency=0s --cgroup-driver=systemd --tls-min-version=VersionTLS12 --port=10250 --image-service-endpoint=/var/run/dockershim.sock --experimental-dockershim-root-directory=/var/lib/dockershim --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305 --tls-cipher-suites=TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256 --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA --tls-cipher-suites=TLS_RSA_WITH_AES_128_GCM_SHA256 --tls-cipher-suites=TLS_RSA_WITH_AES_256_GCM_SHA384 --tls-cipher-suites=TLS_RSA_WITH_AES_128_CBC_SHA 
--tls-cipher-suites=TLS_RSA_WITH_AES_256_CBC_SHA --node-ip= --read-only-port=0 --host-pid-sources=api --host-pid-sources=file --authorization-webhook-cache-authorized-ttl=5m --authorization-webhook-cache-unauthorized-ttl=5m --healthz-bind-address= --anonymous-auth=true --client-ca-file=/tmp/config/node-localhost.localdomain/node-client-ca.crt --network-plugin=

@smarterclayton

I0126 18:19:20.479618   13416 kubelet.go:1605] Creating a mirror pod for static pod "openshift-master-api-localhost.localdomain_kube-system(1e3bacc586083520e66d5d36269951da)"
I0126 18:19:20.479660   13416 volume_manager.go:342] Waiting for volumes to attach and mount for pod "openshift-master-api-localhost.localdomain_kube-system(1e3bacc586083520e66d5d36269951da)"
I0126 18:19:20.479702   13416 server.go:285] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost.localdomain", UID:"localhost.localdomain", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientDisk' Node localhost.localdomain status is now: NodeHasSufficientDisk
I0126 18:19:20.479724   13416 server.go:285] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost.localdomain", UID:"localhost.localdomain", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node localhost.localdomain status is now: NodeHasSufficientMemory
I0126 18:19:20.479748   13416 server.go:285] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost.localdomain", UID:"localhost.localdomain", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node localhost.localdomain status is now: NodeHasNoDiskPressure
I0126 18:19:20.557480   13416 config.go:99] Looking for [api file], have seen map[]
I0126 18:19:20.679495   13416 config.go:99] Looking for [api file], have seen map[]
I0126 18:19:20.760577   13416 config.go:99] Looking for [api file], have seen map[]
I0126 18:19:20.782217   13416 volume_manager.go:371] All volumes are attached and mounted for pod "openshift-master-api-localhost.localdomain_kube-system(1e3bacc586083520e66d5d36269951da)"
I0126 18:19:20.782271   13416 kuberuntime_manager.go:442] Syncing Pod "openshift-master-api-localhost.localdomain_kube-system(1e3bacc586083520e66d5d36269951da)": &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:openshift-master-api-localhost.localdomain,GenerateName:,Namespace:kube-system,SelfLink:/api/v1/namespaces/kube-system/pods/openshift-master-api-localhost.localdomain,UID:1e3bacc586083520e66d5d36269951da,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{kubernetes.io/config.hash: 1e3bacc586083520e66d5d36269951da,kubernetes.io/config.seen: 2018-01-26T18:19:18.820204928Z,kubernetes.io/config.source: file,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{master-config {HostPathVolumeSource{Path:/etc/origin/master/,Type:*,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {master-cloud-provider {&HostPathVolumeSource{Path:/etc/origin/cloudprovider,Type:*,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{api openshift/origin:v3.9.0-alpha.3 [/usr/bin/openshift start master api] [--config=/etc/origin/master/master-config.yaml]  [] [] [] {map[] map[]} [{master-config false /etc/origin/master/  <nil>} {master-cloud-provider false /etc/origin/cloudprovider/  <nil>}] [] Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:healthz,Port:8443,Host:,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} nil nil /dev/termination-log File IfNotPresent &SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:localhost.localdomain,HostNetwork:true,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{ Exists  NoExecute <nil>}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-01-26 18:19:18.96561473 +0000 UTC m=+0.441242156  }],Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[],QOSClass:,InitContainerStatuses:[],},}
I0126 18:19:20.782565   13416 kuberuntime_manager.go:514] Container {Name:api Image:openshift/origin:v3.9.0-alpha.3 Command:[/usr/bin/openshift start master api] Args:[--config=/etc/origin/master/master-config.yaml] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:master-config ReadOnly:false MountPath:/etc/origin/master/ SubPath: MountPropagation:<nil>} {Name:master-cloud-provider ReadOnly:false MountPath:/etc/origin/cloudprovider/ SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:healthz,Port:8443,Host:,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,} Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
I0126 18:19:20.782585   13416 kuberuntime_manager.go:571] computePodActions got {KillPod:false CreateSandbox:false SandboxID:f54c26bd45e92d715935a4bc931473752b8668c29568f824e24eabdd7d7f2c04 Attempt:0 NextInitContainerToStart:nil ContainersToStart:[0] ContainersToKill:map[]} for pod "openshift-master-api-localhost.localdomain_kube-system(1e3bacc586083520e66d5d36269951da)"
I0126 18:19:20.782700   13416 kuberuntime_manager.go:758] checking backoff for container "api" in pod "openshift-master-api-localhost.localdomain_kube-system(1e3bacc586083520e66d5d36269951da)"
I0126 18:19:20.782832   13416 kuberuntime_manager.go:768] Back-off 10s restarting failed container=api pod=openshift-master-api-localhost.localdomain_kube-system(1e3bacc586083520e66d5d36269951da)
I0126 18:19:20.782846   13416 kuberuntime_manager.go:721] Backing Off restarting container &Container{Name:api,Image:openshift/origin:v3.9.0-alpha.3,Command:[/usr/bin/openshift start master api],Args:[--config=/etc/origin/master/master-config.yaml],WorkingDir:,Ports:[],Env:[],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[{master-config false /etc/origin/master/  <nil>} {master-cloud-provider false /etc/origin/cloudprovider/  <nil>}],LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:healthz,Port:8443,Host:,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[],TerminationMessagePolicy:File,VolumeDevices:[],} in pod openshift-master-api-localhost.localdomain_kube-system(1e3bacc586083520e66d5d36269951da)
I0126 18:19:20.782927   13416 runonce.go:107] failed to start pod "openshift-master-api-localhost.localdomain_kube-system(1e3bacc586083520e66d5d36269951da)": error syncing pod "openshift-master-api-localhost.localdomain_kube-system(1e3bacc586083520e66d5d36269951da)": failed to "StartContainer" for "api" with CrashLoopBackOff: "Back-off 10s restarting failed container=api pod=openshift-master-api-localhost.localdomain_kube-system(1e3bacc586083520e66d5d36269951da)"
I0126 18:19:20.782946   13416 runonce.go:74] finished processing 1 pods
Error: failed to run Kubelet: runonce failed: error running pods: [openshift-master-api-localhost.localdomain_kube-system(1e3bacc586083520e66d5d36269951da)]
F0126 18:19:20.787624   13403 start_node.go:159] exit status 1
