
@brgnepal
Created March 19, 2018 15:00
OpenShift Origin v3.7.1 successful app deploy: master startup log
W0319 14:44:51.836549 3263 start_master.go:290] Warning: assetConfig.loggingPublicURL: Invalid value: "": required to view aggregated container logs in the console, master start will continue.
W0319 14:44:51.836735 3263 start_master.go:290] Warning: assetConfig.metricsPublicURL: Invalid value: "": required to view cluster metrics in the console, master start will continue.
E0319 14:44:51.845429 3263 controllers.go:118] Server isn't healthy yet. Waiting a little while.
I0319 14:44:51.980571 3263 master_config.go:351] Will report 192.168.122.211 as public IP address.
2018-03-19 14:44:51.984552 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 127.0.0.1:4001: getsockopt: connection refused"; Reconnecting to {127.0.0.1:4001 <nil>}
I0319 14:44:51.985035 3263 start_master.go:530] Starting master on 0.0.0.0:8443 (v3.7.1+282e43f-42)
I0319 14:44:51.985055 3263 start_master.go:531] Public master address is https://192.168.42.153:8443
I0319 14:44:51.985068 3263 start_master.go:538] Using images from "openshift/origin-<component>:v3.7.1"
2018-03-19 14:44:51.985119 I | embed: peerTLS: cert = /var/lib/origin/openshift.local.config/master/etcd.server.crt, key = /var/lib/origin/openshift.local.config/master/etcd.server.key, ca = /var/lib/origin/openshift.local.config/master/ca.crt, trusted-ca = , client-cert-auth = true
2018-03-19 14:44:51.985200 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 127.0.0.1:4001: getsockopt: connection refused"; Reconnecting to {127.0.0.1:4001 <nil>}
2018-03-19 14:44:51.985610 I | embed: listening for peers on https://0.0.0.0:7001
2018-03-19 14:44:51.985661 I | embed: listening for client requests on 0.0.0.0:4001
2018-03-19 14:44:52.010916 I | etcdserver: name = openshift.local
2018-03-19 14:44:52.011016 I | etcdserver: data dir = /var/lib/origin/openshift.local.etcd
2018-03-19 14:44:52.011045 I | etcdserver: member dir = /var/lib/origin/openshift.local.etcd/member
2018-03-19 14:44:52.011108 I | etcdserver: heartbeat = 100ms
2018-03-19 14:44:52.011144 I | etcdserver: election = 1000ms
2018-03-19 14:44:52.011199 I | etcdserver: snapshot count = 100000
2018-03-19 14:44:52.011248 I | etcdserver: advertise client URLs = https://127.0.0.1:4001
2018-03-19 14:44:52.011320 I | etcdserver: initial advertise peer URLs = https://127.0.0.1:7001
2018-03-19 14:44:52.011398 I | etcdserver: initial cluster = openshift.local=https://127.0.0.1:7001
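The etcd timing values above (`heartbeat = 100ms`, `election = 1000ms`) follow the usual etcd tuning guideline that the election timeout should be roughly 10x the heartbeat interval, so a follower tolerates several missed heartbeats before forcing a re-election. A minimal sanity check of the values as logged (a sketch, not part of the log):

```python
# Sanity-check the etcd timing values from the log above.
# Guideline (etcd tuning docs): election timeout ~= 10x heartbeat interval.
heartbeat_ms = 100   # "etcdserver: heartbeat = 100ms"
election_ms = 1000   # "etcdserver: election = 1000ms"

ratio = election_ms / heartbeat_ms
assert ratio >= 5, "election timeout should span several heartbeat intervals"
print(f"election/heartbeat ratio: {ratio:.0f}x")  # 10x here, matching the guideline
```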
2018-03-19 14:44:52.042037 I | etcdserver: starting member 51cc720fdd39e048 in cluster dcf5ba954f7ebe11
2018-03-19 14:44:52.042123 I | raft: 51cc720fdd39e048 became follower at term 0
2018-03-19 14:44:52.042146 I | raft: newRaft 51cc720fdd39e048 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
2018-03-19 14:44:52.042163 I | raft: 51cc720fdd39e048 became follower at term 1
2018-03-19 14:44:52.087091 W | auth: simple token is not cryptographically signed
2018-03-19 14:44:52.102785 I | etcdserver: starting server... [version: 3.2.8, cluster version: to_be_decided]
2018-03-19 14:44:52.102866 I | embed: ClientTLS: cert = /var/lib/origin/openshift.local.config/master/etcd.server.crt, key = /var/lib/origin/openshift.local.config/master/etcd.server.key, ca = /var/lib/origin/openshift.local.config/master/ca.crt, trusted-ca = , client-cert-auth = true
2018-03-19 14:44:52.104269 I | etcdserver/membership: added member 51cc720fdd39e048 [https://127.0.0.1:7001] to cluster dcf5ba954f7ebe11
2018-03-19 14:44:52.742844 I | raft: 51cc720fdd39e048 is starting a new election at term 1
2018-03-19 14:44:52.743090 I | raft: 51cc720fdd39e048 became candidate at term 2
2018-03-19 14:44:52.743386 I | raft: 51cc720fdd39e048 received MsgVoteResp from 51cc720fdd39e048 at term 2
2018-03-19 14:44:52.743726 I | raft: 51cc720fdd39e048 became leader at term 2
2018-03-19 14:44:52.743867 I | raft: raft.node: 51cc720fdd39e048 elected leader 51cc720fdd39e048 at term 2
2018-03-19 14:44:52.745243 I | etcdserver: setting up the initial cluster version to 3.2
2018-03-19 14:44:52.751127 N | etcdserver/membership: set the initial cluster version to 3.2
2018-03-19 14:44:52.751200 I | etcdserver/api: enabled capabilities for version 3.2
2018-03-19 14:44:52.751291 I | etcdserver: published {Name:openshift.local ClientURLs:[https://127.0.0.1:4001]} to cluster dcf5ba954f7ebe11
I0319 14:44:52.751339 3263 run.go:81] Started etcd at 127.0.0.1:4001
2018-03-19 14:44:52.752171 I | embed: ready to serve client requests
2018-03-19 14:44:52.752418 I | embed: serving client requests on [::]:4001
2018-03-19 14:44:52.760767 I | etcdserver/api/v3rpc: Failed to dial 0.0.0.0:4001: connection error: desc = "transport: remote error: tls: bad certificate"; please retry.
W0319 14:44:52.762484 3263 run_components.go:49] Binding DNS on port 8053 instead of 53, which may not be resolvable from all clients
W0319 14:44:52.762683 3263 server.go:85] Unable to keep dnsmasq up to date, 0.0.0.0:8053 must point to port 53
I0319 14:44:52.762774 3263 logs.go:41] skydns: ready for queries on cluster.local. for tcp4://0.0.0.0:8053 [rcache 0]
I0319 14:44:52.762833 3263 logs.go:41] skydns: ready for queries on cluster.local. for udp4://0.0.0.0:8053 [rcache 0]
I0319 14:44:52.763068 3263 run_components.go:75] DNS listening at 0.0.0.0:8053
E0319 14:44:52.845829 3263 controllers.go:118] Server isn't healthy yet. Waiting a little while.
I0319 14:44:53.624693 3263 master.go:320] Starting Web Console https://192.168.42.153:8443/console/
I0319 14:44:53.624746 3263 master.go:329] Starting OAuth2 API at /oauth
I0319 14:44:53.628186 3263 master.go:320] Starting Web Console https://192.168.42.153:8443/console/
I0319 14:44:53.628223 3263 master.go:329] Starting OAuth2 API at /oauth
I0319 14:44:53.630384 3263 master.go:320] Starting Web Console https://192.168.42.153:8443/console/
I0319 14:44:53.630407 3263 master.go:329] Starting OAuth2 API at /oauth
W0319 14:44:53.649065 3263 swagger.go:38] No API exists for predefined swagger description /oapi/v1
W0319 14:44:53.649114 3263 swagger.go:38] No API exists for predefined swagger description /api/v1
[restful] 2018/03/19 14:44:53 log.go:33: [restful/swagger] listing is available at https://192.168.42.153:8443/swaggerapi
[restful] 2018/03/19 14:44:53 log.go:33: [restful/swagger] https://192.168.42.153:8443/swaggerui/ is mapped to folder /swagger-ui/
I0319 14:44:53.661687 3263 master.go:320] Starting Web Console https://192.168.42.153:8443/console/
I0319 14:44:53.661723 3263 master.go:329] Starting OAuth2 API at /oauth
W0319 14:44:53.681295 3263 swagger.go:38] No API exists for predefined swagger description /oapi/v1
W0319 14:44:53.681316 3263 swagger.go:38] No API exists for predefined swagger description /api/v1
[restful] 2018/03/19 14:44:53 log.go:33: [restful/swagger] listing is available at https://192.168.42.153:8443/swaggerapi
[restful] 2018/03/19 14:44:53 log.go:33: [restful/swagger] https://192.168.42.153:8443/swaggerui/ is mapped to folder /swagger-ui/
I0319 14:44:53.748433 3263 master.go:320] Starting Web Console https://192.168.42.153:8443/console/
I0319 14:44:53.748479 3263 master.go:329] Starting OAuth2 API at /oauth
W0319 14:44:53.764736 3263 swagger.go:38] No API exists for predefined swagger description /oapi/v1
W0319 14:44:53.764756 3263 swagger.go:38] No API exists for predefined swagger description /api/v1
[restful] 2018/03/19 14:44:53 log.go:33: [restful/swagger] listing is available at https://192.168.42.153:8443/swaggerapi
[restful] 2018/03/19 14:44:53 log.go:33: [restful/swagger] https://192.168.42.153:8443/swaggerui/ is mapped to folder /swagger-ui/
I0319 14:44:53.853301 3263 master.go:320] Starting Web Console https://192.168.42.153:8443/console/
I0319 14:44:53.853348 3263 master.go:329] Starting OAuth2 API at /oauth
W0319 14:44:53.866715 3263 swagger.go:38] No API exists for predefined swagger description /oapi/v1
W0319 14:44:53.866736 3263 swagger.go:38] No API exists for predefined swagger description /api/v1
[restful] 2018/03/19 14:44:53 log.go:33: [restful/swagger] listing is available at https://192.168.42.153:8443/swaggerapi
[restful] 2018/03/19 14:44:53 log.go:33: [restful/swagger] https://192.168.42.153:8443/swaggerui/ is mapped to folder /swagger-ui/
E0319 14:44:53.871760 3263 controllers.go:118] Server isn't healthy yet. Waiting a little while.
I0319 14:44:53.940841 3263 master.go:320] Starting Web Console https://192.168.42.153:8443/console/
I0319 14:44:53.940883 3263 master.go:329] Starting OAuth2 API at /oauth
W0319 14:44:53.994327 3263 swagger.go:38] No API exists for predefined swagger description /oapi/v1
W0319 14:44:53.994377 3263 swagger.go:38] No API exists for predefined swagger description /api/v1
[restful] 2018/03/19 14:44:53 log.go:33: [restful/swagger] listing is available at https://192.168.42.153:8443/swaggerapi
[restful] 2018/03/19 14:44:53 log.go:33: [restful/swagger] https://192.168.42.153:8443/swaggerui/ is mapped to folder /swagger-ui/
I0319 14:44:54.073891 3263 master.go:320] Starting Web Console https://192.168.42.153:8443/console/
I0319 14:44:54.073934 3263 master.go:329] Starting OAuth2 API at /oauth
W0319 14:44:54.125165 3263 swagger.go:38] No API exists for predefined swagger description /oapi/v1
W0319 14:44:54.125221 3263 swagger.go:38] No API exists for predefined swagger description /api/v1
[restful] 2018/03/19 14:44:54 log.go:33: [restful/swagger] listing is available at https://192.168.42.153:8443/swaggerapi
[restful] 2018/03/19 14:44:54 log.go:33: [restful/swagger] https://192.168.42.153:8443/swaggerui/ is mapped to folder /swagger-ui/
I0319 14:44:54.225470 3263 master.go:320] Starting Web Console https://192.168.42.153:8443/console/
I0319 14:44:54.225521 3263 master.go:329] Starting OAuth2 API at /oauth
W0319 14:44:54.248322 3263 swagger.go:38] No API exists for predefined swagger description /api/v1
W0319 14:44:54.248370 3263 swagger.go:38] No API exists for predefined swagger description /oapi/v1
[restful] 2018/03/19 14:44:54 log.go:33: [restful/swagger] listing is available at https://192.168.42.153:8443/swaggerapi
[restful] 2018/03/19 14:44:54 log.go:33: [restful/swagger] https://192.168.42.153:8443/swaggerui/ is mapped to folder /swagger-ui/
I0319 14:44:54.310025 3263 master.go:320] Starting Web Console https://192.168.42.153:8443/console/
I0319 14:44:54.310046 3263 master.go:329] Starting OAuth2 API at /oauth
W0319 14:44:54.335863 3263 swagger.go:38] No API exists for predefined swagger description /api/v1
W0319 14:44:54.335883 3263 swagger.go:38] No API exists for predefined swagger description /oapi/v1
[restful] 2018/03/19 14:44:54 log.go:33: [restful/swagger] listing is available at https://192.168.42.153:8443/swaggerapi
[restful] 2018/03/19 14:44:54 log.go:33: [restful/swagger] https://192.168.42.153:8443/swaggerui/ is mapped to folder /swagger-ui/
I0319 14:44:54.419845 3263 master.go:320] Starting Web Console https://192.168.42.153:8443/console/
I0319 14:44:54.419867 3263 master.go:329] Starting OAuth2 API at /oauth
W0319 14:44:54.533680 3263 swagger.go:38] No API exists for predefined swagger description /oapi/v1
W0319 14:44:54.533735 3263 swagger.go:38] No API exists for predefined swagger description /api/v1
[restful] 2018/03/19 14:44:54 log.go:33: [restful/swagger] listing is available at https://192.168.42.153:8443/swaggerapi
[restful] 2018/03/19 14:44:54 log.go:33: [restful/swagger] https://192.168.42.153:8443/swaggerui/ is mapped to folder /swagger-ui/
I0319 14:44:54.622081 3263 master.go:320] Starting Web Console https://192.168.42.153:8443/console/
I0319 14:44:54.622101 3263 master.go:329] Starting OAuth2 API at /oauth
W0319 14:44:54.667332 3263 swagger.go:38] No API exists for predefined swagger description /oapi/v1
W0319 14:44:54.667390 3263 swagger.go:38] No API exists for predefined swagger description /api/v1
[restful] 2018/03/19 14:44:54 log.go:33: [restful/swagger] listing is available at https://192.168.42.153:8443/swaggerapi
[restful] 2018/03/19 14:44:54 log.go:33: [restful/swagger] https://192.168.42.153:8443/swaggerui/ is mapped to folder /swagger-ui/
I0319 14:44:54.833279 3263 master.go:320] Starting Web Console https://192.168.42.153:8443/console/
I0319 14:44:54.833300 3263 master.go:329] Starting OAuth2 API at /oauth
E0319 14:44:54.879333 3263 controllers.go:118] Server isn't healthy yet. Waiting a little while.
W0319 14:44:54.906885 3263 swagger.go:38] No API exists for predefined swagger description /oapi/v1
W0319 14:44:54.906939 3263 swagger.go:38] No API exists for predefined swagger description /api/v1
[restful] 2018/03/19 14:44:54 log.go:33: [restful/swagger] listing is available at https://192.168.42.153:8443/swaggerapi
[restful] 2018/03/19 14:44:54 log.go:33: [restful/swagger] https://192.168.42.153:8443/swaggerui/ is mapped to folder /swagger-ui/
I0319 14:44:55.015107 3263 master.go:320] Starting Web Console https://192.168.42.153:8443/console/
I0319 14:44:55.015137 3263 master.go:329] Starting OAuth2 API at /oauth
W0319 14:44:55.209912 3263 swagger.go:38] No API exists for predefined swagger description /oapi/v1
W0319 14:44:55.209968 3263 swagger.go:38] No API exists for predefined swagger description /api/v1
[restful] 2018/03/19 14:44:55 log.go:33: [restful/swagger] listing is available at https://192.168.42.153:8443/swaggerapi
[restful] 2018/03/19 14:44:55 log.go:33: [restful/swagger] https://192.168.42.153:8443/swaggerui/ is mapped to folder /swagger-ui/
I0319 14:44:55.407206 3263 master.go:320] Starting Web Console https://192.168.42.153:8443/console/
I0319 14:44:55.407225 3263 master.go:329] Starting OAuth2 API at /oauth
I0319 14:44:55.422980 3263 openshift_apiserver.go:544] Started Origin API at /oapi/v1
W0319 14:44:55.473182 3263 swagger.go:38] No API exists for predefined swagger description /api/v1
[restful] 2018/03/19 14:44:55 log.go:33: [restful/swagger] listing is available at https://192.168.42.153:8443/swaggerapi
[restful] 2018/03/19 14:44:55 log.go:33: [restful/swagger] https://192.168.42.153:8443/swaggerui/ is mapped to folder /swagger-ui/
E0319 14:44:55.845805 3263 controllers.go:118] Server isn't healthy yet. Waiting a little while.
I0319 14:44:55.892123 3263 master.go:320] Starting Web Console https://192.168.42.153:8443/console/
I0319 14:44:55.892145 3263 master.go:329] Starting OAuth2 API at /oauth
W0319 14:44:55.976305 3263 genericapiserver.go:371] Skipping API autoscaling/v2alpha1 because it has no resources.
W0319 14:44:55.985189 3263 genericapiserver.go:371] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
[restful] 2018/03/19 14:44:56 log.go:33: [restful/swagger] listing is available at https://192.168.42.153:8443/swaggerapi
[restful] 2018/03/19 14:44:56 log.go:33: [restful/swagger] https://192.168.42.153:8443/swaggerui/ is mapped to folder /swagger-ui/
I0319 14:44:56.777896 3263 master.go:320] Starting Web Console https://192.168.42.153:8443/console/
I0319 14:44:56.777924 3263 master.go:329] Starting OAuth2 API at /oauth
I0319 14:44:56.831835 3263 serve.go:85] Serving securely on 0.0.0.0:8443
I0319 14:44:56.832581 3263 available_controller.go:256] Starting AvailableConditionController
I0319 14:44:56.832628 3263 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0319 14:44:56.832708 3263 tprregistration_controller.go:147] Starting tpr-autoregister controller
I0319 14:44:56.832723 3263 controller_utils.go:1025] Waiting for caches to sync for tpr-autoregister controller
I0319 14:44:56.832796 3263 openshift_apiserver.go:642] Using default project node label selector:
I0319 14:44:56.836518 3263 crd_finalizer.go:248] Starting CRDFinalizer
I0319 14:44:56.836891 3263 apiservice_controller.go:113] Starting APIServiceRegistrationController
I0319 14:44:56.837040 3263 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0319 14:44:56.841611 3263 clusterquotamapping.go:160] Starting ClusterQuotaMappingController controller
I0319 14:44:56.848983 3263 autoregister_controller.go:141] Starting autoregister controller
I0319 14:44:56.849089 3263 cache.go:32] Waiting for caches to sync for autoregister controller
I0319 14:44:56.849549 3263 customresource_discovery_controller.go:152] Starting DiscoveryController
I0319 14:44:56.849752 3263 naming_controller.go:284] Starting NamingConditionController
W0319 14:44:57.025824 3263 server.go:190] WARNING: all flags other than --config, --write-config-to, and --cleanup-iptables are deprecated. Please begin using a config file ASAP.
E0319 14:44:57.031317 3263 controllers.go:118] Server isn't healthy yet. Waiting a little while.
W0319 14:44:57.031697 3263 node_config.go:48] Using "localhost" as node name will not resolve from all locations
I0319 14:44:57.048624 3263 client.go:72] Connecting to docker on unix:///var/run/docker.sock
I0319 14:44:57.048686 3263 client.go:92] Start docker client with request timeout=2m0s
W0319 14:44:57.065047 3263 cni.go:189] Unable to update cni config: No networks found in /etc/cni/net.d
I0319 14:44:57.089052 3263 cache.go:39] Caches are synced for AvailableConditionController controller
I0319 14:44:57.089170 3263 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0319 14:44:57.089546 3263 cache.go:39] Caches are synced for autoregister controller
I0319 14:44:57.141457 3263 controller_utils.go:1032] Caches are synced for tpr-autoregister controller
I0319 14:44:57.162772 3263 start_node.go:469] Starting node localhost (v3.7.1+282e43f-42)
I0319 14:44:57.162852 3263 client.go:72] Connecting to docker on unix:///var/run/docker.sock
I0319 14:44:57.162876 3263 client.go:92] Start docker client with request timeout=2m0s
I0319 14:44:57.164547 3263 node.go:109] Connecting to Docker at unix:///var/run/docker.sock
I0319 14:44:57.200409 3263 feature_gate.go:144] feature gates: map[]
I0319 14:44:57.203733 3263 manager.go:144] cAdvisor running in container: "/docker/07002511f2563d9ccae330d0c355f55faffcd8d6f836b8a1ef3832c7cf791925"
I0319 14:44:57.224523 3263 network.go:88] Using iptables Proxier.
W0319 14:44:57.228891 3263 manager.go:161] unable to connect to CRI-O api service: Get http://%2Fvar%2Frun%2Fcrio.sock/info: dial unix /var/run/crio.sock: connect: connection refused
I0319 14:44:57.239713 3263 fs.go:124] Filesystem partitions: map[/dev/sda1:{mountpoint:/var/lib/docker/overlay2 major:8 minor:1 fsType:ext4 blockSize:0} overlay:{mountpoint:/ major:0 minor:38 fsType:overlay blockSize:0}]
I0319 14:44:57.240900 3263 manager.go:211] Machine: {NumCores:2 CpuFrequency:2593994 MemoryCapacity:2097143808 MachineID:5fb683be0e824550b3c11801fd621dae SystemUUID:5FB683BE-0E82-4550-B3C1-1801FD621DAE BootID:dc1000af-0bd2-47c6-844e-3dae340cbabb Filesystems:[{Device:overlay DeviceMajor:0 DeviceMinor:38 Capacity:17293533184 Type:vfs Inodes:9732096 HasInodes:true} {Device:/dev/sda1 DeviceMajor:8 DeviceMinor:1 Capacity:17293533184 Type:vfs Inodes:9732096 HasInodes:true}] DiskMap:map[8:0:{Name:sda Major:8 Minor:0 Size:20971520000 Scheduler:cfq}] NetworkDevices:[{Name:eth0 MacAddress:52:54:00:72:16:18 Speed:100 Mtu:1500} {Name:eth1 MacAddress:52:54:00:bf:64:93 Speed:100 Mtu:1500} {Name:sit0 MacAddress:00:00:00:00 Speed:0 Mtu:1480}] Topology:[{Id:0 Memory:2097143808 Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]}] Caches:[{Size:16777216 Type:Unified Level:3}]} {Id:1 Memory:0 Cores:[{Id:0 Threads:[1] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]}] Caches:[{Size:16777216 Type:Unified Level:3}]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
I0319 14:44:57.241795 3263 manager.go:217] Version: {KernelVersion:4.9.64 ContainerOsVersion:CentOS Linux 7 (Core) DockerVersion:17.09.0-ce DockerAPIVersion:1.32 CadvisorVersion: CadvisorRevision:}
I0319 14:44:57.242379 3263 server.go:546] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
W0319 14:44:57.250996 3263 container_manager_linux.go:218] Running with swap on is not supported, please disable swap! This will be a fatal error by default starting in K8s v1.6! In the meantime, you can opt-in to making this a fatal error by enabling --experimental-fail-swap-on.
I0319 14:44:57.251095 3263 container_manager_linux.go:246] container manager verified user specified cgroup-root exists: /
I0319 14:44:57.251117 3263 container_manager_linux.go:251] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} ExperimentalQOSReserved:map[]}
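The flattened `HardEvictionThresholds` config in the container-manager line above is easier to read as structured data. The threshold names and values below are transcribed from that log line; the dict layout and the `should_evict` helper are mine, added purely for illustration:

```python
# Hard eviction thresholds transcribed from the container-manager log line above.
# Semantics: evict pods when the observed signal drops below the threshold.
HARD_EVICTION = {
    "memory.available":  ("quantity", 100 * 1024 * 1024),  # 100Mi, in bytes
    "nodefs.available":  ("percentage", 0.10),
    "nodefs.inodesFree": ("percentage", 0.05),
    "imagefs.available": ("percentage", 0.15),
}

def should_evict(signal: str, available: float, capacity: float) -> bool:
    """Return True if the observed value breaches the hard threshold."""
    kind, threshold = HARD_EVICTION[signal]
    if kind == "percentage":
        return available / capacity < threshold
    return available < threshold  # absolute quantity, in bytes

# Example: 1.5% of nodefs inodes free breaches the 5% threshold.
print(should_evict("nodefs.inodesFree", available=150_000, capacity=10_000_000))  # True
```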
I0319 14:44:57.251393 3263 kubelet.go:271] Watching apiserver
W0319 14:44:57.270402 3263 kubelet_network.go:70] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
I0319 14:44:57.270429 3263 kubelet.go:507] Hairpin mode set to "hairpin-veth"
W0319 14:44:57.280408 3263 cni.go:189] Unable to update cni config: No networks found in /etc/cni/net.d
W0319 14:44:57.352059 3263 network.go:234] Failed to retrieve node info: nodes "localhost" not found
W0319 14:44:57.352247 3263 proxier.go:483] invalid nodeIP, initializing kube-proxy with 127.0.0.1 as nodeIP
W0319 14:44:57.352268 3263 proxier.go:488] clusterCIDR not specified, unable to distinguish between internal and external traffic
I0319 14:44:57.352399 3263 network.go:119] Tearing down userspace rules.
I0319 14:44:57.499604 3263 docker_service.go:209] Docker cri networking managed by kubernetes.io/no-op
I0319 14:44:57.519864 3263 docker_service.go:226] Setting cgroupDriver to cgroupfs
W0319 14:44:57.577492 3263 util_linux.go:75] Using "/var/run/dockershim.sock" as endpoint is deprecated, please consider using full url format "unix:///var/run/dockershim.sock".
I0319 14:44:57.609990 3263 remote_runtime.go:42] Connecting to runtime service /var/run/dockershim.sock
W0319 14:44:57.610083 3263 util_linux.go:75] Using "/var/run/dockershim.sock" as endpoint is deprecated, please consider using full url format "unix:///var/run/dockershim.sock".
W0319 14:44:57.610203 3263 util_linux.go:75] Using "/var/run/dockershim.sock" as endpoint is deprecated, please consider using full url format "unix:///var/run/dockershim.sock".
I0319 14:44:57.681820 3263 kuberuntime_manager.go:178] Container runtime docker initialized, version: 17.09.0-ce, apiVersion: 1.32.0
I0319 14:44:57.753172 3263 network.go:226] Started Kubernetes Proxy on 0.0.0.0
I0319 14:44:57.754320 3263 server.go:869] Started kubelet v1.7.6+a08f5eeb62
I0319 14:44:57.754347 3263 server.go:132] Starting to listen on 0.0.0.0:10250
I0319 14:44:57.755151 3263 server.go:314] Adding debug handlers to kubelet server.
I0319 14:44:57.757377 3263 config.go:202] Starting service config controller
I0319 14:44:57.757396 3263 controller_utils.go:1025] Waiting for caches to sync for service config controller
I0319 14:44:57.757451 3263 config.go:102] Starting endpoints config controller
I0319 14:44:57.757461 3263 controller_utils.go:1025] Waiting for caches to sync for endpoints config controller
E0319 14:44:57.758534 3263 kubelet.go:1191] Image garbage collection failed once. Stats initialization may not have completed yet: unable to find data for container /
I0319 14:44:57.758838 3263 kubelet_node_status.go:270] Setting node annotation to enable volume controller attach/detach
E0319 14:44:57.770082 3263 reflector.go:216] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:72: Failed to list *api.Endpoints: User "system:node:localhost" cannot list endpoints at the cluster scope: User "system:node:localhost" cannot list all endpoints in the cluster (get endpoints)
E0319 14:44:57.814955 3263 kubelet.go:1705] Failed to check if disk space is available for the runtime: failed to get fs info for "runtime": unable to find data for container /
E0319 14:44:57.814999 3263 kubelet.go:1713] Failed to check if disk space is available on the root partition: failed to get fs info for "root": unable to find data for container /
I0319 14:44:57.825889 3263 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
I0319 14:44:57.825913 3263 status_manager.go:141] Starting to sync pod status with apiserver
I0319 14:44:57.825962 3263 kubelet.go:1785] Starting kubelet main sync loop.
I0319 14:44:57.825999 3263 kubelet.go:1796] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s]
E0319 14:44:57.826630 3263 container_manager_linux.go:543] [ContainerManager]: Fail to get rootfs information unable to find data for container /
I0319 14:44:57.826654 3263 volume_manager.go:245] Starting Kubelet Volume Manager
I0319 14:44:57.895604 3263 controller_utils.go:1032] Caches are synced for service config controller
I0319 14:44:57.949530 3263 factory.go:351] Registering Docker factory
E0319 14:44:57.950162 3263 controllers.go:118] Server isn't healthy yet. Waiting a little while.
I0319 14:44:57.968674 3263 kubelet_node_status.go:270] Setting node annotation to enable volume controller attach/detach
I0319 14:44:57.970643 3263 factory.go:89] Registering Rkt factory
W0319 14:44:57.970934 3263 manager.go:271] Registration of the crio container factory failed: Get http://%2Fvar%2Frun%2Fcrio.sock/info: dial unix /var/run/crio.sock: connect: connection refused
I0319 14:44:57.970953 3263 factory.go:54] Registering systemd factory
I0319 14:44:57.971714 3263 factory.go:86] Registering Raw factory
I0319 14:44:57.976247 3263 manager.go:1139] Started watching for new ooms in manager
I0319 14:44:57.983505 3263 oomparser.go:185] oomparser using systemd
I0319 14:44:57.984058 3263 manager.go:306] Starting recovery of all containers
I0319 14:44:58.047825 3263 trace.go:76] Trace[1965275621]: "Create /apis/oauth.openshift.io/v1/oauthclients" (started: 2018-03-19 14:44:57.011182475 +0000 UTC) (total time: 1.036611484s):
Trace[1965275621]: [1.004463819s] [1.00433083s] About to store object in database
I0319 14:44:58.063514 3263 manager.go:311] Recovery completed
I0319 14:44:58.093060 3263 openshift_apiserver.go:681] Created default security context constraint privileged
I0319 14:44:58.132064 3263 rkt.go:56] starting detectRktContainers thread
I0319 14:44:58.136838 3263 kubelet_node_status.go:82] Attempting to register node localhost
E0319 14:44:58.140592 3263 summary.go:70] Partial failure issuing GetContainerInfoV2: partial failures: ["/system.slice/systemd-journal-flush.service": RecentStats: unable to find data for container /system.slice/systemd-journal-flush.service], ["/system.slice/system-getty.slice": RecentStats: unable to find data for container /system.slice/system-getty.slice], ["/system.slice/nfs-server.service": RecentStats: unable to find data for container /system.slice/nfs-server.service], ["/system.slice/systemd-udev-trigger.service": RecentStats: unable to find data for container /system.slice/systemd-udev-trigger.service], ["/system.slice/docker.service": RecentStats: unable to find data for container /system.slice/docker.service], ["/system.slice": RecentStats: unable to find data for container /system.slice], ["/docker": RecentStats: unable to find data for container /docker], ["/system.slice/systemd-tmpfiles-setup-dev.service": RecentStats: unable to find data for container /system.slice/systemd-tmpfiles-setup-dev.service], ["/system.slice/nfs-config.service": RecentStats: unable to find data for container /system.slice/nfs-config.service], ["/system.slice/dev-disk-by\\x2dpath-pci\\x2d0000:00:01.1\\x2data\\x2d1\\x2dpart2.swap": RecentStats: unable to find data for container /system.slice/dev-disk-by\x2dpath-pci\x2d0000:00:01.1\x2data\x2d1\x2dpart2.swap], ["/system.slice/minikube-automount.service": RecentStats: unable to find data for container /system.slice/minikube-automount.service]
E0319 14:44:58.140638 3263 eviction_manager.go:238] eviction manager: unexpected err: failed GetNode: node 'localhost' not found
I0319 14:44:58.188720 3263 openshift_apiserver.go:681] Created default security context constraint nonroot
E0319 14:44:58.189532 3263 autoregister_controller.go:195] v1beta1.apiextensions.k8s.io failed with : Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.apiextensions.k8s.io": the object has been modified; please apply your changes to the latest version and try again
I0319 14:44:58.209699 3263 kubelet_node_status.go:85] Successfully registered node localhost
I0319 14:44:58.217719 3263 openshift_apiserver.go:681] Created default security context constraint hostmount-anyuid
E0319 14:44:58.218702 3263 available_controller.go:289] v1.authorization.openshift.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1.authorization.openshift.io": the object has been modified; please apply your changes to the latest version and try again
I0319 14:44:58.254860 3263 openshift_apiserver.go:681] Created default security context constraint hostaccess
I0319 14:44:58.255227 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/cluster-admin
E0319 14:44:58.256279 3263 autoregister_controller.go:195] v1.authentication.k8s.io failed with : Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1.authentication.k8s.io": the object has been modified; please apply your changes to the latest version and try again
I0319 14:44:58.306165 3263 openshift_apiserver.go:681] Created default security context constraint restricted
I0319 14:44:58.351370 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/sudoer
I0319 14:44:58.355017 3263 openshift_apiserver.go:681] Created default security context constraint anyuid
I0319 14:44:58.369899 3263 openshift_apiserver.go:681] Created default security context constraint hostnetwork
W0319 14:44:58.370190 3263 lease_endpoint_reconciler.go:176] Resetting endpoints for master service "kubernetes" to [192.168.122.211]
I0319 14:44:58.373239 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:scope-impersonation
I0319 14:44:58.442634 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/cluster-reader
I0319 14:44:58.496089 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/cluster-debugger
I0319 14:44:58.584369 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:build-strategy-docker
I0319 14:44:58.593742 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:build-strategy-custom
I0319 14:44:58.597806 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:build-strategy-source
I0319 14:44:58.602991 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:build-strategy-jenkinspipeline
I0319 14:44:58.610182 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/storage-admin
I0319 14:44:58.616558 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/admin
I0319 14:44:58.659273 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/edit
I0319 14:44:58.674722 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/view
E0319 14:44:58.772474 3263 reflector.go:216] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:72: Failed to list *api.Endpoints: User "system:node:localhost" cannot list endpoints at the cluster scope: User "system:node:localhost" cannot list all endpoints in the cluster (get endpoints)
I0319 14:44:58.803523 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/basic-user
I0319 14:44:58.808713 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/self-access-reviewer
I0319 14:44:58.821897 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/self-provisioner
I0319 14:44:58.828608 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/cluster-status
I0319 14:44:58.833160 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:image-auditor
I0319 14:44:58.838615 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:image-puller
E0319 14:44:58.846412 3263 controllers.go:118] Server isn't healthy yet. Waiting a little while.
I0319 14:44:58.861954 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:image-pusher
I0319 14:44:58.869721 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:image-builder
I0319 14:44:58.875625 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:image-pruner
I0319 14:44:58.881517 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:image-signer
I0319 14:44:58.888432 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:deployer
I0319 14:44:58.896364 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:master
I0319 14:44:58.944669 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:oauth-token-deleter
I0319 14:44:58.950438 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:router
I0319 14:44:58.957111 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:registry
I0319 14:44:58.961625 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0319 14:44:58.966891 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:node-admin
I0319 14:44:58.972865 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:node-reader
I0319 14:44:58.981727 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:node
I0319 14:44:59.004544 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:sdn-reader
I0319 14:44:59.012363 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:sdn-manager
I0319 14:44:59.017466 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:webhook
I0319 14:44:59.024229 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0319 14:44:59.031878 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0319 14:44:59.038023 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/registry-admin
I0319 14:44:59.069981 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/registry-editor
I0319 14:44:59.075025 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/registry-viewer
I0319 14:44:59.083154 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:openshift:templateservicebroker-client
I0319 14:44:59.088419 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:replication-controller
I0319 14:44:59.095360 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:endpoint-controller
I0319 14:44:59.113222 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:replicaset-controller
I0319 14:44:59.123717 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:garbage-collector-controller
I0319 14:44:59.138276 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:job-controller
I0319 14:44:59.144440 3263 storage_rbac.go:281] created rolebinding.rbac.authorization.k8s.io/system:image-pullers in openshift
I0319 14:44:59.148645 3263 storage_rbac.go:281] created rolebinding.rbac.authorization.k8s.io/system:image-pullers in openshift-infra
I0319 14:44:59.151435 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:hpa-controller
I0319 14:44:59.201193 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:daemonset-controller
I0319 14:44:59.212079 3263 storage_rbac.go:281] created rolebinding.rbac.authorization.k8s.io/system:image-builders in openshift-infra
I0319 14:44:59.214978 3263 storage_rbac.go:281] created rolebinding.rbac.authorization.k8s.io/system:image-builders in openshift
I0319 14:44:59.215231 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:disruption-controller
I0319 14:44:59.223564 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:namespace-controller
I0319 14:44:59.228144 3263 storage_rbac.go:281] created rolebinding.rbac.authorization.k8s.io/system:deployers in openshift-infra
I0319 14:44:59.242950 3263 storage_rbac.go:281] created rolebinding.rbac.authorization.k8s.io/system:image-pullers in default
I0319 14:44:59.247113 3263 storage_rbac.go:281] created rolebinding.rbac.authorization.k8s.io/system:deployers in openshift
I0319 14:44:59.247276 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:gc-controller
I0319 14:44:59.264284 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:certificate-signing-controller
I0319 14:44:59.269020 3263 storage_rbac.go:281] created rolebinding.rbac.authorization.k8s.io/system:image-builders in default
I0319 14:44:59.271281 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:statefulset-controller
I0319 14:44:59.291488 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:build-controller
I0319 14:44:59.299192 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:deploymentconfig-controller
I0319 14:44:59.300149 3263 storage_rbac.go:281] created rolebinding.rbac.authorization.k8s.io/system:deployers in default
I0319 14:44:59.334910 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:deployment-controller
I0319 14:44:59.343197 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:openshift:controller:build-controller
I0319 14:44:59.360826 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:openshift:controller:build-config-change-controller
I0319 14:44:59.366789 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:openshift:controller:deployer-controller
I0319 14:44:59.372016 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:openshift:controller:deploymentconfig-controller
I0319 14:44:59.380331 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:openshift:controller:template-instance-controller
I0319 14:44:59.386154 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:openshift:controller:origin-namespace-controller
I0319 14:44:59.391995 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:openshift:controller:serviceaccount-controller
I0319 14:44:59.410050 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:openshift:controller:serviceaccount-pull-secrets-controller
I0319 14:44:59.417290 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:openshift:controller:image-trigger-controller
I0319 14:44:59.422522 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:openshift:controller:service-serving-cert-controller
I0319 14:44:59.429366 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:openshift:controller:image-import-controller
I0319 14:44:59.455760 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:openshift:controller:sdn-controller
I0319 14:44:59.479379 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:openshift:controller:cluster-quota-reconciliation-controller
I0319 14:44:59.485304 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:openshift:controller:unidling-controller
I0319 14:44:59.490776 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:openshift:controller:service-ingress-ip-controller
I0319 14:44:59.497471 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:openshift:controller:pv-recycler-controller
I0319 14:44:59.503149 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:openshift:controller:resourcequota-controller
I0319 14:44:59.524316 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:openshift:controller:horizontal-pod-autoscaler
I0319 14:44:59.532654 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:openshift:controller:template-service-broker
I0319 14:44:59.537650 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0319 14:44:59.545584 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0319 14:44:59.550761 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0319 14:44:59.593768 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0319 14:44:59.600251 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0319 14:44:59.604856 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0319 14:44:59.611752 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0319 14:44:59.620797 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0319 14:44:59.644153 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0319 14:44:59.652981 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0319 14:44:59.659020 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0319 14:44:59.666315 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0319 14:44:59.675179 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0319 14:44:59.713019 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0319 14:44:59.720399 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0319 14:44:59.726881 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0319 14:44:59.733085 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0319 14:44:59.739121 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0319 14:44:59.744286 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0319 14:44:59.759573 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0319 14:44:59.765432 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0319 14:44:59.772968 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
E0319 14:44:59.773732 3263 reflector.go:216] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:72: Failed to list *api.Endpoints: User "system:node:localhost" cannot list endpoints at the cluster scope: User "system:node:localhost" cannot list all endpoints in the cluster (get endpoints)
I0319 14:44:59.778292 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0319 14:44:59.783486 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0319 14:44:59.789303 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0319 14:44:59.841553 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0319 14:44:59.845493 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
E0319 14:44:59.847083 3263 controllers.go:118] Server isn't healthy yet. Waiting a little while.
I0319 14:44:59.854768 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0319 14:44:59.864335 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0319 14:44:59.871233 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0319 14:44:59.876863 3263 storage_rbac.go:192] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0319 14:44:59.899644 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:masters
I0319 14:44:59.908043 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:node-admins
I0319 14:44:59.912374 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admins
I0319 14:44:59.919317 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/cluster-readers
I0319 14:44:59.926102 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/basic-users
I0319 14:44:59.930693 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/self-access-reviewers
I0319 14:44:59.937573 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/self-provisioners
I0319 14:44:59.984947 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:oauth-token-deleters
I0319 14:44:59.991374 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/cluster-status-binding
I0319 14:44:59.999490 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxiers
I0319 14:45:00.004985 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:sdn-readers
I0319 14:45:00.011501 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:webhooks
I0319 14:45:00.017226 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery-binding
I0319 14:45:00.025959 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:build-strategy-docker-binding
I0319 14:45:00.053983 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:build-strategy-source-binding
I0319 14:45:00.065771 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:build-strategy-jenkinspipeline-binding
I0319 14:45:00.080442 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:node-bootstrapper
I0319 14:45:00.106820 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:scope-impersonation
I0319 14:45:00.120689 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:nodes
I0319 14:45:00.128158 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0319 14:45:00.145126 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0319 14:45:00.161783 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0319 14:45:00.285482 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0319 14:45:00.291348 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0319 14:45:00.296984 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0319 14:45:00.302532 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0319 14:45:00.324032 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0319 14:45:00.331927 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0319 14:45:00.353405 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0319 14:45:00.360170 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0319 14:45:00.401821 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0319 14:45:00.420519 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0319 14:45:00.460947 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0319 14:45:00.474442 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0319 14:45:00.485329 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0319 14:45:00.495790 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0319 14:45:00.558563 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0319 14:45:00.563488 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0319 14:45:00.588297 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0319 14:45:00.594475 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0319 14:45:00.603829 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0319 14:45:00.609511 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:openshift:controller:build-controller
I0319 14:45:00.625901 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:openshift:controller:build-config-change-controller
I0319 14:45:00.701032 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:openshift:controller:deployer-controller
I0319 14:45:00.705191 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:openshift:controller:deploymentconfig-controller
I0319 14:45:00.710047 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:openshift:controller:template-instance-controller
I0319 14:45:00.719529 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/admin
I0319 14:45:00.725043 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:openshift:controller:origin-namespace-controller
I0319 14:45:00.730864 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:openshift:controller:serviceaccount-controller
I0319 14:45:00.741286 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:openshift:controller:serviceaccount-pull-secrets-controller
I0319 14:45:00.760965 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:openshift:controller:image-trigger-controller
I0319 14:45:00.766166 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:openshift:controller:service-serving-cert-controller
I0319 14:45:00.771867 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:openshift:controller:image-import-controller
I0319 14:45:00.778348 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:openshift:controller:sdn-controller
I0319 14:45:00.784171 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:openshift:controller:cluster-quota-reconciliation-controller
I0319 14:45:00.813266 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:openshift:controller:unidling-controller
I0319 14:45:00.835187 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:openshift:controller:service-ingress-ip-controller
I0319 14:45:00.841255 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:openshift:controller:pv-recycler-controller
E0319 14:45:00.846372 3263 controllers.go:118] Server isn't healthy yet. Waiting a little while.
I0319 14:45:00.849751 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:openshift:controller:resourcequota-controller
I0319 14:45:00.855144 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:openshift:controller:horizontal-pod-autoscaler
I0319 14:45:00.857540 3263 controller_utils.go:1032] Caches are synced for endpoints config controller
I0319 14:45:00.879350 3263 storage_rbac.go:218] updated clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler with additional subjects: [{ServiceAccount horizontal-pod-autoscaler openshift-infra}]
I0319 14:45:00.903300 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:openshift:controller:template-service-broker
I0319 14:45:00.946379 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0319 14:45:00.958114 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0319 14:45:00.971920 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0319 14:45:00.991090 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0319 14:45:01.004342 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0319 14:45:01.011419 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0319 14:45:01.072055 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
E0319 14:45:01.078541 3263 conntrack.go:42] conntrack returned error: error looking for path of conntrack: exec: "conntrack": executable file not found in $PATH
I0319 14:45:01.101948 3263 storage_rbac.go:220] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0319 14:45:01.130913 3263 storage_rbac.go:251] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0319 14:45:01.162891 3263 storage_rbac.go:251] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0319 14:45:01.275094 3263 storage_rbac.go:251] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0319 14:45:01.300868 3263 storage_rbac.go:251] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0319 14:45:01.324553 3263 storage_rbac.go:251] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0319 14:45:01.384879 3263 storage_rbac.go:251] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0319 14:45:01.408837 3263 storage_rbac.go:251] created role.rbac.authorization.k8s.io/shared-resource-viewer in openshift
I0319 14:45:01.458642 3263 storage_rbac.go:251] created role.rbac.authorization.k8s.io/system:node-config-reader in openshift-node
I0319 14:45:01.471683 3263 storage_rbac.go:251] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0319 14:45:01.511146 3263 storage_rbac.go:281] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0319 14:45:01.537064 3263 storage_rbac.go:281] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0319 14:45:01.552163 3263 storage_rbac.go:281] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0319 14:45:01.569561 3263 storage_rbac.go:281] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0319 14:45:01.592929 3263 storage_rbac.go:281] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0319 14:45:01.662420 3263 storage_rbac.go:281] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0319 14:45:01.680316 3263 storage_rbac.go:281] created rolebinding.rbac.authorization.k8s.io/shared-resource-viewers in openshift
I0319 14:45:01.757411 3263 storage_rbac.go:281] created rolebinding.rbac.authorization.k8s.io/system:node-config-reader in openshift-node
I0319 14:45:01.853557 3263 start_master.go:635] Started serviceaccount-token controller
I0319 14:45:01.854697 3263 controllermanager.go:108] Version: v1.7.6+a08f5eeb62
E0319 14:45:01.854772 3263 controllermanager.go:116] unable to register configz: register config "componentconfig" twice
I0319 14:45:01.859413 3263 leaderelection.go:179] attempting to acquire leader lease...
I0319 14:45:01.859617 3263 controller_utils.go:1025] Waiting for caches to sync for tokens controller
I0319 14:45:01.910334 3263 leaderelection.go:189] successfully acquired lease kube-system/kube-controller-manager
I0319 14:45:01.910617 3263 event.go:218] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"kube-system", Name:"kube-controller-manager", UID:"1abbd9f9-2b84-11e8-90e3-525400721618", APIVersion:"v1", ResourceVersion:"307", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minishift became leader
I0319 14:45:01.959786 3263 controller_utils.go:1032] Caches are synced for tokens controller
I0319 14:45:02.116035 3263 plugins.go:101] No cloud provider specified.
W0319 14:45:02.116150 3263 controllermanager.go:481] "serviceaccount-token" is disabled
W0319 14:45:02.116173 3263 controllermanager.go:450] "ttl" is disabled
I0319 14:45:02.245132 3263 controllermanager.go:466] Started "job"
I0319 14:45:02.245305 3263 jobcontroller.go:134] Starting job controller
I0319 14:45:02.245342 3263 controller_utils.go:1025] Waiting for caches to sync for job controller
I0319 14:45:02.437220 3263 controllermanager.go:466] Started "deployment"
I0319 14:45:02.437593 3263 deployment_controller.go:152] Starting deployment controller
I0319 14:45:02.437722 3263 controller_utils.go:1025] Waiting for caches to sync for deployment controller
I0319 14:45:02.486003 3263 controllermanager.go:466] Started "csrapproving"
I0319 14:45:02.486237 3263 certificate_controller.go:110] Starting certificate controller
I0319 14:45:02.486266 3263 controller_utils.go:1025] Waiting for caches to sync for certificate controller
I0319 14:45:02.559209 3263 controller_utils.go:1025] Waiting for caches to sync for scheduler controller
I0319 14:45:02.598260 3263 imagestream_controller.go:59] Starting image stream controller
I0319 14:45:02.604830 3263 start_master.go:698] Started "openshift.io/image-import"
I0319 14:45:02.605055 3263 scheduled_image_controller.go:59] Starting scheduled import controller
I0319 14:45:02.630032 3263 controllermanager.go:466] Started "namespace"
I0319 14:45:02.630286 3263 controller_utils.go:1025] Waiting for caches to sync for namespace controller
I0319 14:45:02.659447 3263 controller_utils.go:1032] Caches are synced for scheduler controller
I0319 14:45:02.659546 3263 leaderelection.go:179] attempting to acquire leader lease...
I0319 14:45:02.721386 3263 leaderelection.go:189] successfully acquired lease kube-system/kube-scheduler
I0319 14:45:02.721604 3263 event.go:218] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"kube-system", Name:"kube-scheduler", UID:"1b311598-2b84-11e8-90e3-525400721618", APIVersion:"v1", ResourceVersion:"350", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minishift became leader
I0319 14:45:02.804915 3263 start_master.go:698] Started "openshift.io/cluster-quota-reconciliation"
I0319 14:45:02.805032 3263 clusterquotamapping.go:160] Starting ClusterQuotaMappingController controller
I0319 14:45:02.809644 3263 controllermanager.go:466] Started "daemonset"
I0319 14:45:02.811770 3263 daemoncontroller.go:222] Starting daemon sets controller
I0319 14:45:02.811796 3263 controller_utils.go:1025] Waiting for caches to sync for daemon sets controller
I0319 14:45:02.935880 3263 start_master.go:698] Started "openshift.io/origin-namespace"
I0319 14:45:02.937345 3263 controllermanager.go:466] Started "replicaset"
I0319 14:45:02.937488 3263 replica_set.go:157] Starting replica set controller
I0319 14:45:02.937520 3263 controller_utils.go:1025] Waiting for caches to sync for replica set controller
I0319 14:45:03.011741 3263 controllermanager.go:466] Started "disruption"
W0319 14:45:03.011758 3263 controllermanager.go:450] "bootstrapsigner" is disabled
I0319 14:45:03.012226 3263 disruption.go:297] Starting disruption controller
I0319 14:45:03.012238 3263 controller_utils.go:1025] Waiting for caches to sync for disruption controller
I0319 14:45:03.061136 3263 controllermanager.go:466] Started "endpoint"
I0319 14:45:03.061312 3263 endpoints_controller.go:144] Starting endpoint controller
I0319 14:45:03.061344 3263 controller_utils.go:1025] Waiting for caches to sync for endpoint controller
W0319 14:45:03.067793 3263 shared_informer.go:298] resyncPeriod 120000000000 is smaller than resyncCheckPeriod 600000000000 and the informer has already started. Changing it to 600000000000
I0319 14:45:03.129889 3263 start_master.go:698] Started "openshift.io/service-serving-cert"
I0319 14:45:03.186176 3263 controllermanager.go:466] Started "replicationcontroller"
I0319 14:45:03.186687 3263 replication_controller.go:152] Starting RC controller
I0319 14:45:03.187280 3263 controller_utils.go:1025] Waiting for caches to sync for RC controller
I0319 14:45:03.459757 3263 controllermanager.go:466] Started "podgc"
I0319 14:45:03.459991 3263 gc_controller.go:76] Starting GC controller
I0319 14:45:03.460033 3263 controller_utils.go:1025] Waiting for caches to sync for GC controller
I0319 14:45:03.588718 3263 start_master.go:698] Started "openshift.io/build"
E0319 14:45:03.613815 3263 core.go:68] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail.
W0319 14:45:03.613851 3263 controllermanager.go:463] Skipping "service"
I0319 14:45:03.759546 3263 controllermanager.go:466] Started "attachdetach"
I0319 14:45:03.759797 3263 attach_detach_controller.go:242] Starting attach detach controller
I0319 14:45:03.759828 3263 controller_utils.go:1025] Waiting for caches to sync for attach detach controller
I0319 14:45:03.970397 3263 start_master.go:698] Started "openshift.io/image-trigger"
I0319 14:45:03.970773 3263 image_trigger_controller.go:214] Starting trigger controller
I0319 14:45:04.001791 3263 controllermanager.go:466] Started "resourcequota"
W0319 14:45:04.001815 3263 controllermanager.go:463] Skipping "csrsigning"
W0319 14:45:04.001823 3263 core.go:78] Unsuccessful parsing of cluster CIDR : invalid CIDR address:
W0319 14:45:04.001831 3263 core.go:82] Unsuccessful parsing of service CIDR : invalid CIDR address:
I0319 14:45:04.002173 3263 resource_quota_controller.go:237] Starting resource quota controller
I0319 14:45:04.002206 3263 controller_utils.go:1025] Waiting for caches to sync for resource quota controller
I0319 14:45:04.172556 3263 nodecontroller.go:224] Sending events to api server.
I0319 14:45:04.172831 3263 taint_controller.go:159] Sending events to api server.
I0319 14:45:04.172974 3263 controllermanager.go:466] Started "node"
I0319 14:45:04.173211 3263 nodecontroller.go:481] Starting node controller
I0319 14:45:04.173259 3263 controller_utils.go:1025] Waiting for caches to sync for node controller
I0319 14:45:04.177151 3263 start_master.go:698] Started "openshift.io/resourcequota"
I0319 14:45:04.177372 3263 resource_quota_controller.go:237] Starting resource quota controller
I0319 14:45:04.177432 3263 controller_utils.go:1025] Waiting for caches to sync for resource quota controller
I0319 14:45:04.234702 3263 start_master.go:698] Started "openshift.io/serviceaccount-pull-secrets"
I0319 14:45:04.344684 3263 start_master.go:698] Started "openshift.io/deployer"
I0319 14:45:04.344832 3263 factory.go:76] Starting deployer controller
I0319 14:45:04.372098 3263 controllermanager.go:466] Started "statefulset"
I0319 14:45:04.372143 3263 stateful_set.go:151] Starting stateful set controller
I0319 14:45:04.372208 3263 controller_utils.go:1025] Waiting for caches to sync for stateful set controller
I0319 14:45:04.393914 3263 start_master.go:698] Started "openshift.io/deploymentconfig"
I0319 14:45:04.393936 3263 factory.go:79] Starting deploymentconfig controller
I0319 14:45:04.537092 3263 start_master.go:698] Started "openshift.io/horizontalpodautoscaling"
I0319 14:45:04.537259 3263 horizontal.go:145] Starting HPA controller
I0319 14:45:04.537275 3263 controller_utils.go:1025] Waiting for caches to sync for HPA controller
I0319 14:45:04.787572 3263 controllermanager.go:466] Started "cronjob"
W0319 14:45:04.787682 3263 controllermanager.go:450] "tokencleaner" is disabled
W0319 14:45:04.787706 3263 core.go:116] Unsuccessful parsing of cluster CIDR : invalid CIDR address:
I0319 14:45:04.787740 3263 core.go:132] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
W0319 14:45:04.787762 3263 controllermanager.go:463] Skipping "route"
I0319 14:45:04.787912 3263 cronjob_controller.go:99] Starting CronJob Manager
E0319 14:45:05.109760 3263 util.go:45] Metric for serviceaccount_controller already registered
I0319 14:45:05.109825 3263 start_master.go:698] Started "openshift.io/serviceaccount"
I0319 14:45:05.109945 3263 serviceaccounts_controller.go:113] Starting service account controller
I0319 14:45:05.109972 3263 controller_utils.go:1025] Waiting for caches to sync for service account controller
I0319 14:45:05.254976 3263 controllermanager.go:466] Started "persistentvolume-binder"
I0319 14:45:05.255107 3263 pv_controller_base.go:270] Starting persistent volume controller
I0319 14:45:05.255131 3263 controller_utils.go:1025] Waiting for caches to sync for persistent volume controller
E0319 14:45:05.623528 3263 util.go:45] Metric for serviceaccount_controller already registered
I0319 14:45:05.623597 3263 controllermanager.go:466] Started "serviceaccount"
I0319 14:45:05.624025 3263 serviceaccounts_controller.go:113] Starting service account controller
I0319 14:45:05.624085 3263 controller_utils.go:1025] Waiting for caches to sync for service account controller
I0319 14:45:05.645803 3263 start_master.go:698] Started "openshift.io/build-config-change"
W0319 14:45:05.645827 3263 start_master.go:695] Skipping "openshift.io/sdn"
I0319 14:45:05.906847 3263 start_master.go:698] Started "openshift.io/ingress-ip"
I0319 14:45:05.978081 3263 start_master.go:698] Started "openshift.io/image-signature-import"
I0319 14:45:06.314531 3263 controllermanager.go:466] Started "garbagecollector"
W0319 14:45:06.314557 3263 controllermanager.go:450] "horizontalpodautoscaling" is disabled
I0319 14:45:06.314566 3263 garbagecollector.go:126] Starting garbage collector controller
I0319 14:45:06.314737 3263 controller_utils.go:1025] Waiting for caches to sync for garbage collector controller
I0319 14:45:06.350363 3263 start_master.go:698] Started "openshift.io/templateinstance"
I0319 14:45:06.685815 3263 start_master.go:698] Started "openshift.io/unidling"
I0319 14:45:06.685862 3263 start_master.go:701] Started Origin Controllers
I0319 14:45:06.729886 3263 controller_utils.go:1032] Caches are synced for service account controller
I0319 14:45:06.740206 3263 controller_utils.go:1032] Caches are synced for service account controller
I0319 14:45:06.740620 3263 controller_utils.go:1032] Caches are synced for namespace controller
E0319 14:45:06.744168 3263 actual_state_of_world.go:478] Failed to set statusUpdateNeeded to needed true because nodeName="localhost" does not exist
E0319 14:45:06.744193 3263 actual_state_of_world.go:492] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true because nodeName="localhost" does not exist
I0319 14:45:06.756083 3263 controller_utils.go:1032] Caches are synced for replica set controller
I0319 14:45:06.756127 3263 controller_utils.go:1032] Caches are synced for deployment controller
I0319 14:45:06.756161 3263 factory.go:83] Deployer controller caches are synced. Starting workers.
I0319 14:45:06.756258 3263 controller_utils.go:1032] Caches are synced for job controller
I0319 14:45:06.767256 3263 controller_utils.go:1032] Caches are synced for attach detach controller
I0319 14:45:06.767430 3263 controller_utils.go:1032] Caches are synced for GC controller
I0319 14:45:06.767781 3263 controller_utils.go:1032] Caches are synced for endpoint controller
I0319 14:45:06.783137 3263 controller_utils.go:1032] Caches are synced for stateful set controller
I0319 14:45:06.783166 3263 controller_utils.go:1032] Caches are synced for node controller
I0319 14:45:06.784239 3263 nodecontroller.go:542] Initializing eviction metric for zone:
W0319 14:45:06.784304 3263 nodecontroller.go:877] Missing timestamp for Node localhost. Assuming now as a timestamp.
I0319 14:45:06.784330 3263 nodecontroller.go:793] NodeController detected that zone is now in state Normal.
I0319 14:45:06.784553 3263 taint_controller.go:182] Starting NoExecuteTaintManager
I0319 14:45:06.785080 3263 event.go:218] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"187f297a-2b84-11e8-90e3-525400721618", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node localhost event: Registered Node localhost in NodeController
I0319 14:45:06.801122 3263 controller_utils.go:1032] Caches are synced for certificate controller
I0319 14:45:06.801400 3263 build_controller.go:243] Starting build controller
I0319 14:45:06.801449 3263 factory.go:86] deploymentconfig controller caches are synced. Starting workers.
I0319 14:45:06.803860 3263 controller_utils.go:1032] Caches are synced for RC controller
I0319 14:45:06.814237 3263 controller_utils.go:1032] Caches are synced for resource quota controller
I0319 14:45:06.825100 3263 controller_utils.go:1032] Caches are synced for daemon sets controller
I0319 14:45:06.825191 3263 controller_utils.go:1032] Caches are synced for disruption controller
I0319 14:45:06.825212 3263 disruption.go:305] Sending events to api server.
I0319 14:45:06.837387 3263 controller_utils.go:1032] Caches are synced for HPA controller
I0319 14:45:06.846218 3263 buildconfig_controller.go:185] Starting buildconfig controller
I0319 14:45:06.855243 3263 controller_utils.go:1032] Caches are synced for persistent volume controller
I0319 14:45:06.881456 3263 controller_utils.go:1032] Caches are synced for resource quota controller
I0319 14:45:06.935974 3263 controller_utils.go:1032] Caches are synced for garbage collector controller
I0319 14:45:06.936004 3263 garbagecollector.go:135] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0319 14:45:07.324827 3263 trace.go:76] Trace[1580600570]: "Create /api/v1/namespaces/default/pods" (started: 2018-03-19 14:45:06.78069866 +0000 UTC) (total time: 544.100747ms):
Trace[1580600570]: [532.834715ms] [532.756551ms] About to store object in database
I0319 14:45:07.331859 3263 event.go:218] Event(v1.ObjectReference{Kind:"Job", Namespace:"default", Name:"persistent-volume-setup", UID:"1a6754df-2b84-11e8-90e3-525400721618", APIVersion:"batch", ResourceVersion:"291", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: persistent-volume-setup-m5fqm
I0319 14:45:07.368915 3263 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"persistent-volume-setup-m5fqm", UID:"1df5f488-2b84-11e8-90e3-525400721618", APIVersion:"v1", ResourceVersion:"703", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned persistent-volume-setup-m5fqm to localhost
I0319 14:45:07.415480 3263 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "pvdir" (UniqueName: "kubernetes.io/host-path/1df5f488-2b84-11e8-90e3-525400721618-pvdir") pod "persistent-volume-setup-m5fqm" (UID: "1df5f488-2b84-11e8-90e3-525400721618")
I0319 14:45:07.415529 3263 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "pvinstaller-token-hgd6b" (UniqueName: "kubernetes.io/secret/1df5f488-2b84-11e8-90e3-525400721618-pvinstaller-token-hgd6b") pod "persistent-volume-setup-m5fqm" (UID: "1df5f488-2b84-11e8-90e3-525400721618")
I0319 14:45:07.472619 3263 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"docker-registry-1-deploy", UID:"1e01fafd-2b84-11e8-90e3-525400721618", APIVersion:"v1", ResourceVersion:"722", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned docker-registry-1-deploy to localhost
I0319 14:45:07.472925 3263 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"router-1-deploy", UID:"1e0210b5-2b84-11e8-90e3-525400721618", APIVersion:"v1", ResourceVersion:"723", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned router-1-deploy to localhost
E0319 14:45:07.507742 3263 jobcontroller.go:348] Error syncing job: Operation cannot be fulfilled on jobs.batch "persistent-volume-setup": the object has been modified; please apply your changes to the latest version and try again
I0319 14:45:07.522582 3263 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "deployer-token-zrld4" (UniqueName: "kubernetes.io/secret/1e01fafd-2b84-11e8-90e3-525400721618-deployer-token-zrld4") pod "docker-registry-1-deploy" (UID: "1e01fafd-2b84-11e8-90e3-525400721618")
I0319 14:45:07.522630 3263 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "deployer-token-zrld4" (UniqueName: "kubernetes.io/secret/1e0210b5-2b84-11e8-90e3-525400721618-deployer-token-zrld4") pod "router-1-deploy" (UID: "1e0210b5-2b84-11e8-90e3-525400721618")
W0319 14:45:07.595671 3263 raw.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-rc6ebb2ed405b444c90f1463754a005d6.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-rc6ebb2ed405b444c90f1463754a005d6.scope: no such file or directory
W0319 14:45:07.624960 3263 container.go:354] Failed to create summary reader for "/system.slice/run-rc6ebb2ed405b444c90f1463754a005d6.scope": none of the resources are being tracked.
I0319 14:45:08.485188 3263 trace.go:76] Trace[82686402]: "Get /api/v1/namespaces/default/pods/docker-registry-1-deploy" (started: 2018-03-19 14:45:07.64674915 +0000 UTC) (total time: 838.416472ms):
Trace[82686402]: [838.350472ms] [838.341901ms] About to write a response
I0319 14:45:08.487357 3263 trace.go:76] Trace[2037486506]: "Get /api/v1/namespaces/kube-system/secrets/builder-token-t56t4" (started: 2018-03-19 14:45:07.882600727 +0000 UTC) (total time: 604.729639ms):
Trace[2037486506]: [604.667382ms] [604.658606ms] About to write a response
I0319 14:45:08.487462 3263 trace.go:76] Trace[1315356330]: "Create /api/v1/namespaces/default/events" (started: 2018-03-19 14:45:07.574017436 +0000 UTC) (total time: 913.427605ms):
Trace[1315356330]: [913.303734ms] [913.147505ms] Object stored in database
I0319 14:45:08.488700 3263 trace.go:76] Trace[413767256]: "GuaranteedUpdate etcd3: *api.Secret" (started: 2018-03-19 14:45:07.57331999 +0000 UTC) (total time: 915.35896ms):
Trace[413767256]: [298.689971ms] [298.689971ms] initial value restored
Trace[413767256]: [915.304942ms] [616.384245ms] Transaction committed
I0319 14:45:08.489017 3263 trace.go:76] Trace[1420910093]: "Create /api/v1/namespaces/kube-public/secrets" (started: 2018-03-19 14:45:07.880216189 +0000 UTC) (total time: 608.766193ms):
Trace[1420910093]: [608.729899ms] [606.593528ms] Object stored in database
I0319 14:45:08.489109 3263 trace.go:76] Trace[639088434]: "Get /api/v1/namespaces/openshift-infra/serviceaccounts/deployer" (started: 2018-03-19 14:45:07.627213749 +0000 UTC) (total time: 861.878095ms):
Trace[639088434]: [861.790416ms] [861.762519ms] About to write a response
I0319 14:45:08.490146 3263 trace.go:76] Trace[2108325572]: "Get /api/v1/namespaces/default/secrets/builder-token-v7pfv" (started: 2018-03-19 14:45:07.87620021 +0000 UTC) (total time: 613.925039ms):
Trace[2108325572]: [613.892691ms] [613.880857ms] About to write a response
I0319 14:45:08.490350 3263 trace.go:76] Trace[349078733]: "Get /api/v1/namespaces/openshift-node/serviceaccounts/deployer" (started: 2018-03-19 14:45:07.877926284 +0000 UTC) (total time: 612.406877ms):
Trace[349078733]: [612.317026ms] [612.309337ms] About to write a response
I0319 14:45:08.492230 3263 trace.go:76] Trace[1950029568]: "Get /api/v1/namespaces/default/secrets/default-token-gl49z" (started: 2018-03-19 14:45:07.895175069 +0000 UTC) (total time: 597.027669ms):
Trace[1950029568]: [595.697649ms] [595.68853ms] About to write a response
I0319 14:45:08.493452 3263 trace.go:76] Trace[1325475946]: "Get /api/v1/namespaces/default/secrets/pvinstaller-token-hgd6b" (started: 2018-03-19 14:45:07.631694997 +0000 UTC) (total time: 861.736749ms):
Trace[1325475946]: [856.483881ms] [856.476125ms] About to write a response
I0319 14:45:08.493764 3263 trace.go:76] Trace[100691707]: "Get /api/v1/namespaces/default/secrets/deployer-token-zrld4" (started: 2018-03-19 14:45:07.744730619 +0000 UTC) (total time: 749.012761ms):
Trace[100691707]: [743.650524ms] [743.642506ms] About to write a response
I0319 14:45:08.494174 3263 trace.go:76] Trace[1095842206]: "Update /api/v1/namespaces/openshift-infra/secrets/unidling-controller-token-bn44g" (started: 2018-03-19 14:45:07.5731578 +0000 UTC) (total time: 920.99286ms):
Trace[1095842206]: [915.567098ms] [915.434173ms] Object stored in database
I0319 14:45:08.494288 3263 trace.go:76] Trace[2090912424]: "Create /api/v1/namespaces/openshift-infra/secrets" (started: 2018-03-19 14:45:07.885533434 +0000 UTC) (total time: 608.734677ms):
Trace[2090912424]: [603.730582ms] [603.491833ms] Object stored in database
I0319 14:45:08.505809 3263 trace.go:76] Trace[1749054516]: "GuaranteedUpdate etcd3: *api.Namespace" (started: 2018-03-19 14:45:07.872653387 +0000 UTC) (total time: 633.133093ms):
Trace[1749054516]: [613.130557ms] [613.130557ms] initial value restored
I0319 14:45:08.506034 3263 trace.go:76] Trace[1065075006]: "Update /api/v1/namespaces/kube-system" (started: 2018-03-19 14:45:07.872579949 +0000 UTC) (total time: 633.434702ms):
Trace[1065075006]: [633.301225ms] [633.252512ms] Object stored in database
I0319 14:45:08.510306 3263 trace.go:76] Trace[447802206]: "GuaranteedUpdate etcd3: *api.ServiceAccount" (started: 2018-03-19 14:45:07.884802308 +0000 UTC) (total time: 625.481538ms):
Trace[447802206]: [605.735949ms] [605.735949ms] initial value restored
I0319 14:45:08.510507 3263 trace.go:76] Trace[1448571167]: "Update /api/v1/namespaces/kube-system/serviceaccounts/service-account-controller" (started: 2018-03-19 14:45:07.884715848 +0000 UTC) (total time: 625.77135ms):
Trace[1448571167]: [625.652207ms] [625.596985ms] Object stored in database
I0319 14:45:08.511104 3263 trace.go:76] Trace[1090275838]: "GuaranteedUpdate etcd3: *apps.DeploymentConfig" (started: 2018-03-19 14:45:07.624510011 +0000 UTC) (total time: 886.57486ms):
Trace[1090275838]: [863.206158ms] [863.206158ms] initial value restored
I0319 14:45:08.511338 3263 trace.go:76] Trace[1364587696]: "Update /apis/apps.openshift.io/v1/namespaces/default/deploymentconfigs/router/status" (started: 2018-03-19 14:45:07.591334068 +0000 UTC) (total time: 919.982502ms):
Trace[1364587696]: [919.819734ms] [886.673969ms] Object stored in database
I0319 14:45:08.512377 3263 trace.go:76] Trace[931687527]: "GuaranteedUpdate etcd3: *api.ServiceAccount" (started: 2018-03-19 14:45:07.881942689 +0000 UTC) (total time: 630.410847ms):
Trace[931687527]: [606.594012ms] [606.594012ms] initial value restored
I0319 14:45:08.512543 3263 trace.go:76] Trace[1198353244]: "Update /api/v1/namespaces/kube-system/serviceaccounts/deployer" (started: 2018-03-19 14:45:07.880536664 +0000 UTC) (total time: 631.987891ms):
Trace[1198353244]: [631.894632ms] [630.51588ms] Object stored in database
I0319 14:45:08.512816 3263 trace.go:76] Trace[1211497468]: "GuaranteedUpdate etcd3: *api.ServiceAccount" (started: 2018-03-19 14:45:07.88089006 +0000 UTC) (total time: 631.90906ms):
Trace[1211497468]: [608.500569ms] [608.500569ms] initial value restored
I0319 14:45:08.512951 3263 trace.go:76] Trace[1463340448]: "Update /api/v1/namespaces/openshift/serviceaccounts/builder" (started: 2018-03-19 14:45:07.879392606 +0000 UTC) (total time: 633.542042ms):
Trace[1463340448]: [633.466177ms] [631.997ms] Object stored in database
I0319 14:45:08.513368 3263 trace.go:76] Trace[1549222956]: "GuaranteedUpdate etcd3: *api.ServiceAccount" (started: 2018-03-19 14:45:07.882990231 +0000 UTC) (total time: 630.361437ms):
Trace[1549222956]: [607.769757ms] [607.769757ms] initial value restored
I0319 14:45:08.513515 3263 trace.go:76] Trace[1291778181]: "Update /api/v1/namespaces/openshift-infra/serviceaccounts/builder" (started: 2018-03-19 14:45:07.881778136 +0000 UTC) (total time: 631.670612ms):
Trace[1291778181]: [631.631229ms] [630.448762ms] Object stored in database
I0319 14:45:08.513791 3263 trace.go:76] Trace[1898827014]: "GuaranteedUpdate etcd3: *api.ServiceAccount" (started: 2018-03-19 14:45:07.883255063 +0000 UTC) (total time: 630.517341ms):
Trace[1898827014]: [606.447969ms] [606.447969ms] initial value restored
I0319 14:45:08.513921 3263 trace.go:76] Trace[520036673]: "Update /api/v1/namespaces/openshift/serviceaccounts/default" (started: 2018-03-19 14:45:07.881563864 +0000 UTC) (total time: 632.305287ms):
Trace[520036673]: [632.26731ms] [630.600604ms] Object stored in database
I0319 14:45:08.514196 3263 trace.go:76] Trace[113937528]: "GuaranteedUpdate etcd3: *api.ServiceAccount" (started: 2018-03-19 14:45:07.879568124 +0000 UTC) (total time: 634.612002ms):
Trace[113937528]: [610.365377ms] [610.365377ms] initial value restored
I0319 14:45:08.514336 3263 trace.go:76] Trace[1648062716]: "Update /api/v1/namespaces/default/serviceaccounts/deployer" (started: 2018-03-19 14:45:07.879489089 +0000 UTC) (total time: 634.783891ms):
Trace[1648062716]: [634.747817ms] [634.696209ms] Object stored in database
I0319 14:45:08.514807 3263 trace.go:76] Trace[1208764002]: "GuaranteedUpdate etcd3: *apps.DeploymentConfig" (started: 2018-03-19 14:45:07.624747561 +0000 UTC) (total time: 890.039584ms):
Trace[1208764002]: [861.024396ms] [861.024396ms] initial value restored
I0319 14:45:08.515067 3263 trace.go:76] Trace[551245534]: "Update /apis/apps.openshift.io/v1/namespaces/default/deploymentconfigs/docker-registry/status" (started: 2018-03-19 14:45:07.59164079 +0000 UTC) (total time: 923.391359ms):
Trace[551245534]: [923.218509ms] [890.124376ms] Object stored in database
I0319 14:45:11.080245 3263 trace.go:76] Trace[988474884]: "Create /apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports" (started: 2018-03-19 14:45:06.810487303 +0000 UTC) (total time: 4.269733795s):
Trace[988474884]: [4.267883254s] [4.255013892s] Object stored in database
I0319 14:45:11.091509 3263 trace.go:76] Trace[1802367431]: "Create /apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports" (started: 2018-03-19 14:45:06.810187614 +0000 UTC) (total time: 4.281300092s):
Trace[1802367431]: [4.280461141s] [4.269291078s] Object stored in database
I0319 14:45:11.993798 3263 trace.go:76] Trace[1152029912]: "Create /apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports" (started: 2018-03-19 14:45:06.810337864 +0000 UTC) (total time: 5.183437851s):
Trace[1152029912]: [5.182116261s] [5.169330047s] Object stored in database
2018-03-19 14:45:15.267723 W | etcdserver: apply entries took too long [132.000446ms for 1 entries]
2018-03-19 14:45:15.267749 W | etcdserver: avoid queries with large range/delete range!
W0319 14:45:15.683779 3263 pod_container_deletor.go:77] Container "6b6b63203f02c051a3a571f75f155fb420c861ea82f42cd602d9724b76529a33" not found in pod's containers
W0319 14:45:15.769169 3263 pod_container_deletor.go:77] Container "045e264d515a471b505958134b2160c7bcdf8ff5229704e82b5c5134a63c138c" not found in pod's containers
I0319 14:45:16.332382 3263 trace.go:76] Trace[1559731015]: "Create /apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports" (started: 2018-03-19 14:45:06.810624503 +0000 UTC) (total time: 9.521733027s):
Trace[1559731015]: [9.518633925s] [9.50569934s] Object stored in database
I0319 14:45:16.854502 3263 trace.go:76] Trace[1412629945]: "Create /apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports" (started: 2018-03-19 14:45:06.810782771 +0000 UTC) (total time: 10.043674998s):
Trace[1412629945]: [10.040713854s] [10.027749794s] Object stored in database
I0319 14:45:18.494724 3263 trace.go:76] Trace[514494018]: "Create /apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports" (started: 2018-03-19 14:45:11.093231867 +0000 UTC) (total time: 7.401465801s):
Trace[514494018]: [7.399337673s] [7.399198173s] Object stored in database
I0319 14:45:19.530258 3263 trace.go:76] Trace[1487671464]: "Create /apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports" (started: 2018-03-19 14:45:11.996003769 +0000 UTC) (total time: 7.534235304s):
Trace[1487671464]: [7.531989982s] [7.531732314s] Object stored in database
I0319 14:45:20.338240 3263 trace.go:76] Trace[766256545]: "Create /apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports" (started: 2018-03-19 14:45:16.381472783 +0000 UTC) (total time: 3.956738903s):
Trace[766256545]: [3.932581271s] [3.93240454s] Object stored in database
I0319 14:45:20.466592 3263 trace.go:76] Trace[505118125]: "Create /apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports" (started: 2018-03-19 14:45:11.082413382 +0000 UTC) (total time: 9.384150196s):
Trace[505118125]: [9.381228185s] [9.381076047s] Object stored in database
I0319 14:45:20.706616 3263 event.go:218] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"default", Name:"router-1", UID:"1df6c686-2b84-11e8-90e3-525400721618", APIVersion:"v1", ResourceVersion:"981", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: router-1-v8kzg
I0319 14:45:20.755000 3263 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"router-1-v8kzg", UID:"25f07901-2b84-11e8-90e3-525400721618", APIVersion:"v1", ResourceVersion:"983", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned router-1-v8kzg to localhost
I0319 14:45:20.880586 3263 trace.go:76] Trace[576788321]: "Create /apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports" (started: 2018-03-19 14:45:18.508403128 +0000 UTC) (total time: 2.372155427s):
Trace[576788321]: [2.370244369s] [2.370066999s] Object stored in database
I0319 14:45:20.887526 3263 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "server-certificate" (UniqueName: "kubernetes.io/secret/25f07901-2b84-11e8-90e3-525400721618-server-certificate") pod "router-1-v8kzg" (UID: "25f07901-2b84-11e8-90e3-525400721618")
I0319 14:45:20.887572 3263 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "router-token-mvv7l" (UniqueName: "kubernetes.io/secret/25f07901-2b84-11e8-90e3-525400721618-router-token-mvv7l") pod "router-1-v8kzg" (UID: "25f07901-2b84-11e8-90e3-525400721618")
W0319 14:45:21.082056 3263 container.go:354] Failed to create summary reader for "/system.slice/run-rb863521aec50410abc56aaefcb11ca78.scope": none of the resources are being tracked.
I0319 14:45:23.170208 3263 trace.go:76] Trace[1919788794]: "Create /apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports" (started: 2018-03-19 14:45:16.860923661 +0000 UTC) (total time: 6.307326843s):
Trace[1919788794]: [6.298488811s] [6.298343325s] Object stored in database
I0319 14:45:23.665163 3263 event.go:218] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"default", Name:"docker-registry-1", UID:"1df6a7eb-2b84-11e8-90e3-525400721618", APIVersion:"v1", ResourceVersion:"1021", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: docker-registry-1-cnsrm
I0319 14:45:23.682266 3263 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"docker-registry-1-cnsrm", UID:"27b203fe-2b84-11e8-90e3-525400721618", APIVersion:"v1", ResourceVersion:"1023", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned docker-registry-1-cnsrm to localhost
I0319 14:45:23.809478 3263 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "registry-storage" (UniqueName: "kubernetes.io/host-path/27b203fe-2b84-11e8-90e3-525400721618-registry-storage") pod "docker-registry-1-cnsrm" (UID: "27b203fe-2b84-11e8-90e3-525400721618")
I0319 14:45:23.809510 3263 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "registry-token-ww88q" (UniqueName: "kubernetes.io/secret/27b203fe-2b84-11e8-90e3-525400721618-registry-token-ww88q") pod "docker-registry-1-cnsrm" (UID: "27b203fe-2b84-11e8-90e3-525400721618")
I0319 14:45:24.807189 3263 trace.go:76] Trace[452428890]: "Create /apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports" (started: 2018-03-19 14:45:20.379050347 +0000 UTC) (total time: 4.428113241s):
Trace[452428890]: [4.424825908s] [4.418062224s] Object stored in database
I0319 14:45:24.842574 3263 trace.go:76] Trace[827002517]: "Create /apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports" (started: 2018-03-19 14:45:19.536499761 +0000 UTC) (total time: 5.30604596s):
Trace[827002517]: [5.299794699s] [5.299482272s] Object stored in database
W0319 14:45:38.189311 3263 conversion.go:110] Could not get instant cpu stats: different number of cpus
2018-03-19 14:45:39.572056 W | wal: sync duration of 1.022793261s, expected less than 1s
I0319 14:45:39.606134 3263 trace.go:76] Trace[1621517563]: "Get /api/v1/persistentvolumes/pv0039" (started: 2018-03-19 14:45:38.765329608 +0000 UTC) (total time: 840.788614ms):
Trace[1621517563]: [840.788614ms] [840.781448ms] END
I0319 14:45:39.606695 3263 trace.go:76] Trace[153202354]: "Get /api/v1/namespaces/kube-system/configmaps/kube-controller-manager" (started: 2018-03-19 14:45:38.781619147 +0000 UTC) (total time: 825.053915ms):
Trace[153202354]: [824.986334ms] [824.979379ms] About to write a response
I0319 14:45:39.610875 3263 trace.go:76] Trace[857544754]: "Get /api/v1/namespaces/default" (started: 2018-03-19 14:45:38.888442644 +0000 UTC) (total time: 722.387661ms):
Trace[857544754]: [722.338915ms] [722.329421ms] About to write a response
I0319 14:45:39.613602 3263 trace.go:76] Trace[1995040559]: "List /apis/admissionregistration.k8s.io/v1alpha1/externaladmissionhookconfigurations" (started: 2018-03-19 14:45:38.932948535 +0000 UTC) (total time: 680.534205ms):
Trace[1995040559]: [680.362876ms] [680.356632ms] Listing from storage done
I0319 14:45:41.340890 3263 reconciler.go:186] operationExecutor.UnmountVolume started for volume "deployer-token-zrld4" (UniqueName: "kubernetes.io/secret/1e0210b5-2b84-11e8-90e3-525400721618-deployer-token-zrld4") pod "1e0210b5-2b84-11e8-90e3-525400721618" (UID: "1e0210b5-2b84-11e8-90e3-525400721618")
I0319 14:45:41.437345 3263 operation_generator.go:542] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e0210b5-2b84-11e8-90e3-525400721618-deployer-token-zrld4" (OuterVolumeSpecName: "deployer-token-zrld4") pod "1e0210b5-2b84-11e8-90e3-525400721618" (UID: "1e0210b5-2b84-11e8-90e3-525400721618"). InnerVolumeSpecName "deployer-token-zrld4". PluginName "kubernetes.io/secret", VolumeGidValue ""
I0319 14:45:41.441144 3263 reconciler.go:290] Volume detached for volume "deployer-token-zrld4" (UniqueName: "kubernetes.io/secret/1e0210b5-2b84-11e8-90e3-525400721618-deployer-token-zrld4") on node "localhost" DevicePath ""
W0319 14:45:41.483936 3263 status_manager.go:478] Failed to update status for pod "router-1-deploy_default(1e0210b5-2b84-11e8-90e3-525400721618)": Operation cannot be fulfilled on pods "router-1-deploy": StorageError: invalid object, Code: 4, Key: /kubernetes.io/pods/default/router-1-deploy, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 1e0210b5-2b84-11e8-90e3-525400721618, UID in object meta:
E0319 14:45:41.836484 3263 kuberuntime_container.go:66] Can't make a ref to pod "router-1-deploy_default(1e0210b5-2b84-11e8-90e3-525400721618)", container deployment: selfLink was empty, can't make reference
W0319 14:45:42.203395 3263 pod_container_deletor.go:77] Container "045e264d515a471b505958134b2160c7bcdf8ff5229704e82b5c5134a63c138c" not found in pod's containers
I0319 14:45:44.464593 3263 reconciler.go:186] operationExecutor.UnmountVolume started for volume "deployer-token-zrld4" (UniqueName: "kubernetes.io/secret/1e01fafd-2b84-11e8-90e3-525400721618-deployer-token-zrld4") pod "1e01fafd-2b84-11e8-90e3-525400721618" (UID: "1e01fafd-2b84-11e8-90e3-525400721618")
I0319 14:45:44.515946 3263 operation_generator.go:542] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e01fafd-2b84-11e8-90e3-525400721618-deployer-token-zrld4" (OuterVolumeSpecName: "deployer-token-zrld4") pod "1e01fafd-2b84-11e8-90e3-525400721618" (UID: "1e01fafd-2b84-11e8-90e3-525400721618"). InnerVolumeSpecName "deployer-token-zrld4". PluginName "kubernetes.io/secret", VolumeGidValue ""
I0319 14:45:44.564972 3263 reconciler.go:290] Volume detached for volume "deployer-token-zrld4" (UniqueName: "kubernetes.io/secret/1e01fafd-2b84-11e8-90e3-525400721618-deployer-token-zrld4") on node "localhost" DevicePath ""
W0319 14:45:45.291184 3263 pod_container_deletor.go:77] Container "6b6b63203f02c051a3a571f75f155fb420c861ea82f42cd602d9724b76529a33" not found in pod's containers
W0319 14:45:48.207258 3263 conversion.go:110] Could not get instant cpu stats: different number of cpus
I0319 14:45:52.448092 3263 trace.go:76] Trace[1205774360]: "List /apis/admissionregistration.k8s.io/v1alpha1/externaladmissionhookconfigurations" (started: 2018-03-19 14:45:51.768997631 +0000 UTC) (total time: 679.0644ms):
Trace[1205774360]: [678.975034ms] [678.969647ms] Listing from storage done
I0319 14:45:52.448143 3263 trace.go:76] Trace[741497870]: "Get /api/v1/namespaces/kube-system/configmaps/kube-scheduler" (started: 2018-03-19 14:45:51.747326575 +0000 UTC) (total time: 700.795878ms):
Trace[741497870]: [700.740156ms] [700.734322ms] About to write a response
2018-03-19 14:45:58.006695 W | wal: sync duration of 3.282296971s, expected less than 1s
I0319 14:45:58.032432 3263 trace.go:76] Trace[1485289460]: "Get /api/v1/namespaces/kube-system/secrets/cronjob-controller-token-nc7jc" (started: 2018-03-19 14:45:54.904812298 +0000 UTC) (total time: 3.127586367s):
Trace[1485289460]: [3.127303545s] [3.127298253s] About to write a response
I0319 14:45:58.032506 3263 trace.go:76] Trace[814668223]: "Create /api/v1/persistentvolumes" (started: 2018-03-19 14:45:54.990321934 +0000 UTC) (total time: 3.04216811s):
Trace[814668223]: [3.042071777s] [3.041902724s] Object stored in database
I0319 14:45:58.037579 3263 trace.go:76] Trace[740389399]: "List /apis/admissionregistration.k8s.io/v1alpha1/externaladmissionhookconfigurations" (started: 2018-03-19 14:45:55.665730997 +0000 UTC) (total time: 2.371819198s):
Trace[740389399]: [2.371715888s] [2.371709895s] Listing from storage done
I0319 14:45:58.039298 3263 trace.go:76] Trace[907803265]: "Get /api/v1/namespaces/kube-system/configmaps/kube-scheduler" (started: 2018-03-19 14:45:56.675785907 +0000 UTC) (total time: 1.363464487s):
Trace[907803265]: [1.363367297s] [1.363361803s] About to write a response
2018-03-19 14:45:58.473912 W | etcdserver: apply entries took too long [336.501406ms for 1 entries]
2018-03-19 14:45:58.473984 W | etcdserver: avoid queries with large range/delete range!
2018-03-19 14:45:58.942642 W | etcdserver: apply entries took too long [351.552868ms for 1 entries]
2018-03-19 14:45:58.942980 W | etcdserver: avoid queries with large range/delete range!
I0319 14:46:02.548308 3263 trace.go:76] Trace[85287709]: "GuaranteedUpdate etcd3: *api.PersistentVolume" (started: 2018-03-19 14:46:02.043340813 +0000 UTC) (total time: 504.942447ms):
Trace[85287709]: [500.854918ms] [500.854918ms] initial value restored
I0319 14:46:02.548484 3263 trace.go:76] Trace[772254993]: "Update /api/v1/persistentvolumes/pv0065/status" (started: 2018-03-19 14:46:02.043219754 +0000 UTC) (total time: 505.246971ms):
Trace[772254993]: [505.168553ms] [505.076463ms] Object stored in database
I0319 14:46:04.289917 3263 trace.go:76] Trace[840349424]: "List /apis/admissionregistration.k8s.io/v1alpha1/externaladmissionhookconfigurations" (started: 2018-03-19 14:46:03.546055264 +0000 UTC) (total time: 743.84067ms):
Trace[840349424]: [743.775003ms] [743.768747ms] Listing from storage done
I0319 14:46:04.290685 3263 trace.go:76] Trace[952462499]: "Get /api/v1/persistentvolumes/pv0067" (started: 2018-03-19 14:46:03.534299937 +0000 UTC) (total time: 756.353928ms):
Trace[952462499]: [756.353928ms] [756.348864ms] END
I0319 14:46:04.296262 3263 trace.go:76] Trace[1473605813]: "GuaranteedUpdate etcd3: *api.PersistentVolume" (started: 2018-03-19 14:46:03.298608973 +0000 UTC) (total time: 997.635418ms):
Trace[1473605813]: [991.615778ms] [991.615778ms] initial value restored
I0319 14:46:04.296320 3263 trace.go:76] Trace[1736657497]: "Update /api/v1/persistentvolumes/pv0066/status" (started: 2018-03-19 14:46:03.298531846 +0000 UTC) (total time: 997.778645ms):
Trace[1736657497]: [997.742012ms] [997.683394ms] Object stored in database
I0319 14:46:04.812977 3263 trace.go:76] Trace[2061750939]: "GuaranteedUpdate etcd3: *api.ConfigMap" (started: 2018-03-19 14:46:04.295014222 +0000 UTC) (total time: 517.945193ms):
Trace[2061750939]: [512.15281ms] [512.15281ms] initial value restored
I0319 14:46:04.813036 3263 trace.go:76] Trace[527624655]: "Update /api/v1/namespaces/kube-system/configmaps/kube-controller-manager" (started: 2018-03-19 14:46:04.294936077 +0000 UTC) (total time: 518.089942ms):
Trace[527624655]: [518.061478ms] [518.003877ms] Object stored in database
I0319 14:46:05.667757 3263 trace.go:76] Trace[1830471681]: "Create /apis/build.openshift.io/v1/namespaces/myproject/buildconfigs/nodejs-ex/instantiate" (started: 2018-03-19 14:46:04.927560459 +0000 UTC) (total time: 740.170993ms):
Trace[1830471681]: [740.170993ms] [694.267361ms] END
E0319 14:46:05.669333 3263 buildconfig_controller.go:137] gave up on Build for BuildConfig myproject/nodejs-ex (0) due to fatal error: the LastVersion(1) on build config myproject/nodejs-ex does not match the build request LastVersion(0)
I0319 14:46:05.682180 3263 trace.go:76] Trace[1365753553]: "Create /apis/build.openshift.io/v1/namespaces/myproject/builds" (started: 2018-03-19 14:46:05.168592107 +0000 UTC) (total time: 513.553793ms):
Trace[1365753553]: [498.248128ms] [498.101484ms] About to store object in database
I0319 14:46:05.697449 3263 trace.go:76] Trace[1416002728]: "Create /apis/build.openshift.io/v1/namespaces/myproject/buildconfigs/nodejs-ex/instantiate" (started: 2018-03-19 14:46:04.970584524 +0000 UTC) (total time: 726.83945ms):
Trace[1416002728]: [726.772159ms] [694.623561ms] Object stored in database
I0319 14:46:05.786007 3263 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"myproject", Name:"nodejs-ex-1-build", UID:"40c80d29-2b84-11e8-90e3-525400721618", APIVersion:"v1", ResourceVersion:"1233", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned nodejs-ex-1-build to localhost
I0319 14:46:05.915885 3263 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "crio-socket" (UniqueName: "kubernetes.io/host-path/40c80d29-2b84-11e8-90e3-525400721618-crio-socket") pod "nodejs-ex-1-build" (UID: "40c80d29-2b84-11e8-90e3-525400721618")
I0319 14:46:05.915916 3263 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "docker-socket" (UniqueName: "kubernetes.io/host-path/40c80d29-2b84-11e8-90e3-525400721618-docker-socket") pod "nodejs-ex-1-build" (UID: "40c80d29-2b84-11e8-90e3-525400721618")
I0319 14:46:05.915951 3263 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "buildworkdir" (UniqueName: "kubernetes.io/empty-dir/40c80d29-2b84-11e8-90e3-525400721618-buildworkdir") pod "nodejs-ex-1-build" (UID: "40c80d29-2b84-11e8-90e3-525400721618")
I0319 14:46:05.916042 3263 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "builder-dockercfg-86fjt-push" (UniqueName: "kubernetes.io/secret/40c80d29-2b84-11e8-90e3-525400721618-builder-dockercfg-86fjt-push") pod "nodejs-ex-1-build" (UID: "40c80d29-2b84-11e8-90e3-525400721618")
I0319 14:46:05.916118 3263 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "builder-token-2vv7v" (UniqueName: "kubernetes.io/secret/40c80d29-2b84-11e8-90e3-525400721618-builder-token-2vv7v") pod "nodejs-ex-1-build" (UID: "40c80d29-2b84-11e8-90e3-525400721618")
W0319 14:46:06.125763 3263 container.go:354] Failed to create summary reader for "/system.slice/run-r37aeecca92834376b61ed57e1b4b8d23.scope": none of the resources are being tracked.
I0319 14:46:06.697606 3263 trace.go:76] Trace[351415466]: "Create /api/v1/persistentvolumes" (started: 2018-03-19 14:46:06.016509137 +0000 UTC) (total time: 681.019856ms):
Trace[351415466]: [106.683295ms] [106.683295ms] About to convert to expected version
Trace[351415466]: [680.819135ms] [573.892297ms] Object stored in database
I0319 14:46:06.697783 3263 trace.go:76] Trace[127792680]: "Get /api/v1/namespaces/myproject/secrets/builder-token-2vv7v" (started: 2018-03-19 14:46:06.128285192 +0000 UTC) (total time: 569.46357ms):
Trace[127792680]: [569.074076ms] [569.048466ms] About to write a response
I0319 14:46:06.698222 3263 trace.go:76] Trace[394499678]: "Create /api/v1/namespaces/myproject/events" (started: 2018-03-19 14:46:06.030374534 +0000 UTC) (total time: 667.822483ms):
Trace[394499678]: [667.728466ms] [667.315894ms] Object stored in database
I0319 14:46:07.271929 3263 trace.go:76] Trace[1597278408]: "List /apis/admissionregistration.k8s.io/v1alpha1/externaladmissionhookconfigurations" (started: 2018-03-19 14:46:06.6609268 +0000 UTC) (total time: 610.958268ms):
Trace[1597278408]: [610.824899ms] [610.81639ms] Listing from storage done
I0319 14:46:07.272841 3263 trace.go:76] Trace[123392422]: "Get /api/v1/namespaces/myproject/secrets/builder-dockercfg-86fjt" (started: 2018-03-19 14:46:06.160504214 +0000 UTC) (total time: 1.112314131s):
Trace[123392422]: [1.112115254s] [1.112108504s] About to write a response
I0319 14:46:07.296374 3263 trace.go:76] Trace[1814970028]: "Create /api/v1/namespaces/myproject/events" (started: 2018-03-19 14:46:06.700502576 +0000 UTC) (total time: 595.841428ms):
Trace[1814970028]: [595.786576ms] [595.644585ms] Object stored in database
I0319 14:46:07.311933 3263 trace.go:76] Trace[1191672264]: "GuaranteedUpdate etcd3: *api.PersistentVolume" (started: 2018-03-19 14:46:06.707266552 +0000 UTC) (total time: 604.641434ms):
Trace[1191672264]: [583.354504ms] [583.354504ms] initial value restored
I0319 14:46:07.312036 3263 trace.go:76] Trace[671326319]: "Update /api/v1/persistentvolumes/pv0068/status" (started: 2018-03-19 14:46:06.707158599 +0000 UTC) (total time: 604.859575ms):
Trace[671326319]: [604.790004ms] [604.709496ms] Object stored in database
2018-03-19 14:46:12.673595 W | wal: sync duration of 3.025333987s, expected less than 1s
I0319 14:46:12.674771 3263 trace.go:76] Trace[1695516970]: "Create /api/v1/persistentvolumes" (started: 2018-03-19 14:46:09.897891486 +0000 UTC) (total time: 2.776856011s):
Trace[1695516970]: [2.776740849s] [2.772998862s] Object stored in database
I0319 14:46:12.845228 3263 trace.go:76] Trace[24395308]: "GuaranteedUpdate etcd3: *api.ConfigMap" (started: 2018-03-19 14:46:09.651329494 +0000 UTC) (total time: 3.193874667s):
Trace[24395308]: [3.023123164s] [3.023123164s] initial value restored
Trace[24395308]: [3.193849695s] [170.562307ms] Transaction committed
I0319 14:46:12.845420 3263 trace.go:76] Trace[1075491785]: "Update /api/v1/namespaces/kube-system/configmaps/kube-scheduler" (started: 2018-03-19 14:46:09.651198278 +0000 UTC) (total time: 3.19420317s):
Trace[1075491785]: [3.194056915s] [3.193968796s] Object stored in database
I0319 14:46:12.845762 3263 trace.go:76] Trace[1474839111]: "List /apis/admissionregistration.k8s.io/v1alpha1/externaladmissionhookconfigurations" (started: 2018-03-19 14:46:10.307622547 +0000 UTC) (total time: 2.538125094s):
Trace[1474839111]: [2.53807809s] [2.538069245s] Listing from storage done
I0319 14:46:12.845952 3263 trace.go:76] Trace[367672623]: "Get /api/v1/namespaces/kube-system/configmaps/kube-controller-manager" (started: 2018-03-19 14:46:10.819864055 +0000 UTC) (total time: 2.026075663s):
Trace[367672623]: [2.026046982s] [2.026040397s] About to write a response
I0319 14:46:12.846248 3263 trace.go:76] Trace[2039480585]: "Get /api/v1/namespaces/default" (started: 2018-03-19 14:46:09.906740976 +0000 UTC) (total time: 2.93948632s):
Trace[2039480585]: [2.939455941s] [2.939450175s] About to write a response
I0319 14:46:15.318869 3263 trace.go:76] Trace[1901408639]: "Get /api/v1/persistentvolumes/pv0076" (started: 2018-03-19 14:46:14.724863598 +0000 UTC) (total time: 593.982858ms):
Trace[1901408639]: [593.982858ms] [593.977443ms] END
I0319 14:46:15.319582 3263 trace.go:76] Trace[987852825]: "GuaranteedUpdate etcd3: *api.PersistentVolume" (started: 2018-03-19 14:46:14.478624281 +0000 UTC) (total time: 840.930539ms):
Trace[987852825]: [824.242177ms] [824.242177ms] initial value restored
I0319 14:46:15.319709 3263 trace.go:76] Trace[1263433955]: "Update /api/v1/persistentvolumes/pv0075/status" (started: 2018-03-19 14:46:14.478437928 +0000 UTC) (total time: 841.25263ms):
Trace[1263433955]: [841.163432ms] [841.009879ms] Object stored in database
I0319 14:46:27.044091 3263 kuberuntime_manager.go:489] Container {Name:sti-build Image:openshift/origin-sti-builder:v3.7.1 Command:[openshift-sti-build] Args:[--loglevel=0] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:BUILD Value:{"kind":"Build","apiVersion":"v1","metadata":{"name":"nodejs-ex-1","namespace":"myproject","selfLink":"/apis/build.openshift.io/v1/namespaces/myproject/builds/nodejs-ex-1","uid":"40be0392-2b84-11e8-90e3-525400721618","resourceVersion":"1231","creationTimestamp":"2018-03-19T14:46:05Z","labels":{"app":"nodejs-ex","buildconfig":"nodejs-ex","name":"myapp","openshift.io/build-config.name":"nodejs-ex","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"nodejs-ex","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"nodejs-ex","uid":"4049d773-2b84-11e8-90e3-525400721618","controller":true}]},"spec":{"serviceAccount":"builder","source":{"type":"Git","git":{"uri":"https://github.com/openshift/nodejs-ex"}},"strategy":{"type":"Source","sourceStrategy":{"from":{"kind":"DockerImage","name":"centos/nodejs-6-centos7@sha256:1f23d374ca052fd024c0a6df2dad234daa843b6ebce000e57d3b3a397dd11460"}}},"output":{"to":{"kind":"DockerImage","name":"172.30.1.1:5000/myproject/nodejs-ex:latest"},"pushSecret":{"name":"builder-dockercfg-86fjt"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Image change","imageChangeBuild":{"imageID":"centos/nodejs-6-centos7@sha256:1f23d374ca052fd024c0a6df2dad234daa843b6ebce000e57d3b3a397dd11460","fromRef":{"kind":"ImageStreamTag","namespace":"openshift","name":"nodejs:6"}}}]},"status":{"phase":"New","outputDockerImageReference":"172.30.1.1:5000/myproject/nodejs-ex:latest","config":{"kind":"BuildConfig","namespace":"myproject","name":"nodejs-ex"},"output":{}}}
ValueFrom:nil} {Name:SOURCE_REPOSITORY Value:https://github.com/openshift/nodejs-ex ValueFrom:nil} {Name:SOURCE_URI Value:https://github.com/openshift/nodejs-ex ValueFrom:nil} {Name:ORIGIN_VERSION Value:v3.7.1+282e43f-42 ValueFrom:nil} {Name:ALLOWED_UIDS Value:1- ValueFrom:nil} {Name:DROP_CAPS Value:KILL,MKNOD,SETGID,SETUID ValueFrom:nil} {Name:PUSH_DOCKERCFG_PATH Value:/var/run/secrets/openshift.io/push ValueFrom:nil}] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:buildworkdir ReadOnly:false MountPath:/tmp/build SubPath:} {Name:docker-socket ReadOnly:false MountPath:/var/run/docker.sock SubPath:} {Name:crio-socket ReadOnly:false MountPath:/var/run/crio.sock SubPath:} {Name:builder-dockercfg-86fjt-push ReadOnly:true MountPath:/var/run/secrets/openshift.io/push SubPath:} {Name:builder-token-2vv7v ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath:}] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:FallbackToLogsOnError ImagePullPolicy:IfNotPresent SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,} Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
E0319 14:46:31.117825 3263 kuberuntime_logs.go:254] Container "2449b91d96562b75b366dad81edacffa4975b01ed27fb485580c307aee514cf7" is not running (state="CONTAINER_EXITED")
I0319 14:46:32.264339 3263 trace.go:76] Trace[1195621669]: "GuaranteedUpdate etcd3: *api.ConfigMap" (started: 2018-03-19 14:46:31.704961867 +0000 UTC) (total time: 559.168895ms):
Trace[1195621669]: [554.623586ms] [554.623586ms] initial value restored
I0319 14:46:32.264532 3263 trace.go:76] Trace[158553810]: "Update /api/v1/namespaces/kube-system/configmaps/kube-scheduler" (started: 2018-03-19 14:46:31.704909529 +0000 UTC) (total time: 559.605682ms):
Trace[158553810]: [559.45591ms] [559.42024ms] Object stored in database
2018-03-19 14:46:33.316008 W | etcdserver: apply entries took too long [115.256169ms for 1 entries]
2018-03-19 14:46:33.316040 W | etcdserver: avoid queries with large range/delete range!
I0319 14:46:33.318102 3263 trace.go:76] Trace[443105144]: "List /apis/admissionregistration.k8s.io/v1alpha1/externaladmissionhookconfigurations" (started: 2018-03-19 14:46:32.706040404 +0000 UTC) (total time: 612.03811ms):
Trace[443105144]: [611.957002ms] [611.935262ms] Listing from storage done
I0319 14:46:33.320090 3263 trace.go:76] Trace[2036669016]: "Create /api/v1/persistentvolumes" (started: 2018-03-19 14:46:32.527329765 +0000 UTC) (total time: 792.739518ms):
Trace[2036669016]: [792.665726ms] [792.231523ms] Object stored in database
2018-03-19 14:46:33.588303 W | etcdserver: apply entries took too long [134.41506ms for 1 entries]
2018-03-19 14:46:33.588322 W | etcdserver: avoid queries with large range/delete range!
I0319 14:46:38.186620 3263 reconciler.go:186] operationExecutor.UnmountVolume started for volume "pvinstaller-token-hgd6b" (UniqueName: "kubernetes.io/secret/1df5f488-2b84-11e8-90e3-525400721618-pvinstaller-token-hgd6b") pod "1df5f488-2b84-11e8-90e3-525400721618" (UID: "1df5f488-2b84-11e8-90e3-525400721618")
I0319 14:46:38.186682 3263 reconciler.go:186] operationExecutor.UnmountVolume started for volume "pvdir" (UniqueName: "kubernetes.io/host-path/1df5f488-2b84-11e8-90e3-525400721618-pvdir") pod "1df5f488-2b84-11e8-90e3-525400721618" (UID: "1df5f488-2b84-11e8-90e3-525400721618")
I0319 14:46:38.186740 3263 operation_generator.go:542] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1df5f488-2b84-11e8-90e3-525400721618-pvdir" (OuterVolumeSpecName: "pvdir") pod "1df5f488-2b84-11e8-90e3-525400721618" (UID: "1df5f488-2b84-11e8-90e3-525400721618"). InnerVolumeSpecName "pvdir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
I0319 14:46:38.286998 3263 reconciler.go:290] Volume detached for volume "pvdir" (UniqueName: "kubernetes.io/host-path/1df5f488-2b84-11e8-90e3-525400721618-pvdir") on node "localhost" DevicePath ""
I0319 14:46:38.431962 3263 operation_generator.go:542] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1df5f488-2b84-11e8-90e3-525400721618-pvinstaller-token-hgd6b" (OuterVolumeSpecName: "pvinstaller-token-hgd6b") pod "1df5f488-2b84-11e8-90e3-525400721618" (UID: "1df5f488-2b84-11e8-90e3-525400721618"). InnerVolumeSpecName "pvinstaller-token-hgd6b". PluginName "kubernetes.io/secret", VolumeGidValue ""
I0319 14:46:38.487468 3263 reconciler.go:290] Volume detached for volume "pvinstaller-token-hgd6b" (UniqueName: "kubernetes.io/secret/1df5f488-2b84-11e8-90e3-525400721618-pvinstaller-token-hgd6b") on node "localhost" DevicePath ""
W0319 14:46:39.103537 3263 pod_container_deletor.go:77] Container "33625303669ec8383ea0268271599faa15fe24bb529cd18aeb92388079a3762b" not found in pod's containers
I0319 14:46:41.058523 3263 trace.go:76] Trace[971482282]: "Get /api/v1/namespaces/kube-system/configmaps/kube-scheduler" (started: 2018-03-19 14:46:40.523566958 +0000 UTC) (total time: 534.924184ms):
Trace[971482282]: [534.856266ms] [534.847047ms] About to write a response
I0319 14:46:41.058970 3263 trace.go:76] Trace[1495619627]: "List /apis/admissionregistration.k8s.io/v1alpha1/externaladmissionhookconfigurations" (started: 2018-03-19 14:46:40.518181367 +0000 UTC) (total time: 540.76743ms):
Trace[1495619627]: [540.694809ms] [540.683039ms] Listing from storage done
2018-03-19 14:46:44.165479 W | wal: sync duration of 1.098665817s, expected less than 1s
I0319 14:46:44.295522 3263 trace.go:76] Trace[1945309238]: "Get /api/v1/namespaces/kube-system/configmaps/kube-controller-manager" (started: 2018-03-19 14:46:43.171065287 +0000 UTC) (total time: 1.124436321s):
Trace[1945309238]: [1.124392962s] [1.124383195s] About to write a response
I0319 14:46:44.295851 3263 trace.go:76] Trace[714013340]: "Get /api/v1/namespaces/default" (started: 2018-03-19 14:46:43.662306553 +0000 UTC) (total time: 633.530096ms):
Trace[714013340]: [633.480388ms] [633.474855ms] About to write a response
I0319 14:46:48.223831 3263 trace.go:76] Trace[1145173433]: "List /apis/admissionregistration.k8s.io/v1alpha1/externaladmissionhookconfigurations" (started: 2018-03-19 14:46:47.401899305 +0000 UTC) (total time: 821.909392ms):
Trace[1145173433]: [821.833593ms] [821.827484ms] Listing from storage done
I0319 14:46:48.224568 3263 trace.go:76] Trace[1450869205]: "Get /api/v1/namespaces/kube-system/secrets/pod-garbage-collector-token-z2hxl" (started: 2018-03-19 14:46:47.433830055 +0000 UTC) (total time: 790.718323ms):
Trace[1450869205]: [790.560807ms] [790.555776ms] About to write a response
I0319 14:46:48.225754 3263 trace.go:76] Trace[228176291]: "Get /api/v1/namespaces/kube-system/configmaps/kube-controller-manager" (started: 2018-03-19 14:46:47.308550163 +0000 UTC) (total time: 917.173699ms):
Trace[228176291]: [917.094038ms] [917.089086ms] About to write a response
2018-03-19 14:46:49.166867 W | etcdserver: apply entries took too long [144.727519ms for 1 entries]
2018-03-19 14:46:49.166881 W | etcdserver: avoid queries with large range/delete range!
I0319 14:46:49.169161 3263 trace.go:76] Trace[867206982]: "Get /api/v1/namespaces/kube-system/secrets/cronjob-controller-token-nc7jc" (started: 2018-03-19 14:46:48.588068013 +0000 UTC) (total time: 581.066255ms):
Trace[867206982]: [580.861214ms] [580.85247ms] About to write a response
I0319 14:46:50.029601 3263 trace.go:76] Trace[1548780326]: "Get /api/v1/namespaces/kube-system/serviceaccounts/cronjob-controller" (started: 2018-03-19 14:46:49.170775985 +0000 UTC) (total time: 858.792125ms):
Trace[1548780326]: [858.737549ms] [858.73102ms] About to write a response
I0319 14:46:50.031177 3263 trace.go:76] Trace[2114743791]: "List /apis/admissionregistration.k8s.io/v1alpha1/externaladmissionhookconfigurations" (started: 2018-03-19 14:46:49.224974055 +0000 UTC) (total time: 806.176097ms):
Trace[2114743791]: [805.927687ms] [805.9172ms] Listing from storage done
2018-03-19 14:46:55.123899 W | etcdserver: apply entries took too long [633.418021ms for 1 entries]
2018-03-19 14:46:55.123917 W | etcdserver: avoid queries with large range/delete range!
I0319 14:46:55.124468 3263 trace.go:76] Trace[1440735362]: "Get /api/v1/namespaces/kube-system/configmaps/kube-controller-manager" (started: 2018-03-19 14:46:54.387716535 +0000 UTC) (total time: 736.733612ms):
Trace[1440735362]: [736.694263ms] [736.685778ms] About to write a response
I0319 14:46:55.124949 3263 trace.go:76] Trace[1601029557]: "GuaranteedUpdate etcd3: *api.Endpoints" (started: 2018-03-19 14:46:54.318996867 +0000 UTC) (total time: 805.934764ms):
Trace[1601029557]: [805.909536ms] [803.981473ms] Transaction committed
2018-03-19 14:47:09.603555 W | etcdserver: apply entries took too long [113.194818ms for 1 entries]
2018-03-19 14:47:09.603582 W | etcdserver: avoid queries with large range/delete range!
2018-03-19 14:47:16.181775 W | etcdserver: apply entries took too long [644.064192ms for 1 entries]
2018-03-19 14:47:16.181797 W | etcdserver: avoid queries with large range/delete range!
I0319 14:47:16.183529 3263 trace.go:76] Trace[1308903840]: "List /apis/admissionregistration.k8s.io/v1alpha1/externaladmissionhookconfigurations" (started: 2018-03-19 14:47:15.617439879 +0000 UTC) (total time: 566.044471ms):
Trace[1308903840]: [565.951986ms] [565.943486ms] Listing from storage done
I0319 14:47:16.184124 3263 trace.go:76] Trace[822280305]: "Get /api/v1/namespaces/kube-system/configmaps/kube-scheduler" (started: 2018-03-19 14:47:15.623444023 +0000 UTC) (total time: 560.663297ms):
Trace[822280305]: [560.633994ms] [560.62635ms] About to write a response
I0319 14:47:16.184380 3263 trace.go:76] Trace[1075628731]: "GuaranteedUpdate etcd3: *api.Endpoints" (started: 2018-03-19 14:47:15.267542368 +0000 UTC) (total time: 916.824562ms):
Trace[1075628731]: [916.807783ms] [914.957559ms] Transaction committed
2018-03-19 14:47:20.670365 W | etcdserver: apply entries took too long [176.467383ms for 1 entries]
2018-03-19 14:47:20.670681 W | etcdserver: avoid queries with large range/delete range!
2018-03-19 14:47:26.758342 W | etcdserver: apply entries took too long [502.000425ms for 1 entries]
2018-03-19 14:47:26.758374 W | etcdserver: avoid queries with large range/delete range!
I0319 14:47:26.760430 3263 trace.go:76] Trace[835095363]: "List /apis/admissionregistration.k8s.io/v1alpha1/externaladmissionhookconfigurations" (started: 2018-03-19 14:47:26.213958132 +0000 UTC) (total time: 546.415967ms):
Trace[835095363]: [546.261157ms] [546.254115ms] Listing from storage done
I0319 14:47:26.761298 3263 trace.go:76] Trace[812848615]: "Get /api/v1/namespaces/kube-system/configmaps/kube-scheduler" (started: 2018-03-19 14:47:26.219636011 +0000 UTC) (total time: 541.262702ms):
Trace[812848615]: [541.136397ms] [541.128758ms] About to write a response
I0319 14:47:26.761807 3263 trace.go:76] Trace[54792603]: "GuaranteedUpdate etcd3: *api.Endpoints" (started: 2018-03-19 14:47:26.19136502 +0000 UTC) (total time: 570.371395ms):
Trace[54792603]: [570.272061ms] [568.592917ms] Transaction committed
I0319 14:47:32.628037 3263 trace.go:76] Trace[2101562692]: "List /apis/admissionregistration.k8s.io/v1alpha1/externaladmissionhookconfigurations" (started: 2018-03-19 14:47:31.847634218 +0000 UTC) (total time: 780.376079ms):
Trace[2101562692]: [780.288175ms] [780.282043ms] Listing from storage done
I0319 14:47:35.441812 3263 trace.go:76] Trace[70000999]: "Get /api/v1/namespaces/kube-system/configmaps/kube-scheduler" (started: 2018-03-19 14:47:34.858944542 +0000 UTC) (total time: 582.825137ms):
Trace[70000999]: [582.764627ms] [582.759315ms] About to write a response
2018-03-19 14:47:44.801643 W | wal: sync duration of 1.122514375s, expected less than 1s
I0319 14:47:44.863774 3263 trace.go:76] Trace[2062580427]: "List /apis/admissionregistration.k8s.io/v1alpha1/externaladmissionhookconfigurations" (started: 2018-03-19 14:47:43.783689182 +0000 UTC) (total time: 1.080061297s):
Trace[2062580427]: [1.079949425s] [1.079941906s] Listing from storage done
I0319 14:47:44.864103 3263 trace.go:76] Trace[1896250538]: "Get /api/v1/namespaces/kube-system/configmaps/kube-controller-manager" (started: 2018-03-19 14:47:43.734554562 +0000 UTC) (total time: 1.129532679s):
Trace[1896250538]: [1.129488075s] [1.129480427s] About to write a response
2018-03-19 14:48:00.769668 W | wal: sync duration of 1.20090503s, expected less than 1s
I0319 14:48:00.778718 3263 trace.go:76] Trace[116650537]: "Get /api/v1/namespaces/kube-system/configmaps/kube-scheduler" (started: 2018-03-19 14:47:59.896978503 +0000 UTC) (total time: 881.719615ms):
Trace[116650537]: [881.662701ms] [881.64254ms] About to write a response
I0319 14:48:00.778922 3263 trace.go:76] Trace[280591658]: "Get /api/v1/namespaces/kube-system/configmaps/kube-controller-manager" (started: 2018-03-19 14:47:59.902738096 +0000 UTC) (total time: 876.171309ms):
Trace[280591658]: [876.146788ms] [876.138994ms] About to write a response
I0319 14:48:00.779288 3263 trace.go:76] Trace[929537712]: "List /apis/admissionregistration.k8s.io/v1alpha1/externaladmissionhookconfigurations" (started: 2018-03-19 14:48:00.254651487 +0000 UTC) (total time: 524.611371ms):
Trace[929537712]: [524.544244ms] [524.514643ms] Listing from storage done
I0319 14:48:00.780233 3263 trace.go:76] Trace[1038621041]: "Get /api/v1/namespaces/default/secrets/router-token-mvv7l" (started: 2018-03-19 14:47:59.924560892 +0000 UTC) (total time: 855.652398ms):
Trace[1038621041]: [854.421902ms] [854.363521ms] About to write a response
I0319 14:48:00.780326 3263 trace.go:76] Trace[518926714]: "Get /api/v1/namespaces/default/secrets/router-certs" (started: 2018-03-19 14:47:59.914639225 +0000 UTC) (total time: 865.673144ms):
Trace[518926714]: [864.833813ms] [864.766965ms] About to write a response
I0319 14:48:00.780403 3263 trace.go:76] Trace[2143291546]: "Get /api/v1/namespaces/default/secrets/router-dockercfg-4x2lp" (started: 2018-03-19 14:48:00.134489949 +0000 UTC) (total time: 645.894938ms):
Trace[2143291546]: [645.34865ms] [645.267605ms] About to write a response
I0319 14:48:00.780654 3263 trace.go:76] Trace[402518020]: "Get /api/v1/namespaces/kube-system/secrets/cronjob-controller-token-nc7jc" (started: 2018-03-19 14:48:00.184077093 +0000 UTC) (total time: 596.564229ms):
Trace[402518020]: [595.604513ms] [595.597436ms] About to write a response
I0319 14:48:02.151626 3263 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"myproject", Name:"nodejs-ex-1-deploy", UID:"86286dc5-2b84-11e8-90e3-525400721618", APIVersion:"v1", ResourceVersion:"1451", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned nodejs-ex-1-deploy to localhost
I0319 14:48:02.217371 3263 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "deployer-token-thrrl" (UniqueName: "kubernetes.io/secret/86286dc5-2b84-11e8-90e3-525400721618-deployer-token-thrrl") pod "nodejs-ex-1-deploy" (UID: "86286dc5-2b84-11e8-90e3-525400721618")
W0319 14:48:02.369805 3263 container.go:367] Failed to get RecentStats("/system.slice/run-r5a005746b4314898987908492a215259.scope") while determining the next housekeeping: unable to find data for container /system.slice/run-r5a005746b4314898987908492a215259.scope
I0319 14:48:03.210551 3263 trace.go:76] Trace[304090936]: "Get /api/v1/namespaces/myproject/secrets/deployer-dockercfg-ks64l" (started: 2018-03-19 14:48:02.464344773 +0000 UTC) (total time: 746.187281ms):
Trace[304090936]: [745.733401ms] [745.725797ms] About to write a response
I0319 14:48:03.321322 3263 reconciler.go:186] operationExecutor.UnmountVolume started for volume "buildworkdir" (UniqueName: "kubernetes.io/empty-dir/40c80d29-2b84-11e8-90e3-525400721618-buildworkdir") pod "40c80d29-2b84-11e8-90e3-525400721618" (UID: "40c80d29-2b84-11e8-90e3-525400721618")
I0319 14:48:03.321613 3263 reconciler.go:186] operationExecutor.UnmountVolume started for volume "builder-token-2vv7v" (UniqueName: "kubernetes.io/secret/40c80d29-2b84-11e8-90e3-525400721618-builder-token-2vv7v") pod "40c80d29-2b84-11e8-90e3-525400721618" (UID: "40c80d29-2b84-11e8-90e3-525400721618")
I0319 14:48:03.321773 3263 reconciler.go:186] operationExecutor.UnmountVolume started for volume "docker-socket" (UniqueName: "kubernetes.io/host-path/40c80d29-2b84-11e8-90e3-525400721618-docker-socket") pod "40c80d29-2b84-11e8-90e3-525400721618" (UID: "40c80d29-2b84-11e8-90e3-525400721618")
I0319 14:48:03.321888 3263 reconciler.go:186] operationExecutor.UnmountVolume started for volume "builder-dockercfg-86fjt-push" (UniqueName: "kubernetes.io/secret/40c80d29-2b84-11e8-90e3-525400721618-builder-dockercfg-86fjt-push") pod "40c80d29-2b84-11e8-90e3-525400721618" (UID: "40c80d29-2b84-11e8-90e3-525400721618")
I0319 14:48:03.321979 3263 reconciler.go:186] operationExecutor.UnmountVolume started for volume "crio-socket" (UniqueName: "kubernetes.io/host-path/40c80d29-2b84-11e8-90e3-525400721618-crio-socket") pod "40c80d29-2b84-11e8-90e3-525400721618" (UID: "40c80d29-2b84-11e8-90e3-525400721618")
I0319 14:48:03.322112 3263 operation_generator.go:542] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40c80d29-2b84-11e8-90e3-525400721618-crio-socket" (OuterVolumeSpecName: "crio-socket") pod "40c80d29-2b84-11e8-90e3-525400721618" (UID: "40c80d29-2b84-11e8-90e3-525400721618"). InnerVolumeSpecName "crio-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
I0319 14:48:03.330481 3263 operation_generator.go:542] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40c80d29-2b84-11e8-90e3-525400721618-docker-socket" (OuterVolumeSpecName: "docker-socket") pod "40c80d29-2b84-11e8-90e3-525400721618" (UID: "40c80d29-2b84-11e8-90e3-525400721618"). InnerVolumeSpecName "docker-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
I0319 14:48:03.366144 3263 operation_generator.go:542] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40c80d29-2b84-11e8-90e3-525400721618-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "40c80d29-2b84-11e8-90e3-525400721618" (UID: "40c80d29-2b84-11e8-90e3-525400721618"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
I0319 14:48:03.386880 3263 operation_generator.go:542] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40c80d29-2b84-11e8-90e3-525400721618-builder-dockercfg-86fjt-push" (OuterVolumeSpecName: "builder-dockercfg-86fjt-push") pod "40c80d29-2b84-11e8-90e3-525400721618" (UID: "40c80d29-2b84-11e8-90e3-525400721618"). InnerVolumeSpecName "builder-dockercfg-86fjt-push". PluginName "kubernetes.io/secret", VolumeGidValue ""
I0319 14:48:03.408546 3263 operation_generator.go:542] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40c80d29-2b84-11e8-90e3-525400721618-builder-token-2vv7v" (OuterVolumeSpecName: "builder-token-2vv7v") pod "40c80d29-2b84-11e8-90e3-525400721618" (UID: "40c80d29-2b84-11e8-90e3-525400721618"). InnerVolumeSpecName "builder-token-2vv7v". PluginName "kubernetes.io/secret", VolumeGidValue ""
I0319 14:48:03.422892 3263 reconciler.go:290] Volume detached for volume "crio-socket" (UniqueName: "kubernetes.io/host-path/40c80d29-2b84-11e8-90e3-525400721618-crio-socket") on node "localhost" DevicePath ""
I0319 14:48:03.422968 3263 reconciler.go:290] Volume detached for volume "docker-socket" (UniqueName: "kubernetes.io/host-path/40c80d29-2b84-11e8-90e3-525400721618-docker-socket") on node "localhost" DevicePath ""
I0319 14:48:03.423000 3263 reconciler.go:290] Volume detached for volume "builder-dockercfg-86fjt-push" (UniqueName: "kubernetes.io/secret/40c80d29-2b84-11e8-90e3-525400721618-builder-dockercfg-86fjt-push") on node "localhost" DevicePath ""
I0319 14:48:03.423047 3263 reconciler.go:290] Volume detached for volume "builder-token-2vv7v" (UniqueName: "kubernetes.io/secret/40c80d29-2b84-11e8-90e3-525400721618-builder-token-2vv7v") on node "localhost" DevicePath ""
I0319 14:48:03.423074 3263 reconciler.go:290] Volume detached for volume "buildworkdir" (UniqueName: "kubernetes.io/empty-dir/40c80d29-2b84-11e8-90e3-525400721618-buildworkdir") on node "localhost" DevicePath ""
E0319 14:48:03.838327 3263 kuberuntime_container.go:66] Can't make a ref to pod "nodejs-ex-1-build_myproject(40c80d29-2b84-11e8-90e3-525400721618)", container sti-build: selfLink was empty, can't make reference
W0319 14:48:04.346969 3263 pod_container_deletor.go:77] Container "09716c504325bc241b184e60021fd84a48d5eedae4f8c54a7d5ac4f9bb125e65" not found in pod's containers
I0319 14:48:04.588478 3263 event.go:218] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"myproject", Name:"nodejs-ex-1", UID:"8612978a-2b84-11e8-90e3-525400721618", APIVersion:"v1", ResourceVersion:"1466", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nodejs-ex-1-86799
I0319 14:48:04.615540 3263 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"myproject", Name:"nodejs-ex-1-86799", UID:"879f01b6-2b84-11e8-90e3-525400721618", APIVersion:"v1", ResourceVersion:"1468", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned nodejs-ex-1-86799 to localhost
I0319 14:48:04.747113 3263 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-4n87n" (UniqueName: "kubernetes.io/secret/879f01b6-2b84-11e8-90e3-525400721618-default-token-4n87n") pod "nodejs-ex-1-86799" (UID: "879f01b6-2b84-11e8-90e3-525400721618")
W0319 14:48:04.900557 3263 container.go:367] Failed to get RecentStats("/system.slice/run-rf5fa898899a740aa845df4f2d589dec3.scope") while determining the next housekeeping: unable to find data for container /system.slice/run-rf5fa898899a740aa845df4f2d589dec3.scope
W0319 14:48:05.386531 3263 docker_sandbox.go:337] failed to read pod IP from plugin/docker: Couldn't find network status for myproject/nodejs-ex-1-86799 through plugin: invalid network status for
W0319 14:48:05.388493 3263 pod_container_deletor.go:77] Container "efd5b2ba6ce56c32cb71df73b6fbefa4d9e437f7182649f6e5669e9566948bf4" not found in pod's containers
E0319 14:48:07.009154 3263 kuberuntime_logs.go:254] Container "8c8c323e81d9b350088a5840f650c7272d75a72ffb9dffea2ef1132b59bf19db" is not running (state="CONTAINER_EXITED")
I0319 14:48:07.009731 3263 trace.go:76] Trace[1240966953]: "Get /apis/build.openshift.io/v1/namespaces/myproject/builds/nodejs-ex-1/log" (started: 2018-03-19 14:46:25.386102739 +0000 UTC) (total time: 1m41.623601079s):
Trace[1240966953]: [1m41.623601079s] [1m41.591568896s] END