Created May 12, 2020 16:01
Gist: brianpursley/eb35008f94c05e482ae77a43f28f8e41
make test-integration WHAT=./test/integration/scheduler GOFLAGS="-v" KUBE_TEST_ARGS='-run ^TestNominatedNodeCleanUp$$$$'
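A note on the quadruple dollar signs: each `$$` in a value that Make expands collapses to a single literal `$`, so extra escaping is needed for the `-run` regex to survive the nested make/shell layers. A minimal, self-contained sketch of that expansion (the throwaway makefile path `/tmp/esc.mk` is illustrative, not part of the kubernetes build):

```shell
# Make collapses each "$$" to one literal "$" when expanding a recipe,
# so "$$$$" reaches the shell as "$$" (here inside single quotes, so the
# shell passes it through untouched).
cat > /tmp/esc.mk <<'EOF'
show:
	@echo '^TestNominatedNodeCleanUp$$$$'
EOF
make -f /tmp/esc.mk show   # prints: ^TestNominatedNodeCleanUp$$
```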
+++ [0512 12:00:54] Checking etcd is on PATH
/usr/bin/etcd
+++ [0512 12:00:54] Starting etcd instance
/home/bpursley/go/src/k8s.io/kubernetes/third_party/etcd:/home/bpursley/gems/bin:/home/bpursley/.nvm/versions/node/v12.14.0/bin:/usr/lib/jvm/jdk-13.0.1/bin:/home/bpursley/.local/bin:/home/bpursley/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/bpursley/.dotnet/tools:/usr/share/rvm/bin:/home/bpursley/bin:/usr/local/go/bin:/home/bpursley/go/bin:/home/bpursley/.krew/bin:/home/bpursley/kubectl-plugins
etcd --advertise-client-urls http://127.0.0.1:2379 --data-dir /tmp/tmp.hFmwtBV9wu --listen-client-urls http://127.0.0.1:2379 --debug > "/dev/null" 2>/dev/null
Waiting for etcd to come up.
+++ [0512 12:00:55] On try 1, etcd: : {"health":"true"}
{"header":{"cluster_id":"14841639068965178418","member_id":"10276657743932975437","revision":"2","raft_term":"2"}}
+++ [0512 12:00:55] Running integration test cases
+++ [0512 12:00:58] Running tests without code coverage
I0512 12:01:05.269709 906438 etcd.go:81] etcd already running at http://127.0.0.1:2379
=== RUN TestNominatedNodeCleanUp
W0512 12:01:05.270555 906438 services.go:37] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0512 12:01:05.270569 906438 services.go:51] Setting service IP to "10.0.0.1" (read-write).
I0512 12:01:05.270581 906438 master.go:315] Node port range unspecified. Defaulting to 30000-32767.
I0512 12:01:05.270609 906438 master.go:271] Using reconciler:
I0512 12:01:05.270745 906438 config.go:628] Not requested to run hook priority-and-fairness-config-consumer
I0512 12:01:05.271932 906438 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.272067 906438 client.go:360] parsed scheme: "endpoint"
I0512 12:01:05.272153 906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:01:05.272666 906438 client.go:360] parsed scheme: "endpoint"
I0512 12:01:05.272687 906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:01:05.273732 906438 etcd3.go:271] Start monitoring storage db size metric for endpoint http://127.0.0.1:2379 with polling interval 30s
I0512 12:01:05.274020 906438 client.go:360] parsed scheme: "passthrough"
I0512 12:01:05.274077 906438 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{http://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0512 12:01:05.274099 906438 clientconn.go:933] ClientConn switching balancer to "pick_first"
I0512 12:01:05.274259 906438 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc00069c360, {CONNECTING <nil>}
I0512 12:01:05.274611 906438 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc00069c360, {READY <nil>}
I0512 12:01:05.275465 906438 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"
I0512 12:01:05.276090 906438 store.go:1366] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0512 12:01:05.276130 906438 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.276174 906438 reflector.go:243] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0512 12:01:05.276380 906438 client.go:360] parsed scheme: "endpoint"
I0512 12:01:05.276398 906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:01:05.276845 906438 store.go:1366] Monitoring events count at <storage-prefix>//events
I0512 12:01:05.276881 906438 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.276931 906438 reflector.go:243] Listing and watching *core.Event from storage/cacher.go:/events
I0512 12:01:05.276990 906438 client.go:360] parsed scheme: "endpoint"
I0512 12:01:05.277007 906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:01:05.277564 906438 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:01:05.277701 906438 store.go:1366] Monitoring limitranges count at <storage-prefix>//limitranges
I0512 12:01:05.277798 906438 reflector.go:243] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0512 12:01:05.277841 906438 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.277966 906438 client.go:360] parsed scheme: "endpoint"
I0512 12:01:05.277983 906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:01:05.278268 906438 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:01:05.278475 906438 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:01:05.278506 906438 store.go:1366] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0512 12:01:05.278563 906438 reflector.go:243] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0512 12:01:05.278640 906438 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.278746 906438 client.go:360] parsed scheme: "endpoint"
I0512 12:01:05.278766 906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:01:05.279348 906438 store.go:1366] Monitoring secrets count at <storage-prefix>//secrets
I0512 12:01:05.279384 906438 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:01:05.279386 906438 reflector.go:243] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0512 12:01:05.279461 906438 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.279543 906438 client.go:360] parsed scheme: "endpoint"
I0512 12:01:05.279562 906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:01:05.280310 906438 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:01:05.280430 906438 store.go:1366] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0512 12:01:05.280486 906438 reflector.go:243] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0512 12:01:05.280554 906438 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.280637 906438 client.go:360] parsed scheme: "endpoint"
I0512 12:01:05.280653 906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:01:05.281669 906438 store.go:1366] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0512 12:01:05.281679 906438 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:01:05.281837 906438 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.281889 906438 reflector.go:243] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0512 12:01:05.281962 906438 client.go:360] parsed scheme: "endpoint"
I0512 12:01:05.281985 906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:01:05.282459 906438 store.go:1366] Monitoring configmaps count at <storage-prefix>//configmaps
I0512 12:01:05.282531 906438 reflector.go:243] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0512 12:01:05.282548 906438 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:01:05.282581 906438 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.283683 906438 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:01:05.284641 906438 client.go:360] parsed scheme: "endpoint"
I0512 12:01:05.284677 906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:01:05.285335 906438 store.go:1366] Monitoring namespaces count at <storage-prefix>//namespaces
I0512 12:01:05.285412 906438 reflector.go:243] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0512 12:01:05.285468 906438 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.285563 906438 client.go:360] parsed scheme: "endpoint"
I0512 12:01:05.285580 906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:01:05.285936 906438 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:01:05.286002 906438 store.go:1366] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0512 12:01:05.286053 906438 reflector.go:243] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0512 12:01:05.286117 906438 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.286218 906438 client.go:360] parsed scheme: "endpoint"
I0512 12:01:05.286234 906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:01:05.286652 906438 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:01:05.286786 906438 store.go:1366] Monitoring nodes count at <storage-prefix>//minions
I0512 12:01:05.286820 906438 reflector.go:243] Listing and watching *core.Node from storage/cacher.go:/minions
I0512 12:01:05.286925 906438 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.287018 906438 client.go:360] parsed scheme: "endpoint"
I0512 12:01:05.287043 906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:01:05.287356 906438 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:01:05.287581 906438 store.go:1366] Monitoring pods count at <storage-prefix>//pods
I0512 12:01:05.287647 906438 reflector.go:243] Listing and watching *core.Pod from storage/cacher.go:/pods
I0512 12:01:05.287723 906438 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.287834 906438 client.go:360] parsed scheme: "endpoint"
I0512 12:01:05.287850 906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:01:05.288188 906438 store.go:1366] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0512 12:01:05.288203 906438 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:01:05.288211 906438 reflector.go:243] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0512 12:01:05.288307 906438 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.288399 906438 client.go:360] parsed scheme: "endpoint"
I0512 12:01:05.288422 906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:01:05.289168 906438 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:01:05.289714 906438 store.go:1366] Monitoring services count at <storage-prefix>//services/specs
I0512 12:01:05.289755 906438 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.289780 906438 reflector.go:243] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0512 12:01:05.289900 906438 client.go:360] parsed scheme: "endpoint"
I0512 12:01:05.289932 906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:01:05.290305 906438 client.go:360] parsed scheme: "endpoint"
I0512 12:01:05.290322 906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:01:05.290348 906438 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:01:05.290797 906438 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.290894 906438 client.go:360] parsed scheme: "endpoint"
I0512 12:01:05.290910 906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:01:05.291372 906438 store.go:1366] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0512 12:01:05.291385 906438 rest.go:113] the default service ipfamily for this cluster is: IPv4
I0512 12:01:05.291443 906438 reflector.go:243] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0512 12:01:05.291843 906438 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.291943 906438 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:01:05.292008 906438 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.292553 906438 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.293061 906438 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.293524 906438 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.294031 906438 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.294310 906438 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.294406 906438 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.294601 906438 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.294919 906438 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.295292 906438 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.295437 906438 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.295946 906438 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.296137 906438 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.296515 906438 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.296676 906438 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.297106 906438 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.297237 906438 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.297336 906438 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.297430 906438 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.297572 906438 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.297676 906438 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
| I0512 12:01:05.297804 906438 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.298283 906438 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.298461 906438 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.298997 906438 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.299498 906438 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.299681 906438 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.299865 906438 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.300349 906438 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.300549 906438 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.301020 906438 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.301500 906438 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.301923 906438 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.302426 906438 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.302617 906438 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.302691 906438 master.go:529] Skipping disabled API group "auditregistration.k8s.io". | |
| I0512 12:01:05.302703 906438 master.go:540] Enabling API group "authentication.k8s.io". | |
| I0512 12:01:05.302717 906438 master.go:540] Enabling API group "authorization.k8s.io". | |
| I0512 12:01:05.302833 906438 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.303430 906438 client.go:360] parsed scheme: "endpoint" | |
| I0512 12:01:05.303473 906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}] | |
| I0512 12:01:05.304490 906438 store.go:1366] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers | |
| I0512 12:01:05.304553 906438 reflector.go:243] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers | |
| I0512 12:01:05.304774 906438 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.304933 906438 client.go:360] parsed scheme: "endpoint" | |
| I0512 12:01:05.304962 906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}] | |
| I0512 12:01:05.305577 906438 watch_cache.go:523] Replace watchCache (rev: 2) | |
| I0512 12:01:05.305791 906438 store.go:1366] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers | |
| I0512 12:01:05.305817 906438 reflector.go:243] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers | |
| I0512 12:01:05.305955 906438 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.306054 906438 client.go:360] parsed scheme: "endpoint" | |
| I0512 12:01:05.306078 906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}] | |
| I0512 12:01:05.306425 906438 store.go:1366] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers | |
| I0512 12:01:05.306441 906438 master.go:540] Enabling API group "autoscaling". | |
| I0512 12:01:05.306442 906438 watch_cache.go:523] Replace watchCache (rev: 2) | |
| I0512 12:01:05.306480 906438 reflector.go:243] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers | |
| I0512 12:01:05.306604 906438 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.306707 906438 client.go:360] parsed scheme: "endpoint" | |
| I0512 12:01:05.306726 906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}] | |
| I0512 12:01:05.307014 906438 watch_cache.go:523] Replace watchCache (rev: 2) | |
| I0512 12:01:05.307342 906438 store.go:1366] Monitoring jobs.batch count at <storage-prefix>//jobs | |
| I0512 12:01:05.307419 906438 reflector.go:243] Listing and watching *batch.Job from storage/cacher.go:/jobs | |
| I0512 12:01:05.307473 906438 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.307550 906438 client.go:360] parsed scheme: "endpoint" | |
| I0512 12:01:05.307567 906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}] | |
| I0512 12:01:05.307964 906438 store.go:1366] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs | |
| I0512 12:01:05.307977 906438 master.go:540] Enabling API group "batch". | |
| I0512 12:01:05.307977 906438 watch_cache.go:523] Replace watchCache (rev: 2) | |
| I0512 12:01:05.308004 906438 reflector.go:243] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs | |
| I0512 12:01:05.308085 906438 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.308171 906438 client.go:360] parsed scheme: "endpoint" | |
| I0512 12:01:05.308187 906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}] | |
| I0512 12:01:05.308575 906438 watch_cache.go:523] Replace watchCache (rev: 2) | |
| I0512 12:01:05.308591 906438 store.go:1366] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests | |
| I0512 12:01:05.308611 906438 master.go:540] Enabling API group "certificates.k8s.io". | |
| I0512 12:01:05.308625 906438 reflector.go:243] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests | |
| I0512 12:01:05.308731 906438 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.308828 906438 client.go:360] parsed scheme: "endpoint" | |
| I0512 12:01:05.308847 906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}] | |
| I0512 12:01:05.309225 906438 store.go:1366] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases | |
| I0512 12:01:05.309288 906438 reflector.go:243] Listing and watching *coordination.Lease from storage/cacher.go:/leases | |
| I0512 12:01:05.309312 906438 watch_cache.go:523] Replace watchCache (rev: 2) | |
| I0512 12:01:05.309345 906438 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.309430 906438 client.go:360] parsed scheme: "endpoint" | |
| I0512 12:01:05.309447 906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}] | |
| I0512 12:01:05.309767 906438 store.go:1366] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases | |
| I0512 12:01:05.309781 906438 master.go:540] Enabling API group "coordination.k8s.io". | |
| I0512 12:01:05.309793 906438 reflector.go:243] Listing and watching *coordination.Lease from storage/cacher.go:/leases | |
| I0512 12:01:05.309943 906438 storage_factory.go:285] storing endpointslices.discovery.k8s.io in discovery.k8s.io/v1beta1, reading as discovery.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.310039 906438 client.go:360] parsed scheme: "endpoint" | |
| I0512 12:01:05.310057 906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}] | |
| I0512 12:01:05.310683 906438 watch_cache.go:523] Replace watchCache (rev: 2) | |
| I0512 12:01:05.311213 906438 store.go:1366] Monitoring endpointslices.discovery.k8s.io count at <storage-prefix>//endpointslices | |
| I0512 12:01:05.311219 906438 watch_cache.go:523] Replace watchCache (rev: 2) | |
| I0512 12:01:05.311227 906438 master.go:540] Enabling API group "discovery.k8s.io". | |
| I0512 12:01:05.311248 906438 reflector.go:243] Listing and watching *discovery.EndpointSlice from storage/cacher.go:/endpointslices | |
| I0512 12:01:05.311355 906438 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.311464 906438 client.go:360] parsed scheme: "endpoint" | |
| I0512 12:01:05.311484 906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}] | |
| I0512 12:01:05.311968 906438 store.go:1366] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress | |
| I0512 12:01:05.311982 906438 master.go:540] Enabling API group "extensions". | |
| I0512 12:01:05.312038 906438 reflector.go:243] Listing and watching *networking.Ingress from storage/cacher.go:/ingress | |
| I0512 12:01:05.312100 906438 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.312232 906438 client.go:360] parsed scheme: "endpoint" | |
| I0512 12:01:05.312255 906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}] | |
| I0512 12:01:05.312263 906438 watch_cache.go:523] Replace watchCache (rev: 2) | |
| I0512 12:01:05.312670 906438 watch_cache.go:523] Replace watchCache (rev: 2) | |
| I0512 12:01:05.312784 906438 store.go:1366] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies | |
| I0512 12:01:05.312828 906438 reflector.go:243] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies | |
| I0512 12:01:05.312890 906438 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.312973 906438 client.go:360] parsed scheme: "endpoint" | |
| I0512 12:01:05.312990 906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}] | |
| I0512 12:01:05.313305 906438 store.go:1366] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress | |
| I0512 12:01:05.313310 906438 watch_cache.go:523] Replace watchCache (rev: 2) | |
| I0512 12:01:05.313357 906438 reflector.go:243] Listing and watching *networking.Ingress from storage/cacher.go:/ingress | |
| I0512 12:01:05.313443 906438 storage_factory.go:285] storing ingressclasses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.313535 906438 client.go:360] parsed scheme: "endpoint" | |
| I0512 12:01:05.313552 906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}] | |
| I0512 12:01:05.313857 906438 watch_cache.go:523] Replace watchCache (rev: 2) | |
| I0512 12:01:05.313959 906438 store.go:1366] Monitoring ingressclasses.networking.k8s.io count at <storage-prefix>//ingressclasses | |
| I0512 12:01:05.313972 906438 master.go:540] Enabling API group "networking.k8s.io". | |
| I0512 12:01:05.314012 906438 reflector.go:243] Listing and watching *networking.IngressClass from storage/cacher.go:/ingressclasses | |
| I0512 12:01:05.314099 906438 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.314218 906438 client.go:360] parsed scheme: "endpoint" | |
| I0512 12:01:05.314234 906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}] | |
| I0512 12:01:05.314527 906438 watch_cache.go:523] Replace watchCache (rev: 2) | |
| I0512 12:01:05.314620 906438 store.go:1366] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses | |
| I0512 12:01:05.314638 906438 master.go:540] Enabling API group "node.k8s.io". | |
| I0512 12:01:05.314662 906438 reflector.go:243] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses | |
| I0512 12:01:05.314823 906438 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.314932 906438 client.go:360] parsed scheme: "endpoint" | |
| I0512 12:01:05.314954 906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}] | |
| I0512 12:01:05.315196 906438 watch_cache.go:523] Replace watchCache (rev: 2) | |
| I0512 12:01:05.315407 906438 store.go:1366] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets | |
| I0512 12:01:05.315481 906438 reflector.go:243] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets | |
I0512 12:01:05.315614  906438 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.315696  906438 client.go:360] parsed scheme: "endpoint"
I0512 12:01:05.315711  906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:01:05.316032  906438 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:01:05.316240  906438 store.go:1366] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0512 12:01:05.316254  906438 master.go:540] Enabling API group "policy".
I0512 12:01:05.316305  906438 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.316311  906438 reflector.go:243] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0512 12:01:05.316409  906438 client.go:360] parsed scheme: "endpoint"
I0512 12:01:05.316424  906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:01:05.316808  906438 store.go:1366] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0512 12:01:05.316863  906438 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:01:05.316875  906438 reflector.go:243] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0512 12:01:05.316944  906438 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.317050  906438 client.go:360] parsed scheme: "endpoint"
I0512 12:01:05.317068  906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:01:05.317409  906438 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:01:05.317454  906438 store.go:1366] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0512 12:01:05.317506  906438 reflector.go:243] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0512 12:01:05.317494  906438 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.317617  906438 client.go:360] parsed scheme: "endpoint"
I0512 12:01:05.317632  906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:01:05.318020  906438 store.go:1366] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0512 12:01:05.318096  906438 reflector.go:243] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0512 12:01:05.318137  906438 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:01:05.318139  906438 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.318206  906438 client.go:360] parsed scheme: "endpoint"
I0512 12:01:05.318223  906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:01:05.318669  906438 store.go:1366] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0512 12:01:05.318691  906438 reflector.go:243] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0512 12:01:05.318696  906438 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:01:05.318733  906438 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.318825  906438 client.go:360] parsed scheme: "endpoint"
I0512 12:01:05.318839  906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:01:05.319204  906438 store.go:1366] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0512 12:01:05.319242  906438 reflector.go:243] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0512 12:01:05.319329  906438 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:01:05.319334  906438 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.319395  906438 client.go:360] parsed scheme: "endpoint"
I0512 12:01:05.319410  906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:01:05.320264  906438 store.go:1366] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0512 12:01:05.320327  906438 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.320450  906438 client.go:360] parsed scheme: "endpoint"
I0512 12:01:05.320481  906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:01:05.320684  906438 reflector.go:243] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0512 12:01:05.321025  906438 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:01:05.321295  906438 store.go:1366] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0512 12:01:05.321379  906438 reflector.go:243] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0512 12:01:05.321442  906438 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.321534  906438 client.go:360] parsed scheme: "endpoint"
I0512 12:01:05.321540  906438 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:01:05.321549  906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:01:05.321896  906438 store.go:1366] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0512 12:01:05.321929  906438 master.go:540] Enabling API group "rbac.authorization.k8s.io".
I0512 12:01:05.321950  906438 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:01:05.321956  906438 reflector.go:243] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0512 12:01:05.322510  906438 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:01:05.323418  906438 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.323546  906438 client.go:360] parsed scheme: "endpoint"
I0512 12:01:05.323561  906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:01:05.323980  906438 store.go:1366] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0512 12:01:05.324045  906438 reflector.go:243] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0512 12:01:05.324096  906438 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.324196  906438 client.go:360] parsed scheme: "endpoint"
I0512 12:01:05.324224  906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:01:05.324560  906438 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:01:05.324575  906438 store.go:1366] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0512 12:01:05.324585  906438 master.go:540] Enabling API group "scheduling.k8s.io".
I0512 12:01:05.324639  906438 reflector.go:243] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0512 12:01:05.324687  906438 master.go:529] Skipping disabled API group "settings.k8s.io".
I0512 12:01:05.324854  906438 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.324977  906438 client.go:360] parsed scheme: "endpoint"
I0512 12:01:05.324998  906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:01:05.325169  906438 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:01:05.325484  906438 store.go:1366] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0512 12:01:05.325570  906438 reflector.go:243] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0512 12:01:05.325609  906438 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.326212  906438 client.go:360] parsed scheme: "endpoint"
I0512 12:01:05.326279  906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:01:05.326901  906438 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:01:05.328103  906438 store.go:1366] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0512 12:01:05.328267  906438 reflector.go:243] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0512 12:01:05.328349  906438 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.328584  906438 client.go:360] parsed scheme: "endpoint"
I0512 12:01:05.328646  906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:01:05.329958  906438 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:01:05.330039  906438 store.go:1366] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0512 12:01:05.330087  906438 reflector.go:243] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0512 12:01:05.330196  906438 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.330291  906438 client.go:360] parsed scheme: "endpoint"
I0512 12:01:05.330314  906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:01:05.330597  906438 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:01:05.330720  906438 store.go:1366] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0512 12:01:05.330751  906438 reflector.go:243] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0512 12:01:05.330839  906438 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.330928  906438 client.go:360] parsed scheme: "endpoint"
I0512 12:01:05.330954  906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:01:05.331274  906438 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:01:05.331278  906438 store.go:1366] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0512 12:01:05.331296  906438 reflector.go:243] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0512 12:01:05.331443  906438 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.331559  906438 client.go:360] parsed scheme: "endpoint"
I0512 12:01:05.331577  906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:01:05.331840  906438 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:01:05.331863  906438 store.go:1366] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0512 12:01:05.331897  906438 reflector.go:243] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0512 12:01:05.332013  906438 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.332114  906438 client.go:360] parsed scheme: "endpoint"
I0512 12:01:05.332135  906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:01:05.332423  906438 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:01:05.332466  906438 store.go:1366] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0512 12:01:05.332498  906438 reflector.go:243] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0512 12:01:05.332628  906438 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.332718  906438 client.go:360] parsed scheme: "endpoint"
I0512 12:01:05.332741  906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:01:05.333032  906438 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:01:05.333061  906438 store.go:1366] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0512 12:01:05.333078  906438 master.go:540] Enabling API group "storage.k8s.io".
I0512 12:01:05.333094  906438 master.go:529] Skipping disabled API group "flowcontrol.apiserver.k8s.io".
I0512 12:01:05.333105  906438 reflector.go:243] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0512 12:01:05.333245  906438 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.333380  906438 client.go:360] parsed scheme: "endpoint"
I0512 12:01:05.333404  906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:01:05.333678  906438 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:01:05.333884  906438 store.go:1366] Monitoring deployments.apps count at <storage-prefix>//deployments
I0512 12:01:05.333938  906438 reflector.go:243] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0512 12:01:05.334001  906438 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.334092  906438 client.go:360] parsed scheme: "endpoint"
I0512 12:01:05.334108  906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:01:05.334534  906438 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:01:05.334564  906438 store.go:1366] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0512 12:01:05.334669  906438 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.334710  906438 reflector.go:243] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0512 12:01:05.334747  906438 client.go:360] parsed scheme: "endpoint"
I0512 12:01:05.334761  906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:01:05.335243  906438 store.go:1366] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0512 12:01:05.335247  906438 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:01:05.335276  906438 reflector.go:243] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0512 12:01:05.335382  906438 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.335459  906438 client.go:360] parsed scheme: "endpoint"
I0512 12:01:05.335477  906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:01:05.336169  906438 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:01:05.337264  906438 store.go:1366] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0512 12:01:05.337360  906438 reflector.go:243] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0512 12:01:05.337385  906438 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.337458  906438 client.go:360] parsed scheme: "endpoint"
I0512 12:01:05.337472  906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:01:05.337876  906438 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:01:05.337940  906438 store.go:1366] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0512 12:01:05.337955  906438 master.go:540] Enabling API group "apps".
I0512 12:01:05.337984  906438 reflector.go:243] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0512 12:01:05.338069  906438 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.338159  906438 client.go:360] parsed scheme: "endpoint"
I0512 12:01:05.338183  906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:01:05.338575  906438 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:01:05.338674  906438 store.go:1366] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0512 12:01:05.338751  906438 reflector.go:243] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0512 12:01:05.338791  906438 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.338895  906438 client.go:360] parsed scheme: "endpoint"
I0512 12:01:05.338917  906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:01:05.339263  906438 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:01:05.378664  906438 store.go:1366] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0512 12:01:05.378740  906438 reflector.go:243] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0512 12:01:05.378875  906438 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.378977  906438 client.go:360] parsed scheme: "endpoint"
I0512 12:01:05.378998  906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:01:05.379446  906438 store.go:1366] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0512 12:01:05.379529  906438 reflector.go:243] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0512 12:01:05.379541  906438 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:01:05.379551  906438 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.379627  906438 client.go:360] parsed scheme: "endpoint"
I0512 12:01:05.379642  906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:01:05.379973  906438 store.go:1366] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0512 12:01:05.379986  906438 master.go:540] Enabling API group "admissionregistration.k8s.io".
I0512 12:01:05.380019  906438 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.380044  906438 reflector.go:243] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0512 12:01:05.380127  906438 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:01:05.380196  906438 client.go:360] parsed scheme: "endpoint"
I0512 12:01:05.380211  906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:01:05.380518  906438 store.go:1366] Monitoring events count at <storage-prefix>//events
I0512 12:01:05.380533  906438 master.go:540] Enabling API group "events.k8s.io".
I0512 12:01:05.380555  906438 reflector.go:243] Listing and watching *core.Event from storage/cacher.go:/events
I0512 12:01:05.380619  906438 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:01:05.380729  906438 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
| I0512 12:01:05.380875 906438 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.380992 906438 watch_cache.go:523] Replace watchCache (rev: 2) | |
| I0512 12:01:05.381084 906438 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.381180 906438 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.381279 906438 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.381363 906438 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.381501 906438 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.381587 906438 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.381660 906438 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.381742 906438 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.382341 906438 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.382547 906438 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.383107 906438 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.383346 906438 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.383901 906438 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.384091 906438 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.384595 906438 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.384797 906438 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.385216 906438 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.385366 906438 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| W0512 12:01:05.385397 906438 genericapiserver.go:409] Skipping API batch/v2alpha1 because it has no resources. | |
| I0512 12:01:05.385834 906438 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.385936 906438 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.386605 906438 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.387076 906438 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.387505 906438 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.388003 906438 storage_factory.go:285] storing endpointslices.discovery.k8s.io in discovery.k8s.io/v1beta1, reading as discovery.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| W0512 12:01:05.388051 906438 genericapiserver.go:409] Skipping API discovery.k8s.io/v1alpha1 because it has no resources. | |
| I0512 12:01:05.388453 906438 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.388631 906438 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.389109 906438 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.389427 906438 storage_factory.go:285] storing ingressclasses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.389763 906438 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.389896 906438 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.390327 906438 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| W0512 12:01:05.390376 906438 genericapiserver.go:409] Skipping API node.k8s.io/v1alpha1 because it has no resources. | |
| I0512 12:01:05.390847 906438 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.391047 906438 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.391408 906438 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.391879 906438 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.392175 906438 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.392526 906438 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.392890 906438 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.393184 906438 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.393426 906438 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.393775 906438 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| I0512 12:01:05.394158 906438 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000} | |
| W0512 12:01:05.394208 906438 genericapiserver.go:409] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources. | |
I0512 12:01:05.394530  906438 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.394843  906438 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
W0512 12:01:05.394900  906438 genericapiserver.go:409] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0512 12:01:05.395198  906438 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.395483  906438 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.395776  906438 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.396066  906438 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.396199  906438 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.396492  906438 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.396720  906438 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.396968  906438 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.397232  906438 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
W0512 12:01:05.397265  906438 genericapiserver.go:409] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0512 12:01:05.397792  906438 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.398262  906438 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.398424  906438 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.398820  906438 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.399009  906438 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.399148  906438 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.399518  906438 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.399692  906438 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.399840  906438 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.400253  906438 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.400401  906438 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.400540  906438 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
W0512 12:01:05.400571  906438 genericapiserver.go:409] Skipping API apps/v1beta2 because it has no resources.
W0512 12:01:05.400577  906438 genericapiserver.go:409] Skipping API apps/v1beta1 because it has no resources.
I0512 12:01:05.401006  906438 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.401335  906438 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.401690  906438 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.402091  906438 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:01:05.402483  906438 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2cd5ce9-75a8-4971-87ef-34bd6cc871d5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
W0512 12:01:05.404877  906438 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0512 12:01:05.404971  906438 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0512 12:01:05.404978  906438 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
I0512 12:01:05.405091  906438 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0512 12:01:05.405117  906438 healthz.go:186] healthz check poststarthook/bootstrap-controller failed: not finished
I0512 12:01:05.405126  906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:01:05.405137  906438 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0512 12:01:05.405148  906438 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:01:05.405221  906438 httplog.go:90] verb="GET" URI="/healthz" latency=267.914µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:41304":
I0512 12:01:05.405251  906438 reflector.go:207] Starting reflector *v1.ConfigMap (12h0m0s) from k8s.io/kubernetes/pkg/master/controller/clusterauthenticationtrust/cluster_authentication_trust_controller.go:444
I0512 12:01:05.405261  906438 reflector.go:243] Listing and watching *v1.ConfigMap from k8s.io/kubernetes/pkg/master/controller/clusterauthenticationtrust/cluster_authentication_trust_controller.go:444
I0512 12:01:05.406018  906438 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system/configmaps?limit=500&resourceVersion=0" latency=498.695µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304":
I0512 12:01:05.406173  906438 httplog.go:90] verb="GET" URI="/api/v1/namespaces/default/endpoints/kubernetes" latency=1.217294ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41306":
I0512 12:01:05.407253  906438 get.go:251] Starting watch for /api/v1/namespaces/kube-system/configmaps, rv=2 labels= fields= timeout=8m34s
I0512 12:01:05.408022  906438 httplog.go:90] verb="GET" URI="/api/v1/services" latency=748.63µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304":
I0512 12:01:05.418731  906438 httplog.go:90] verb="GET" URI="/api/v1/services" latency=668.635µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304":
I0512 12:01:05.420174  906438 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0512 12:01:05.420200  906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:01:05.420226  906438 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0512 12:01:05.420251  906438 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:01:05.420288  906438 httplog.go:90] verb="GET" URI="/healthz" latency=154.347µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304":
I0512 12:01:05.420791  906438 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=506.424µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:05.421286  906438 httplog.go:90] verb="GET" URI="/api/v1/services" latency=523.701µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304":
I0512 12:01:05.421498  906438 httplog.go:90] verb="GET" URI="/api/v1/services" latency=518.967µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:05.422503  906438 httplog.go:90] verb="POST" URI="/api/v1/namespaces" latency=1.440084ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41310":
I0512 12:01:05.423203  906438 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-public" latency=479.309µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:05.424088  906438 httplog.go:90] verb="POST" URI="/api/v1/namespaces" latency=671.949µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:05.424811  906438 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-node-lease" latency=526.601µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:05.425671  906438 httplog.go:90] verb="POST" URI="/api/v1/namespaces" latency=654.277µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:05.505098  906438 shared_informer.go:270] caches populated
I0512 12:01:05.505121  906438 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
I0512 12:01:05.511062  906438 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0512 12:01:05.511089  906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:01:05.511105  906438 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0512 12:01:05.511113  906438 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:01:05.511167  906438 httplog.go:90] verb="GET" URI="/healthz" latency=192.851µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:41308":
I0512 12:01:05.520894  906438 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0512 12:01:05.520930  906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:01:05.520936  906438 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0512 12:01:05.520941  906438 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:01:05.521052  906438 httplog.go:90] verb="GET" URI="/healthz" latency=194.875µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:05.606347  906438 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0512 12:01:05.606372  906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:01:05.606381  906438 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0512 12:01:05.606387  906438 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:01:05.606433  906438 httplog.go:90] verb="GET" URI="/healthz" latency=168.596µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:41308":
I0512 12:01:05.621868  906438 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0512 12:01:05.621966  906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:01:05.622014  906438 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0512 12:01:05.622055  906438 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:01:05.622205  906438 httplog.go:90] verb="GET" URI="/healthz" latency=518.532µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:05.706364  906438 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0512 12:01:05.706427  906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:01:05.706457  906438 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0512 12:01:05.706481  906438 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:01:05.706609  906438 httplog.go:90] verb="GET" URI="/healthz" latency=427.26µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:41308":
I0512 12:01:05.720912  906438 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0512 12:01:05.720930  906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:01:05.720936  906438 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0512 12:01:05.720940  906438 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:01:05.720980  906438 httplog.go:90] verb="GET" URI="/healthz" latency=131.233µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:05.806055  906438 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0512 12:01:05.806071  906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:01:05.806077  906438 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0512 12:01:05.806081  906438 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:01:05.806124  906438 httplog.go:90] verb="GET" URI="/healthz" latency=221.52µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:41308":
I0512 12:01:05.820750  906438 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0512 12:01:05.820802  906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:01:05.820807  906438 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0512 12:01:05.820811  906438 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:01:05.820875  906438 httplog.go:90] verb="GET" URI="/healthz" latency=150.349µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
| I0512 12:01:05.905881 906438 healthz.go:186] healthz check etcd failed: etcd client connection not yet established | |
| I0512 12:01:05.905905 906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished | |
| I0512 12:01:05.905939 906438 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished | |
| I0512 12:01:05.905945 906438 healthz.go:200] [+]ping ok | |
| [+]log ok | |
| [-]etcd failed: reason withheld | |
| [+]poststarthook/generic-apiserver-start-informers ok | |
| [+]poststarthook/bootstrap-controller ok | |
| [-]poststarthook/rbac/bootstrap-roles failed: reason withheld | |
| [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld | |
| [+]poststarthook/start-cluster-authentication-info-controller ok | |
| healthz check failed | |
| I0512 12:01:05.906026 906438 httplog.go:90] verb="GET" URI="/healthz" latency=247.822µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:41308": | |
| I0512 12:01:05.920778 906438 healthz.go:186] healthz check etcd failed: etcd client connection not yet established | |
| I0512 12:01:05.920789 906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished | |
| I0512 12:01:05.920794 906438 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished | |
| I0512 12:01:05.920798 906438 healthz.go:200] [+]ping ok | |
| [+]log ok | |
| [-]etcd failed: reason withheld | |
| [+]poststarthook/generic-apiserver-start-informers ok | |
| [+]poststarthook/bootstrap-controller ok | |
| [-]poststarthook/rbac/bootstrap-roles failed: reason withheld | |
| [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld | |
| [+]poststarthook/start-cluster-authentication-info-controller ok | |
| healthz check failed | |
| I0512 12:01:05.920820 906438 httplog.go:90] verb="GET" URI="/healthz" latency=86.746µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:06.005921 906438 healthz.go:186] healthz check etcd failed: etcd client connection not yet established | |
| I0512 12:01:06.005951 906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished | |
| I0512 12:01:06.005980 906438 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished | |
| I0512 12:01:06.005984 906438 healthz.go:200] [+]ping ok | |
| [+]log ok | |
| [-]etcd failed: reason withheld | |
| [+]poststarthook/generic-apiserver-start-informers ok | |
| [+]poststarthook/bootstrap-controller ok | |
| [-]poststarthook/rbac/bootstrap-roles failed: reason withheld | |
| [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld | |
| [+]poststarthook/start-cluster-authentication-info-controller ok | |
| healthz check failed | |
| I0512 12:01:06.006015 906438 httplog.go:90] verb="GET" URI="/healthz" latency=181.89µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:41308": | |
I0512 12:01:06.020762 906438 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0512 12:01:06.020790 906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:01:06.020794 906438 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0512 12:01:06.020821 906438 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:01:06.020889 906438 httplog.go:90] verb="GET" URI="/healthz" latency=168.473µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.106266 906438 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0512 12:01:06.106315 906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:01:06.106334 906438 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0512 12:01:06.106347 906438 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:01:06.106440 906438 httplog.go:90] verb="GET" URI="/healthz" latency=327.231µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:41308":
I0512 12:01:06.120741 906438 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0512 12:01:06.120753 906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:01:06.120758 906438 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0512 12:01:06.120761 906438 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:01:06.120780 906438 httplog.go:90] verb="GET" URI="/healthz" latency=68.589µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.206522 906438 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0512 12:01:06.206575 906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:01:06.206594 906438 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0512 12:01:06.206607 906438 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:01:06.206717 906438 httplog.go:90] verb="GET" URI="/healthz" latency=351.424µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:41308":
I0512 12:01:06.221472 906438 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0512 12:01:06.221530 906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:01:06.221550 906438 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0512 12:01:06.221563 906438 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:01:06.221678 906438 httplog.go:90] verb="GET" URI="/healthz" latency=412.744µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.271235 906438 client.go:360] parsed scheme: "endpoint"
I0512 12:01:06.271385 906438 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:01:06.307846 906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:01:06.307906 906438 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0512 12:01:06.307933 906438 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:01:06.308038 906438 httplog.go:90] verb="GET" URI="/healthz" latency=1.872709ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:41308":
I0512 12:01:06.321265 906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:01:06.321277 906438 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0512 12:01:06.321283 906438 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:01:06.321367 906438 httplog.go:90] verb="GET" URI="/healthz" latency=600.384µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.405732 906438 httplog.go:90] verb="GET" URI="/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical" latency=1.030975ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.405800 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.126675ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304":
I0512 12:01:06.406428 906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:01:06.406462 906438 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0512 12:01:06.406496 906438 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:01:06.406542 906438 httplog.go:90] verb="GET" URI="/healthz" latency=678.144µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:41314":
I0512 12:01:06.407697 906438 httplog.go:90] verb="POST" URI="/apis/scheduling.k8s.io/v1/priorityclasses" latency=1.514882ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304":
I0512 12:01:06.407866 906438 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
I0512 12:01:06.415473 906438 httplog.go:90] verb="GET" URI="/apis/scheduling.k8s.io/v1/priorityclasses/system-cluster-critical" latency=7.473254ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304":
I0512 12:01:06.415571 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=8.465316ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41314":
I0512 12:01:06.416926 906438 httplog.go:90] verb="POST" URI="/apis/scheduling.k8s.io/v1/priorityclasses" latency=977.461µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304":
I0512 12:01:06.417029 906438 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
I0512 12:01:06.417041 906438 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
I0512 12:01:06.419562 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin" latency=3.092898ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.420269 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/admin" latency=483.139µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.420984 906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:01:06.420998 906438 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:01:06.421038 906438 httplog.go:90] verb="GET" URI="/healthz" latency=455.219µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304":
I0512 12:01:06.421073 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit" latency=549.729µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.421721 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/edit" latency=413.761µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.422436 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view" latency=487.937µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.423963 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/view" latency=484.886µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.424650 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery" latency=445.789µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.425273 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin" latency=406.728µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.427068 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.554978ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.427215 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0512 12:01:06.428698 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery" latency=1.359487ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.429822 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=848.812µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.429953 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0512 12:01:06.430616 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user" latency=533.526µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.434954 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=4.094818ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.435053 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0512 12:01:06.435589 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer" latency=419.062µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.436494 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=674.696µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.436589 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0512 12:01:06.437122 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/admin" latency=412.143µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.438100 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=755.347µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.438212 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/admin
I0512 12:01:06.438764 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/edit" latency=431.582µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.439746 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=762.052µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.445868 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/edit
I0512 12:01:06.446815 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/view" latency=744.554µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.448165 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=980.515µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.448286 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/view
I0512 12:01:06.449808 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin" latency=1.397645ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.450907 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=839.107µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.451040 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0512 12:01:06.451617 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit" latency=449.613µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.452795 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=921.644µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.452968 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0512 12:01:06.453476 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view" latency=393.357µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.454702 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=987.02µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.456304 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0512 12:01:06.463736 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster" latency=807.894µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.465025 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=905.874µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.465166 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0512 12:01:06.466790 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:node" latency=1.495147ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.467941 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=874.756µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.468106 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:node
I0512 12:01:06.470386 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector" latency=2.170851ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.471366 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=717.871µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.471496 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0512 12:01:06.472028 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin" latency=418.551µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.473141 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=878.29µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.473285 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0512 12:01:06.473887 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper" latency=449.676µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.474925 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=795.15µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.475048 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0512 12:01:06.475658 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator" latency=490.267µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.477069 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.138384ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.477251 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0512 12:01:06.479466 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator" latency=2.05479ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.480730 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=847.299µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.480846 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0512 12:01:06.481548 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager" latency=567.539µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.482867 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.064723ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.483062 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0512 12:01:06.483633 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns" latency=438.639µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.484559 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=695.206µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.484671 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0512 12:01:06.485255 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner" latency=388.098µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.486220 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=733.48µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.486386 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0512 12:01:06.488812 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient" latency=2.296736ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.490375 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.275536ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.490510 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0512 12:01:06.491206 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" latency=527.846µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.492420 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=926.4µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.492640 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0512 12:01:06.493958 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler" latency=1.179421ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.495236 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=983.774µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.495388 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0512 12:01:06.496022 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:legacy-unknown-approver" latency=487.98µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.497112 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=845.834µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.497240 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:legacy-unknown-approver
I0512 12:01:06.497877 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:kubelet-serving-approver" latency=466.775µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.498871 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=750.136µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.499004 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:kubelet-serving-approver
I0512 12:01:06.501578 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:kube-apiserver-client-approver" latency=2.442605ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.502618 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=765.99µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.502722 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:kube-apiserver-client-approver
I0512 12:01:06.503271 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:kube-apiserver-client-kubelet-approver" latency=429.752µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.504196 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=705.842µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.504315 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:kube-apiserver-client-kubelet-approver
I0512 12:01:06.504883 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier" latency=439.014µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.505920 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=795.063µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.506106 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0512 12:01:06.506596 906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:01:06.506835 906438 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:01:06.506891 906438 httplog.go:90] verb="GET" URI="/healthz" latency=1.283265ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:41304":
I0512 12:01:06.506790 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler" latency=529.188µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.508863 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.527286ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.509104 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0512 12:01:06.510986 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller" latency=1.713982ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.512600 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.198909ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.512749 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0512 12:01:06.513352 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller" latency=478.215µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:06.514411 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=809.04µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
| I0512 12:01:06.514530 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller | |
| I0512 12:01:06.515178 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller" latency=513.037µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:06.516194 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=769.793µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:06.516325 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller | |
| I0512 12:01:06.518157 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller" latency=1.712688ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:06.519217 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=793.21µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:06.519365 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller | |
| I0512 12:01:06.519866 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller" latency=385.922µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:06.520821 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=723.809µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:06.520942 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller | |
| I0512 12:01:06.521466 906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished | |
| I0512 12:01:06.521486 906438 healthz.go:200] [+]ping ok | |
| [+]log ok | |
| [+]etcd ok | |
| [+]poststarthook/generic-apiserver-start-informers ok | |
| [+]poststarthook/bootstrap-controller ok | |
| [-]poststarthook/rbac/bootstrap-roles failed: reason withheld | |
| [+]poststarthook/scheduling/bootstrap-system-priority-classes ok | |
| [+]poststarthook/start-cluster-authentication-info-controller ok | |
| healthz check failed | |
| I0512 12:01:06.521487 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller" latency=423.019µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:06.521520 906438 httplog.go:90] verb="GET" URI="/healthz" latency=948.49µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:06.522498 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=782.204µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:06.522674 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller | |
| I0512 12:01:06.524921 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller" latency=2.101504ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:06.526331 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.114661ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:06.526499 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller | |
| I0512 12:01:06.527149 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpointslice-controller" latency=509.614µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:06.528562 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.114877ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:06.528717 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:endpointslice-controller | |
| I0512 12:01:06.529366 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller" latency=511.497µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:06.530540 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=908.597µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:06.530714 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller | |
| I0512 12:01:06.531470 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector" latency=602.74µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:06.532462 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=726.729µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:06.532569 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector | |
| I0512 12:01:06.537909 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler" latency=5.194837ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:06.539033 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=863.558µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:06.539177 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler | |
| I0512 12:01:06.539897 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller" latency=585.296µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:06.541222 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.026026ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:06.541358 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller | |
| I0512 12:01:06.542116 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller" latency=590.365µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:06.543378 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=983.21µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:06.543656 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller | |
| I0512 12:01:06.544328 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller" latency=529.017µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:06.545592 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=977.456µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:06.545743 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller | |
| I0512 12:01:06.547604 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder" latency=1.713898ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:06.549134 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.181761ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:06.549296 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder | |
| I0512 12:01:06.549966 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector" latency=500.658µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:06.550978 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=723.872µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:06.551127 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector | |
| I0512 12:01:06.551651 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller" latency=407.656µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:06.552637 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=753.713µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:06.552757 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller | |
| I0512 12:01:06.553317 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller" latency=445.876µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:06.554566 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=991.907µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:06.554725 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller | |
| I0512 12:01:06.557196 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller" latency=2.309667ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:06.558624 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.072711ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:06.558771 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller | |
| I0512 12:01:06.559491 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller" latency=564.944µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:06.560798 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.016354ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:06.560943 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller | |
| I0512 12:01:06.565361 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller" latency=624.49µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:06.586047 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.108853ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:06.586183 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller | |
| I0512 12:01:06.605680 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller" latency=773.618µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:06.606359 906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished | |
| I0512 12:01:06.606374 906438 healthz.go:200] [+]ping ok | |
| [+]log ok | |
| [+]etcd ok | |
| [+]poststarthook/generic-apiserver-start-informers ok | |
| [+]poststarthook/bootstrap-controller ok | |
| [-]poststarthook/rbac/bootstrap-roles failed: reason withheld | |
| [+]poststarthook/scheduling/bootstrap-system-priority-classes ok | |
| [+]poststarthook/start-cluster-authentication-info-controller ok | |
| healthz check failed | |
| I0512 12:01:06.606460 906438 httplog.go:90] verb="GET" URI="/healthz" latency=752.507µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:41308": | |
| I0512 12:01:06.621628 906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished | |
| I0512 12:01:06.621652 906438 healthz.go:200] [+]ping ok | |
| [+]log ok | |
| [+]etcd ok | |
| [+]poststarthook/generic-apiserver-start-informers ok | |
| [+]poststarthook/bootstrap-controller ok | |
| [-]poststarthook/rbac/bootstrap-roles failed: reason withheld | |
| [+]poststarthook/scheduling/bootstrap-system-priority-classes ok | |
| [+]poststarthook/start-cluster-authentication-info-controller ok | |
| healthz check failed | |
| I0512 12:01:06.621704 906438 httplog.go:90] verb="GET" URI="/healthz" latency=823.134µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:06.625741 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.017789ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:06.625884 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller | |
| I0512 12:01:06.645710 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller" latency=798.031µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:06.666249 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.284567ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:06.666430 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller | |
| I0512 12:01:06.687580 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller" latency=1.953564ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:06.707856 906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished | |
| I0512 12:01:06.707917 906438 healthz.go:200] [+]ping ok | |
| [+]log ok | |
| [+]etcd ok | |
| [+]poststarthook/generic-apiserver-start-informers ok | |
| [+]poststarthook/bootstrap-controller ok | |
| [-]poststarthook/rbac/bootstrap-roles failed: reason withheld | |
| [+]poststarthook/scheduling/bootstrap-system-priority-classes ok | |
| [+]poststarthook/start-cluster-authentication-info-controller ok | |
| healthz check failed | |
| I0512 12:01:06.708048 906438 httplog.go:90] verb="GET" URI="/healthz" latency=1.964913ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:41304": | |
| I0512 12:01:06.708479 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=3.174176ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:06.708707 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller | |
| I0512 12:01:06.721797 906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished | |
| I0512 12:01:06.721818 906438 healthz.go:200] [+]ping ok | |
| [+]log ok | |
| [+]etcd ok | |
| [+]poststarthook/generic-apiserver-start-informers ok | |
| [+]poststarthook/bootstrap-controller ok | |
| [-]poststarthook/rbac/bootstrap-roles failed: reason withheld | |
| [+]poststarthook/scheduling/bootstrap-system-priority-classes ok | |
| [+]poststarthook/start-cluster-authentication-info-controller ok | |
| healthz check failed | |
| I0512 12:01:06.721874 906438 httplog.go:90] verb="GET" URI="/healthz" latency=710.43µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:06.725380 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller" latency=559.491µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:06.746006 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.164661ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:06.746157 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller | |
| I0512 12:01:06.766031 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller" latency=774.112µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:06.786815 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.734802ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:06.787039 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller | |
| I0512 12:01:06.805865 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller" latency=881.932µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:06.806289 906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished | |
| I0512 12:01:06.806309 906438 healthz.go:200] [+]ping ok | |
| [+]log ok | |
| [+]etcd ok | |
| [+]poststarthook/generic-apiserver-start-informers ok | |
| [+]poststarthook/bootstrap-controller ok | |
| [-]poststarthook/rbac/bootstrap-roles failed: reason withheld | |
| [+]poststarthook/scheduling/bootstrap-system-priority-classes ok | |
| [+]poststarthook/start-cluster-authentication-info-controller ok | |
| healthz check failed | |
| I0512 12:01:06.806352 906438 httplog.go:90] verb="GET" URI="/healthz" latency=642.424µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:41304": | |
| I0512 12:01:06.823034 906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished | |
| I0512 12:01:06.823097 906438 healthz.go:200] [+]ping ok | |
| [+]log ok | |
| [+]etcd ok | |
| [+]poststarthook/generic-apiserver-start-informers ok | |
| [+]poststarthook/bootstrap-controller ok | |
| [-]poststarthook/rbac/bootstrap-roles failed: reason withheld | |
| [+]poststarthook/scheduling/bootstrap-system-priority-classes ok | |
| [+]poststarthook/start-cluster-authentication-info-controller ok | |
| healthz check failed | |
| I0512 12:01:06.823225 906438 httplog.go:90] verb="GET" URI="/healthz" latency=1.909249ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:06.827643 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.596092ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:06.827829 906438 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller | |
| I0512 12:01:06.849807 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin" latency=4.777489ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:06.866586 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=1.60325ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:06.866759 906438 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin | |
| I0512 12:01:06.886780 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery" latency=1.951954ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:06.907573 906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished | |
| I0512 12:01:06.907637 906438 healthz.go:200] [+]ping ok | |
| [+]log ok | |
| [+]etcd ok | |
| [+]poststarthook/generic-apiserver-start-informers ok | |
| [+]poststarthook/bootstrap-controller ok | |
| [-]poststarthook/rbac/bootstrap-roles failed: reason withheld | |
| [+]poststarthook/scheduling/bootstrap-system-priority-classes ok | |
| [+]poststarthook/start-cluster-authentication-info-controller ok | |
| healthz check failed | |
| I0512 12:01:06.907777 906438 httplog.go:90] verb="GET" URI="/healthz" latency=1.856172ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:41308": | |
| I0512 12:01:06.908383 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=3.043415ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:06.908821 906438 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery | |
| I0512 12:01:06.922928 906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished | |
| I0512 12:01:06.922984 906438 healthz.go:200] [+]ping ok | |
| [+]log ok | |
| [+]etcd ok | |
| [+]poststarthook/generic-apiserver-start-informers ok | |
| [+]poststarthook/bootstrap-controller ok | |
| [-]poststarthook/rbac/bootstrap-roles failed: reason withheld | |
| [+]poststarthook/scheduling/bootstrap-system-priority-classes ok | |
| [+]poststarthook/start-cluster-authentication-info-controller ok | |
| healthz check failed | |
| I0512 12:01:06.923112 906438 httplog.go:90] verb="GET" URI="/healthz" latency=1.80448ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:06.926415 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user" latency=1.457826ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:06.948718 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=3.130824ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:06.949165 906438 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user | |
| I0512 12:01:06.966480 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer" latency=1.485612ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:06.987662 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.480585ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:06.988042 906438 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer | |
| I0512 12:01:07.006899 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier" latency=1.755277ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:07.007225 906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished | |
| I0512 12:01:07.007269 906438 healthz.go:200] [+]ping ok | |
| [+]log ok | |
| [+]etcd ok | |
| [+]poststarthook/generic-apiserver-start-informers ok | |
| [+]poststarthook/bootstrap-controller ok | |
| [-]poststarthook/rbac/bootstrap-roles failed: reason withheld | |
| [+]poststarthook/scheduling/bootstrap-system-priority-classes ok | |
| [+]poststarthook/start-cluster-authentication-info-controller ok | |
| healthz check failed | |
| I0512 12:01:07.007393 906438 httplog.go:90] verb="GET" URI="/healthz" latency=1.419396ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:41308": | |
| I0512 12:01:07.021941 906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished | |
| I0512 12:01:07.021979 906438 healthz.go:200] [+]ping ok | |
| [+]log ok | |
| [+]etcd ok | |
| [+]poststarthook/generic-apiserver-start-informers ok | |
| [+]poststarthook/bootstrap-controller ok | |
| [-]poststarthook/rbac/bootstrap-roles failed: reason withheld | |
| [+]poststarthook/scheduling/bootstrap-system-priority-classes ok | |
| [+]poststarthook/start-cluster-authentication-info-controller ok | |
| healthz check failed | |
| I0512 12:01:07.022021 906438 httplog.go:90] verb="GET" URI="/healthz" latency=1.054178ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:07.025725 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=974.685µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:07.025894 906438 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier | |
| I0512 12:01:07.045641 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager" latency=785.964µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
I0512 12:01:07.070103  906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=5.313988ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:07.070300  906438 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0512 12:01:07.086539  906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns" latency=1.394983ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:07.107515  906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:01:07.107582  906438 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:01:07.107716  906438 httplog.go:90] verb="GET" URI="/healthz" latency=1.678348ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:41304":
I0512 12:01:07.107957  906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.664272ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:07.108377  906438 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0512 12:01:07.121782  906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:01:07.121809  906438 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:01:07.121858  906438 httplog.go:90] verb="GET" URI="/healthz" latency=672.304µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:07.126770  906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler" latency=1.616151ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:07.147970  906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.672796ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:07.148419  906438 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0512 12:01:07.165743  906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler" latency=786.641µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:07.186357  906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=1.251444ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:07.186555  906438 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0512 12:01:07.206500  906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node" latency=1.437747ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:07.207268  906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:01:07.207312  906438 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:01:07.207412  906438 httplog.go:90] verb="GET" URI="/healthz" latency=1.389309ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:41304":
I0512 12:01:07.221571  906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:01:07.221600  906438 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:01:07.221659  906438 httplog.go:90] verb="GET" URI="/healthz" latency=830.417µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304":
I0512 12:01:07.225658  906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=912.812µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304":
I0512 12:01:07.225785  906438 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0512 12:01:07.245816  906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller" latency=918.321µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304":
I0512 12:01:07.270702  906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=1.309672ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304":
I0512 12:01:07.270903  906438 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0512 12:01:07.285650  906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller" latency=722.064µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304":
I0512 12:01:07.306495  906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=1.550851ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304":
I0512 12:01:07.306653  906438 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0512 12:01:07.307992  906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:01:07.308008  906438 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:01:07.308051  906438 httplog.go:90] verb="GET" URI="/healthz" latency=2.384511ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:41308":
I0512 12:01:07.321796  906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:01:07.321821  906438 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:01:07.321865  906438 httplog.go:90] verb="GET" URI="/healthz" latency=885.522µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:07.326198  906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller" latency=811.819µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:07.345811  906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=1.039766ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:07.345940  906438 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0512 12:01:07.370590  906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller" latency=5.454444ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:07.387489  906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.211288ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:07.387737  906438 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0512 12:01:07.405721  906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller" latency=926.57µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:07.406115  906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:01:07.406134  906438 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:01:07.406170  906438 httplog.go:90] verb="GET" URI="/healthz" latency=541.105µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:41304":
I0512 12:01:07.422529  906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:01:07.422581  906438 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:01:07.422700  906438 httplog.go:90] verb="GET" URI="/healthz" latency=1.709094ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304":
I0512 12:01:07.426261  906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=1.361248ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304":
I0512 12:01:07.426423  906438 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0512 12:01:07.445562  906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller" latency=731.037µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304":
I0512 12:01:07.468065  906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.874924ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304":
I0512 12:01:07.468564  906438 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0512 12:01:07.485454  906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller" latency=674.162µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304":
I0512 12:01:07.505809  906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=993.205µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304":
I0512 12:01:07.505955  906438 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0512 12:01:07.507435  906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:01:07.507445  906438 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:01:07.507494  906438 httplog.go:90] verb="GET" URI="/healthz" latency=1.814993ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:41308":
I0512 12:01:07.521636  906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:01:07.521657  906438 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:01:07.521709  906438 httplog.go:90] verb="GET" URI="/healthz" latency=939.375µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:07.525773  906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpointslice-controller" latency=867.581µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:07.545894  906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=1.059941ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:07.546041  906438 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpointslice-controller
I0512 12:01:07.565649  906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller" latency=747.493µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:07.585965  906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=1.071278ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:07.586193  906438 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0512 12:01:07.605768  906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector" latency=793.102µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:07.606415  906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:01:07.606429  906438 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:01:07.606489  906438 httplog.go:90] verb="GET" URI="/healthz" latency=760.913µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:41304":
I0512 12:01:07.621566  906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:01:07.621591  906438 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:01:07.621658  906438 httplog.go:90] verb="GET" URI="/healthz" latency=844.056µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304":
I0512 12:01:07.625994  906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=1.24057ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304":
I0512 12:01:07.626115  906438 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0512 12:01:07.647244  906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler" latency=1.867472ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304":
I0512 12:01:07.666227  906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=1.251749ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304":
I0512 12:01:07.666376  906438 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0512 12:01:07.687165  906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller" latency=1.777835ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304":
I0512 12:01:07.706240  906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=1.250241ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304":
I0512 12:01:07.706395  906438 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0512 12:01:07.707741  906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:01:07.707758  906438 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:01:07.707818  906438 httplog.go:90] verb="GET" URI="/healthz" latency=2.101831ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:41308":
I0512 12:01:07.721557  906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:01:07.721579  906438 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:01:07.721632  906438 httplog.go:90] verb="GET" URI="/healthz" latency=725.625µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:07.725390  906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller" latency=668.363µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:07.748317  906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.992913ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:07.748823  906438 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0512 12:01:07.767263  906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller" latency=1.849792ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:07.787142  906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.069302ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:07.787351  906438 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0512 12:01:07.805977  906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder" latency=1.046015ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:07.806328  906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:01:07.806345  906438 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:01:07.806414  906438 httplog.go:90] verb="GET" URI="/healthz" latency=703.174µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:41304":
I0512 12:01:07.821414  906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:01:07.821435  906438 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:01:07.821484  906438 httplog.go:90] verb="GET" URI="/healthz" latency=778.284µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304":
I0512 12:01:07.833082  906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=6.765865ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304":
I0512 12:01:07.833246  906438 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0512 12:01:07.845629  906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector" latency=732.478µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304":
I0512 12:01:07.866698  906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=1.734745ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304":
I0512 12:01:07.867036  906438 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0512 12:01:07.885333  906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller" latency=613.708µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304":
I0512 12:01:07.905938  906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=964.559µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304":
I0512 12:01:07.906120  906438 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0512 12:01:07.907596  906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:01:07.907624  906438 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:01:07.907665  906438 httplog.go:90] verb="GET" URI="/healthz" latency=2.012738ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:41308":
I0512 12:01:07.921482  906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:01:07.921500  906438 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:01:07.921566  906438 httplog.go:90] verb="GET" URI="/healthz" latency=713.578µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:07.925251  906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller" latency=503.452µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:07.947945  906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.825536ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:07.948429  906438 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0512 12:01:07.965503  906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller" latency=619.146µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:07.988128  906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.917046ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:07.988562  906438 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0512 12:01:08.005626  906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller" latency=803.776µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:08.006244  906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:01:08.006262  906438 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:01:08.006310  906438 httplog.go:90] verb="GET" URI="/healthz" latency=574.341µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:41304":
I0512 12:01:08.021435  906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:01:08.021457  906438 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:01:08.021500  906438 httplog.go:90] verb="GET" URI="/healthz" latency=693.555µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304":
I0512 12:01:08.025659  906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=888.039µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304":
I0512 12:01:08.025770  906438 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0512 12:01:08.047108  906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller" latency=1.87633ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304":
I0512 12:01:08.068285  906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.938912ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304":
I0512 12:01:08.068658  906438 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0512 12:01:08.085785  906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller" latency=791.212µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304":
I0512 12:01:08.105820  906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=1.012135ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304":
I0512 12:01:08.105964  906438 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0512 12:01:08.107381  906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:01:08.107392  906438 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:01:08.107427  906438 httplog.go:90] verb="GET" URI="/healthz" latency=1.773747ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:41308":
I0512 12:01:08.122843  906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:01:08.122904  906438 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:01:08.123046  906438 httplog.go:90] verb="GET" URI="/healthz" latency=1.90654ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:08.125356  906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller" latency=601.729µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:08.148335  906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=3.09582ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:08.148654  906438 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0512 12:01:08.165532  906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller" latency=674.845µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:08.188275  906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.945038ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:08.188771  906438 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0512 12:01:08.205787  906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller" latency=845.328µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308":
I0512 12:01:08.206072  906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:01:08.206086  906438 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:01:08.206120  906438 httplog.go:90] verb="GET" URI="/healthz" latency=453.289µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:41304":
| I0512 12:01:08.222529 906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished | |
| I0512 12:01:08.222582 906438 healthz.go:200] [+]ping ok | |
| [+]log ok | |
| [+]etcd ok | |
| [+]poststarthook/generic-apiserver-start-informers ok | |
| [+]poststarthook/bootstrap-controller ok | |
| [-]poststarthook/rbac/bootstrap-roles failed: reason withheld | |
| [+]poststarthook/scheduling/bootstrap-system-priority-classes ok | |
| [+]poststarthook/start-cluster-authentication-info-controller ok | |
| healthz check failed | |
| I0512 12:01:08.222708 906438 httplog.go:90] verb="GET" URI="/healthz" latency=1.704999ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:08.225653 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=912.938µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:08.225826 906438 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller | |
| I0512 12:01:08.245504 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller" latency=663.978µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:08.267873 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.692353ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:08.268013 906438 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller | |
| I0512 12:01:08.285497 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller" latency=611.164µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:08.306084 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=1.282034ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:08.306211 906438 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller | |
| I0512 12:01:08.309632 906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished | |
| I0512 12:01:08.309649 906438 healthz.go:200] [+]ping ok | |
| [+]log ok | |
| [+]etcd ok | |
| [+]poststarthook/generic-apiserver-start-informers ok | |
| [+]poststarthook/bootstrap-controller ok | |
| [-]poststarthook/rbac/bootstrap-roles failed: reason withheld | |
| [+]poststarthook/scheduling/bootstrap-system-priority-classes ok | |
| [+]poststarthook/start-cluster-authentication-info-controller ok | |
| healthz check failed | |
| I0512 12:01:08.309683 906438 httplog.go:90] verb="GET" URI="/healthz" latency=4.017051ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:41308": | |
| I0512 12:01:08.321251 906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished | |
| I0512 12:01:08.321264 906438 healthz.go:200] [+]ping ok | |
| [+]log ok | |
| [+]etcd ok | |
| [+]poststarthook/generic-apiserver-start-informers ok | |
| [+]poststarthook/bootstrap-controller ok | |
| [-]poststarthook/rbac/bootstrap-roles failed: reason withheld | |
| [+]poststarthook/scheduling/bootstrap-system-priority-classes ok | |
| [+]poststarthook/start-cluster-authentication-info-controller ok | |
| healthz check failed | |
| I0512 12:01:08.321332 906438 httplog.go:90] verb="GET" URI="/healthz" latency=580.683µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:08.325261 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader" latency=491.557µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:08.326036 906438 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=530.784µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:08.346449 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles" latency=1.584794ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:08.346575 906438 storage_rbac.go:279] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system | |
| I0512 12:01:08.365611 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer" latency=807.722µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:08.366665 906438 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=665.925µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:08.386190 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles" latency=1.304737ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:08.386374 906438 storage_rbac.go:279] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system | |
| I0512 12:01:08.405550 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider" latency=703.61µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:08.406124 906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished | |
| I0512 12:01:08.406140 906438 healthz.go:200] [+]ping ok | |
| [+]log ok | |
| [+]etcd ok | |
| [+]poststarthook/generic-apiserver-start-informers ok | |
| [+]poststarthook/bootstrap-controller ok | |
| [-]poststarthook/rbac/bootstrap-roles failed: reason withheld | |
| [+]poststarthook/scheduling/bootstrap-system-priority-classes ok | |
| [+]poststarthook/start-cluster-authentication-info-controller ok | |
| healthz check failed | |
| I0512 12:01:08.406171 906438 httplog.go:90] verb="GET" URI="/healthz" latency=489.461µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:41304": | |
| I0512 12:01:08.406436 906438 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=588.708µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:08.421556 906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished | |
| I0512 12:01:08.421573 906438 healthz.go:200] [+]ping ok | |
| [+]log ok | |
| [+]etcd ok | |
| [+]poststarthook/generic-apiserver-start-informers ok | |
| [+]poststarthook/bootstrap-controller ok | |
| [-]poststarthook/rbac/bootstrap-roles failed: reason withheld | |
| [+]poststarthook/scheduling/bootstrap-system-priority-classes ok | |
| [+]poststarthook/start-cluster-authentication-info-controller ok | |
| healthz check failed | |
| I0512 12:01:08.421618 906438 httplog.go:90] verb="GET" URI="/healthz" latency=812.786µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:08.425740 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles" latency=1.012386ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:08.425854 906438 storage_rbac.go:279] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system | |
| I0512 12:01:08.445601 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner" latency=761.078µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:08.446640 906438 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=694.541µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:08.465686 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles" latency=883.183µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:08.465837 906438 storage_rbac.go:279] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system | |
| I0512 12:01:08.487143 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager" latency=1.957484ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:08.490031 906438 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.609109ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:08.508497 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles" latency=3.405371ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:08.508681 906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished | |
| I0512 12:01:08.508695 906438 healthz.go:200] [+]ping ok | |
| [+]log ok | |
| [+]etcd ok | |
| [+]poststarthook/generic-apiserver-start-informers ok | |
| [+]poststarthook/bootstrap-controller ok | |
| [-]poststarthook/rbac/bootstrap-roles failed: reason withheld | |
| [+]poststarthook/scheduling/bootstrap-system-priority-classes ok | |
| [+]poststarthook/start-cluster-authentication-info-controller ok | |
| healthz check failed | |
| I0512 12:01:08.508738 906438 storage_rbac.go:279] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system | |
| I0512 12:01:08.508754 906438 httplog.go:90] verb="GET" URI="/healthz" latency=2.86965ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:41304": | |
| I0512 12:01:08.521711 906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished | |
| I0512 12:01:08.521734 906438 healthz.go:200] [+]ping ok | |
| [+]log ok | |
| [+]etcd ok | |
| [+]poststarthook/generic-apiserver-start-informers ok | |
| [+]poststarthook/bootstrap-controller ok | |
| [-]poststarthook/rbac/bootstrap-roles failed: reason withheld | |
| [+]poststarthook/scheduling/bootstrap-system-priority-classes ok | |
| [+]poststarthook/start-cluster-authentication-info-controller ok | |
| healthz check failed | |
| I0512 12:01:08.521779 906438 httplog.go:90] verb="GET" URI="/healthz" latency=962.473µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:08.525305 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler" latency=542.466µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:08.526325 906438 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=638.564µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:08.548335 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles" latency=3.021522ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:08.548782 906438 storage_rbac.go:279] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system | |
| I0512 12:01:08.565510 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer" latency=637.67µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:08.566630 906438 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-public" latency=823.483µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:08.588635 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles" latency=3.190703ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:08.589087 906438 storage_rbac.go:279] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public | |
| I0512 12:01:08.607192 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader" latency=1.890196ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:08.607495 906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished | |
| I0512 12:01:08.607547 906438 healthz.go:200] [+]ping ok | |
| [+]log ok | |
| [+]etcd ok | |
| [+]poststarthook/generic-apiserver-start-informers ok | |
| [+]poststarthook/bootstrap-controller ok | |
| [-]poststarthook/rbac/bootstrap-roles failed: reason withheld | |
| [+]poststarthook/scheduling/bootstrap-system-priority-classes ok | |
| [+]poststarthook/start-cluster-authentication-info-controller ok | |
| healthz check failed | |
| I0512 12:01:08.607672 906438 httplog.go:90] verb="GET" URI="/healthz" latency=1.603402ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:41308": | |
| I0512 12:01:08.612555 906438 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=4.589962ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:08.621342 906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished | |
| I0512 12:01:08.621364 906438 healthz.go:200] [+]ping ok | |
| [+]log ok | |
| [+]etcd ok | |
| [+]poststarthook/generic-apiserver-start-informers ok | |
| [+]poststarthook/bootstrap-controller ok | |
| [-]poststarthook/rbac/bootstrap-roles failed: reason withheld | |
| [+]poststarthook/scheduling/bootstrap-system-priority-classes ok | |
| [+]poststarthook/start-cluster-authentication-info-controller ok | |
| healthz check failed | |
| I0512 12:01:08.621421 906438 httplog.go:90] verb="GET" URI="/healthz" latency=709.073µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:08.625915 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings" latency=1.166477ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:08.626075 906438 storage_rbac.go:309] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system | |
| I0512 12:01:08.647299 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager" latency=2.002769ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:08.648136 906438 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=514.964µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:08.667998 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings" latency=2.772023ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:08.668414 906438 storage_rbac.go:309] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system | |
| I0512 12:01:08.685803 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler" latency=847.74µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:08.687209 906438 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=941.023µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:08.705810 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings" latency=963.719µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:08.705967 906438 storage_rbac.go:309] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system | |
| I0512 12:01:08.719539 906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished | |
| I0512 12:01:08.719555 906438 healthz.go:200] [+]ping ok | |
| [+]log ok | |
| [+]etcd ok | |
| [+]poststarthook/generic-apiserver-start-informers ok | |
| [+]poststarthook/bootstrap-controller ok | |
| [-]poststarthook/rbac/bootstrap-roles failed: reason withheld | |
| [+]poststarthook/scheduling/bootstrap-system-priority-classes ok | |
| [+]poststarthook/start-cluster-authentication-info-controller ok | |
| healthz check failed | |
| I0512 12:01:08.719617 906438 httplog.go:90] verb="GET" URI="/healthz" latency=13.972596ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:41308": | |
| I0512 12:01:08.721326 906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished | |
| I0512 12:01:08.721345 906438 healthz.go:200] [+]ping ok | |
| [+]log ok | |
| [+]etcd ok | |
| [+]poststarthook/generic-apiserver-start-informers ok | |
| [+]poststarthook/bootstrap-controller ok | |
| [-]poststarthook/rbac/bootstrap-roles failed: reason withheld | |
| [+]poststarthook/scheduling/bootstrap-system-priority-classes ok | |
| [+]poststarthook/start-cluster-authentication-info-controller ok | |
| healthz check failed | |
| I0512 12:01:08.721426 906438 httplog.go:90] verb="GET" URI="/healthz" latency=756.34µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:08.725230 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer" latency=475.181µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:08.726222 906438 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=645.016µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:08.745988 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings" latency=1.146492ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:08.746157 906438 storage_rbac.go:309] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system | |
| I0512 12:01:08.765704 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider" latency=749.335µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:08.766722 906438 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=700.072µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:08.785609 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings" latency=807.363µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:08.785801 906438 storage_rbac.go:309] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system | |
| I0512 12:01:08.805795 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner" latency=927.642µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:08.806315 906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished | |
| I0512 12:01:08.806332 906438 healthz.go:200] [+]ping ok | |
| [+]log ok | |
| [+]etcd ok | |
| [+]poststarthook/generic-apiserver-start-informers ok | |
| [+]poststarthook/bootstrap-controller ok | |
| [-]poststarthook/rbac/bootstrap-roles failed: reason withheld | |
| [+]poststarthook/scheduling/bootstrap-system-priority-classes ok | |
| [+]poststarthook/start-cluster-authentication-info-controller ok | |
| healthz check failed | |
| I0512 12:01:08.806364 906438 httplog.go:90] verb="GET" URI="/healthz" latency=631.464µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:41304": | |
| I0512 12:01:08.806936 906438 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=728.566µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:08.827803 906438 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished | |
| I0512 12:01:08.827826 906438 healthz.go:200] [+]ping ok | |
| [+]log ok | |
| [+]etcd ok | |
| [+]poststarthook/generic-apiserver-start-informers ok | |
| [+]poststarthook/bootstrap-controller ok | |
| [-]poststarthook/rbac/bootstrap-roles failed: reason withheld | |
| [+]poststarthook/scheduling/bootstrap-system-priority-classes ok | |
| [+]poststarthook/start-cluster-authentication-info-controller ok | |
| healthz check failed | |
| I0512 12:01:08.827876 906438 httplog.go:90] verb="GET" URI="/healthz" latency=7.131121ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:08.827932 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings" latency=3.233592ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:08.828086 906438 storage_rbac.go:309] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system | |
| I0512 12:01:08.845562 906438 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer" latency=766.93µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:08.846602 906438 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-public" latency=716.882µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:08.865846 906438 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings" latency=1.087507ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:08.866033 906438 storage_rbac.go:309] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public | |
| I0512 12:01:08.906532 906438 httplog.go:90] verb="GET" URI="/healthz" latency=759.047µs resp=200 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:41304": | |
| W0512 12:01:08.906932 906438 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage. | |
| W0512 12:01:08.906990 906438 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage. | |
| W0512 12:01:08.907000 906438 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage. | |
| W0512 12:01:08.907040 906438 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage. | |
| I0512 12:01:08.907282 906438 factory.go:221] Creating scheduler from algorithm provider 'DefaultProvider' | |
| I0512 12:01:08.907298 906438 registry.go:150] Registering EvenPodsSpread predicate and priority function | |
| I0512 12:01:08.907310 906438 registry.go:150] Registering EvenPodsSpread predicate and priority function | |
| W0512 12:01:08.907372 906438 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage. | |
| W0512 12:01:08.908107 906438 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage. | |
| W0512 12:01:08.908151 906438 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage. | |
| W0512 12:01:08.908242 906438 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage. | |
| I0512 12:01:08.908281 906438 shared_informer.go:240] Waiting for caches to sync for scheduler | |
| I0512 12:01:08.908441 906438 reflector.go:207] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/util/util.go:404 | |
| I0512 12:01:08.908451 906438 reflector.go:243] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/util/util.go:404 | |
| I0512 12:01:08.909005 906438 httplog.go:90] verb="GET" URI="/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0" latency=387.539µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:08.913895 906438 get.go:251] Starting watch for /api/v1/pods, rv=2 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=5m29s | |
| I0512 12:01:08.921527 906438 httplog.go:90] verb="GET" URI="/healthz" latency=879.194µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:08.922537 906438 httplog.go:90] verb="GET" URI="/api/v1/namespaces/default" latency=677.564µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:08.923531 906438 httplog.go:90] verb="POST" URI="/api/v1/namespaces" latency=751.224µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:08.926385 906438 httplog.go:90] verb="GET" URI="/api/v1/namespaces/default/services/kubernetes" latency=2.642708ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:08.932788 906438 httplog.go:90] verb="POST" URI="/api/v1/namespaces/default/services" latency=6.122553ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:08.933482 906438 httplog.go:90] verb="GET" URI="/api/v1/namespaces/default/endpoints/kubernetes" latency=455.903µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:08.935085 906438 httplog.go:90] verb="POST" URI="/api/v1/namespaces/default/endpoints" latency=1.376634ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:08.935829 906438 httplog.go:90] verb="GET" URI="/apis/discovery.k8s.io/v1beta1/namespaces/default/endpointslices/kubernetes" latency=510.529µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:08.937510 906438 httplog.go:90] verb="POST" URI="/apis/discovery.k8s.io/v1beta1/namespaces/default/endpointslices" latency=1.454483ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:09.008444 906438 shared_informer.go:270] caches populated | |
| I0512 12:01:09.008478 906438 shared_informer.go:247] Caches are synced for scheduler | |
| I0512 12:01:09.008904 906438 reflector.go:207] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:135 | |
| I0512 12:01:09.008916 906438 reflector.go:207] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:135 | |
| I0512 12:01:09.008919 906438 reflector.go:243] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:135 | |
| I0512 12:01:09.008925 906438 reflector.go:243] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:135 | |
| I0512 12:01:09.008998 906438 reflector.go:207] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:135 | |
| I0512 12:01:09.009002 906438 reflector.go:207] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:135 | |
| I0512 12:01:09.009010 906438 reflector.go:243] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:135 | |
| I0512 12:01:09.009016 906438 reflector.go:243] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:135 | |
| I0512 12:01:09.009030 906438 reflector.go:207] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:135 | |
| I0512 12:01:09.009035 906438 reflector.go:207] Starting reflector *v1.CSINode (1s) from k8s.io/client-go/informers/factory.go:135 | |
| I0512 12:01:09.009045 906438 reflector.go:243] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:135 | |
| I0512 12:01:09.009049 906438 reflector.go:243] Listing and watching *v1.CSINode from k8s.io/client-go/informers/factory.go:135 | |
| I0512 12:01:09.009061 906438 reflector.go:207] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:135 | |
| I0512 12:01:09.009073 906438 reflector.go:243] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:135 | |
| I0512 12:01:09.009503 906438 httplog.go:90] verb="GET" URI="/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0" latency=335.15µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:09.009638 906438 httplog.go:90] verb="GET" URI="/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0" latency=306.309µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41330": | |
| I0512 12:01:09.009646 906438 httplog.go:90] verb="GET" URI="/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0" latency=317.585µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41324": | |
| I0512 12:01:09.009675 906438 httplog.go:90] verb="GET" URI="/api/v1/persistentvolumes?limit=500&resourceVersion=0" latency=341.96µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41328": | |
| I0512 12:01:09.009716 906438 httplog.go:90] verb="GET" URI="/api/v1/services?limit=500&resourceVersion=0" latency=407.797µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41320": | |
| I0512 12:01:09.009731 906438 httplog.go:90] verb="GET" URI="/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0" latency=406.29µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41326": | |
| I0512 12:01:09.009798 906438 httplog.go:90] verb="GET" URI="/api/v1/nodes?limit=500&resourceVersion=0" latency=495.125µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41322": | |
| I0512 12:01:09.010231 906438 get.go:251] Starting watch for /api/v1/services, rv=119 labels= fields= timeout=9m30s | |
| I0512 12:01:09.010606 906438 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=2 labels= fields= timeout=5m1s | |
| I0512 12:01:09.010931 906438 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=2 labels= fields= timeout=6m22s | |
| I0512 12:01:09.011061 906438 get.go:251] Starting watch for /apis/storage.k8s.io/v1/csinodes, rv=2 labels= fields= timeout=5m32s | |
| I0512 12:01:09.011425 906438 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=2 labels= fields= timeout=9m16s | |
| I0512 12:01:09.011822 906438 get.go:251] Starting watch for /api/v1/nodes, rv=2 labels= fields= timeout=6m17s | |
| I0512 12:01:09.012747 906438 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=2 labels= fields= timeout=9m56s | |
| I0512 12:01:09.108714 906438 shared_informer.go:270] caches populated | |
| I0512 12:01:09.108731 906438 shared_informer.go:270] caches populated | |
| I0512 12:01:09.108734 906438 shared_informer.go:270] caches populated | |
| I0512 12:01:09.108737 906438 shared_informer.go:270] caches populated | |
| I0512 12:01:09.108741 906438 shared_informer.go:270] caches populated | |
| I0512 12:01:09.108745 906438 shared_informer.go:270] caches populated | |
| I0512 12:01:09.108748 906438 shared_informer.go:270] caches populated | |
| I0512 12:01:09.108789 906438 shared_informer.go:270] caches populated | |
| I0512 12:01:09.111442 906438 httplog.go:90] verb="POST" URI="/api/v1/nodes" latency=2.091131ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41344": | |
| I0512 12:01:09.112007 906438 node_tree.go:86] Added node "node1" in group "" to NodeTree | |
| I0512 12:01:09.112020 906438 eventhandlers.go:104] add event for node "node1" | |
| I0512 12:01:09.116308 906438 httplog.go:90] verb="POST" URI="/api/v1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/pods" latency=4.516744ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41344": | |
| I0512 12:01:09.116648 906438 eventhandlers.go:173] add event for unscheduled pod preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/lpod-0 | |
| I0512 12:01:09.116671 906438 scheduling_queue.go:808] About to try and schedule pod preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/lpod-0 | |
| I0512 12:01:09.116679 906438 scheduler.go:575] Attempting to schedule pod: preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/lpod-0 | |
| I0512 12:01:09.116774 906438 scheduler_binder.go:323] AssumePodVolumes for pod "preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/lpod-0", node "node1" | |
| I0512 12:01:09.116784 906438 scheduler_binder.go:333] AssumePodVolumes for pod "preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/lpod-0", node "node1": all PVCs bound and nothing to do | |
| I0512 12:01:09.116826 906438 default_binder.go:51] Attempting to bind preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/lpod-0 to node1 | |
| I0512 12:01:09.119232 906438 httplog.go:90] verb="POST" URI="/api/v1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/pods/lpod-0/binding" latency=1.961757ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41346": | |
| I0512 12:01:09.119275 906438 httplog.go:90] verb="POST" URI="/api/v1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/pods" latency=2.610267ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41344": | |
| I0512 12:01:09.119444 906438 scheduler.go:737] pod preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/lpod-0 is bound successfully on node "node1", 1 nodes evaluated, 1 nodes were found feasible. | |
| I0512 12:01:09.119542 906438 eventhandlers.go:173] add event for unscheduled pod preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/lpod-1 | |
| I0512 12:01:09.119563 906438 scheduling_queue.go:808] About to try and schedule pod preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/lpod-1 | |
| I0512 12:01:09.119574 906438 scheduler.go:575] Attempting to schedule pod: preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/lpod-1 | |
| I0512 12:01:09.119585 906438 eventhandlers.go:205] delete event for unscheduled pod preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/lpod-0 | |
| I0512 12:01:09.119654 906438 eventhandlers.go:229] add event for scheduled pod preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/lpod-0 | |
| I0512 12:01:09.119671 906438 scheduler_binder.go:323] AssumePodVolumes for pod "preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/lpod-1", node "node1" | |
| I0512 12:01:09.119685 906438 scheduler_binder.go:333] AssumePodVolumes for pod "preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/lpod-1", node "node1": all PVCs bound and nothing to do | |
| I0512 12:01:09.119726 906438 default_binder.go:51] Attempting to bind preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/lpod-1 to node1 | |
| I0512 12:01:09.121562 906438 httplog.go:90] verb="POST" URI="/api/v1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/pods" latency=1.890101ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41344": | |
| I0512 12:01:09.121564 906438 httplog.go:90] verb="POST" URI="/api/v1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/pods/lpod-1/binding" latency=1.547722ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41348": | |
| I0512 12:01:09.121708 906438 scheduler.go:737] pod preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/lpod-1 is bound successfully on node "node1", 1 nodes evaluated, 1 nodes were found feasible. | |
| I0512 12:01:09.121857 906438 eventhandlers.go:173] add event for unscheduled pod preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/lpod-2 | |
| I0512 12:01:09.121863 906438 eventhandlers.go:229] add event for scheduled pod preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/lpod-1 | |
| I0512 12:01:09.121879 906438 eventhandlers.go:205] delete event for unscheduled pod preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/lpod-1 | |
| I0512 12:01:09.121881 906438 scheduling_queue.go:808] About to try and schedule pod preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/lpod-2 | |
| I0512 12:01:09.121890 906438 scheduler.go:575] Attempting to schedule pod: preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/lpod-2 | |
| I0512 12:01:09.121957 906438 scheduler_binder.go:323] AssumePodVolumes for pod "preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/lpod-2", node "node1" | |
| I0512 12:01:09.121970 906438 scheduler_binder.go:333] AssumePodVolumes for pod "preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/lpod-2", node "node1": all PVCs bound and nothing to do | |
| I0512 12:01:09.122007 906438 default_binder.go:51] Attempting to bind preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/lpod-2 to node1 | |
| I0512 12:01:09.122158 906438 httplog.go:90] verb="POST" URI="/apis/events.k8s.io/v1beta1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/events" latency=2.378727ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41346": | |
| I0512 12:01:09.124211 906438 httplog.go:90] verb="POST" URI="/apis/events.k8s.io/v1beta1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/events" latency=2.272679ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41344": | |
| I0512 12:01:09.124265 906438 httplog.go:90] verb="POST" URI="/api/v1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/pods/lpod-2/binding" latency=1.925273ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41350": | |
| I0512 12:01:09.124292 906438 httplog.go:90] verb="POST" URI="/api/v1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/pods" latency=2.387618ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41348": | |
| I0512 12:01:09.124379 906438 scheduler.go:737] pod preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/lpod-2 is bound successfully on node "node1", 1 nodes evaluated, 1 nodes were found feasible. | |
| I0512 12:01:09.124444 906438 eventhandlers.go:173] add event for unscheduled pod preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/lpod-3 | |
| I0512 12:01:09.124457 906438 eventhandlers.go:229] add event for scheduled pod preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/lpod-2 | |
| I0512 12:01:09.124462 906438 eventhandlers.go:205] delete event for unscheduled pod preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/lpod-2 | |
| I0512 12:01:09.124472 906438 scheduling_queue.go:808] About to try and schedule pod preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/lpod-3 | |
| I0512 12:01:09.124481 906438 scheduler.go:575] Attempting to schedule pod: preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/lpod-3 | |
| I0512 12:01:09.124563 906438 scheduler_binder.go:323] AssumePodVolumes for pod "preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/lpod-3", node "node1" | |
| I0512 12:01:09.124580 906438 scheduler_binder.go:333] AssumePodVolumes for pod "preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/lpod-3", node "node1": all PVCs bound and nothing to do | |
| I0512 12:01:09.124623 906438 default_binder.go:51] Attempting to bind preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/lpod-3 to node1 | |
| I0512 12:01:09.126976 906438 httplog.go:90] verb="POST" URI="/apis/events.k8s.io/v1beta1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/events" latency=2.439023ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41344": | |
| I0512 12:01:09.127007 906438 httplog.go:90] verb="POST" URI="/api/v1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/pods/lpod-3/binding" latency=2.234566ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41346": | |
| I0512 12:01:09.127145 906438 scheduler.go:737] pod preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/lpod-3 is bound successfully on node "node1", 1 nodes evaluated, 1 nodes were found feasible. | |
| I0512 12:01:09.127188 906438 eventhandlers.go:205] delete event for unscheduled pod preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/lpod-3 | |
| I0512 12:01:09.127211 906438 eventhandlers.go:229] add event for scheduled pod preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/lpod-3 | |
| I0512 12:01:09.128126 906438 httplog.go:90] verb="POST" URI="/apis/events.k8s.io/v1beta1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/events" latency=791.922µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41346": | |
| I0512 12:01:09.225834 906438 httplog.go:90] verb="GET" URI="/api/v1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/pods/lpod-0" latency=1.014421ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41346": | |
| I0512 12:01:09.327396 906438 httplog.go:90] verb="GET" URI="/api/v1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/pods/lpod-1" latency=1.067042ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41346": | |
| I0512 12:01:09.429048 906438 httplog.go:90] verb="GET" URI="/api/v1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/pods/lpod-2" latency=1.013007ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41346": | |
| I0512 12:01:09.531028 906438 httplog.go:90] verb="GET" URI="/api/v1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/pods/lpod-3" latency=1.294862ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41346": | |
| I0512 12:01:09.532597 906438 httplog.go:90] verb="POST" URI="/api/v1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/pods" latency=1.111561ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41346": | |
| I0512 12:01:09.532760 906438 eventhandlers.go:173] add event for unscheduled pod preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/medium-priority | |
| I0512 12:01:09.532780 906438 scheduling_queue.go:808] About to try and schedule pod preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/medium-priority | |
| I0512 12:01:09.532791 906438 scheduler.go:575] Attempting to schedule pod: preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/medium-priority | |
| I0512 12:01:09.533582 906438 httplog.go:90] verb="GET" URI="/api/v1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/pods/medium-priority" latency=625.935µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41346": | |
| I0512 12:01:09.533708 906438 generic_scheduler.go:1043] Node node1 is a potential node for preemption. | |
| I0512 12:01:09.534907 906438 httplog.go:90] verb="PUT" URI="/api/v1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/pods/medium-priority/status" latency=957.3µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41346": | |
| I0512 12:01:09.537372 906438 httplog.go:90] verb="DELETE" URI="/api/v1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/pods/lpod-2" latency=2.07426ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41346": | |
| I0512 12:01:09.538823 906438 httplog.go:90] verb="POST" URI="/apis/events.k8s.io/v1beta1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/events" latency=971.852µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41344": | |
| I0512 12:01:09.545114 906438 httplog.go:90] verb="DELETE" URI="/api/v1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/pods/lpod-1" latency=7.380791ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41346": | |
| I0512 12:01:09.546410 906438 httplog.go:90] verb="POST" URI="/apis/events.k8s.io/v1beta1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/events" latency=858.377µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41344": | |
| I0512 12:01:09.547216 906438 httplog.go:90] verb="DELETE" URI="/api/v1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/pods/lpod-0" latency=1.840705ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41346": | |
| I0512 12:01:09.547380 906438 factory.go:457] Unable to schedule preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/medium-priority: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting | |
| I0512 12:01:09.547402 906438 scheduler.go:782] Updating pod condition for preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/medium-priority to (PodScheduled==False, Reason=Unschedulable) | |
| I0512 12:01:09.550055 906438 httplog.go:90] verb="POST" URI="/apis/events.k8s.io/v1beta1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/events" latency=2.25852ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41366": | |
| I0512 12:01:09.550055 906438 httplog.go:90] verb="PUT" URI="/api/v1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/pods/medium-priority/status" latency=2.474997ms resp=409 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41344": | |
| I0512 12:01:09.550068 906438 httplog.go:90] verb="POST" URI="/apis/events.k8s.io/v1beta1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/events" latency=2.28469ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41364": | |
| E0512 12:01:09.550218 906438 scheduler.go:394] Error updating the condition of the pod preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/medium-priority: Operation cannot be fulfilled on pods "medium-priority": the object has been modified; please apply your changes to the latest version and try again | |
| I0512 12:01:09.550279 906438 scheduling_queue.go:808] About to try and schedule pod preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/medium-priority | |
| I0512 12:01:09.550295 906438 scheduler.go:575] Attempting to schedule pod: preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/medium-priority | |
| I0512 12:01:09.550683 906438 httplog.go:90] verb="GET" URI="/api/v1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/pods/medium-priority" latency=3.124561ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41346": | |
| I0512 12:01:09.551076 906438 httplog.go:90] verb="GET" URI="/api/v1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/pods/medium-priority" latency=601.779µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41366": | |
| I0512 12:01:09.551249 906438 factory.go:457] Unable to schedule preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/medium-priority: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting | |
| I0512 12:01:09.551283 906438 scheduler.go:782] Updating pod condition for preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/medium-priority to (PodScheduled==False, Reason=Unschedulable) | |
| I0512 12:01:09.552186 906438 httplog.go:90] verb="GET" URI="/api/v1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/pods/medium-priority" latency=684.884µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41344": | |
| E0512 12:01:09.552352 906438 factory.go:494] pod preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/medium-priority is already present in the backoff queue | |
| I0512 12:01:09.552629 906438 httplog.go:90] verb="PUT" URI="/api/v1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/pods/medium-priority/status" latency=1.168616ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41346": | |
| I0512 12:01:09.552863 906438 scheduling_queue.go:808] About to try and schedule pod preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/medium-priority | |
| I0512 12:01:09.552875 906438 scheduler.go:575] Attempting to schedule pod: preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/medium-priority | |
| I0512 12:01:09.555227 906438 httplog.go:90] verb="POST" URI="/apis/events.k8s.io/v1beta1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/events" latency=3.437918ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41370": | |
| I0512 12:01:09.555320 906438 httplog.go:90] verb="GET" URI="/api/v1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/pods/medium-priority" latency=2.262705ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41346": | |
| I0512 12:01:09.555512 906438 factory.go:457] Unable to schedule preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/medium-priority: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting | |
| I0512 12:01:09.555538 906438 scheduler.go:782] Updating pod condition for preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/medium-priority to (PodScheduled==False, Reason=Unschedulable) | |
| I0512 12:01:09.556285 906438 httplog.go:90] verb="GET" URI="/api/v1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/pods/medium-priority" latency=580.835µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41344": | |
| I0512 12:01:09.556422 906438 httplog.go:90] verb="POST" URI="/apis/events.k8s.io/v1beta1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/events" latency=712.08µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41370": | |
| I0512 12:01:09.634407 906438 httplog.go:90] verb="GET" URI="/api/v1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/pods/medium-priority" latency=1.272439ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41370": | |
| I0512 12:01:09.636041 906438 httplog.go:90] verb="POST" URI="/api/v1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/pods" latency=1.243673ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41370": | |
| I0512 12:01:09.636262 906438 eventhandlers.go:173] add event for unscheduled pod preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/high-priority | |
| I0512 12:01:09.636285 906438 scheduling_queue.go:808] About to try and schedule pod preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/high-priority | |
| I0512 12:01:09.636292 906438 scheduler.go:575] Attempting to schedule pod: preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/high-priority | |
| I0512 12:01:09.638987 906438 httplog.go:90] verb="GET" URI="/api/v1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/pods/high-priority" latency=2.505006ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41370": | |
| I0512 12:01:09.639189 906438 generic_scheduler.go:1043] Node node1 is a potential node for preemption. | |
| I0512 12:01:09.640580 906438 httplog.go:90] verb="PUT" URI="/api/v1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/pods/high-priority/status" latency=1.046382ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41370": | |
| I0512 12:01:09.641534 906438 httplog.go:90] verb="DELETE" URI="/api/v1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/pods/lpod-2" latency=684.864µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41370": | |
| I0512 12:01:09.642378 906438 httplog.go:90] verb="DELETE" URI="/api/v1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/pods/lpod-1" latency=649.386µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41370": | |
| I0512 12:01:09.642553 906438 httplog.go:90] verb="POST" URI="/apis/events.k8s.io/v1beta1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/events" latency=713.804µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41344": | |
| I0512 12:01:09.643374 906438 httplog.go:90] verb="POST" URI="/apis/events.k8s.io/v1beta1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/events" latency=669.692µs resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41344": | |
| I0512 12:01:09.644222 906438 httplog.go:90] verb="PUT" URI="/api/v1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/pods/medium-priority/status" latency=1.562505ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41370": | |
| I0512 12:01:09.644399 906438 factory.go:457] Unable to schedule preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/high-priority: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting | |
| I0512 12:01:09.644430 906438 scheduler.go:782] Updating pod condition for preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/high-priority to (PodScheduled==False, Reason=Unschedulable) | |
| I0512 12:01:09.650660 906438 httplog.go:90] verb="PUT" URI="/api/v1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/pods/high-priority/status" latency=6.06106ms resp=409 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41344": | |
| I0512 12:01:09.650660 906438 httplog.go:90] verb="POST" URI="/apis/events.k8s.io/v1beta1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/events" latency=5.825204ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41376": | |
| I0512 12:01:09.650674 906438 httplog.go:90] verb="GET" URI="/api/v1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/pods/high-priority" latency=6.076866ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41370": | |
| E0512 12:01:09.650876 906438 scheduler.go:394] Error updating the condition of the pod preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/high-priority: Operation cannot be fulfilled on pods "high-priority": the object has been modified; please apply your changes to the latest version and try again | |
| I0512 12:01:09.650918 906438 scheduling_queue.go:808] About to try and schedule pod preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/high-priority | |
| I0512 12:01:09.650936 906438 scheduler.go:575] Attempting to schedule pod: preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/high-priority | |
| I0512 12:01:09.651759 906438 httplog.go:90] verb="GET" URI="/api/v1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/pods/high-priority" latency=637.903µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41376": | |
| I0512 12:01:09.651917 906438 factory.go:457] Unable to schedule preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/high-priority: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting | |
| I0512 12:01:09.651951 906438 scheduler.go:782] Updating pod condition for preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/high-priority to (PodScheduled==False, Reason=Unschedulable) | |
| I0512 12:01:09.653067 906438 httplog.go:90] verb="GET" URI="/api/v1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/pods/high-priority" latency=852.809µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41344": | |
| E0512 12:01:09.653336 906438 factory.go:494] pod: preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/high-priority is already present in unschedulable queue | |
| I0512 12:01:09.653349 906438 httplog.go:90] verb="PUT" URI="/api/v1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/pods/high-priority/status" latency=1.230789ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41376": | |
| I0512 12:01:09.654219 906438 httplog.go:90] verb="POST" URI="/apis/events.k8s.io/v1beta1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/events" latency=1.711439ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41378": | |
| I0512 12:01:09.739521 906438 httplog.go:90] verb="GET" URI="/api/v1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/pods/high-priority" latency=2.607343ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41376": | |
| I0512 12:01:09.843735 906438 httplog.go:90] verb="GET" URI="/api/v1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/pods/medium-priority" latency=2.686428ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41376": | |
| I0512 12:01:09.852607 906438 scheduling_queue.go:808] About to try and schedule pod preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/high-priority | |
| I0512 12:01:09.852638 906438 scheduler.go:763] Skip schedule deleting pod: preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/high-priority | |
| I0512 12:01:09.853929 906438 eventhandlers.go:205] delete event for unscheduled pod preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/high-priority | |
| I0512 12:01:09.854905 906438 httplog.go:90] verb="POST" URI="/apis/events.k8s.io/v1beta1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/events" latency=1.993281ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41344": | |
| I0512 12:01:09.862867 906438 eventhandlers.go:278] delete event for scheduled pod preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/lpod-0 | |
| I0512 12:01:09.865134 906438 eventhandlers.go:278] delete event for scheduled pod preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/lpod-1 | |
| I0512 12:01:09.869399 906438 eventhandlers.go:278] delete event for scheduled pod preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/lpod-2 | |
| I0512 12:01:09.871850 906438 eventhandlers.go:278] delete event for scheduled pod preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/lpod-3 | |
| I0512 12:01:09.873173 906438 scheduling_queue.go:808] About to try and schedule pod preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/medium-priority | |
| I0512 12:01:09.873205 906438 scheduler.go:763] Skip schedule deleting pod: preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/medium-priority | |
| I0512 12:01:09.874225 906438 httplog.go:90] verb="DELETE" URI="/api/v1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/pods" latency=29.214937ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41376": | |
| I0512 12:01:09.874233 906438 eventhandlers.go:205] delete event for unscheduled pod preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/medium-priority | |
| I0512 12:01:09.876734 906438 httplog.go:90] verb="POST" URI="/apis/events.k8s.io/v1beta1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/events" latency=3.336965ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41344": | |
| I0512 12:01:10.010196 906438 reflector.go:368] k8s.io/client-go/informers/factory.go:135: forcing resync | |
| I0512 12:01:10.010838 906438 reflector.go:368] k8s.io/client-go/informers/factory.go:135: forcing resync | |
| I0512 12:01:10.010967 906438 reflector.go:368] k8s.io/client-go/informers/factory.go:135: forcing resync | |
| I0512 12:01:10.011360 906438 reflector.go:368] k8s.io/client-go/informers/factory.go:135: forcing resync | |
| I0512 12:01:10.011716 906438 reflector.go:368] k8s.io/client-go/informers/factory.go:135: forcing resync | |
| I0512 12:01:10.012640 906438 reflector.go:368] k8s.io/client-go/informers/factory.go:135: forcing resync | |
| I0512 12:01:10.875721 906438 httplog.go:90] verb="GET" URI="/api/v1/namespaces/preemptiona0d51d14-ee22-42ad-a1e4-cf24d4a9348e/pods" latency=952.652µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41344": | |
| I0512 12:01:10.876092 906438 reflector.go:213] Stopping reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/util/util.go:404 | |
| I0512 12:01:10.876102 906438 reflector.go:213] Stopping reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:135 | |
| I0512 12:01:10.876093 906438 reflector.go:213] Stopping reflector *v1.CSINode (1s) from k8s.io/client-go/informers/factory.go:135 | |
| I0512 12:01:10.876114 906438 reflector.go:213] Stopping reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:135 | |
| I0512 12:01:10.876115 906438 reflector.go:213] Stopping reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:135 | |
| I0512 12:01:10.876113 906438 reflector.go:213] Stopping reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:135 | |
| I0512 12:01:10.876100 906438 reflector.go:213] Stopping reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:135 | |
| I0512 12:01:10.876119 906438 reflector.go:213] Stopping reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:135 | |
| I0512 12:01:10.876254 906438 httplog.go:90] verb="GET" URI="/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=2&timeout=6m22s&timeoutSeconds=382&watch=true" latency=1.865419921s resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41332": | |
| I0512 12:01:10.876280 906438 httplog.go:90] verb="GET" URI="/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=2&timeout=5m1s&timeoutSeconds=301&watch=true" latency=1.865735894s resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41308": | |
| I0512 12:01:10.876311 906438 httplog.go:90] verb="GET" URI="/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=2&timeout=6m17s&timeoutSeconds=377&watch=true" latency=1.864580392s resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41338": | |
| I0512 12:01:10.876409 906438 httplog.go:90] verb="GET" URI="/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=2&timeout=9m56s&timeoutSeconds=596&watch=true" latency=1.863756088s resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41340": | |
| I0512 12:01:10.876467 906438 httplog.go:90] verb="GET" URI="/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=2&timeoutSeconds=329&watch=true" latency=1.962676496s resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41304": | |
| I0512 12:01:10.876546 906438 httplog.go:90] verb="GET" URI="/apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=2&timeout=5m32s&timeoutSeconds=332&watch=true" latency=1.865593502s resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41334": | |
| I0512 12:01:10.876604 906438 httplog.go:90] verb="GET" URI="/api/v1/services?allowWatchBookmarks=true&resourceVersion=119&timeout=9m30s&timeoutSeconds=570&watch=true" latency=1.866503105s resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41330": | |
| I0512 12:01:10.876685 906438 httplog.go:90] verb="GET" URI="/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=2&timeout=9m16s&timeoutSeconds=556&watch=true" latency=1.865347838s resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41336": | |
| I0512 12:01:10.878383 906438 httplog.go:90] verb="DELETE" URI="/api/v1/nodes" latency=2.14253ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41344": | |
| I0512 12:01:10.878478 906438 controller.go:181] Shutting down kubernetes service endpoint reconciler | |
| I0512 12:01:10.881727 906438 httplog.go:90] verb="GET" URI="/api/v1/namespaces/default/endpoints/kubernetes" latency=3.126263ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41344": | |
| I0512 12:01:10.883061 906438 httplog.go:90] verb="PUT" URI="/api/v1/namespaces/default/endpoints/kubernetes" latency=1.077816ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41344": | |
| I0512 12:01:10.883949 906438 httplog.go:90] verb="GET" URI="/apis/discovery.k8s.io/v1beta1/namespaces/default/endpointslices/kubernetes" latency=568.922µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41344": | |
| I0512 12:01:10.884897 906438 httplog.go:90] verb="PUT" URI="/apis/discovery.k8s.io/v1beta1/namespaces/default/endpointslices/kubernetes" latency=723.14µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41344": | |
| I0512 12:01:10.885023 906438 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller | |
| I0512 12:01:10.885031 906438 reflector.go:213] Stopping reflector *v1.ConfigMap (12h0m0s) from k8s.io/kubernetes/pkg/master/controller/clusterauthenticationtrust/cluster_authentication_trust_controller.go:444 | |
| I0512 12:01:10.885112 906438 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&resourceVersion=2&timeout=8m34s&timeoutSeconds=514&watch=true" latency=5.477993614s resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:41306": | |
| --- PASS: TestNominatedNodeCleanUp (5.62s) | |
| PASS | |
| ok k8s.io/kubernetes/test/integration/scheduler 5.670s | |
| +++ [0512 12:01:10] Cleaning up etcd | |
| +++ [0512 12:01:10] Integration test cleanup complete |